WorldWideScience

Sample records for partial correctness framework

  1. Hitchhiker's Guide to Voxel Segmentation for Partial Volume Correction of In Vivo Magnetic Resonance Spectroscopy

    Directory of Open Access Journals (Sweden)

    Scott Quadrelli

    2016-01-01

    Partial volume effects have the potential to cause inaccuracies when quantifying metabolites using proton magnetic resonance spectroscopy (MRS). In order to correct for cerebrospinal fluid content, a spectroscopic voxel needs to be segmented according to its different tissue contents. This article details how automated partial volume segmentation can be undertaken and provides a software framework for researchers to develop their own tools. While many studies have detailed the impact of partial volume correction on proton MRS quantification, there is a paucity of literature explaining how voxel segmentation can be achieved using freely available neuroimaging packages.
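The cerebrospinal-fluid correction the article describes can be sketched as a simple rescaling by the MR-visible tissue fraction of the voxel. The function and fraction values below are hypothetical illustrations, not part of the paper's software framework, and the sketch ignores relaxation effects:

```python
def csf_corrected_concentration(raw_conc, f_gm, f_wm, f_csf):
    """Scale a metabolite concentration by the MR-visible tissue fraction.

    CSF contains essentially no metabolites, so the signal arises only from
    the gray+white matter fraction of the voxel; dividing by (1 - f_csf)
    undoes the dilution (a common first-order correction).
    """
    if abs(f_gm + f_wm + f_csf - 1.0) > 1e-6:
        raise ValueError("tissue fractions must sum to 1")
    return raw_conc / (1.0 - f_csf)

# A voxel that is 20% CSF under-reports the concentration by 20%.
print(csf_corrected_concentration(8.0, 0.55, 0.25, 0.20))  # -> 10.0
```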

  2. Compatriot partiality and cosmopolitan justice: Can we justify compatriot partiality within the cosmopolitan framework?

    Directory of Open Access Journals (Sweden)

    Rachelle Bascara

    2016-10-01

    This paper shows an alternative way in which compatriot partiality could be justified within the framework of global distributive justice. Philosophers who argue that compatriot partiality is similar to racial partiality capture something correct about compatriot partiality. However, the analogy should not lead us to comprehensively reject compatriot partiality. We can justify compatriot partiality on the same grounds that liberation movements and affirmative action have been justified. Hence, given cosmopolitan demands of justice, special consideration for the economic well-being of your nation as a whole is justified if and only if the country it identifies is an oppressed developing nation in an unjust global order. This justification is incomplete. We also need to say why Person A, qua national of Country A, is justified in helping her compatriots in Country A over similarly or slightly more oppressed non-compatriots in Country B. I argue that Person A’s partiality towards her compatriots admits further vindication because it is part of an oppressed group’s project of self-emancipation, which is preferable to paternalistic emancipation. Finally, I identify three benefits in my justification for compatriot partiality. First, I do not offer a blanket justification for all forms of compatriot partiality. Partiality between members of oppressed groups is only a temporary effective measure designed to level an unlevel playing field. Second, because history attests that sovereign republics could arise as a collective response to colonial oppression, justifying compatriot partiality on the grounds that I have identified is conducive to the development of sovereignty and even democracy in poor countries, thereby avoiding problems of infringement that many humanitarian poverty alleviation efforts encounter. Finally, my justification for compatriot partiality complies with the implicit cosmopolitan commitment to the realizability of global justice.

  3. Detecting and correcting partial errors: Evidence for efficient control without conscious access.

    Science.gov (United States)

    Rochet, N; Spieser, L; Casini, L; Hasbroucq, T; Burle, B

    2014-09-01

    Appropriate reactions to erroneous actions are essential to keeping behavior adaptive. Erring, however, is not an all-or-none process: electromyographic (EMG) recordings of the responding muscles have revealed that covert incorrect response activations (termed "partial errors") occur on a proportion of overtly correct trials. The occurrence of such "partial errors" shows that incorrect response activations could be corrected online, before turning into overt errors. In the present study, we showed that, unlike overt errors, such "partial errors" are poorly consciously detected by participants, who could report only one third of their partial errors. Two parameters of the partial errors were found to predict detection: the surface of the incorrect EMG burst (larger for detected) and the correction time (between the incorrect and correct EMG onsets; longer for detected). These two parameters provided independent information. The correct(ive) responses associated with detected partial errors were larger than the "pure-correct" ones, and this increase was likely a consequence, rather than a cause, of the detection. The respective impacts of the two parameters predicting detection (incorrect surface and correction time), along with the underlying physiological processes subtending partial-error detection, are discussed.

  4. Semantics and correctness proofs for programs with partial functions

    International Nuclear Information System (INIS)

    Yakhnis, A.; Yakhnis, V.

    1996-01-01

    This paper presents a portion of the work on the specification, design, and implementation of safety-critical systems such as reactor control systems. A natural approach to this problem, once all the requirements are captured, is to state the requirements formally and then either to prove (preferably via automated tools) that the system conforms to its specification (program verification), or to generate the system together with a mathematical proof that the requirements are met (program derivation). An obstacle is the frequent presence of partially defined operations within the software and its specifications. Indeed, the usual proofs via first-order logic presuppose everywhere-defined operations. Recognizing this problem, David Gries, in "The Science of Programming" (1981), introduced the concept of partial functions into the mainstream of program correctness and gave hints as to how his treatment of partial functions could be formalized. Still, existing theorem provers and software verifiers have difficulties in checking software with partial functions, because of the absence of a uniform first-order treatment of partial functions within classical 2-valued logic. Several rigorous mechanisms that take partiality into account have been introduced [Wirsing 1990, Breu 1991, VDM 1986, 1990, etc.]. However, they either did not discuss correctness proofs or departed from first-order logic. To fill this gap, the authors provide a semantics for software correctness proofs with partial functions within classical 2-valued first-order logic. They formalize the Gries treatment of partial functions and also cover computations of functions whose argument lists may be only partially available. An example is nuclear reactor control relying on sensors which may fail to deliver sense data. The approach is sufficiently general to cover correctness proofs in various implementation languages.
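The kind of partiality at issue can be illustrated with a toy encoding (our own sketch, not the authors' formalism): every partial value carries an explicit definedness flag, so reasoning stays within ordinary 2-valued logic, and partially available sensor arguments are handled uniformly:

```python
# Represent a partial value as a (defined, value) pair; UNDEF marks failure.
UNDEF = (False, None)

def lift(value):
    """Embed a total (always-defined) value."""
    return (True, value)

def p_div(x, y):
    """Partial division: undefined when y is 0 or either argument is undefined."""
    dx, vx = x
    dy, vy = y
    if not (dx and dy) or vy == 0:
        return UNDEF
    return (True, vx / vy)

def sensor_read(readings, i):
    """A sensor that may fail to deliver data (partially available arguments)."""
    return lift(readings[i]) if readings[i] is not None else UNDEF

readings = [4.0, None]  # second sensor has failed
print(p_div(sensor_read(readings, 0), lift(2.0)))  # -> (True, 2.0)
print(p_div(sensor_read(readings, 1), lift(2.0)))  # -> (False, None)
```

Definedness propagates mechanically through every operation, which is the essence of keeping partial functions inside a classical 2-valued setting.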

  5. SPAM-assisted partial volume correction algorithm for PET

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Sung Il; Kang, Keon Wook; Lee, Jae Sung; Lee, Dong Soo; Chung, June Key; Soh, Kwang Sup; Lee, Myung Chul [College of Medicine, Seoul National Univ., Seoul (Korea, Republic of)

    2000-07-01

    A probabilistic atlas of the human brain (Statistical Probability Anatomical Maps: SPAM) was developed by the International Consortium for Brain Mapping (ICBM). It provides a good framework for defining volumes of interest (VOIs) that account for the statistical variability of the human brain in many fields of brain imaging. We show that more exact quantification of the counts in a VOI can be obtained by using SPAM in the correction of the partial volume effect for a simulated PET image. The MRI of a patient with dementia was segmented into gray matter and white matter, which were then smoothed to PET resolution. A simulated PET image was made by adding one third of the smoothed white matter to the smoothed gray matter. The spillover effect and partial volume effect were corrected for this simulated PET image with the aid of the segmented and smoothed MR images. The images were spatially normalized to the average brain MRI atlas of the ICBM and multiplied by the probabilities of 98 VOIs from the SPAM images of the Montreal Neurological Institute. After correction of the partial volume effect, the counts of the frontal, parietal, temporal, and occipital lobes were increased by 38±6%, while those of the hippocampus and amygdala increased by 4±3%. By calculating the counts in each VOI using the product of the probability in the SPAM images and the counts in the simulated PET image, the counts increase and become closer to the true values. SPAM-assisted partial volume correction is useful for quantification of VOIs in PET images.
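The simulation step described above (smoothed gray matter plus one third of smoothed white matter, then probability-weighted VOI counts) can be sketched in one dimension. The masks, smoothing kernel, and probability map below are hypothetical stand-ins for the study's 3-D data:

```python
def smooth(img, kernel=(0.25, 0.5, 0.25)):
    """Toy 1-D smoothing standing in for the PET point spread function."""
    n, k = len(img), len(kernel) // 2
    out = [0.0] * n
    for i in range(n):
        for j, w in enumerate(kernel):
            out[i] += w * img[min(max(i + j - k, 0), n - 1)]  # clamp edges
    return out

# Hypothetical 1-D "segmented MRI": gray matter and white matter masks.
gm = [0, 0, 1, 1, 1, 0, 0]
wm = [1, 1, 0, 0, 0, 1, 1]

# Simulated PET: smoothed GM plus one third of smoothed WM, as in the study.
pet = [g + w / 3.0 for g, w in zip(smooth(gm), smooth(wm))]

# Probability-weighted VOI counts, mimicking a probabilistic (SPAM-like) VOI.
voi_prob = [0.0, 0.1, 0.8, 1.0, 0.8, 0.1, 0.0]
voi_count = sum(p * v for p, v in zip(voi_prob, pet))
print(round(voi_count, 3))  # -> 2.433
```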

  6. SPAM-assisted partial volume correction algorithm for PET

    International Nuclear Information System (INIS)

    Cho, Sung Il; Kang, Keon Wook; Lee, Jae Sung; Lee, Dong Soo; Chung, June Key; Soh, Kwang Sup; Lee, Myung Chul

    2000-01-01

    A probabilistic atlas of the human brain (Statistical Probability Anatomical Maps: SPAM) was developed by the International Consortium for Brain Mapping (ICBM). It provides a good framework for defining volumes of interest (VOIs) that account for the statistical variability of the human brain in many fields of brain imaging. We show that more exact quantification of the counts in a VOI can be obtained by using SPAM in the correction of the partial volume effect for a simulated PET image. The MRI of a patient with dementia was segmented into gray matter and white matter, which were then smoothed to PET resolution. A simulated PET image was made by adding one third of the smoothed white matter to the smoothed gray matter. The spillover effect and partial volume effect were corrected for this simulated PET image with the aid of the segmented and smoothed MR images. The images were spatially normalized to the average brain MRI atlas of the ICBM and multiplied by the probabilities of 98 VOIs from the SPAM images of the Montreal Neurological Institute. After correction of the partial volume effect, the counts of the frontal, parietal, temporal, and occipital lobes were increased by 38±6%, while those of the hippocampus and amygdala increased by 4±3%. By calculating the counts in each VOI using the product of the probability in the SPAM images and the counts in the simulated PET image, the counts increase and become closer to the true values. SPAM-assisted partial volume correction is useful for quantification of VOIs in PET images.

  7. Prefabricated light-polymerizing plastic pattern for partial denture framework

    Directory of Open Access Journals (Sweden)

    Atsushi Takaichi

    2011-01-01

    Our aim is to report the application of a prefabricated light-polymerizing plastic pattern to the construction of a removable partial denture framework without the use of a refractory cast. A plastic pattern for the lingual bar was adapted on the master cast of a mandibular Kennedy class I partially edentulous patient. The pattern was polymerized in a light chamber. Cobalt-chromium wires were employed to minimize the potential distortion of the plastic framework. The framework was carefully removed from the master cast and invested with phosphate-bonded investment for the subsequent casting procedures. A retentive clasp was constructed using 19-gauge wrought wire and welded to the framework by means of a laser welding machine. An excellent fit of the framework in the patient's mouth was observed at the try-in and the insertion of the denture. The result suggests that this method minimizes laboratory cost and time for partial denture construction.

  8. Optimal transformation for correcting partial volume averaging effects in magnetic resonance imaging

    International Nuclear Information System (INIS)

    Soltanian-Zadeh, H.; Windham, J.P.; Yagle, A.E.

    1993-01-01

    Segmentation of a feature of interest while correcting for partial volume averaging effects is a major tool for the identification of hidden abnormalities, fast and accurate volume calculation, and three-dimensional visualization in magnetic resonance imaging (MRI). The authors present the optimal transformation for simultaneous segmentation of a desired feature and correction of partial volume averaging effects, while maximizing the signal-to-noise ratio (SNR) of the desired feature. It is proved that correction of partial volume averaging effects requires the removal of the interfering features from the scene, and that this correction can be achieved merely by a linear transformation. It is finally shown that the optimal transformation matrix is easily obtained using the Gram-Schmidt orthogonalization procedure, which is numerically stable. Applications of the technique to MRI simulation, phantom, and brain images show that in all cases the desired feature is segmented from the interfering features and partial volume information is visualized in the resulting transformed images.
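The Gram-Schmidt construction described above can be sketched as follows: orthogonalize the desired feature's signature against the interfering signatures, so the resulting linear transform annihilates the interferers while staying sensitive to the desired feature. The signature vectors below are hypothetical, not taken from the paper:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt_filter(desired, interferers):
    """Build a linear transform w such that w . x suppresses the interfering
    signatures while remaining sensitive to the desired one (Gram-Schmidt)."""
    basis = []
    for s in interferers:            # orthogonalize the interferer signatures
        v = list(s)
        for b in basis:
            c = dot(v, b) / dot(b, b)
            v = [x - c * y for x, y in zip(v, b)]
        if dot(v, v) > 1e-12:
            basis.append(v)
    w = list(desired)                # remove interferer components from desired
    for b in basis:
        c = dot(w, b) / dot(b, b)
        w = [x - c * y for x, y in zip(w, b)]
    return w

# Hypothetical feature signatures (one value per MRI acquisition/channel).
desired = [1.0, 2.0, 1.0]     # e.g. a lesion
interferer = [1.0, 0.0, 0.0]  # e.g. an interfering tissue

w = gram_schmidt_filter(desired, [interferer])
print(dot(w, interferer))  # -> 0.0 (interferer annihilated)
print(dot(w, desired))     # -> 5.0 (desired feature retained)
```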

  9. Different partial volume correction methods lead to different conclusions

    DEFF Research Database (Denmark)

    Greve, Douglas N; Salat, David H; Bowen, Spencer L

    2016-01-01

    A cross-sectional group study of the effects of aging on brain metabolism as measured with (18)F-FDG-PET was performed using several different partial volume correction (PVC) methods: no correction (NoPVC), Meltzer (MZ), Müller-Gärtner (MG), and the symmetric geometric transfer matrix (SGTM) using …

  10. Partial fourier and parallel MR image reconstruction with integrated gradient nonlinearity correction.

    Science.gov (United States)

    Tao, Shengzhen; Trzasko, Joshua D; Shu, Yunhong; Weavers, Paul T; Huston, John; Gray, Erin M; Bernstein, Matt A

    2016-06-01

    To describe how integrated gradient nonlinearity (GNL) correction can be used within noniterative partial Fourier (homodyne) and parallel (SENSE and GRAPPA) MR image reconstruction strategies, and demonstrate that performing GNL correction during, rather than after, these routines mitigates the image blurring and resolution loss caused by postreconstruction image domain based GNL correction. Starting from partial Fourier and parallel magnetic resonance imaging signal models that explicitly account for GNL, noniterative image reconstruction strategies for each accelerated acquisition technique are derived under the same core mathematical assumptions as their standard counterparts. A series of phantom and in vivo experiments on retrospectively undersampled data were performed to investigate the spatial resolution benefit of integrated GNL correction over conventional postreconstruction correction. Phantom and in vivo results demonstrate that the integrated GNL correction reduces the image blurring introduced by the conventional GNL correction, while still correcting GNL-induced coarse-scale geometrical distortion. Images generated from undersampled data using the proposed integrated GNL strategies offer superior depiction of fine image detail, for example, phantom resolution inserts and anatomical tissue boundaries. Noniterative partial Fourier and parallel imaging reconstruction methods with integrated GNL correction reduce the resolution loss that occurs during conventional postreconstruction GNL correction while preserving the computational efficiency of standard reconstruction techniques. Magn Reson Med 75:2534-2544, 2016. © 2015 Wiley Periodicals, Inc.

  11. Optimized statistical parametric mapping for partial-volume-corrected amyloid positron emission tomography in patients with Alzheimer's disease and Lewy body dementia

    Science.gov (United States)

    Oh, Jungsu S.; Kim, Jae Seung; Chae, Sun Young; Oh, Minyoung; Oh, Seung Jun; Cha, Seung Nam; Chang, Ho-Jong; Lee, Chong Sik; Lee, Jae Hong

    2017-03-01

    We present an optimized voxelwise statistical parametric mapping (SPM) analysis of partial-volume (PV)-corrected positron emission tomography (PET) with 11C Pittsburgh Compound B (PiB), incorporating the anatomical precision of magnetic resonance imaging (MRI) and the amyloid-β (Aβ) burden-specificity of PiB PET. First, we applied region-based partial-volume correction (PVC), termed the geometric transfer matrix (GTM) method, to the PiB PET, creating MRI-based lobar parcels filled with mean PiB uptakes. Then, we conducted a voxelwise PVC by multiplying the original PET by the ratio of a GTM-based PV-corrected PET to a 6-mm-smoothed PV-corrected PET. Finally, we conducted spatial normalizations of the PV-corrected PETs onto the study-specific template. As such, we increased the accuracy of the SPM normalization and the tissue specificity of the SPM results. Moreover, lobar smoothing (instead of whole-brain smoothing) was applied to increase the signal-to-noise ratio in the image without degrading tissue specificity. Thereby, we could optimize a voxelwise group comparison between subjects with high and normal Aβ burdens (from 10 patients with Alzheimer's disease, 30 patients with Lewy body dementia, and 9 normal controls). Our SPM framework outperformed the conventional one in terms of the accuracy of the spatial normalization (85% of maximum likelihood tissue classification volume) and tissue specificity (larger gray matter and smaller cerebrospinal fluid volume fractions in the SPM results). Our SPM framework optimized the SPM of PV-corrected Aβ PET in terms of anatomical precision, normalization accuracy, and tissue specificity, resulting in better detection and localization of Aβ burdens in patients with Alzheimer's disease and Lewy body dementia.
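The voxelwise step described above (multiply the original PET by the ratio of a parcel-filled, GTM-corrected image to its smoothed version) can be sketched in one dimension. The images and smoothing kernel below are hypothetical stand-ins for the study's data:

```python
def smooth(img, kernel=(0.25, 0.5, 0.25)):
    """Toy 1-D smoothing standing in for the scanner point spread function."""
    n, k = len(img), len(kernel) // 2
    out = [0.0] * n
    for i in range(n):
        for j, w in enumerate(kernel):
            out[i] += w * img[min(max(i + j - k, 0), n - 1)]
    return out

# Hypothetical inputs: the measured PET and a piecewise-constant image built
# by filling anatomical parcels with their GTM-corrected regional means.
pet = [5.0, 6.0, 9.0, 10.0, 9.0, 6.0, 5.0]
gtm_fill = [5.0, 5.0, 11.0, 11.0, 11.0, 5.0, 5.0]

# Voxelwise correction: scale the PET by the ratio of the parcel image to its
# smoothed version; the ratio undoes the blur region by region.
smoothed = smooth(gtm_fill)
pet_pvc = [p * g / s for p, g, s in zip(pet, gtm_fill, smoothed)]
print(round(pet_pvc[2], 2))  # -> 10.42 (edge voxel recovers lost signal)
```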

  12. Establishment of an open database of realistic simulated data for evaluation of partial volume correction techniques in brain PET/MR

    Energy Technology Data Exchange (ETDEWEB)

    Mota, Ana [Instituto de Biofísica e Engenharia Biomédica, FC-UL, Lisboa (Portugal); Institute of Nuclear Medicine, UCL, London (United Kingdom); Cuplov, Vesna [Instituto de Biofísica e Engenharia Biomédica, FC-UL, Lisboa (Portugal); Schott, Jonathan; Hutton, Brian; Thielemans, Kris [Institute of Nuclear Medicine, UCL, London (United Kingdom); Drobnjak, Ivana [Centre of Medical Image Computing, UCL, London (United Kingdom); Dickson, John [Institute of Nuclear Medicine, UCL, London (United Kingdom); Bert, Julien [INSERM UMR1101, LaTIM, CHRU de Brest, Brest (France); Burgos, Ninon; Cardoso, Jorge; Modat, Marc; Ourselin, Sebastien [Centre of Medical Image Computing, UCL, London (United Kingdom); Erlandsson, Kjell [Institute of Nuclear Medicine, UCL, London (United Kingdom)

    2015-05-18

    The partial volume (PV) effect in positron emission tomography (PET) imaging leads to a loss in quantification accuracy: small objects occupy only part of the sensitive volume of the imaging instrument, resulting in blurred images. Simultaneous acquisition of PET and magnetic resonance imaging (MRI) produces concurrent metabolic and anatomical information, and the latter has proved very helpful for the correction of PV effects. Several techniques are currently used for PV correction; they can be applied directly during the reconstruction process or as a post-processing step after image reconstruction. In order to evaluate the efficacy of the different PV correction techniques in brain PET, we are constructing a database of simulated data. Here we present the framework and steps involved in constructing this database. Static 18F-FDG epilepsy and 18F-Florbetapir amyloid dementia PET/MR studies were selected because of their very different characteristics. The methodology followed four main steps: image pre-processing, ground truth (GT) generation, MRI and PET data simulation, and reconstruction. All steps used open-source software and can therefore be repeated at any centre. The framework as well as the database will be freely accessible. Tools used included GIF, FSL, POSSUM, GATE and STIR. The final data obtained after simulation, comprising raw and reconstructed PET data together with corresponding MRI datasets, were close to the original patient data, with the added advantage that the data can be compared with the GT. We indicate several parameters that can be improved and optimized.

  13. Establishment of an open database of realistic simulated data for evaluation of partial volume correction techniques in brain PET/MR

    International Nuclear Information System (INIS)

    Mota, Ana; Cuplov, Vesna; Schott, Jonathan; Hutton, Brian; Thielemans, Kris; Drobnjak, Ivana; Dickson, John; Bert, Julien; Burgos, Ninon; Cardoso, Jorge; Modat, Marc; Ourselin, Sebastien; Erlandsson, Kjell

    2015-01-01

    The partial volume (PV) effect in positron emission tomography (PET) imaging leads to a loss in quantification accuracy: small objects occupy only part of the sensitive volume of the imaging instrument, resulting in blurred images. Simultaneous acquisition of PET and magnetic resonance imaging (MRI) produces concurrent metabolic and anatomical information, and the latter has proved very helpful for the correction of PV effects. Several techniques are currently used for PV correction; they can be applied directly during the reconstruction process or as a post-processing step after image reconstruction. In order to evaluate the efficacy of the different PV correction techniques in brain PET, we are constructing a database of simulated data. Here we present the framework and steps involved in constructing this database. Static 18F-FDG epilepsy and 18F-Florbetapir amyloid dementia PET/MR studies were selected because of their very different characteristics. The methodology followed four main steps: image pre-processing, ground truth (GT) generation, MRI and PET data simulation, and reconstruction. All steps used open-source software and can therefore be repeated at any centre. The framework as well as the database will be freely accessible. Tools used included GIF, FSL, POSSUM, GATE and STIR. The final data obtained after simulation, comprising raw and reconstructed PET data together with corresponding MRI datasets, were close to the original patient data, with the added advantage that the data can be compared with the GT. We indicate several parameters that can be improved and optimized.

  14. Statistical properties of single-mode fiber coupling of satellite-to-ground laser links partially corrected by adaptive optics.

    Science.gov (United States)

    Canuet, Lucien; Védrenne, Nicolas; Conan, Jean-Marc; Petit, Cyril; Artaud, Geraldine; Rissons, Angelique; Lacan, Jerome

    2018-01-01

    In the framework of satellite-to-ground laser downlinks, an analytical model describing the variations of the instantaneous flux coupled into a single-mode fiber after correction of the incoming wavefront by partial adaptive optics (AO) is presented. Expressions for the probability density function and the cumulative distribution function, as well as for the average fading duration and the fading duration distribution of the corrected coupled flux, are given. These results are of prime interest for the computation of metrics related to coded transmissions over correlated channels, and they are compared against end-to-end wave-optics simulations for a geosynchronous satellite (GEO)-to-ground and a low Earth orbit satellite (LEO)-to-ground scenario. Finally, the impact of different AO performances on the aforementioned fading duration distribution is analytically investigated for both scenarios.

  15. Partial volume correction in SPECT reconstruction with OSEM

    Energy Technology Data Exchange (ETDEWEB)

    Erlandsson, Kjell, E-mail: k.erlandsson@ucl.ac.uk [Institute of Nuclear Medicine, University College London and University College London Hospital, London NW1 2BU (United Kingdom); Thomas, Ben; Dickson, John; Hutton, Brian F. [Institute of Nuclear Medicine, University College London and University College London Hospital, London NW1 2BU (United Kingdom)

    2011-08-21

    SPECT images suffer from poor spatial resolution, which leads to partial volume effects due to cross-talk between different anatomical regions. By utilising high-resolution structural images (CT or MRI) it is possible to compensate for these effects. Traditional partial volume correction (PVC) methods suffer from various limitations, such as correcting a single region only, returning only regional mean values, or assuming a stationary point spread function (PSF). We recently presented a novel method, based on filtered backprojection (FBP), in which PVC was combined with the reconstruction process in order to take into account the distance-dependent PSF in SPECT. We now present a new method based on the iterative OSEM algorithm, which has advantageous noise properties compared to FBP. We have applied this method to a series of 10 brain SPECT studies performed on healthy volunteers using the DATSCAN tracer. T1-weighted MRI images were co-registered to the SPECT data and segmented into 33 anatomical regions. The SPECT data were reconstructed using OSEM, and PVC was applied in the projection domain at each iteration. The correction factors were calculated by forward projection of a piecewise-constant image generated from the segmented MRI. Images were also reconstructed using FBP and standard OSEM with and without resolution recovery (RR) for comparison. The images were evaluated in terms of striatal contrast and regional variability (coefficient of variation, CoV). The mean striatal contrast obtained with OSEM, OSEM-RR and OSEM-PVC relative to FBP was 1.04, 1.42 and 1.53, respectively, and the corresponding mean striatal CoV values were 1.05, 1.53 and 1.07. Both OSEM-RR and OSEM-PVC result in images with significantly higher contrast than FBP or OSEM, but OSEM-PVC avoids the increased regional variability of OSEM-RR thanks to its improved structural definition.
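At the core of OSEM is the multiplicative EM update; with a single subset, OSEM reduces to MLEM. The toy 1-D sketch below (a hypothetical 3×3 system matrix, with no PVC step) shows the update into which the study's projection-domain correction factors would be folded:

```python
# Toy 1-D system: a 3x3 matrix standing in for projection plus a
# distance-dependent PSF (hypothetical values, not the study's SPECT model).
A = [
    [0.6, 0.3, 0.1],
    [0.2, 0.6, 0.2],
    [0.1, 0.3, 0.6],
]

def forward(x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def back(y):
    return [sum(A[i][j] * y[i] for i in range(len(A))) for j in range(len(A[0]))]

def mlem(measured, iterations=500):
    """Multiplicative EM update; OSEM with a single subset reduces to this."""
    x = [1.0] * len(A[0])
    sens = back([1.0] * len(A))  # sensitivity image: backprojection of ones
    for _ in range(iterations):
        proj = forward(x)
        ratio = [m / p if p > 0 else 0.0 for m, p in zip(measured, proj)]
        x = [xi * b / s for xi, b, s in zip(x, back(ratio), sens)]
    return x

true_img = [2.0, 8.0, 2.0]
recon = mlem(forward(true_img))  # noiseless data: recon approaches true_img
```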

  16. Partial Volume Effects correction in emission tomography

    International Nuclear Information System (INIS)

    Le Pogam, Adrien

    2010-01-01

    Partial volume effects (PVE) designate the blur commonly found in nuclear medicine images, and this PhD work is dedicated to their correction, with the objective of qualitative and quantitative improvement of such images. PVE arise from the limited spatial resolution of functional imaging with either positron emission tomography (PET) or single photon emission computed tomography (SPECT). They can be defined as a signal loss in tissues of size similar to the full width at half maximum (FWHM) of the point spread function (PSF) of the imaging device. In addition, PVE induce activity cross-contamination between adjacent structures with different tracer uptakes. This can lead to under- or over-estimation of the real activity of the analyzed regions. Various methodologies currently exist to compensate or even correct for PVE, and they may be classified by their place in the processing chain (before, during or after the image reconstruction process) as well as by their dependency on co-registered anatomical images with higher spatial resolution, for instance computed tomography (CT) or magnetic resonance imaging (MRI). A voxel-based, post-reconstruction approach was chosen for this work to avoid region-of-interest definition and dependency on the proprietary reconstruction developed by each manufacturer. Two contributions were made in this work: the first is based on a multi-resolution methodology in the wavelet domain, using the higher-resolution details of a co-registered anatomical image associated with the functional dataset to be corrected. The second is the improvement of iterative-deconvolution-based methodologies using tools such as directional wavelet and curvelet extensions. The developed approaches were applied and validated using synthetic, simulated and clinical images, with neurology and oncology applications in mind. Finally, as currently available PET/CT scanners incorporate more …
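As a concrete example of the iterative-deconvolution family that the second contribution builds on, here is a minimal Van Cittert sketch in one dimension. The kernel and image are toy stand-ins of our own, not the thesis's methods:

```python
def smooth(img, kernel=(0.25, 0.5, 0.25)):
    """Toy 1-D blur standing in for the scanner point spread function."""
    n, k = len(img), len(kernel) // 2
    out = [0.0] * n
    for i in range(n):
        for j, w in enumerate(kernel):
            out[i] += w * img[min(max(i + j - k, 0), n - 1)]
    return out

def van_cittert(observed, iterations=30, alpha=1.0):
    """Van Cittert iteration: x <- x + alpha * (observed - blur(x))."""
    x = list(observed)
    for _ in range(iterations):
        x = [xi + alpha * (o - s) for xi, o, s in zip(x, observed, smooth(x))]
    return x

true_img = [0.0, 0.0, 10.0, 0.0, 0.0]  # hypothetical point-like uptake
blurred = smooth(true_img)             # what the scanner would measure
recovered = van_cittert(blurred)       # partially restores the lost contrast
```

Like the PVE it corrects, the restoration is partial: frequency components the PSF suppresses entirely cannot be recovered, which is one motivation for the wavelet and curvelet regularisation the thesis explores.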

  17. Anatomically guided voxel-based partial volume effect correction in brain PET : Impact of MRI segmentation

    NARCIS (Netherlands)

    Gutierrez, Daniel; Montandon, Marie-Louise; Assal, Frederic; Allaoua, Mohamed; Ratib, Osman; Loevblad, Karl-Olof; Zaidi, Habib

    2012-01-01

    Partial volume effect is still considered one of the main limitations in brain PET imaging given the limited spatial resolution of current generation PET scanners. The accuracy of anatomically guided partial volume effect correction (PVC) algorithms in brain PET is largely dependent on the …

  18. Entropy correction of BTZ black holes in a tunneling framework

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    In this paper, using the Parikh-Wilczek tunneling framework, we first calculate the emission rates of non-rotating and rotating BTZ black holes to second-order accuracy. Then, by assuming that the emission process satisfies an underlying unitary theory, we obtain the corrected entropy of the BTZ black holes. A log term emerges naturally in the expression of the corrected entropy. A discussion of the inverse-area term is also presented.
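Schematically, a corrected entropy of this type takes the following form (a generic sketch: only the log term and the inverse-area term are asserted in the abstract, and the coefficients α and β would be fixed by the second-order tunneling calculation):

```latex
S_{\mathrm{corr}} \;=\; S_{\mathrm{BH}} \;+\; \alpha \,\ln S_{\mathrm{BH}} \;+\; \frac{\beta}{S_{\mathrm{BH}}} \;+\; \cdots,
\qquad S_{\mathrm{BH}} = \frac{A}{4G},
```

where $S_{\mathrm{BH}}$ is the semiclassical Bekenstein-Hawking entropy, proportional to the horizon "area" $A$ (the horizon length in 2+1 dimensions), so the $\beta$ term is the inverse-area correction.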

  19. Versatility of PEEK as a fixed partial denture framework.

    Science.gov (United States)

    Sinha, Nikita; Gupta, Nidhi; Reddy, K Mahendranadh; Shastry, Y M

    2017-01-01

    Materials used for fixed partial denture (FPD) frameworks must offer excellent strength, durability, and biocompatibility. Materials used to date include alloys (e.g., Ni-Cr), ceramics (e.g., zirconia, lithium disilicate), and high-performance polymers. All of these, though excellent, have their advantages and disadvantages, so the search has always been on for a better material. One such material, which has made its foray into dentistry in recent times, is polyetheretherketone (PEEK), a semicrystalline thermoplastic. PEEK has excellent chemical resistance and mechanical properties that are retained at high temperatures. The versatility of PEEK as a material for FPD frameworks is evaluated in this case report.

  20. Fit Analysis of Different Framework Fabrication Techniques for Implant-Supported Partial Prostheses.

    Science.gov (United States)

    Spazzin, Aloísio Oro; Bacchi, Atais; Trevisani, Alexandre; Farina, Ana Paula; Dos Santos, Mateus Bertolini

    2016-01-01

    This study evaluated the vertical misfit of implant-supported frameworks made using different techniques to obtain a passive fit. Thirty three-unit fixed partial denture frameworks were fabricated in cobalt-chromium alloy (n = 10 per group) using three fabrication methods: one-piece casting, framework cemented on prepared abutments, and laser welding. The vertical misfit between the frameworks and the abutments was evaluated with an optical microscope using the single-screw test. Data were analyzed using one-way analysis of variance and the Tukey test (α = .05). The one-piece cast frameworks presented significantly higher vertical misfit values than those found for the cemented and laser-welded frameworks (P < .05). Laser welding and cementing the framework on prepared abutments are effective techniques to improve the adaptation of three-unit implant-supported prostheses, and the two techniques presented similar fit.

  1. An analysis of the nucleon spectrum from lattice partially-quenched QCD

    Energy Technology Data Exchange (ETDEWEB)

    Armour, W. [Swansea University, Swansea, SA2 8PP, Wales, U.K.; Allton, C. R. [Swansea University, Swansea, SA2 8PP, Wales, U.K.; Leinweber, Derek B. [Univ. of Adelaide, SA (Australia); Thomas, Anthony W. [Thomas Jefferson National Accelerator Facility (TJNAF), Newport News, VA (United States); College of William and Mary, Williamsburg, VA (United States); Young, Ross D. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2010-09-01

    The chiral extrapolation of the nucleon mass, Mn, is investigated using data coming from 2-flavour partially-quenched lattice simulations. The leading one-loop corrections to the nucleon mass are derived for partially-quenched QCD. A large sample of lattice results from the CP-PACS Collaboration is analysed, with explicit corrections for finite lattice spacing artifacts. The extrapolation is studied using finite range regularised chiral perturbation theory. The analysis also provides a quantitative estimate of the leading finite volume corrections. It is found that the discretisation, finite-volume and partial quenching effects can all be very well described in this framework, producing an extrapolated value of Mn in agreement with experiment. This procedure is also compared with extrapolations based on polynomial forms, where the results are less encouraging.
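Schematically, the finite-range regularised expansion referred to here takes the form (a generic FRR ansatz from this literature, not necessarily the paper's exact expression):

```latex
M_N \;=\; a_0 \;+\; a_2\, m_\pi^{2} \;+\; a_4\, m_\pi^{4}
\;+\; \Sigma_{\pi N}\!\left(m_\pi,\Lambda\right)
\;+\; \Sigma_{\pi \Delta}\!\left(m_\pi,\Lambda\right),
```

where $a_{0,2,4}$ are fit parameters and the $\Sigma$ terms are the leading one-loop pion self-energies evaluated with a finite-range regulator of scale $\Lambda$; partial quenching modifies the coefficients entering the loop contributions.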

  2. Fast Computation of Solvation Free Energies with Molecular Density Functional Theory: Thermodynamic-Ensemble Partial Molar Volume Corrections.

    Science.gov (United States)

    Sergiievskyi, Volodymyr P; Jeanmairet, Guillaume; Levesque, Maximilien; Borgis, Daniel

    2014-06-05

    Molecular density functional theory (MDFT) offers an efficient implicit-solvent method to estimate the solvation free energies of molecules while conserving a fully molecular representation of the solvent. Even within a second-order approximation for the free-energy functional, the so-called homogeneous reference fluid approximation, we show that the hydration free energies computed for a data set of 500 organic compounds are of similar quality to those obtained from molecular dynamics free-energy perturbation simulations, at a computational cost reduced by two to three orders of magnitude. This requires introducing the proper partial volume correction to transform the results from the grand canonical to the isobaric-isothermal ensemble that is pertinent to experiments. We show that this correction can be extended to 3D-RISM calculations, giving a sound theoretical justification to the empirical partial molar volume corrections that have been proposed recently.

  3. Noise suppressed partial volume correction for cardiac SPECT/CT

    Energy Technology Data Exchange (ETDEWEB)

    Chan, Chung; Liu, Chi, E-mail: chi.liu@yale.edu [Department of Radiology and Biomedical Imaging, Yale University, New Haven, Connecticut 06520 (United States); Liu, Hui [Department of Radiology and Biomedical Imaging, Yale University, New Haven, Connecticut 06520 and Key Laboratory of Particle and Radiation Imaging (Tsinghua University), Ministry of Education, Beijing 100084 (China); Grobshtein, Yariv [GE Healthcare, Haifa 3910101 (Israel); Stacy, Mitchel R. [Department of Internal Medicine, Yale University, New Haven, Connecticut 06520 (United States); Sinusas, Albert J. [Department of Radiology and Biomedical Imaging, Yale University, New Haven, Connecticut 06520 and Department of Internal Medicine, Yale University, New Haven, Connecticut 06520 (United States)

    2016-09-15

    Purpose: Partial volume correction (PVC) methods typically improve quantification at the expense of increased image noise and reduced reproducibility. In this study, the authors developed a novel voxel-based PVC method that incorporates anatomical knowledge to improve quantification while suppressing noise for cardiac SPECT/CT imaging. Methods: In the proposed method, the SPECT images were first reconstructed using anatomical-based maximum a posteriori (AMAP) with Bowsher’s prior to penalize noise while preserving boundaries. A sequential voxel-by-voxel PVC approach (Yang’s method) was then applied on the AMAP reconstruction using a template response. This template response was obtained by forward projecting a template derived from a contrast-enhanced CT image, and then reconstructed using AMAP to model the partial volume effects (PVEs) introduced by both the system resolution and the smoothing applied during reconstruction. To evaluate the proposed noise suppressed PVC (NS-PVC), the authors first simulated two types of cardiac SPECT studies: a {sup 99m}Tc-tetrofosmin myocardial perfusion scan and a {sup 99m}Tc-labeled red blood cell (RBC) scan on a dedicated cardiac multiple pinhole SPECT/CT at both high and low count levels. The authors then applied the proposed method on a canine equilibrium blood pool study following injection with {sup 99m}Tc-RBCs at different count levels by rebinning the list-mode data into shorter acquisitions. The proposed method was compared to MLEM reconstruction without PVC, two conventional PVC methods, including Yang’s method and multitarget correction (MTC) applied on the MLEM reconstruction, and AMAP reconstruction without PVC. Results: The results showed that Yang’s method improved quantification but yielded increased noise and reduced reproducibility in regions with higher activity. MTC corrected for PVE on high-count data with amplified noise, although it yielded the worst performance among all the methods
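The voxel-wise Yang correction mentioned above multiplies each voxel by the ratio of a piecewise-constant anatomical template to its PSF-blurred version. A minimal sketch follows; the Gaussian PSF model and the name `yang_pvc` are illustrative assumptions, not the authors' implementation:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def yang_pvc(pet, template, fwhm_vox):
    """Voxel-wise Yang partial volume correction (sketch).

    pet      : observed PET/SPECT image (numpy array)
    template : piecewise-constant image holding the expected mean
               activity of each anatomical region
    fwhm_vox : assumed Gaussian PSF FWHM, in voxels
    """
    sigma = fwhm_vox / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    smoothed = gaussian_filter(template, sigma)
    # Correction factor: sharp template over its PSF-blurred version,
    # applied multiplicatively voxel by voxel.
    with np.errstate(divide="ignore", invalid="ignore"):
        factor = np.where(smoothed > 0, template / smoothed, 1.0)
    return pet * factor
```

On noiseless data whose blur matches the assumed PSF, the correction restores the template activity inside each region exactly; real data additionally require the reconstruction-dependent template response described in the abstract.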

  4. Clinical experiences of implant-supported prostheses with laser-welded titanium frameworks in the partially edentulous jaw: a 5-year follow-up study.

    Science.gov (United States)

    Ortorp, A; Jemt, T

    1999-01-01

    Titanium frameworks have been used in the edentulous implant patient for the last 10 years. However, knowledge of titanium frameworks for the partially dentate patient is limited. To report the 5-year clinical performance of implant-supported prostheses with laser-welded titanium frameworks in the partially edentulous jaw. A consecutive group of 383 partially edentulous patients were, on a routine basis, provided with fixed partial prostheses supported by Brånemark implants in the mandible or maxilla. Besides conventional frameworks in cast gold alloy, 58 patients were provided with titanium frameworks with three different veneering techniques, and clinical and radiographic 5-year data were collected for this group. The overall cumulative survival rate was 95.6% for titanium-framework prostheses and 93.6% for implants. Average bone loss during the follow-up period was 0.4 mm. The most common complications were minor veneering fractures. Loose and fractured implant screw components were fewer than 2%. An observation was that patients on medications for cardiovascular problems may lose more implants than others (p < .05). The overall performance of prostheses with laser-welded titanium frameworks was similar to that reported for conventional cast frames in partially edentulous jaws. Low-fusing porcelain veneers also showed clinical performance comparable to that reported for conventional porcelain-fused-to-metal techniques.

  5. Additive Manufacturing: A Novel Method for Fabricating Cobalt-Chromium Removable Partial Denture Frameworks.

    Science.gov (United States)

    Alifui-Segbaya, Frank; Williams, Robert John; George, Roy

    2017-06-01

    Additive manufacturing (AM), often referred to as 3D printing (3DP), has shown promise as a viable method for the construction of cobalt-chromium removable partial denture (RPD) frameworks. The current paper seeks to discuss AM technologies (photopolymerization processes and selective laser melting) and review their scope. The review also discusses the clinical relevance of cobalt-chromium RPD frameworks. All relevant publications in English over the last 10 years, since the first 3D-printed RPD framework was reported, are examined. The review notes that AM offers significant benefits in terms of the speed of the manufacturing processes; however, cost and other aspects of current technologies remain a hindrance. Copyright © 2017 Dennis Barber Ltd.

  6. A 2 × 2 taxonomy of multilevel latent contextual models: accuracy-bias trade-offs in full and partial error correction models.

    Science.gov (United States)

    Lüdtke, Oliver; Marsh, Herbert W; Robitzsch, Alexander; Trautwein, Ulrich

    2011-12-01

    In multilevel modeling, group-level variables (L2) for assessing contextual effects are frequently generated by aggregating variables from a lower level (L1). A major problem of contextual analyses in the social sciences is that there is no error-free measurement of constructs. In the present article, 2 types of error occurring in multilevel data when estimating contextual effects are distinguished: unreliability that is due to measurement error and unreliability that is due to sampling error. The fact that studies may or may not correct for these 2 types of error can be translated into a 2 × 2 taxonomy of multilevel latent contextual models comprising 4 approaches: an uncorrected approach, partial correction approaches correcting for either measurement or sampling error (but not both), and a full correction approach that adjusts for both sources of error. It is shown mathematically and with simulated data that the uncorrected and partial correction approaches can result in substantially biased estimates of contextual effects, depending on the number of L1 individuals per group, the number of groups, the intraclass correlation, the number of indicators, and the size of the factor loadings. However, the simulation study also shows that partial correction approaches can outperform full correction approaches when the data provide only limited information in terms of the L2 construct (i.e., small number of groups, low intraclass correlation). A real-data application from educational psychology is used to illustrate the different approaches.
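A toy simulation (not the article's latent contextual models) illustrates the sampling-error component: group means observed from only a few L1 units are unreliable estimates of the latent group mean, so a regression on them is attenuated by the reliability factor. The variance values below are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n_groups, n_per_group = 200, 10
tau2, sigma2 = 1.0, 4.0            # between- and within-group variance

true_means = rng.normal(0.0, np.sqrt(tau2), n_groups)
data = true_means[:, None] + rng.normal(0.0, np.sqrt(sigma2),
                                        (n_groups, n_per_group))
obs_means = data.mean(axis=1)

# Observed group means carry sampling error sigma2/n on top of tau2,
# so regressing the latent means on them recovers roughly the
# reliability lambda = tau2 / (tau2 + sigma2/n) rather than 1.
lam_theory = tau2 / (tau2 + sigma2 / n_per_group)
slope = np.polyfit(obs_means, true_means, 1)[0]
```

With 10 members per group the reliability here is about 0.71, which is the shrinkage an uncorrected contextual analysis silently absorbs.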

  7. A Hybrid Framework to Bias Correct and Empirically Downscale Daily Temperature and Precipitation from Regional Climate Models

    Science.gov (United States)

    Tan, P.; Abraham, Z.; Winkler, J. A.; Perdinan, P.; Zhong, S. S.; Liszewska, M.

    2013-12-01

    Bias correction and statistical downscaling are widely used approaches for postprocessing climate simulations generated by global and/or regional climate models. The skills of these approaches are typically assessed in terms of their ability to reproduce historical climate conditions as well as the plausibility and consistency of the derived statistical indicators needed by end users. Current bias correction and downscaling approaches often do not adequately satisfy the two criteria of accurate prediction and unbiased estimation. To overcome this limitation, a hybrid regression framework was developed to both minimize prediction errors and preserve the distributional characteristics of climate observations. Specifically, the framework couples the loss functions of standard (linear or nonlinear) regression methods with a regularization term that penalizes discrepancies between the predicted and observed distributions. The proposed framework can also be extended to generate physically consistent outputs across multiple response variables, and to incorporate both reanalysis-driven and GCM-driven RCM outputs into a unified learning framework. The effectiveness of the framework is demonstrated using daily temperature and precipitation simulations from the North American Regional Climate Change Assessment Program (NARCCAP). The accuracy of the framework is comparable to that of standard regression methods, but, unlike the standard regression methods, the proposed framework is able to preserve many of the distributional properties of the response variables, akin to bias correction approaches such as quantile mapping and bivariate geometric quantile mapping.
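A minimal sketch of such a hybrid loss, assuming a linear bias-correction model and a sorted-value (quantile) discrepancy as the distributional penalty; the actual regularizer, model class, and data in the study are not specified here:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
x = rng.normal(20.0, 5.0, 500)                  # synthetic raw RCM series
y = 0.8 * x + 2.0 + rng.normal(0.0, 1.5, 500)   # synthetic "observations"

def hybrid_loss(params, lam=1.0):
    a, b = params
    pred = a * x + b
    mse = np.mean((pred - y) ** 2)              # prediction-error term
    # Distributional penalty: mismatch between the sorted (i.e.
    # quantile-matched) predicted and observed values.
    quant = np.mean((np.sort(pred) - np.sort(y)) ** 2)
    return mse + lam * quant

res = minimize(hybrid_loss, x0=[1.0, 0.0])
a, b = res.x
```

Plain least squares shrinks the predicted spread below the observed one; the added quantile term pulls the fitted distribution back toward the observations, which is the trade-off the abstract describes.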

  8. Three-dimensional finite element analysis of zirconia all-ceramic cantilevered fixed partial dentures with different framework designs.

    Science.gov (United States)

    Miura, Shoko; Kasahara, Shin; Yamauchi, Shinobu; Egusa, Hiroshi

    2017-06-01

    The purposes of this study were: to perform stress analyses using three-dimensional finite element analysis methods; to analyze the mechanical stress of different framework designs; and to investigate framework designs that will provide for the long-term stability of both cantilevered fixed partial dentures (FPDs) and abutment teeth. An analysis model was prepared for three units of cantilevered FPDs assuming a missing mandibular first molar. Four types of framework design (Design 1, basic type; Design 2, framework width expanded buccolingually by 2 mm; Design 3, framework height expanded by 0.5 mm to the occlusal surface side from the end abutment to the connector area; and Design 4, a combination of Designs 2 and 3) were created. Two types of framework material (yttrium-oxide partially stabilized zirconia and a high precious noble metal gold alloy) and two types of abutment material (dentin and brass) were used. Among the framework designs, Design 1 exhibited the highest maximum principal stress value for both zirconia and gold alloy. In the abutment tooth, Design 3 exhibited the highest maximum principal stress value for all abutment teeth. In the present study, Design 4 (the design with expanded framework height and framework width) could contribute to preventing the concentration of stress and protecting abutment teeth. © 2017 Eur J Oral Sci.

  9. Correction for Eddy Current-Induced Echo-Shifting Effect in Partial-Fourier Diffusion Tensor Imaging.

    Science.gov (United States)

    Truong, Trong-Kha; Song, Allen W; Chen, Nan-Kuei

    2015-01-01

    In most diffusion tensor imaging (DTI) studies, images are acquired with either a partial-Fourier or a parallel partial-Fourier echo-planar imaging (EPI) sequence, in order to shorten the echo time and increase the signal-to-noise ratio (SNR). However, eddy currents induced by the diffusion-sensitizing gradients can often lead to a shift of the echo in k-space, resulting in three distinct types of artifacts in partial-Fourier DTI. Here, we present an improved DTI acquisition and reconstruction scheme, capable of generating high-quality and high-SNR DTI data without eddy current-induced artifacts. This new scheme consists of three components, respectively addressing the three distinct types of artifacts. First, a k-space energy-anchored DTI sequence is designed to recover eddy current-induced signal loss (i.e., Type 1 artifact). Second, a multischeme partial-Fourier reconstruction is used to eliminate artificial signal elevation (i.e., Type 2 artifact) associated with the conventional partial-Fourier reconstruction. Third, a signal intensity correction is applied to remove artificial signal modulations due to eddy current-induced erroneous T2*-weighting (i.e., Type 3 artifact). These systematic improvements will greatly increase the consistency and accuracy of DTI measurements, expanding the utility of DTI in translational applications where quantitative robustness is much needed.
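A small 1-D simulation illustrates the Type 1 artifact: a k-space echo shift is harmless for a full acquisition (it only adds a linear phase in image space) but causes severe signal loss when the shifted echo falls outside the partial-Fourier window. The object, window fraction, and shift below are arbitrary choices:

```python
import numpy as np

n = 256
x = np.zeros(n)
x[96:160] = 1.0                        # simple 1-D object

k = np.fft.fftshift(np.fft.fft(x))     # full k-space, echo (DC) at n//2

# Eddy currents shift the echo towards the unacquired side of k-space.
shift = 20
k_shifted = np.roll(k, shift)          # echo now at index n//2 + shift

# Partial-Fourier acquisition: keep slightly more than half of k-space
# (9/16 here, i.e. 16 overscan samples past the nominal centre).
acq = n * 9 // 16
pf = np.zeros(n, dtype=complex)
pf[:acq] = k_shifted[:acq]             # shifted echo falls OUTSIDE [0, acq)

recon_full = np.abs(np.fft.ifft(np.fft.ifftshift(k_shifted)))
recon_pf = np.abs(np.fft.ifft(np.fft.ifftshift(pf)))
# recon_full still recovers the object exactly, while recon_pf has lost
# most of its signal energy -- the Type 1 signal loss described above.
```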

  10. Accurate structures and energetics of neutral-framework zeotypes from dispersion-corrected DFT calculations

    Science.gov (United States)

    Fischer, Michael; Angel, Ross J.

    2017-05-01

    Density-functional theory (DFT) calculations incorporating a pairwise dispersion correction were employed to optimize the structures of various neutral-framework compounds with zeolite topologies. The calculations used the PBE functional for solids (PBEsol) in combination with two different dispersion correction schemes, the D2 correction devised by Grimme and the TS correction of Tkatchenko and Scheffler. In the first part of the study, a benchmarking of the DFT-optimized structures against experimental crystal structure data was carried out, considering a total of 14 structures (8 all-silica zeolites, 4 aluminophosphate zeotypes, and 2 dense phases). Both PBEsol-D2 and PBEsol-TS showed an excellent performance, improving significantly over the best-performing approach identified in a previous study (PBE-TS). The temperature dependence of lattice parameters and bond lengths was assessed for those zeotypes where the available experimental data permitted such an analysis. In most instances, the agreement between DFT and experiment improved when the experimental data were corrected for the effects of thermal motion and when low-temperature structure data rather than room-temperature structure data were used as a reference. In the second part, a benchmarking against experimental enthalpies of transition (with respect to α-quartz) was carried out for 16 all-silica zeolites. Excellent agreement was obtained with the PBEsol-D2 functional, with the overall error being in the same range as the experimental uncertainty. Altogether, PBEsol-D2 can be recommended as a computationally efficient DFT approach that simultaneously delivers accurate structures and energetics of neutral-framework zeotypes.
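The D2 scheme mentioned above is a damped pairwise C6/R^6 sum. A sketch follows, with illustrative Si/O parameters and without the unit conversions needed for production use; consult the original D2 parameterisation before relying on any value here:

```python
import numpy as np

# Illustrative D2-style parameters (C6 in J nm^6 mol^-1, radii in
# Angstrom); s6 is functional-dependent in the D2 scheme.
C6 = {"Si": 9.23, "O": 0.70}       # dispersion coefficients
R0 = {"Si": 1.716, "O": 1.342}     # van der Waals radii
D, S6 = 20.0, 0.75                 # damping steepness, global scaling

def e_disp(symbols, coords):
    """Pairwise -s6 * C6ij/r^6 dispersion sum with Fermi-type damping."""
    coords = np.asarray(coords, dtype=float)
    e = 0.0
    for i in range(len(symbols)):
        for j in range(i + 1, len(symbols)):
            r = np.linalg.norm(coords[i] - coords[j])       # Angstrom
            c6 = np.sqrt(C6[symbols[i]] * C6[symbols[j]])   # geometric mean
            rr = R0[symbols[i]] + R0[symbols[j]]
            fdmp = 1.0 / (1.0 + np.exp(-D * (r / rr - 1.0)))
            e -= S6 * c6 / r**6 * fdmp   # units mixed; convert for real use
    return e
```

The damping function switches the attraction off at short range, which is why the magnitude of the pair energy can peak near the sum of the van der Waals radii rather than growing without bound.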

  11. An analysis of the nucleon spectrum from lattice partially-quenched QCD.

    Energy Technology Data Exchange (ETDEWEB)

    Armour, W.; Allton, C. R.; Leinweber, D. B.; Thomas, A. W.; Young, R. D.; Physics; Swansea Univ.; Univ. of Adelaide; Coll. of William and Mary

    2010-09-01

    The chiral extrapolation of the nucleon mass, M{sub n}, is investigated using data coming from 2-flavour partially-quenched lattice simulations. A large sample of lattice results from the CP-PACS Collaboration is analysed using the leading one-loop corrections, with explicit corrections for finite lattice spacing artifacts. The extrapolation is studied using finite-range regularised chiral perturbation theory. The analysis also provides a quantitative estimate of the leading finite volume corrections. It is found that the discretisation, finite volume and partial quenching effects can all be very well described in this framework, producing an extrapolated value of M{sub n} in agreement with experiment. Furthermore, determinations of the low energy constants of the nucleon mass's chiral expansion are in agreement with previous methods, but with significantly reduced errors. This procedure is also compared with extrapolations based on polynomial forms, where the results are less encouraging.

  12. An analysis of the nucleon spectrum from lattice partially-quenched QCD

    Energy Technology Data Exchange (ETDEWEB)

    Armour, W. [Department of Physics, Swansea University, Swansea SA2 8PP, Wales (United Kingdom); Allton, C.R., E-mail: c.allton@swan.ac.u [Department of Physics, Swansea University, Swansea SA2 8PP, Wales (United Kingdom); Leinweber, D.B. [Special Research Centre for the Subatomic Structure of Matter (CSSM), School of Chemistry and Physics, University of Adelaide, 5005 (Australia); Thomas, A.W. [Jefferson Lab, 12000 Jefferson Ave., Newport News, VA 23606 (United States); College of William and Mary, Williamsburg, VA 23187 (United States); Young, R.D. [Physics Division, Argonne National Laboratory, Argonne, IL 60439 (United States)

    2010-09-01

    The chiral extrapolation of the nucleon mass, M{sub n}, is investigated using data coming from 2-flavour partially-quenched lattice simulations. A large sample of lattice results from the CP-PACS Collaboration is analysed using the leading one-loop corrections, with explicit corrections for finite lattice spacing artifacts. The extrapolation is studied using finite-range regularised chiral perturbation theory. The analysis also provides a quantitative estimate of the leading finite volume corrections. It is found that the discretisation, finite volume and partial quenching effects can all be very well described in this framework, producing an extrapolated value of M{sub n} in agreement with experiment. Furthermore, determinations of the low energy constants of the nucleon mass's chiral expansion are in agreement with previous methods, but with significantly reduced errors. This procedure is also compared with extrapolations based on polynomial forms, where the results are less encouraging.

  13. A method for partial volume correction of PET-imaged tumor heterogeneity using expectation maximization with a spatially varying point spread function

    International Nuclear Information System (INIS)

    Barbee, David L; Holden, James E; Nickles, Robert J; Jeraj, Robert; Flynn, Ryan T

    2010-01-01

    Tumor heterogeneities observed in positron emission tomography (PET) imaging are frequently compromised by partial volume effects which may affect treatment prognosis, assessment or future implementations such as biologically optimized treatment planning (dose painting). This paper presents a method for partial volume correction of PET-imaged heterogeneous tumors. A point source was scanned on a GE Discovery LS at positions of increasing radii from the scanner's center to obtain the spatially varying point spread function (PSF). PSF images were fit in three dimensions to Gaussian distributions using least squares optimization. Continuous expressions were devised for each Gaussian width as a function of radial distance, allowing for generation of the system PSF at any position in space. A spatially varying partial volume correction (SV-PVC) technique was developed using expectation maximization (EM) and a stopping criterion based on the method's correction matrix generated for each iteration. The SV-PVC was validated using a standard tumor phantom and a tumor heterogeneity phantom and was applied to a heterogeneous patient tumor. SV-PVC results were compared to results obtained from spatially invariant partial volume correction (SINV-PVC), which used directionally uniform three-dimensional kernels. SV-PVC of the standard tumor phantom increased the maximum observed sphere activity by 55 and 40% for 10 and 13 mm diameter spheres, respectively. Tumor heterogeneity phantom results demonstrated that as net changes in the EM correction matrix decreased below 35%, further iterations improved overall quantitative accuracy by less than 1%. SV-PVC of clinically observed tumors frequently exhibited changes of ±30% in regions of heterogeneity. The SV-PVC method implemented spatially varying kernel widths and automatically determined the number of iterations for optimal restoration, parameters that are arbitrarily chosen in SINV-PVC. Comparing SV-PVC to SINV-PVC demonstrated

  14. [Computer aided design for fixed partial denture framework based on reverse engineering technology].

    Science.gov (United States)

    Sun, Yu-chun; Lü, Pei-jun; Wang, Yong

    2006-03-01

    To explore a computer-aided design (CAD) route for the frameworks of domestic fixed partial dentures (FPD) and to identify a suitable method of 3-D CAD. The working area of a dentition model was scanned with a 3-D mechanical scanner. Using reverse engineering (RE) software, margin and border curves were extracted and several reference curves were created to determine the dimensions and location of the pontic framework, which was taken from the standard database. The shoulder parts of the retainers were created after the axial surfaces were constructed. The connecting areas, axial line and curved surface of the framework connector were finally created. The framework of a three-unit FPD was designed with RE technology, which showed smooth surfaces and continuous contours. The design route is practical. The result of this study is significant in theory and practice, and will provide a reference for establishing a computer-aided design/computer-aided manufacturing (CAD/CAM) system for domestic FPD.

  15. A Framework for Hardware-Accelerated Services Using Partially Reconfigurable SoCs

    Directory of Open Access Journals (Sweden)

    MACHIDON, O. M.

    2016-05-01

    Full Text Available The current trend towards 'Everything as a Service' fosters a new approach to reconfigurable hardware resources. This innovative, service-oriented approach has the potential of bringing a series of benefits to both the reconfigurable and distributed computing fields by favoring hardware-based acceleration of web services and increasing service performance. This paper proposes a framework for accelerating web services by offloading the compute-intensive tasks to reconfigurable System-on-Chip (SoC) devices as integrated IP (Intellectual Property) cores. The framework provides scalable, dynamic management of the tasks and hardware processing cores, based on dynamic partial reconfiguration of the SoC. We have enhanced the security of the entire system by making use of the built-in detection features of the hardware device and also by implementing active countermeasures that protect the sensitive data.

  16. Effects of Impression Material, Impression Tray Type, and Type of Partial Edentulism on the Fit of Cobalt-Chromium Partial Denture Frameworks on Initial Clinical Insertion: A Retrospective Clinical Evaluation.

    Science.gov (United States)

    Baig, Mirza Rustum; Akbar, Jaber Hussain; Qudeimat, Muawia; Omar, Ridwaan

    2018-02-15

    To evaluate the effects of impression material, impression tray type, and type of partial edentulism (ie, Kennedy class) on the accuracy of fit of cobalt-chromium (Co-Cr) partial removable dental prostheses (PRDP), in terms of the number of fabricated frameworks required until the attainment of adequate fit. Electronic case documentation of 120 partially edentulous patients provided with Co-Cr PRDP treatment for one or both arches was examined. Statistical analyses of data were performed using analysis of variance and the Tukey honest significant difference test to compare the relationships between the different factors and the number of frameworks that needed to be fabricated for each patient (α = .05). Statistical analysis of data derived from 143 records (69 maxillary and 74 mandibular) revealed no significant correlation between impression material, tray type, or Kennedy class and the number of construction attempts for the pooled or individual arch data (P ≥ .05). In PRDP treatment, alginate can be chosen as a first-choice material, and metal stock trays can be a preferred option for making final impressions to fabricate Co-Cr frameworks.

  17. A multimodal imaging framework for enhanced robot-assisted partial nephrectomy guidance

    Science.gov (United States)

    Halter, Ryan J.; Wu, Xiaotian; Hartov, Alex; Seigne, John; Khan, Shadab

    2015-03-01

    Robot-assisted laparoscopic partial nephrectomies (RALPN) are performed to treat patients with locally confined renal carcinoma. There are well-documented benefits to performing partial (as opposed to radical) kidney resections and to using robot-assisted laparoscopic (as opposed to open) approaches. However, there are challenges in identifying tumor margins and critical benign structures, including blood vessels and collecting systems, during current RALPN procedures. The primary objective of this effort is to couple multiple image and data streams together to augment the visual information currently provided to surgeons performing RALPN and ultimately ensure complete tumor resection and minimal damage to functional structures (i.e., renal vasculature and collecting systems). To meet this challenge we have developed a framework and performed initial feasibility experiments to couple pre-operative high-resolution anatomic images with intraoperative MRI, ultrasound (US) and optical-based surface mapping and kidney tracking. With these registered images and data streams, we aim to overlay the high-resolution contrast-enhanced anatomic (CT or MR) images onto the surgeon's view screen for enhanced guidance. To date we have integrated the following components of our framework: 1) a method for tracking an intraoperative US probe to extract the kidney surface and a set of embedded kidney markers, 2) a method for co-registering intraoperative US scans with pre-operative MR scans, and 3) a method for deforming pre-op scans to match intraoperative scans. These components have been evaluated through phantom studies to demonstrate protocol feasibility.

  18. PyPWA: A partial-wave/amplitude analysis software framework

    Science.gov (United States)

    Salgado, Carlos

    2016-05-01

    The PyPWA project aims to develop a software framework for partial-wave and amplitude analysis of data, providing the user with software tools to identify resonances from multi-particle final states in photoproduction. Most of the code is written in Python. The software is divided into two main branches: one general shell where amplitude parameters (or any parametric model) are estimated from the data. This branch also includes software to produce simulated data sets using the fitted amplitudes. A second branch contains a specific realization of the isobar model (with room to include Deck-type and other isobar model extensions) to perform PWA with an interface into the computer resources at Jefferson Lab. We are currently implementing parallelism and vectorization using the Intel Xeon Phi family of coprocessors.
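As a flavor of the kind of estimation such a framework performs, here is a toy unbinned maximum-likelihood fit of a single intensity parameter; the 1 + r·cos²θ model is hypothetical and is not PyPWA's isobar amplitude:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)

def sample(r, n=20000):
    """Accept-reject sampling from I(theta) ~ 1 + r*cos(theta)^2 on [0, pi)."""
    out = []
    while len(out) < n:
        t = rng.uniform(0.0, np.pi, n)
        keep = rng.uniform(0.0, 1.0 + r, n) < 1.0 + r * np.cos(t) ** 2
        out.extend(t[keep])
    return np.array(out[:n])

data = sample(r=2.0)

def nll(r):
    # Normalisation: integral of (1 + r*cos^2) over [0, pi) is pi*(1 + r/2).
    return (-np.sum(np.log(1.0 + r * np.cos(data) ** 2))
            + len(data) * np.log(np.pi * (1.0 + r / 2.0)))

fit = minimize_scalar(nll, bounds=(0.0, 10.0), method="bounded")
```

Real partial-wave fits replace this scalar intensity with coherent sums of complex amplitudes, but the extended-likelihood machinery is the same in spirit.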

  19. Impact of motion compensation and partial volume correction for 18F-NaF PET/CT imaging of coronary plaque

    Science.gov (United States)

    Cal-González, J.; Tsoumpas, C.; Lassen, M. L.; Rasul, S.; Koller, L.; Hacker, M.; Schäfers, K.; Beyer, T.

    2018-01-01

    Recent studies have suggested that 18F-NaF-PET enables visualization and quantification of plaque micro-calcification in the coronary tree. However, PET imaging of plaque calcification in the coronary arteries is challenging because of respiratory and cardiac motion as well as partial volume effects. The objective of this work is to implement an image reconstruction framework, which incorporates compensation for respiratory as well as cardiac motion (MoCo) and partial volume correction (PVC), for cardiac 18F-NaF PET imaging in PET/CT. We evaluated the effect of MoCo and PVC on the quantification of vulnerable plaques in the coronary arteries. Realistic simulations (Biograph TPTV, Biograph mCT) and phantom acquisitions (Biograph mCT) were used for these evaluations. Different uptake values in the calcified plaques were evaluated in the simulations, while three ‘plaque-type’ lesions of 36, 31 and 18 mm3 were included in the phantom experiments. After validation, the MoCo and PVC methods were applied in four pilot NaF-PET patient studies. In all cases, the MoCo-based image reconstruction was performed using the STIR software. The PVC was obtained from a local projection (LP) method, previously evaluated in preclinical and clinical PET. The results obtained show a significant increase of the measured lesion-to-background ratios (LBR) in the MoCo + PVC images. These ratios were further enhanced when directly using the tissue activities from the LP method, making this approach more suitable for the quantitative evaluation of coronary plaques. When using the LP method on the MoCo images, LBR increased between 200% and 1119% in the simulated data, between 212% and 614% in the phantom experiments and between 46% and 373% in the plaques with positive uptake observed in the pilot patients. In conclusion, we have built and validated a STIR framework incorporating MoCo and PVC for 18F-NaF PET imaging of coronary plaques. First results indicate an improved

  20. A multiresolution image based approach for correction of partial volume effects in emission tomography

    International Nuclear Information System (INIS)

    Boussion, N; Hatt, M; Lamare, F; Bizais, Y; Turzo, A; Rest, C Cheze-Le; Visvikis, D

    2006-01-01

    Partial volume effects (PVEs) are consequences of the limited spatial resolution in emission tomography. They lead to a loss of signal in tissues of size similar to the point spread function and induce activity spillover between regions. Although PVE can be corrected for by using algorithms that provide the correct radioactivity concentration in a series of regions of interest (ROIs), so far little attention has been given to the possibility of creating improved images as a result of PVE correction. Potential advantages of PVE-corrected images include the ability to accurately delineate functional volumes as well as improving tumour-to-background ratio, resulting in an associated improvement in the analysis of response to therapy studies and diagnostic examinations, respectively. The objective of our study was therefore to develop a methodology for PVE correction not only to enable the accurate recuperation of activity concentrations, but also to generate PVE-corrected images. In the multiresolution analysis that we define here, details of a high-resolution image H (MRI or CT) are extracted, transformed and integrated in a low-resolution image L (PET or SPECT). A discrete wavelet transform of both H and L images is performed by using the 'a trous' algorithm, which allows the spatial frequencies (details, edges, textures) to be obtained easily at a level of resolution common to H and L. A model is then inferred to build the missing details of L from the high-frequency details in H. The process was successfully tested on synthetic and simulated data, proving the ability to obtain accurately corrected images. Quantitative PVE correction was found to be comparable with a method considered as a reference but limited to ROI analyses. Visual improvement and quantitative correction were also obtained in two examples of clinical images, the first using a combined PET/CT scanner with a lymphoma patient and the second using an FDG brain PET and corresponding T1-weighted MRI in
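The 'à trous' (stationary) wavelet decomposition at the heart of this method can be sketched in one dimension with numpy. This is a minimal illustration of the decomposition step only: the model that transfers the H-image details into L is not reproduced here, and the B3-spline kernel is an assumed (though conventional) choice.

```python
import numpy as np

def a_trous_decompose(signal, levels=3):
    """Stationary ('a trous') wavelet decomposition with a B3-spline kernel.

    Returns (details, residual): one detail layer per scale plus the final
    smooth approximation. Summing them reconstructs the input exactly.
    """
    kernel = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0
    details = []
    approx = signal.astype(float)
    for level in range(levels):
        # Dilate the kernel by inserting 2**level - 1 zeros between taps.
        step = 2 ** level
        dilated = np.zeros((len(kernel) - 1) * step + 1)
        dilated[::step] = kernel
        smooth = np.convolve(approx, dilated, mode="same")
        details.append(approx - smooth)   # high-frequency layer at this scale
        approx = smooth
    return details, approx

rng = np.random.default_rng(0)
high_res = np.sin(np.linspace(0, 6 * np.pi, 256)) + 0.3 * rng.standard_normal(256)
details, residual = a_trous_decompose(high_res)

# Exact reconstruction: input == sum of detail layers + residual.
recon = sum(details) + residual
print(np.allclose(recon, high_res))  # True
```

The telescoping structure (each detail layer is the difference between successive smoothings) is what makes the detail layers directly comparable across the H and L images at a common resolution level.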

  1. Impact of partial-volume correction in oncological PET studies. A systematic review and meta-analysis

    Energy Technology Data Exchange (ETDEWEB)

    Cysouw, Matthijs C.F.; Kramer, Gerbrand M.; Hoekstra, Otto S. [VU University Medical Centre, Department of Radiology and Nuclear Medicine, Amsterdam (Netherlands); Schoonmade, Linda J. [VU University Medical Centre, Department of Medical Library, Amsterdam (Netherlands); Boellaard, Ronald [VU University Medical Centre, Department of Radiology and Nuclear Medicine, Amsterdam (Netherlands); University Medical Centre Groningen, Department of Nuclear Medicine and Molecular Imaging, Groningen (Netherlands); Vet, Henrica C.W. de [VU University Medical Centre, Department of Epidemiology and Biostatistics, Amsterdam (Netherlands)

    2017-11-15

    Positron-emission tomography can be useful in oncology for diagnosis, (re)staging, determining prognosis, and response assessment. However, partial-volume effects hamper accurate quantification of lesions <2-3 x the PET system's spatial resolution, and the clinical impact of this is not evident. This systematic review provides an up-to-date overview of studies investigating the impact of partial-volume correction (PVC) in oncological PET studies. We searched the PubMed and Embase databases according to the PRISMA statement, including studies from inception until May 9, 2016. Two reviewers independently screened all abstracts and eligible full-text articles and performed quality assessment according to QUADAS-2 and QUIPS criteria. For a set of similar diagnostic studies, we statistically pooled the results using bivariate meta-regression. Thirty-one studies were eligible for inclusion. Overall, study quality was good. For diagnosis and nodal staging, PVC yielded a strong trend of increased sensitivity at the expense of specificity. Meta-analysis of six studies investigating diagnosis of pulmonary nodules (679 lesions) showed no significant change in diagnostic accuracy after PVC (p = 0.222). Prognostication was not improved for non-small cell lung cancer and esophageal cancer, whereas it did improve for head and neck cancer. Response assessment was not improved by PVC for (locally advanced) breast cancer or rectal cancer, and it worsened in metastatic colorectal cancer. The accumulated evidence to date does not support routine application of PVC in standard clinical PET practice. Consensus on the preferred PVC methodology in oncological PET should be reached. Partial-volume-corrected data should be used as adjuncts to, but not yet replacements for, uncorrected data. (orig.)

  2. An MGF-based unified framework to determine the joint statistics of partial sums of ordered random variables

    KAUST Repository

    Nam, Sungsik; Alouini, Mohamed-Slim; Yang, Hongchuan

    2010-01-01

    Order statistics find applications in various areas of communications and signal processing. In this paper, we introduce a unified analytical framework to determine the joint statistics of partial sums of ordered random variables (RVs)

  3. Twisted finite-volume corrections to K{sub l3} decays with partially-quenched and rooted-staggered quarks

    Energy Technology Data Exchange (ETDEWEB)

    Bernard, Claude [Department of Physics, Washington University,One Brookings Drive, Saint Louis (United States); Bijnens, Johan [Department of Astronomy and Theoretical Physics, Lund University,Sölvegatan 14A, SE 223-62 Lund (Sweden); Gámiz, Elvira [CAFPE and Departamento de Física Teórica y del Cosmos, Universidad de Granada,Campus de Fuente Nueva, E-18002 Granada (Spain); Relefors, Johan [Department of Astronomy and Theoretical Physics, Lund University,Sölvegatan 14A, SE 223-62 Lund (Sweden)

    2017-03-23

    The determination of |V{sub us}| from kaon semileptonic decays requires the value of the form factor f{sub +}(q{sup 2}=0) which can be calculated precisely on the lattice. We provide the one-loop partially quenched chiral perturbation theory expressions both with and without including the effects of staggered quarks for all form factors at finite volume and with partially twisted boundary conditions for both the vector current and scalar density matrix elements at all q{sup 2}. We point out that at finite volume there are more form factors than just f{sub +} and f{sub −} for the vector current matrix element but that the Ward identity is fully satisfied. The size of the finite-volume corrections at present lattice sizes is small. This will help improve the lattice determination of f{sub +}(q{sup 2}=0) since the finite-volume error is the dominant error source for some calculations. The size of the finite-volume corrections may be estimated on a single lattice ensemble by comparing results for various twist choices.

  4. Evaluation of a three-dimensional ultrasound localisation system incorporating probe pressure correction for use in partial breast irradiation.

    Science.gov (United States)

    Harris, E J; Symonds-Taylor, R; Treece, G M; Gee, A H; Prager, R W; Brabants, P; Evans, P M

    2009-10-01

    This work evaluates a three-dimensional (3D) freehand ultrasound-based localisation system with a new probe pressure correction for use in partial breast irradiation. Accuracy and precision of absolute position measurement were measured as a function of imaging depth (ID), object depth, scanning direction and time using a water phantom containing crossed wires. To quantify the improvement in accuracy due to pressure correction, 3D scans of a breast phantom containing ball bearings were obtained with and without pressure. Ball bearing displacements were then measured with and without pressure correction. Using a single scan direction (for all imaging depths), the mean error was <1.3 mm, with the exception of the wires at 68.5 mm imaged with an ID of 85 mm, which gave a mean error of -2.3 mm. Precision was better than 1 mm for any single scan direction. For multiple scan directions, precision was within 1.7 mm. Probe pressure corrections of between 0 mm and 2.2 mm were observed for pressure displacements of 1.1 mm to 4.2 mm. Overall, anteroposterior position measurement accuracy improved from 2.2 mm to 1.6 mm and to 1.4 mm for the two opposing scanning directions. Precision is comparable to that reported for other commercially available ultrasound localisation systems, provided that 3D image acquisition is performed in the same scan direction. The existing temporal calibration is imperfect and a "per installation" calibration would further improve the accuracy and precision. Probe pressure correction was shown to improve the accuracy and will be useful for the localisation of the excision cavity in partial breast radiotherapy.

  5. Combining MRI with PET for partial volume correction improves image-derived input functions in mice

    Energy Technology Data Exchange (ETDEWEB)

    Evans, Eleanor; Buonincontri, Guido [Wolfson Brain Imaging Centre, University of Cambridge, Cambridge (United Kingdom); Izquierdo, David [Athinoula A Martinos Centre, Harvard University, Cambridge, MA (United States); Methner, Carmen [Department of Medicine, University of Cambridge, Cambridge (United Kingdom); Hawkes, Rob C [Wolfson Brain Imaging Centre, University of Cambridge, Cambridge (United Kingdom); Ansorge, Richard E [Department of Physics, University of Cambridge, Cambridge (United Kingdom); Kreig, Thomas [Department of Medicine, University of Cambridge, Cambridge (United Kingdom); Carpenter, T Adrian [Wolfson Brain Imaging Centre, University of Cambridge, Cambridge (United Kingdom); Sawiak, Stephen J [Wolfson Brain Imaging Centre, University of Cambridge, Cambridge (United Kingdom); Behavioural and Clinical Neurosciences Institute, University of Cambridge, Cambridge (United Kingdom)

    2014-07-29

    Kinetic modelling in PET requires the arterial input function (AIF), defined as the time-activity curve (TAC) in plasma. This measure is challenging to obtain in mice due to low blood volumes, resulting in a reliance on image-based methods for AIF derivation. We present a comparison of PET- and MR-based region-of-interest (ROI) analysis to obtain image-derived AIFs from the left ventricle (LV) of a mouse model. ROI-based partial volume correction (PVC) was performed to improve quantification.

  6. Combining MRI with PET for partial volume correction improves image-derived input functions in mice

    International Nuclear Information System (INIS)

    Evans, Eleanor; Buonincontri, Guido; Izquierdo, David; Methner, Carmen; Hawkes, Rob C; Ansorge, Richard E; Kreig, Thomas; Carpenter, T Adrian; Sawiak, Stephen J

    2014-01-01

    Kinetic modelling in PET requires the arterial input function (AIF), defined as the time-activity curve (TAC) in plasma. This measure is challenging to obtain in mice due to low blood volumes, resulting in a reliance on image-based methods for AIF derivation. We present a comparison of PET- and MR-based region-of-interest (ROI) analysis to obtain image-derived AIFs from the left ventricle (LV) of a mouse model. ROI-based partial volume correction (PVC) was performed to improve quantification.

  7. Baseline correction combined partial least squares algorithm and its application in on-line Fourier transform infrared quantitative analysis.

    Science.gov (United States)

    Peng, Jiangtao; Peng, Silong; Xie, Qiong; Wei, Jiping

    2011-04-01

    In order to eliminate the lower order polynomial interferences, a new quantitative calibration algorithm "Baseline Correction Combined Partial Least Squares (BCC-PLS)", which combines baseline correction and conventional PLS, is proposed. By embedding baseline correction constraints into PLS weights selection, the proposed calibration algorithm overcomes the uncertainty in baseline correction and can meet the requirement of on-line attenuated total reflectance Fourier transform infrared (ATR-FTIR) quantitative analysis. The effectiveness of the algorithm is evaluated by the analysis of glucose and marzipan ATR-FTIR spectra. BCC-PLS algorithm shows improved prediction performance over PLS. The root mean square error of cross-validation (RMSECV) on marzipan spectra for the prediction of the moisture is found to be 0.53%, w/w (range 7-19%). The sugar content is predicted with a RMSECV of 2.04%, w/w (range 33-68%). Copyright © 2011 Elsevier B.V. All rights reserved.
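BCC-PLS builds the baseline constraint into the PLS weight selection itself. For contrast, here is a minimal numpy sketch of the conventional two-step alternative it improves on: fitting and subtracting a low-order polynomial baseline before calibration. The synthetic spectrum, band position, and drift polynomial are all hypothetical.

```python
import numpy as np

def remove_polynomial_baseline(spectrum, order=2):
    """Least-squares fit of a low-order polynomial baseline, then subtract it.

    A crude stand-in for baseline handling: BCC-PLS instead embeds the
    baseline constraint directly into the PLS weight selection.
    """
    x = np.arange(len(spectrum), dtype=float)
    coeffs = np.polyfit(x, spectrum, order)
    baseline = np.polyval(coeffs, x)
    return spectrum - baseline

# Synthetic ATR-FTIR-like spectrum: a narrow Gaussian band on a sloping baseline.
x = np.linspace(0, 1, 500)
band = np.exp(-0.5 * ((x - 0.5) / 0.02) ** 2)
drift = 0.8 + 0.5 * x + 0.3 * x ** 2          # lower-order polynomial interference
corrected = remove_polynomial_baseline(band + drift)

print(abs(corrected.mean()) < 0.05)  # drift largely removed -> True
```

Because the polynomial basis includes a constant term, the least-squares residual has exactly zero mean; the drawback, which motivates BCC-PLS, is that the fitted baseline also absorbs part of the analyte band.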

  8. On Neglecting Chemical Exchange Effects When Correcting in Vivo 31P MRS Data for Partial Saturation

    Science.gov (United States)

    Ouwerkerk, Ronald; Bottomley, Paul A.

    2001-02-01

    Signal acquisition in most MRS experiments requires a correction for partial saturation that is commonly based on a single exponential model for T1 that ignores effects of chemical exchange. We evaluated the errors in 31P MRS measurements introduced by this approximation in two-, three-, and four-site chemical exchange models under a range of flip-angles and pulse sequence repetition times (TR) that provide near-optimum signal-to-noise ratio (SNR). In two-site exchange, such as the creatine-kinase reaction involving phosphocreatine (PCr) and γ-ATP in human skeletal and cardiac muscle, errors in saturation factors were determined for the progressive saturation method and the dual-angle method of measuring T1. The analysis shows that these errors are negligible for the progressive saturation method if the observed T1 is derived from a three-parameter fit of the data. When T1 is measured with the dual-angle method, errors in saturation factors are less than 5% for all conceivable values of the chemical exchange rate and flip-angles that deliver useful SNR per unit time over the range T1/5 ≤ TR ≤ 2T1. Errors are also less than 5% for three- and four-site exchange when TR ≥ T1*/2, the so-called "intrinsic" T1's of the metabolites. The effect of changing metabolite concentrations and chemical exchange rates on observed T1's and saturation corrections was also examined with a three-site chemical exchange model involving ATP, PCr, and inorganic phosphate in skeletal muscle undergoing up to 95% PCr depletion. Although the observed T1's were dependent on metabolite concentrations, errors in saturation corrections for TR = 2 s could be kept within 5% for all exchanging metabolites using a simple interpolation of two dual-angle T1 measurements performed at the start and end of the experiment. Thus, the single-exponential model appears to be reasonably accurate for correcting 31P MRS data for partial saturation in the presence of chemical exchange. Even in systems where
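The partial-saturation correction evaluated above rests on the standard steady-state expression for a mono-exponential T1 with chemical exchange ignored. A sketch with hypothetical 31P values (the T1, TR, and flip angle below are illustrative, not taken from the paper):

```python
import numpy as np

def saturation_factor(t1, tr, flip_deg):
    """Steady-state signal fraction for repeated excitation, mono-exponential T1.

    S_observed = S_fully_relaxed * sin(theta) * (1 - E1) / (1 - E1*cos(theta)),
    with E1 = exp(-TR/T1); the usual correction when TR < ~5*T1 and chemical
    exchange is neglected.
    """
    e1 = np.exp(-tr / t1)
    theta = np.radians(flip_deg)
    return np.sin(theta) * (1 - e1) / (1 - e1 * np.cos(theta))

# Hypothetical 31P example: PCr with T1 = 4 s, TR = 2 s, 60 degree pulse.
f = saturation_factor(t1=4.0, tr=2.0, flip_deg=60.0)
measured = 0.8                      # arbitrary measured peak area
corrected = measured / f            # estimate of the fully relaxed signal
print(0 < f < 1)                    # partial saturation attenuates the signal -> True
```

In an exchanging system the observed T1 entering this formula is itself concentration-dependent, which is exactly the error source the paper quantifies.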

  9. Partial F-tests with multiply imputed data in the linear regression framework via coefficient of determination.

    Science.gov (United States)

    Chaurasia, Ashok; Harel, Ofer

    2015-02-10

    Tests for regression coefficients such as global, local, and partial F-tests are common in applied research. In the framework of multiple imputation, there are several papers addressing tests for regression coefficients. However, for simultaneous hypothesis testing, the existing methods are computationally intensive because they involve calculation with vectors and (inversion of) matrices. In this paper, we propose a simple method based on the scalar entity, coefficient of determination, to perform (global, local, and partial) F-tests with multiply imputed data. The proposed method is evaluated using simulated data and applied to suicide prevention data. Copyright © 2014 John Wiley & Sons, Ltd.
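The scalar R²-based partial F-statistic at the core of the method can be sketched for a single complete dataset; the pooling across multiple imputations, which is the paper's actual contribution, is omitted here. Data and variable names are hypothetical.

```python
import numpy as np

def r_squared(X, y):
    """Coefficient of determination from an OLS fit (intercept included)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

def partial_f(X_full, X_reduced, y):
    """Partial F-statistic for the q coefficients dropped from the full model:

        F = ((R2_full - R2_red) / q) / ((1 - R2_full) / (n - p - 1))

    with n observations and p predictors in the full model.
    """
    n, p = X_full.shape
    q = p - X_reduced.shape[1]
    r2f, r2r = r_squared(X_full, y), r_squared(X_reduced, y)
    return ((r2f - r2r) / q) / ((1 - r2f) / (n - p - 1))

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 3))
y = 2.0 * X[:, 0] + 0.1 * rng.standard_normal(200)   # only the first predictor matters

F_drop_noise = partial_f(X, X[:, :1], y)   # drop the two noise predictors
F_drop_signal = partial_f(X, X[:, 1:], y)  # drop the informative predictor
print(F_drop_signal > F_drop_noise)  # True
```

Only the scalar R² of each fitted model is needed, which is what makes the approach cheap to pool across imputed datasets compared with matrix-based Wald-type tests.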

  10. Simulation-based partial volume correction for dopaminergic PET imaging. Impact of segmentation accuracy

    Energy Technology Data Exchange (ETDEWEB)

    Rong, Ye; Winz, Oliver H. [University Hospital Aachen (Germany). Dept. of Nuclear Medicine; Vernaleken, Ingo [University Hospital Aachen (Germany). Dept. of Psychiatry, Psychotherapy and Psychosomatics; Goedicke, Andreas [University Hospital Aachen (Germany). Dept. of Nuclear Medicine; High Tech Campus, Philips Research Lab., Eindhoven (Netherlands); Mottaghy, Felix M. [University Hospital Aachen (Germany). Dept. of Nuclear Medicine; Maastricht University Medical Center (Netherlands). Dept. of Nuclear Medicine; Rota Kops, Elena [Forschungszentrum Juelich (Germany). Inst. of Neuroscience and Medicine-4

    2015-07-01

    Partial volume correction (PVC) is an essential step for quantitative positron emission tomography (PET). In the present study, PVELab, a freely available software package, is evaluated for PVC in {sup 18}F-FDOPA brain-PET, with a special focus on the accuracy degradation introduced by various MR-based segmentation approaches. Methods: Four PVC algorithms (M-PVC; MG-PVC; mMG-PVC; and R-PVC) were analyzed on simulated {sup 18}F-FDOPA brain-PET images. MR image segmentation was carried out using FSL (FMRIB Software Library) and SPM (Statistical Parametric Mapping) packages, including additional adaptation for subcortical regions (SPM{sub L}). Different PVC and segmentation combinations were compared with respect to deviations in regional activity values and time-activity curves (TACs) of the occipital cortex (OCC), caudate nucleus (CN), and putamen (PUT). Additionally, the PVC impact on the determination of the influx constant (K{sub i}) was assessed. Results: Main differences between the tissue maps returned by the three segmentation algorithms were found in the subcortical region, especially at PUT. Average misclassification errors in combination with volume reduction were found to be lowest for SPM{sub L} (PUT < 30%) and highest for FSL (PUT > 70%). Accurate recovery of activity data at OCC is achieved by M-PVC (apparent recovery coefficient varies between 0.99 and 1.10). The other three evaluated PVC algorithms proved more suitable for subcortical regions, with MG-PVC and mMG-PVC being less prone to the largest tissue misclassification error simulated in this study. Except for M-PVC, quantification accuracy of K{sub i} for CN and PUT was clearly improved by PVC. Conclusions: The regional activity value of PUT was appreciably overcorrected by most of the PVC approaches employing FSL or SPM segmentation, revealing the importance of accurate MR image segmentation for the presented PVC framework. The selection of a PVC approach should be adapted to the anatomical

  11. Quantifying [{sup 18}F]fluorodeoxyglucose uptake in the arterial wall: the effects of dual time-point imaging and partial volume effect correction

    Energy Technology Data Exchange (ETDEWEB)

    Blomberg, Bjoern A. [University Medical Center Utrecht, Department of Radiology, Utrecht (Netherlands); Odense University Hospital, Department of Nuclear Medicine, Odense (Denmark); Bashyam, Arjun; Ramachandran, Abhinay; Gholami, Saeid; Houshmand, Sina; Salavati, Ali; Werner, Tom; Alavi, Abass [Hospital of the University of Pennsylvania, Department of Radiology, Philadelphia, PA (United States); Zaidi, Habib [Geneva University Hospital, Division of Nuclear Medicine and Molecular Imaging, Geneva (Switzerland); University of Groningen, Department of Nuclear Medicine and Molecular Imaging, University Medical Center Groningen, Groningen (Netherlands)

    2015-08-15

    The human arterial wall is smaller than the spatial resolution of current positron emission tomographs. Therefore, partial volume effects should be considered when quantifying arterial wall {sup 18}F-FDG uptake. We evaluated the impact of a novel method for partial volume effect (PVE) correction with contrast-enhanced CT (CECT) assistance on quantification of arterial wall {sup 18}F-FDG uptake at different imaging time-points. Ten subjects were assessed by CECT imaging and dual time-point PET/CT imaging at approximately 60 and 180 min after {sup 18}F-FDG administration. For both time-points, uptake of {sup 18}F-FDG was determined in the aortic wall by calculating the blood pool-corrected maximum standardized uptake value (cSUV{sub MAX}) and cSUV{sub MEAN}. The PVE-corrected SUV{sub MEAN} (pvcSUV{sub MEAN}) was also calculated using {sup 18}F-FDG PET/CT and CECT images. Finally, corresponding target-to-background ratios (TBR) were calculated. At 60 min, pvcSUV{sub MEAN} was on average 3.1 times greater than cSUV{sub MAX} (P <.0001) and 8.5 times greater than cSUV{sub MEAN} (P <.0001). At 180 min, pvcSUV{sub MEAN} was on average 2.6 times greater than cSUV{sub MAX} (P <.0001) and 6.6 times greater than cSUV{sub MEAN} (P <.0001). This study demonstrated that CECT-assisted PVE correction significantly influences quantification of arterial wall {sup 18}F-FDG uptake. Therefore, partial volume effects should be considered when quantifying arterial wall {sup 18}F-FDG uptake with PET. (orig.)

  12. An Electronic Method for Measuring the Fit of Removable Partial Denture Frameworks to Dental Casts

    Directory of Open Access Journals (Sweden)

    Robert J Williams

    2009-06-01

    It is well established that the Removable Partial Denture (RPD) is an effective treatment prosthesis. The objectives of a successful RPD are to preserve the health of the remaining oral structures, restore function and restore esthetics. To achieve these objectives, an RPD framework must fit accurately to the supporting structures. This paper presents a method for measuring the gaps or spaces present between the RPD framework and the supporting structures, which will enable the dentist and the dental technician to evaluate the accuracy of fit of the prosthesis before it is delivered to the patient. The method used in this research is based on the principle of electric capacitance and uses a specially designed prototype measurement system.
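The parallel-plate relation underlying a capacitive gap measurement can be sketched as follows. The electrode area and the reading are hypothetical: the prototype's actual electrode geometry and circuitry are not described in the record.

```python
from math import isclose

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def gap_from_capacitance(capacitance, area, rel_permittivity=1.0):
    """Infer a framework-to-cast gap from an ideal parallel-plate model.

        C = eps0 * eps_r * A / d   =>   d = eps0 * eps_r * A / C

    A toy idealisation of the capacitive measurement principle; real
    electrodes have fringe fields and non-planar geometry.
    """
    return EPS0 * rel_permittivity * area / capacitance

# Hypothetical 4 mm^2 electrode pad reading 0.354 pF across an air gap:
d = gap_from_capacitance(0.354e-12, area=4e-6)
print(round(d * 1e6))  # gap in micrometres -> 100
```

The inverse proportionality of C and d is what makes capacitance a sensitive probe of small gaps: halving the gap doubles the measured capacitance.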

  13. A comparison of laser-welded titanium and conventional cast frameworks supported by implants in the partially edentulous jaw: a 3-year prospective multicenter study.

    Science.gov (United States)

    Jemt, T; Henry, P; Lindén, B; Naert, I; Weber, H; Bergström, C

    2000-01-01

    The purpose of this prospective multicenter study was to evaluate and compare the clinical performance of laser-welded titanium fixed partial implant-supported prostheses with conventional cast frameworks. Forty-two partially edentulous patients were provided with Brånemark system implants and arranged into 2 groups. Group A was provided with a conventional cast framework with porcelain veneers in one side of the jaw and a laser-welded titanium framework with low-fusing porcelain on the other side. The patients in group B had an old implant prosthesis replaced by a titanium framework prosthesis. The patients were followed for 3 years after prosthesis placement. Clinical and radiographic data were collected and analyzed. Only one implant was lost, and all prostheses were still in function after 3 years. The 2 framework designs showed similar clinical performance with few clinical complications. Only one abutment screw (1%) and 9 porcelain tooth units (5%) fractured. Four prostheses experienced loose gold screws (6%). In group A, marginal bone loss was similar for both designs of prostheses, with a mean of 1.0 mm and 0.3 mm in the maxilla and mandible, respectively. No bone loss was observed on average in group B. No significant relationship (P > 0.05) was observed between marginal bone loss and placement of prosthesis margin or prosthesis design. The use of laser-welded titanium frameworks seems to present similar clinical performance to conventional cast frameworks in partial implant situations after 3 years.

  14. Partial volume effect-corrected FDG PET and grey matter volume loss in patients with mild Alzheimer's disease

    International Nuclear Information System (INIS)

    Samuraki, Miharu; Yanase, Daisuke; Yamada, Masahito; Matsunari, Ichiro; Chen, Wei-Ping; Yajima, Kazuyoshi; Fujikawa, Akihiko; Takeda, Nozomi; Nishimura, Shintaro; Matsuda, Hiroshi

    2007-01-01

    Although {sup 18}F-fluorodeoxyglucose (FDG) PET is an established imaging technique to assess brain glucose utilisation, accurate measurement of tracer concentration is confounded by the presence of the partial volume effect (PVE) due to the limited spatial resolution of PET, which is particularly true in atrophic brains such as those encountered in patients with Alzheimer's disease (AD). Our aim was to investigate the effects of PVE correction on FDG PET in conjunction with voxel-based morphometry (VBM) in patients with mild AD. Thirty-nine AD patients and 73 controls underwent FDG PET and MRI. The PVE-corrected grey matter PET images were obtained using an MRI-based three-compartment method. Additionally, the results of PET were compared with grey matter loss detected by VBM. Before PVE correction, reduced FDG uptake was observed in posterior cingulate gyri (PCG) and parieto-temporal lobes (PTL) in AD patients, which persisted after PVE correction. Notably, PVE correction revealed relatively preserved FDG uptake in hippocampal areas, despite the grey matter loss in the medial temporal lobe (MTL) revealed by VBM. FDG uptake in PCG and PTL is reduced in AD regardless of whether or not PVE correction is applied, supporting the notion that the reduced FDG uptake in these areas is not the result of atrophy. Furthermore, FDG uptake by grey matter tissue in the MTL, including hippocampal areas, is relatively preserved, suggesting that compensatory mechanisms may play a role in patients with mild AD. (orig.)

  15. A Label Correcting Algorithm for Partial Disassembly Sequences in the Production Planning for End-of-Life Products

    Directory of Open Access Journals (Sweden)

    Pei-Fang (Jennifer) Tsai

    2012-01-01

    Remanufacturing of used products has become a strategic issue for cost-sensitive businesses. Due to the nature of the uncertain supply of end-of-life (EoL) products, reverse logistics can only be sustainable with dynamic production planning for the disassembly process. This research investigates the sequencing of disassembly operations as a single-period partial disassembly optimization (SPPDO) problem to minimize total disassembly cost. AND/OR graph representation is used to include all disassembly sequences of a returned product. A label correcting algorithm is proposed to find an optimal partial disassembly plan if a specific reusable subpart is retrieved from the original return. Then, a heuristic procedure that utilizes this polynomial-time algorithm is presented to solve the SPPDO problem. Numerical examples are used to demonstrate the effectiveness of this solution procedure.
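As background, here is a generic FIFO label-correcting shortest-path sketch on an ordinary weighted digraph. The paper's algorithm operates on an AND/OR disassembly graph, which this toy example does not model; the graph, node names, and costs are hypothetical.

```python
from collections import deque

def label_correcting(graph, source):
    """FIFO label-correcting shortest paths on a weighted digraph.

    graph: {node: [(successor, cost), ...]}. Labels are tentative costs
    that keep being corrected until no further improvement is possible
    (Bellman-Ford with a work queue).
    """
    labels = {node: float("inf") for node in graph}
    labels[source] = 0.0
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v, cost in graph.get(u, []):
            if labels[u] + cost < labels.get(v, float("inf")):
                labels[v] = labels[u] + cost
                if v not in queue:
                    queue.append(v)
    return labels

# Toy disassembly states A..D with operation costs on the arcs.
g = {"A": [("B", 3), ("C", 1)], "C": [("B", 1), ("D", 5)], "B": [("D", 1)], "D": []}
print(label_correcting(g, "A")["D"])  # cheapest sequence A->C->B->D costs 3.0
```

The "correcting" behaviour is visible on node B: it is first labelled 3 via the direct arc, then corrected to 2 when the cheaper route through C is processed.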

  16. An MGF-based unified framework to determine the joint statistics of partial sums of ordered i.n.d. random variables

    KAUST Repository

    Nam, Sungsik

    2014-08-01

    The joint statistics of partial sums of ordered random variables (RVs) are often needed for the accurate performance characterization of a wide variety of wireless communication systems. A unified analytical framework to determine the joint statistics of partial sums of ordered independent and identically distributed (i.i.d.) random variables was recently presented. However, the identical distribution assumption may not be valid in several real-world applications. With this motivation in mind, we consider in this paper the more general case in which the random variables are independent but not necessarily identically distributed (i.n.d.). More specifically, we extend the previous analysis and introduce a new more general unified analytical framework to determine the joint statistics of partial sums of ordered i.n.d. RVs. Our mathematical formalism is illustrated with an application on the exact performance analysis of the capture probability of generalized selection combining (GSC)-based RAKE receivers operating over frequency-selective fading channels with a non-uniform power delay profile. © 1991-2012 IEEE.
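As background (a standard order-statistics fact, not taken from the paper itself): for $N$ i.i.d. RVs with common pdf $f_{\gamma}$, arranged in decreasing order $\gamma_{(1)} \ge \dots \ge \gamma_{(N)}$, the joint pdf of the ordered set is

```latex
% Joint pdf of N i.i.d. RVs sorted in decreasing order --
% the starting point that MGF-based frameworks generalise:
f_{\gamma_{(1)},\dots,\gamma_{(N)}}(x_1,\dots,x_N)
  = N!\,\prod_{n=1}^{N} f_{\gamma}(x_n),
  \qquad x_1 \ge x_2 \ge \dots \ge x_N .
```

The partial sum of interest is then $\Gamma_m = \sum_{n=1}^{m} \gamma_{(n)}$, e.g. the combined SNR of the $m$ strongest branches in generalized selection combining; dropping the identical-distribution assumption, as this paper does, replaces the single $N!\prod_n f_{\gamma}(x_n)$ term by a permanent-like sum over branch assignments.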

  17. Partial volume effect correction in nuclear medicine: assessment with a mathematical phantom

    International Nuclear Information System (INIS)

    Silva, Murilo Collete da; Pozzo, Lorena

    2009-01-01

    Objective: assessment of the Van Cittert partial volume effect correction method in nuclear medicine images, with a mathematical phantom. Material and method: we simulated an image of four circular sources of different diameters and an intensity of 255 per pixel. The iterative algorithm was applied with 20 iterations and α = 1. We obtained the maximum and average counts on the entire image and in regions of interest placed on each of the sources. We also extracted count profiles plotted along the diameter of each of the sources. Results: the local convergence depends on the size of the source studied: the smaller the source, the greater the number of iterations required. It also depends on the information extracted: the use of average counts provides more homogeneous results than the maximum count. There is a significant improvement in image contrast. Conclusion: this study showed the possibility of qualitative and quantitative improvement in applying the two-dimensional iterative Van Cittert method to images of simple geometry. (author)
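The Van Cittert iteration assessed here is f_{k+1} = f_k + α(g − h∗f_k), where g is the measured image and h the point spread function. A one-dimensional numpy sketch with a synthetic source and Gaussian PSF (the study itself works on 2D images; phantom sizes below are arbitrary):

```python
import numpy as np

def van_cittert(blurred, psf, iterations=20, alpha=1.0):
    """Iterative Van Cittert deconvolution: f_{k+1} = f_k + alpha*(g - h*f_k).

    g is the measured (blurred) signal and h the point spread function;
    the same scheme in 2D underlies the partial-volume correction assessed
    in the study (20 iterations, alpha = 1).
    """
    estimate = blurred.copy()
    for _ in range(iterations):
        reblurred = np.convolve(estimate, psf, mode="same")
        estimate = estimate + alpha * (blurred - reblurred)
    return estimate

# 1D phantom: a small rectangular "source" blurred by a normalised Gaussian PSF.
x = np.arange(128)
truth = ((x >= 60) & (x < 67)).astype(float)
psf = np.exp(-0.5 * (np.arange(-10, 11) / 3.0) ** 2)
psf /= psf.sum()
blurred = np.convolve(truth, psf, mode="same")
restored = van_cittert(blurred, psf, iterations=20, alpha=1.0)

# Contrast (peak height) recovers toward the true value of 1.0.
print(restored.max() > blurred.max())  # True
```

Consistent with the abstract's finding, smaller sources start from a lower recovered peak and therefore need more iterations to converge than large ones.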

  18. Partial versus complete fundoplication for the correction of pediatric GERD: a systematic review and meta-analysis.

    Directory of Open Access Journals (Sweden)

    Peter Glen

    There is no consensus as to what extent of "wrap" is required in a fundoplication for correction of gastroesophageal reflux disease (GERD). The objective was to evaluate whether a complete (360 degree) or partial fundoplication gives better control of GERD. A systematic search of MEDLINE and Scopus identified interventional and observational studies of fundoplication in children. Screening identified those comparing techniques. The primary outcome was recurrence of GERD following surgery. Dysphagia and complications were secondary outcomes of interest. Meta-analysis was performed when appropriate. Study quality was assessed using the Cochrane Risk of Bias Tool. 2289 abstracts were screened, yielding 2 randomized controlled trials (RCTs) and 12 retrospective cohort studies. The RCTs were pooled. There was no difference in surgical success between partial and complete fundoplication, OR 1.33 [0.67, 2.66]. In the 12 cohort studies, 3 (25%) used an objective assessment of the surgery, one of which showed improved outcomes with complete fundoplication. Twenty-five different complications were reported; the most common were dysphagia and gas-bloat syndrome. Overall study quality was poor. The comparison of partial fundoplication with complete fundoplication warrants further study. The evidence does not demonstrate superiority of one technique. The lack of high-quality RCTs and the methodological heterogeneity of observational studies limits a powerful meta-analysis.

  19. Different partial volume correction methods lead to different conclusions: An (18)F-FDG-PET study of aging.

    Science.gov (United States)

    Greve, Douglas N; Salat, David H; Bowen, Spencer L; Izquierdo-Garcia, David; Schultz, Aaron P; Catana, Ciprian; Becker, J Alex; Svarer, Claus; Knudsen, Gitte M; Sperling, Reisa A; Johnson, Keith A

    2016-05-15

    A cross-sectional group study of the effects of aging on brain metabolism as measured with (18)F-FDG-PET was performed using several different partial volume correction (PVC) methods: no correction (NoPVC), Meltzer (MZ), Müller-Gärtner (MG), and the symmetric geometric transfer matrix (SGTM) using 99 subjects aged 65-87 years from the Harvard Aging Brain study. Sensitivity to parameter selection was tested for MZ and MG. The various methods and parameter settings resulted in an extremely wide range of conclusions as to the effects of age on metabolism, from almost no changes to virtually all of cortical regions showing a decrease with age. Simulations showed that NoPVC had significant bias that made the age effect on metabolism appear to be much larger and more significant than it is. MZ was found to be the same as NoPVC for liberal brain masks; for conservative brain masks, MZ showed few areas correlated with age. MG and SGTM were found to be similar; however, MG was sensitive to a thresholding parameter that can result in data loss. CSF uptake was surprisingly high at about 15% of that in gray matter. The exclusion of CSF from SGTM and MG models, which is almost universally done, caused a substantial loss in the power to detect age-related changes. This diversity of results reflects the literature on the metabolism of aging and suggests that extreme care should be taken when applying PVC or interpreting results that have been corrected for partial volume effects. Using the SGTM, significant age-related changes of about 7% per decade were found in frontal and cingulate cortices as well as primary visual and insular cortices. Copyright © 2016 Elsevier Inc. All rights reserved.
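The (S)GTM approach can be illustrated in one dimension, assuming a perfectly known PSF and perfect segmentation (both idealisations; the phantom, region layout, and activity values below are hypothetical):

```python
import numpy as np

def gaussian_blur(signal, sigma):
    """Circular Gaussian smoothing standing in for the scanner PSF."""
    n = len(signal)
    freq = np.fft.rfftfreq(n)
    kernel_ft = np.exp(-2 * (np.pi * freq * sigma) ** 2)
    return np.fft.irfft(np.fft.rfft(signal) * kernel_ft, n)

def gtm_correct(image, masks, sigma):
    """Geometric-transfer-matrix-style regional PVC in 1D.

    W[i, j] = mean over region i of the blurred indicator of region j;
    solving W @ t = observed_means recovers the true regional activities.
    """
    blurred_masks = [gaussian_blur(m.astype(float), sigma) for m in masks]
    W = np.array([[bm[m].mean() for bm in blurred_masks] for m in masks])
    observed = np.array([image[m].mean() for m in masks])
    return np.linalg.solve(W, observed)

# 1D phantom: background plus two adjacent regions with activities 4.0 and 1.0.
n = 200
labels = np.zeros(n, dtype=int)
labels[80:100] = 1
labels[100:160] = 2
truth = np.array([0.0, 4.0, 1.0])[labels]
observed_img = gaussian_blur(truth, sigma=4.0)

masks = [labels == k for k in (0, 1, 2)]
corrected = gtm_correct(observed_img, masks, sigma=4.0)
print(np.allclose(corrected, [0.0, 4.0, 1.0], atol=1e-6))  # True
```

Because the toy blur is exactly linear and the segmentation exact, recovery is perfect here; the study's point is precisely that real results hinge on choices such as whether CSF gets its own compartment in W.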

  20. Different Partial Volume Correction Methods Lead to Different Conclusions: an 18F-FDG PET Study of Aging

    Science.gov (United States)

    Greve, Douglas N.; Salat, David H.; Bowen, Spencer L.; Izquierdo-Garcia, David; Schultz, Aaron P.; Catana, Ciprian; Becker, J. Alex; Svarer, Claus; Knudsen, Gitte; Sperling, Reisa A.; Johnson, Keith A.

    2016-01-01

    A cross-sectional group study of the effects of aging on brain metabolism as measured with 18F-FDG PET was performed using several different partial volume correction (PVC) methods: no correction (NoPVC), Meltzer (MZ), Müller-Gärtner (MG), and the symmetric geometric transfer matrix (SGTM) using 99 subjects aged 65-87 from the Harvard Aging Brain study. Sensitivity to parameter selection was tested for MZ and MG. The various methods and parameter settings resulted in an extremely wide range of conclusions as to the effects of age on metabolism, from almost no changes to virtually all of cortical regions showing a decrease with age. Simulations showed that NoPVC had significant bias that made the age effect on metabolism appear to be much larger and more significant than it is. MZ was found to be the same as NoPVC for liberal brain masks; for conservative brain masks, MZ showed few areas correlated with age. MG and SGTM were found to be similar; however, MG was sensitive to a thresholding parameter that can result in data loss. CSF uptake was surprisingly high at about 15% of that in gray matter. Exclusion of CSF from SGTM and MG models, which is almost universally done, caused a substantial loss in the power to detect age-related changes. This diversity of results reflects the literature on the metabolism of aging and suggests that extreme care should be taken when applying PVC or interpreting results that have been corrected for partial volume effects. Using the SGTM, significant age-related changes of about 7% per decade were found in frontal and cingulate cortices as well as primary visual and insular cortices. PMID:26915497

  1. Partial volume effect estimation and correction in the aortic vascular wall in PET imaging

    International Nuclear Information System (INIS)

    Burg, S; Le Guludec, D; Dupas, A; Stute, S; Dieudonné, A; Huet, P; Buvat, I

    2013-01-01

We evaluated the impact of partial volume effect (PVE) in the assessment of arterial diseases with 18F-FDG PET. An anthropomorphic digital phantom enabling the modeling of aorta-related diseases such as atherosclerosis and arteritis was used. Based on this phantom, we performed GATE Monte Carlo simulations to produce realistic PET images with a known organ segmentation and ground-truth activity values. Images corresponding to 15 different activity-concentration ratios between the aortic wall and the blood and to 7 different wall thicknesses were generated. Using the PET images, we compared the theoretical wall-to-blood activity-concentration ratios (WBRs) with the measured WBRs obtained with five measurement methods: (1) measurement made by a physician (Expert), (2) automated measurement intended to mimic the physician's measurements (Max), (3) simple correction based on a recovery coefficient (Max-RC), (4) measurement based on an ideal VOI segmentation (Mean-VOI) and (5) measurement corrected for PVE using an ideal geometric transfer matrix (GTM) method. We found that Mean-VOI WBR values were strongly affected by PVE. WBRs obtained by the physician measurement, by the Max method and by the Max-RC method were more accurate than those obtained with the Mean-VOI approach. However, Expert, Max and Max-RC WBRs strongly depended on the wall thickness. Only the GTM-corrected WBRs did not depend on the wall thickness. Using the GTM method, we obtained more reproducible ratio values that could be compared across wall thicknesses. Yet the feasibility of implementing a GTM-like method on real data remains to be studied.

  2. An MGF-based unified framework to determine the joint statistics of partial sums of ordered i.n.d. random variables

    KAUST Repository

    Nam, Sungsik; Yang, Hongchuan; Alouini, Mohamed-Slim; Kim, Dongin

    2014-01-01

    framework to determine the joint statistics of partial sums of ordered i.n.d. RVs. Our mathematical formalism is illustrated with an application on the exact performance analysis of the capture probability of generalized selection combining (GSC)-based RAKE

  3. Fabrication of a Customized Ball Abutment to Correct a Nonparallel Implant Abutment for a Mandibular Implant-Supported Removable Partial Prosthesis: A Case Report

    Directory of Open Access Journals (Sweden)

    Hossein Dasht

    2017-12-01

Full Text Available Introduction: While using an implant-supported removable partial prosthesis, the implant abutments should be parallel to one another along the path of insertion. If the implants and their attachments are placed vertically on a similar occlusal plane, not only is the retention improved, but the prosthesis will also be maintained for a longer period. Case Report: A 65-year-old male patient was referred to the School of Dentistry in Mashhad, Iran, with complaints of discomfort with his mandibular removable partial denture. Due to the lack of parallelism of the supporting implants, prefabricated ball abutments could not be used. As a result, a customized ball abutment was fabricated in order to correct the non-parallelism of the implants. Conclusion: Using UCLA abutments could be a cost-efficient approach for the correction of misaligned implant abutments in implant-supported overdentures.

  4. Maximally Localized States and Quantum Corrections of Black Hole Thermodynamics in the Framework of a New Generalized Uncertainty Principle

    International Nuclear Information System (INIS)

    Zhang, Shao-Jun; Miao, Yan-Gang; Zhao, Ying-Jie

    2015-01-01

As a generalized uncertainty principle (GUP) leads to the effects of the minimal length of the order of the Planck scale and UV/IR mixing, some significant physical concepts and quantities are modified or corrected correspondingly. On the one hand, we derive the maximally localized states—the physical states displaying the minimal length uncertainty associated with a new GUP proposed in our previous work. On the other hand, in the framework of this new GUP we calculate quantum corrections to the thermodynamic quantities of the Schwarzschild black hole, such as the Hawking temperature, the entropy, and the heat capacity, and give a remnant mass of the black hole at the end of the evaporation process. Moreover, we compare our results with those obtained in the frameworks of several other GUPs. In particular, we observe a significant difference between the situations with and without the consideration of the UV/IR mixing effect in the quantum corrections to the evaporation rate and the decay time. That is, the decay time can be greatly prolonged in the former case, which implies that the quantum correction from the UV/IR mixing effect may give rise to a radical rather than a tiny influence on the Hawking radiation.

  5. Effect of partial volume correction on muscarinic cholinergic receptor imaging with single-photon emission tomography in patients with temporal lobe epilepsy

    International Nuclear Information System (INIS)

    Weckesser, M.; Ziemons, K.; Griessmeier, M.; Sonnenberg, F.; Langen, K.J.; Mueller-Gaertner, H.W.; Hufnagel, A.; Elger, C.E.; Hacklaender, T.; Holschbach, M.

    1997-01-01

Animal experiments and preliminary results in humans have indicated alterations of hippocampal muscarinic acetylcholine receptors (mAChR) in temporal lobe epilepsy. Patients with temporal lobe epilepsy often present with a reduction in hippocampal volume. The aim of this study was to investigate the influence of hippocampal atrophy on the quantification of mAChR with single-photon emission tomography (SPET) in patients with temporal lobe epilepsy. Cerebral uptake of the muscarinic cholinergic antagonist [123I]4-iododexetimide (IDex) was investigated by SPET in patients suffering from temporal lobe epilepsy of unilateral (n=6) or predominantly unilateral (n=1) onset. Regions of interest were drawn on co-registered magnetic resonance images. Hippocampal volume was determined in these regions and was used to correct the SPET results for partial volume effects. A ratio of hippocampal IDex binding on the affected side to that on the unaffected side was used to detect changes in muscarinic cholinergic receptor density. Before partial volume correction, a decrease in hippocampal IDex binding on the focus side was found in each patient. After partial volume correction, no convincing differences remained. Our results indicate that the reduction in hippocampal IDex binding in patients with epilepsy is due to a decrease in hippocampal volume rather than to a decrease in receptor concentration.

  6. An Improved Generalized Predictive Control in a Robust Dynamic Partial Least Square Framework

    Directory of Open Access Journals (Sweden)

    Jin Xin

    2015-01-01

To tackle the sensitivity to outliers in system identification, a new robust dynamic partial least squares (PLS) model based on an outlier detection method is proposed in this paper. An improved radial basis function network (RBFN) is adopted to construct the predictive model from the input and output datasets, and a hidden Markov model (HMM) is applied to detect the outliers. After the outliers are removed, a more robust dynamic PLS model is obtained. In addition, an improved generalized predictive control (GPC) with tuning weights under the dynamic PLS framework is proposed to deal with the interaction caused by model mismatch. The results of two simulations demonstrate the effectiveness of the proposed method.

  7. A comparison of five partial volume correction methods for Tau and Amyloid PET imaging with [18F]THK5351 and [11C]PIB.

    Science.gov (United States)

    Shidahara, Miho; Thomas, Benjamin A; Okamura, Nobuyuki; Ibaraki, Masanobu; Matsubara, Keisuke; Oyama, Senri; Ishikawa, Yoichi; Watanuki, Shoichi; Iwata, Ren; Furumoto, Shozo; Tashiro, Manabu; Yanai, Kazuhiko; Gonda, Kohsuke; Watabe, Hiroshi

    2017-08-01

Many algorithms have been proposed to suppress the partial volume effect (PVE) in brain PET; however, each method has different properties owing to its assumptions and algorithms. The aim of this study was to investigate the differences among partial volume correction (PVC) methods for tau and amyloid PET studies. We investigated two of the most commonly used PVC methods, Müller-Gärtner (MG) and geometric transfer matrix (GTM), as well as three other methods, for clinical tau and amyloid PET imaging. PET studies of one healthy control (HC) and one Alzheimer's disease (AD) patient were performed with both [18F]THK5351 and [11C]PIB using an Eminence STARGATE scanner (Shimadzu Inc., Kyoto, Japan). All PET images were corrected for PVE by the MG, GTM, Labbé (LABBE), regional voxel-based (RBV), and iterative Yang (IY) methods, with segmented or parcellated anatomical information processed by FreeSurfer, derived from individual MR images. The PVC results of the 5 algorithms were compared with the uncorrected data. In regions of high uptake of [18F]THK5351 and [11C]PIB, different PVCs yielded different SUVRs. The degree of difference between PVE-uncorrected and -corrected data depends not only on the PVC algorithm but also on the tracer and the subject's condition. The presented PVC methods are straightforward to implement, but the corrected images require careful interpretation, as different methods result in different levels of recovery.

  8. [Removable partial dentures. Oral functions and types

    OpenAIRE

    Creugers, N.H.J.; Baat, C. de

    2009-01-01

A removable partial denture enables the restoration or improvement of 4 oral functions: aesthetics, mandibular stability, mastication, and speech. However, wearing a removable partial denture should not cause oral comfort to deteriorate. There are 3 types of removable partial dentures: acrylic tissue-supported dentures, dentures with cast metal frameworks and dentures with cast metal frameworks and (semi)precision attachments. Interrupted tooth arches, free-ending tooth arches, and a combinatio...

  9. Partial correction of a severe molecular defect in hemophilia A, because of errors during expression of the factor VIII gene

    Energy Technology Data Exchange (ETDEWEB)

Young, M.; Antonarakis, S.E. (Univ. of Geneva, Switzerland); Inaba, Hiroshi (Tokyo Medical College, Japan); and others

    1997-03-01

Although the molecular defect in patients in a Japanese family with mild to moderately severe hemophilia A was a deletion of a single nucleotide T within an A₈TA₂ sequence of exon 14 of the factor VIII gene, the severity of the clinical phenotype did not correspond to that expected of a frameshift mutation. A small amount of functional factor VIII protein was detected in the patient's plasma. Analysis of DNA and RNA molecules from normal and affected individuals and in vitro transcription/translation suggested a partial correction of the molecular defect, because of the following: (i) DNA replication/RNA transcription errors resulting in restoration of the reading frame and/or (ii) "ribosomal frameshifting" resulting in the production of normal factor VIII polypeptide and, thus, in a milder than expected hemophilia A. All of these mechanisms probably were promoted by the longer run of adenines, A₁₀ instead of A₈TA₂, after the delT. Errors in the complex steps of gene expression therefore may partially correct a severe frameshift defect and ameliorate an expected severe phenotype.

  10. Cerebral blood flow in temporal lobe epilepsy: a partial volume correction study

    International Nuclear Information System (INIS)

    Giovacchini, Giampiero; Bonwetsch, Robert; Theodore, William H.; Herscovitch, Peter; Carson, Richard E.

    2007-01-01

Previous studies in temporal lobe epilepsy (TLE) have shown that, owing to brain atrophy, positron emission tomography (PET) can overestimate deficits in measures of cerebral function such as glucose metabolism (CMRglu) and neuroreceptor binding. The magnitude of this effect on cerebral blood flow (CBF) is unexplored. The aim of this study was to assess CBF deficits in TLE before and after magnetic resonance imaging-based partial volume correction (PVC). Absolute values of CBF for 21 TLE patients and nine controls were computed before and after PVC. In TLE patients, quantitative CMRglu measurements were also obtained. Asymmetry indices (AIs) were significantly greater for CMRglu than for CBF in middle and inferior temporal cortex, fusiform gyrus and hippocampus, both before and after PVC. A significant positive relationship between disease duration and AIs for CMRglu, but not CBF, was detected in hippocampus and amygdala before, but not after, PVC. PVC should be used for PET CBF measurements in patients with TLE. Reduced blood flow, in contrast to glucose metabolism, is mainly due to structural changes.

  11. An MGF-based unified framework to determine the joint statistics of partial sums of ordered random variables

    KAUST Repository

    Nam, Sungsik

    2010-11-01

Order statistics find applications in various areas of communications and signal processing. In this paper, we introduce a unified analytical framework to determine the joint statistics of partial sums of ordered random variables (RVs). With the proposed approach, we can systematically derive the joint statistics of any partial sums of ordered statistics, in terms of the moment generating function (MGF) and the probability density function (PDF). Our MGF-based approach applies not only when all the K ordered RVs are involved but also when only the Ks (Ks < K) best RVs are considered. In addition, we present closed-form expressions for the exponential RV special case. These results apply to the performance analysis of various wireless communication systems over fading channels. © 2006 IEEE.
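The exponential special case mentioned above admits a compact closed form via the classical Rényi (spacings) representation. The following is an illustrative sketch in our own notation for i.i.d. Exp(λ) variates only; the paper's i.n.d. machinery is more general:

```latex
% Order the i.i.d. samples as X_{(1)} \ge \dots \ge X_{(K)}, with X_i \sim \mathrm{Exp}(\lambda).
% R\'enyi: W_j = j\,(X_{(j)} - X_{(j+1)}) for j < K and W_K = K X_{(K)} are i.i.d. Exp(\lambda),
% so X_{(k)} = \sum_{j=k}^{K} W_j / j. The GSC partial sum of the K_s largest terms is then
\begin{align}
  Y &= \sum_{k=1}^{K_s} X_{(k)}
     = \sum_{j=1}^{K_s} W_j \;+\; K_s \sum_{j=K_s+1}^{K} \frac{W_j}{j}, \\
  M_Y(s) &= \mathbb{E}\!\left[e^{sY}\right]
     = \left(1 - \frac{s}{\lambda}\right)^{-K_s}
       \prod_{j=K_s+1}^{K} \left(1 - \frac{K_s\, s}{j\,\lambda}\right)^{-1}.
\end{align}
```

The MGF factorizes because the weighted spacings are independent; each factor is the MGF of an independent exponential scaled by its coefficient.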

  12. Remarkably enhanced gas separation by partial self-conversion of a laminated membrane to metal-organic frameworks.

    Science.gov (United States)

    Liu, Yi; Pan, Jia Hong; Wang, Nanyi; Steinbach, Frank; Liu, Xinlei; Caro, Jürgen

    2015-03-02

    Separation methods based on 2D interlayer galleries are currently gaining widespread attention. The potential of such galleries as high-performance gas-separation membranes is however still rarely explored. Besides, it is well recognized that gas permeance and separation factor are often inversely correlated in membrane-based gas separation. Therefore, breaking this trade-off becomes highly desirable. Here, the gas-separation performance of a 2D laminated membrane was improved by its partial self-conversion to metal-organic frameworks. A ZIF-8-ZnAl-NO3 layered double hydroxide (LDH) composite membrane was thus successfully prepared in one step by partial conversion of the ZnAl-NO3 LDH membrane, ultimately leading to a remarkably enhanced H2 /CH4 separation factor and H2 permeance. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. [A preliminary study on the forming quality of titanium alloy removable partial denture frameworks fabricated by selective laser melting].

    Science.gov (United States)

    Liu, Y F; Yu, H; Wang, W N; Gao, B

    2017-06-09

Objective: To evaluate the processing accuracy, internal quality and fit of titanium alloy frameworks of removable partial dentures (RPD) fabricated by the selective laser melting (SLM) technique, and to provide a reference for clinical application. Methods: The plaster model of one clinical patient was used as the working model; it was scanned and reconstructed into a digital working model, and an RPD framework was designed on it. Eight corresponding RPD frameworks were then fabricated using the SLM technique. A three-dimensional (3D) optical scanner was used to obtain the 3D data of the frameworks, and the data were compared with the original computer aided design (CAD) model to evaluate processing precision. Traditionally cast pure titanium frameworks were used as the control group, and the internal quality was analyzed by X-ray examination. Finally, the fit of the frameworks was examined on the plaster model. Results: The overall average deviation of the titanium alloy RPD frameworks fabricated by SLM technology was (0.089±0.076) mm, and the root mean square error was 0.103 mm. No visible pores, cracks or other internal defects were detected in the frameworks. The frameworks fit the plaster model completely, and their tissue surfaces fitted the plaster model well; there was no obvious movement. Conclusions: The titanium alloy RPD frameworks fabricated by SLM technology are of good quality.

  14. Flexible Thermoplastic Denture Base Materials for Aesthetical Removable Partial Denture Framework

    OpenAIRE

    Singh, Kunwarjeet; Aeran, Himanshu; Kumar, Narender; Gupta, Nidhi

    2013-01-01

    Conventional fixed partial dentures, implant supported Fixed Partial Dentures (FDPs) and removable partial dentures are the most common treatment modalities for the aesthetic and functional rehabilitation of partially edentulous patients. Although implants and FDP have certain advantages over removable partial dentures, in some cases, removable partial dentures may be the only choice which is available. Removable cast partial dentures are used as definitive removable prostheses when indicated...

  15. Flexible thermoplastic denture base materials for aesthetical removable partial denture framework.

    Science.gov (United States)

    Singh, Kunwarjeet; Aeran, Himanshu; Kumar, Narender; Gupta, Nidhi

    2013-10-01

Conventional fixed partial dentures, implant-supported fixed partial dentures (FDPs) and removable partial dentures are the most common treatment modalities for the aesthetic and functional rehabilitation of partially edentulous patients. Although implants and FDPs have certain advantages over removable partial dentures, in some cases removable partial dentures may be the only available choice. Removable cast partial dentures are used as definitive removable prostheses when indicated, but the location of clasps may affect aesthetics. So, when the patient is concerned about aesthetics, flexible partial dentures, which are aesthetically superior to flipper and cast partial dentures, may be considered. For the success of a flexible removable partial denture, however, proper diagnosis, treatment planning and insertion technique are very important; these have been thoroughly described in this article.

  16. Apparent CBF decrease with normal aging due to partial volume effects: MR-based partial volume correction on CBF SPECT.

    Science.gov (United States)

    Inoue, Kentaro; Ito, Hiroshi; Goto, Ryoi; Nakagawa, Manabu; Kinomura, Shigeo; Sato, Tachio; Sato, Kazunori; Fukuda, Hiroshi

    2005-06-01

    Several studies using single photon emission tomography (SPECT) have shown changes in cerebral blood flow (CBF) with age, which were associated with partial volume effects by some authors. Some studies have also demonstrated gender-related differences in CBF. The present study aimed to examine age and gender effects on CBF SPECT images obtained using the 99mTc-ethyl cysteinate dimer and a SPECT scanner, before and after partial volume correction (PVC) using magnetic resonance (MR) imaging. Forty-four healthy subjects (29 males and 15 females; age range, 27-64 y; mean age, 50.0 +/- 9.8 y) participated. Each MR image was segmented to yield grey and white matter images and coregistered to a corresponding SPECT image, followed by convolution to approximate the SPECT spatial resolution. PVC-SPECT images were produced using the convoluted grey matter MR (GM-MR) and white matter MR images. The age and gender effects were assessed using SPM99. Decreases with age were detected in the anterolateral prefrontal cortex and in areas along the lateral sulcus and the lateral ventricle, bilaterally, in the GM-MR images and the SPECT images. In the PVC-SPECT images, decreases in CBF in the lateral prefrontal cortex lost their statistical significance. Decreases in CBF with age found along the lateral sulcus and the lateral ventricle, on the other hand, remained statistically significant, but observation of the spatially normalized MR images suggests that these findings are associated with the dilatation of the lateral sulcus and lateral ventricle, which was not completely compensated for by the spatial normalization procedure. Our present study demonstrated that age effects on CBF in healthy subjects could reflect morphological differences with age in grey matter.

  17. PETPVC: a toolbox for performing partial volume correction techniques in positron emission tomography

    Science.gov (United States)

    Thomas, Benjamin A.; Cuplov, Vesna; Bousse, Alexandre; Mendes, Adriana; Thielemans, Kris; Hutton, Brian F.; Erlandsson, Kjell

    2016-11-01

Positron emission tomography (PET) images are degraded by a phenomenon known as the partial volume effect (PVE). Approaches have been developed to reduce PVEs, typically through the utilisation of structural information provided by other imaging modalities such as MRI or CT. These methods, known as partial volume correction (PVC) techniques, reduce PVEs by compensating for the effects of the scanner resolution, thereby improving the quantitative accuracy. The PETPVC toolbox described in this paper comprises a suite of methods, both classic and more recent approaches, for the purposes of applying PVC to PET data. Eight core PVC techniques are available. These core methods can be combined to create a total of 22 different PVC techniques. Simulated brain PET data are used to demonstrate the utility of the toolbox in idealised conditions, the effects of applying PVC with mismatched point-spread function (PSF) estimates and the potential of novel hybrid PVC methods to improve the quantification of lesions. All anatomy-based PVC techniques achieve complete recovery of the PET signal in cortical grey matter (GM) when performed in idealised conditions. Applying deconvolution-based approaches results in incomplete recovery due to premature termination of the iterative process. PVC techniques are sensitive to PSF mismatch, causing a bias of up to 16.7% in GM recovery when over-estimating the PSF by 3 mm. The recovery of both GM and a simulated lesion was improved by combining two PVC techniques together. The PETPVC toolbox has been written in C++, supports Windows, Mac and Linux operating systems, is open-source and publicly available.
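The deconvolution-based PVC approaches mentioned in the abstract can be sketched in one dimension. This is a generic Van Cittert iteration under an assumed Gaussian PSF, not PETPVC's actual implementation; all names, sizes and parameters below are our own illustrative choices:

```python
import numpy as np

def gaussian_kernel(sigma, radius=8):
    """Normalized 1D Gaussian kernel standing in for the scanner PSF."""
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def blur(f, kernel):
    """Forward model: convolve the signal with the PSF."""
    return np.convolve(f, kernel, mode="same")

def van_cittert(observed, kernel, alpha=0.5, n_iter=30):
    """Iteratively add back the residual between the data and the re-blurred estimate."""
    f = observed.copy()
    for _ in range(n_iter):
        f = f + alpha * (observed - blur(f, kernel))
    return f

true_signal = np.zeros(64)
true_signal[28:36] = 1.0                 # a small "hot" structure
psf = gaussian_kernel(sigma=2.0)
observed = blur(true_signal, psf)        # PVE-degraded measurement
recovered = van_cittert(observed, psf)   # partially deconvolved estimate
```

Stopping after a fixed number of iterations is exactly the "premature termination" the abstract refers to: the estimate moves toward the true signal but never fully recovers the highest spatial frequencies suppressed by the PSF.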

  18. Brain extraction in partial volumes T2*@7T by using a quasi-anatomic segmentation with bias field correction.

    Science.gov (United States)

    Valente, João; Vieira, Pedro M; Couto, Carlos; Lima, Carlos S

    2018-02-01

Poor brain extraction in magnetic resonance imaging (MRI) has negative consequences for several types of post-extraction processing, such as tissue segmentation and related statistical measures or pattern recognition algorithms. Current state-of-the-art algorithms for brain extraction work on T1- and T2-weighted images and are not adequate for non-whole-brain images such as T2*FLASH@7T partial volumes. This paper proposes two new methods that work directly on T2*FLASH@7T partial volumes. The first is an improvement of the semi-automatic threshold-with-morphology approach adapted to incomplete volumes. The second method uses an improved version of a current implementation of the fuzzy c-means algorithm with bias correction for brain segmentation. Under high inhomogeneity conditions the performance of the first method degrades, requiring user intervention, which is unacceptable. The second method performed well for all volumes, being entirely automatic. State-of-the-art algorithms for brain extraction are mainly semi-automatic, requiring a correct initialization by the user and knowledge of the software. These methods cannot deal with partial volumes and/or need information from an atlas, which is not available in T2*FLASH@7T. Also, combined volumes suffer from manipulations such as re-sampling, which significantly deteriorates voxel intensity structures, making segmentation tasks difficult. The proposed method can overcome all these difficulties, reaching good results for brain extraction using only T2*FLASH@7T volumes. The development of this work will lead to an improvement of automatic brain lesion segmentation in T2*FLASH@7T volumes, becoming more important when lesions such as cortical multiple sclerosis lesions need to be detected. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. Towards a universal method for calculating hydration free energies: a 3D reference interaction site model with partial molar volume correction

    International Nuclear Information System (INIS)

    Palmer, David S; Frolov, Andrey I; Ratkova, Ekaterina L; Fedorov, Maxim V

    2010-01-01

We report a simple universal method to systematically improve the accuracy of hydration free energies calculated using an integral equation theory of molecular liquids, the 3D reference interaction site model. A strong linear correlation is observed between the difference of the experimental and (uncorrected) calculated hydration free energies and the calculated partial molar volume for a data set of 185 neutral organic molecules from different chemical classes. By using the partial molar volume as a linear empirical correction to the calculated hydration free energy, we obtain predictions of hydration free energies in excellent agreement with experiment (R = 0.94, σ = 0.99 kcal mol⁻¹ for a test set of 120 organic molecules). (fast track communication)
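The linear empirical correction described above amounts to regressing the error of the raw calculation on the partial molar volume and subtracting the fitted trend. A minimal sketch with synthetic numbers (the coefficients, volumes and free energies below are invented, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(0)
V = rng.uniform(50.0, 300.0, size=40)               # partial molar volumes (made up)
dG_exp = -5.0 + 0.01 * V + rng.normal(0, 0.3, 40)   # "experimental" hydration free energies
dG_calc = dG_exp + 0.03 * V + 1.0                   # raw values with a volume-dependent bias

# Fit error = a*V + b on a training set, then correct: dG_corr = dG_calc - (a*V + b).
a, b = np.polyfit(V, dG_calc - dG_exp, 1)
dG_corr = dG_calc - (a * V + b)
```

In practice the fit would be done on a training set and applied to held-out molecules, as with the paper's 185-molecule training and 120-molecule test sets.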

  20. Towards a universal method for calculating hydration free energies: a 3D reference interaction site model with partial molar volume correction.

    Science.gov (United States)

    Palmer, David S; Frolov, Andrey I; Ratkova, Ekaterina L; Fedorov, Maxim V

    2010-12-15

We report a simple universal method to systematically improve the accuracy of hydration free energies calculated using an integral equation theory of molecular liquids, the 3D reference interaction site model. A strong linear correlation is observed between the difference of the experimental and (uncorrected) calculated hydration free energies and the calculated partial molar volume for a data set of 185 neutral organic molecules from different chemical classes. By using the partial molar volume as a linear empirical correction to the calculated hydration free energy, we obtain predictions of hydration free energies in excellent agreement with experiment (R = 0.94, σ = 0.99 kcal mol⁻¹ for a test set of 120 organic molecules).

  1. [Removable partial dentures. Oral functions and types].

    Science.gov (United States)

    Creugers, N H J; de Baat, C

    2009-11-01

A removable partial denture enables the restoration or improvement of 4 oral functions: aesthetics, mandibular stability, mastication, and speech. However, wearing a removable partial denture should not cause oral comfort to deteriorate. There are 3 types of removable partial dentures: acrylic tissue-supported dentures, dentures with cast metal frameworks and dentures with cast metal frameworks and (semi)precision attachments. Interrupted tooth arches, free-ending tooth arches, and a combination of interrupted as well as free-ending tooth arches can be restored using these dentures. Well-known disadvantages of removable partial dentures are problematic oral hygiene, a negative influence on the remaining dentition and limited oral comfort. Due to the advanced possibilities of fixed tooth- or implant-supported partial dentures, whether or not free-ending, or tooth- as well as implant-supported partial dentures, the indication for removable partial dentures is restricted. Nevertheless, for the time being the demand for removable partial dentures is expected to continue.

  2. Recurrent Partial Words

    Directory of Open Access Journals (Sweden)

    Francine Blanchet-Sadri

    2011-08-01

Partial words are sequences over a finite alphabet that may contain wildcard symbols, called holes, which match or are compatible with all letters; partial words without holes are said to be full words (or simply words). Given an infinite partial word w, the number of distinct full words over the alphabet that are compatible with factors of w of length n, called subwords of w, is a measure of the complexity of infinite partial words, the so-called subword complexity. This measure is of particular interest because we can construct partial words with subword complexities not achievable by full words. In this paper, we consider the notion of recurrence over infinite partial words, that is, we study whether all of the finite subwords of a given infinite partial word appear infinitely often, and we establish connections between subword complexity and recurrence in this more general framework.
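The subword-counting definition above is easy to make concrete for finite prefixes. A small sketch in our own notation, with `?` marking a hole compatible with every letter of a two-letter alphabet (the function name and conventions are ours, not the paper's):

```python
from itertools import product

def subwords(w, n, alphabet="ab"):
    """Distinct full words of length n compatible with some length-n factor of w."""
    full = set()
    for i in range(len(w) - n + 1):
        factor = w[i:i + n]
        # expand each hole to every letter of the alphabet
        choices = [alphabet if c == "?" else c for c in factor]
        full.update("".join(t) for t in product(*choices))
    return full

print(sorted(subwords("a?b", 2)))  # factors "a?" and "?b" -> ['aa', 'ab', 'bb']
```

Note that the single hole already yields 3 distinct length-2 subwords from only 2 factors, which is the mechanism that lets partial words reach subword complexities full words cannot.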

  3. Healthy brain ageing assessed with 18F-FDG PET and age-dependent recovery factors after partial volume effect correction

    Energy Technology Data Exchange (ETDEWEB)

    Bonte, Stijn [IBiTech, Ghent, (Belgium); Ghent University, iMinds - Medical Image and Signal Processing (MEDISIP), Department of Electronics and Information Systems, Ghent (Belgium); University Hospital, Department of Radiology and Nuclear Medicine, Ghent (Belgium); Vandemaele, Pieter; Deblaere, Karel; Goethals, Ingeborg [University Hospital, Department of Radiology and Nuclear Medicine, Ghent (Belgium); Verleden, Stijn; Audenaert, Kurt [University Hospital, Department of Psychiatry, Ghent (Belgium); Holen, Roel van [Ghent University, iMinds - Medical Image and Signal Processing (MEDISIP), Department of Electronics and Information Systems, Ghent (Belgium)

    2017-05-15

The mechanisms of ageing of the healthy brain are not entirely clarified to date. In recent years several authors have tried to elucidate this topic by using 18F-FDG positron emission tomography. However, when correcting for partial volume effects (PVE), divergent results were reported. It is therefore necessary to evaluate these methods in the presence of atrophy due to ageing. In this paper we first evaluate the performance of two PVE correction techniques with a phantom study: the Rousset method and iterative deconvolution. We show that the ability of the latter method to recover the true activity in a small region decreases with increasing age due to brain atrophy. Next, we calculated age-dependent recovery factors to correct for this incomplete recovery. These factors were applied to PVE-corrected 18F-FDG PET scans of healthy subjects for mapping the age-dependent metabolism in the brain. Many regions in the brain show a reduced metabolism with ageing, especially grey matter in the frontal and temporal lobes. An increased metabolism is found in grey matter of the cerebellum and thalamus. Our study resulted in age-dependent recovery factors which can be applied following standard PVE correction methods. Cancelling the effect of atrophy, we found regional changes in 18F-FDG metabolism with ageing. A decreasing trend is found in the frontal and temporal lobes, whereas an increasing metabolism with ageing is observed in the thalamus and cerebellum.

  4. Clinical Fit of Partial Removable Dental Prostheses Based on Alginate or Polyvinyl Siloxane Impressions.

    Science.gov (United States)

    Fokkinga, Wietske A; Witter, Dick J; Bronkhorst, Ewald M; Creugers, Nico H

    The aim of this study was to analyze the clinical fit of metal-frame partial removable dental prostheses (PRDPs) based on custom trays used with alginate or polyvinyl siloxane impression material. Fifth-year students of the Nijmegen Dental School made 25 correct impressions for 23 PRDPs for 21 patients using alginate, and 31 correct impressions for 30 PRDPs for 28 patients using polyvinyl siloxane. Clinical fit of the framework as a whole and of each retainer separately were evaluated by calibrated supervisors during framework try-in before (first evaluation) and after (second evaluation) possible adjustments (score 0 = poor fit, up to score 3 = good fit). Framework fit and fit of the denture base were evaluated at delivery (third evaluation). Finally, postinsertion sessions were evaluated and total number of sessions needed, sore spots, adjustments to the denture base, and reported food-impaction were recorded. No significant differences in clinical fit (of the framework as a whole, for the retainers, or for the denture base) were found between the groups in the three evaluation sessions. Differences were not found for postinsertion sessions with one exception: in the alginate group, four subjects reported food impaction, versus none in the polyvinyl siloxane group. Clinical fit of metal-frame PRDPs based on impressions with custom trays combined with alginate or polyvinyl siloxane was similar.

  5. Unifying framework for multimodal brain MRI segmentation based on Hidden Markov Chains.

    Science.gov (United States)

    Bricq, S; Collet, Ch; Armspach, J P

    2008-12-01

    In the frame of 3D medical imaging, accurate segmentation of multimodal brain MR images is of interest for many brain disorders. However, due to several factors such as noise, imaging artifacts, intrinsic tissue variation and partial volume effects, tissue classification remains a challenging task. In this paper, we present a unifying framework for unsupervised segmentation of multimodal brain MR images including partial volume effect, bias field correction, and information given by a probabilistic atlas. The proposed method takes neighborhood information into account using a Hidden Markov Chain (HMC) model. Due to the limited resolution of imaging devices, voxels may be composed of a mixture of different tissue types; this partial volume effect is included to achieve an accurate segmentation of brain tissues. Instead of assigning each voxel to a single tissue class (i.e., hard classification), we compute the relative amount of each pure tissue class in each voxel (mixture estimation). Further, a bias field estimation step is added to the proposed algorithm to correct intensity inhomogeneities. Furthermore, atlas priors were incorporated using a probabilistic brain atlas containing prior expectations about the spatial localization of different tissue classes. This atlas is considered as a complementary sensor and the proposed method is extended to multimodal brain MRI without any user-tunable parameter (unsupervised algorithm). To validate this new unifying framework, we present experimental results on both synthetic and real brain images, for which the ground truth is available. Comparison with other commonly used techniques demonstrates the accuracy and robustness of this new Markovian segmentation scheme.

  6. Partial volume effect correction in PET using regularized iterative deconvolution with variance control based on local topology

    International Nuclear Information System (INIS)

    Kirov, A S; Schmidtlein, C R; Piao, J Z

    2008-01-01

    Correcting positron emission tomography (PET) images for the partial volume effect (PVE) due to the limited resolution of PET has been a long-standing challenge. Various approaches including incorporation of the system response function in the reconstruction have been previously tested. We present a post-reconstruction PVE correction based on iterative deconvolution using a 3D maximum likelihood expectation-maximization (MLEM) algorithm. To achieve convergence we used a one step late (OSL) regularization procedure based on the assumption of local monotonic behavior of the PET signal following Alenius et al. This technique was further modified to selectively control variance depending on the local topology of the PET image. No prior 'anatomic' information is needed in this approach. An estimate of the noise properties of the image is used instead. The procedure was tested for symmetric and isotropic deconvolution functions with Gaussian shape and full width at half-maximum (FWHM) ranging from 6.31 mm to infinity. The method was applied to simulated and experimental scans of the NEMA NU 2 image quality phantom with the GE Discovery LS PET/CT scanner. The phantom contained uniform activity spheres with diameters ranging from 1 cm to 3.7 cm within a uniform background. The optimal sphere activity to variance ratio was obtained when the deconvolution function was replaced by a step function a few voxels wide. In this case, the deconvolution method converged in ∼3-5 iterations for most points on both the simulated and experimental images. For the 1 cm diameter sphere, the contrast recovery improved from 12% to 36% in the simulated and from 21% to 55% in the experimental data. Recovery coefficients between 80% and 120% were obtained for all larger spheres, except for the 13 mm diameter sphere in the simulated scan (68%). No increase in variance was observed except for a few voxels neighboring strong activity gradients and inside the largest spheres. Testing the method for
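
    The core of such a post-reconstruction approach is MLEM-type iterative deconvolution. A minimal unregularized 1-D Richardson-Lucy sketch (without the OSL regularization or variance control described above) is:

```python
import numpy as np

def richardson_lucy_1d(observed, psf, n_iter=10):
    """Plain 1-D Richardson-Lucy (MLEM-type) deconvolution.
    Illustrative only: no OSL regularization or variance control."""
    observed = np.asarray(observed, dtype=float)
    psf = np.asarray(psf, dtype=float)
    psf = psf / psf.sum()
    psf_mirror = psf[::-1]
    # start from a flat positive estimate
    estimate = np.full_like(observed, observed.mean())
    for _ in range(n_iter):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)
        estimate = estimate * np.convolve(ratio, psf_mirror, mode="same")
    return estimate
```

    On noiseless data the estimate sharpens toward the true activity; on real PET data the variance grows with iterations, which is what the regularization in the paper is designed to control.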

  7. CMS Partial Releases Model, Tools, and Applications. Online and Framework-Light Releases

    CERN Document Server

    Jones, Christopher D; Meschi, Emilio; Shahzad Muzaffar; Andreas Pfeiffer; Ratnikova, Natalia; Sexton-Kennedy, Elizabeth

    2009-01-01

    The CMS Software project CMSSW embraces more than a thousand packages organized in subsystems for analysis, event display, reconstruction, simulation, detector description, data formats, framework, utilities and tools. The release integration process is highly automated by using tools developed or adopted by CMS. Packaging in rpm format is a built-in step in the software build process. For several well-defined applications it is highly desirable to have only a subset of the CMSSW full package bundle. For example, High Level Trigger algorithms that run on the Online farm, and need to be rebuilt in a special way, require no simulation, event display, or analysis packages. Physics analysis applications in Root environment require only a few core libraries and the description of CMS specific data formats. We present a model of CMS Partial Releases, used for preparation of the customized CMS software builds, including description of the tools used, the implementation, and how we deal with technical challenges, suc...

  8. Symmetric geometric transfer matrix partial volume correction for PET imaging: principle, validation and robustness

    Science.gov (United States)

    Sattarivand, Mike; Kusano, Maggie; Poon, Ian; Caldwell, Curtis

    2012-11-01

    Limited spatial resolution of positron emission tomography (PET) often requires partial volume correction (PVC) to improve the accuracy of quantitative PET studies. Conventional region-based PVC methods use co-registered high resolution anatomical images (e.g. computed tomography (CT) or magnetic resonance images) to identify regions of interest. Spill-over between regions is accounted for by calculating regional spread functions (RSFs) in a geometric transfer matrix (GTM) framework. This paper describes a new analytically derived symmetric GTM (sGTM) method that relies on spill-over between RSFs rather than between regions. It is shown that the sGTM is mathematically equivalent to Labbe's method; however it is a region-based method rather than a voxel-based method and it avoids handling large matrices. The sGTM method was validated using two three-dimensional (3D) digital phantoms and one physical phantom. A 3D digital sphere phantom with sphere diameters ranging from 5 to 30 mm and a sphere-to-background uptake ratio of 3-to-1 was used. A 3D digital brain phantom was used with four different anatomical regions and a background region with different activities assigned to each region. A physical sphere phantom with the same geometry and uptake as the digital sphere phantom was manufactured and PET-CT images were acquired. Using these three phantoms, the performance of the sGTM method was assessed against that of the GTM method in terms of accuracy, precision, noise propagation and robustness. The robustness was assessed by applying mis-registration errors and errors in estimates of PET point spread function (PSF). In all three phantoms, the results showed that the sGTM method has accuracy similar to that of the GTM method and within 5%. However, the sGTM method showed better precision and noise propagation than the GTM method, especially for spheres smaller than 13 mm. Moreover, the sGTM method was more robust than the GTM method when mis-registration errors or
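
    For reference, the conventional GTM correction that both methods build on reduces to solving a small linear system: the observed regional means are modeled as a mixing matrix (derived from the RSFs) applied to the true regional means. A sketch with a hypothetical 2-region mixing matrix:

```python
import numpy as np

def gtm_correct(observed_means, gtm):
    """Region-based GTM partial volume correction sketch.

    observed_means : measured mean uptake in each region (length n)
    gtm            : n x n matrix; gtm[i, j] is the fraction of region j's
                     true activity observed in region i (from the RSFs)
    Returns the PVE-corrected regional means t solving  observed = G @ t.
    """
    G = np.asarray(gtm, dtype=float)
    o = np.asarray(observed_means, dtype=float)
    return np.linalg.solve(G, o)

# Hypothetical 2-region example: 80% self-recovery, 10% cross-spill
G = np.array([[0.8, 0.1],
              [0.1, 0.8]])
true_vals = np.array([10.0, 2.0])
observed = G @ true_vals          # what the scanner would measure
print(gtm_correct(observed, G))   # recovers [10.0, 2.0]
```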

  9. Symmetric geometric transfer matrix partial volume correction for PET imaging: principle, validation and robustness

    International Nuclear Information System (INIS)

    Sattarivand, Mike; Caldwell, Curtis; Kusano, Maggie; Poon, Ian

    2012-01-01

    Limited spatial resolution of positron emission tomography (PET) often requires partial volume correction (PVC) to improve the accuracy of quantitative PET studies. Conventional region-based PVC methods use co-registered high resolution anatomical images (e.g. computed tomography (CT) or magnetic resonance images) to identify regions of interest. Spill-over between regions is accounted for by calculating regional spread functions (RSFs) in a geometric transfer matrix (GTM) framework. This paper describes a new analytically derived symmetric GTM (sGTM) method that relies on spill-over between RSFs rather than between regions. It is shown that the sGTM is mathematically equivalent to Labbe's method; however it is a region-based method rather than a voxel-based method and it avoids handling large matrices. The sGTM method was validated using two three-dimensional (3D) digital phantoms and one physical phantom. A 3D digital sphere phantom with sphere diameters ranging from 5 to 30 mm and a sphere-to-background uptake ratio of 3-to-1 was used. A 3D digital brain phantom was used with four different anatomical regions and a background region with different activities assigned to each region. A physical sphere phantom with the same geometry and uptake as the digital sphere phantom was manufactured and PET-CT images were acquired. Using these three phantoms, the performance of the sGTM method was assessed against that of the GTM method in terms of accuracy, precision, noise propagation and robustness. The robustness was assessed by applying mis-registration errors and errors in estimates of PET point spread function (PSF). In all three phantoms, the results showed that the sGTM method has accuracy similar to that of the GTM method and within 5%. However, the sGTM method showed better precision and noise propagation than the GTM method, especially for spheres smaller than 13 mm. Moreover, the sGTM method was more robust than the GTM method when mis-registration errors or

  10. The partial Siberian snake experiment at the Brookhaven AGS

    International Nuclear Information System (INIS)

    Huang, H.; Caussyn, D.D.; Ellison, T.; Jones, B.; Lee, S.Y.; Schwandt, P.; Ahren, L.; Alessi, J.; Bleser, E.J.; Bunce, G.; Cameron, P.; Courant, E.D.; Foelsche, H.W.; Gardner, C.J.; Geller, J.; Lee, Y.Y.; Makdisi, Y.I.; Mane, S.R.; Ratner, L.; Reece, K.; Roser, T.; Skelly, J.F.; Soukas, A.; Tepikian, S.; Thern, R.E.; van Asselt, W.; Spinka, H.; Teng, L.; Underwood, D.G.; Yokosawa, A.; Wienands, U.; Bharadwaj, V.; Hsueh, S.; Hiramatsu, S.; Mori, Y.; Sato, H.; Yokoya, K.

    1992-01-01

    We are building a 4.7 Tesla-meter room temperature solenoid to be installed in a 10-foot long AGS straight section. This experiment will test the idea of using a partial snake to correct all depolarizing imperfection resonances and also test the feasibility of a betatron tune jump for correcting intrinsic resonances in the presence of a partial snake.

  11. Partial Tmem106b reduction does not correct abnormalities due to progranulin haploinsufficiency.

    Science.gov (United States)

    Arrant, Andrew E; Nicholson, Alexandra M; Zhou, Xiaolai; Rademakers, Rosa; Roberson, Erik D

    2018-06-22

    Loss of function mutations in progranulin (GRN) are a major cause of frontotemporal dementia (FTD). Progranulin is a secreted glycoprotein that localizes to lysosomes and is critical for proper lysosomal function. Heterozygous GRN mutation carriers develop FTD with TDP-43 pathology and exhibit signs of lysosomal dysfunction in the brain, with increased levels of lysosomal proteins and lipofuscin accumulation. Homozygous GRN mutation carriers develop neuronal ceroid lipofuscinosis (NCL), an earlier-onset lysosomal storage disorder caused by severe lysosomal dysfunction. Multiple genome-wide association studies have shown that risk of FTD in GRN mutation carriers is modified by polymorphisms in TMEM106B, which encodes a lysosomal membrane protein. Risk alleles of TMEM106B may increase TMEM106B levels through a variety of mechanisms. Brains from FTD patients with GRN mutations exhibit increased TMEM106B expression, and protective TMEM106B polymorphisms are associated with decreased TMEM106B expression. Together, these data raise the possibility that reduction of TMEM106B levels may protect against the pathogenic effects of progranulin haploinsufficiency. We crossed Tmem106b +/- mice with Grn +/- mice, which model the progranulin haploinsufficiency of GRN mutation carriers and develop age-dependent social deficits and lysosomal abnormalities in the brain. We tested whether partial Tmem106b reduction could normalize the social deficits and lysosomal abnormalities of Grn +/- mice. Partial reduction of Tmem106b levels did not correct the social deficits of Grn +/- mice. Tmem106b reduction also failed to normalize most lysosomal abnormalities of Grn +/- mice, except for β-glucuronidase activity, which was suppressed by Tmem106b reduction and increased by progranulin insufficiency. These data do not support the hypothesis that Tmem106b reduction protects against the pathogenic effects of progranulin haploinsufficiency, but do show that Tmem106b reduction normalizes some

  12. Quantification accuracy and partial volume effect in dependence of the attenuation correction of a state-of-the-art small animal PET scanner

    International Nuclear Information System (INIS)

    Mannheim, Julia G; Judenhofer, Martin S; Schmid, Andreas; Pichler, Bernd J; Tillmanns, Julia; Stiller, Detlef; Sossi, Vesna

    2012-01-01

    Quantification accuracy and partial volume effect (PVE) of the Siemens Inveon PET scanner were evaluated. The influence of transmission source activities (40 and 160 MBq) on the quantification accuracy and the PVE were determined. Dynamic range, object size and PVE for different sphere sizes, contrast ratios and positions in the field of view (FOV) were evaluated. The acquired data were reconstructed using different algorithms and correction methods. The activity level of the transmission source and the total emission activity in the FOV strongly influenced the attenuation maps. Reconstruction algorithms, correction methods, object size and location within the FOV had a strong influence on the PVE in all configurations. All evaluated parameters potentially influence the quantification accuracy. Hence, all protocols should be kept constant during a study to allow a comparison between different scans. (paper)

  13. Impact of residual and intrafractional errors on strategy of correction for image-guided accelerated partial breast irradiation

    Directory of Open Access Journals (Sweden)

    Guo Xiao-Mao

    2010-10-01

    Background: The cone beam CT (CBCT) guided radiation can reduce the systematic and random setup errors as compared to the skin-mark setup. However, the residual and intrafractional (RAIF) errors are still unknown. The purpose of this paper is to investigate the magnitude of RAIF errors and correction action levels needed in cone beam computed tomography (CBCT) guided accelerated partial breast irradiation (APBI). Methods: Ten patients were enrolled in the prospective study of CBCT guided APBI. The postoperative tumor bed was irradiated with 38.5 Gy in 10 fractions over 5 days. Two cone-beam CT data sets were obtained with one before and one after the treatment delivery. The CBCT images were registered online to the planning CT images using the automatic algorithm followed by a fine manual adjustment. An action level of 3 mm, meaning that corrections were performed for translations exceeding 3 mm, was implemented in clinical treatments. Based on the acquired data, different correction action levels were simulated, and random RAIF errors, systematic RAIF errors and related margins before and after the treatments were determined for varying correction action levels. Results: A total of 75 pairs of CBCT data sets were analyzed. The systematic and random setup errors based on skin-mark setup prior to treatment delivery were 2.1 mm and 1.8 mm in the lateral (LR), 3.1 mm and 2.3 mm in the superior-inferior (SI), and 2.3 mm and 2.0 mm in the anterior-posterior (AP) directions. With the 3 mm correction action level, the systematic and random RAIF errors were 2.5 mm and 2.3 mm in the LR direction, 2.3 mm and 2.3 mm in the SI direction, and 2.3 mm and 2.2 mm in the AP direction after treatment delivery.
Accordingly, the margins for correction action levels of 3 mm, 4 mm, 5 mm, 6 mm and no correction were 7.9 mm, 8.0 mm, 8.0 mm, 7.9 mm and 8.0 mm in the LR direction; 6.4 mm, 7.1 mm, 7.9 mm, 9.2 mm and 10.5 mm in the SI direction; 7.6 mm, 7.9 mm, 9.4 mm, 10

  14. Impact of residual and intrafractional errors on strategy of correction for image-guided accelerated partial breast irradiation

    International Nuclear Information System (INIS)

    Cai, Gang; Hu, Wei-Gang; Chen, Jia-Yi; Yu, Xiao-Li; Pan, Zi-Qiang; Yang, Zhao-Zhi; Guo, Xiao-Mao; Shao, Zhi-Min; Jiang, Guo-Liang

    2010-01-01

    The cone beam CT (CBCT) guided radiation can reduce the systematic and random setup errors as compared to the skin-mark setup. However, the residual and intrafractional (RAIF) errors are still unknown. The purpose of this paper is to investigate the magnitude of RAIF errors and correction action levels needed in cone beam computed tomography (CBCT) guided accelerated partial breast irradiation (APBI). Ten patients were enrolled in the prospective study of CBCT guided APBI. The postoperative tumor bed was irradiated with 38.5 Gy in 10 fractions over 5 days. Two cone-beam CT data sets were obtained with one before and one after the treatment delivery. The CBCT images were registered online to the planning CT images using the automatic algorithm followed by a fine manual adjustment. An action level of 3 mm, meaning that corrections were performed for translations exceeding 3 mm, was implemented in clinical treatments. Based on the acquired data, different correction action levels were simulated, and random RAIF errors, systematic RAIF errors and related margins before and after the treatments were determined for varying correction action levels. A total of 75 pairs of CBCT data sets were analyzed. The systematic and random setup errors based on skin-mark setup prior to treatment delivery were 2.1 mm and 1.8 mm in the lateral (LR), 3.1 mm and 2.3 mm in the superior-inferior (SI), and 2.3 mm and 2.0 mm in the anterior-posterior (AP) directions. With the 3 mm correction action level, the systematic and random RAIF errors were 2.5 mm and 2.3 mm in the LR direction, 2.3 mm and 2.3 mm in the SI direction, and 2.3 mm and 2.2 mm in the AP direction after treatments delivery. Accordingly, the margins for correction action levels of 3 mm, 4 mm, 5 mm, 6 mm and no correction were 7.9 mm, 8.0 mm, 8.0 mm, 7.9 mm and 8.0 mm in the LR direction; 6.4 mm, 7.1 mm, 7.9 mm, 9.2 mm and 10.5 mm in the SI direction; 7.6 mm, 7.9 mm, 9.4 mm, 10.1 mm and 12.7 mm in the AP direction
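
    Margins of this kind are commonly derived from population recipes such as the van Herk formula M = 2.5Σ + 0.7σ. The sketch below uses that recipe for illustration only; the study does not state which margin formula was actually applied.

```python
def ctv_ptv_margin(sigma_systematic, sigma_random):
    """Population margin recipe M = 2.5 * Sigma + 0.7 * sigma
    (van Herk et al.).  Shown for illustration -- the study above does
    not state that its margins were computed with this formula."""
    return 2.5 * sigma_systematic + 0.7 * sigma_random

# Skin-mark setup errors in the SI direction from the study above (mm)
print(round(ctv_ptv_margin(3.1, 2.3), 2))  # -> 9.36
```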

  15. A simple algorithm for subregional striatal uptake analysis with partial volume correction in dopaminergic PET imaging

    International Nuclear Information System (INIS)

    Lue Kunhan; Lin Hsinhon; Chuang Kehshih; Kao Chihhao, K.; Hsieh Hungjen; Liu Shuhsin

    2014-01-01

    In positron emission tomography (PET) of the dopaminergic system, quantitative measurements of nigrostriatal dopamine function are useful for differential diagnosis. A subregional analysis of striatal uptake enables the diagnostic performance to be more powerful. However, the partial volume effect (PVE) induces an underestimation of the true radioactivity concentration in small structures. This work proposes a simple algorithm for subregional analysis of striatal uptake with partial volume correction (PVC) in dopaminergic PET imaging. The PVC algorithm analyzes the separate striatal subregions and takes into account the PVE based on the recovery coefficient (RC). The RC is defined as the ratio of the PVE-uncorrected to PVE-corrected radioactivity concentration, and is derived from a combination of the traditional volume of interest (VOI) analysis and the large VOI technique. The clinical studies, comprising 11 patients with Parkinson's disease (PD) and 6 healthy subjects, were used to assess the impact of PVC on the quantitative measurements. Simulations on a numerical phantom that mimicked realistic healthy and neurodegenerative situations were used to evaluate the performance of the proposed PVC algorithm. In both the clinical and the simulation studies, the striatal-to-occipital ratio (SOR) values for the entire striatum and its subregions were calculated with and without PVC. In the clinical studies, the SOR values in each structure (caudate, anterior putamen, posterior putamen, putamen, and striatum) were significantly higher by using PVC in contrast to those without. Among the PD patients, the SOR values in each structure and quantitative disease severity ratings were shown to be significantly related only when PVC was used. For the simulation studies, the average absolute percentage error of the SOR estimates before and after PVC were 22.74% and 1.54% in the healthy situation, respectively; those in the neurodegenerative situation were 20.69% and 2
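
    The quantities involved can be sketched as follows. The RC is defined above as the ratio of PVE-uncorrected to PVE-corrected concentration, so dividing a measured regional mean by its RC approximates the corrected value; the simple ratio definition of the SOR and all numbers here are assumptions for illustration.

```python
def striatal_occipital_ratio(region_mean, occipital_mean, recovery_coefficient=1.0):
    """SOR with an optional RC-based partial volume correction.

    Uses the simple ratio definition SOR = C_region / C_occipital (the
    paper may define it differently).  RC = PVE-uncorrected / PVE-corrected
    concentration, so dividing the measured regional mean by the RC
    approximates the PVE-corrected value.  All numbers here are hypothetical.
    """
    corrected = region_mean / recovery_coefficient
    return corrected / occipital_mean

# e.g. measured putamen mean 4.2, occipital mean 1.5, hypothetical RC 0.7
print(round(striatal_occipital_ratio(4.2, 1.5, 0.7), 3))  # -> 4.0
```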

  16. Using Intraoral Scanning Technology for Three-Dimensional Printing of Kennedy Class I Removable Partial Denture Metal Framework: A Clinical Report.

    Science.gov (United States)

    Hu, Feng; Pei, Zhenhua; Wen, Ying

    2017-11-16

    Removable partial dentures (RPDs) are used to restore missing teeth and are traditionally fabricated using the lost-wax casting technique. The casting process is arduous, time-consuming, and requires a skilled technician. The development of intraoral scanning and 3D printing technology has made rapid prototyping of the RPD more achievable. This article reports a completed case of direct fabrication of a maxillary RPD metal framework (Kennedy Class I) using intraoral scanning and 3D printing techniques. Acceptable fit and satisfactory clinical outcome were demonstrated. Intraoral scanning and 3D printing for fabrication of the RPD metal framework is a useful alternative to conventional impression and casting techniques, especially for patients suffering from nasal obstruction or intolerance. © 2017 by the American College of Prosthodontists.

  17. Simultaneous determination of penicillin G salts by infrared spectroscopy: Evaluation of combining orthogonal signal correction with radial basis function-partial least squares regression

    Science.gov (United States)

    Talebpour, Zahra; Tavallaie, Roya; Ahmadi, Seyyed Hamid; Abdollahpour, Assem

    2010-09-01

    In this study, a new method for the simultaneous determination of penicillin G salts in a pharmaceutical mixture via FT-IR spectroscopy combined with chemometrics was investigated. The mixture of penicillin G salts is a complex system due to the similar analytical characteristics of its components. Partial least squares (PLS) and radial basis function-partial least squares (RBF-PLS) were used to develop the linear and nonlinear relations between spectra and components, respectively. The orthogonal signal correction (OSC) preprocessing method was used to remove unexpected information, such as spectral overlapping and scattering effects. In order to compare the influence of OSC on the PLS and RBF-PLS models, the optimal linear (PLS) and nonlinear (RBF-PLS) models based on conventional and OSC-preprocessed spectra were established and compared. The obtained results demonstrated that OSC clearly enhanced the performance of both the RBF-PLS and PLS calibration models. Moreover, in the case of a nonlinear relation between spectra and components, the OSC-RBF-PLS model gave more satisfactory results than the OSC-PLS model, indicating that OSC helps remove extrinsic deviations from linearity without eliminating nonlinear information related to the components. The chemometric models were tested on an external dataset and finally applied to the analysis of a commercialized injection product of penicillin G salts.
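
    As a rough illustration of the OSC preprocessing step, a minimal single-component version removes from the spectra the dominant variance direction that is orthogonal to the concentrations. This is a sketch only; practical OSC algorithms (including the one used above) iterate and may extract several components.

```python
import numpy as np

def osc_one_component(X, y):
    """Minimal single-component orthogonal signal correction (OSC).

    Removes from the spectra X (samples x wavelengths) the dominant
    variance direction that is orthogonal to the concentrations y.
    Illustrative sketch only.
    """
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float).reshape(-1, 1)
    # score along the first principal direction of the centered spectra
    _, _, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
    t = X @ Vt[0]
    # project out the part of the score correlated with y
    t_orth = t - (y @ (y.T @ t[:, None]) / (y.T @ y)).ravel()
    # loading for the orthogonal score, then deflate X
    p = X.T @ t_orth / (t_orth @ t_orth)
    return X - np.outer(t_orth, p)
```

    By construction the removed component is orthogonal to y, so the deflated spectra lose variation unrelated to the analyte concentrations.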

  18. Leading gravitational corrections and a unified universe

    DEFF Research Database (Denmark)

    Codello, Alessandro; Jain, Rajeev Kumar

    2016-01-01

    Leading order gravitational corrections to the Einstein-Hilbert action can lead to a consistent picture of the universe by unifying the epochs of inflation and dark energy in a single framework. While the leading local correction induces an inflationary phase in the early universe, the leading nonlocal term leads to an accelerated expansion of the universe at the present epoch. We argue that both the leading UV and IR terms can be obtained within the framework of a covariant effective field theory of gravity. The perturbative gravitational corrections therefore provide a fundamental basis

  19. A novel baseline correction method using convex optimization framework in laser-induced breakdown spectroscopy quantitative analysis

    Science.gov (United States)

    Yi, Cancan; Lv, Yong; Xiao, Han; Ke, Ke; Yu, Xun

    2017-12-01

    For the laser-induced breakdown spectroscopy (LIBS) quantitative analysis technique, baseline correction is an essential part of LIBS data preprocessing. Baseline drift, a widespread phenomenon generated by fluctuations in laser energy, inhomogeneity of sample surfaces and background noise, has aroused the interest of many researchers. Most of the prevalent algorithms need to preset some key parameters, such as a suitable spline function and the fitting order, and thus lack adaptability. Based on the characteristics of LIBS, such as the sparsity of spectral peaks and the low-pass filtered feature of the baseline, a novel baseline correction and spectral data denoising method is studied in this paper. The improved technique utilizes a convex optimization scheme to form a non-parametric baseline correction model. Meanwhile, an asymmetric penalty function is employed to enhance the signal-to-noise ratio (SNR) of the LIBS signal and improve reconstruction precision. Furthermore, an efficient iterative algorithm is applied to the optimization process, so as to ensure the convergence of this algorithm. To validate the proposed method, the concentration analysis of Chromium (Cr), Manganese (Mn) and Nickel (Ni) contained in 23 certified high alloy steel samples is assessed by using quantitative models with Partial Least Squares (PLS) and Support Vector Machine (SVM). Because it requires no prior knowledge of sample composition or mathematical hypotheses, the method proposed in this paper has better accuracy in quantitative analysis than other methods, and fully reflects its adaptive ability.

  20. Framework for the construction of a Monte Carlo simulated brain PET–MR image database

    International Nuclear Information System (INIS)

    Thomas, B.A.; Erlandsson, K.; Drobnjak, I.; Pedemonte, S.; Vunckx, K.; Bousse, A.; Reilhac-Laborde, A.; Ourselin, S.; Hutton, B.F.

    2014-01-01

    Simultaneous PET–MR acquisition reduces the possibility of registration mismatch between the two modalities. This facilitates the application of techniques, either during reconstruction or post-reconstruction, that aim to improve the PET resolution by utilising structural information provided by MR. However, in order to validate such methods for brain PET–MR studies it is desirable to evaluate the performance using data where the ground truth is known. In this work, we present a framework for the production of datasets where simulations of both the PET and MR, based on real data, are generated such that reconstruction and post-reconstruction approaches can be fairly compared. -- Highlights: • A framework for simulating realistic brain PET–MR images is proposed. • The imaging data created is formed from real acquisitions. • Partial volume correction techniques can be fairly compared using this framework

  1. Framework for the construction of a Monte Carlo simulated brain PET–MR image database

    Energy Technology Data Exchange (ETDEWEB)

    Thomas, B.A., E-mail: benjamin.thomas2@uclh.nhs.uk [Institute of Nuclear Medicine, UCL, London (United Kingdom); Erlandsson, K. [Institute of Nuclear Medicine, UCL, London (United Kingdom); Drobnjak, I.; Pedemonte, S. [Centre for Medical Image Computing, UCL, London (United Kingdom); Vunckx, K. [Department of Nuclear Medicine, Katholieke Universiteit Leuven, Leuven (Belgium); Bousse, A. [Institute of Nuclear Medicine, UCL, London (United Kingdom); Reilhac-Laborde, A. [Australian Nuclear Science and Technology Organization, Sydney (Australia); Ourselin, S. [Centre for Medical Image Computing, UCL, London (United Kingdom); Hutton, B.F. [Institute of Nuclear Medicine, UCL, London (United Kingdom); Centre for Medical Radiation Physics, University of Wollongong, NSW (Australia)

    2014-01-11

    Simultaneous PET–MR acquisition reduces the possibility of registration mismatch between the two modalities. This facilitates the application of techniques, either during reconstruction or post-reconstruction, that aim to improve the PET resolution by utilising structural information provided by MR. However, in order to validate such methods for brain PET–MR studies it is desirable to evaluate the performance using data where the ground truth is known. In this work, we present a framework for the production of datasets where simulations of both the PET and MR, based on real data, are generated such that reconstruction and post-reconstruction approaches can be fairly compared. -- Highlights: • A framework for simulating realistic brain PET–MR images is proposed. • The imaging data created is formed from real acquisitions. • Partial volume correction techniques can be fairly compared using this framework.

  2. Effect of storage time and framework design on the accuracy of maxillary cobalt-chromium cast removable partial dentures

    Science.gov (United States)

    Viswambaran, M.; Sundaram, R. K.

    2015-01-01

    Statement of Problem: Inaccuracies in the fit of palatal major connectors may be related to distortion of the wax pattern due to prolonged storage time and faulty major connector design. Purpose: This in vitro study was carried out to find out the effect of storage time and major connector design on the accuracy of cobalt-chromium cast removable partial dentures (RPDs). Materials and Methods: A brass metal die with a Kennedy Class III, modification 1, partially edentulous arch was used as a master die. Thirty-six refractory casts were fabricated from the master die. The refractory casts were divided into three groups (Group I: Anterior-posterior palatal bar, Group II: Palatal strap and Group III: Palatal plate) based on the design of maxillary major connector and subdivided into four groups (sub Group A: 01 h, sub Group B: 03 h, sub Group C: 06 h, and sub Group D: 24 h) based on the storage time. For each group, 12 frameworks were fabricated. The influence of wax pattern storage time and palatal major connector design on the accuracy of fit on the master die was compared. Casting defects (nodules/incompleteness) of the frameworks were also evaluated before finishing and polishing. Repeated measures analysis of variance was used to analyze the data. Results: The gap discrepancy was least in sub Group A (01 h) followed by sub Group B (03 h) and C (06 h) and most in sub Group D (24 h). Statistically significant differences (P < 0.05 in all locations L1–L5) in the fit of the framework were related to the design of the major connector. The gap discrepancy was least in Group I (anterior-posterior palatal bar) followed by Group II (palatal strap) and most in Group III (palatal plate). Conclusions: It is recommended that the wax patterns for RPD be invested immediately on completion of the wax procedure. The selection of a major connector design is crucial for an accurate fit of RPD. PMID:26681850

  3. Effect of storage time and framework design on the accuracy of maxillary cobalt-chromium cast removable partial dentures

    Directory of Open Access Journals (Sweden)

    M Viswambaran

    2015-01-01

Full Text Available Statement of Problem: Inaccuracies in the fit of palatal major connectors may be related to distortion of the wax pattern due to prolonged storage time and faulty major connector design. Purpose: This in vitro study was carried out to determine the effect of storage time and major connector design on the accuracy of cobalt-chromium cast removable partial dentures (RPDs). Materials and Methods: A brass metal die with a Kennedy Class III, modification 1, partially edentulous arch was used as a master die. Thirty-six refractory casts were fabricated from the master die. The refractory casts were divided into three groups (Group I: Anterior-posterior palatal bar, Group II: Palatal strap and Group III: Palatal plate) based on the design of the maxillary major connector and subdivided into four groups (sub Group A: 01 h, sub Group B: 03 h, sub Group C: 06 h, and sub Group D: 24 h) based on the storage time. For each group, 12 frameworks were fabricated. The influence of wax pattern storage time and major connector design on the accuracy of fit of the frameworks on the master die was compared. Casting defects (nodules/incompleteness) of the frameworks were also evaluated before finishing and polishing. Repeated measures analysis of variance was used to analyze the data. Results: The gap discrepancy was least in sub Group A (01 h), followed by sub Groups B (03 h) and C (06 h), and greatest in sub Group D (24 h). Statistically significant differences (P < 0.05 in all locations L1–L5) in the fit of the framework were related to the design of the major connector. The gap discrepancy was least in Group I (anterior-posterior palatal bar), followed by Group II (palatal strap), and greatest in Group III (palatal plate). Conclusions: It is recommended that wax patterns for RPDs be invested immediately on completion of the wax procedure. The selection of a major connector design is crucial for an accurate fit of RPD.

  4. Quantitation of regional cerebral blood flow corrected for partial volume effect using O-15 water and PET

    DEFF Research Database (Denmark)

Iida, H.; Law, I.; Pakkenberg, B.

    2000-01-01

Limited spatial resolution of positron emission tomography (PET) can cause significant underestimation in the observed regional radioactivity concentration (so-called partial volume effect or PVE) resulting in systematic errors in estimating quantitative physiologic parameters. The authors have...... formulated four mathematical models that describe the dynamic behavior of a freely diffusible tracer (H2(15)O) in a region of interest (ROI) incorporating estimates of regional tissue flow that are independent of PVE. The current study was intended to evaluate the feasibility of these models and to establish...... a methodology to accurately quantify regional cerebral blood flow (CBF) corrected for PVE in cortical gray matter regions. Five monkeys were studied with PET after IV H2(15)O two times (n = 3) or three times (n = 2) in a row. Two ROIs were drawn on structural magnetic resonance imaging (MRI) scans and projected...

  5. Exact two-loop vacuum polarization correction to the Lamb shift in hydrogenlike ions

    International Nuclear Information System (INIS)

    Plunien, G.; Beier, T.; Soff, G.

    1998-01-01

We present a calculation scheme for the two-loop vacuum polarization correction of order α^2 to the Lamb shift of hydrogenlike high-Z atoms. The interaction with the external Coulomb field is taken into account to all orders in (Zα). By means of a modified potential approach the problem is reduced to the evaluation of effective one-loop vacuum-polarization potentials. An expression for the energy shift is deduced within the framework of partial wave decomposition performing appropriate subtractions. Exact results for the two-loop vacuum polarization contribution to the Lamb shift of K- and L-shell electron states in hydrogenlike lead and uranium are presented. (orig.)

  6. Globally COnstrained Local Function Approximation via Hierarchical Modelling, a Framework for System Modelling under Partial Information

    DEFF Research Database (Denmark)

    Øjelund, Henrik; Sadegh, Payman

    2000-01-01

be obtained. This paper presents a new approach for system modelling under partial (global) information (or the so-called Gray-box modelling) that seeks to preserve the benefits of the global as well as local methodologies within a unified framework. While the proposed technique relies on local approximations......Local function approximations concern fitting low order models to weighted data in neighbourhoods of the points where the approximations are desired. Despite their generality and convenience of use, local models typically suffer, among others, from difficulties arising in physical interpretation...... simultaneously with the (local estimates of) function values. The approach is applied to modelling of a linear time variant dynamic system under prior linear time invariant structure where local regression fails as a result of high dimensionality....
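The local approximations this record builds on are weighted least-squares fits in a neighbourhood of the point of interest. A minimal sketch in Python with NumPy (the function name, Gaussian weighting and bandwidth are illustrative choices, not the authors' implementation):

```python
import numpy as np

def local_linear_fit(x, y, x0, bandwidth):
    """Weighted least-squares line fit around x0: Gaussian weights
    concentrate the fit on data near x0, giving a local estimate
    of the underlying function value there."""
    w = np.exp(-0.5 * ((x - x0) / bandwidth) ** 2)
    X = np.column_stack([np.ones_like(x), x - x0])
    # solve the weighted normal equations (X^T W X) beta = X^T W y
    WX = X * w[:, None]
    beta = np.linalg.solve(X.T @ WX, X.T @ (w * y))
    return beta[0]  # intercept = local estimate of f(x0)
```

For exactly linear data the local fit reproduces the function value at x0 regardless of the bandwidth, which is a convenient sanity check.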

  7. Partial gigantism

    Directory of Open Access Journals (Sweden)

    М.М. Karimova

    2017-05-01

Full Text Available A girl with partial gigantism (enlarged I and II fingers of the left foot) is being examined. This condition is a rare and unresolved problem, as the definite reason for its development has not been determined. A wait-and-see strategy is recommended, as well as corrective operations after closure of the growth zones, and the formation of a data pool for generalization and development of schemes of drug and radiation therapy methods.

  8. Intraperitoneal implant of recombinant encapsulated cells overexpressing alpha-L-iduronidase partially corrects visceral pathology in mucopolysaccharidosis type I mice.

    Science.gov (United States)

    Baldo, Guilherme; Mayer, Fabiana Quoos; Martinelli, Barbara; Meyer, Fabiola Schons; Burin, Maira; Meurer, Luise; Tavares, Angela Maria Vicente; Giugliani, Roberto; Matte, Ursula

    2012-08-01

Mucopolysaccharidosis type I (MPS I) is characterized by deficiency of the enzyme alpha-L-iduronidase (IDUA) and storage of glycosaminoglycans (GAG) in several tissues. Currently available treatments have limitations, prompting the search for new therapies. Encapsulation of recombinant cells within polymeric structures combines gene and cell therapy and is a promising approach for treating MPS I. We produced alginate microcapsules containing baby hamster kidney (BHK) cells overexpressing IDUA and implanted these capsules in the peritoneum of MPS I mice. An increase in serum and tissue IDUA activity was observed at early time-points, as well as a reduction in GAG storage; however, correction in the long term was only partially achieved, with a drop in IDUA activity observed a few weeks after the implant. Analysis of the capsules obtained from the peritoneum revealed inflammation and a pericapsular fibrotic process, which could be responsible for the reduction in IDUA levels observed in the long term. In addition, treated mice developed antibodies against the enzyme. The results suggest that the encapsulation process is effective in the short term but improvements must be achieved in order to reduce the immune response and reach a stable correction.

  9. Cerebral blood flow in temporal lobe epilepsy: a partial volume correction study

    Energy Technology Data Exchange (ETDEWEB)

    Giovacchini, Giampiero [University Milano-Bicocca, Milan (Italy); Bonwetsch, Robert; Theodore, William H. [National Institute of Neurological Diseases and Strokes, Clinical Epilepsy Section, Bethesda, MD (United States); Herscovitch, Peter [National Institutes of Health, PET Department, Clinical Center, Bethesda, MD (United States); Carson, Richard E. [Yale PET Center, New Haven, CT (United States)

    2007-12-15

Previous studies in temporal lobe epilepsy (TLE) have shown that, owing to brain atrophy, positron emission tomography (PET) can overestimate deficits in measures of cerebral function such as glucose metabolism (CMRglu) and neuroreceptor binding. The magnitude of this effect on cerebral blood flow (CBF) is unexplored. The aim of this study was to assess CBF deficits in TLE before and after magnetic resonance imaging-based partial volume correction (PVC). Absolute values of CBF for 21 TLE patients and nine controls were computed before and after PVC. In TLE patients, quantitative CMRglu measurements also were obtained. Before PVC, regional values of CBF were significantly (p<0.05) lower in TLE patients than in controls in all regions, except the fusiform gyrus contralateral to the epileptic focus. After PVC, statistical significance was maintained in only four regions: ipsilateral inferior temporal cortex, bilateral insula and contralateral amygdala. There was no significant difference between patients and controls in CBF asymmetry indices (AIs) in any region before or after PVC. In TLE patients, AIs for CBF were significantly smaller than for CMRglu in middle and inferior temporal cortex, fusiform gyrus and hippocampus both before and after PVC. A significant positive relationship between disease duration and AIs for CMRglu, but not CBF, was detected in hippocampus and amygdala, before but not after PVC. PVC should be used for PET CBF measurements in patients with TLE. Reduced blood flow, in contrast to glucose metabolism, is mainly due to structural changes. (orig.)
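The record does not reproduce the correction formula itself; a common MRI-based two-compartment scheme (in the spirit of Müller-Gärtner-type PVC) subtracts the estimated white-matter spill-in and rescales by the gray-matter tissue fraction obtained from segmentation. A hedged sketch, with illustrative names and numbers:

```python
def muller_gartner_pvc(c_measured, f_gm, f_wm, c_wm):
    """Two-compartment partial volume correction of a regional PET value.
    c_measured: observed regional concentration
    f_gm, f_wm: gray- and white-matter fractions in the region (from MRI)
    c_wm:       assumed white-matter concentration (spill-in estimate)
    Returns the corrected gray-matter concentration."""
    if f_gm <= 0.0:
        raise ValueError("gray-matter fraction must be positive")
    return (c_measured - f_wm * c_wm) / f_gm
```

For example, a measured value of 40 in a voxel that is 60% gray matter and 30% white matter (white-matter level 20) corrects to (40 - 6) / 0.6 ≈ 56.7, illustrating how atrophy-driven dilution is undone.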

  10. Theoretical Framework for Evaluating Partial Checksum Protection in Wireless Video Streaming

    DEFF Research Database (Denmark)

    Korhonen, Jari; Forchhammer, Søren; Larsen, Knud J.

    2012-01-01

The benefits of passing partially corrupted packets to the application instead of discarding them have been debated actively since the Lightweight User Datagram Protocol (UDP-Lite) was introduced. UDP-Lite allows partial checksumming in order to ignore bit errors in the non-critical part of the pack...
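The idea behind partial checksum protection can be illustrated with the standard one's-complement Internet checksum restricted to a coverage prefix, as UDP-Lite does: errors beyond the covered bytes do not invalidate the packet. The code below is an illustrative sketch, not a protocol implementation:

```python
def internet_checksum(data: bytes) -> int:
    """One's-complement sum of 16-bit words (RFC 1071 style)."""
    if len(data) % 2:
        data += b"\x00"  # pad odd-length input
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold carries
    return (~total) & 0xFFFF

def partial_checksum(packet: bytes, coverage: int) -> int:
    """UDP-Lite-style checksum over only the first `coverage` bytes;
    bit errors in the remaining payload leave the checksum unchanged."""
    return internet_checksum(packet[:coverage])
```

Corrupting a byte outside the coverage range leaves the partial checksum intact while a full checksum would reject the packet, which is exactly the trade-off the record analyses.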

  11. Correct-by-design output feedback of LTI systems

    NARCIS (Netherlands)

    Haesaert, S.; Abate, A.; van den Hof, P.M.J.

    2016-01-01

    Current state-of-the-art correct-by-design controllers are designed for full-state measurable systems. This work extends the applicability of correct-by-design controllers to partially observable linear, time-invariant (LTI) models. Towards the certification of the synthesised controllers,

  12. Load transfer characteristics of unilateral distal extension removable partial dentures with polyacetal resin supporting components.

    Science.gov (United States)

    Jiao, T; Chang, T; Caputo, A A

    2009-03-01

    To photoelastically examine load transfer by unilateral distal extension removable partial dentures with supporting and retentive components made of the lower stiffness polyacetal resins. A mandibular photoelastic model, with edentulous space distal to the right second premolar and missing the left first molar, was constructed to determine the load transmission characteristics of a unilateral distal extension base removable partial denture. Individual simulants were used for tooth structure, periodontal ligament, and alveolar bone. Three designs were fabricated: a major connector and clasps made from polyacetal resin, a metal framework as the major connector with polyacetal resin clasp and denture base, and a traditional metal framework I-bar removable partial denture. Simulated posterior bilateral and unilateral occlusal loads were applied to the removable partial dentures. Under bilateral and left side unilateral loading, the highest stress was observed adjacent to the left side posterior teeth with the polyacetal removable partial denture. The lowest stress was seen with the traditional metal framework. Unilateral loads on the right edentulous region produced similar distributed stress under the denture base with all three designs but a somewhat higher intensity with the polyacetal framework. The polyacetal resin removable partial denture concentrated the highest stresses to the abutment and the bone. The traditional metal framework I-bar removable partial denture most equitably distributed force. The hybrid design that combined a metal framework and polyacetal clasp and denture base may be a viable alternative when aesthetics are of primary concern.

  13. Fit accuracy of metal partial removable dental prosthesis frameworks fabricated by traditional or light curing modeling material technique: An in vitro study

    Science.gov (United States)

    Anan, Mohammad Tarek M.; Al-Saadi, Mohannad H.

    2015-01-01

    Objective The aim of this study was to compare the fit accuracies of metal partial removable dental prosthesis (PRDP) frameworks fabricated by the traditional technique (TT) or the light-curing modeling material technique (LCMT). Materials and methods A metal model of a Kennedy class III modification 1 mandibular dental arch with two edentulous spaces of different spans, short and long, was used for the study. Thirty identical working casts were used to produce 15 PRDP frameworks each by TT and by LCMT. Every framework was transferred to a metal master cast to measure the gap between the metal base of the framework and the crest of the alveolar ridge of the cast. Gaps were measured at three points on each side by a USB digital intraoral camera at ×16.5 magnification. Images were transferred to a graphics editing program. A single examiner performed all measurements. The two-tailed t-test was performed at the 5% significance level. Results The mean gap value was significantly smaller in the LCMT group compared to the TT group. The mean value of the short edentulous span was significantly smaller than that of the long edentulous span in the LCMT group, whereas the opposite result was obtained in the TT group. Conclusion Within the limitations of this study, it can be concluded that the fit of the LCMT-fabricated frameworks was better than the fit of the TT-fabricated frameworks. The framework fit can differ according to the span of the edentate ridge and the fabrication technique for the metal framework. PMID:26236129

  14. Self-consistency corrections in effective-interaction calculations

    International Nuclear Information System (INIS)

    Starkand, Y.; Kirson, M.W.

    1975-01-01

    Large-matrix extended-shell-model calculations are used to compute self-consistency corrections to the effective interaction and to the linked-cluster effective interaction. The corrections are found to be numerically significant and to affect the rate of convergence of the corresponding perturbation series. The influence of various partial corrections is tested. It is concluded that self-consistency is an important effect in determining the effective interaction and improving the rate of convergence. (author)

  15. QED radiative corrections under the SANC project

    International Nuclear Information System (INIS)

    Christova, P.

    2003-01-01

Automatic calculation of the QED radiative corrections in the framework of the SANC computer system is described. A collection of computer programs written in the FORM3 language is aimed at compiling a database of analytic results to be used to theoretically support experiments at high-energy accelerators. Presented here is the scheme of automatic analytical calculation of the QED radiative corrections to the fermionic decays of the Z, H and W bosons in the framework of the SANC system

  16. Electroweak corrections to H->ZZ/WW->4 leptons

    International Nuclear Information System (INIS)

    Bredenstein, A.; Denner, A.; Dittmaier, S.; Weber, M.M.

    2006-01-01

We provide predictions for the decays H->ZZ->4 leptons and H->WW->4 leptons including the complete electroweak O(α) corrections and improvements by higher-order final-state radiation and two-loop corrections proportional to G_μ^2 M_H^4. The gauge-boson resonances are described in the complex-mass scheme. We find corrections at the level of 1-8% for the partial widths

  17. Generic calculation of two-body partial decay widths at the full one-loop level

    Science.gov (United States)

    Goodsell, Mark D.; Liebler, Stefan; Staub, Florian

    2017-11-01

We describe a fully generic implementation of two-body partial decay widths at the full one-loop level in the SARAH and SPheno framework compatible with most supported models. It incorporates fermionic decays to a fermion and a scalar or a gauge boson as well as scalar decays into two fermions, two gauge bosons, two scalars or a scalar and a gauge boson. We present the relevant generic expressions for virtual and real corrections. Whereas wave-function corrections are determined from on-shell conditions, the parameters of the underlying model are by default renormalised in a \overline{DR} (or \overline{MS}) scheme. However, the user can also define model-specific counter-terms. As an example we discuss the renormalisation of the electric charge in the Thomson limit for top-quark decays in the standard model. One-loop-induced decays are also supported. The framework additionally allows the addition of mass and mixing corrections induced at higher orders for the involved external states. We explain our procedure to cancel infrared divergences for such cases, which is achieved through an infrared counter-term taking into account corrected Goldstone boson vertices. We compare our results for sfermion, gluino and Higgs decays in the minimal supersymmetric standard model (MSSM) against the public codes SFOLD, FVSFOLD and HFOLD and explain observed differences. Radiatively induced gluino and neutralino decays are compared against the original implementation in SPheno in the MSSM. We exactly reproduce the results of the code CNNDecays for decays of neutralinos and charginos in R-parity violating models. The new version SARAH 4.11.0 by default includes the calculation of two-body decay widths at the full one-loop level. Current limitations for certain model classes are described.
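The one-loop machinery is model-generic, but the kinematic skeleton every two-body width dresses is the standard tree-level formula Γ = |M|² λ^{1/2}(m², m₁², m₂²) / (16π m³), with λ the Källén triangle function. A small sketch (names and conventions are illustrative; spin averaging is assumed folded into the squared matrix element):

```python
import math

def kallen(a, b, c):
    """Källén triangle function λ(a, b, c) = a² + b² + c² - 2(ab + bc + ca)."""
    return a * a + b * b + c * c - 2.0 * (a * b + b * c + c * a)

def two_body_width(m, m1, m2, msq_avg):
    """Tree-level 1 -> 2 partial width for a spin-averaged squared
    matrix element `msq_avg`; returns 0 when the channel is closed."""
    lam = kallen(m * m, m1 * m1, m2 * m2)
    if lam <= 0.0:
        return 0.0
    p = math.sqrt(lam) / (2.0 * m)      # daughter momentum in parent rest frame
    return msq_avg * p / (8.0 * math.pi * m * m)
```

For massless daughters p = m/2, so the width reduces to |M|² / (16π m), a quick consistency check on the normalisation.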

  18. Generic calculation of two-body partial decay widths at the full one-loop level

    Energy Technology Data Exchange (ETDEWEB)

    Goodsell, Mark D. [UPMC Univ. Paris 06 (France); Centre National de la Recherche Scientifique (CNRS), 75 - Paris (France); Sorbonne Univ., Paris (France); Liebler, Stefan [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Staub, Florian [Karlsruhe Institute for Technology, Karlsruhe (Germany). Inst. for Theoretical Physics; Karlsruhe Institute for Technology, Eggenstein-Leopoldshafen (Germany). Inst. for Nuclear Physics

    2017-04-15

We describe a fully generic implementation of two-body partial decay widths at the full one-loop level in the SARAH and SPheno framework compatible with most supported models. It incorporates fermionic decays to a fermion and a scalar or a gauge boson as well as scalar decays into two fermions, two gauge bosons, two scalars or a scalar and a gauge boson. We present the relevant generic expressions for virtual and real corrections. Whereas wavefunction corrections are determined from on-shell conditions, the parameters of the underlying model are by default renormalised in a DR (or MS) scheme. However, the user can also define model-specific counter-terms. As an example we discuss the renormalisation of the electric charge in the Thomson limit for top-quark decays in the standard model. One-loop induced decays are also supported. The framework additionally allows the addition of mass and mixing corrections induced at higher orders for the involved external states. We explain our procedure to cancel infra-red divergences for such cases, which is achieved through an infra-red counter-term taking into account corrected Goldstone boson vertices. We compare our results for sfermion, gluino and Higgs decays in the minimal supersymmetric standard model (MSSM) against the public codes SFOLD, FVSFOLD and HFOLD and explain observed differences. Radiatively induced gluino and neutralino decays are compared against the original implementation in SPheno in the MSSM. We exactly reproduce the results of the code CNNDecays for decays of neutralinos and charginos in R-parity violating models. The new version SARAH 4.11.0 by default includes the calculation of two-body decay widths at the full one-loop level. Current limitations for certain model classes are described.

  19. Generic calculation of two-body partial decay widths at the full one-loop level

    Energy Technology Data Exchange (ETDEWEB)

    Goodsell, Mark D. [Sorbonne Universites, UPMC Univ Paris 06, UMR 7589, LPTHE, Paris (France); CNRS, UMR 7589, LPTHE, Paris (France); Liebler, Stefan [DESY, Hamburg (Germany); Staub, Florian [Karlsruhe Institute of Technology, Institute for Theoretical Physics (ITP), Karlsruhe (Germany); Karlsruhe Institute of Technology, Institute for Nuclear Physics (IKP), Eggenstein-Leopoldshafen (Germany)

    2017-11-15

    We describe a fully generic implementation of two-body partial decay widths at the full one-loop level in the SARAH and SPheno framework compatible with most supported models. It incorporates fermionic decays to a fermion and a scalar or a gauge boson as well as scalar decays into two fermions, two gauge bosons, two scalars or a scalar and a gauge boson. We present the relevant generic expressions for virtual and real corrections. Whereas wave-function corrections are determined from on-shell conditions, the parameters of the underlying model are by default renormalised in a DR (or MS) scheme. However, the user can also define model-specific counter-terms. As an example we discuss the renormalisation of the electric charge in the Thomson limit for top-quark decays in the standard model. One-loop-induced decays are also supported. The framework additionally allows the addition of mass and mixing corrections induced at higher orders for the involved external states. We explain our procedure to cancel infrared divergences for such cases, which is achieved through an infrared counter-term taking into account corrected Goldstone boson vertices. We compare our results for sfermion, gluino and Higgs decays in the minimal supersymmetric standard model (MSSM) against the public codes SFOLD, FVSFOLD and HFOLD and explain observed differences. Radiatively induced gluino and neutralino decays are compared against the original implementation in SPheno in the MSSM. We exactly reproduce the results of the code CNNDecays for decays of neutralinos and charginos in R-parity violating models. The new version SARAH 4.11.0 by default includes the calculation of two-body decay widths at the full one-loop level. Current limitations for certain model classes are described. (orig.)

  20. Generic calculation of two-body partial decay widths at the full one-loop level

    International Nuclear Information System (INIS)

    Goodsell, Mark D.; Liebler, Stefan; Staub, Florian; Karlsruhe Institute for Technology, Eggenstein-Leopoldshafen

    2017-04-01

We describe a fully generic implementation of two-body partial decay widths at the full one-loop level in the SARAH and SPheno framework compatible with most supported models. It incorporates fermionic decays to a fermion and a scalar or a gauge boson as well as scalar decays into two fermions, two gauge bosons, two scalars or a scalar and a gauge boson. We present the relevant generic expressions for virtual and real corrections. Whereas wavefunction corrections are determined from on-shell conditions, the parameters of the underlying model are by default renormalised in a DR (or MS) scheme. However, the user can also define model-specific counter-terms. As an example we discuss the renormalisation of the electric charge in the Thomson limit for top-quark decays in the standard model. One-loop induced decays are also supported. The framework additionally allows the addition of mass and mixing corrections induced at higher orders for the involved external states. We explain our procedure to cancel infra-red divergences for such cases, which is achieved through an infra-red counter-term taking into account corrected Goldstone boson vertices. We compare our results for sfermion, gluino and Higgs decays in the minimal supersymmetric standard model (MSSM) against the public codes SFOLD, FVSFOLD and HFOLD and explain observed differences. Radiatively induced gluino and neutralino decays are compared against the original implementation in SPheno in the MSSM. We exactly reproduce the results of the code CNNDecays for decays of neutralinos and charginos in R-parity violating models. The new version SARAH 4.11.0 by default includes the calculation of two-body decay widths at the full one-loop level. Current limitations for certain model classes are described.

  1. Partial wave analysis using graphics processing units

    Energy Technology Data Exchange (ETDEWEB)

    Berger, Niklaus; Liu Beijiang; Wang Jike, E-mail: nberger@ihep.ac.c [Institute of High Energy Physics, Chinese Academy of Sciences, 19B Yuquan Lu, Shijingshan, 100049 Beijing (China)

    2010-04-01

    Partial wave analysis is an important tool for determining resonance properties in hadron spectroscopy. For large data samples however, the un-binned likelihood fits employed are computationally very expensive. At the Beijing Spectrometer (BES) III experiment, an increase in statistics compared to earlier experiments of up to two orders of magnitude is expected. In order to allow for a timely analysis of these datasets, additional computing power with short turnover times has to be made available. It turns out that graphics processing units (GPUs) originally developed for 3D computer games have an architecture of massively parallel single instruction multiple data floating point units that is almost ideally suited for the algorithms employed in partial wave analysis. We have implemented a framework for tensor manipulation and partial wave fits called GPUPWA. The user writes a program in pure C++ whilst the GPUPWA classes handle computations on the GPU, memory transfers, caching and other technical details. In conjunction with a recent graphics processor, the framework provides a speed-up of the partial wave fit by more than two orders of magnitude compared to legacy FORTRAN code.
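The computational core the record parallelises is the un-binned negative log-likelihood of a coherent sum of partial waves. A CPU sketch with NumPy standing in for the GPU kernels (array shapes and the Monte Carlo normalisation are illustrative of the structure, not of GPUPWA's actual API):

```python
import numpy as np

def nll(coeffs, amps, norm_amps):
    """Negative log-likelihood of a coherent partial-wave sum.
    coeffs:    complex (n_waves,)        production coefficients being fitted
    amps:      complex (n_events, n_waves) precomputed wave amplitudes per event
    norm_amps: complex (n_mc, n_waves)   amplitudes on phase-space MC for the
                                         normalisation integral
    """
    intensity = np.abs(amps @ coeffs) ** 2           # per-event |sum_w c_w A_w|^2
    norm = np.mean(np.abs(norm_amps @ coeffs) ** 2)  # MC estimate of the integral
    return -np.sum(np.log(intensity / norm))
```

Because the per-event terms are independent, the event axis maps directly onto the massively parallel single-instruction-multiple-data units the record describes; the NumPy matrix products mimic that data parallelism on the CPU.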

  2. Charm mass corrections to the bottomonium mass spectrum

    International Nuclear Information System (INIS)

    Ebert, D.; Faustov, R. N.; Galkin, V. O.

    2002-01-01

    The one-loop corrections to the bottomonium mass spectrum due to the finite charm mass are evaluated in the framework of the relativistic quark model. The obtained corrections are compared with the results of perturbative QCD

  3. Octanol-Water Partition Coefficient from 3D-RISM-KH Molecular Theory of Solvation with Partial Molar Volume Correction.

    Science.gov (United States)

    Huang, WenJuan; Blinov, Nikolay; Kovalenko, Andriy

    2015-04-30

The octanol-water partition coefficient is an important physical-chemical characteristic widely used to describe hydrophobic/hydrophilic properties of chemical compounds. The partition coefficient is related to the transfer free energy of a compound from water to octanol. Here, we introduce a new protocol for prediction of the partition coefficient based on the statistical-mechanical 3D-RISM-KH molecular theory of solvation. It was shown recently that with the compound-solvent correlation functions obtained from the 3D-RISM-KH molecular theory of solvation, the free energy functional supplemented with the correction linearly related to the partial molar volume obtained from the Kirkwood-Buff/3D-RISM theory, also called the "universal correction" (UC), provides accurate prediction of the hydration free energy of small compounds, compared to explicit solvent molecular dynamics [Palmer, D. S.; J. Phys.: Condens. Matter 2010, 22, 492101]. Here we report that with the UC reparametrized accordingly this theory also provides an excellent agreement with the experimental data for the solvation free energy in nonpolar solvent (1-octanol) and so accurately predicts the octanol-water partition coefficient. The performance of the Kovalenko-Hirata (KH) and Gaussian fluctuation (GF) functionals of the solvation free energy, with and without UC, is tested on a large library of small compounds with diverse functional groups. The best agreement with the experimental data for octanol-water partition coefficients is obtained with the KH-UC solvation free energy functional.
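The link between the solvation free energies and the partition coefficient used here is log₁₀ P_ow = (ΔG_water − ΔG_octanol) / (RT ln 10), since the water-to-octanol transfer free energy is ΔG_octanol − ΔG_water = −RT ln P. A small sketch (units and function name illustrative):

```python
import math

R = 8.314462618e-3  # gas constant, kJ/(mol K)

def log_p_ow(dG_water, dG_octanol, T=298.15):
    """log10 of the octanol-water partition coefficient from the
    solvation free energies (kJ/mol) of the compound in each solvent."""
    return (dG_water - dG_octanol) / (R * T * math.log(10.0))
```

A compound solvated 10 kJ/mol more favourably in octanol than in water (e.g. ΔG_water = -10, ΔG_octanol = -20) comes out at log P ≈ 1.75, i.e. hydrophobic, as expected.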

  4. Approximate thermodynamic state relations in partially ionized gas mixtures

    International Nuclear Information System (INIS)

    Ramshaw, John D.

    2004-01-01

    Thermodynamic state relations for mixtures of partially ionized nonideal gases are often approximated by artificially partitioning the mixture into compartments or subvolumes occupied by the pure partially ionized constituent gases, and requiring these subvolumes to be in temperature and pressure equilibrium. This intuitively reasonable procedure is easily shown to reproduce the correct thermal and caloric state equations for a mixture of neutral (nonionized) ideal gases. The purpose of this paper is to point out that (a) this procedure leads to incorrect state equations for a mixture of partially ionized ideal gases, whereas (b) the alternative procedure of requiring that the subvolumes all have the same temperature and free electron density reproduces the correct thermal and caloric state equations for such a mixture. These results readily generalize to the case of partially degenerate and/or relativistic electrons, to a common approximation used to represent pressure ionization effects, and to two-temperature plasmas. This suggests that equating the subvolume electron number densities or chemical potentials instead of pressures is likely to provide a more accurate approximation in nonideal plasma mixtures
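For the neutral-ideal-gas case the record cites as the sanity check, the equal-temperature, equal-pressure partitioning can be verified directly: assigning each gas a subvolume proportional to its mole number gives every compartment the common mixture pressure of Dalton's law. A sketch (names illustrative):

```python
R = 8.314  # gas constant, J/(mol K)

def equal_tp_subvolumes(moles, V_total, T):
    """Partition V_total among pure ideal-gas subvolumes so that all
    compartments share the same temperature and pressure.
    Returns (subvolumes, common pressure); by construction each
    subvolume satisfies p = n_i R T / V_i."""
    n_total = sum(moles)
    p = n_total * R * T / V_total                 # mixture pressure (Dalton)
    subvols = [n_i * V_total / n_total for n_i in moles]
    return subvols, p
```

The record's point is that this intuitive construction breaks down once the gases are partially ionized, because the shared free electrons make pressure equality the wrong matching condition; equal electron density (or chemical potential) must be imposed instead.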

  5. Resting State fMRI in the moving fetus: a robust framework for motion, bias field and spin history correction.

    Science.gov (United States)

    Ferrazzi, Giulio; Kuklisova Murgasova, Maria; Arichi, Tomoki; Malamateniou, Christina; Fox, Matthew J; Makropoulos, Antonios; Allsop, Joanna; Rutherford, Mary; Malik, Shaihan; Aljabar, Paul; Hajnal, Joseph V

    2014-11-01

    There is growing interest in exploring fetal functional brain development, particularly with Resting State fMRI. However, during a typical fMRI acquisition, the womb moves due to maternal respiration and the fetus may perform large-scale and unpredictable movements. Conventional fMRI processing pipelines, which assume that brain movements are infrequent or at least small, are not suitable. Previous published studies have tackled this problem by adopting conventional methods and discarding as much as 40% or more of the acquired data. In this work, we developed and tested a processing framework for fetal Resting State fMRI, capable of correcting gross motion. The method comprises bias field and spin history corrections in the scanner frame of reference, combined with slice to volume registration and scattered data interpolation to place all data into a consistent anatomical space. The aim is to recover an ordered set of samples suitable for further analysis using standard tools such as Group Independent Component Analysis (Group ICA). We have tested the approach using simulations and in vivo data acquired at 1.5 T. After full motion correction, Group ICA performed on a population of 8 fetuses extracted 20 networks, 6 of which were identified as matching those previously observed in preterm babies. Copyright © 2014 Elsevier Inc. All rights reserved.
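After slice-to-volume registration, the corrected samples no longer lie on a regular grid, hence the scattered-data interpolation step in the pipeline. As an illustration of that step only (not the authors' interpolator), a simple inverse-distance-weighted scheme:

```python
import numpy as np

def idw_interpolate(points, values, targets, power=2.0, eps=1e-12):
    """Inverse-distance-weighted interpolation of scattered samples.
    points:  (n, 3) scattered sample coordinates (e.g. registered slices)
    values:  (n,)   sample intensities
    targets: (m, 3) coordinates of the regular output grid
    Returns (m,) interpolated intensities."""
    # pairwise distances between every target and every sample
    d = np.linalg.norm(targets[:, None, :] - points[None, :, :], axis=2)
    w = 1.0 / (d ** power + eps)        # closer samples weigh more
    return (w @ values) / w.sum(axis=1)
</n```

Being a weighted average, the scheme reproduces a constant field exactly and never overshoots the sample range, which is a reasonable property for recovering an ordered set of samples ahead of Group ICA.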

  6. Reasoning about Strategies under Partial Observability and Fairness Constraints

    Directory of Open Access Journals (Sweden)

    Simon Busard

    2013-03-01

    Full Text Available A number of extensions exist for Alternating-time Temporal Logic; some of these mix strategies and partial observability but, to the best of our knowledge, no work provides a unified framework for strategies, partial observability and fairness constraints. In this paper we propose ATLK^F_po, a logic mixing strategies under partial observability and epistemic properties of agents in a system with fairness constraints on states, and we provide a model checking algorithm for it.

  7. On the electrodynamics of partial discharges in voids in solid dielectrics

    DEFF Research Database (Denmark)

    Pedersen, Aage

    1989-01-01

    It is suggested that a correct interpretation of partial-discharge transients can be obtained only through the concept of induced charge. The application of this concept enables a partial-discharge theory to be developed by means of which the influence of relevant void parameters can be assessed ...

  8. Modeling of ultrasonic processes utilizing a generic software framework

    Science.gov (United States)

    Bruns, P.; Twiefel, J.; Wallaschek, J.

    2017-06-01

Modeling of ultrasonic processes is typically characterized by a high degree of complexity. Different domains and size scales must be considered, so that it is rather difficult to build a single detailed overall model. Developing partial models is a common approach to overcome this difficulty. In this paper a generic but simple software framework is presented which allows arbitrary partial models to be coupled via slave modules with well-defined interfaces and a master module for coordination. Two examples are given to present the developed framework. The first one is the parameterization of a load model for ultrasonically-induced cavitation. The piezoelectric oscillator, its mounting, and the process load are described individually by partial models. These partial models are then coupled using the framework. The load model is composed of spring-damper elements which are parameterized by experimental results. In the second example, the ideal mounting position for an oscillator utilized in ultrasonic assisted machining of stone is determined. Partial models for the ultrasonic oscillator, its mounting, the simplified contact process, and the workpiece’s material characteristics are presented. For both applications input and output variables are defined to meet the requirements of the framework’s interface.
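
The master/slave coupling described above can be sketched as follows. The class names, the toy oscillator/load models, and the fixed-point exchange loop are our illustrative assumptions (in Python, regardless of the authors' implementation language), not the published framework.

```python
# Sketch of a master module coordinating slave modules through a shared
# dictionary of named input/output variables; toy models, not the paper's.
class SlaveModule:
    """Well-defined interface every partial model must implement."""
    def step(self, inputs: dict) -> dict:
        raise NotImplementedError

class Oscillator(SlaveModule):
    def step(self, inputs):
        # toy partial model: amplitude drops as the process load grows
        return {"amplitude": 1.0 / (1.0 + inputs.get("load", 0.0))}

class ProcessLoad(SlaveModule):
    def step(self, inputs):
        # toy partial model: load grows with oscillation amplitude
        return {"load": 0.5 * inputs.get("amplitude", 0.0)}

class Master:
    """Coordinates slaves by exchanging their inputs/outputs until convergence."""
    def __init__(self, slaves):
        self.slaves = slaves

    def run(self, state, iterations=50):
        for _ in range(iterations):
            for slave in self.slaves:
                state.update(slave.step(state))
        return state

state = Master([Oscillator(), ProcessLoad()]).run({"amplitude": 1.0})
# converges to the coupled fixed point a = 1/(1 + 0.5*a), i.e. a = sqrt(3) - 1
```

The point of the pattern is that the master only sees the dictionary interface, so any partial model with matching input/output names can be swapped in.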

  9. The Relationship between Feelings-of-Knowing and Partial Knowledge for General Knowledge Questions.

    Science.gov (United States)

    Norman, Elisabeth; Blakstad, Oskar; Johnsen, Øivind; Martinsen, Stig K; Price, Mark C

    2016-01-01

    Feelings of knowing (FoK) are introspective self-report ratings of the felt likelihood that one will be able to recognize a currently unrecallable memory target. Previous studies have shown that FoKs are influenced by retrieved fragment knowledge related to the target, which is compatible with the accessibility hypothesis that FoK is partly based on currently activated partial knowledge about the memory target. However, previous results have been inconsistent as to whether or not FoKs are influenced by the accuracy of such information. In our study (N = 26), we used a recall-judge-recognize procedure where stimuli were general knowledge questions. The measure of partial knowledge was wider than those applied previously, and FoK was measured before rather than after partial knowledge. The accuracy of reported partial knowledge was positively related to subsequent recognition accuracy, and FoK only predicted recognition on trials where there was correct partial knowledge. Importantly, FoK was positively related to the amount of correct partial knowledge, but did not show a similar incremental relation with incorrect knowledge.

  10. Considerations and code for partial volume correcting [18F]-AV-1451 tau PET data

    Directory of Open Access Journals (Sweden)

    Suzanne L. Baker

    2017-12-01

Full Text Available [18F]-AV-1451 is a leading tracer used with positron emission tomography (PET) to quantify tau pathology. However, [18F]-AV-1451 shows “off target” or non-specific binding, which we define as binding of the tracer in unexpected areas unlikely to harbor aggregated tau based on autopsy literature [1]. Along with caudate, putamen, pallidum and thalamus non-specific binding [2,3], we have found binding in the superior portion of the cerebellar gray matter, leading us to use inferior cerebellar gray as the reference region. We also addressed binding in the posterior portion of the choroid plexus. PET signal unlikely to be associated with tau also occurs in skull, meninges and soft tissue (see e.g. [4]). We refer to [18F]-AV-1451 binding in the skull and meninges as extra-cortical hotspots (ECH) and find them near lateral and medial orbitofrontal, lateral occipital, inferior and middle temporal, superior and inferior parietal, and inferior cerebellar gray matter. Lastly, the choroid plexus also shows non-specific binding that bleeds into hippocampus. We are providing the code (http://www.runmycode.org/companion/view/2798) used to create different regions of interest (ROIs) that we then used to perform Partial Volume Correction (PVC) using the Rousset geometric transfer matrix method (GTM) [5]. This method was used in the companion article, “Comparison of multiple tau-PET measures as biomarkers in aging and Alzheimer's Disease” ([6], DOI 10.1016/j.neuroimage.2017.05.058).
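
The Rousset GTM correction named above reduces to a small linear solve: entry G[i, j] is the fraction of activity from ROI j that, after blurring by the scanner point-spread function, is observed inside ROI i, so the observed regional means are t_obs = G @ t_true. The 2-ROI numbers below are illustrative, not from the paper or its released code.

```python
import numpy as np

# Geometric transfer matrix (illustrative values): rows = observing ROI,
# columns = originating ROI.  Diagonals < 1 encode spill-out, off-diagonals
# encode spill-in (e.g. choroid plexus signal bleeding into hippocampus).
G = np.array([[0.80, 0.15],
              [0.10, 0.85]])
t_obs = np.array([1.15, 1.90])          # measured ROI means (arbitrary units)
t_true = np.linalg.solve(G, t_obs)      # partial-volume-corrected ROI means
```

Correction raises the mean of the hot ROI (spill-out recovered) and lowers the ROI contaminated by spill-in.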

  11. Effects of quenching and partial quenching on QCD penguin matrix elements

    NARCIS (Netherlands)

    Golterman, Maarten; Pallante, Elisabetta

    2002-01-01

    We point out that chiral transformation properties of penguin operators change in the transition from unquenched to (partially) quenched QCD. The way in which this affects the lattice determination of weak matrix elements can be understood in the framework of (partially) quenched chiral perturbation

  12. Intensity of early correction of hyperglycaemia and outcome of critically ill patients with diabetic ketoacidosis.

    Science.gov (United States)

    Mårtensson, Johan; Bailey, Michael; Venkatesh, Balasubramanian; Pilcher, David; Deane, Adam; Abdelhamid, Yasmine Ali; Crisman, Marco; Verma, Brij; MacIsaac, Christopher; Wigmore, Geoffrey; Shehabi, Yahya; Suzuki, Takafumi; French, Craig; Orford, Neil; Kakho, Nima; Prins, Johannes; Ekinci, Elif I; Bellomo, Rinaldo

    2017-09-01

To determine the impact of the intensity of early correction of hyperglycaemia on outcomes in patients with diabetic ketoacidosis (DKA) admitted to the intensive care unit. We studied adult patients with DKA admitted to 171 ICUs in Australia and New Zealand from 2000 to 2013. We used their blood glucose levels (BGLs) in the first 24 hours after ICU admission to determine whether intensive early correction of hyperglycaemia to ≤ 180 mg/dL was independently associated with hypoglycaemia, hypokalaemia, hypo-osmolarity or mortality, compared with partial early correction to > 180 mg/dL as recommended by DKA-specific guidelines. Among 8553 patients, intensive early correction of BGL was applied to 605 patients (7.1%). A greater proportion of these patients experienced hypoglycaemia (20.2% v 9.1%; P < 0.001) and/or hypo-osmolarity (29.4% v 22.0%; P < 0.001), but not hypokalaemia (16.7% v 15.6%; P = 0.47). Overall, 11 patients (1.8%) in the intensive correction group and 112 patients (1.4%) in the partial correction group died (P = 0.42). However, after adjustment for illness severity, partial early correction of BGL was independently associated with a lower risk of hypoglycaemia (odds ratio [OR], 0.38; 95% CI, 0.30-0.48; P < 0.001), lower risk of hypo-osmolarity (OR, 0.80; 95% CI, 0.65-0.98; P < 0.03) and lower risk of death (OR, 0.44; 95% CI, 0.22-0.86; P = 0.02). In a large cohort of patients with DKA, partial early correction of BGL according to DKA-specific guidelines, when compared with intensive early correction of BGL, was independently associated with a lower risk of hypoglycaemia, hypo-osmolarity and death.

  13. Feasibility and performance of novel software to quantify metabolically active volumes and 3D partial volume corrected SUV and metabolic volumetric products of spinal bone marrow metastases on 18F-FDG-PET/CT.

    Science.gov (United States)

    Torigian, Drew A; Lopez, Rosa Fernandez; Alapati, Sridevi; Bodapati, Geetha; Hofheinz, Frank; van den Hoff, Joerg; Saboury, Babak; Alavi, Abass

    2011-01-01

Our aim was to assess feasibility and performance of novel semi-automated image analysis software called ROVER to quantify metabolically active volume (MAV), maximum standardized uptake value-maximum (SUV(max)), 3D partial volume corrected mean SUV (cSUV(mean)), and 3D partial volume corrected mean MVP (cMVP(mean)) of spinal bone marrow metastases on fluorine-18 fluorodeoxyglucose-positron emission tomography/computerized tomography ((18)F-FDG-PET/CT). We retrospectively studied 16 subjects with 31 spinal metastases on FDG-PET/CT and MRI. Manual and ROVER determinations of lesional MAV and SUV(max), and repeated ROVER measurements of MAV, SUV(max), cSUV(mean) and cMVP(mean) were made. Bland-Altman and correlation analyses were performed to assess reproducibility and agreement. Our results showed that analyses of repeated ROVER measurements revealed MAV mean difference (D)=-0.03±0.53cc (95% CI(-0.22, 0.16)), lower limit of agreement (LLOA)=-1.07cc, and upper limit of agreement (ULOA)=1.01cc; SUV(max) D=0.00±0.00 with LOAs=0.00; cSUV(mean) D=-0.01±0.39 (95% CI(-0.15, 0.13)), LLOA=-0.76, and ULOA=0.75; cMVP(mean) D=-0.52±4.78cc (95% CI(-2.23, 1.23)), LLOA=-9.89cc, and ULOA=8.86cc. Comparisons between ROVER and manual measurements revealed volume D=-0.39±1.37cc (95% CI(-0.89, 0.11)), LLOA=-3.08cc, and ULOA=2.30cc; SUV(max) D=0.00±0.00 with LOAs=0.00. Mean percent increase in lesional SUV(mean) and MVP(mean) following partial volume correction using ROVER was 84.25±36.00% and 84.45±35.94%, respectively. In conclusion, it is feasible to estimate MAV, SUV(max), cSUV(mean), and cMVP(mean) of spinal bone marrow metastases from (18)F-FDG-PET/CT quickly and easily with good reproducibility via ROVER software. Partial volume correction is imperative, as uncorrected SUV(mean) and MVP(mean) are significantly underestimated, even for large lesions. This novel approach has great potential for practical, accurate, and precise combined structural-functional PET
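
The D, LLOA and ULOA quantities quoted above are standard Bland-Altman statistics (mean difference and D ± 1.96·SD of the paired differences). A minimal sketch with toy repeated measurements, not the software used in the study:

```python
import numpy as np

def bland_altman(a, b):
    """Mean difference and 95% limits of agreement between paired measurements."""
    d = np.asarray(a) - np.asarray(b)
    mean_d, sd_d = d.mean(), d.std(ddof=1)
    return mean_d, mean_d - 1.96 * sd_d, mean_d + 1.96 * sd_d

# toy repeated MAV measurements (cc)
m1 = np.array([5.2, 3.1, 7.8, 4.4, 6.0])
m2 = np.array([5.0, 3.3, 7.9, 4.1, 6.2])
D, lloa, uloa = bland_altman(m1, m2)
```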

  14. A partial differential equation-based general framework adapted to Rayleigh's, Rician's and Gaussian's distributed noise for restoration and enhancement of magnetic resonance image.

    Science.gov (United States)

    Yadav, Ram Bharos; Srivastava, Subodh; Srivastava, Rajeev

    2016-01-01

    The proposed framework is obtained by casting the noise removal problem into a variational framework. This framework automatically identifies the various types of noise present in the magnetic resonance image and filters them by choosing an appropriate filter. This filter includes two terms: the first term is a data likelihood term and the second term is a prior function. The first term is obtained by minimizing the negative log likelihood of the corresponding probability density functions: Gaussian or Rayleigh or Rician. Further, due to the ill-posedness of the likelihood term, a prior function is needed. This paper examines three partial differential equation based priors which include total variation based prior, anisotropic diffusion based prior, and a complex diffusion (CD) based prior. A regularization parameter is used to balance the trade-off between data fidelity term and prior. The finite difference scheme is used for discretization of the proposed method. The performance analysis and comparative study of the proposed method with other standard methods is presented for brain web dataset at varying noise levels in terms of peak signal-to-noise ratio, mean square error, structure similarity index map, and correlation parameter. From the simulation results, it is observed that the proposed framework with CD based prior is performing better in comparison to other priors in consideration.
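
The structure described above (a data likelihood term balanced against a PDE-based prior, discretized by finite differences) can be sketched in one dimension. The Gaussian fidelity term plus a plain diffusion (Laplacian) prior below is our simplification; the paper's TV, anisotropic-diffusion and complex-diffusion priors would replace the second-difference term.

```python
import numpy as np

def denoise_1d(f, lam=0.2, dt=0.2, iters=200):
    """Explicit finite-difference gradient flow for
    min_u 0.5*||u - f||^2 + lam * smoothness prior, i.e. du/dt = lam*u_xx + (f - u).

    A minimal sketch of the variational setup, assuming Gaussian noise and
    periodic boundaries; not the paper's full adaptive framework.
    """
    u = f.copy()
    for _ in range(iters):
        uxx = np.roll(u, -1) - 2 * u + np.roll(u, 1)   # second difference (prior)
        u = u + dt * (lam * uxx + (f - u))             # prior + data fidelity
    return u

rng = np.random.default_rng(1)
clean = np.sin(np.linspace(0, 2 * np.pi, 64, endpoint=False))
noisy = clean + 0.3 * rng.standard_normal(64)
restored = denoise_1d(noisy)   # high-frequency noise attenuated, signal kept
```

The regularization parameter `lam` plays exactly the trade-off role described in the abstract: larger values weight the prior more heavily against data fidelity.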

  15. Partial scan artifact reduction (PSAR) for the assessment of cardiac perfusion in dynamic phase-correlated CT.

    Science.gov (United States)

    Stenner, Philip; Schmidt, Bernhard; Bruder, Herbert; Allmendinger, Thomas; Haberland, Ulrike; Flohr, Thomas; Kachelriess, Marc

    2009-12-01

    Cardiac CT achieves its high temporal resolution by lowering the scan range from 2pi to pi plus fan angle (partial scan). This, however, introduces CT-value variations, depending on the angular position of the pi range. These partial scan artifacts are of the order of a few HU and prevent the quantitative evaluation of perfusion measurements. The authors present the new algorithm partial scan artifact reduction (PSAR) that corrects a dynamic phase-correlated scan without a priori information. In general, a full scan does not suffer from partial scan artifacts since all projections in [0, 2pi] contribute to the data. To maintain the optimum temporal resolution and the phase correlation, PSAR creates an artificial full scan pn(AF) by projectionwise averaging a set of neighboring partial scans pn(P) from the same perfusion examination (typically N approximately 30 phase-correlated partial scans distributed over 20 s and n = 1, ..., N). Corresponding to the angular range of each partial scan, the authors extract virtual partial scans pn(V) from the artificial full scan pn(AF). A standard reconstruction yields the corresponding images fn(P), fn(AF), and fn(V). Subtracting the virtual partial scan image fn(V) from the artificial full scan image fn(AF) yields an artifact image that can be used to correct the original partial scan image: fn(C) = fn(P) - fn(V) + fn(AF), where fn(C) is the corrected image. The authors evaluated the effects of scattered radiation on the partial scan artifacts using simulated and measured water phantoms and found a strong correlation. The PSAR algorithm has been validated with a simulated semianthropomorphic heart phantom and with measurements of a dynamic biological perfusion phantom. For the stationary phantoms, real full scans have been performed to provide theoretical reference values. 
The improvement in the root mean square errors between the full and the partial scans with respect to the errors between the full and the corrected scans is

  16. Partial scan artifact reduction (PSAR) for the assessment of cardiac perfusion in dynamic phase-correlated CT

    International Nuclear Information System (INIS)

    Stenner, Philip; Schmidt, Bernhard; Bruder, Herbert; Allmendinger, Thomas; Haberland, Ulrike; Flohr, Thomas; Kachelriess, Marc

    2009-01-01

Purpose: Cardiac CT achieves its high temporal resolution by lowering the scan range from 2π to π plus fan angle (partial scan). This, however, introduces CT-value variations, depending on the angular position of the π range. These partial scan artifacts are of the order of a few HU and prevent the quantitative evaluation of perfusion measurements. The authors present the new algorithm partial scan artifact reduction (PSAR) that corrects a dynamic phase-correlated scan without a priori information. Methods: In general, a full scan does not suffer from partial scan artifacts since all projections in [0, 2π] contribute to the data. To maintain the optimum temporal resolution and the phase correlation, PSAR creates an artificial full scan p_n^AF by projectionwise averaging a set of neighboring partial scans p_n^P from the same perfusion examination (typically N≈30 phase-correlated partial scans distributed over 20 s and n=1,...,N). Corresponding to the angular range of each partial scan, the authors extract virtual partial scans p_n^V from the artificial full scan p_n^AF. A standard reconstruction yields the corresponding images f_n^P, f_n^AF, and f_n^V. Subtracting the virtual partial scan image f_n^V from the artificial full scan image f_n^AF yields an artifact image that can be used to correct the original partial scan image: f_n^C = f_n^P - f_n^V + f_n^AF, where f_n^C is the corrected image. Results: The authors evaluated the effects of scattered radiation on the partial scan artifacts using simulated and measured water phantoms and found a strong correlation. The PSAR algorithm has been validated with a simulated semianthropomorphic heart phantom and with measurements of a dynamic biological perfusion phantom. For the stationary phantoms, real full scans have been performed to provide theoretical reference values. The improvement in the root mean square errors between the full and the partial scans with respect to the errors between the full and the
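
The PSAR combination can be checked algebraically on a toy model. Below, each phase-correlated partial scan is reduced to a single "image value" equal to the true dynamic signal plus an artifact that depends only on the angular start position of its π-range; these modeling choices (and the 5-scan averaging window) are ours, for illustration only.

```python
import numpy as np

n = np.arange(30)                            # ~30 partial scans over 20 s
true = 100.0 + 5.0 * n / 29                  # slow contrast-enhancement drift (HU)
artifact = 3.0 * np.sin(2 * np.pi * n / 5)   # few-HU angle-dependent artifact
f_P = true + artifact                        # partial scan images

# artificial full scan: averaging neighboring partial scans covers all
# projection angles, so the angle-dependent artifact averages out
kernel = np.ones(5) / 5
f_AF = np.convolve(f_P, kernel, mode="same")
# virtual partial scan: the artificial full scan re-seen through the same
# pi-range, hence the smoothed signal plus the same angular artifact
f_V = f_AF + artifact

f_C = f_P - f_V + f_AF                       # corrected images: artifact cancels
```

In this toy the correction is exact because the artifact is purely additive and angle-periodic; the paper validates the same combination on reconstructed phantom images.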

  17. Partial-Interval Estimation of Count: Uncorrected and Poisson-Corrected Error Levels

    Science.gov (United States)

    Yoder, Paul J.; Ledford, Jennifer R.; Harbison, Amy L.; Tapp, Jon T.

    2018-01-01

    A simulation study that used 3,000 computer-generated event streams with known behavior rates, interval durations, and session durations was conducted to test whether the main and interaction effects of true rate and interval duration affect the error level of uncorrected and Poisson-transformed (i.e., "corrected") count as estimated by…
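
Partial-interval recording only records whether at least one event occurred in each interval, so the raw count floors at one event per interval. Under a Poisson assumption, P(interval scored) = 1 - exp(-λ), giving λ = -ln(1 - p) events per interval. This is the standard Poisson transform of interval data; whether it matches the simulation study's exact "corrected" estimator is our assumption.

```python
import math

def poisson_corrected_count(scored_intervals, total_intervals):
    """Poisson-corrected count estimate from partial-interval recording."""
    p = scored_intervals / total_intervals
    if p >= 1.0:
        raise ValueError("all intervals scored: rate not recoverable")
    # rate per interval lambda = -ln(1 - p); total count = lambda * N
    return -total_intervals * math.log(1.0 - p)

# 60 intervals, 45 scored: the uncorrected count can never exceed 60,
# while the Poisson correction recovers a higher underlying count
est = poisson_corrected_count(45, 60)
```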

  18. Implementing a Reentry Framework at a Correctional Facility: Challenges to the Culture

    Science.gov (United States)

    Rudes, Danielle S.; Lerch, Jennifer; Taxman, Faye S.

    2011-01-01

    Implementation research is emerging in the field of corrections, but few studies have examined the complexities associated with implementing change among frontline workers embedded in specific organizational cultures. Using a mixed methods approach, the authors examine the challenges faced by correctional workers in a work release correctional…

  19. Ribosomal Stalk Protein Silencing Partially Corrects the ΔF508-CFTR Functional Expression Defect.

    Directory of Open Access Journals (Sweden)

    Guido Veit

    2016-05-01

Full Text Available The most common cystic fibrosis (CF) causing mutation, deletion of phenylalanine 508 (ΔF508 or Phe508del), results in a functional expression defect of the CF transmembrane conductance regulator (CFTR) at the apical plasma membrane (PM) of secretory epithelia, which is attributed to the degradation of the misfolded channel at the endoplasmic reticulum (ER). Deletion of phenylalanine 670 (ΔF670) in the yeast oligomycin resistance 1 gene (YOR1), an ABC transporter of Saccharomyces cerevisiae, phenocopies the ΔF508-CFTR folding and trafficking defects. Genome-wide phenotypic (phenomic) analysis of the Yor1-ΔF670 biogenesis identified several modifier genes of mRNA processing and translation, which conferred oligomycin resistance to yeast. Silencing of orthologues of these candidate genes enhanced the ΔF508-CFTR functional expression at the apical PM in human CF bronchial epithelia. Although knockdown of RPL12, a component of the ribosomal stalk, attenuated the translational elongation rate, it increased the folding efficiency as well as the conformational stability of the ΔF508-CFTR, manifesting in 3-fold augmented PM density and function of the mutant. Combination of RPL12 knockdown with the corrector drug, VX-809 (lumacaftor), restored the mutant function to ~50% of the wild-type channel in primary CFTRΔF508/ΔF508 human bronchial epithelia. These results and the observation that silencing of other ribosomal stalk proteins partially rescues the loss-of-function phenotype of ΔF508-CFTR suggest that the ribosomal stalk modulates the folding efficiency of the mutant and is a potential therapeutic target for correction of the ΔF508-CFTR folding defect.

  20. Robust method for TALEN-edited correction of pF508del in patient-specific induced pluripotent stem cells.

    Science.gov (United States)

    Camarasa, María Vicenta; Gálvez, Víctor Miguel

    2016-02-09

Cystic fibrosis is one of the most frequent inherited rare diseases, caused by mutations in the cystic fibrosis transmembrane conductance regulator gene. Apart from symptomatic treatments, therapeutic protocols for curing the disease have not yet been established. The regeneration of genetically corrected, disease-free epithelia in cystic fibrosis patients is envisioned by designing a stem cell/genetic therapy in which patient-derived pluripotent stem cells are genetically corrected, from which target tissues are derived. In this framework, we present an efficient method for seamless correction of the pF508del mutation in patient-specific induced pluripotent stem cells by gene-edited homologous recombination. Gene editing has been performed by transcription activator-like effector nucleases and a homologous recombination donor vector which contains a PiggyBac transposon-based double selectable marker cassette. This new method has been designed to partially avoid xenobiotics from the culture system, improve cell culture efficiency and genome stability by using a robust culture system method, and optimize timings. Overall, once the pluripotent cells have been amplified for the first nucleofection, the procedure can be completed in 69 days, and can be easily adapted to edit and change any gene of interest.

  1. A general framework and review of scatter correction methods in cone beam CT. Part 2: Scatter estimation approaches

    International Nuclear Information System (INIS)

Ruehrnschopf, Ernst-Peter; Klingenbeck, Klaus

    2011-01-01

The main components of scatter correction procedures are scatter estimation and a scatter compensation algorithm. This paper completes a previous paper where a general framework for scatter compensation was presented under the prerequisite that a scatter estimation method is already available. In the current paper, the authors give a systematic review of the variety of scatter estimation approaches. Scatter estimation methods are based on measurements, mathematical-physical models, or combinations of both. For completeness they present an overview of measurement-based methods, but the main topic is the theoretically more demanding models, such as analytical, Monte Carlo, and hybrid models. Further classifications are 3D image-based and 2D projection-based approaches. The authors present a system-theoretic framework, which allows one to proceed top-down from a general 3D formulation, by successive approximations, to efficient 2D approaches. A widely useful method is the beam-scatter-kernel superposition approach. Together with the review of standard methods, the authors discuss their limitations and how to take into account the issues of object dependency, spatial variance, deformation of scatter kernels, and external and internal absorbers. Open questions for further investigations are indicated. Finally, the authors comment on some special issues and applications, such as bow-tie filter, offset detector, truncated data, and dual-source CT.
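
The beam-scatter-kernel superposition idea mentioned above can be sketched in one dimension: a scatter estimate is formed by convolving a scatter-amplitude term derived from the measured projection with a broad, normalized kernel, and then subtracted. The Gaussian kernel shape and the constant amplitude factor are illustrative assumptions, not a specific published kernel model.

```python
import numpy as np

def scatter_estimate(proj, kernel_width=15, amplitude=0.1):
    """Broad-kernel superposition estimate of the scatter in a 1-D projection."""
    x = np.arange(-3 * kernel_width, 3 * kernel_width + 1)
    kernel = np.exp(-0.5 * (x / kernel_width) ** 2)
    kernel /= kernel.sum()                        # normalized broad kernel
    return amplitude * np.convolve(proj, kernel, mode="same")

primary = np.zeros(200)
primary[80:120] = 1.0                             # idealized object projection
measured = primary + scatter_estimate(primary)    # simulated scatter contamination
corrected = measured - scatter_estimate(measured) # subtractive compensation
```

Because the estimate is applied to the already-contaminated data, a small second-order residual of order amplitude² remains; iterating the subtraction (or a deconvolution formulation) shrinks it further.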

  2. Partially massless graviton on beyond Einstein spacetimes

    Science.gov (United States)

    Bernard, Laura; Deffayet, Cédric; Hinterbichler, Kurt; von Strauss, Mikael

    2017-06-01

    We show that a partially massless graviton can propagate on a large set of spacetimes which are not Einstein spacetimes. Starting from a recently constructed theory for a massive graviton that propagates the correct number of degrees of freedom on an arbitrary spacetime, we first give the full explicit form of the scalar constraint responsible for the absence of a sixth degree of freedom. We then spell out generic conditions for the constraint to be identically satisfied, so that there is a scalar gauge symmetry which makes the graviton partially massless. These simplify if one assumes that spacetime is Ricci symmetric. Under this assumption, we find explicit non-Einstein spacetimes (some, but not all, with vanishing Bach tensors) allowing for the propagation of a partially massless graviton. These include in particular the Einstein static Universe.

  3. Implementation of electroweak corrections in the POWHEG BOX: single W production

    CERN Document Server

    Barzè, L; Nason, P; Nicrosini, O; Piccinini, F

    2012-01-01

We present a fully consistent implementation of electroweak and strong radiative corrections to single W hadroproduction in the POWHEG BOX framework, treating soft and collinear photon emissions on the same footing as coloured parton emissions. This framework can be easily extended to more complex electroweak processes. We describe how next-to-leading order (NLO) electroweak corrections are combined with the NLO QCD calculation, and show how they are interfaced to QCD and QED shower Monte Carlo programs. The resulting tool fills a gap in the literature and makes it possible to study comprehensively the interplay of QCD and electroweak effects in W production using a single computational framework. Numerical comparisons with the predictions of the electroweak generator HORACE, as well as with existing results on the combination of electroweak and QCD corrections to W production, are shown for LHC energies to validate the reliability and accuracy of the approach

  4. Wavelet-based partial volume effect correction for simultaneous MR/PET of the carotid arteries

    International Nuclear Information System (INIS)

    Bini, Jason; Eldib, Mootaz; Robson, Philip M; Fayad, Zahi A

    2014-01-01

Simultaneous MR/PET scanners allow for the exploration and development of novel PVE correction techniques without the challenges of coregistration of MR and PET. The development of a wavelet-based PVE correction method, to improve PET quantification, has proven successful in brain PET [2]. We report here the first attempt to apply these methods to simultaneous MR/PET imaging of the carotid arteries.

  5. QCD Corrections to Heavy Quarkonium Production

    International Nuclear Information System (INIS)

    Artoisenet, P.

    2008-01-01

I discuss J/ψ and Υ production at the Tevatron. Working in the framework of NRQCD, I review the current theoretical status. Motivated by the polarization puzzle at the Tevatron, I present the brand-new computation of higher-order α_s corrections to the color-singlet production and discuss the impact of these corrections both on the differential cross section and on the polarization of the quarkonium state. I finally comment on the relative importance of the various transitions that feed quarkonium hadroproduction

  6. A computational framework for automation of point defect calculations

    International Nuclear Information System (INIS)

    Goyal, Anuj; Gorai, Prashun; Peng, Haowei

    2017-01-01

    We have developed a complete and rigorously validated open-source Python framework to automate point defect calculations using density functional theory. Furthermore, the framework provides an effective and efficient method for defect structure generation, and creation of simple yet customizable workflows to analyze defect calculations. This package provides the capability to compute widely-accepted correction schemes to overcome finite-size effects, including (1) potential alignment, (2) image-charge correction, and (3) band filling correction to shallow defects. Using Si, ZnO and In2O3 as test examples, we demonstrate the package capabilities and validate the methodology.
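
Of the three correction schemes listed, the image-charge correction has, at leading (monopole) order, the Makov-Payne form E_corr = q²·α_M / (2·ε·L) for a cubic supercell. The constant, units, and example values below are our illustration, not the package's defaults or API.

```python
# Leading-order image-charge correction for a charged defect in a cubic
# supercell, evaluated in atomic units and converted to eV.
HARTREE_TO_EV = 27.211386
ALPHA_SC = 2.8373          # Madelung constant of the simple cubic point lattice

def image_charge_correction(q, eps, L_bohr):
    """First-order (monopole) correction in eV for charge q in a cell of edge L (bohr)
    with static dielectric constant eps."""
    e_hartree = (q ** 2) * ALPHA_SC / (2.0 * eps * L_bohr)
    return e_hartree * HARTREE_TO_EV

# e.g. a q = +2 defect in a ZnO-like dielectric (eps ~ 8), 20-bohr cubic cell
corr = image_charge_correction(2, 8.0, 20.0)   # ~1 eV: far from negligible
```

The 1/L scaling is why such finite-size corrections matter: halving the supercell edge doubles the spurious electrostatic energy that must be removed.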

  7. Monitoring as a partially observable decision problem

    Science.gov (United States)

    Paul L. Fackler; Robert G. Haight

    2014-01-01

Monitoring is an important and costly activity in resource management problems such as containing invasive species, protecting endangered species, preventing soil erosion, and regulating contracts for environmental services. Recent studies have viewed optimal monitoring as a Partially Observable Markov Decision Process (POMDP), which provides a framework for...
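
The core of the POMDP view is the belief update: after taking action a and receiving observation o, b'(s') ∝ O(o | s') · Σ_s T(s' | s, a) · b(s). The two-state invasive-species example below (absent/established, imperfect detection) is our illustration of this standard update, not the cited studies' model.

```python
import numpy as np

def belief_update(b, T, O, obs):
    """One POMDP belief update: predict through T, then correct by O[:, obs]."""
    b_pred = T.T @ b                 # predict: state transition
    b_new = O[:, obs] * b_pred       # correct: weight by observation likelihood
    return b_new / b_new.sum()       # renormalize

T = np.array([[0.9, 0.1],            # absent -> absent / established
              [0.0, 1.0]])           # an established invasion persists
O = np.array([[0.95, 0.05],          # P(obs | absent): mostly "not detected"
              [0.30, 0.70]])         # imperfect detection when established
b = np.array([0.8, 0.2])             # prior belief: probably still absent
b_after_detection = belief_update(b, T, O, obs=1)   # a survey detects the species
```

A single imperfect detection shifts most of the belief mass to "established", which is exactly the quantity an optimal monitoring policy conditions on.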

  8. Wavelet-based partial volume effect correction for simultaneous MR/PET of the carotid arteries

    Energy Technology Data Exchange (ETDEWEB)

    Bini, Jason; Eldib, Mootaz [Translational and Molecular Imaging Institute, Icahn School of Medicine at Mount Sinai, NY, NY (United States); Department of Biomedical Engineering, The City College of New York, NY, NY (United States); Robson, Philip M; Fayad, Zahi A [Translational and Molecular Imaging Institute, Icahn School of Medicine at Mount Sinai, NY, NY (United States)

    2014-07-29

    Simultaneous MR/PET scanners allow for the exploration and development of novel PVE correction techniques without the challenges of coregistration of MR and PET. The development of a wavelet-based PVE correction method, to improve PET quantification, has proven successful in brain PET.{sup 2} We report here the first attempt to apply these methods to simultaneous MR/PET imaging of the carotid arteries.

  9. Data-driven discovery of partial differential equations.

    Science.gov (United States)

    Rudy, Samuel H; Brunton, Steven L; Proctor, Joshua L; Kutz, J Nathan

    2017-04-01

    We propose a sparse regression method capable of discovering the governing partial differential equation(s) of a given system by time series measurements in the spatial domain. The regression framework relies on sparsity-promoting techniques to select the nonlinear and partial derivative terms of the governing equations that most accurately represent the data, bypassing a combinatorially large search through all possible candidate models. The method balances model complexity and regression accuracy by selecting a parsimonious model via Pareto analysis. Time series measurements can be made in an Eulerian framework, where the sensors are fixed spatially, or in a Lagrangian framework, where the sensors move with the dynamics. The method is computationally efficient, robust, and demonstrated to work on a variety of canonical problems spanning a number of scientific domains including Navier-Stokes, the quantum harmonic oscillator, and the diffusion equation. Moreover, the method is capable of disambiguating between potentially nonunique dynamical terms by using multiple time series taken with different initial data. Thus, for a traveling wave, the method can distinguish between a linear wave equation and the Korteweg-de Vries equation, for instance. The method provides a promising new technique for discovering governing equations and physical laws in parameterized spatiotemporal systems, where first-principles derivations are intractable.
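
The sparsity-promoting selection over a library of candidate terms can be sketched with sequential thresholded least squares (in the spirit of SINDy-type methods; the paper's own STRidge variant adds regularization). The diffusion-equation demo with analytic derivatives is our toy, not the paper's data.

```python
import numpy as np

def stls(Theta, ut, threshold=0.05, iters=10):
    """Fit ut = Theta @ xi, then iteratively zero and refit small coefficients."""
    xi = np.linalg.lstsq(Theta, ut, rcond=None)[0]
    for _ in range(iters):
        small = np.abs(xi) < threshold
        xi[small] = 0.0
        big = ~small
        if big.any():
            xi[big] = np.linalg.lstsq(Theta[:, big], ut, rcond=None)[0]
    return xi

# data from u(x,t) = e^{-t} sin x + 0.5 e^{-4t} sin 2x, a solution of u_t = u_xx
x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
u = np.sin(x) + 0.5 * np.sin(2 * x)
ux = np.cos(x) + np.cos(2 * x)           # analytic derivatives for a clean demo
uxx = -np.sin(x) - 2 * np.sin(2 * x)
ut = -np.sin(x) - 2 * np.sin(2 * x)      # u_t at t = 0

Theta = np.column_stack([u, ux, uxx, u * ux])   # candidate library
xi = stls(Theta, ut)                            # should select only the u_xx term
```

The recovered coefficient vector keeps a single nonzero entry (the u_xx column, coefficient 1), i.e. the method identifies u_t = u_xx out of the candidate terms.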

  10. Preregularization and the structure of loop-momentum ambiguities within quantum corrections to the supercurrent

    International Nuclear Information System (INIS)

    Chowdhury, A.M.; Elias, V.; McKeon, D.G.C.; Mann, R.B.

    1986-01-01

The anomaly in the supercurrent amplitude S_μν is analyzed to one-loop order for N = 1 supersymmetry in the Wess-Zumino gauge within the framework of preregularization, in which loop-momentum-routing ambiguities percolate into shift-of-integration-variable surface terms peculiar to exactly four space-time dimensions. We find the supercurrent anomaly to be a consequence of the inability of such ambiguities (within a demonstrably finite set of quantum corrections) to absorb violations of gauge invariance (q^ν S_μν ≠ 0) and supersymmetry (∂^μ S_μν ≡ ∂·S ≠ 0) simultaneously, a feature quite similar to the inability of VVA-triangle ambiguities to absorb violations of gauge invariance and the axial-vector-current Ward identity simultaneously. We also find that if gauge invariance is preserved, the contribution to the supercurrent anomaly obtained from O(g^2) quantum corrections to the supercurrent involves no infrared or ultraviolet infinities and resides in ∂·S rather than γ·S. This last result is a consequence of maintaining exactly four space-time dimensions, as is necessary for momentum-routing ambiguities to appear at all in the quantum corrections. The connection between our results and similar results obtained from an Adler-Rosenberg symmetry argument is examined in detail

  11. qPR: An adaptive partial-report procedure based on Bayesian inference.

    Science.gov (United States)

    Baek, Jongsoo; Lesmes, Luis Andres; Lu, Zhong-Lin

    2016-08-01

    Iconic memory is best assessed with the partial report procedure in which an array of letters appears briefly on the screen and a poststimulus cue directs the observer to report the identity of the cued letter(s). Typically, 6-8 cue delays or 600-800 trials are tested to measure the iconic memory decay function. Here we develop a quick partial report, or qPR, procedure based on a Bayesian adaptive framework to estimate the iconic memory decay function with much reduced testing time. The iconic memory decay function is characterized by an exponential function and a joint probability distribution of its three parameters. Starting with a prior of the parameters, the method selects the stimulus to maximize the expected information gain in the next test trial. It then updates the posterior probability distribution of the parameters based on the observer's response using Bayesian inference. The procedure is reiterated until either the total number of trials or the precision of the parameter estimates reaches a certain criterion. Simulation studies showed that only 100 trials were necessary to reach an average absolute bias of 0.026 and a precision of 0.070 (both in terms of probability correct). A psychophysical validation experiment showed that estimates of the iconic memory decay function obtained with 100 qPR trials exhibited good precision (the half width of the 68.2% credible interval = 0.055) and excellent agreement with those obtained with 1,600 trials of the conventional method of constant stimuli procedure (RMSE = 0.063). Quick partial-report relieves the data collection burden in characterizing iconic memory and makes it possible to assess iconic memory in clinical populations.
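
    The Bayesian update at the heart of such an adaptive procedure can be sketched on a parameter grid. Everything below (the grid ranges, the three-parameter exponential form p(t) = p_inf + (p0 - p_inf)·exp(-t/τ), and the toy trial outcomes) is an assumption for illustration, not the authors' implementation.

```python
import numpy as np

# Illustrative parameter grid for the exponential iconic-memory decay model.
p0s = np.linspace(0.6, 1.0, 9)       # initial availability (probability correct)
pinfs = np.linspace(0.1, 0.5, 9)     # asymptote (durable short-term memory)
taus = np.linspace(0.05, 1.0, 12)    # decay time constant (s)
P0, PINF, TAU = np.meshgrid(p0s, pinfs, taus, indexing="ij")
prior = np.ones(P0.shape); prior /= prior.sum()   # uniform prior

def decay(t, p0, pinf, tau):
    return pinf + (p0 - pinf) * np.exp(-t / tau)

def update(prior, t, correct):
    """Posterior over the grid after one trial at cue delay t (Bayes' rule)."""
    like = decay(t, P0, PINF, TAU)
    like = like if correct else 1.0 - like
    post = prior * like
    return post / post.sum()

post = prior
for t, r in [(0.1, True), (0.4, True), (0.8, False)]:   # toy trial outcomes
    post = update(post, t, r)
tau_hat = (post * TAU).sum()   # posterior-mean estimate of the decay constant
```

    A full qPR-style procedure would additionally choose each cue delay to maximize expected information gain before updating.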

  12. Multiobjective optimization framework for landmark measurement error correction in three-dimensional cephalometric tomography.

    Science.gov (United States)

    DeCesare, A; Secanell, M; Lagravère, M O; Carey, J

    2013-01-01

    The purpose of this study is to minimize errors that occur when using a four vs six landmark superimpositioning method in the cranial base to define the co-ordinate system. Cone beam CT volumetric data from ten patients were used for this study. Co-ordinate system transformations were performed. A co-ordinate system was constructed using two planes defined by four anatomical landmarks located by an orthodontist. A second co-ordinate system was constructed using four anatomical landmarks that are corrected using a numerical optimization algorithm for any landmark location operator error using information from six landmarks. The optimization algorithm minimizes the relative distance and angle between the known fixed points in the two images to find the correction. Measurement errors and co-ordinates in all axes were obtained for each co-ordinate system. Significant improvement is observed after using the landmark correction algorithm to position the final co-ordinate system. The errors found in a previous study are significantly reduced. Errors found were between 1 mm and 2 mm. When analysing real patient data, it was found that the 6-point correction algorithm reduced errors between images and increased intrapoint reliability. A novel method of optimizing the overlay of three-dimensional images using a 6-point correction algorithm was introduced and examined. This method demonstrated greater reliability and reproducibility than the previous 4-point correction algorithm.

  13. Partial scan artifact reduction (PSAR) for the assessment of cardiac perfusion in dynamic phase-correlated CT

    Energy Technology Data Exchange (ETDEWEB)

    Stenner, Philip; Schmidt, Bernhard; Bruder, Herbert; Allmendinger, Thomas; Haberland, Ulrike; Flohr, Thomas; Kachelriess, Marc [Institute of Medical Physics, Henkestrasse 91, 91052 Erlangen (Germany); Siemens AG, Healthcare Sector, Siemensstrasse 1, 91301 Forchheim (Germany)]

    2009-12-15

    Purpose: Cardiac CT achieves its high temporal resolution by lowering the scan range from 2π to π plus fan angle (partial scan). This, however, introduces CT-value variations, depending on the angular position of the π range. These partial scan artifacts are of the order of a few HU and prevent the quantitative evaluation of perfusion measurements. The authors present the new algorithm partial scan artifact reduction (PSAR) that corrects a dynamic phase-correlated scan without a priori information. Methods: In general, a full scan does not suffer from partial scan artifacts since all projections in [0, 2π] contribute to the data. To maintain the optimum temporal resolution and the phase correlation, PSAR creates an artificial full scan p_n^AF by projectionwise averaging a set of neighboring partial scans p_n^P from the same perfusion examination (typically N ≈ 30 phase-correlated partial scans distributed over 20 s, with n = 1, ..., N). Corresponding to the angular range of each partial scan, the authors extract virtual partial scans p_n^V from the artificial full scan p_n^AF. A standard reconstruction yields the corresponding images f_n^P, f_n^AF, and f_n^V. Subtracting the virtual partial scan image f_n^V from the artificial full scan image f_n^AF yields an artifact image that can be used to correct the original partial scan image: f_n^C = f_n^P - f_n^V + f_n^AF, where f_n^C is the corrected image. Results: The authors evaluated the effects of scattered radiation on the partial scan artifacts using simulated and measured water phantoms and found a strong correlation. The PSAR algorithm has been validated with a simulated semianthropomorphic heart phantom and with measurements of a dynamic biological perfusion phantom. For the stationary phantoms, real full scans have been performed to provide a theoretical reference
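
    The correction step f_n^C = f_n^P - f_n^V + f_n^AF is simple image arithmetic once the three reconstructions exist. A minimal numeric sketch, with an invented additive artifact model standing in for the real reconstructions:

```python
import numpy as np

rng = np.random.default_rng(0)
f_true = rng.normal(50.0, 5.0, size=(64, 64))                 # underlying CT values (HU)
artifact = 3.0 * np.sin(np.linspace(0, np.pi, 64))[None, :]   # angle-dependent bias, few HU

f_P = f_true + artifact    # partial-scan image: truth plus partial-scan artifact
f_AF = f_true              # artificial full scan: averaging removes the artifact
f_V = f_AF + artifact      # virtual partial scan reproduces the same artifact
f_C = f_P - f_V + f_AF     # PSAR-corrected image
```

    Because the virtual partial scan reproduces the artifact pattern of the real partial scan, the subtraction cancels it exactly in this idealized model.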

  14. Interaction and Self-Correction

    Directory of Open Access Journals (Sweden)

    Glenda Lucila Satne

    2014-07-01

    Full Text Available In this paper I address the question of how to account for the normative dimension involved in conceptual competence in a naturalistic framework. First, I present what I call the Naturalist Challenge (NC), referring to both the phylogenetic and ontogenetic dimensions of conceptual possession and acquisition. I then criticize two models that have been dominant in thinking about conceptual competence, the interpretationist and the causalist models. Both fail to meet the NC by failing to account for the abilities involved in conceptual self-correction. I then offer an alternative account of self-correction that I develop with the help of the interactionist theory of mutual understanding arising from recent developments in Phenomenology and Developmental Psychology.

  15. Logical Specification and Analysis of Fault Tolerant Systems through Partial Model Checking

    NARCIS (Netherlands)

    Gnesi, S.; Etalle, Sandro; Mukhopadhyay, S.; Lenzini, Gabriele; Lenzini, G.; Martinelli, F.; Roychoudhury, A.

    2003-01-01

    This paper presents a framework for a logical characterisation of fault tolerance and its formal analysis based on partial model checking techniques. The framework requires a fault tolerant system to be modelled using a formal calculus, here the CCS process algebra. To this aim we propose a uniform

  16. Visual and semiquantitative analysis of 18F-fluorodeoxyglucose positron emission tomography using a partial-ring tomograph without attenuation correction to differentiate benign and malignant pulmonary nodules

    International Nuclear Information System (INIS)

    Skehan, S.J.; Coates, G.; Otero, C.; O'Donovan, N.; Pelling, M.; Nahmias, C.

    2001-01-01

    Many studies have reported the use of attenuation-corrected positron emission tomography with 18F-fluorodeoxyglucose (FDG PET) with full-ring tomographs to differentiate between benign and malignant pulmonary nodules. We sought to evaluate FDG PET using a partial-ring tomograph without attenuation correction. A retrospective review of PET images from 77 patients (range 38-84 years of age) with proven benign or malignant pulmonary nodules was undertaken. All images were obtained using a Siemens/CTI ECAT ART tomograph, without attenuation correction, after 185 MBq of 18F-FDG was injected. Images were visually graded on a 5-point scale from 'definitely malignant' to 'definitely benign,' and lesion-to-background (LB) ratios were calculated using region of interest analysis. Visual and semiquantitative analyses were compared using receiver operating characteristic (ROC) analysis. Twenty lesions were benign and 57 were malignant. The mean LB ratio for benign lesions was 1.5 (range 1.0-5.7) and for malignant lesions 5.7 (range 1.2-14.1) (p < 0.001). The area under the ROC curve for LB ratio analysis was 0.95, and for visual analysis 0.91 (p = 0.39). The optimal cut-off ratio with LB ratio analysis was 1.8, giving a sensitivity of 95% and a specificity of 85%. For lesions thought to be 'definitely malignant' on visual analysis, the sensitivity was 93% and the specificity 85%. Three proven infective lesions were rated as malignant by both techniques (LB ratio 2.6-5.7). FDG PET without attenuation correction is accurate for differentiating between benign and malignant lung nodules. Results using simple LB ratios without attenuation correction compare favourably with the published sensitivity and specificity for standard uptake ratios. Visual analysis is equally accurate. (author)
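
    Dichotomizing lesions at an LB-ratio cutoff and reading off sensitivity and specificity, as in the ROC analysis above, can be sketched as follows. The ratios and labels are invented toy values (including one false-positive infective lesion), not the study data.

```python
# Classify lesions as malignant when the lesion-to-background ratio meets the cutoff.
def sens_spec(lb_ratios, malignant, cutoff=1.8):
    tp = sum(r >= cutoff and m for r, m in zip(lb_ratios, malignant))
    fn = sum(r < cutoff and m for r, m in zip(lb_ratios, malignant))
    tn = sum(r < cutoff and not m for r, m in zip(lb_ratios, malignant))
    fp = sum(r >= cutoff and not m for r, m in zip(lb_ratios, malignant))
    return tp / (tp + fn), tn / (tn + fp)

ratios = [1.1, 1.4, 2.6, 5.7, 9.3, 1.6, 2.0, 1.2]           # toy LB ratios
labels = [False, False, False, True, True, False, True, False]  # True = malignant
sensitivity, specificity = sens_spec(ratios, labels)
# the benign lesion at 2.6 (e.g. infective) is the lone false positive here
```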

  17. A single-chain fusion molecule consisting of peptide, major histocompatibility gene complex class I heavy chain and beta2-microglobulin can fold partially correctly, but binds peptide inefficiently

    DEFF Research Database (Denmark)

    Sylvester-Hvid, C; Buus, S

    1999-01-01

    of a recombinant murine MHC-I molecule, which could be produced in large amounts in bacteria. The recombinant MHC-I protein was expressed as a single molecule (PepSc) consisting of the antigenic peptide linked to the MHC-I heavy chain and further linked to human beta2-microglobulin (hbeta2m). The PepSc molecule...... electrophoresis (SDS-PAGE). Serological analysis revealed the presence of some, but not all, MHC-I-specific epitopes. Biochemically, PepSc could bind peptide, however, rather ineffectively. We suggest that a partially correctly refolded MHC-I has been obtained....

  18. arXiv Minimal Fundamental Partial Compositeness

    CERN Document Server

    Cacciapaglia, Giacomo; Sannino, Francesco; Thomsen, Anders Eller

    Building upon the fundamental partial compositeness framework we provide consistent and complete composite extensions of the standard model. These are used to determine the effective operators emerging at the electroweak scale in terms of the standard model fields upon consistently integrating out the heavy composite dynamics. We exhibit the first effective field theories matching these complete composite theories of flavour and analyse their physical consequences for the third generation quarks. Relations with other approaches, ranging from effective analyses for partial compositeness to extra dimensions as well as purely fermionic extensions, are briefly discussed. Our methodology is applicable to any composite theory of dynamical electroweak symmetry breaking featuring a complete theory of flavour.

  19. Hardware-efficient bosonic quantum error-correcting codes based on symmetry operators

    Science.gov (United States)

    Niu, Murphy Yuezhen; Chuang, Isaac L.; Shapiro, Jeffrey H.

    2018-03-01

    We establish a symmetry-operator framework for designing quantum error-correcting (QEC) codes based on fundamental properties of the underlying system dynamics. Based on this framework, we propose three hardware-efficient bosonic QEC codes that are suitable for χ(2)-interaction-based quantum computation in multimode Fock bases: the χ(2) parity-check code, the χ(2) embedded error-correcting code, and the χ(2) binomial code. All of these QEC codes detect photon-loss or photon-gain errors by means of photon-number parity measurements, and then correct them via χ(2) Hamiltonian evolutions and linear-optics transformations. Our symmetry-operator framework provides a systematic procedure for finding QEC codes that are not stabilizer codes, and it enables convenient extension of a given encoding to higher-dimensional qudit bases. The χ(2) binomial code is of special interest because, with m ≤ N identified from channel monitoring, it can correct m-photon-loss errors, or m-photon-gain errors, or (m-1)th-order dephasing errors using logical qudits that are encoded in O(N) photons. In comparison, other bosonic QEC codes require O(N²) photons to correct the same degree of bosonic errors. Such improved photon efficiency underscores the additional error-correction power that can be provided by channel monitoring. We develop quantum Hamming bounds for photon-loss errors in the code subspaces associated with the χ(2) parity-check code and the χ(2) embedded error-correcting code, and we prove that these codes saturate their respective bounds. Our χ(2) QEC codes exhibit hardware efficiency in that they address the principal error mechanisms and exploit the available physical interactions of the underlying hardware, thus reducing the physical resources required for implementing their encoding, decoding, and error-correction operations, and their universal encoded-basis gate sets.

  20. Motion correction in thoracic positron emission tomography

    CERN Document Server

    Gigengack, Fabian; Dawood, Mohammad; Schäfers, Klaus P

    2015-01-01

    Respiratory and cardiac motion leads to image degradation in Positron Emission Tomography (PET), which impairs quantification. In this book, the authors present approaches to motion estimation and motion correction in thoracic PET. The approaches for motion estimation are based on dual gating and mass-preserving image registration (VAMPIRE) and mass-preserving optical flow (MPOF). With mass-preservation, image intensity modulations caused by highly non-rigid cardiac motion are accounted for. Within the image registration framework different data terms, different variants of regularization and parametric and non-parametric motion models are examined. Within the optical flow framework, different data terms and further non-quadratic penalization are also discussed. The approaches for motion correction particularly focus on pipelines in dual gated PET. A quantitative evaluation of the proposed approaches is performed on software phantom data with accompanied ground-truth motion information. Further, clinical appl...

  1. On the hierarchy of partially invariant submodels of differential equations

    Energy Technology Data Exchange (ETDEWEB)

    Golovin, Sergey V [Lavrentyev Institute of Hydrodynamics SB RAS, Novosibirsk 630090 (Russian Federation)], E-mail: sergey@hydro.nsc.ru

    2008-07-04

    It is noted that the partially invariant solution (PIS) of differential equations in many cases can be represented as an invariant reduction of some PISs of the higher rank. This introduces a hierarchic structure in the set of all PISs of a given system of differential equations. An equivalence of the two-step and the direct ways of construction of PISs is proved. The hierarchy simplifies the process of enumeration and analysis of partially invariant submodels to the given system of differential equations. In this framework, the complete classification of regular partially invariant solutions of ideal MHD equations is given.

  2. On the hierarchy of partially invariant submodels of differential equations

    Science.gov (United States)

    Golovin, Sergey V.

    2008-07-01

    It is noted that the partially invariant solution (PIS) of differential equations in many cases can be represented as an invariant reduction of some PISs of the higher rank. This introduces a hierarchic structure in the set of all PISs of a given system of differential equations. An equivalence of the two-step and the direct ways of construction of PISs is proved. The hierarchy simplifies the process of enumeration and analysis of partially invariant submodels to the given system of differential equations. In this framework, the complete classification of regular partially invariant solutions of ideal MHD equations is given.

  3. On the hierarchy of partially invariant submodels of differential equations

    International Nuclear Information System (INIS)

    Golovin, Sergey V

    2008-01-01

    It is noted that the partially invariant solution (PIS) of differential equations in many cases can be represented as an invariant reduction of some PISs of the higher rank. This introduces a hierarchic structure in the set of all PISs of a given system of differential equations. An equivalence of the two-step and the direct ways of construction of PISs is proved. The hierarchy simplifies the process of enumeration and analysis of partially invariant submodels to the given system of differential equations. In this framework, the complete classification of regular partially invariant solutions of ideal MHD equations is given

  4. Partial transpose of two disjoint blocks in XY spin chains

    International Nuclear Information System (INIS)

    Coser, Andrea; Tonni, Erik; Calabrese, Pasquale

    2015-01-01

    We consider the partial transpose of the spin reduced density matrix of two disjoint blocks in spin chains admitting a representation in terms of free fermions, such as XY chains. We exploit the solution of the model in terms of Majorana fermions and show that such partial transpose in the spin variables is a linear combination of four Gaussian fermionic operators. This representation allows to explicitly construct and evaluate the integer moments of the partial transpose. We numerically study critical XX and Ising chains and we show that the asymptotic results for large blocks agree with conformal field theory predictions if corrections to the scaling are properly taken into account. (paper)

  5. Correlation of psychomotor findings and the ability to partially weight bear

    Science.gov (United States)

    2012-01-01

    Background Partial weight bearing is thought to avoid excessive loading that may interfere with the healing process after surgery of the pelvis or the lower extremity. The object of this study was to investigate the relationship between the ability to partially weight bear and the patient's psychomotor skills, and to evaluate whether this ability can be predicted with a standardized psychomotor test. Methods 50 patients with a prescribed partial weight bearing at a target load of 15 kg following surgery were verbally instructed by a physical therapist. After the instruction and sufficient training with the physical therapist, vertical ground reaction forces were measured with matrix insoles while walking with forearm crutches. Additionally, psychomotor skills were tested with the Motorische Leistungsserie (MLS). To test for correlations, Spearman's rank correlation was used. For further comparison of the two groups a Mann-Whitney test was performed using Bonferroni correction. Results The patient's age and body weight significantly correlated with the ability to partially weight bear at a 15 kg target load. There were significant correlations between several subtests of the MLS and ground reaction forces measured while walking with crutches. Patients who were able to correctly perform partial weight bearing showed significantly better psychomotor skills, especially for those subtests where both hands had to be coordinated simultaneously. Conclusions The ability to partially weight bear is associated with psychomotor skills. The MLS seems to be a tool that helps predict the ability to keep within the prescribed load limits. PMID:22330655

  6. Correlation of psychomotor findings and the ability to partially weight bear

    Directory of Open Access Journals (Sweden)

    Ruckstuhl Thomas

    2012-02-01

    Full Text Available Abstract Background Partial weight bearing is thought to avoid excessive loading that may interfere with the healing process after surgery of the pelvis or the lower extremity. The object of this study was to investigate the relationship between the ability to partially weight bear and the patient's psychomotor skills, and to evaluate whether this ability can be predicted with a standardized psychomotor test. Methods 50 patients with a prescribed partial weight bearing at a target load of 15 kg following surgery were verbally instructed by a physical therapist. After the instruction and sufficient training with the physical therapist, vertical ground reaction forces were measured with matrix insoles while walking with forearm crutches. Additionally, psychomotor skills were tested with the Motorische Leistungsserie (MLS). To test for correlations, Spearman's rank correlation was used. For further comparison of the two groups a Mann-Whitney test was performed using Bonferroni correction. Results The patient's age and body weight significantly correlated with the ability to partially weight bear at a 15 kg target load. There were significant correlations between several subtests of the MLS and ground reaction forces measured while walking with crutches. Patients who were able to correctly perform partial weight bearing showed significantly better psychomotor skills, especially for those subtests where both hands had to be coordinated simultaneously. Conclusions The ability to partially weight bear is associated with psychomotor skills. The MLS seems to be a tool that helps predict the ability to keep within the prescribed load limits.
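
    The statistical recipe described (Spearman's rank correlation with a Bonferroni-corrected significance level) can be sketched with toy data. The variable names and values below are illustrative assumptions, and the rank-correlation helper assumes untied values.

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman's rank correlation (no tie handling; fine for distinct values)."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean(); ry -= ry.mean()
    return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))

alpha = 0.05
n_tests = 10                   # e.g. number of MLS subtests compared
alpha_bonf = alpha / n_tests   # Bonferroni-corrected significance threshold

# Toy data: age vs measured load at a 15 kg target (not the study's numbers).
age = np.array([25, 40, 33, 61, 52, 47])
load = np.array([14.0, 16.5, 15.2, 22.0, 19.8, 18.1])
rho = spearman_rho(age, load)
```

    Each subtest's p-value would then be compared against alpha_bonf rather than alpha, which is what the Bonferroni correction amounts to.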

  7. Practical aspects of data-driven motion correction approach for brain SPECT

    International Nuclear Information System (INIS)

    Kyme, A.Z.; Hutton, B.F.; Hatton, R.L.; Skerrett, D.; Barnden, L.

    2002-01-01

    Full text: Patient motion can cause image artifacts in SPECT despite restraining measures. Data-driven detection and correction of motion can be achieved by comparison of acquired data with the forward-projections. By optimising the orientation of a partial reconstruction, parameters can be obtained for each misaligned projection and applied to update this volume using a 3D reconstruction algorithm. Phantom validation was performed to explore practical aspects of this approach. Noisy projection datasets simulating a patient undergoing at least one fully 3D movement during acquisition were compiled from various projections of the digital Hoffman brain phantom. Motion correction was then applied to the reconstructed studies. Correction success was assessed visually and quantitatively. Resilience with respect to subset order and missing data in the reconstruction and updating stages, detector geometry considerations, and the need for implementing an iterated correction were assessed in the process. Effective correction of the corrupted studies was achieved. Visually, artifactual regions in the reconstructed slices were suppressed and/or removed. Typically, the mean square difference between the corrupted and reference studies was more than twice that between the corrected and reference studies. Although components of the motions are missed using a single-head implementation, improvement was still evident in the correction. The need for multiple iterations in the approach was small, as the bulk of misalignment errors were corrected in the first pass. Dispersion of subsets for reconstructing and updating the partial reconstruction appears to give optimal correction. Further validation is underway using triple-head physical phantom data. Copyright (2002) The Australian and New Zealand Society of Nuclear Medicine Inc

  8. EPU correction scheme study at the CLS

    Energy Technology Data Exchange (ETDEWEB)

    Bertwistle, Drew, E-mail: drew.bertwistle@lightsource.ca; Baribeau, C.; Dallin, L.; Chen, S.; Vogt, J.; Wurtz, W. [Canadian Light Source Inc. 44 Innovation Boulevard, Saskatoon, SK S7N 2V3 (Canada)

    2016-07-27

    The Canadian Light Source (CLS) Quantum Materials Spectroscopy Center (QMSC) beamline will employ a novel double period (55 mm, 180 mm) elliptically polarizing undulator (EPU) to produce photons of arbitrary polarization in the soft X-ray regime. The long period and high field of the 180 mm period EPU will have a strong dynamic focusing effect on the storage ring electron beam. We have considered two partial correction schemes, a 4 m long planar array of BESSY-II style current strips, and soft iron L-shims. In this paper we briefly consider the implementation of these correction schemes.

  9. A generalized framework unifying image registration and respiratory motion models and incorporating image reconstruction, for partial image data or full images

    Science.gov (United States)

    McClelland, Jamie R.; Modat, Marc; Arridge, Simon; Grimes, Helen; D'Souza, Derek; Thomas, David; O' Connell, Dylan; Low, Daniel A.; Kaza, Evangelia; Collins, David J.; Leach, Martin O.; Hawkes, David J.

    2017-06-01

    Surrogate-driven respiratory motion models relate the motion of the internal anatomy to easily acquired respiratory surrogate signals, such as the motion of the skin surface. They are usually built by first using image registration to determine the motion from a number of dynamic images, and then fitting a correspondence model relating the motion to the surrogate signals. In this paper we present a generalized framework that unifies the image registration and correspondence model fitting into a single optimization. This allows the use of ‘partial’ imaging data, such as individual slices, projections, or k-space data, where it would not be possible to determine the motion from an individual frame of data. Motion compensated image reconstruction can also be incorporated using an iterative approach, so that both the motion and a motion-free image can be estimated from the partial image data. The framework has been applied to real 4DCT, Cine CT, multi-slice CT, and multi-slice MR data, as well as simulated datasets from a computer phantom. This includes the use of a super-resolution reconstruction method for the multi-slice MR data. Good results were obtained for all datasets, including quantitative results for the 4DCT and phantom datasets where the ground truth motion was known or could be estimated.
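
    A common concrete instance of a surrogate-driven correspondence model is a linear fit of motion parameters to the surrogate signal and its temporal gradient (the gradient separates inhalation from exhalation). The sketch below assumes that linear form purely for illustration; it is not the paper's unified registration-plus-model optimization.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 40
# Surrogate signal, e.g. skin-surface height over 40 frames.
s = np.sin(np.linspace(0, 6 * np.pi, T))
ds = np.gradient(s)   # temporal gradient distinguishes inhale from exhale

# "Measured" per-frame motion parameter (would come from image registration),
# generated here from an assumed linear model plus a little noise.
m = 2.0 * s + 0.5 * ds + 0.01 * rng.normal(size=T)

# Fit the correspondence model m ~= a*s + b*ds + c by least squares.
S = np.column_stack([s, ds, np.ones(T)])
coef, *_ = np.linalg.lstsq(S, m, rcond=None)
```

    In the unified framework described above, this model fit and the registration itself would be optimized jointly rather than sequentially, which is what allows partial image data to be used.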

  10. High-fidelity artifact correction for cone-beam CT imaging of the brain

    Science.gov (United States)

    Sisniega, A.; Zbijewski, W.; Xu, J.; Dang, H.; Stayman, J. W.; Yorkston, J.; Aygun, N.; Koliatsos, V.; Siewerdsen, J. H.

    2015-02-01

    CT is the frontline imaging modality for diagnosis of acute traumatic brain injury (TBI), involving the detection of fresh blood in the brain (contrast of 30-50 HU, detail size down to 1 mm) in a non-contrast-enhanced exam. A dedicated point-of-care imaging system based on cone-beam CT (CBCT) could benefit early detection of TBI and improve direction to appropriate therapy. However, flat-panel detector (FPD) CBCT is challenged by artifacts that degrade contrast resolution and limit application in soft-tissue imaging. We present and evaluate a fairly comprehensive framework for artifact correction to enable soft-tissue brain imaging with FPD CBCT. The framework includes a fast Monte Carlo (MC)-based scatter estimation method complemented by corrections for detector lag, veiling glare, and beam hardening. The fast MC scatter estimation combines GPU acceleration, variance reduction, and simulation with a low number of photon histories and reduced number of projection angles (sparse MC) augmented by kernel de-noising to yield a runtime of ~4 min per scan. Scatter correction is combined with two-pass beam hardening correction. Detector lag correction is based on temporal deconvolution of the measured lag response function. The effects of detector veiling glare are reduced by deconvolution of the glare response function representing the long range tails of the detector point-spread function. The performance of the correction framework is quantified in experiments using a realistic head phantom on a testbench for FPD CBCT. Uncorrected reconstructions were non-diagnostic for soft-tissue imaging tasks in the brain. After processing with the artifact correction framework, image uniformity was substantially improved, and artifacts were reduced to a level that enabled visualization of ~3 mm simulated bleeds throughout the brain. Non-uniformity (cupping) was reduced by a factor of 5, and contrast of simulated bleeds was improved from ~7 to 49.7 HU, in good agreement

  11. NLO corrections to the pair production of supersymmetric particles

    International Nuclear Information System (INIS)

    Obikhod, T.V.; Verbytskyy, A.A.

    2014-01-01

    The analysis of recent experimental data received from the LHC (CMS) restricts the range of MSSM parameters. Using the computer programs SOFTSUSY and SDECAY, the mass spectrum and partial widths of the superpartners are calculated. With the computer program PROSPINO, the next-to-leading-order (NLO) corrections to the production cross sections of the superpartners are calculated. With the computer program PYTHIA, the NLO corrections to the differential distributions in p_T and η for squarks and gluinos are presented.

  12. Metabolic impact of partial volume correction of [18F]FDG PET-CT oncological studies on the assessment of tumor response to treatment.

    Science.gov (United States)

    Stefano, A; Gallivanone, F; Messa, C; Gilardi, M C; Gastiglioni, I

    2014-12-01

    The aim of this work is to evaluate the metabolic impact of Partial Volume Correction (PVC) on the measurement of the Standard Uptake Value (SUV) from [18F]FDG PET-CT oncological studies for treatment monitoring purposes. Twenty-nine breast cancer patients with bone lesions (42 lesions in total) underwent [18F]FDG PET-CT studies after surgical resection of the primary breast tumour, before (PET-I) and after (PET-II) chemotherapy and hormone treatment. PVC of bone lesion uptake was performed on the two [18F]FDG PET-CT studies, using a method based on Recovery Coefficients (RC) and on an automatic measurement of lesion metabolic volume. Body-weight average SUV was calculated for each lesion, with and without PVC. The accuracy, reproducibility, clinical feasibility and metabolic impact on treatment response of the considered PVC method were evaluated. The PVC method was found clinically feasible in bone lesions, with an accuracy of 93% for lesions of sphere-equivalent diameter >1 cm. Applying PVC, average SUV values increased by 7% up to 154% across the PET-I and PET-II studies, demonstrating the need for the correction. As the main finding, PVC modified the therapy response classification in 6 cases according to the EORTC 1999 classification and in 5 cases according to the PERCIST 1.0 classification. PVC has an important metabolic impact on the assessment of tumor response to treatment in [18F]FDG PET-CT oncological studies.
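
    Recovery-coefficient PVC divides the measured uptake by a size-dependent recovery coefficient, SUV_corrected = SUV_measured / RC. The RC curve below is an invented monotone placeholder, not the calibrated coefficients used in the study.

```python
def recovery_coefficient(diameter_cm):
    """Toy RC curve: smaller lesions lose more signal to partial volume effects.
    (Illustrative placeholder; real RCs come from phantom calibration.)"""
    return min(1.0, 0.3 + 0.35 * diameter_cm)

def pvc_suv(suv_measured, diameter_cm):
    """Partial-volume-corrected SUV via the recovery-coefficient method."""
    return suv_measured / recovery_coefficient(diameter_cm)
```

    Small lesions are corrected upward substantially, while lesions large enough for full recovery (RC = 1) are left unchanged, matching the observed pattern that PVC inflates SUV most for sub-centimetre volumes.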

  13. "Treating adult survivors of childhood emotional abuse and neglect: A new framework": Correction to Grossman et al. (2017).

    Science.gov (United States)

    2017-01-01

    Reports an error in "Treating adult survivors of childhood emotional abuse and neglect: A new framework" by Frances K. Grossman, Joseph Spinazzola, Marla Zucker and Elizabeth Hopper ( American Journal of Orthopsychiatry , 2017, Vol 87[1], 86-93). In the article, in the second sentence of the third paragraph of the "The Empirical Base for CBP" section, "construction of a life narrative" should have read "construction of a trauma narrative." The full corrected sentence follows: "Therefore, the trauma treatment component traditionally focused upon construction of a trauma narrative must be expanded to address the effects of trauma on our clients' entire life narratives, including their development of a sense of self and social identity." (The following abstract of the original article appeared in record 2017-01147-002.) This article provides the outline of a new framework for treating adult survivors of childhood emotional abuse and neglect. Component-based psychotherapy (CBP) is an evidence-informed model that bridges, synthesizes, and expands upon several existing schools, or theories, of treatment for adult survivors of traumatic stress. These include approaches to therapy that stem from more classic traditions in psychology, such as psychoanalysis, to more modern approaches including those informed by feminist thought. Moreover, CBP places particular emphasis on integration of key concepts from evidence-based treatment models developed in the past few decades predicated upon thinking and research on the effects of traumatic stress and processes of recovery for survivors. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  14. Relativistic and the first sectorial harmonics corrections in the critical inclination

    Science.gov (United States)

    Rahoma, W. A.; Khattab, E. H.; Abd El-Salam, F. A.

    2014-05-01

    The problem of the critical inclination is treated in the Hamiltonian framework taking into consideration post-Newtonian corrections as well as the main correction term of sectorial harmonics for an earth-like planet. The Hamiltonian is expressed in terms of Delaunay canonical variables. A canonical transformation is applied to eliminate short period terms. A modified critical inclination is obtained due to relativistic and the first sectorial harmonics corrections.

  15. Experimental and Monte Carlo studies of fluence corrections for graphite calorimetry in low- and high-energy clinical proton beams

    International Nuclear Information System (INIS)

    Lourenço, Ana; Thomas, Russell; Bouchard, Hugo; Kacperek, Andrzej; Vondracek, Vladimir; Royle, Gary; Palmans, Hugo

    2016-01-01

    Purpose: The aim of this study was to determine fluence corrections necessary to convert absorbed dose to graphite, measured by graphite calorimetry, to absorbed dose to water. Fluence corrections were obtained from experiments and Monte Carlo simulations in low- and high-energy proton beams. Methods: Fluence corrections were calculated to account for the difference in fluence between water and graphite at equivalent depths. Measurements were performed with narrow proton beams. Plane-parallel-plate ionization chambers with a large collecting area compared to the beam diameter were used to intercept the whole beam. High- and low-energy proton beams were provided by a scanning and double scattering delivery system, respectively. A mathematical formalism was established to relate fluence corrections derived from Monte Carlo simulations, using the FLUKA code [A. Ferrari et al., “FLUKA: A multi-particle transport code,” in CERN 2005-10, INFN/TC 05/11, SLAC-R-773 (2005) and T. T. Böhlen et al., “The FLUKA Code: Developments and challenges for high energy and medical applications,” Nucl. Data Sheets 120, 211–214 (2014)], to partial fluence corrections measured experimentally. Results: A good agreement was found between the partial fluence corrections derived by Monte Carlo simulations and those determined experimentally. For a high-energy beam of 180 MeV, the fluence corrections from Monte Carlo simulations were found to increase from 0.99 to 1.04 with depth. In the case of a low-energy beam of 60 MeV, the magnitude of fluence corrections was approximately 0.99 at all depths when calculated in the sensitive area of the chamber used in the experiments. Fluence correction calculations were also performed for a larger area and found to increase from 0.99 at the surface to 1.01 at greater depths. Conclusions: Fluence corrections obtained experimentally are partial fluence corrections because they account for differences in the primary and part of the secondary

  16. Experimental and Monte Carlo studies of fluence corrections for graphite calorimetry in low- and high-energy clinical proton beams

    Energy Technology Data Exchange (ETDEWEB)

    Lourenço, Ana, E-mail: am.lourenco@ucl.ac.uk [Department of Medical Physics and Biomedical Engineering, University College London, London WC1E 6BT, United Kingdom and Division of Acoustics and Ionising Radiation, National Physical Laboratory, Teddington TW11 0LW (United Kingdom); Thomas, Russell; Bouchard, Hugo [Division of Acoustics and Ionising Radiation, National Physical Laboratory, Teddington TW11 0LW (United Kingdom); Kacperek, Andrzej [National Eye Proton Therapy Centre, Clatterbridge Cancer Centre, Wirral CH63 4JY (United Kingdom); Vondracek, Vladimir [Proton Therapy Center, Budinova 1a, Prague 8 CZ-180 00 (Czech Republic); Royle, Gary [Department of Medical Physics and Biomedical Engineering, University College London, London WC1E 6BT (United Kingdom); Palmans, Hugo [Division of Acoustics and Ionising Radiation, National Physical Laboratory, Teddington TW11 0LW, United Kingdom and Medical Physics Group, EBG MedAustron GmbH, A-2700 Wiener Neustadt (Austria)

    2016-07-15

    Purpose: The aim of this study was to determine fluence corrections necessary to convert absorbed dose to graphite, measured by graphite calorimetry, to absorbed dose to water. Fluence corrections were obtained from experiments and Monte Carlo simulations in low- and high-energy proton beams. Methods: Fluence corrections were calculated to account for the difference in fluence between water and graphite at equivalent depths. Measurements were performed with narrow proton beams. Plane-parallel-plate ionization chambers with a large collecting area compared to the beam diameter were used to intercept the whole beam. High- and low-energy proton beams were provided by a scanning and double scattering delivery system, respectively. A mathematical formalism was established to relate fluence corrections derived from Monte Carlo simulations, using the FLUKA code [A. Ferrari et al., “FLUKA: A multi-particle transport code,” in CERN 2005-10, INFN/TC 05/11, SLAC-R-773 (2005) and T. T. Böhlen et al., “The FLUKA Code: Developments and challenges for high energy and medical applications,” Nucl. Data Sheets 120, 211–214 (2014)], to partial fluence corrections measured experimentally. Results: A good agreement was found between the partial fluence corrections derived by Monte Carlo simulations and those determined experimentally. For a high-energy beam of 180 MeV, the fluence corrections from Monte Carlo simulations were found to increase from 0.99 to 1.04 with depth. In the case of a low-energy beam of 60 MeV, the magnitude of fluence corrections was approximately 0.99 at all depths when calculated in the sensitive area of the chamber used in the experiments. Fluence correction calculations were also performed for a larger area and found to increase from 0.99 at the surface to 1.01 at greater depths. Conclusions: Fluence corrections obtained experimentally are partial fluence corrections because they account for differences in the primary and part of the secondary

  17. Partial status epilepticus - rapid genetic diagnosis of Alpers' disease.

    LENUS (Irish Health Repository)

    McCoy, Bláthnaid

    2011-11-01

We describe four children with a devastating encephalopathy characterised by refractory focal seizures and variable liver dysfunction. We describe their electroencephalographic, radiologic, genetic and pathologic findings. The correct diagnosis was established by rapid gene sequencing. POLG1-based Alpers' disease should be considered in any child presenting with partial status epilepticus.

  18. Quantum error correction for beginners

    International Nuclear Information System (INIS)

    Devitt, Simon J; Nemoto, Kae; Munro, William J

    2013-01-01

Quantum error correction (QEC) and fault-tolerant quantum computation represent one of the most vital theoretical aspects of quantum information processing. It was well known from the early developments of this exciting field that the fragility of coherent quantum systems would be a catastrophic obstacle to the development of large-scale quantum computers. The introduction of quantum error correction in 1995 showed that active techniques could be employed to mitigate this fatal problem. However, quantum error correction and fault-tolerant computation are now a much larger field and many new codes, techniques, and methodologies have been developed to implement error correction for large-scale quantum algorithms. In response, we have attempted to summarize the basic aspects of quantum error correction and fault-tolerance, not as a detailed guide, but rather as a basic introduction. The development in this area has been so pronounced that many in the field of quantum information, specifically researchers who are new to quantum information or people focused on the many other important issues in quantum computation, have found it difficult to keep up with the general formalisms and methodologies employed in this area. Rather than introducing these concepts from a rigorous mathematical and computer science framework, we instead examine error correction and fault-tolerance largely through detailed examples, which are more relevant to experimentalists today and in the near future. (review article)
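In the examples-first spirit this review advocates, the sketch below is a purely classical simulation of the three-qubit bit-flip repetition code: encode, extract a two-bit syndrome from parity checks, and correct by majority logic. It models only single bit-flip errors on classical bits (no superposition, phase errors, or measurement back-action), and all names are illustrative.

```python
# Classical toy of the 3-qubit bit-flip repetition code.

def encode(bit):
    return [bit, bit, bit]

def syndrome(code):
    # Simulates the Z1Z2 and Z2Z3 stabilizer parity checks on classical bits.
    return (code[0] ^ code[1], code[1] ^ code[2])

def correct(code):
    # Each nonzero syndrome points at exactly one flipped position.
    flip = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(syndrome(code))
    if flip is not None:
        code[flip] ^= 1
    return code

def decode(code):
    # Majority vote recovers the logical bit if at most one flip occurred.
    return 1 if sum(code) >= 2 else 0
```

Any single flip is detected and undone; two flips defeat the majority vote, which is the distance-3 limit of this code.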

  19. Lattice Boltzmann model for high-order nonlinear partial differential equations.

    Science.gov (United States)

    Chai, Zhenhua; He, Nanzhong; Guo, Zhaoli; Shi, Baochang

    2018-01-01

In this paper, a general lattice Boltzmann (LB) model is proposed for the high-order nonlinear partial differential equation with the form ∂_{t}ϕ+∑_{k=1}^{m}α_{k}∂_{x}^{k}Π_{k}(ϕ)=0 (1≤k≤m≤6), where α_{k} are constant coefficients and Π_{k}(ϕ) are known differential functions of ϕ. As some special cases of the high-order nonlinear partial differential equation, the classical (m)KdV equation, KdV-Burgers equation, K(n,n)-Burgers equation, Kuramoto-Sivashinsky equation, and Kawahara equation can be solved by the present LB model. Compared to the available LB models, the most distinct characteristic of the present model is to introduce some suitable auxiliary moments such that the correct moments of the equilibrium distribution function can be achieved. In addition, we also conducted a detailed Chapman-Enskog analysis, and found that the high-order nonlinear partial differential equation can be correctly recovered from the proposed LB model. Finally, a large number of simulations are performed, and it is found that the numerical results agree with the analytical solutions, and usually the present model is also more accurate than the existing LB models [H. Lai and C. Ma, Sci. China Ser. G 52, 1053 (2009), 10.1007/s11433-009-0149-3; H. Lai and C. Ma, Phys. A (Amsterdam) 388, 1405 (2009), 10.1016/j.physa.2009.01.005] for high-order nonlinear partial differential equations.

  20. Lattice Boltzmann model for high-order nonlinear partial differential equations

    Science.gov (United States)

    Chai, Zhenhua; He, Nanzhong; Guo, Zhaoli; Shi, Baochang

    2018-01-01

In this paper, a general lattice Boltzmann (LB) model is proposed for the high-order nonlinear partial differential equation with the form ∂_{t}ϕ+∑_{k=1}^{m}α_{k}∂_{x}^{k}Π_{k}(ϕ)=0 (1≤k≤m≤6), where α_{k} are constant coefficients and Π_{k}(ϕ) are known differential functions of ϕ. As some special cases of the high-order nonlinear partial differential equation, the classical (m)KdV equation, KdV-Burgers equation, K(n,n)-Burgers equation, Kuramoto-Sivashinsky equation, and Kawahara equation can be solved by the present LB model. Compared to the available LB models, the most distinct characteristic of the present model is to introduce some suitable auxiliary moments such that the correct moments of the equilibrium distribution function can be achieved. In addition, we also conducted a detailed Chapman-Enskog analysis, and found that the high-order nonlinear partial differential equation can be correctly recovered from the proposed LB model. Finally, a large number of simulations are performed, and it is found that the numerical results agree with the analytical solutions, and usually the present model is also more accurate than the existing LB models [H. Lai and C. Ma, Sci. China Ser. G 52, 1053 (2009), 10.1007/s11433-009-0149-3; H. Lai and C. Ma, Phys. A (Amsterdam) 388, 1405 (2009), 10.1016/j.physa.2009.01.005] for high-order nonlinear partial differential equations.
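The moment-matching idea behind such LB models can be sketched for the simplest member of this PDE family, the 1-D diffusion equation ∂_t ϕ = D ∂_x² ϕ, using a standard D1Q3 BGK scheme. The weights and relaxation choice below are the textbook ones, not the auxiliary moments of this paper; matching the zeroth moment of the equilibrium to ϕ is the simplest instance of the "correct moments" requirement discussed above.

```python
import numpy as np

# D1Q3 BGK lattice Boltzmann sketch for the 1-D diffusion equation
# d(phi)/dt = D d2(phi)/dx2, with D = (tau - 1/2) / 3 in lattice units.

def lbm_diffusion(phi0, tau=1.0, steps=100):
    w = np.array([1.0 / 6.0, 2.0 / 3.0, 1.0 / 6.0])  # weights for e = -1, 0, +1
    f = w[:, None] * phi0[None, :]                    # start at equilibrium
    for _ in range(steps):
        phi = f.sum(axis=0)                           # zeroth moment is phi
        feq = w[:, None] * phi[None, :]
        f += (feq - f) / tau                          # BGK collision
        f[0] = np.roll(f[0], -1)                      # stream e = -1 (periodic)
        f[2] = np.roll(f[2], +1)                      # stream e = +1 (periodic)
    return f.sum(axis=0)
```

With tau = 1 this collapses to the explicit three-point diffusion stencil, so the variance of an initial point source grows by exactly 2D = 1/3 per step.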

  1. PetIGA: A framework for high-performance isogeometric analysis

    KAUST Repository

    Dalcin, Lisandro; Collier, N.; Vignal, Philippe; Cortes, Adriano Mauricio; Calo, Victor M.

    2016-01-01

We present PetIGA, a code framework to approximate the solution of partial differential equations using isogeometric analysis. PetIGA can be used to assemble matrices and vectors which come from a Galerkin weak form, discretized with Non-Uniform Rational B-spline basis functions. We base our framework on PETSc, a high-performance library for the scalable solution of partial differential equations, which simplifies the development of large-scale scientific codes, provides a rich environment for prototyping, and separates parallelism from algorithm choice. We describe the implementation of PetIGA, and exemplify its use by solving a model nonlinear problem. To illustrate the robustness and flexibility of PetIGA, we solve some challenging nonlinear partial differential equations that include problems in both solid and fluid mechanics. We show strong scaling results on up to 4096 cores, which confirm the suitability of PetIGA for large-scale simulations.

  2. PetIGA: A framework for high-performance isogeometric analysis

    KAUST Repository

    Dalcin, L.

    2016-05-25

We present PetIGA, a code framework to approximate the solution of partial differential equations using isogeometric analysis. PetIGA can be used to assemble matrices and vectors which come from a Galerkin weak form, discretized with Non-Uniform Rational B-spline basis functions. We base our framework on PETSc, a high-performance library for the scalable solution of partial differential equations, which simplifies the development of large-scale scientific codes, provides a rich environment for prototyping, and separates parallelism from algorithm choice. We describe the implementation of PetIGA, and exemplify its use by solving a model nonlinear problem. To illustrate the robustness and flexibility of PetIGA, we solve some challenging nonlinear partial differential equations that include problems in both solid and fluid mechanics. We show strong scaling results on up to 4096 cores, which confirm the suitability of PetIGA for large-scale simulations.
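As a deliberately tiny analogue of the Galerkin assembly that a framework like PetIGA automates at scale, the sketch below assembles and solves -u'' = f on (0, 1) with homogeneous Dirichlet conditions, using piecewise-linear hat functions in place of NURBS bases and a dense NumPy solve in place of PETSc; all names and the lumped quadrature are assumptions of this sketch.

```python
import numpy as np

# Minimal 1-D Galerkin finite element solver for -u'' = f, u(0) = u(1) = 0.

def solve_poisson_1d(f, n=64):
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    A = np.zeros((n - 1, n - 1))       # stiffness matrix for interior hats
    b = np.zeros(n - 1)                # load vector
    for i in range(n - 1):
        A[i, i] = 2.0 / h              # integral of phi_i' * phi_i'
        if i > 0:
            A[i, i - 1] = A[i - 1, i] = -1.0 / h
        b[i] = h * f(x[i + 1])         # lumped (one-point) quadrature
    u = np.zeros(n + 1)                # boundary nodes stay zero
    u[1:n] = np.linalg.solve(A, b)
    return x, u
```

PetIGA generalizes exactly this loop to higher dimensions, higher-order smooth bases, and distributed-memory assembly and solves.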

  3. Understanding the impact of career academy attendance: an application of the principal stratification framework for causal effects accounting for partial compliance.

    Science.gov (United States)

    Page, Lindsay C

    2012-04-01

    Results from MDRC's longitudinal, random-assignment evaluation of career-academy high schools reveal that several years after high-school completion, those randomized to receive the academy opportunity realized a $175 (11%) increase in monthly earnings, on average. In this paper, I investigate the impact of duration of actual academy enrollment, as nearly half of treatment group students either never enrolled or participated for only a portion of high school. I capitalize on data from this experimental evaluation and utilize a principal stratification framework and Bayesian inference to investigate the causal impact of academy participation. This analysis focuses on a sample of 1,306 students across seven sites in the MDRC evaluation. Participation is measured by number of years of academy enrollment, and the outcome of interest is average monthly earnings in the period of four to eight years after high school graduation. I estimate an average causal effect of treatment assignment on subsequent monthly earnings of approximately $588 among males who remained enrolled in an academy throughout high school and more modest impacts among those who participated only partially. Different from an instrumental variables approach to treatment non-compliance, which allows for the estimation of linear returns to treatment take-up, the more general framework of principal stratification allows for the consideration of non-linear returns, although at the expense of additional model-based assumptions.
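The stratum bookkeeping described above can be illustrated with a toy calculation: the intention-to-treat effect of the randomized offer is the share-weighted average of stratum-specific participation effects. The shares and the partial-enrollment effect below are hypothetical; only the $588 full-enrollment figure is quoted from the abstract.

```python
# Toy principal-stratification accounting (NOT the MDRC data or model).

STRATA = {                         # stratum: (population share, $/month effect)
    "never-enrolled": (0.45,   0.0),
    "partial":        (0.25, 150.0),   # hypothetical effect of partial enrollment
    "full":           (0.30, 588.0),   # figure reported for full enrollees
}

def itt_effect(strata):
    """Intention-to-treat effect implied by stratum shares and effects."""
    shares = sum(share for share, _ in strata.values())
    assert abs(shares - 1.0) < 1e-12, "strata must partition the population"
    return sum(share * effect for share, effect in strata.values())
```

Unlike the instrumental-variables approach, which forces a linear return per year of take-up, this accounting lets each stratum carry its own effect, which is the flexibility the principal stratification framework buys.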

  4. Quantitation of regional cerebral blood flow corrected for partial volume effect using O-15 water and PET: I. Theory, error analysis, and stereologic comparison

    DEFF Research Database (Denmark)

    Lida, H; Law, I; Pakkenberg, B

    2000-01-01

Limited spatial resolution of positron emission tomography (PET) can cause significant underestimation in the observed regional radioactivity concentration (so-called partial volume effect or PVE) resulting in systematic errors in estimating quantitative physiologic parameters. The authors have formulated four mathematical models that describe the dynamic behavior of a freely diffusible tracer (H2(15)O) in a region of interest (ROI) incorporating estimates of regional tissue flow that are independent of PVE. The current study was intended to evaluate the feasibility of these models and to establish a methodology to accurately quantify regional cerebral blood flow (CBF) corrected for PVE in cortical gray matter regions. Five monkeys were studied with PET after IV H2(15)O two times (n = 3) or three times (n = 2) in a row. Two ROIs were drawn on structural magnetic resonance imaging (MRI) scans and projected

  5. Radiative corrections of semileptonic hyperon decays Pt. 1

    International Nuclear Information System (INIS)

    Margaritisz, T.; Szegoe, K.; Toth, K.

    1982-07-01

The beta decay of free quarks is studied in the framework of the standard SU(2) x U(1) model of weak and electromagnetic interactions. The so-called 'weak' part of radiative corrections is evaluated to order α in one-loop approximation using a renormalization scheme which adjusts the counter terms to the electric charge, and to the masses of the charged and neutral vector bosons, Msub(w) and Msub(o), respectively. The obtained result is, to a good approximation, equal to the 'weak' part of radiative corrections for the semileptonic decay of any hyperon. It is shown in the model that the methods, which work excellently in the case of the 'weak' corrections, do not, in general, provide us with the dominant part of the 'photonic' corrections. (author)

  6. Hybrid Zeolitic Imidazolate Frameworks: Controlling Framework Porosity and Functionality by Mixed-Linker Synthesis

    KAUST Repository

    Thompson, Joshua A.

    2012-05-22

    Zeolitic imidazolate frameworks (ZIFs) are a subclass of nanoporous metal-organic frameworks (MOFs) that exhibit zeolite-like structural topologies and have interesting molecular recognition properties, such as molecular sieving and gate-opening effects associated with their pore apertures. The synthesis and characterization of hybrid ZIFs with mixed linkers in the framework are described in this work, producing materials with properties distinctly different from the parent frameworks (ZIF-8, ZIF-90, and ZIF-7). NMR spectroscopy is used to assess the relative amounts of the different linkers included in the frameworks, whereas nitrogen physisorption shows the evolution of the effective pore size distribution in materials resulting from the framework hybridization. X-ray diffraction shows these hybrid materials to be crystalline. In the case of ZIF-8-90 hybrids, the cubic space group of the parent frameworks is continuously maintained, whereas in the case of the ZIF-7-8 hybrids there is a transition from a cubic to a rhombohedral space group. Nitrogen physisorption data reveal that the hybrid materials exhibit substantial changes in gate-opening phenomena, either occurring at continuously tunable partial pressures of nitrogen (ZIF-8-90 hybrids) or loss of gate-opening effects to yield more rigid frameworks (ZIF-7-8 hybrids). With this synthetic approach, significant alterations in MOF properties may be realized to suit a desired separation or catalytic process. © 2012 American Chemical Society.

  7. A framework for semantic driven electronic examination system for ...

    African Journals Online (AJOL)

The framework is implemented using the Java programming language, and a prototype of the proposed system is tested and compared with the existing system. Results show that words that are synonymous with any given correct answer are equally recognized as correct options. Hence, the e-examination system reliability, ...

  8. Triple-coincidence with automatic chance coincidence correction

    International Nuclear Information System (INIS)

    Chase, R.L.

    1975-05-01

The chance coincidences in a triple-coincidence circuit are of two types: partially correlated and entirely uncorrelated. Their relative importance depends on source strength and source and detector geometry, so that the total chance correction cannot, in general, be calculated. The system described makes use of several delays and straightforward integrated circuit logic to provide independent evaluation of the two components of the chance coincidence rate. (auth)
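The two chance components named in this record can be put into the textbook resolving-time formulas. The expressions below assume an identical resolving time τ for all channels; they are the standard estimates, not a description of the specific circuit in the report.

```python
# Standard chance (accidental) rate estimates for triple coincidence:
#   partially correlated: a true two-fold coincidence plus an unrelated third,
#   fully uncorrelated:   three mutually unrelated counts.

def chance_triple_rates(tau, r1, r2, r3, r12, r13, r23):
    """tau: resolving time [s]; r1..r3: singles rates [1/s];
    r12, r13, r23: true two-fold coincidence rates [1/s]."""
    partially_correlated = 2.0 * tau * (r12 * r3 + r13 * r2 + r23 * r1)
    uncorrelated = 3.0 * tau**2 * r1 * r2 * r3
    return partially_correlated, uncorrelated
```

Because the two terms scale differently with τ and with the singles rates, their ratio shifts with source strength and geometry, which is why the report measures the two components independently rather than computing them.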

  9. Dynamic retardation corrections to the mass spectrum of heavy quarkonia

    International Nuclear Information System (INIS)

    Kopalejshvili, T.; Rusetskij, A.

    1996-01-01

In the framework of the Logunov-Tavkhelidze quasipotential approach, the first-order retardation corrections to the heavy quarkonia mass spectrum are calculated with the use of the stationary wave boundary condition in the covariant kernel of the Bethe-Salpeter equation. As expected, these corrections turn out to be small for all low-lying heavy meson states and vanish in the heavy quark limit (m Q →∞). The suggested approach to the calculation of retardation corrections is compared with others known in the literature. 22 refs., 1 tab

  10. MRI intensity inhomogeneity correction by combining intensity and spatial information

    International Nuclear Information System (INIS)

    Vovk, Uros; Pernus, Franjo; Likar, Bostjan

    2004-01-01

We propose a novel fully automated method for retrospective correction of intensity inhomogeneity, which is an undesired phenomenon in many automatic image analysis tasks, especially if quantitative analysis is the final goal. Besides the most commonly used intensity features, additional spatial image features are incorporated to improve inhomogeneity correction and to make it more dynamic, so that local intensity variations can be corrected more efficiently. The proposed method is a four-step iterative procedure in which a non-parametric inhomogeneity correction is conducted. First, the probability distribution of image intensities and corresponding second derivatives is obtained. Second, intensity correction forces, condensing the probability distribution along the intensity feature, are computed for each voxel. Third, the inhomogeneity correction field is estimated by regularization of all voxel forces, and fourth, the corresponding partial inhomogeneity correction is performed. The degree of inhomogeneity correction dynamics is determined by the size of the regularization kernel. The method was qualitatively and quantitatively evaluated on simulated and real MR brain images. The obtained results show that the proposed method does not corrupt inhomogeneity-free images and successfully corrects intensity inhomogeneity artefacts even if these are more dynamic.
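A hedged toy version of retrospective correction under the same multiplicative model: estimate the bias as a heavily low-pass-filtered log image and divide it out. The paper's force-based, feature-driven method is more elaborate; the filter size, edge padding, and zero-mean constraint on the log bias below are all assumptions of this sketch.

```python
import numpy as np

# Toy multiplicative bias-field correction: image = tissue * smooth_bias.

def box_blur(a, k, axis):
    """Box filter of width k along one axis, with edge-replicate padding."""
    pad = [(0, 0)] * a.ndim
    pad[axis] = (k // 2, k // 2)
    padded = np.pad(a, pad, mode="edge")
    kernel = np.ones(k) / k
    return np.apply_along_axis(
        lambda m: np.convolve(m, kernel, mode="valid"), axis, padded)

def estimate_bias(image, k=15, passes=2):
    smooth = np.log(image)                 # work in the log (additive) domain
    for _ in range(passes):
        smooth = box_blur(box_blur(smooth, k, 0), k, 1)
    smooth -= smooth.mean()                # fix the arbitrary global scale
    return np.exp(smooth)

def correct_inhomogeneity(image):
    return image / estimate_bias(image)
```

On an image that is genuinely inhomogeneity-free this estimate is nearly flat, so the division leaves the image almost unchanged, mirroring the non-corruption property reported above.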

  11. Partial Molar Volumes of Aqua Ions from First Principles.

    Science.gov (United States)

    Wiktor, Julia; Bruneval, Fabien; Pasquarello, Alfredo

    2017-08-08

    Partial molar volumes of ions in water solution are calculated through pressures obtained from ab initio molecular dynamics simulations. The correct definition of pressure in charged systems subject to periodic boundary conditions requires access to the variation of the electrostatic potential upon a change of volume. We develop a scheme for calculating such a variation in liquid systems by setting up an interface between regions of different density. This also allows us to determine the absolute deformation potentials for the band edges of liquid water. With the properly defined pressures, we obtain partial molar volumes of a series of aqua ions in very good agreement with experimental values.

  12. Partial Correlation Matrix Estimation using Ridge Penalty Followed by Thresholding and Reestimation

    Science.gov (United States)

    2014-01-01

Motivated by the problem of constructing gene co-expression networks, we propose a statistical framework for estimating a high-dimensional partial correlation matrix by a three-step approach. We first obtain a penalized estimate of the partial correlation matrix using a ridge penalty. Next we select the non-zero entries of the partial correlation matrix by hypothesis testing. Finally we reestimate the partial correlation coefficients at these non-zero entries. In the second step, the null distribution of the test statistics derived from penalized partial correlation estimates has not been established. We address this challenge by estimating the null distribution from the empirical distribution of the test statistics of all the penalized partial correlation estimates. Extensive simulation studies demonstrate the good performance of our method. Application on a yeast cell cycle gene expression data set shows that our method delivers better predictions of the protein-protein interactions than the Graphical Lasso. PMID:24845967
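The first two of the three steps can be sketched with NumPy: a ridge-regularized precision matrix, conversion to partial correlations, and a plain magnitude cutoff standing in for the paper's hypothesis test. The re-estimation step is omitted, and the lambda and cutoff values are illustrative.

```python
import numpy as np

# Steps 1-2 of the three-step partial correlation recipe (toy version).

def ridge_partial_correlations(X, lam=0.1):
    S = np.cov(X, rowvar=False)                  # sample covariance (n x p data)
    p = S.shape[0]
    theta = np.linalg.inv(S + lam * np.eye(p))   # step 1: ridge precision
    d = np.sqrt(np.diag(theta))
    pc = -theta / np.outer(d, d)                 # rho_ij = -theta_ij / sqrt(theta_ii theta_jj)
    np.fill_diagonal(pc, 1.0)
    return pc

def threshold_pc(pc, cut=0.2):
    out = np.where(np.abs(pc) >= cut, pc, 0.0)   # step 2: sparsify (cutoff in
    np.fill_diagonal(out, 1.0)                   # place of the paper's test)
    return out
```

On a chain x1 -> x2 -> x3, the x1-x3 partial correlation given x2 should be near zero while adjacent pairs stay strong, which is exactly the conditional-independence structure a co-expression network encodes.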

  13. Influence of Abutment Angle on Implant Strain When Supporting a Distal Extension Removable Partial Dental Prosthesis: An In Vitro Study.

    Science.gov (United States)

    Hirata, Kiyotaka; Takahashi, Toshihito; Tomita, Akiko; Gonda, Tomoya; Maeda, Yoshinobu

This study evaluated the impact of angled abutments on strain in implants supporting a distal extension removable partial denture. An in vitro model of an implant supporting a distal extension removable partial denture was developed. The implant was positioned with a 17- or 30-degree mesial inclination, with either a healing abutment or a corrective multiunit abutment. Levels of strain under load were compared using the t test (P = .05). Correcting angulation with a multiunit angled abutment significantly decreased strain (P < .05) compared with the healing abutment. An angled abutment decreased the strain on an inclined implant significantly more than a healing abutment when loaded under a distal extension removable partial denture.

  14. A framework for the correction of slow physiological drifts during MR-guided HIFU therapies: Proof of concept

    International Nuclear Information System (INIS)

    Zachiu, Cornel; Moonen, Chrit; Ries, Mario; Denis de Senneville, Baudouin

    2015-01-01

    slow physiological motion can exceed acceptable therapeutic margins. In the animal experiment, motion tracking revealed an initial shift of up to 4 mm during the first 10 min and a subsequent continuous shift of ∼2 mm/h until the end of the intervention. This leads to a continuously increasing mismatch of the initial shot planning, the thermal dose measurements, and the true underlying anatomy. The estimated displacements allowed correcting the planned sonication cell cluster positions to the true target position, as well as the thermal dose estimates during the entire intervention and to correct the nonperfused volume measurement. A spatial coherence of all three is particularly important to assure a confluent ablation volume and to prevent remaining islets of viable malignant tissue. Conclusions: This study proposes a motion correction strategy for displacements resulting from slowly varying physiological motion that might occur during a MR-guided HIFU intervention. The authors have shown that such drifts can lead to a misalignment between interventional planning, energy delivery, and therapeutic validation. The presented volunteer study and in vivo experiment demonstrate both the relevance of the problem for HIFU therapies and the compatibility of the proposed motion compensation framework with the workflow of a HIFU intervention under clinical conditions

  15. A framework for the correction of slow physiological drifts during MR-guided HIFU therapies: Proof of concept

    Energy Technology Data Exchange (ETDEWEB)

    Zachiu, Cornel, E-mail: C.Zachiu@umcutrecht.nl; Moonen, Chrit; Ries, Mario [Imaging Division, UMC Utrecht, Heidelberglaan 100, Utrecht 3584 CX (Netherlands); Denis de Senneville, Baudouin [Imaging Division, UMC Utrecht, Heidelberglaan 100, Utrecht 3584 CX (Netherlands); Mathematical Institute of Bordeaux, University of Bordeaux, Talence Cedex 33405 (France)

    2015-07-15

    slow physiological motion can exceed acceptable therapeutic margins. In the animal experiment, motion tracking revealed an initial shift of up to 4 mm during the first 10 min and a subsequent continuous shift of ∼2 mm/h until the end of the intervention. This leads to a continuously increasing mismatch of the initial shot planning, the thermal dose measurements, and the true underlying anatomy. The estimated displacements allowed correcting the planned sonication cell cluster positions to the true target position, as well as the thermal dose estimates during the entire intervention and to correct the nonperfused volume measurement. A spatial coherence of all three is particularly important to assure a confluent ablation volume and to prevent remaining islets of viable malignant tissue. Conclusions: This study proposes a motion correction strategy for displacements resulting from slowly varying physiological motion that might occur during a MR-guided HIFU intervention. The authors have shown that such drifts can lead to a misalignment between interventional planning, energy delivery, and therapeutic validation. The presented volunteer study and in vivo experiment demonstrate both the relevance of the problem for HIFU therapies and the compatibility of the proposed motion compensation framework with the workflow of a HIFU intervention under clinical conditions.

  16. Development of an Analysis and Design Optimization Framework for Marine Propellers

    Science.gov (United States)

    Tamhane, Ashish C.

In this thesis, a framework for the analysis and design optimization of ship propellers is developed. This framework can be utilized as an efficient synthesis tool in order to determine the main geometric characteristics of the propeller but also to provide the designer with the capability to optimize the shape of the blade sections based on their specific criteria. A hybrid lifting-line method with lifting-surface corrections to account for the three-dimensional flow effects has been developed. The prediction of the correction factors is achieved using Artificial Neural Networks and Support Vector Regression. This approach results in increased approximation accuracy compared to existing methods and allows for extrapolation of the correction factor values. The effect of viscosity is implemented in the framework via the coupling of the lifting line method with the open-source RANSE solver OpenFOAM for the calculation of lift, drag and pressure distribution on the blade sections using a transition k-ω SST turbulence model. Case studies of benchmark high-speed propulsors are utilized in order to validate the proposed framework for propeller operation in open-water conditions but also in a ship's wake.

  17. A Lagrangian framework for deriving triples and quadruples corrections to the CCSD energy

    DEFF Research Database (Denmark)

    Eriksen, Janus Juul; Kristensen, Kasper; Kjærgaard, Thomas

    2014-01-01

Using the coupled cluster Lagrangian technique, we have determined perturbative corrections to the coupled cluster singles and doubles (CCSD) energy that converge towards the coupled cluster singles, doubles, and triples (CCSDT) and coupled cluster singles, doubles, triples, and quadruples (CCSDTQ) energies, considering the CCSD state as the unperturbed reference state and the fluctuation potential as the perturbation. Since the Lagrangian technique is utilized, the energy corrections satisfy Wigner's 2n + 1 rule for the cluster amplitudes and the 2n + 2 rule for the Lagrange multipliers.

  18. Association between partial-volume corrected SUVmax and Oncotype DX recurrence score in early-stage, ER-positive/HER2-negative invasive breast cancer.

    Science.gov (United States)

    Lee, Su Hyun; Ha, Seunggyun; An, Hyun Joon; Lee, Jae Sung; Han, Wonshik; Im, Seock-Ah; Ryu, Han Suk; Kim, Won Hwa; Chang, Jung Min; Cho, Nariya; Moon, Woo Kyung; Cheon, Gi Jeong

    2016-08-01

    Oncotype DX, a 21-gene expression assay, provides a recurrence score (RS) which predicts prognosis and the benefit from adjuvant chemotherapy in patients with early-stage, estrogen receptor-positive (ER-positive), and human epidermal growth factor receptor 2-negative (HER2-negative) invasive breast cancer. However, Oncotype DX tests are expensive and not readily available in all institutions. The purpose of this study was to investigate whether metabolic parameters on (18)F-FDG PET/CT are associated with the Oncotype DX RS and whether (18)F-FDG PET/CT can be used to predict the Oncotype DX RS. The study group comprised 38 women with stage I/II, ER-positive/HER2-negative invasive breast cancer who underwent pretreatment (18)F-FDG PET/CT and Oncotype DX testing. On PET/CT, maximum (SUVmax) and average standardized uptake values, metabolic tumor volume, and total lesion glycolysis were measured. Partial volume-corrected SUVmax (PVC-SUVmax) determined using the recovery coefficient method was also evaluated. Oncotype DX RS (0 - 100) was categorized as low (negative breast cancer.
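The record's partial volume correction uses the recovery coefficient (RC) method: the measured SUVmax is divided by a size-dependent RC obtained from phantom calibration. A minimal sketch follows; the calibration table (sphere diameters and RC values) is hypothetical, not from the study.

```python
import numpy as np

# Hypothetical phantom calibration: sphere diameter (mm) -> recovery coefficient.
diam_mm = np.array([10.0, 13.0, 17.0, 22.0, 28.0, 37.0])
rc      = np.array([0.35, 0.50, 0.65, 0.78, 0.88, 0.95])

def pvc_suvmax(suvmax_measured, lesion_diam_mm):
    """Partial-volume-corrected SUVmax via the recovery-coefficient method:
    divide the measured value by the RC interpolated at the lesion size."""
    rc_lesion = np.interp(lesion_diam_mm, diam_mm, rc)
    return suvmax_measured / rc_lesion

# Small lesions receive a substantial upward correction.
print(pvc_suvmax(3.2, 15.0))
```

The smaller the lesion relative to the scanner resolution, the lower the RC and hence the larger the correction, which is why PVC matters most for small early-stage tumors.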

  19. Real-time scatter measurement and correction in film radiography

    International Nuclear Information System (INIS)

    Shaw, C.G.

    1987-01-01

    A technique for real-time scatter measurement and correction in scanning film radiography is described. With this technique, collimated x-ray fan beams are used to partially reject scattered radiation. Photodiodes are attached to the aft-collimator for sampled scatter measurement. Such measurement allows the scatter distribution to be reconstructed and subtracted from digitized film image data for accurate transmission measurement. In this presentation the authors discuss the physical and technical considerations of this scatter correction technique. Examples are shown that demonstrate the feasibility of the technique. Improved x-ray transmission measurement and dual-energy subtraction imaging are demonstrated with phantoms
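The correction scheme in this record (sample the scatter field at a few photodiode positions behind the aft-collimator, reconstruct a smooth scatter distribution, and subtract it from the digitized film data) can be sketched on a one-dimensional toy line. The linear interpolation and all numbers are illustrative assumptions; the article does not specify the reconstruction scheme.

```python
import numpy as np

def correct_scatter(measured, sample_pos, scatter_samples):
    """Reconstruct a smooth scatter profile from sparse photodiode samples
    (linear interpolation here) and subtract it from the digitized signal."""
    x = np.arange(measured.size)
    scatter_est = np.interp(x, sample_pos, scatter_samples)
    return measured - scatter_est

# Toy line of film data: primary transmission plus a slowly varying scatter field.
x = np.linspace(0, 1, 100)
primary = 1.0 + 0.5 * np.sin(2 * np.pi * x)
scatter = 0.4 + 0.2 * x                  # smooth, as scatter fields tend to be
measured = primary + scatter

sample_pos = np.array([0, 33, 66, 99])   # photodiode locations (pixel indices)
samples = scatter[sample_pos]            # what the diodes would read
corrected = correct_scatter(measured, sample_pos, samples)
print(np.max(np.abs(corrected - primary)))
```

Because scatter varies slowly across the field, a handful of samples suffices to reconstruct it, which is what makes the sparse photodiode measurement workable.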

  20. The Unreasonable Destructiveness of Political Correctness in Philosophy

    Directory of Open Access Journals (Sweden)

    Manuel Doria

    2017-08-01

    Full Text Available I submit that epistemic progress in key areas of contemporary academic philosophy has been compromised by politically correct (“PC” ideology. First, guided by an evolutionary account of ideology, results from social and cognitive psychology and formal philosophical methods, I expose evidence for political bias in contemporary Western academia and sketch a formalization for the contents of beliefs from the PC worldview taken to be of core importance, the theory of social oppression and the thesis of anthropological mental egalitarianism. Then, aided by discussions from contemporary epistemology on epistemic values, I model the problem of epistemic appraisal using the frameworks of multi-objective optimization theory and multi-criteria decision analysis and apply it to politically correct philosophy. I conclude that philosophy guided by politically correct values is bound to produce constructs that are less truth-conducive and that spurious values which are ideologically motivated should be abandoned. Objections to my framework stemming from contextual empiricism, the feminine voice in ethics and political philosophy are considered. I conclude by prescribing the epistemic value of epistemic adequacy, the contextual value of political diversity and the moral virtue of moral courage to reverse unwarranted trends in academic philosophy due to PC ideology.

  1. Scoring correction for MMPI-2 Hs scale with patients experiencing a traumatic brain injury: a test of measurement invariance.

    Science.gov (United States)

    Alkemade, Nathan; Bowden, Stephen C; Salzman, Louis

    2015-02-01

    It has been suggested that MMPI-2 scoring requires removal of some items when assessing patients after a traumatic brain injury (TBI). Gass (1991. MMPI-2 interpretation and closed head injury: A correction factor. Psychological Assessment, 3, 27-31) proposed a correction procedure in line with the hypothesis that MMPI-2 item endorsement may be affected by symptoms of TBI. This study assessed the validity of the Gass correction procedure in a sample of patients with a TBI (n = 242) and a random subset of the MMPI-2 normative sample (n = 1,786). The correction procedure implies a failure of measurement invariance across populations. This study examined measurement invariance of one of the MMPI-2 scales (Hs) that includes TBI correction items. A four-factor model of the MMPI-2 Hs items was defined. The factor model was found to meet the criteria for partial measurement invariance. Analysis of the change in sensitivity and specificity values implied by partial measurement invariance failed to indicate a significant practical impact of partial invariance. Overall, the results support continued use of all Hs items to assess psychological well-being in patients with TBI. © The Author 2014. Published by Oxford University Press. All rights reserved.

  2. Convergence of method of lines approximations to partial differential equations

    International Nuclear Information System (INIS)

    Verwer, J.G.; Sanz-Serna, J.M.

    1984-01-01

    Many existing numerical schemes for evolutionary problems in partial differential equations (PDEs) can be viewed as method of lines (MOL) schemes. This paper treats the convergence of one-step MOL schemes. The main purpose is to set up a general framework for a convergence analysis applicable to nonlinear problems. The stability results for this framework are taken from the field of nonlinear stiff ODEs. In this connection, important concepts are the logarithmic matrix norm and C-stability. A nonlinear parabolic equation and the cubic Schroedinger equation are used for illustrating the ideas. (Auth.)
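The MOL idea the record analyzes can be illustrated on the heat equation: semi-discretize u_t = u_xx in space to obtain a stiff ODE system, then advance it with a one-step method (explicit Euler below). This is a generic textbook illustration, not an example from the paper.

```python
import numpy as np

# Method of lines for u_t = u_xx on [0,1], u(0)=u(1)=0, u(x,0)=sin(pi x).
n = 51
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]
u = np.sin(np.pi * x)

dt = 0.4 * dx**2        # respects the explicit-Euler stability bound dt <= dx^2/2
t, t_end = 0.0, 0.1
while t < t_end - 1e-12:
    lap = np.zeros_like(u)
    lap[1:-1] = (u[:-2] - 2 * u[1:-1] + u[2:]) / dx**2  # central difference in space
    u = u + dt * lap                                     # one-step (Euler) in time
    t += dt

# Exact solution of the PDE for comparison.
exact = np.exp(-np.pi**2 * t) * np.sin(np.pi * x)
print(np.max(np.abs(u - exact)))
```

The stability restriction dt <= dx^2/2 is exactly the kind of stiffness issue that motivates the paper's use of stability concepts from nonlinear stiff ODE theory.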

  3. POLARIZED BEAMS: 2 - Partial Siberian Snake rescues polarized protons at Brookhaven

    International Nuclear Information System (INIS)

    Huang, Haixin

    1994-01-01

    To boost the level of beam polarization (spin orientation), a partial 'Siberian Snake' was recently used to overcome imperfection depolarizing resonances in the Brookhaven Alternating Gradient Synchrotron (AGS). This 9-degree spin rotator recently permitted acceleration with no noticeable polarization loss. The intrinsic AGS depolarizing resonances (which degrade the polarization content) had been eliminated by betatron tune jumps, but the imperfection resonances were compensated by means of harmonic orbit corrections. However, at high energies these orbit corrections are difficult and tedious and a Siberian Snake became an attractive alternative

  4. From a Proven Correct Microkernel to Trustworthy Large Systems

    Science.gov (United States)

    Andronick, June

    The seL4 microkernel was the world's first general-purpose operating system kernel with a formal, machine-checked proof of correctness. The next big step in the challenge of building truly trustworthy systems is to provide a framework for developing secure systems on top of seL4. This paper first gives an overview of seL4's correctness proof, together with its main implications and assumptions, and then describes our approach to provide formal security guarantees for large, complex systems.

  5. A Verification Framework for Agent Communication

    NARCIS (Netherlands)

    Eijk, R.M. van; Boer, F.S. de; Hoek, W. van der; Meyer, J-J.Ch.

    2003-01-01

    In this paper, we introduce a verification method for the correctness of multiagent systems as described in the framework of acpl (Agent Communication Programming Language). The computational model of acpl consists of an integration of the two different paradigms of ccp (Concurrent Constraint

  6. In vitro investigation of marginal accuracy of implant-supported screw-retained partial dentures.

    Science.gov (United States)

    Koke, U; Wolf, A; Lenz, P; Gilde, H

    2004-05-01

    Mismatch occurring during the fabrication of implant-supported dentures may induce stress to the peri-implant bone. The purpose of this study was to investigate the influence of two different alloys and the fabrication method on the marginal accuracy of cast partial dentures. Two laboratory implants were bonded into an aluminium block so that the distance between their longitudinal axes was 21 mm. Frameworks designed for screw-retained partial dentures were cast either with pure titanium (rematitan) or with a CoCr-alloy (remanium CD). Two groups of 10 frameworks were cast in a single piece. The first group was made of pure titanium, and the second group of a CoCr-alloy (remanium CD). A third group of 10 was cast in two pieces and then laser-welded onto a soldering model. This latter group was also made of the CoCr-alloy. All the frameworks were screwed to the original model with defined torque. Using light microscopy, marginal accuracy was determined by measuring vertical gaps at eight defined points around each implant. Titanium frameworks cast in a single piece demonstrated mean vertical gaps of 40 microm (s.d. = 11 microm) compared with 72 microm (s.d. = 40 microm) for CoCr-frameworks. These differences were not significant (U-test, P = 0.124) because of a considerable variation of the values for CoCr-frameworks (minimum: 8 microm and maximum: 216 microm). However, frameworks cast in two pieces and mated with a laser showed significantly better accuracy in comparison with the other experimental groups (mean: 17 microm +/- 6; P laser welding. Manufacturing the framework pieces separately and then welding them together provides the best marginal fit.

  7. Meson exchange current corrections to magnetic moments in quantum hadro-dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Morse, T M; Price, C E; Shepard, J R [Colorado Univ., Boulder (USA). Dept. of Physics

    1990-11-15

    We have calculated pion exchange current corrections to the magnetic moments of closed shell ±1 particle nuclei near A=16 and 40 within the framework of quantum hadro-dynamics (QHD). We find that the correction is significant and that, in general, the agreement of the QHD isovector moments with experiment is worsened. Comparisons to previous non-relativistic calculations are also made. (orig.).

  8. Anatomy-based reconstruction of FDG-PET images with implicit partial volume correction improves detection of hypometabolic regions in patients with epilepsy due to focal cortical dysplasia diagnosed on MRI

    Energy Technology Data Exchange (ETDEWEB)

    Goffin, Karolien; Baete, Kristof; Nuyts, Johan; Laere, Koen van [University Hospital Leuven, Division of Nuclear Medicine and Medical Imaging Center, Leuven (Belgium); Van Paesschen, Wim [University Hospital Leuven, Neurology Department, Leuven (Belgium); Dupont, Patrick [University Hospital Leuven, Division of Nuclear Medicine and Medical Imaging Center, Leuven (Belgium); University Hospital Leuven, Laboratory of Cognitive Neurology, Leuven (Belgium); Palmini, Andre [Pontificia Universidade Catolica do Rio Grande do Sul (PUCRS), Porto Alegre Epilepsy Surgery Program, Hospital Sao Lucas, Porto Alegre (Brazil)

    2010-06-15

    Detection of hypometabolic areas on interictal FDG-PET images for assessing the epileptogenic zone is hampered by partial volume effects. We evaluated the performance of an anatomy-based maximum a-posteriori (A-MAP) reconstruction algorithm which combined noise suppression with correction for the partial volume effect in the detection of hypometabolic areas in patients with focal cortical dysplasia (FCD). FDG-PET images from 14 patients with refractory partial epilepsy were reconstructed using A-MAP and maximum likelihood (ML) reconstruction. In all patients, presurgical evaluation showed that FCD represented the epileptic lesion. Correspondence between the FCD location and regional metabolism on a predefined atlas was evaluated. An asymmetry index of FCD to normal cortex was calculated. Hypometabolism at the FCD location was detected in 9/14 patients (64%) using ML and in 10/14 patients (71%) using A-MAP reconstruction. Hypometabolic areas outside the FCD location were detected in 12/14 patients (86%) using ML and in 11/14 patients (79%) using A-MAP reconstruction. The asymmetry index was higher using A-MAP reconstruction (0.61, ML 0.49, p=0.03). The A-MAP reconstruction algorithm improved visual detection of epileptic FCD on brain FDG-PET images compared to ML reconstruction, due to higher contrast and better delineation of the lesion. This improvement failed to reach significance in our small sample. Hypometabolism outside the lesion is often present, consistent with the observation that the functional deficit zone tends to be larger than the epileptogenic zone. (orig.)

  9. Anatomy-based reconstruction of FDG-PET images with implicit partial volume correction improves detection of hypometabolic regions in patients with epilepsy due to focal cortical dysplasia diagnosed on MRI

    International Nuclear Information System (INIS)

    Goffin, Karolien; Baete, Kristof; Nuyts, Johan; Laere, Koen van; Van Paesschen, Wim; Dupont, Patrick; Palmini, Andre

    2010-01-01

    Detection of hypometabolic areas on interictal FDG-PET images for assessing the epileptogenic zone is hampered by partial volume effects. We evaluated the performance of an anatomy-based maximum a-posteriori (A-MAP) reconstruction algorithm which combined noise suppression with correction for the partial volume effect in the detection of hypometabolic areas in patients with focal cortical dysplasia (FCD). FDG-PET images from 14 patients with refractory partial epilepsy were reconstructed using A-MAP and maximum likelihood (ML) reconstruction. In all patients, presurgical evaluation showed that FCD represented the epileptic lesion. Correspondence between the FCD location and regional metabolism on a predefined atlas was evaluated. An asymmetry index of FCD to normal cortex was calculated. Hypometabolism at the FCD location was detected in 9/14 patients (64%) using ML and in 10/14 patients (71%) using A-MAP reconstruction. Hypometabolic areas outside the FCD location were detected in 12/14 patients (86%) using ML and in 11/14 patients (79%) using A-MAP reconstruction. The asymmetry index was higher using A-MAP reconstruction (0.61, ML 0.49, p=0.03). The A-MAP reconstruction algorithm improved visual detection of epileptic FCD on brain FDG-PET images compared to ML reconstruction, due to higher contrast and better delineation of the lesion. This improvement failed to reach significance in our small sample. Hypometabolism outside the lesion is often present, consistent with the observation that the functional deficit zone tends to be larger than the epileptogenic zone. (orig.)

  10. The theoretical analysis content correctional massage for athletes with disabilities

    Directory of Open Access Journals (Sweden)

    Romanna Rudenko

    2015-12-01

    Full Text Available Purpose: to analyze the content of an author's methodology of corrective massage for athletes with disabilities. Material and Methods: analysis and synthesis of scientific, methodological and specialized literature; pedagogical observation; analysis of medical records; methods of mathematical statistics. The study involved 60 qualified athletes with disabilities from different nosological groups. Results: the corrective massage technique was developed taking into account the level of physical activity, the nosological group, and the physiological effects of massage techniques on body systems. The forms of corrective massage must match the intensity of physical activity and the main and concomitant diseases across the training cycle of athletes with disabilities. Conclusions: total, partial, intermittent, local and segmental-reflex massage of the paravertebral zones should be applied, taking into account the intensity of physical activity and individual tolerance for exercise

  11. Novel Principles and Techniques to Create a Natural Design in Female Hairline Correction Surgery.

    Science.gov (United States)

    Park, Jae Hyun

    2015-12-01

    Female hairline correction surgery is becoming increasingly popular. However, no guidelines or methods of female hairline design have been introduced to date. The purpose of this study was to create an initial framework based on the novel principles of female hairline design and then use artistic ability and experience to fine tune this framework. An understanding of the concept of 5 areas (frontal area, frontotemporal recess area, temporal peak, infratemple area, and sideburns) and 5 points (C, A, B, T, and S) is required for female hairline correction surgery (the 5A5P principle). The general concepts of female hairline correction surgery and natural design methods are, herein, explained with a focus on the correlations between these 5 areas and 5 points. A natural and aesthetic female hairline can be created with application of the above-mentioned concepts. The 5A5P principle of forming the female hairline is very useful in female hairline correction surgery.

  12. The effect of performing corrections on reported uterine cancer mortality data in the city of São Paulo

    Directory of Open Access Journals (Sweden)

    J.L.F. Antunes

    2006-08-01

    Full Text Available Reports of uterine cancer deaths that do not specify the subsite of the tumor threaten the quality of the epidemiologic appraisal of corpus and cervix uteri cancer mortality. The present study assessed the impact of correcting the estimated corpus and cervix uteri cancer mortality in the city of São Paulo, Brazil. The epidemiologic assessment of death rates comprised the estimation of magnitudes, trends (1980-2003), and area-level distribution based on three strategies: (i) using uncorrected death certificate information; (ii) correcting estimates of corpus and cervix uteri mortality by fully reallocating unspecified deaths to either one of these categories; and (iii) partially correcting specified estimates by maintaining as unspecified a fraction of deaths certified as due to cancer of "uterus not otherwise specified". The proportion of uterine cancer deaths without subsite specification decreased from 42.9% in 1984 to 20.8% in 2003. Partial and full corrections resulted in considerable increases of cervix (31.3 and 48.8%, respectively) and corpus uteri (34.4 and 55.2%) cancer mortality. Partial correction did not change trends for subsite-specific uterine cancer mortality, whereas full correction did, thus representing an early indication of decrease for cervical neoplasms and stability for tumors of the corpus uteri in this population. Ecologic correlations between mortality and socioeconomic indices were unchanged for both strategies of correcting estimates. Reallocating unspecified uterine cancer mortality in contexts with a high proportion of these deaths has a considerable impact on the epidemiologic profile of mortality and provides more reliable estimates of cervix and corpus uteri cancer death rates and trends.
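The full and partial corrections described in this record amount to redistributing "uterus NOS" deaths between the two specified subsites. A minimal sketch follows; proportional reallocation (in proportion to the specified counts) is one plausible rule and the death counts are hypothetical, since the record does not give the exact redistribution scheme or raw counts.

```python
def full_correction(cervix, corpus, unspecified):
    """Reallocate all 'uterus NOS' deaths to cervix/corpus in proportion
    to the specified counts (one plausible reallocation rule)."""
    specified = cervix + corpus
    cervix_corr = cervix + unspecified * cervix / specified
    corpus_corr = corpus + unspecified * corpus / specified
    return cervix_corr, corpus_corr

def partial_correction(cervix, corpus, unspecified, frac_kept_as_nos):
    """Reallocate only part of the NOS deaths, keeping a fraction unresolved."""
    return full_correction(cervix, corpus, unspecified * (1.0 - frac_kept_as_nos))

# Hypothetical annual counts: cervix, corpus, and unspecified uterine deaths.
print(full_correction(640, 420, 220))
print(partial_correction(640, 420, 220, frac_kept_as_nos=0.4))
```

Note that full correction preserves the total number of uterine cancer deaths while raising both subsite-specific rates, which is why it can change apparent trends when the NOS fraction itself changes over time.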

  13. Fundamental partial compositeness

    CERN Document Server

    Sannino, Francesco

    2016-11-07

    We construct renormalizable Standard Model extensions, valid up to the Planck scale, that give a composite Higgs from a new fundamental strong force acting on fermions and scalars. Yukawa interactions of these particles with Standard Model fermions realize the partial compositeness scenario. Successful models exist because gauge quantum numbers of Standard Model fermions admit a minimal enough 'square root'. Furthermore, right-handed SM fermions have an SU(2)$_R$-like structure, yielding a custodially-protected composite Higgs. Baryon and lepton numbers arise accidentally. Standard Model fermions acquire mass at tree level, while the Higgs potential and flavor violations are generated by quantum corrections. We further discuss accidental symmetries and other dynamical features stemming from the new strongly interacting scalars. If the same phenomenology can be obtained from models without our elementary scalars, they would reappear as composite states.

  14. Consistency of direct integral estimator for partially observed systems of ordinary differential equations

    NARCIS (Netherlands)

    Vujačić, Ivan; Dattner, Itai

    In this paper we use the sieve framework to prove consistency of the ‘direct integral estimator’ of parameters for partially observed systems of ordinary differential equations, which are commonly used for modeling dynamic processes.
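The direct integral estimator can be illustrated on the simplest case: for x'(t) = -theta * x(t) observed on a time grid, integrating both sides gives x(t_i) - x(0) = -theta * integral of x from 0 to t_i, where the integral is computed from the observations themselves, and theta follows from linear least squares. This toy example is an illustrative assumption, not the paper's estimator for general partially observed systems.

```python
import numpy as np

# Direct integral estimation for x'(t) = -theta * x(t), x(0) = 1 (toy example).
theta_true = 1.7
t = np.linspace(0.0, 2.0, 200)
x = np.exp(-theta_true * t)      # noiseless observations of the solution

# Cumulative integral of x from the data (trapezoidal rule).
I = np.concatenate(([0.0], np.cumsum(0.5 * (x[1:] + x[:-1]) * np.diff(t))))

# Least squares for x(t_i) - x(0) = -theta * I(t_i).
theta_hat = -np.sum(I * (x - x[0])) / np.sum(I * I)
print(theta_hat)
```

The appeal of the integral form is that it avoids differentiating noisy data; consistency results like the record's sieve argument justify this plug-in use of the observed trajectory inside the integral.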

  15. Higher Order Corrections in the CoLoRFulNNLO Framework

    Science.gov (United States)

    Somogyi, G.; Kardos, A.; Szőr, Z.; Trócsányi, Z.

    We discuss the CoLoRFulNNLO method for computing higher order radiative corrections to jet cross sections in perturbative QCD. We apply our method to the calculation of event shapes and jet rates in three-jet production in electron-positron annihilation. We validate our code by comparing our predictions to previous results in the literature and present the jet cone energy fraction distribution at NNLO accuracy. We also present preliminary NNLO results for the three-jet rate using the Durham jet clustering algorithm matched to resummed predictions at NLL accuracy, and a comparison to LEP data.

  16. Effects of partial volume correction on discrimination between very early Alzheimer's dementia and controls using brain perfusion SPECT

    International Nuclear Information System (INIS)

    Kanetaka, Hidekazu; Matsuda, Hiroshi; Ohnishi, Takashi; Imabayashi, Etsuko; Tanaka, Fumiko; Asada, Takashi; Yamashita, Fumio; Nakano, Seigo; Takasaki, Masaru

    2004-01-01

    We assessed the accuracy of brain perfusion single-photon emission computed tomography (SPECT) in discriminating between patients with probable Alzheimer's disease (AD) at the very early stage and age-matched controls before and after partial volume correction (PVC). Three-dimensional MRI was used for PVC. We randomly divided the subjects into two groups. The first group, comprising 30 patients and 30 healthy volunteers, was used to identify the brain area with the most significant decrease in regional cerebral blood flow (rCBF) in patients compared with normal controls based on the voxel-based analysis of a group comparison. The second group, comprising 31 patients and 31 healthy volunteers, was used to study the improvement in diagnostic accuracy provided by PVC. A Z score map for a SPECT image of a subject was obtained by comparison with mean and standard deviation SPECT images of the healthy volunteers for each voxel after anatomical standardization and voxel normalization to global mean or cerebellar values using the following equation: Z score = ([control mean]-[individual value])/(control SD). Analysis of receiver operating characteristics curves for a Z score discriminating AD and controls in the posterior cingulate gyrus, where a significant decrease in rCBF was identified in the first group, showed that the PVC significantly enhanced the accuracy of the SPECT diagnosis of very early AD from 73.9% to 83.7% with global mean normalization. The PVC mildly enhanced the accuracy from 73.1% to 76.3% with cerebellar normalization. This result suggests that early diagnosis of AD requires PVC in a SPECT study. (orig.)
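The Z-score map defined by the record's equation is computed voxel-wise against the control group's mean and standard deviation images. A minimal sketch with toy arrays follows; the 2x2 "images" stand in for anatomically standardized, count-normalized SPECT volumes.

```python
import numpy as np

def z_score_map(subject, control_mean, control_sd):
    """Voxel-wise Z score = (control mean - individual value) / control SD,
    so hypoperfused voxels (below the control mean) get positive Z."""
    return (control_mean - subject) / control_sd

# Toy 2x2 "images" after anatomical standardization and voxel normalization.
control_mean = np.array([[50.0, 48.0], [52.0, 49.0]])
control_sd   = np.array([[ 5.0,  4.0], [ 6.0,  5.0]])
subject      = np.array([[40.0, 48.0], [52.0, 39.0]])

print(z_score_map(subject, control_mean, control_sd))
```

In the study, these Z values (e.g. in the posterior cingulate gyrus) feed the ROC analysis that discriminates very early AD from controls; partial volume correction is applied to the images before this step.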

  17. Physics Model-Based Scatter Correction in Multi-Source Interior Computed Tomography.

    Science.gov (United States)

    Gong, Hao; Li, Bin; Jia, Xun; Cao, Guohua

    2018-02-01

    Multi-source interior computed tomography (CT) has a great potential to provide ultra-fast and organ-oriented imaging at low radiation dose. However, X-ray cross scattering from multiple simultaneously activated X-ray imaging chains compromises imaging quality. Previously, we published two hardware-based scatter correction methods for multi-source interior CT. Here, we propose a software-based scatter correction method, with the benefit of no need for hardware modifications. The new method is based on a physics model and an iterative framework. The physics model was derived analytically, and was used to calculate X-ray scattering signals in both the forward direction and cross directions in multi-source interior CT. The physics model was integrated into an iterative scatter correction framework to reduce scatter artifacts. The method was applied to phantom data from both Monte Carlo simulations and physical experimentation that were designed to emulate the image acquisition in a multi-source interior CT architecture recently proposed by our team. The proposed scatter correction method reduced scatter artifacts significantly, even with only one iteration. Within a few iterations, the reconstructed images converged quickly toward the "scatter-free" reference images. After applying the scatter correction method, the maximum CT number error at the regions of interest (ROIs) was reduced to 46 HU in the numerical phantom dataset and 48 HU in the physical phantom dataset, and the contrast-to-noise ratio at those ROIs increased by up to 44.3% and 19.7%, respectively. The proposed physics model-based iterative scatter correction method could be useful for scatter correction in dual-source or multi-source CT.
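The iterative structure of such model-based scatter correction can be sketched as a fixed-point loop: estimate scatter from the current primary-signal estimate, subtract it from the measurement, and repeat. The toy scatter model below (a smoothed, scaled copy of the primary signal) is an illustrative assumption; the paper's model is derived analytically from the scattering physics.

```python
import numpy as np

def scatter_model(primary, kernel, frac=0.3):
    """Toy stand-in for a physics scatter model: a smoothed, scaled copy
    of the primary signal (the real model would be derived analytically)."""
    return frac * np.convolve(primary, kernel, mode="same")

def iterative_scatter_correction(measured, kernel, n_iter=10):
    """Fixed-point iteration: primary_est = measured - scatter(primary_est)."""
    primary = measured.copy()
    for _ in range(n_iter):
        primary = measured - scatter_model(primary, kernel)
    return primary

kernel = np.ones(9) / 9.0            # broad smoothing kernel
true_primary = np.zeros(64)
true_primary[20:44] = 1.0            # a simple object
measured = true_primary + scatter_model(true_primary, kernel)

recovered = iterative_scatter_correction(measured, kernel)
print(np.max(np.abs(recovered - true_primary)))
```

Because the scatter operator is a contraction here (scatter fraction below one), the loop converges geometrically, consistent with the record's observation that most of the artifact reduction happens within the first few iterations.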

  18. On Neglecting Chemical Exchange When Correcting in Vivo 31P MRS Data for Partial Saturation: Commentary on: ``Pitfalls in the Measurement of Metabolite Concentrations Using the One-Pulse Experiment in in Vivo NMR''

    Science.gov (United States)

    Ouwerkerk, Ronald; Bottomley, Paul A.

    2001-04-01

    This article replies to Spencer et al. (J. Magn. Reson. 149, 251-257, 2001) concerning the degree to which chemical exchange affects partial saturation corrections using saturation factors. Considering the important case of in vivo 31P NMR, we employ differential analysis to demonstrate a broad range of experimental conditions over which chemical exchange minimally affects saturation factors, and near-optimum signal-to-noise ratio is preserved. The analysis contradicts Spencer et al.'s broad claim that chemical exchange results in a strong dependence of saturation factors upon M0's and T1 and exchange parameters. For Spencer et al.'s example of a dynamic 31P NMR experiment in which phosphocreatine varies 20-fold, we show that our strategy of measuring saturation factors at the start and end of the study reduces errors in saturation corrections to 2% for the high-energy phosphates.

  19. A unified software framework for deriving, visualizing, and exploring abstraction networks for ontologies

    Science.gov (United States)

    Ochs, Christopher; Geller, James; Perl, Yehoshua; Musen, Mark A.

    2016-01-01

    Software tools play a critical role in the development and maintenance of biomedical ontologies. One important task that is difficult without software tools is ontology quality assurance. In previous work, we have introduced different kinds of abstraction networks to provide a theoretical foundation for ontology quality assurance tools. Abstraction networks summarize the structure and content of ontologies. One kind of abstraction network that we have used repeatedly to support ontology quality assurance is the partial-area taxonomy. It summarizes structurally and semantically similar concepts within an ontology. However, the use of partial-area taxonomies was ad hoc and not generalizable. In this paper, we describe the Ontology Abstraction Framework (OAF), a unified framework and software system for deriving, visualizing, and exploring partial-area taxonomy abstraction networks. The OAF includes support for various ontology representations (e.g., OWL and SNOMED CT's relational format). A Protégé plugin for deriving “live partial-area taxonomies” is demonstrated. PMID:27345947

  20. Partial correlation matrix estimation using ridge penalty followed by thresholding and re-estimation.

    Science.gov (United States)

    Ha, Min Jin; Sun, Wei

    2014-09-01

    Motivated by the problem of construction of gene co-expression network, we propose a statistical framework for estimating high-dimensional partial correlation matrix by a three-step approach. We first obtain a penalized estimate of a partial correlation matrix using ridge penalty. Next we select the non-zero entries of the partial correlation matrix by hypothesis testing. Finally we re-estimate the partial correlation coefficients at these non-zero entries. In the second step, the null distribution of the test statistics derived from penalized partial correlation estimates has not been established. We address this challenge by estimating the null distribution from the empirical distribution of the test statistics of all the penalized partial correlation estimates. Extensive simulation studies demonstrate the good performance of our method. Application on a yeast cell cycle gene expression data shows that our method delivers better predictions of the protein-protein interactions than the Graphic Lasso. © 2014, The International Biometric Society.
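The first two steps of the record's three-step approach can be sketched directly: invert a ridge-regularized covariance to get a penalized precision matrix, convert it to partial correlations, then select non-zero entries. A hard threshold below stands in for the paper's hypothesis-testing step (which calibrates the null distribution empirically), and the re-estimation step is omitted; the chain-structured data are synthetic.

```python
import numpy as np

def ridge_partial_correlation(X, lam=0.1):
    """Step 1: ridge-penalized precision estimate, converted to partial
    correlations via r_ij = -omega_ij / sqrt(omega_ii * omega_jj)."""
    S = np.cov(X, rowvar=False)
    omega = np.linalg.inv(S + lam * np.eye(S.shape[0]))
    d = np.sqrt(np.diag(omega))
    R = -omega / np.outer(d, d)
    np.fill_diagonal(R, 1.0)
    return R

def threshold(R, cut=0.1):
    """Step 2 (simplified): the paper selects entries by hypothesis testing
    against an empirically estimated null; a hard threshold stands in here."""
    R_sel = np.where(np.abs(R) >= cut, R, 0.0)
    np.fill_diagonal(R_sel, 1.0)
    return R_sel

rng = np.random.default_rng(1)
# Chain dependence 0 -> 1 -> 2: the (0,2) partial correlation should vanish.
z = rng.normal(size=(2000, 3))
X = np.empty_like(z)
X[:, 0] = z[:, 0]
X[:, 1] = X[:, 0] + 0.5 * z[:, 1]
X[:, 2] = X[:, 1] + 0.5 * z[:, 2]

R = threshold(ridge_partial_correlation(X, lam=0.01))
print(R)
```

Zero partial correlations correspond to missing edges in the Gaussian graphical model, which is what makes this pipeline usable for gene co-expression network construction.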

  1. Partially converted stereoscopic images and the effects on visual attention and memory

    Science.gov (United States)

    Kim, Sanghyun; Morikawa, Hiroyuki; Mitsuya, Reiko; Kawai, Takashi; Watanabe, Katsumi

    2015-03-01

    This study contained two experimental examinations of cognitive activities such as visual attention and memory in viewing stereoscopic (3D) images. For this study, partially converted 3D images were used, with binocular parallax added to a specific region of the image. In Experiment 1, change blindness was used as the presented stimulus. Visual attention and the impact on memory were investigated by measuring the response time to accomplish the given task. In the change blindness task, an 80 ms blank was interposed between the original and altered images, and the two images were presented alternatingly for 240 ms each. Subjects were asked to temporarily memorize the two switching images and to compare them, visually recognizing the difference between the two. The stimuli for four conditions (2D, 3D, partially converted 3D, distracted partially converted 3D) were randomly displayed for 20 subjects. The results of Experiment 1 showed that partially converted 3D images tend to attract visual attention and are prone to remain in the viewer's memory in the area where moderate negative parallax has been added. In order to examine the impact of a dynamic binocular disparity on partially converted 3D images, an evaluation experiment was conducted that applied learning, distraction, and recognition tasks for 33 subjects. The learning task involved memorizing the location of cells in a 5 × 5 matrix pattern using two different colors. Two cells were positioned with alternating colors, and one of the gray cells was moved up, down, left, or right by one cell width. The experimental conditions were a partially converted 3D condition in which a gray cell moved diagonally for a certain period of time with a dynamic binocular disparity added, a 3D condition in which binocular disparity was added to all gray cells, and a 2D condition. The correct response rates for recognition of each task after the distraction task were compared. The results of Experiment 2 showed that the correct

  2. Radiative corrections in K→3π decays

    International Nuclear Information System (INIS)

    Bissegger, M.; Fuhrer, A.; Gasser, J.; Kubis, B.; Rusetsky, A.

    2009-01-01

    We investigate radiative corrections to K→3π decays. In particular, we extend the non-relativistic framework developed recently to include real and virtual photons and show that, in a well-defined power counting scheme, the results reproduce corrections obtained in the relativistic calculation. Real photons are included exactly, beyond the soft-photon approximation, and we compare the result with the latter. The singularities generated by pionium near threshold are investigated, and a region is identified where standard perturbation theory in the fine structure constant α may be applied. We expect that the formulae provided allow one to extract S-wave ππ scattering lengths from the cusp effect in these decays with high precision

  3. A terahertz study of taurine: Dispersion correction and mode couplings

    Science.gov (United States)

    Dai, Zelin; Xu, Xiangdong; Gu, Yu; Li, Xinrong; Wang, Fu; Lian, Yuxiang; Fan, Kai; Cheng, Xiaomeng; Chen, Zhegeng; Sun, Minghui; Jiang, Yadong; Yang, Chun; Xu, Jimmy

    2017-03-01

    The low-frequency characteristics of polycrystalline taurine were studied experimentally by terahertz (THz) absorption spectroscopy and theoretically by ab initio density-functional simulations. Full optimizations with semi-empirical dispersion correction were performed in spectral computations and vibrational mode assignments. For comparison, partial optimizations with pure density functional theory were conducted in parallel. Results indicate that adding long-range dispersion correction to the standard DFT better reproduces the measured THz spectra than the popular partial optimizations. The main origins of the observed absorption features were also identified. Moreover, a coupled-oscillators model was proposed to explain the experimental observation of the unusual spectral blue-shift with the increase of temperature. Such coupled-oscillators model not only provides insights into the temperature dynamics of non-bonded interactions but also offers an opportunity to better understand the physical mechanisms behind the unusual THz spectral behaviors in taurine. Particularly, the simulation approach and novel coupled-oscillators model presented in this work are applicable to analyze the THz spectra of other molecular systems.

  4. Markov random field and Gaussian mixture for segmented MRI-based partial volume correction in PET

    International Nuclear Information System (INIS)

    Bousse, Alexandre; Thomas, Benjamin A; Erlandsson, Kjell; Hutton, Brian F; Pedemonte, Stefano; Ourselin, Sébastien; Arridge, Simon

    2012-01-01

    In this paper we propose a segmented magnetic resonance imaging (MRI) prior-based maximum penalized likelihood deconvolution technique for positron emission tomography (PET) images. The model assumes the existence of activity classes that behave like a hidden Markov random field (MRF) driven by the segmented MRI. We utilize a mean field approximation to compute the likelihood of the MRF. We tested our method on both simulated and clinical data (brain PET) and compared our results with PET images corrected with the re-blurred Van Cittert (VC) algorithm, the simplified Guven (SG) algorithm and the region-based voxel-wise (RBV) technique. We demonstrated that our algorithm outperforms the VC algorithm, and outperforms the SG and RBV corrections when the segmented MRI is inconsistent with the PET image (e.g. mis-segmentation, lesions). (paper)
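The region-based voxel-wise (RBV) correction used above as a comparison method can be sketched in a few lines: build a piecewise-constant synthetic image from the segmentation, blur it with the scanner's point-spread function (PSF), and multiply the observed image by the ratio of the two. The sketch below is a simplification under stated assumptions: a Gaussian PSF, and region means of the observed image standing in for the region activities that the full method obtains from a geometric transfer matrix.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def rbv_correct(pet, labels, sigma):
    """Simplified region-based voxel-wise (RBV) partial volume correction.

    pet    -- observed (PSF-blurred) PET image, any dimensionality
    labels -- integer segmentation of the same shape (e.g. from MRI)
    sigma  -- Gaussian PSF width in voxels (assumed PSF model)
    """
    # Piecewise-constant synthetic image: each region gets its mean
    # observed activity (a crude surrogate for GTM-derived activities).
    synthetic = np.zeros(pet.shape, dtype=float)
    for lab in np.unique(labels):
        mask = labels == lab
        synthetic[mask] = pet[mask].mean()
    blurred = gaussian_filter(synthetic, sigma)
    # Voxel-wise correction factor: unblurred over blurred synthetic image.
    return pet * synthetic / np.maximum(blurred, 1e-12)
```

On a 1D phantom (background activity 1, a 20-voxel region at activity 3, sigma = 2), the corrected values near the region edge move markedly closer to the true activity than the blurred input.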

  5. On using smoothing spline and residual correction to fuse rain gauge observations and remote sensing data

    Science.gov (United States)

    Huang, Chengcheng; Zheng, Xiaogu; Tait, Andrew; Dai, Yongjiu; Yang, Chi; Chen, Zhuoqi; Li, Tao; Wang, Zhonglei

    2014-01-01

    A partial thin-plate smoothing spline model is used to construct the trend surface. Correction of the spline-estimated trend surface is often necessary in practice. The Cressman weight is modified and applied in the residual correction. The modified Cressman weight performs better than the original Cressman weight. A method for estimating the error covariance matrix of the gridded field is provided.
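The record does not reproduce the paper's modified weight, so the following sketch implements only the classic Cressman weight, w(r) = (R² − r²)/(R² + r²) for r < R and 0 otherwise, used to spread gauge residuals (observation minus spline-estimated trend) onto grid points; the function names are illustrative.

```python
import numpy as np

def cressman_weights(dist, R):
    """Classic Cressman successive-correction weights (not the paper's
    modified form): (R^2 - r^2) / (R^2 + r^2) inside radius R, else 0."""
    return np.where(dist < R, (R**2 - dist**2) / (R**2 + dist**2), 0.0)

def residual_correction(grid_xy, gauge_xy, residuals, R):
    """Weighted average of gauge residuals at each grid point."""
    # Pairwise distances: grid points (rows) vs. gauges (columns).
    d = np.linalg.norm(grid_xy[:, None, :] - gauge_xy[None, :, :], axis=2)
    w = cressman_weights(d, R)
    wsum = w.sum(axis=1)
    # Grid points with no gauge inside radius R receive no correction.
    return np.where(wsum > 0,
                    (w * residuals).sum(axis=1) / np.maximum(wsum, 1e-12),
                    0.0)
```

The corrected field is then the spline trend surface plus this interpolated residual field.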

  6. Mantle ingredients for making the fingerprint of Etna alkaline magmas: implications for shallow partial melting within the complex geodynamic framework of Eastern Sicily

    Science.gov (United States)

    Viccaro, Marco; Zuccarello, Francesco

    2017-09-01

    able to produce magmas with variable compositions and volatile contents, which can then undergo distinct histories of ascent and evolution, leading to the wide range of eruptive styles observed at Mt. Etna volcano. With partial melting confined to the spinel facies of the mantle, our model implies that the source of Mt. Etna magmas might be rather shallow (<2 GPa; i.e., less than ca. 60 km), excluding the presence of deep, plume-like mantle structures responsible for magma generation. Partial melting should consequently occur as a response to mantle decompression within the framework of the regional tectonics affecting Eastern Sicily, which could be triggered by extensional tectonics and/or subduction-induced mantle upwelling.

  7. Computational acceleration for MR image reconstruction in partially parallel imaging.

    Science.gov (United States)

    Ye, Xiaojing; Chen, Yunmei; Huang, Feng

    2011-05-01

    In this paper, we present a fast numerical algorithm for solving total variation and l(1) (TVL1) based image reconstruction with application in partially parallel magnetic resonance imaging. Our algorithm uses a variable splitting method to reduce computational cost. Moreover, the Barzilai-Borwein step size selection method is adopted in our algorithm for much faster convergence. Experimental results on clinical partially parallel imaging data demonstrate that the proposed algorithm requires far fewer iterations and/or less computational cost than recently developed operator splitting and Bregman operator splitting methods, which can deal with a general sensing matrix in the reconstruction framework, to achieve similar or even better quality of reconstructed images.
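The Barzilai-Borwein rule mentioned above sets the step size from the two most recent iterates and gradients, α = ⟨s, s⟩ / ⟨s, y⟩ with s = x_k − x_{k−1} and y = ∇f(x_k) − ∇f(x_{k−1}), instead of a line search. A minimal sketch on a generic smooth objective (not the paper's TVL1 reconstruction itself; names are illustrative):

```python
import numpy as np

def bb_gradient_descent(grad, x0, iters=100):
    """Gradient descent with the Barzilai-Borwein (BB1) step size."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    alpha = 1e-3                              # small conservative first step
    for _ in range(iters):
        x_new = x - alpha * g
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g           # iterate / gradient differences
        alpha = (s @ s) / max(s @ y, 1e-12)   # BB1 rule: <s,s>/<s,y>
        x, g = x_new, g_new
    return x
```

On a strictly convex quadratic such as f(x) = ½xᵀAx − bᵀx with A = diag(1, 10), the iteration reaches the minimizer A⁻¹b quickly despite being nonmonotone, which is what makes the rule attractive for large reconstruction problems.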

  8. Hadron mass corrections in semi-inclusive deep inelastic scattering

    International Nuclear Information System (INIS)

    Accardi, A.; Hobbs, T.; Melnitchouk, W.

    2009-01-01

    We derive mass corrections for semi-inclusive deep inelastic scattering of leptons from nucleons using a collinear factorization framework which incorporates the initial state mass of the target nucleon and the final state mass of the produced hadron h. The hadron mass correction is made by introducing a generalized, finite-Q² scaling variable ζ_h for the hadron fragmentation function, which approaches the usual energy fraction z_h = E_h/ν in the Bjorken limit. We systematically examine the kinematic dependencies of the mass corrections to semi-inclusive cross sections, and find that these are even larger than for inclusive structure functions. The hadron mass corrections compete with the experimental uncertainties at kinematics typical of current facilities, Q² ∼ 2 GeV² and intermediate x_B > 0.3, and will be important to efforts at extracting parton distributions from semi-inclusive processes at intermediate energies.

  9. Partial Discharge Monitoring in Power Transformers Using Low-Cost Piezoelectric Sensors.

    Science.gov (United States)

    Castro, Bruno; Clerice, Guilherme; Ramos, Caio; Andreoli, André; Baptista, Fabricio; Campos, Fernando; Ulson, José

    2016-08-10

    Power transformers are crucial in an electric power system. Failures in transformers can affect the quality of, and cause interruptions in, the power supply. Partial discharges are a phenomenon that can cause failures in transformers if not properly monitored. Typically, the monitoring requires high-cost corrective maintenance or even interruptions of the power system. Therefore, the development of online non-invasive monitoring systems to detect partial discharges in power transformers has great relevance, since it can significantly reduce maintenance costs. Although commercial acoustic emission sensors have been used to monitor partial discharges in power transformers, they still represent a significant cost. In order to overcome this drawback, this paper presents a study of the feasibility of low-cost piezoelectric sensors to identify partial discharges in the mineral insulating oil of power transformers. The feasibility of the proposed low-cost sensor is analyzed by comparison with a commercial acoustic emission sensor commonly used to detect partial discharges. The responses of both sensors were compared in the time and frequency domains, and the experimental results indicate that the proposed piezoelectric sensors have great potential for detecting the acoustic waves generated by partial discharges in insulating oil, contributing to the popularization of this non-invasive technique.

  10. Partial Discharge Monitoring in Power Transformers Using Low-Cost Piezoelectric Sensors

    Directory of Open Access Journals (Sweden)

    Bruno Castro

    2016-08-01

    Full Text Available Power transformers are crucial in an electric power system. Failures in transformers can affect the quality of, and cause interruptions in, the power supply. Partial discharges are a phenomenon that can cause failures in transformers if not properly monitored. Typically, the monitoring requires high-cost corrective maintenance or even interruptions of the power system. Therefore, the development of online non-invasive monitoring systems to detect partial discharges in power transformers has great relevance, since it can significantly reduce maintenance costs. Although commercial acoustic emission sensors have been used to monitor partial discharges in power transformers, they still represent a significant cost. In order to overcome this drawback, this paper presents a study of the feasibility of low-cost piezoelectric sensors to identify partial discharges in the mineral insulating oil of power transformers. The feasibility of the proposed low-cost sensor is analyzed by comparison with a commercial acoustic emission sensor commonly used to detect partial discharges. The responses of both sensors were compared in the time and frequency domains, and the experimental results indicate that the proposed piezoelectric sensors have great potential for detecting the acoustic waves generated by partial discharges in insulating oil, contributing to the popularization of this non-invasive technique.

  11. Fundamental partial compositeness

    International Nuclear Information System (INIS)

    Sannino, Francesco; Strumia, Alessandro; Tesi, Andrea; Vigiani, Elena

    2016-01-01

    We construct renormalizable Standard Model extensions, valid up to the Planck scale, that give a composite Higgs from a new fundamental strong force acting on fermions and scalars. Yukawa interactions of these particles with Standard Model fermions realize the partial compositeness scenario. Under certain assumptions on the dynamics of the scalars, successful models exist because gauge quantum numbers of Standard Model fermions admit a minimal enough ‘square root’. Furthermore, right-handed SM fermions have an SU(2)_R-like structure, yielding a custodially-protected composite Higgs. Baryon and lepton numbers arise accidentally. Standard Model fermions acquire mass at tree level, while the Higgs potential and flavor violations are generated by quantum corrections. We further discuss accidental symmetries and other dynamical features stemming from the new strongly interacting scalars. If the same phenomenology can be obtained from models without our elementary scalars, they would reappear as composite states.

  12. a Conceptual Framework for Indoor Mapping by Using Grammars

    Science.gov (United States)

    Hu, X.; Fan, H.; Zipf, A.; Shang, J.; Gu, F.

    2017-09-01

    Maps are the foundation of indoor location-based services. Many automatic indoor mapping approaches have been proposed, but they rely heavily on sensor data, such as point clouds and users' location traces. To address this issue, this paper presents a conceptual framework to represent the layout principles of research buildings by using grammars. This framework can benefit the indoor mapping process by improving the accuracy of generated maps and by dramatically reducing the volume of sensor data required by traditional reconstruction approaches. In addition, we present details of some of the framework's core modules. An example using the proposed framework is given to show the generation process of a semantic map. This framework is part of ongoing research on an approach for reconstructing semantic maps.

  13. New applications of partial residual methodology

    International Nuclear Information System (INIS)

    Uslu, V.R.

    1999-12-01

    The formulation of a problem of interest in the framework of a statistical analysis starts with collecting the data, choosing a model, and making certain assumptions, as described in the basic paradigm by Box (1980). This stage is called model building. Then, in the estimation stage, the formulation of the problem is treated as if it were true in order to obtain estimates, perform tests and make inferences. In the final stage, called diagnostic checking, whether there are disagreements between the data and the fitted model is checked by using diagnostic measures and diagnostic plots. It is well known that statistical methods perform best when all assumptions related to the methods are satisfied; however, this ideal case is very difficult to attain in practice. Diagnostics, and diagnostic plots in particular, are therefore becoming important because they provide an immediate assessment. Partial residual plots, the main interest of the present study, play the major role among diagnostic plots in multiple regression analysis. In the statistical literature it is acknowledged that partial residual plots are more useful than ordinary residual plots in detecting outliers and nonconstant variance, and especially in discovering curvature. In this study we consider the partial residual methodology in statistical methods other than multiple regression. We show that, for the same purposes as in multiple regression, partial residual plots can be used in autoregressive time series models, transfer function models, linear mixed models and ridge regression. (author)
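For a concrete illustration, a partial residual plot for predictor j in ordinary least squares graphs e + β̂_j·x_j against x_j, where e are the fitted residuals; curvature in this plot suggests a needed transformation of x_j. A minimal sketch of the multiple regression case (not the author's implementation for the time series or mixed model extensions):

```python
import numpy as np

def partial_residuals(X, y, j):
    """Partial residuals for predictor j in an OLS fit with intercept:
    residuals + beta_j * x_j, to be plotted against x_j."""
    Xd = np.column_stack([np.ones(len(y)), X])   # prepend intercept column
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    return resid + beta[j + 1] * X[:, j]         # +1 skips the intercept
```

Scatter-plotting `X[:, j]` against the returned values reveals curvature that ordinary residual plots can hide; for an exactly linear response the partial residuals reduce to β_j·x_j.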

  14. Prediction in Partial Duration Series With Generalized Pareto-Distributed Exceedances

    DEFF Research Database (Denmark)

    Rosbjerg, Dan; Madsen, Henrik; Rasmussen, Peter Funder

    1992-01-01

    As a generalization of the common assumption of an exponential distribution of the exceedances in partial duration series, the generalized Pareto distribution has been adopted. Estimators for the parameters are presented using estimation by both the method of moments and probability-weighted moments...... distributions (with physically justified upper limit) the correct exceedance distribution should be applied despite a possible acceptance of the exponential assumption by a test of significance....
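In the Hosking-Wallis parameterization of the generalized Pareto distribution, F(x) = 1 − (1 − κx/α)^(1/κ), the mean and variance of the exceedances are α/(1+κ) and α²/[(1+κ)²(1+2κ)], which invert to simple method-of-moments estimators. A sketch under that parameterization (the record does not fix a convention, so this choice is an assumption):

```python
import numpy as np

def gpd_method_of_moments(exceedances):
    """Method-of-moments estimators for the generalized Pareto distribution,
    F(x) = 1 - (1 - kappa*x/alpha)**(1/kappa); kappa -> 0 recovers the
    exponential case. Inverts mean = alpha/(1+kappa) and
    var = alpha**2 / ((1+kappa)**2 * (1+2*kappa))."""
    m = exceedances.mean()
    v = exceedances.var(ddof=1)
    kappa = 0.5 * (m * m / v - 1.0)
    alpha = m * (1.0 + kappa)
    return kappa, alpha
```

For kappa > 0 the fitted exceedance distribution has the physically justified upper limit mentioned above, at x = alpha/kappa.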

  15. A modified occlusal wafer for managing partially dentate orthognathic patients--a case series.

    Science.gov (United States)

    Soneji, Bhavin Kiritkumar; Esmail, Zaid; Sharma, Pratik

    2015-03-01

    A multidisciplinary approach is essential in orthognathic surgery to achieve stable and successful outcomes. The model surgery planning is an important aspect in achieving the desired aims. An occlusal wafer used at the time of surgery aids the surgeon during correct placement of the jaws. When dealing with partially dentate patients, the design of the occlusal wafer requires modification to appropriately position the jaw. Two cases with partially dentate jaws are presented in which the occlusal wafer has been modified to provide stability at the time of surgery.

  16. Have You Paid Your Rent? Servant Leadership in Correctional Education

    Directory of Open Access Journals (Sweden)

    Alana Jeaniece Simmons

    2015-04-01

    Full Text Available This article makes the case for servant leadership as a model and as a philosophy to guide correctional educationalists on how to interact with their students. This article begins with an introduction to identify the gap in the literature with regards to the relationship between servant leadership and adult and correctional education. It continues with a summary of the 10 characteristics of servant leadership that parallel some of the characteristics that educators exhibit in the prison classroom, and explores how those characteristics impact the student population. We conclude by providing a framework by which further research should explore the link between servant leadership and correctional education in order to enhance classroom practices.

  17. QCD parton showers and NLO EW corrections to Drell-Yan

    CERN Document Server

    Richardson, P; Sapronov, A A; Seymour, M H; Skands, P Z

    2012-01-01

    We report on the implementation of an interface between the SANC generator framework for Drell-Yan hard processes, which includes next-to-leading order electroweak (NLO EW) corrections, and the Herwig++ and Pythia8 QCD parton shower Monte Carlos. A special aspect of this implementation is that the initial-state shower evolution in both shower generators has been augmented to handle the case of an incoming photon-in-a-proton, diagrams for which appear at the NLO EW level. The difference between shower algorithms leads to residual differences in the relative corrections of 2-3% in the p_T(mu) distributions at p_T(mu)>~50 GeV (where the NLO EW correction itself is of order 10%).

  18. Deriving implicit user feedback from partial URLs for effective web page retrieval

    NARCIS (Netherlands)

    Li, R.; van der Weide, Theo

    2010-01-01

    User click-throughs provide a search context for understanding the user need of complex information. This paper re-examines the effectiveness of this approach when based on partial clicked data using the language modeling framework. We expand the original query by topical terms derived from clicked

  19. Repeat-aware modeling and correction of short read errors.

    Science.gov (United States)

    Yang, Xiao; Aluru, Srinivas; Dorman, Karin S

    2011-02-15

    High-throughput short read sequencing is revolutionizing genomics and systems biology research by enabling cost-effective deep coverage sequencing of genomes and transcriptomes. Error detection and correction are crucial to many short read sequencing applications including de novo genome sequencing, genome resequencing, and digital gene expression analysis. Short read error detection is typically carried out by counting the observed frequencies of kmers in reads and validating those with frequencies exceeding a threshold. In the case of genomes with high repeat content, an erroneous kmer may be frequently observed if it has few nucleotide differences with valid kmers that occur multiple times in the genome. Error detection and correction have mostly been applied to genomes with low repeat content, and this remains a challenging problem for genomes with high repeat content. We develop a statistical model and a computational method for error detection and correction in the presence of genomic repeats. We propose a method to infer genomic frequencies of kmers from their observed frequencies by analyzing the misread relationships among observed kmers. We also propose a method to estimate the threshold useful for validating kmers whose estimated genomic frequency exceeds the threshold. We demonstrate that superior error detection is achieved using these methods. Furthermore, we break away from the common assumption of uniformly distributed errors within a read, and provide a framework to model position-dependent error occurrence frequencies common to many short read platforms. Lastly, we achieve better error correction in genomes with high repeat content. The software is implemented in C++ and is freely available under the GNU GPL3 license and the Boost Software V1.0 license at "http://aluru-sun.ece.iastate.edu/doku.php?id=redeem". We introduce a statistical framework to model sequencing errors in next-generation reads, which led to promising results in detecting and correcting errors.
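The baseline frequency-threshold scheme that the paper improves on can be sketched directly: count all k-mers across the reads and flag those below a count threshold as likely errors. This naive version omits the repeat-aware modeling that is the paper's contribution; names and parameters are illustrative.

```python
from collections import Counter

def flag_erroneous_kmers(reads, k, threshold):
    """Flag k-mers observed fewer than `threshold` times as likely errors
    (the naive baseline; it misfires on high-repeat genomes, which is the
    problem the paper addresses)."""
    counts = Counter()
    for read in reads:
        for i in range(len(read) - k + 1):
            counts[read[i:i + k]] += 1
    return {kmer for kmer, c in counts.items() if c < threshold}
```

With five error-free copies of a read and one copy carrying a single substitution, only the k-mers overlapping the substituted base fall below the threshold.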

  20. Meson-exchange-current corrections to magnetic moments in quantum hadrodynamics

    International Nuclear Information System (INIS)

    Morse, T.M.

    1990-01-01

    Corrections to the magnetic moments of the non-relativistic shell model (Schmidt lines) have a long history. In the early fifties, calculations of pion exchange and core polarization contributions to nuclear magnetic moments were initiated. These calculations matured by the early eighties to include other mesons and the delta isobar. Relativistic nuclear shell model calculations are relatively recent. Meson exchange and delta isobar current contributions to the magnetic moments of the relativistic shell model have remained largely unexplored. The disagreement between the valence values of spherical relativistic mean-field models and experiment was a major problem with early (1975-1985) quantum hadrodynamics (QHD) calculations of magnetic moments. Core polarization calculations (1986-1988) have been found to resolve the large discrepancy, predicting isoscalar magnetic moments to within typically five percent of experiment. The isovector magnetic moments, however, are about twice as far from experiment, with an average discrepancy of about ten percent. The pion, being the lightest of the mesons, has historically been expected to dominate isovector corrections. Because this has been found to be true in non-relativistic calculations, the author calculated the pion corrections in the framework of QHD. The seagull and in-flight pion exchange current diagram corrections to the magnetic moments of eight finite nuclei (plus or minus one valence nucleon from the magic A = 16 and A = 40 doubly closed shell systems) are calculated in the framework of QHD, and compared with earlier non-relativistic calculations and experiment

  1. Target mass corrections to electroweak structure functions and perturbative neutrino cross sections

    International Nuclear Information System (INIS)

    Kretzer, S.; Reno, M.H.

    2004-01-01

    We provide a complete and consistent framework to include subasymptotic perturbative as well as mass corrections to the leading twist (τ=2) evaluation of charged and neutral current weak structure functions and the perturbative neutrino cross sections. We reexamine previous calculations in a modern language and fill in the gaps that we find missing for a complete and ready-to-use 'NLO ξ-scaling' formulary. In particular, as a new result we formulate the mixing of the partonic and hadronic structure function tensor basis in the operator approach to deep inelastic scattering. As an underlying framework we follow the operator product expansion in the manner of Georgi and Politzer that allows the inclusion of target mass corrections at arbitrary order in QCD and we provide explicit analytical and numerical results at NLO. We compare this approach with a simpler collinear parton model approach to ξ scaling. Along with target mass corrections we include heavy quark mass effects as a calculable leading twist power suppressed correction. The complete corrections have been implemented into a Monte Carlo integration program to evaluate structure functions and/or integrated cross sections. As applications, we compare the operator approach with the collinear approximation numerically and we investigate the NLO and mass corrections to observables that are related to the extraction of the weak mixing angle from a Paschos-Wolfenstein-like relation in neutrino-iron scattering. We expect that the interpretation of neutrino scattering events in terms of oscillation physics and electroweak precision physics will benefit from our results

  2. Finite-volume and partial quenching effects in the magnetic polarizability of the neutron

    Science.gov (United States)

    Hall, J. M. M.; Leinweber, D. B.; Young, R. D.

    2014-03-01

    There has been much progress in the experimental measurement of the electric and magnetic polarizabilities of the nucleon. Similarly, lattice QCD simulations have recently produced dynamical QCD results for the magnetic polarizability of the neutron approaching the chiral regime. In order to compare the lattice simulations with experiment, calculation of partial quenching and finite-volume effects is required prior to an extrapolation in quark mass to the physical point. These dependencies are described using chiral effective field theory. Corrections to the partial quenching effects associated with the sea-quark-loop electric charges are estimated by modeling corrections to the pion cloud. These are compared to the uncorrected lattice results. In addition, the behavior of the finite-volume corrections as a function of pion mass is explored. Box sizes of approximately 7 fm are required to achieve a result within 5% of the infinite-volume result at the physical pion mass. A variety of extrapolations are shown at different box sizes, providing a benchmark to guide future lattice QCD calculations of the magnetic polarizabilities. A relatively precise value for the physical magnetic polarizability of the neutron is presented, β_n = 1.93(11)stat(11)sys × 10⁻⁴ fm³, which is in agreement with current experimental results.

  3. Minimal and non-minimal standard models: Universality of radiative corrections

    International Nuclear Information System (INIS)

    Passarino, G.

    1991-01-01

    The possibility of describing electroweak processes by means of models with a non-minimal Higgs sector is analyzed. The renormalization procedure which leads to a set of fitting equations for the bare parameters of the Lagrangian is first reviewed for the minimal standard model. A solution of the fitting equations is obtained which correctly includes large higher-order corrections. Predictions for physical observables, notably the W boson mass and the Z⁰ partial widths, are discussed in detail. Finally, the extension to non-minimal models is described under the assumption that new physics will appear only inside the vector boson self-energies, and the concept of universality of radiative corrections is introduced, showing that to a large extent they are insensitive to the details of the enlarged Higgs sector. Consequences for the bounds on the top quark mass are also discussed. (orig.)

  4. A CONCEPTUAL FRAMEWORK FOR INDOOR MAPPING BY USING GRAMMARS

    Directory of Open Access Journals (Sweden)

    X. Hu

    2017-09-01

    Full Text Available Maps are the foundation of indoor location-based services. Many automatic indoor mapping approaches have been proposed, but they rely heavily on sensor data, such as point clouds and users' location traces. To address this issue, this paper presents a conceptual framework to represent the layout principles of research buildings by using grammars. This framework can benefit the indoor mapping process by improving the accuracy of generated maps and by dramatically reducing the volume of sensor data required by traditional reconstruction approaches. In addition, we present details of some of the framework's core modules. An example using the proposed framework is given to show the generation process of a semantic map. This framework is part of ongoing research on an approach for reconstructing semantic maps.

  5. Case Study: Effects of a Partial-Debris Dam on Riverbank Erosion in the Parlung Tsangpo River, China

    Directory of Open Access Journals (Sweden)

    Clarence Edward Choi

    2018-02-01

    Full Text Available This paper examines two successive debris flows that deposited a total of 1.4 million m3 of sediment into the Parlung Tsangpo River in China in 2010. As a result of these deposits, a partial-debris dam was formed in the river. This dam rerouted the discharge in the river along one of the riverbanks, which supported a highway. The rerouted discharge eroded the riverbank and the highway eventually collapsed. To enhance our understanding of the threat posed by partial-debris dams, a field investigation was carried out to measure the discharge in the river and to collect soil samples of the collapsed riverbank. Findings from the field investigation were then used to back-analyze fluvial erosion along the riverbank using a combined erosion framework proposed in this study. This combined framework adopts a dam-breach erosion model which can capture the progressive nature of fluvial erosion by considering the particle size distribution of the soil being eroded. The results from the back-analysis were then evaluated against unique high-resolution images obtained from satellites. This case study not only highlights the consequences of the formation of partial-debris dams on nearby infrastructure, but it also proposes the use of a combined erosion framework to provide a first-order assessment of riverbank stability. Unique high-resolution satellite images are used to assess the proposed erosion framework and key challenges in assessing erosion are discussed.

  6. Complete one-loop electroweak corrections to ZZZ production at the ILC

    International Nuclear Information System (INIS)

    Su Jijuan; Ma Wengan; Zhang Renyou; Wang Shaoming; Guo Lei

    2008-01-01

    We study the complete O(α_ew) electroweak (EW) corrections to the production of three Z⁰ bosons in the framework of the standard model (SM) at the ILC. The leading-order and the EW next-to-leading-order corrected cross sections are presented, and their dependence on the colliding energy √(s) and the Higgs-boson mass m_H is analyzed. We also investigate the LO and one-loop EW corrected distributions of the transverse momentum of the final Z⁰ boson, and the invariant mass of the Z⁰Z⁰ pair. Our numerical results show that the EW one-loop correction generally suppresses the tree-level cross section, and the relative correction with m_H = 120 GeV (150 GeV) varies between -15.8% (-13.9%) and -7.5% (-6.2%) when √(s) goes up from 350 GeV to 1 TeV.

  7. Partial control of chaotic transients using escape times

    International Nuclear Information System (INIS)

    Sabuco, Juan; Zambrano, Samuel; Sanjuan, Miguel A F

    2010-01-01

    The partial control technique allows one to keep the trajectories of a dynamical system inside a region where there is a chaotic saddle and from which nearly all trajectories diverge. Its main advantage is that this goal is achieved even if the corrections applied to the trajectories are smaller than the action of environmental noise on the dynamics, a counterintuitive result that is obtained by using certain safe sets. Using the Hénon map as a paradigm, we show here the deep relationship between the safe sets and the sets of points with different escape times, the escape time sets. Furthermore, we show that it is possible to find certain extended safe sets that can be used instead of the safe sets in the partial control technique. Numerical simulations confirm our findings and show that in some situations the use of extended safe sets can be more advantageous.
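The escape time sets discussed here are straightforward to compute numerically: iterate the map and record how many steps an orbit takes to leave a bounded region. A sketch for the Hénon map at the classical parameters (the region bound and iteration cap are illustrative choices, not the paper's):

```python
def henon_escape_time(x, y, a=1.4, b=0.3, max_iter=100, bound=4.0):
    """Number of iterations before the Henon orbit started at (x, y) leaves
    the box |x|, |y| <= bound; orbits still inside after max_iter are
    treated as non-escaping and reported as max_iter."""
    for n in range(max_iter):
        if abs(x) > bound or abs(y) > bound:
            return n
        x, y = 1.0 - a * x * x + y, b * x   # Henon map iteration
    return max_iter
```

Evaluating this over a grid of initial conditions partitions the region into the escape time sets whose relationship to safe sets the paper studies.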

  8. The correction of occlusal vertical dimension on tooth wear

    Directory of Open Access Journals (Sweden)

    Rostiny Rostiny

    2007-12-01

    Full Text Available The loss of occlusal vertical dimension caused by tooth wear needs to be treated to regain the vertical dimension. Corrective therapy should be performed as early as possible. In this case, a simple and relatively low-cost therapy was performed. In severe loss of occlusal vertical dimension, a removable partial denture can be used, together with lengthening of the anterior teeth using composite resin, to regain the occlusal vertical dimension.

  9. Diffusion corrections to the hard pomeron

    CERN Document Server

    Ciafaloni, Marcello; Müller, A H; Ciafaloni, Marcello; Taiuti, Martina

    2001-01-01

    The high-energy behaviour of two-scale hard processes is investigated in the framework of small-x models with running coupling, having the Airy diffusion model as prototype. We show that, in some intermediate high-energy regime, the perturbative hard Pomeron exponent determines the energy dependence, and we prove that diffusion corrections have the form hinted at before in particular cases. We also discuss the breakdown of such regime at very large energies, and the onset of the non-perturbative Pomeron behaviour.

  10. Decaying states as physically nonisolable partial systems

    International Nuclear Information System (INIS)

    Szasz, G.I.

    1976-01-01

Investigations of decaying quantum mechanical systems presently lack a well-founded concept, which is reflected in several formal difficulties of the corresponding mathematical treatment. To clarify the situation in some respects, the resonant scattering of an initially well-localized partial wave packet is investigated within the framework of nonrelativistic quantum mechanics. If the potential decreases sufficiently fast for r → ∞, the wave packet can be expressed, at sufficiently long times after the scattering has taken place, as the sum of a term describing the direct scattering and a function of the resonant solution with complex 'momentum'. From such a heuristic relation one can deduce not only the probability for the creation of unstable particles but also obtain some hints of a connection between decaying states and physically nonisolable partial systems. On the other hand, this connection can perhaps display the inadequacy of attempts to solve the problem of decaying states within the usual Hilbert space methods. (author)

  11. Path integral solution of linear second order partial differential equations I: the general construction

    International Nuclear Information System (INIS)

    LaChapelle, J.

    2004-01-01

    A path integral is presented that solves a general class of linear second order partial differential equations with Dirichlet/Neumann boundary conditions. Elementary kernels are constructed for both Dirichlet and Neumann boundary conditions. The general solution can be specialized to solve elliptic, parabolic, and hyperbolic partial differential equations with boundary conditions. This extends the well-known path integral solution of the Schroedinger/diffusion equation in unbounded space. The construction is based on a framework for functional integration introduced by Cartier/DeWitt-Morette
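The flavour of such path-integral representations can be illustrated in the simplest setting: for the Dirichlet problem of Laplace's equation, the solution at a point equals the expected boundary value seen by a random walk started there (the Feynman-Kac picture). The sketch below is an illustrative Monte Carlo estimate in one dimension, not the elementary kernels constructed in the paper:

```python
import random

def dirichlet_laplace_1d(x, n_walks=5000, step=0.05):
    """Estimate u(x) solving u'' = 0 on (0, 1) with u(0) = 0, u(1) = 1
    by averaging the boundary value hit by symmetric random walks from x.
    The exact solution is u(x) = x."""
    hits_right = 0
    for _ in range(n_walks):
        pos = x
        while 0.0 < pos < 1.0:
            pos += step if random.random() < 0.5 else -step
        if pos >= 1.0:
            hits_right += 1  # walk exited through the boundary where u = 1
    return hits_right / n_walks

random.seed(0)
u_mid = dirichlet_laplace_1d(0.5)  # should be close to 0.5
```

The same expectation-over-paths structure, with suitable measures and boundary kernels, underlies the general construction described in the abstract.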

  12. Radiative corrections in K{yields}3{pi} decays

    Energy Technology Data Exchange (ETDEWEB)

    Bissegger, M. [Institute for Theoretical Physics, University of Bern, Sidlerstr. 5, CH-3012 Bern (Switzerland); Fuhrer, A. [Institute for Theoretical Physics, University of Bern, Sidlerstr. 5, CH-3012 Bern (Switzerland); Physics Department, University of California, San Diego, 9500 Gilman Drive, La Jolla, CA 92093-0319 (United States); Gasser, J. [Institute for Theoretical Physics, University of Bern, Sidlerstr. 5, CH-3012 Bern (Switzerland); Kubis, B. [Helmholtz-Institut fuer Strahlen-und Kernphysik, Universitaet Bonn, Nussallee 14-16, D-53115 Bonn (Germany)], E-mail: kubis@itkp.uni-bonn.de; Rusetsky, A. [Helmholtz-Institut fuer Strahlen-und Kernphysik, Universitaet Bonn, Nussallee 14-16, D-53115 Bonn (Germany)

    2009-01-01

    We investigate radiative corrections to K{yields}3{pi} decays. In particular, we extend the non-relativistic framework developed recently to include real and virtual photons and show that, in a well-defined power counting scheme, the results reproduce corrections obtained in the relativistic calculation. Real photons are included exactly, beyond the soft-photon approximation, and we compare the result with the latter. The singularities generated by pionium near threshold are investigated, and a region is identified where standard perturbation theory in the fine structure constant {alpha} may be applied. We expect that the formulae provided allow one to extract S-wave {pi}{pi} scattering lengths from the cusp effect in these decays with high precision.

  13. Features of an Error Correction Memory to Enhance Technical Texts Authoring in LELIE

    Directory of Open Access Journals (Sweden)

    Patrick SAINT-DIZIER

    2015-12-01

Full Text Available In this paper, we investigate the notion of error correction memory applied to technical texts. The main purpose is to introduce flexibility and context sensitivity into the detection and correction of errors related to Constrained Natural Language (CNL) principles. This is realized by enhancing error detection paired with relatively generic correction patterns and contextual correction recommendations. Patterns are induced from previous corrections made by technical writers for a given type of text. The impact of such an error correction memory is also investigated from the point of view of the technical writer's cognitive activity. The notion of error correction memory is developed within the framework of the LELIE project; an experiment is carried out on the case of fuzzy lexical items and negation, which are both major problems in technical writing. Language processing and knowledge representation aspects are developed, together with evaluation directions.

  14. Correcting human heart 31P NMR spectra for partial saturation. Evidence that saturation factors for PCr/ATP are homogeneous in normal and disease states

    Science.gov (United States)

    Bottomley, Paul A.; Hardy, Christopher J.; Weiss, Robert G.

Heart PCr/ATP ratios measured from spatially localized 31P NMR spectra can be corrected for partial saturation effects using saturation factors derived from unlocalized chest surface-coil spectra acquired at the heart rate and approximate Ernst angle for phosphocreatine (PCr) and again under fully relaxed conditions during each 31P exam. To validate this approach in studies of normal and disease states, where the possibility of heterogeneity in metabolite T1 values between both chest muscle and heart and between normal and disease states exists, the properties of saturation factors for metabolite ratios were investigated theoretically under conditions applicable in typical cardiac spectroscopy exams, and empirically using data from 82 cardiac 31P exams in six study groups comprising normal controls ( n = 19) and patients with dilated ( n = 20) and hypertrophic ( n = 5) cardiomyopathy, coronary artery disease ( n = 16), heart transplants ( n = 19), and valvular heart disease ( n = 3). When TR ≪ T1(PCr), with T1(PCr) ⩾ T1(ATP), the saturation factor for PCr/ATP lies in the range 1.5 ± 0.5, regardless of the T1 values. The precise value depends on the ratio of metabolite T1 values rather than their absolute values and is insensitive to modest changes in TR. Published data suggest that the metabolite T1 ratio is the same in heart and muscle. Our empirical data reveal that the saturation factors do not vary significantly with disease state, nor with the relative fractions of muscle and heart contributing to the chest surface-coil spectra. Also, the corrected myocardial PCr/ATP ratios in each normal or disease state bear no correlation with the corresponding saturation factors or the fraction of muscle in the unlocalized chest spectra. However, application of the saturation correction (mean value, 1.36 ± 0.03 SE) significantly reduced scatter in myocardial PCr/ATP data by 14 ± 11% (SD) ( p ⩽ 0.05). The findings suggest that the relative T1 values of PCr and ATP are
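The behaviour described above can be reproduced numerically with the standard steady-state saturation formula. In the sketch below (illustrative only; the T1 values, TR, and flip angle are assumptions, not values from the paper), the saturation factor for PCr/ATP depends mainly on the ratio of the two T1 values rather than on their absolute magnitudes:

```python
import math

def saturated_fraction(t1, tr, flip_deg):
    """Steady-state signal fraction S/M0 for repeated spoiled excitation:
    sin(a) * (1 - E) / (1 - E*cos(a)), with E = exp(-TR/T1)."""
    e = math.exp(-tr / t1)
    a = math.radians(flip_deg)
    return math.sin(a) * (1.0 - e) / (1.0 - e * math.cos(a))

def saturation_factor(t1_pcr, t1_atp, tr, flip_deg):
    """Factor multiplying a measured PCr/ATP ratio to undo partial saturation."""
    return saturated_fraction(t1_atp, tr, flip_deg) / saturated_fraction(t1_pcr, tr, flip_deg)

sf = saturation_factor(t1_pcr=4.0, t1_atp=2.0, tr=1.0, flip_deg=60.0)         # assumed T1s
sf_scaled = saturation_factor(t1_pcr=6.0, t1_atp=3.0, tr=1.0, flip_deg=60.0)  # same T1 ratio
```

With TR much shorter than both T1 values, scaling both T1s together changes the factor only slightly, consistent with the abstract's claim that the factor tracks the T1 ratio.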

  15. Clinical Fit of Partial Removable Dental Prostheses Based on Alginate or Polyvinyl Siloxane Impressions.

    NARCIS (Netherlands)

    Fokkinga, W.A.; Witter, D.J.; Bronkhorst, E.M.; Creugers, N.H.J.

    2017-01-01

    PURPOSE: The aim of this study was to analyze the clinical fit of metal-frame partial removable dental prostheses (PRDPs) based on custom trays used with alginate or polyvinyl siloxane impression material. MATERIALS AND METHODS: Fifth-year students of the Nijmegen Dental School made 25 correct

  16. Modeling treatment of ischemic heart disease with partially observable Markov decision processes.

    Science.gov (United States)

    Hauskrecht, M; Fraser, H

    1998-01-01

    Diagnosis of a disease and its treatment are not separate, one-shot activities. Instead they are very often dependent and interleaved over time, mostly due to uncertainty about the underlying disease, uncertainty associated with the response of a patient to the treatment and varying cost of different diagnostic (investigative) and treatment procedures. The framework of Partially observable Markov decision processes (POMDPs) developed and used in operations research, control theory and artificial intelligence communities is particularly suitable for modeling such a complex decision process. In the paper, we show how the POMDP framework could be used to model and solve the problem of the management of patients with ischemic heart disease, and point out modeling advantages of the framework over standard decision formalisms.
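The core of the POMDP machinery relied on here is the belief update: after an action and an observation, the probability distribution over hidden disease states is revised by Bayes' rule. A minimal two-state sketch (the transition and observation probabilities are invented for illustration, not taken from the paper):

```python
def belief_update(belief, transition, obs_prob, observation):
    """POMDP belief update: b'(s') ∝ P(o | s') * sum_s P(s' | s) * b(s)."""
    n = len(belief)
    predicted = [sum(transition[s][s2] * belief[s] for s in range(n)) for s2 in range(n)]
    unnorm = [obs_prob[s2][observation] * predicted[s2] for s2 in range(n)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

# States: 0 = no ischemia, 1 = ischemia. Observations: 0 = negative test, 1 = positive.
belief = [0.8, 0.2]                # assumed prior over the hidden disease state
T = [[1.0, 0.0], [0.0, 1.0]]       # a purely diagnostic action leaves the disease unchanged
O = [[0.9, 0.1], [0.15, 0.85]]     # assumed specificity/sensitivity of the test
posterior = belief_update(belief, T, O, observation=1)
```

A positive stress test shifts the belief toward the ischemic state (from 0.2 to 0.68 with these numbers); a POMDP policy then chooses the next investigative or treatment action as a function of this belief.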

  17. Planning treatment of ischemic heart disease with partially observable Markov decision processes.

    Science.gov (United States)

    Hauskrecht, M; Fraser, H

    2000-03-01

    Diagnosis of a disease and its treatment are not separate, one-shot activities. Instead, they are very often dependent and interleaved over time. This is mostly due to uncertainty about the underlying disease, uncertainty associated with the response of a patient to the treatment and varying cost of different diagnostic (investigative) and treatment procedures. The framework of partially observable Markov decision processes (POMDPs) developed and used in the operations research, control theory and artificial intelligence communities is particularly suitable for modeling such a complex decision process. In this paper, we show how the POMDP framework can be used to model and solve the problem of the management of patients with ischemic heart disease (IHD), and demonstrate the modeling advantages of the framework over standard decision formalisms.

  18. Modeling Phase-transitions Using a High-performance, Isogeometric Analysis Framework

    KAUST Repository

    Vignal, Philippe; Dalcin, Lisandro; Collier, Nathan; Calo, Victor M.

    2014-01-01

    In this paper, we present a high-performance framework for solving partial differential equations using Isogeometric Analysis, called PetIGA, and show how it can be used to solve phase-field problems. We specifically chose the Cahn-Hilliard equation

  19. Modifying Spearman's Attenuation Equation to Yield Partial Corrections for Measurement Error--With Application to Sample Size Calculations

    Science.gov (United States)

    Nicewander, W. Alan

    2018-01-01

Spearman's correction for attenuation (measurement error) corrects a correlation coefficient for measurement errors in either or both of two variables, and follows from the assumptions of classical test theory. Spearman's equation removes all measurement error from a correlation coefficient, which translates into "increasing the reliability of…
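For reference, Spearman's classical correction divides the observed correlation by the square root of the product of the two reliabilities; a partial correction, as the title suggests, corrects for error in only one of the variables. A sketch under those standard formulas (the numeric values are invented):

```python
import math

def disattenuate(r_xy, rel_x=1.0, rel_y=1.0):
    """Spearman's correction: r_true = r_xy / sqrt(rel_x * rel_y).
    Passing rel=1.0 for a variable leaves its measurement error uncorrected,
    which yields a partial correction."""
    return r_xy / math.sqrt(rel_x * rel_y)

r_full = disattenuate(0.30, rel_x=0.80, rel_y=0.70)   # correct both variables
r_partial = disattenuate(0.30, rel_y=0.70)            # correct the criterion only
```

The partially corrected value sits between the observed and the fully disattenuated correlation, which is the quantity relevant for the sample-size calculations mentioned in the title.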

  20. Local linear density estimation for filtered survival data, with bias correction

    DEFF Research Database (Denmark)

    Nielsen, Jens Perch; Tanggaard, Carsten; Jones, M.C.

    2009-01-01

    it comes to exposure robustness, and a simple alternative weighting is to be preferred. Indeed, this weighting has, effectively, to be well chosen in a 'pilot' estimator of the survival function as well as in the main estimator itself. We also investigate multiplicative and additive bias-correction methods...... within our framework. The multiplicative bias-correction method proves to be the best in a simulation study comparing the performance of the considered estimators. An example concerning old-age mortality demonstrates the importance of the improvements provided....

  1. Local Linear Density Estimation for Filtered Survival Data with Bias Correction

    DEFF Research Database (Denmark)

    Tanggaard, Carsten; Nielsen, Jens Perch; Jones, M.C.

    it comes to exposure robustness, and a simple alternative weighting is to be preferred. Indeed, this weighting has, effectively, to be well chosen in a ‘pilot' estimator of the survival function as well as in the main estimator itself. We also investigate multiplicative and additive bias correction methods...... within our framework. The multiplicative bias correction method proves to be best in a simulation study comparing the performance of the considered estimators. An example concerning old age mortality demonstrates the importance of the improvements provided....

  2. In-medium effects in K+ scattering versus Glauber model with noneikonal corrections

    International Nuclear Information System (INIS)

    Eliseev, S.M.; Rihan, T.H.

    1996-01-01

The discrepancy between the experimental and the theoretical ratio R of the total cross sections, R = σ(K⁺-¹²C)/6σ(K⁺-d), at momenta up to 800 MeV/c is discussed in the framework of the Glauber multiple scattering approach. It is shown that various corrections, such as adopting relativistic K⁺-N amplitudes as well as noneikonal corrections, seem to fail in reproducing the experimental data, especially at higher momenta. 17 refs., 1 fig

  3. Determination of correction factor of radioelement content data generated from Exploranium GR-320 Gamma spectrometer

    International Nuclear Information System (INIS)

    Nasrun, S; Syamsul-Hadi, M; Sumardi

    2000-01-01

The Exploranium GR-320 gamma-ray spectrometer is a radiometric survey instrument able to measure radioelement content directly in the field, based on the partial gamma-ray energies of the elements. Because the instrument is new and was granted by the IAEA, a correction factor needed to be established for it in order to obtain better data. The correction factor was generated by comparing the gamma spectrometer's radioelement contents to the chemically analysed data of a calibration pad. The correction factor for potassium (K) is 1.31, for uranium 1.46, and for thorium 0.39
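Applying such calibration factors is a one-line rescaling: the instrument reading is multiplied by the element's factor (chemically analysed content divided by measured content). The sketch below uses the factors quoted in the abstract; the example readings are invented:

```python
# Correction factors from the abstract: corrected = measured * factor.
CORRECTION = {"K": 1.31, "U": 1.46, "Th": 0.39}

def correct_reading(element, measured):
    """Rescale a GR-320 radioelement reading by its calibration-pad factor."""
    return measured * CORRECTION[element]

# Hypothetical raw readings of 10.0 (arbitrary units) for each element.
corrected = {el: correct_reading(el, 10.0) for el in CORRECTION}
```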

  4. Multi-jet merged top-pair production including electroweak corrections

    Science.gov (United States)

    Gütschow, Christian; Lindert, Jonas M.; Schönherr, Marek

    2018-04-01

We present theoretical predictions for the production of top-quark pairs in association with jets at the LHC including electroweak (EW) corrections. First, we present and compare differential predictions at the fixed-order level for tt̄ and tt̄ + jet production at the LHC, considering the dominant NLO EW corrections of order O(α_s²α) and O(α_s³α) respectively, together with all additional subleading Born and one-loop contributions. The NLO EW corrections are enhanced at large energies and in particular alter the shape of the top transverse momentum distribution, whose reliable modelling is crucial for many searches for new physics at the energy frontier. Based on the fixed-order results we motivate an approximation of the EW corrections valid at the percent level, which allows us to readily incorporate the EW corrections in the MEPS@NLO framework of Sherpa combined with OpenLoops. Subsequently, we present multi-jet merged parton-level predictions for inclusive top-pair production incorporating NLO QCD + EW corrections to tt̄ and tt̄ + jet. Finally, we compare at the particle level against a recent 8 TeV measurement of the top transverse momentum distribution performed by ATLAS in the lepton + jet channel. We find very good agreement between the Monte Carlo prediction and the data when the EW corrections are included.

  5. Thermodynamics in modified Brans-Dicke gravity with entropy corrections

    Energy Technology Data Exchange (ETDEWEB)

    Rani, Shamaila; Jawad, Abdul; Nawaz, Tanzeela [COMSATS Institute of Information Technology, Department of Mathematics, Lahore (Pakistan); Manzoor, Rubab [University of Management and Technology, Department of Mathematics, Lahore (Pakistan)

    2018-01-15

In this paper, we investigate thermodynamics in the framework of the recently proposed modified Brans-Dicke gravity (Kofinas et al. in Class Quantum Gravity 33:15, 2016). For this purpose, we develop the generalized second law of thermodynamics by assuming the usual entropy as well as its corrected forms (logarithmic and power-law corrected) on the apparent and event horizons. To obtain a clear view of the thermodynamic law, power-law forms of the scalar field and the scale factor are assumed. We evaluate the results graphically and find that the generalized second law of thermodynamics holds in most of the cases. (orig.)

  6. Thermodynamics in modified Brans-Dicke gravity with entropy corrections

    International Nuclear Information System (INIS)

    Rani, Shamaila; Jawad, Abdul; Nawaz, Tanzeela; Manzoor, Rubab

    2018-01-01

In this paper, we investigate thermodynamics in the framework of the recently proposed modified Brans-Dicke gravity (Kofinas et al. in Class Quantum Gravity 33:15, 2016). For this purpose, we develop the generalized second law of thermodynamics by assuming the usual entropy as well as its corrected forms (logarithmic and power-law corrected) on the apparent and event horizons. To obtain a clear view of the thermodynamic law, power-law forms of the scalar field and the scale factor are assumed. We evaluate the results graphically and find that the generalized second law of thermodynamics holds in most of the cases. (orig.)

  7. [MEG]PLS: A pipeline for MEG data analysis and partial least squares statistics.

    Science.gov (United States)

    Cheung, Michael J; Kovačević, Natasa; Fatima, Zainab; Mišić, Bratislav; McIntosh, Anthony R

    2016-01-01

    The emphasis of modern neurobiological theories has recently shifted from the independent function of brain areas to their interactions in the context of whole-brain networks. As a result, neuroimaging methods and analyses have also increasingly focused on network discovery. Magnetoencephalography (MEG) is a neuroimaging modality that captures neural activity with a high degree of temporal specificity, providing detailed, time varying maps of neural activity. Partial least squares (PLS) analysis is a multivariate framework that can be used to isolate distributed spatiotemporal patterns of neural activity that differentiate groups or cognitive tasks, to relate neural activity to behavior, and to capture large-scale network interactions. Here we introduce [MEG]PLS, a MATLAB-based platform that streamlines MEG data preprocessing, source reconstruction and PLS analysis in a single unified framework. [MEG]PLS facilitates MRI preprocessing, including segmentation and coregistration, MEG preprocessing, including filtering, epoching, and artifact correction, MEG sensor analysis, in both time and frequency domains, MEG source analysis, including multiple head models and beamforming algorithms, and combines these with a suite of PLS analyses. The pipeline is open-source and modular, utilizing functions from FieldTrip (Donders, NL), AFNI (NIMH, USA), SPM8 (UCL, UK) and PLScmd (Baycrest, CAN), which are extensively supported and continually developed by their respective communities. [MEG]PLS is flexible, providing both a graphical user interface and command-line options, depending on the needs of the user. A visualization suite allows multiple types of data and analyses to be displayed and includes 4-D montage functionality. [MEG]PLS is freely available under the GNU public license (http://meg-pls.weebly.com). Copyright © 2015 Elsevier Inc. All rights reserved.
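The PLS step at the heart of the pipeline finds loading vectors that maximize covariance between the neural data and behavioural or design variables. A minimal one-component, NIPALS-style sketch in NumPy (illustrative only; [MEG]PLS itself wraps the PLScmd implementation):

```python
import numpy as np

def pls_first_component(X, y):
    """First PLS loading w (unit norm, maximizing cov(Xw, y)) and scores t = Xw."""
    Xc = X - X.mean(axis=0)   # center predictors (e.g. sensor/source features)
    yc = y - y.mean()         # center the behavioural/design variable
    w = Xc.T @ yc             # covariance direction between data and response
    w /= np.linalg.norm(w)
    t = Xc @ w                # latent scores, one per observation
    return w, t

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
y = X[:, 0] + 0.1 * rng.standard_normal(100)  # response driven by the first feature
w, t = pls_first_component(X, y)
```

In the neuroimaging setting, the loading pattern w is the distributed spatiotemporal signature and the scores t quantify how strongly each subject or trial expresses it.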

  8. Direct Parametric Reconstruction With Joint Motion Estimation/Correction for Dynamic Brain PET Data.

    Science.gov (United States)

    Jiao, Jieqing; Bousse, Alexandre; Thielemans, Kris; Burgos, Ninon; Weston, Philip S J; Schott, Jonathan M; Atkinson, David; Arridge, Simon R; Hutton, Brian F; Markiewicz, Pawel; Ourselin, Sebastien

    2017-01-01

Direct reconstruction of parametric images from raw photon counts has been shown to improve the quantitative analysis of dynamic positron emission tomography (PET) data. However, it suffers from subject motion, which is inevitable during the typical acquisition time of 1-2 hours. In this work we propose a framework to jointly estimate subject head motion and reconstruct the motion-corrected parametric images directly from raw PET data, so that the effects of distorted tissue-to-voxel mapping due to subject motion can be reduced in reconstructing the parametric images with motion-compensated attenuation correction and spatially aligned temporal PET data. The proposed approach is formulated within the maximum likelihood framework, and efficient solutions are derived for estimating subject motion and kinetic parameters from raw PET photon count data. Results from evaluations on simulated [11C]raclopride data using the Zubal brain phantom and real clinical [18F]florbetapir data of a patient with Alzheimer's disease show that the proposed joint direct parametric reconstruction and motion correction approach can improve the accuracy of quantifying dynamic PET data with large subject motion.

  9. Correction

    DEFF Research Database (Denmark)

    Pinkevych, Mykola; Cromer, Deborah; Tolstrup, Martin

    2016-01-01

[This corrects the article DOI: 10.1371/journal.ppat.1005000.] [This corrects the article DOI: 10.1371/journal.ppat.1005740.] [This corrects the article DOI: 10.1371/journal.ppat.1005679.]

  10. Matching NLO QCD corrections in WHIZARD with the POWHEG scheme

    International Nuclear Information System (INIS)

    Nejad, Bijan Chokoufe; Reuter, Juergen; Kilian, Wolfgang; Weiss, Christian; Siegen Univ.

    2015-01-01

Building on the new automatic subtraction of NLO amplitudes in WHIZARD, we present our implementation of the POWHEG scheme to match radiative corrections consistently with the parton shower. We apply this general framework to two linear collider processes, e⁺e⁻ → tt̄ and e⁺e⁻ → tt̄H.

  11. Regularized Partial Least Squares with an Application to NMR Spectroscopy

    OpenAIRE

    Allen, Genevera I.; Peterson, Christine; Vannucci, Marina; Maletic-Savatic, Mirjana

    2012-01-01

    High-dimensional data common in genomics, proteomics, and chemometrics often contains complicated correlation structures. Recently, partial least squares (PLS) and Sparse PLS methods have gained attention in these areas as dimension reduction techniques in the context of supervised data analysis. We introduce a framework for Regularized PLS by solving a relaxation of the SIMPLS optimization problem with penalties on the PLS loadings vectors. Our approach enjoys many advantages including flexi...
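The penalty idea can be sketched by soft-thresholding the PLS covariance direction, which zeroes the loadings of weakly relevant variables. This is an illustrative l1-penalized direction, not the SIMPLS-relaxation algorithm of the paper:

```python
import numpy as np

def sparse_pls_direction(X, y, lam_frac=0.5):
    """Soft-threshold the covariance direction X'y to obtain a sparse PLS loading."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    c = Xc.T @ yc                                       # covariance with the response
    lam = lam_frac * np.max(np.abs(c))                  # penalty as a fraction of the max
    w = np.sign(c) * np.maximum(np.abs(c) - lam, 0.0)   # soft-thresholding operator
    norm = np.linalg.norm(w)
    return w / norm if norm > 0 else w

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 6))
y = 2.0 * X[:, 0] + 0.1 * rng.standard_normal(200)  # only the first variable matters
w = sparse_pls_direction(X, y)
```

With correlated, high-dimensional data such as NMR spectra, this kind of penalized loading selects a small set of spectral regions instead of spreading weight across all variables.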

  12. Fixed Points of Multivalued Contractive Mappings in Partial Metric Spaces

    Directory of Open Access Journals (Sweden)

    Abdul Rahim Khan

    2014-01-01

    Full Text Available The aim of this paper is to present fixed point results of multivalued mappings in the framework of partial metric spaces. Some examples are presented to support the results proved herein. Our results generalize and extend various results in the existing literature. As an application of our main result, the existence and uniqueness of bounded solution of functional equations arising in dynamic programming are established.
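The dynamic-programming application rests on a standard fact: the Bellman operator of a discounted problem is a contraction, so iterating it converges to its unique bounded fixed point. A one-dimensional sketch of that iteration (the reward and discount values are invented):

```python
def bellman_fixed_point(reward, gamma, tol=1e-10, max_iter=10_000):
    """Iterate the contraction v <- reward + gamma * v until the increments
    fall below tol; the fixed point is v* = reward / (1 - gamma)."""
    v = 0.0
    for _ in range(max_iter):
        v_next = reward + gamma * v
        if abs(v_next - v) < tol:
            return v_next
        v = v_next
    return v

v_star = bellman_fixed_point(reward=1.0, gamma=0.9)  # fixed point is 10.0
```

Fixed-point theorems in partial metric spaces, such as the multivalued results of this paper, generalize the contraction principle that guarantees this iteration converges.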

  13. Quantum corrections to Drell-Yan production of Z bosons

    Energy Technology Data Exchange (ETDEWEB)

    Shcherbakova, Elena S.

    2011-08-15

    In this thesis, we present higher-order corrections to inclusive Z-boson hadroproduction via the Drell-Yan mechanism, h{sub 1}+h{sub 2}{yields}Z+X, at large transverse momentum (q{sub T}). Specifically, we include the QED, QCD and electroweak corrections of orders O({alpha}{sub S}{alpha}, {alpha}{sub S}{sup 2}{alpha}, {alpha}{sub S}{alpha}{sup 2}). We work in the framework of the Standard Model and adopt the MS scheme of renormalization and factorization. The cross section of Z-boson production has been precisely measured at various hadron-hadron colliders, including the Tevatron and the LHC. Our calculations will help to calibrate and monitor the luminosity and to estimate of backgrounds of the hadron-hadron interactions more reliably. Besides the total cross section, we study the distributions in the transverse momentum and the rapidity (y) of the Z boson, appropriate for Tevatron and LHC experimental conditions. Investigating the relative sizes fo the various types of corrections by means of the factor K = {sigma}{sub tot} / {sigma}{sub Born}, we find that the QCS corrections of order {alpha}{sub S}{sup 2}{alpha} are largest in general and that the electroweak corrections of order {alpha}{sub S}{alpha}{sup 2} play an important role at large values of q{sub T}, while the QED corrections at the same order are small, of order 2% or below. We also compare out results with the existing literature. We correct a few misprints in the original calculation of the QCD corrections, and find the published electroweak correction to be incomplete. Our results for the QED corrections are new. (orig.)

  14. The Odd story of α{sup ′}-corrections

    Energy Technology Data Exchange (ETDEWEB)

    Baron, Walter H. [Instituto de Física La Plata (CONICET-UNLP),La Plata, Buenos Aires (Argentina); Departamento de Física, Universidad Nacional de La Plata,Buenos Aires (Argentina); Fernández-Melgarejo, José J. [Yukawa Institute for Theoretical Physics, Kyoto University,Kyoto 606-8502 (Japan); Departamento de Física, Universidad de Murcia,E-30100 Murcia (Spain); Marqués, Diego [Instituto de Astronomía y Física del Espacio (IAFE-CONICET-UBA),Ciudad Universitaria, Buenos Aires (Argentina); Nuñez, Carmen A. [Instituto de Astronomía y Física del Espacio (IAFE-CONICET-UBA),Ciudad Universitaria, Buenos Aires (Argentina); Departamento de Física, FCEyN, Universidad de Buenos Aires,Ciudad Universitaria, Buenos Aires (Argentina)

    2017-04-13

    The α{sup ′}-deformed frame-like Double Field Theory (DFT) is a T-duality and gauge invariant extension of DFT in which generalized Green-Schwarz transformations provide a gauge principle that fixes the higher-derivative corrections. It includes all the first order α{sup ′}-corrections of the bosonic and heterotic string low energy effective actions and of the Hohm-Siegel-Zwiebach α{sup ′}-geometry. Here we gauge this theory and parameterize it in terms of a frame, a two-form, a dilaton, gauge vectors and scalar fields. This leads to a unified framework that extends the previous construction by including all duality constrained interactions in generic (gauged/super)gravity effective field theories in arbitrary number of dimensions, to first order in α{sup ′}.

  15. Global Classical Solutions for Partially Dissipative Hyperbolic System of Balance Laws

    Science.gov (United States)

    Xu, Jiang; Kawashima, Shuichi

    2014-02-01

The basic existence theory of Kato and Majda enables us to obtain local-in-time classical solutions to generally quasilinear hyperbolic systems in the framework of Sobolev spaces (in x) with higher regularity. However, it remains a challenging open problem whether classical solutions still preserve well-posedness in the case of critical regularity. This paper is concerned with partially dissipative hyperbolic systems of balance laws. Under the entropy dissipative assumption, we establish the local well-posedness and blow-up criterion of classical solutions in the framework of Besov spaces with critical regularity, with the aid of the standard iteration argument and Friedrichs' regularization method. We then explore the theory of function spaces and develop an elementary fact that indicates the relation between homogeneous and inhomogeneous Chemin-Lerner spaces (mixed space-time Besov spaces). This fact allows us to capture the dissipation rates generated from the partially dissipative source term and further obtain the global well-posedness and stability by assuming at all times the Shizuta-Kawashima algebraic condition. As a direct application, the corresponding well-posedness and stability of classical solutions to the compressible Euler equations with damping are also obtained.

  16. Nilpotent chiral superfield in N=2 supergravity and partial rigid supersymmetry breaking

    Energy Technology Data Exchange (ETDEWEB)

    Kuzenko, Sergei M. [School of Physics M013, The University of Western Australia,35 Stirling Highway, Crawley W.A. 6009 (Australia); Tartaglino-Mazzucchelli, Gabriele [Instituut voor Theoretische Fysica, KU Leuven,Celestijnenlaan 200D, B-3001 Leuven (Belgium)

    2016-03-15

    In the framework of N=2 conformal supergravity in four dimensions, we introduce a nilpotent chiral superfield suitable for the description of partial supersymmetry breaking in maximally supersymmetric spacetimes. As an application, we construct Maxwell-Goldstone multiplet actions for partial N=2→N=1 supersymmetry breaking on ℝ×S{sup 3}, AdS{sub 3}×S{sup 1} (or its covering AdS{sub 3}×ℝ), and a pp-wave spacetime. In each of these cases, the action coincides with a unique curved-superspace extension of the N=1 supersymmetric Born-Infeld action, which is singled out by the requirement of U(1) duality invariance.

  17. Emission of partial dislocations from triple junctions of grain boundaries in nanocrystalline materials

    International Nuclear Information System (INIS)

    Gutkin, M Yu; Ovid'ko, I A; Skiba, N V

    2005-01-01

    A theoretical model is suggested that describes emission of partial Shockley dislocations from triple junctions of grain boundaries (GBs) in deformed nanocrystalline materials. In the framework of the model, triple junctions accumulate dislocations due to GB sliding along adjacent GBs. The dislocation accumulation at triple junctions causes partial Shockley dislocations to be emitted from the dislocated triple junctions and thus accommodates GB sliding. Ranges of parameters (applied stress, grain size, etc) are calculated in which the emission events are energetically favourable in nanocrystalline Al, Cu and Ni. The model accounts for the corresponding experimental data reported in the literature

  18. Atmospheric Correction Inter-Comparison Exercise

    Directory of Open Access Journals (Sweden)

    Georgia Doxani

    2018-02-01

Full Text Available The Atmospheric Correction Inter-comparison eXercise (ACIX) is an international initiative with the aim to analyse the Surface Reflectance (SR) products of various state-of-the-art atmospheric correction (AC) processors. The Aerosol Optical Thickness (AOT) and Water Vapour (WV) are also examined in ACIX as additional outputs of AC processing. In this paper, the general ACIX framework is discussed; special mention is made of the motivation to initiate the experiment, the inter-comparison protocol, and the principal results. ACIX is free and open, and every developer was welcome to participate. Eventually, 12 participants applied their approaches to various Landsat-8 and Sentinel-2 image datasets acquired over sites around the world. The current results diverge depending on the sensors, products, and sites, indicating their strengths and weaknesses. Indeed, this first implementation of processor inter-comparison proved to be a good lesson for the developers, revealing the advantages and limitations of their approaches. Various algorithm improvements are expected, if not already implemented, and the enhanced performances are yet to be assessed in future ACIX experiments.

  19. Partial coherence and imperfect optics at a synchrotron radiation source modeled by wavefront propagation

    Science.gov (United States)

    Laundy, David; Alcock, Simon G.; Alianelli, Lucia; Sutter, John P.; Sawhney, Kawal J. S.; Chubar, Oleg

    2014-09-01

    A full wave propagation of X-rays from source to sample at a storage ring beamline requires simulation of the electron beam source and optical elements in the beamline. The finite emittance source causes the appearance of partial coherence in the wave field. Consequently, the wavefront cannot be treated exactly with fully coherent wave propagation or fully incoherent ray tracing. We have used the wavefront code Synchrotron Radiation Workshop (SRW) to perform partially coherent wavefront propagation using a parallel computing cluster at the Diamond Light Source. Measured mirror profiles have been used to correct the wavefront for surface errors.
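The effect of finite emittance can be mimicked in a toy model: a two-beam interference pattern averaged incoherently over an extended source loses fringe visibility. The sketch below is a schematic illustration of partial coherence, not an SRW computation, and all of its numbers are arbitrary:

```python
import math

def fringe_visibility(source_positions, a=40.0, b=30.0, n_x=2000):
    """Average the fringe pattern 1 + cos(a*x + b*s) over incoherent source
    points s, then return the visibility (Imax - Imin) / (Imax + Imin)."""
    xs = [i / n_x for i in range(n_x)]
    intensity = [sum(1.0 + math.cos(a * x + b * s) for s in source_positions)
                 / len(source_positions) for x in xs]
    i_max, i_min = max(intensity), min(intensity)
    return (i_max - i_min) / (i_max + i_min)

v_point = fringe_visibility([0.0])                                 # point source: coherent
v_finite = fringe_visibility([k / 100.0 for k in range(-50, 51)])  # extended source
```

The drop in visibility with source size is the one-dimensional analogue of the partial coherence that makes neither fully coherent propagation nor incoherent ray tracing adequate for a storage-ring source.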

  20. Nearly degenerate neutrinos, supersymmetry and radiative corrections

    International Nuclear Information System (INIS)

    Casas, J.A.; Espinosa, J.R.; Ibarra, A.; Navarro, I.

    2000-01-01

    If neutrinos are to play a relevant cosmological role, they must be essentially degenerate with a mass matrix of the bimaximal mixing type. We study this scenario in the MSSM framework, finding that if neutrino masses are produced by a see-saw mechanism, the radiative corrections give rise to mass splittings and mixing angles that can accommodate the atmospheric and the (large angle MSW) solar neutrino oscillations. This provides a natural origin for the Δm²_sol ≪ Δm²_atm hierarchy. On the other hand, the vacuum oscillation solution to the solar neutrino problem is always excluded. We discuss also in the SUSY scenario other possible effects of radiative corrections involving the new neutrino Yukawa couplings, including implications for triviality limits on the Majorana mass, the infrared fixed point value of the top Yukawa coupling, and gauge coupling and bottom-tau unification.

  1. FACET - a "Flexible Artifact Correction and Evaluation Toolbox" for concurrently recorded EEG/fMRI data.

    Science.gov (United States)

    Glaser, Johann; Beisteiner, Roland; Bauer, Herbert; Fischmeister, Florian Ph S

    2013-11-09

    In concurrent EEG/fMRI recordings, EEG data are impaired by the fMRI gradient artifacts, which exceed the EEG signal by several orders of magnitude. While several algorithms exist to correct the EEG data, these algorithms lack the flexibility to either leave out or add new steps. The open-source MATLAB toolbox FACET presented here is a modular toolbox for the fast and flexible correction and evaluation of imaging artifacts in concurrently recorded EEG datasets. It consists of an Analysis, a Correction and an Evaluation framework allowing the user to choose from different artifact correction methods with various pre- and post-processing steps to form flexible combinations. The quality of the chosen correction approach can then be evaluated and compared across different settings. FACET was evaluated on a dataset provided with the FMRIB plugin for EEGLAB using two different correction approaches: Averaged Artifact Subtraction (AAS; Allen et al., NeuroImage 12(2):230-239, 2000) and FMRI Artifact Slice Template Removal (FASTR; Niazy et al., NeuroImage 28(3):720-737, 2005). The obtained results were compared to the FASTR algorithm implemented in the EEGLAB plugin FMRIB. No differences were found between the FACET implementation of FASTR and the original algorithm across all gradient-artifact-relevant performance indices. The FACET toolbox not only provides facilities for all three modalities (data analysis, artifact correction, and evaluation and documentation of the results) but also offers an easily extendable framework for the development and evaluation of new approaches.
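
    The simpler of the two correction approaches evaluated above, averaged artifact subtraction (AAS), relies on the gradient artifact repeating identically in each acquisition epoch: average the recording across epochs to build an artifact template, then subtract it. A minimal NumPy sketch on synthetic data (not the FACET/MATLAB implementation; signal shapes and amplitudes are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy EEG: 20 epochs of 100 samples; the artifact repeats exactly each epoch
n_epochs, epoch_len = 20, 100
t = np.arange(n_epochs * epoch_len)
eeg = np.sin(2 * np.pi * t / 170.0)                            # "brain" signal
artifact = 50.0 * np.sin(2 * np.pi * (t % epoch_len) / 25.0)   # gradient artifact
recorded = eeg + artifact + 0.1 * rng.standard_normal(t.size)

# AAS: average across epochs to estimate the artifact template, then subtract
epochs = recorded.reshape(n_epochs, epoch_len)
template = epochs.mean(axis=0)
cleaned = (epochs - template).ravel()
```

    Real implementations must first align epochs precisely to the scanner's slice/volume triggers, and typically upsample before averaging; those are exactly the kinds of pre-processing steps FACET lets the user swap in or out.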

  2. Accounting for partiality in serial crystallography using ray-tracing principles

    International Nuclear Information System (INIS)

    Kroon-Batenburg, Loes M. J.; Schreurs, Antoine M. M.; Ravelli, Raimond B. G.; Gros, Piet

    2015-01-01

    Serial crystallography generates partial reflections from still diffraction images. Partialities are estimated with EVAL ray-tracing simulations, thereby improving merged reflection data to a similar quality as conventional rotation data. Serial crystallography generates ‘still’ diffraction data sets that are composed of single diffraction images obtained from a large number of crystals arbitrarily oriented in the X-ray beam. Estimation of the reflection partialities, which accounts for the expected observed fractions of diffraction intensities, has so far been problematic. In this paper, a method is derived for modelling the partialities by making use of the ray-tracing diffraction-integration method EVAL. The method estimates partialities based on crystal mosaicity, beam divergence, wavelength dispersion, crystal size and the interference function, accounting for crystallite size. It is shown that modelling of each reflection by a distribution of interference-function weighted rays yields a ‘still’ Lorentz factor. Still data are compared with a conventional rotation data set collected from a single lysozyme crystal. Overall, the presented still integration method improves the data quality markedly. The R factor of the still data compared with the rotation data decreases from 26% using a Monte Carlo approach to 12% after applying the Lorentz correction, to 5.3% when estimating partialities by EVAL and finally to 4.7% after post-refinement. The merging R_int factor of the still data improves from 105 to 56% but remains high. This suggests that the accuracy of the model parameters could be further improved. However, with a multiplicity of around 40 and an R_int of ∼50% the merged still data approximate the quality of the rotation data. The presented integration method suitably accounts for the partiality of the observed intensities in still diffraction data, which is a critical step to improve data quality in serial crystallography.

  3. Accounting for partiality in serial crystallography using ray-tracing principles

    Energy Technology Data Exchange (ETDEWEB)

    Kroon-Batenburg, Loes M. J., E-mail: l.m.j.kroon-batenburg@uu.nl; Schreurs, Antoine M. M. [Utrecht University, Padualaan 8, 3584 CH Utrecht (Netherlands); Ravelli, Raimond B. G. [Maastricht University, PO Box 616, 6200 MD Maastricht (Netherlands); Gros, Piet [Utrecht University, Padualaan 8, 3584 CH Utrecht (Netherlands)

    2015-08-25

    Serial crystallography generates partial reflections from still diffraction images. Partialities are estimated with EVAL ray-tracing simulations, thereby improving merged reflection data to a similar quality as conventional rotation data. Serial crystallography generates ‘still’ diffraction data sets that are composed of single diffraction images obtained from a large number of crystals arbitrarily oriented in the X-ray beam. Estimation of the reflection partialities, which accounts for the expected observed fractions of diffraction intensities, has so far been problematic. In this paper, a method is derived for modelling the partialities by making use of the ray-tracing diffraction-integration method EVAL. The method estimates partialities based on crystal mosaicity, beam divergence, wavelength dispersion, crystal size and the interference function, accounting for crystallite size. It is shown that modelling of each reflection by a distribution of interference-function weighted rays yields a ‘still’ Lorentz factor. Still data are compared with a conventional rotation data set collected from a single lysozyme crystal. Overall, the presented still integration method improves the data quality markedly. The R factor of the still data compared with the rotation data decreases from 26% using a Monte Carlo approach to 12% after applying the Lorentz correction, to 5.3% when estimating partialities by EVAL and finally to 4.7% after post-refinement. The merging R_int factor of the still data improves from 105 to 56% but remains high. This suggests that the accuracy of the model parameters could be further improved. However, with a multiplicity of around 40 and an R_int of ∼50% the merged still data approximate the quality of the rotation data. The presented integration method suitably accounts for the partiality of the observed intensities in still diffraction data, which is a critical step to improve data quality in serial crystallography.

  4. Modeling coherent errors in quantum error correction

    Science.gov (United States)

    Greenbaum, Daniel; Dutton, Zachary

    2018-01-01

    Analysis of quantum error correcting codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for noise containing coherent errors (characterized by a rotation angle ε) under the repetition code. We derive an analytic expression for the logical error channel as a function of arbitrary code distance d and concatenation level n, in the small error limit. We find that coherent physical errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the logical error is negligible at fewer than ε^{-(d^n - 1)} error correction cycles when the decoder is optimized for independent Pauli errors, thus providing a regime of validity for the Pauli approximation. Above this number of correction cycles, the persistent coherent logical error will cause logical failure more quickly than the Pauli model would predict, and this may need to be combated with coherent suppression methods at the physical level or larger codes.
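
    The key contrast in the abstract (coherent errors accumulate in amplitude, stochastic Pauli errors in probability) can be reproduced with two small formulas for repeated Z-rotations on a single qubit. This is an illustrative single-qubit sketch, not the paper's repetition-code analysis:

```python
import numpy as np

def coherent_error_prob(eps, n):
    """Flip probability after n identical coherent Z-rotations R_z(eps).

    The rotations compose coherently, R_z(eps)^n = R_z(n*eps), so the
    error amplitude grows linearly in n and the probability quadratically.
    """
    return np.sin(n * eps / 2.0) ** 2

def pauli_twirl_error_prob(eps, n):
    """Same channel under a stochastic (Pauli-twirled) approximation.

    Twirling R_z(eps) gives a dephasing channel with flip probability
    p = sin^2(eps/2) per step; as a simple illustrative figure we use the
    'at least one flip in n steps' probability 1 - (1 - p)^n.
    """
    p = np.sin(eps / 2.0) ** 2
    return 1.0 - (1.0 - p) ** n
```

    For ε = 0.01 and n = 100 cycles, the coherent channel's error probability sin²(nε/2) ≈ 0.23 already exceeds the stochastic estimate (≈ 0.0025) by roughly two orders of magnitude, which is the regime where the Pauli approximation breaks down.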

  5. A Partially Observed Markov Decision Process for Dynamic Pricing

    OpenAIRE

    Yossi Aviv; Amit Pazgal

    2005-01-01

    In this paper, we develop a stylized partially observed Markov decision process (POMDP) framework to study a dynamic pricing problem faced by sellers of fashion-like goods. We consider a retailer that plans to sell a given stock of items during a finite sales season. The objective of the retailer is to dynamically price the product in a way that maximizes expected revenues. Our model brings together various types of uncertainties about the demand, some of which are resolvable through sales ob...
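
    The learning mechanism behind such a POMDP pricing model is a belief update over unknown demand after each sales observation. A minimal Bayesian sketch with just two hypothetical demand states and Poisson sales (a drastic simplification of the paper's model; all rates and prices are invented):

```python
import math

def update_belief(prior_high, sales, price, demand_rates):
    """One Bayesian belief update for a seller unsure whether the market is
    in a 'high' or 'low' demand state (hypothetical two-state simplification
    of the POMDP in the abstract). demand_rates maps state -> a function of
    price giving the expected Poisson sales rate.
    """
    def poisson_pmf(lam, k):
        return math.exp(-lam) * lam ** k / math.factorial(k)

    like_high = poisson_pmf(demand_rates['high'](price), sales)
    like_low = poisson_pmf(demand_rates['low'](price), sales)
    return prior_high * like_high / (prior_high * like_high
                                     + (1 - prior_high) * like_low)

# Invented demand curves: expected sales fall as price rises
rates = {'high': lambda p: 20.0 / p, 'low': lambda p: 5.0 / p}
belief = update_belief(0.5, sales=8, price=2.0, demand_rates=rates)
```

    In the full POMDP, the seller would then choose the next price to trade off immediate revenue against the information the resulting sales reveal about the demand state.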

  6. Numerical solutions of ordinary and partial differential equations in the frequency domain

    International Nuclear Information System (INIS)

    Hazi, G.; Por, G.

    1997-01-01

    Numerical problems during the noise simulation in a nuclear power plant are discussed. The solutions of ordinary and partial differential equations are studied in the frequency domain, and numerical methods based on the transfer function approach are applied. It is shown that the correctness of these numerical methods is limited for ordinary differential equations in the frequency domain. To overcome the difficulties, careful step-size selection is suggested. (author)
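
    The transfer-function idea (divide the input's spectrum by the system's characteristic polynomial evaluated at iω) can be shown on a first-order ODE with periodic forcing. A NumPy sketch with illustrative coefficients:

```python
import numpy as np

# Solve x'(t) + a x(t) = u(t) for a periodic input in the frequency domain:
# X(w) = U(w) / (a + i w)   -- the transfer-function method
a = 2.0
N, T = 1024, 2 * np.pi                        # one full period of the forcing
t = np.linspace(0.0, T, N, endpoint=False)
u = np.cos(3 * t)                             # forcing at angular frequency 3

w = np.fft.fftfreq(N, d=T / N) * 2 * np.pi    # angular frequency grid
x = np.fft.ifft(np.fft.fft(u) / (a + 1j * w)).real

# Analytic steady-state amplitude for x' + a x = cos(3 t): 1/sqrt(a^2 + 9)
amp = (x.max() - x.min()) / 2
```

    Here the FFT grid contains the forcing frequency exactly, so the method is essentially exact; the abstract's point is that for general grids and systems this correctness is limited, which is why step-size selection matters.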

  7. Cast Partial Denture versus Acrylic Partial Denture for Replacement of Missing Teeth in Partially Edentulous Patients

    Directory of Open Access Journals (Sweden)

    Pramita Suwal

    2017-03-01

    Full Text Available Aim: To compare the effects of cast partial dentures with conventional all-acrylic dentures with respect to retention, stability, masticatory efficiency, comfort and periodontal health of abutments. Methods: 50 adult partially edentulous patients seeking replacement of missing teeth, having Kennedy class I and II arches with or without modification areas, were selected for the study. Group A was treated with cast partial dentures and Group B with acrylic partial dentures. Data were collected during follow-up visits at 3 months, 6 months, and 1 year by evaluating retention, stability, masticatory efficiency, comfort, and periodontal health of abutments. Results: The chi-square test was applied to find differences between the groups at the 95% confidence level (α = 0.05). The one-year comparison shows that cast partial dentures maintained retention and stability better than acrylic partial dentures (p < 0.05). Masticatory efficiency was significantly compromised from the 3rd month to 1 year in the all-acrylic partial denture group (p < 0.05). Patient comfort with cast partial dentures was maintained better during the observation period (p < 0.05). Periodontal health of abutments gradually deteriorated in the all-acrylic denture group (p

  8. Establishing a regulatory framework for a RCRA corrective action program

    International Nuclear Information System (INIS)

    Krueger, J.W.

    1989-01-01

    Recently, the environmental community has become keenly aware of problems associated with integration of the demanding regulations that apply to environmental restoration activities. One cannot attend an EPA-sponsored conference on Superfund without hearing questions concerning the Resource Conservation and Recovery Act (RCRA) and the applicability of the National Contingency Plan (NCP) to sites that do not qualify for the National Priorities List (NPL). In particular, the U.S. Department of Energy (DOE) has been greatly criticized for its inability to define a comprehensive approach for cleaning up its hazardous waste sites. This article presents two decision flowcharts designed to resolve some of this confusion for DOE. The RCRA/CERCLA integration diagram can help the environmental manager determine which law applies and under what conditions, and the RCRA corrective action decision flowchart can guide the manager in determining which specific sections of RCRA apply to a RCRA-lead environmental restoration program.

  9. Control of Partial Coalescence of Self-Assembled Metal Nano-Particles across Lyotropic Liquid Crystals Templates towards Long Range Meso-Porous Metal Frameworks Design

    Directory of Open Access Journals (Sweden)

    Ludovic F. Dumée

    2015-10-01

    Full Text Available The formation of purely metallic meso-porous metal thin films by partial interface coalescence of self-assembled metal nano-particles across aqueous solutions of Pluronics triblock lyotropic liquid crystals is demonstrated for the first time. Small angle X-ray scattering was used to study the influence of the thin film composition and processing conditions on the ordered structures. The structural characteristics of the meso-structures formed were shown to rely primarily on the lyotropic liquid crystal properties, while the nature of the metal nano-particles used, as well as their diameters, was found to affect the ordered structure formation. The impact of the annealing temperature on the nano-particle coalescence and on the efficiency of removing the templating lyotropic liquid crystals was also analysed. It is demonstrated that the lyotropic liquid crystal is rendered slightly less thermally stable upon mixing with metal nano-particles, and that low annealing temperatures are sufficient to form purely metallic frameworks with average pore size distributions smaller than 500 nm and porosity around 45%, with potential applications in sensing, catalysis, nanoscale heat exchange, and molecular separation.

  10. Image-guided regularization level set evolution for MR image segmentation and bias field correction.

    Science.gov (United States)

    Wang, Lingfeng; Pan, Chunhong

    2014-01-01

    Magnetic resonance (MR) image segmentation is a crucial step in surgical and treatment planning. In this paper, we propose a level-set-based segmentation method for MR images with intensity inhomogeneous problem. To tackle the initialization sensitivity problem, we propose a new image-guided regularization to restrict the level set function. The maximum a posteriori inference is adopted to unify segmentation and bias field correction within a single framework. Under this framework, both the contour prior and the bias field prior are fully used. As a result, the image intensity inhomogeneity can be well solved. Extensive experiments are provided to evaluate the proposed method, showing significant improvements in both segmentation and bias field correction accuracies as compared with other state-of-the-art approaches. Copyright © 2014 Elsevier Inc. All rights reserved.

  11. Passive quantum error correction of linear optics networks through error averaging

    Science.gov (United States)

    Marshman, Ryan J.; Lund, Austin P.; Rohde, Peter P.; Ralph, Timothy C.

    2018-02-01

    We propose and investigate a method of error detection and noise correction for bosonic linear networks using a method of unitary averaging. The proposed error averaging does not rely on ancillary photons or control and feedforward correction circuits, remaining entirely passive in its operation. We construct a general mathematical framework for this technique and then give a series of proof-of-principle examples, including numerical analysis. Two methods for the construction of the averaging are then compared to determine the most effective manner of implementation and to probe the related error thresholds. Finally, we discuss some of the potential uses of this scheme.

  12. RCRA corrective action determination of no further action

    International Nuclear Information System (INIS)

    1996-06-01

    On July 27, 1990, the U.S. Environmental Protection Agency (EPA) proposed a regulatory framework (55 FR 30798) for responding to releases of hazardous waste and hazardous constituents from solid waste management units (SWMUs) at facilities seeking permits or permitted under the Resource Conservation and Recovery Act (RCRA). The proposed rule, 'Corrective Action for Solid Waste Management Units at Hazardous Waste Facilities', would create a new Subpart S under the 40 CFR 264 regulations, and outlines requirements for conducting RCRA Facility Investigations, evaluating potential remedies, and selecting and implementing remedies (i.e., corrective measures) at RCRA facilities. EPA anticipates instances where releases or suspected releases of hazardous wastes or constituents from SWMUs identified in a RCRA Facility Assessment, and subsequently addressed as part of required RCRA Facility Investigations, will be found to be non-existent or non-threatening to human health or the environment. Such releases may require no further action. For such situations, EPA proposed a mechanism for making a determination that no further corrective action is needed. This mechanism is known as a Determination of No Further Action (DNFA) (55 FR 30875). This Information Brief describes what a DNFA is and discusses the mechanism for making a DNFA. This is one of a series of Information Briefs on RCRA corrective action.

  13. Comparison of Physics Frameworks for WebGL-Based Game Engine

    Directory of Open Access Journals (Sweden)

    Yogya Resa

    2014-03-01

    Full Text Available Recently, a new technology called WebGL has shown a lot of potential for developing games. However, since this technology is still new, much of its potential in the game-development area remains unexplored. This paper tries to uncover the potential of integrating physics frameworks with WebGL technology in a game engine for developing 2D or 3D games. Specifically, we integrated three open-source physics frameworks — Bullet, Cannon, and JigLib — into a WebGL-based game engine. Through experiments, we assessed these frameworks in terms of their correctness or accuracy, performance, completeness and compatibility. The results show that it is possible to integrate open-source physics frameworks into a WebGL-based game engine, and that Bullet is the best physics framework to be integrated into the WebGL-based game engine.

  14. cgCorrect: a method to correct for confounding cell-cell variation due to cell growth in single-cell transcriptomics

    Science.gov (United States)

    Blasi, Thomas; Buettner, Florian; Strasser, Michael K.; Marr, Carsten; Theis, Fabian J.

    2017-06-01

    Accessing gene expression at a single-cell level has unraveled often large heterogeneity among seemingly homogeneous cells, which remains obscured when using traditional population-based approaches. The computational analysis of single-cell transcriptomics data, however, still imposes unresolved challenges with respect to normalization, visualization and modeling the data. One such issue is differences in cell size, which introduce additional variability into the data and for which appropriate normalization techniques are needed. Otherwise, these differences in cell size may obscure genuine heterogeneities among cell populations and lead to overdispersed steady-state distributions of mRNA transcript numbers. We present cgCorrect, a statistical framework to correct for differences in cell size that are due to cell growth in single-cell transcriptomics data. We derive the probability for the cell-growth-corrected mRNA transcript number given the measured, cell size-dependent mRNA transcript number, based on the assumption that the average number of transcripts in a cell increases proportionally to the cell’s volume during the cell cycle. cgCorrect can be used for both data normalization and to analyze the steady-state distributions used to infer the gene expression mechanism. We demonstrate its applicability on both simulated data and single-cell quantitative real-time polymerase chain reaction (PCR) data from mouse blood stem and progenitor cells (and to quantitative single-cell RNA-sequencing data obtained from mouse embryonic stem cells). We show that correcting for differences in cell size affects the interpretation of the data obtained by typically performed computational analysis.
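
    The core assumption of cgCorrect (transcript numbers grow in proportion to cell volume over the cell cycle) implies that even identical cells look overdispersed. A toy NumPy illustration of the confounding and of a simple deterministic volume rescaling (cgCorrect itself is a probabilistic framework; all rates and volumes here are invented):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical single-cell data: per-unit-volume expression is identical
# across cells, but transcript counts scale with cell volume (1x at cycle
# start, up to 2x just before division).
n_cells = 500
volume = rng.uniform(1.0, 2.0, n_cells)      # relative cell volume
base_rate = 50.0                             # transcripts per unit volume
measured = rng.poisson(base_rate * volume)   # size-confounded counts

# A naive analysis sees extra (overdispersed) variability; a simple
# volume rescaling (a deterministic stand-in for cgCorrect's probabilistic
# correction) removes the growth component.
corrected = measured / volume
```

    In practice the cell volume is not observed directly, which is why cgCorrect marginalizes over the cell-cycle position instead of dividing by a known factor as done here.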

  15. A Framework for Assessment of Intentional Fires

    Directory of Open Access Journals (Sweden)

    Iraj Mohammadfam

    2014-04-01

    Full Text Available Background & Objectives : It is not possible to live without using fire. However, fire can destroy human property in a short time. One of the most important types of fire is the intentional fire. This type of fire has become a great problem for insurance companies, fire departments, industries, government and business in recent years. This study aimed to provide a framework for risk assessment of intentional fires. Methods: In the present study, the risk assessment and management model for protecting critical properties and the security vulnerability assessment model were used to develop a comprehensive framework for risk assessment of intentional fires. The framework was examined in an automotive industry. Results : The designed framework contained five steps: (1) asset inventory and prioritization according to importance; (2) invasion assessment; (3) vulnerability assessment; (4) risk assessment and design; and (5) implementation and evaluation of the effectiveness of corrective/preventive actions. Thirty different scenarios for intentional fires were identified by implementing the designed framework in an automotive company, and the associated risk of each scenario was then quantitatively determined. Conclusion : Compared to seven other models, the proposed framework demonstrates its comprehensiveness. Development of safety and security standards and a central security-information bank to reduce security risks, including the risk of intentional fires, is recommended.

  16. RBSDE's with jumps and the related obstacle problems for integral-partial differential equations

    Institute of Scientific and Technical Information of China (English)

    FAN; Yulian

    2006-01-01

    The author proves that, when the noise is driven by a Brownian motion and an independent Poisson random measure, the one-dimensional reflected backward stochastic differential equation with a stopping-time terminal has a unique solution. In a Markovian framework, the solution provides a probabilistic interpretation of the obstacle problem for the integral-partial differential equation.

  17. Phase-and-amplitude recovery from a single phase-contrast image using partially spatially coherent x-ray radiation

    Science.gov (United States)

    Beltran, Mario A.; Paganin, David M.; Pelliccia, Daniele

    2018-05-01

    A simple method of phase-and-amplitude extraction is derived that corrects for image blurring induced by partially spatially coherent incident illumination using only a single intensity image as input. The method is based on Fresnel diffraction theory for the case of high Fresnel number, merged with the space-frequency description formalism used to quantify partially coherent fields, and assumes the object under study is composed of a single material. A priori knowledge of the object’s complex refractive index and information obtained by characterizing the spatial coherence of the source are required. The algorithm was applied to propagation-based phase-contrast data measured with a laboratory-based micro-focus x-ray source. The blurring due to the finite spatial extent of the source is embedded within the algorithm as a simple correction term to the so-called Paganin algorithm; the resulting method is also numerically stable in the presence of noise.
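
    The underlying single-material phase retrieval is the Paganin-type Fourier filter; the abstract's contribution is an extra correction term for source-size blurring, which is omitted in this hedged NumPy sketch (all parameter values are illustrative):

```python
import numpy as np

def paganin_thickness(I, I0, delta, mu, dist, pixel):
    """Single-material thickness retrieval from one propagation-based
    phase-contrast image (Paganin-type low-pass filter; the source-coherence
    correction term described in the abstract is omitted here).

    I: measured intensity at propagation distance `dist`; I0: incident
    intensity; delta: refractive index decrement; mu: linear attenuation
    coefficient; pixel: detector pixel size.
    """
    ky = np.fft.fftfreq(I.shape[0], d=pixel) * 2 * np.pi
    kx = np.fft.fftfreq(I.shape[1], d=pixel) * 2 * np.pi
    k2 = ky[:, None] ** 2 + kx[None, :] ** 2
    filt = 1.0 + dist * delta / mu * k2          # T = -ln(F^-1[F(I/I0)/filt])/mu
    smoothed = np.fft.ifft2(np.fft.fft2(I / I0) / filt).real
    return -np.log(np.clip(smoothed, 1e-12, None)) / mu
```

    Per the abstract, the finite-source blurring correction would enter as an additional term in the Fourier-space filter; its exact form depends on the measured source distribution, so it is not reproduced here.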

  18. Modeling tree crown dynamics with 3D partial differential equations.

    Science.gov (United States)

    Beyer, Robert; Letort, Véronique; Cournède, Paul-Henry

    2014-01-01

    We characterize a tree's spatial foliage distribution by the local leaf area density. Considering this spatially continuous variable allows to describe the spatiotemporal evolution of the tree crown by means of 3D partial differential equations. These offer a framework to rigorously take locally and adaptively acting effects into account, notably the growth toward light. Biomass production through photosynthesis and the allocation to foliage and wood are readily included in this model framework. The system of equations stands out due to its inherent dynamic property of self-organization and spontaneous adaptation, generating complex behavior from even only a few parameters. The density-based approach yields spatially structured tree crowns without relying on detailed geometry. We present the methodological fundamentals of such a modeling approach and discuss further prospects and applications.
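
    A 1D toy version of such a density-based crown model shows the self-organizing ingredients: diffusion spreads the foliage, growth is proportional to local light, and light is attenuated from above (Beer–Lambert). All coefficients are invented for the example; the paper's model is 3D and includes biomass production and allocation:

```python
import numpy as np

# Leaf area density rho(z) on a vertical grid; index nz-1 is the canopy top.
nz, dz, dt = 100, 0.1, 0.001
D, g, kext = 0.05, 2.0, 0.5      # diffusion, growth, extinction coefficients
rho = np.zeros(nz)
rho[nz // 2] = 1.0               # initial seed of foliage at mid-height

for _ in range(2000):
    # Light attenuated from the top through the foliage above each layer
    light = np.exp(-kext * dz * np.cumsum(rho[::-1]))[::-1]
    lap = np.zeros(nz)
    lap[1:-1] = (rho[2:] - 2 * rho[1:-1] + rho[:-2]) / dz ** 2
    # Explicit Euler step: diffusion (crown spread) + light-driven growth
    rho += dt * (D * lap + g * light * rho)
```

    Because upper layers shade lower ones, foliage mass drifts upward over time, a 1D caricature of the "growth toward light" the PDE framework captures adaptively in 3D.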

  19. Maxillary rehabilitation using fixed and removable partial dentures with attachments: a clinical report.

    Science.gov (United States)

    dos Santos Nunes Reis, José Maurício; da Cruz Perez, Luciano Elias; Alfenas, Bruna Fernandes Moreira; de Oliveira Abi-Rached, Filipe; Filho, João Neudenir Arioli

    2014-01-01

    Despite requiring dental crown preparation and possible root canal treatment, besides the difficulty of clinical and laboratory repairs, and financial burden, the association between fixed (FPD) and removable partial dentures (RPD) by means of attachments is an important alternative for oral rehabilitation, particularly when the use of dental implants and FPDs is limited or not indicated. Among the advantages of attachment-retained RPDs are the improvements in esthetics and biomechanics, as well as correction of the buccal arrangement of anterior teeth in Kennedy Class III partially edentulous arches. This article describes the treatment sequence and technique for the use of attachments in therapy combining FPD/RPD. © 2013 by the American College of Prosthodontists.

  20. Quality of IT service delivery — Analysis and framework for human error prevention

    KAUST Repository

    Shwartz, L.

    2010-12-01

    In this paper, we address the problem of reducing the occurrence of Human Errors that cause service interruptions in IT Service Support and Delivery operations. Analysis of a large volume of service interruption records revealed that more than 21% of interruptions were caused by human error. We focus on Change Management, the process with the largest risk of human error, and identify the main instances of human errors as the 4 Wrongs: request, time, configuration item, and command. Analysis of change records revealed that human-error prevention by partial automation is highly relevant. We propose the HEP Framework, a framework for execution of IT Service Delivery operations that reduces human error by addressing the 4 Wrongs using content integration, contextualization of operation patterns, partial automation of command execution, and controlled access to resources.

  1. An improved optimization algorithm of the three-compartment model with spillover and partial volume corrections for dynamic FDG PET images of small animal hearts in vivo

    Science.gov (United States)

    Li, Yinlin; Kundu, Bijoy K.

    2018-03-01

    The three-compartment model with spillover (SP) and partial volume (PV) corrections has been widely used for noninvasive kinetic parameter studies of dynamic 2-[18F]fluoro-2-deoxy-D-glucose (FDG) positron emission tomography images of small animal hearts in vivo. However, the approach still suffers from estimation uncertainty or slow convergence caused by the commonly used optimization algorithms. The aim of this study was to develop an improved optimization algorithm with better estimation performance. Femoral artery blood samples, image-derived input functions from heart ventricles and myocardial time-activity curves (TACs) were derived from data on 16 C57BL/6 mice obtained from the UCLA Mouse Quantitation Program. Parametric equations of the average myocardium and the blood pool TACs with SP and PV corrections in a three-compartment tracer kinetic model were formulated. A hybrid method integrating artificial immune-system and interior-reflective Newton methods was developed to solve the equations. Two penalty functions and one late time-point tail vein blood sample were used to constrain the objective function. The estimation accuracy of the method was validated by comparing results with experimental values using the errors in the areas under curves (AUCs) of the model corrected input function (MCIF) and the 18F-FDG influx constant K_i. Moreover, the elapsed time was used to measure the convergence speed. The overall AUC error of MCIF for the 16 mice averaged -1.4 ± 8.2%, with correlation coefficients of 0.9706. Similar results can be seen in the overall K_i error percentage, which was 0.4 ± 5.8% with a correlation coefficient of 0.9912. The t-test P value for both showed no significant difference. The mean and standard deviation of the MCIF AUC and K_i percentage errors have lower values compared to the previously published methods. The computation time of the hybrid method is also several times lower than using just a stochastic
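
    Behind the fitting problem sits the irreversible two-tissue (three-compartment) FDG model; a short Euler simulation shows how the influx constant K_i = K1·k3/(k2 + k3) emerges as the steady-state trapping rate. Rate constants are illustrative, not values from the study, and the spillover/partial-volume terms are omitted:

```python
# Irreversible two-tissue compartment model underlying FDG kinetics:
#   dC1/dt = K1*Cp - (k2 + k3)*C1     (free tracer in tissue)
#   dC2/dt = k3*C1                    (phosphorylated, trapped)
# Influx constant: Ki = K1*k3/(k2 + k3). Values below are illustrative.
K1, k2, k3 = 0.7, 1.2, 0.3   # 1/min
Cp = 1.0                     # constant plasma input (simplification)

dt, T = 1e-3, 60.0
C1 = C2 = 0.0
for _ in range(int(T / dt)):          # explicit Euler integration
    dC1 = K1 * Cp - (k2 + k3) * C1
    dC2 = k3 * C1
    C1 += dt * dC1
    C2 += dt * dC2

Ki = K1 * k3 / (k2 + k3)
# At steady state the trapping rate k3*C1 approaches Ki*Cp
```

    The actual study fits K1, k2, k3 (plus spillover and partial-volume coefficients) to measured TACs, which is where the hybrid optimizer comes in.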

  2. Optimizing pattern recognition-based control for partial-hand prosthesis application.

    Science.gov (United States)

    Earley, Eric J; Adewuyi, Adenike A; Hargrove, Levi J

    2014-01-01

    Partial-hand amputees often retain good residual wrist motion, which is essential for functional activities involving use of the hand. Thus, a crucial design criterion for a myoelectric, partial-hand prosthesis control scheme is that it allows the user to retain residual wrist motion. Pattern recognition (PR) of electromyographic (EMG) signals is a well-studied method of controlling myoelectric prostheses. However, wrist motion degrades a PR system's ability to correctly predict hand-grasp patterns. We studied the effects of (1) window length and number of hand-grasps, (2) static and dynamic wrist motion, and (3) EMG muscle source on the ability of a PR-based control scheme to classify functional hand-grasp patterns. Our results show that training PR classifiers with both extrinsic and intrinsic muscle EMG yields a lower error rate than training with either group by itself (p < 0.05), and that increasing the window length and reducing the number of hand-grasps available to the classifier significantly decrease classification error (p < 0.05) for each grasp.
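
    Pattern-recognition myoelectric control of this kind typically reduces to a windowed-feature classifier such as linear discriminant analysis (LDA). A self-contained NumPy sketch on synthetic two-class "grasp" features (real systems use time-domain EMG features across many channels, classes, and wrist positions):

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic stand-in for windowed EMG features: two "grasp" classes,
# 4 feature channels, Gaussian class-conditional distributions.
n_per_class, n_feat = 200, 4
mean_a = np.array([1.0, 0.0, 0.5, 0.0])
mean_b = np.array([0.0, 1.0, 0.0, 0.5])
X = np.vstack([mean_a + 0.3 * rng.standard_normal((n_per_class, n_feat)),
               mean_b + 0.3 * rng.standard_normal((n_per_class, n_feat))])
y = np.repeat([0, 1], n_per_class)

# LDA: pooled within-class covariance, linear decision boundary
mu = np.array([X[y == c].mean(axis=0) for c in (0, 1)])
Xc = np.vstack([X[y == c] - mu[c] for c in (0, 1)])
cov = Xc.T @ Xc / (len(X) - 2)
w = np.linalg.solve(cov, mu[1] - mu[0])
b = -0.5 * (mu[1] + mu[0]) @ w
pred = (X @ w + b > 0).astype(int)
accuracy = (pred == y).mean()
```

    The study's findings map onto this picture directly: adding intrinsic-muscle channels enriches the feature vector, while wrist motion shifts the class-conditional distributions, which is what degrades a classifier trained in a single wrist posture.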

  3. Effects of finite electron temperature on gradient drift instabilities in partially magnetized plasmas

    Science.gov (United States)

    Lakhin, V. P.; Ilgisonis, V. I.; Smolyakov, A. I.; Sorokina, E. A.; Marusov, N. A.

    2018-01-01

    The gradient-drift instabilities of partially magnetized plasmas in plasma devices with crossed electric and magnetic fields are investigated in the framework of the two-fluid model with finite electron temperature in an inhomogeneous magnetic field. The finite electron Larmor radius (FLR) effects are also included via the gyroviscosity tensor taking into account the magnetic field gradient. This model correctly describes the electron dynamics for k⊥ρe>1 in the sense of Padé approximants (here, k⊥ and ρe are the wavenumber perpendicular to the magnetic field and the electron Larmor radius, respectively). The local dispersion relation for electrostatic plasma perturbations with the frequency in the range between the ion and electron cyclotron frequencies and propagating strictly perpendicular to the magnetic field is derived. The dispersion relation includes the effects of the equilibrium E ×B electron current, finite ion velocity, electron inertia, electron FLR, magnetic field gradients, and Debye length effects. The necessary and sufficient condition of stability is derived, and the stability boundary is found. It is shown that, in general, the electron inertia and FLR effects stabilize the short-wavelength perturbations. In some cases, such effects completely suppress the high-frequency short-wavelength modes so that only the long-wavelength low-frequency (with respect to the lower-hybrid frequency) modes remain unstable.

  4. Using the sense of coherence framework as a tactical approach to communicating corrective action in crisis situations

    DEFF Research Database (Denmark)

    Simonsen, Daniel Morten; Jacobsen, Johan Martin Hjorth

    By combining attribution theory with crisis types, Coombs developed a theory of Situational Crisis Communication (SCCT) that recommends which crisis response strategy is appropriate for which crisis (Coombs, 1999; 2007; 2012). The seven response strategies presented in SCCT have been tested … empirically; however, there still is a need for empirical contributions on the tactical level where the Crisis Communication Message (CCM) is developed, as argued here: “SCCT tries to answer the question of when to use different crisis responses, but it does not help researchers address the question of how …” … the response strategy named corrective action for our study. According to Coombs, corrective action means giving stakeholders information about a crisis and explaining what is being done to handle it (Coombs, 2012: 150). Corrective action shows stakeholders that their safety is a priority, and thus reduces …

  5. Simple dead-time corrections for discrete time series of non-Poisson data

    International Nuclear Information System (INIS)

    Larsen, Michael L; Kostinski, Alexander B

    2009-01-01

    The problem of dead time (instrumental insensitivity to detectable events due to electronic or mechanical reset time) is considered. Most existing algorithms to correct for event count errors due to dead time implicitly rely on Poisson counting statistics of the underlying phenomena. However, when the events to be measured are clustered in time, the Poisson statistics assumption results in underestimating both the true event count and any statistics associated with count variability; the 'busiest' part of the signal is partially missed. Using the formalism associated with the pair-correlation function, we develop first-order correction expressions for the general case of arbitrary counting statistics. The results are verified through simulation of a realistic clustering scenario
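As a concrete illustration of the dead-time problem, the sketch below simulates a non-paralyzable detector and applies the textbook first-order correction, which assumes Poisson statistics; the paper's pair-correlation formalism adds a term for clustered data. Function names are ours, not the authors'.

```python
# Illustrative sketch, assuming a non-paralyzable detector model.

def apply_dead_time(times, dead_time):
    """Simulate a detector: drop events arriving within tau of the last kept one."""
    kept, last = [], None
    for t in times:
        if last is None or t - last >= dead_time:
            kept.append(t)
            last = t
    return kept

def corrected_rate_nonparalyzable(measured_rate, dead_time):
    """Recover the true event rate from the measured rate (Poisson assumption)."""
    loss = 1.0 - measured_rate * dead_time
    if loss <= 0:
        raise ValueError("measured rate saturates the detector")
    return measured_rate / loss
```

For clustered events the Poisson-based formula underestimates the true rate, because the "busiest" stretches of the signal lose disproportionately many counts; that is exactly the bias the pair-correlation correction addresses.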

  6. AN ERROR CORRECTION MODEL APPROACH AS A DETERMINANT OF STOCK PRICES

    Directory of Open Access Journals (Sweden)

    David Kaluge

    2017-03-01

    Full Text Available This research examined the effect of profitability, the interest rate, GDP, and the foreign exchange rate on stock prices. The approach used was an error correction model. Profitability was indicated by the variables EPS and ROI, while the SBI rate (1 month) was used to represent the interest rate. This research found that all variables simultaneously affected stock prices significantly. Partially, EPS, PER, and the foreign exchange rate significantly affected the prices in both the short run and the long run. Interestingly, SBI and GDP did not affect the prices at all. The variable ROI had only a long-run impact on the prices.
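The two-step (Engle-Granger style) error correction model the abstract refers to can be sketched as follows: fit the long-run relation by ordinary least squares, then regress the differenced series on the lagged residual. This is a minimal single-regressor illustration with invented data, not the authors' estimation.

```python
# Minimal Engle-Granger two-step ECM sketch (illustrative only).

def ols(x, y):
    """Simple-regression intercept and slope by least squares."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

def ecm(y, x):
    """Return the error-correction coefficient (speed of adjustment)."""
    a, b = ols(x, y)                                  # step 1: long-run relation
    resid = [yi - (a + b * xi) for xi, yi in zip(x, y)]
    dy = [y[t] - y[t - 1] for t in range(1, len(y))]  # differenced series
    # step 2: regress dy on the lagged residual (short-run terms omitted)
    _, gamma = ols(resid[:-1], dy)
    return gamma
```

A negative coefficient indicates that deviations from the long-run relation are corrected over time, which is the defining feature of the ECM.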

  7. Classifying Acute Respiratory Distress Syndrome Severity: Correcting the Arterial Oxygen Partial Pressure to Fractional Inspired Oxygen at Altitude.

    Science.gov (United States)

    Pérez-Padilla, Rogelio; Hernández-Cárdenas, Carmen Margarita; Lugo-Goytia, Gustavo

    2016-01-01

    In the well-known Berlin definition of acute respiratory distress syndrome (ARDS), there is a recommended adjustment for arterial oxygen partial pressure to fractional inspired oxygen (PaO2/FIO2) at altitude, but without a reference as to how it was derived.

  8. Constructing a clinical decision-making framework for image-guided radiotherapy using a Bayesian Network

    International Nuclear Information System (INIS)

    Hargrave, C; Deegan, T; Gibbs, A; Poulsen, M; Moores, M; Harden, F; Mengersen, K

    2014-01-01

    A decision-making framework for image-guided radiotherapy (IGRT) is being developed using a Bayesian Network (BN) to graphically describe, and probabilistically quantify, the many interacting factors that are involved in this complex clinical process. Outputs of the BN will provide decision-support for radiation therapists to assist them to make correct inferences relating to the likelihood of treatment delivery accuracy for a given image-guided set-up correction. The framework is being developed as a dynamic object-oriented BN, allowing for complex modelling with specific subregions, as well as representation of the sequential decision-making and belief updating associated with IGRT. A prototype graphic structure for the BN was developed by analysing IGRT practices at a local radiotherapy department and incorporating results obtained from a literature review. Clinical stakeholders reviewed the BN to validate its structure. The BN consists of a sub-network for evaluating the accuracy of IGRT practices and technology. The directed acyclic graph (DAG) contains nodes and directional arcs representing the causal relationship between the many interacting factors such as tumour site and its associated critical organs, technology and technique, and inter-user variability. The BN was extended to support on-line and off-line decision-making with respect to treatment plan compliance. Following conceptualisation of the framework, the BN will be quantified. It is anticipated that the finalised decision-making framework will provide a foundation to develop better decision-support strategies and automated correction algorithms for IGRT.
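At its core, the belief updating in such a BN is repeated application of Bayes' rule over conditional probability tables. A minimal two-node sketch, with invented node names and probabilities, might look like:

```python
# Toy two-node Bayesian network in the spirit of the IGRT framework.
# Node names and all probabilities are invented for illustration.

P_ACCURATE = 0.9                              # prior: set-up correction accurate
P_MATCH_GIVEN = {True: 0.95, False: 0.2}      # P(image match | accuracy state)

def posterior_accurate(image_match):
    """P(set-up accurate | observed image match) by Bayes' rule."""
    if image_match:
        num = P_MATCH_GIVEN[True] * P_ACCURATE
        den = num + P_MATCH_GIVEN[False] * (1 - P_ACCURATE)
    else:
        num = (1 - P_MATCH_GIVEN[True]) * P_ACCURATE
        den = num + (1 - P_MATCH_GIVEN[False]) * (1 - P_ACCURATE)
    return num / den
```

The full framework chains many such nodes (tumour site, technique, inter-user variability) in a directed acyclic graph, but each local update has this form.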

  9. Constructing a clinical decision-making framework for image-guided radiotherapy using a Bayesian Network

    Science.gov (United States)

    Hargrave, C.; Moores, M.; Deegan, T.; Gibbs, A.; Poulsen, M.; Harden, F.; Mengersen, K.

    2014-03-01

    A decision-making framework for image-guided radiotherapy (IGRT) is being developed using a Bayesian Network (BN) to graphically describe, and probabilistically quantify, the many interacting factors that are involved in this complex clinical process. Outputs of the BN will provide decision-support for radiation therapists to assist them to make correct inferences relating to the likelihood of treatment delivery accuracy for a given image-guided set-up correction. The framework is being developed as a dynamic object-oriented BN, allowing for complex modelling with specific subregions, as well as representation of the sequential decision-making and belief updating associated with IGRT. A prototype graphic structure for the BN was developed by analysing IGRT practices at a local radiotherapy department and incorporating results obtained from a literature review. Clinical stakeholders reviewed the BN to validate its structure. The BN consists of a sub-network for evaluating the accuracy of IGRT practices and technology. The directed acyclic graph (DAG) contains nodes and directional arcs representing the causal relationship between the many interacting factors such as tumour site and its associated critical organs, technology and technique, and inter-user variability. The BN was extended to support on-line and off-line decision-making with respect to treatment plan compliance. Following conceptualisation of the framework, the BN will be quantified. It is anticipated that the finalised decision-making framework will provide a foundation to develop better decision-support strategies and automated correction algorithms for IGRT.

  10. Model-independent partial wave analysis using a massively-parallel fitting framework

    Science.gov (United States)

    Sun, L.; Aoude, R.; dos Reis, A. C.; Sokoloff, M.

    2017-10-01

    The functionality of GooFit, a GPU-friendly framework for doing maximum-likelihood fits, has been extended to extract model-independent S-wave amplitudes in three-body decays such as D⁺ → h⁺h⁺h⁻. A full amplitude analysis is done where the magnitudes and phases of the S-wave amplitudes are anchored at a finite number of m²(h⁺h⁻) control points, and a cubic spline is used to interpolate between these points. The amplitudes for P-wave and D-wave intermediate states are modeled as spin-dependent Breit-Wigner resonances. GooFit uses the Thrust library, with a CUDA backend for NVIDIA GPUs and an OpenMP backend for threads with conventional CPUs. Performance on a variety of platforms is compared. Executing on systems with GPUs is typically a few hundred times faster than executing the same algorithm on a single CPU.
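The control-point interpolation of the S-wave can be illustrated with a short sketch. GooFit uses a cubic spline; for brevity the version below interpolates the complex amplitude linearly between knots, and all names are ours.

```python
# Illustrative control-point interpolation of a complex S-wave amplitude.
# GooFit uses a cubic spline; linear interpolation is shown for brevity.

def interp_amplitude(m2, knots, amps):
    """Interpolate a complex amplitude between control points.

    knots -- sorted m^2 values of the control points
    amps  -- complex amplitude anchored at each control point
    """
    for i in range(len(knots) - 1):
        if knots[i] <= m2 <= knots[i + 1]:
            t = (m2 - knots[i]) / (knots[i + 1] - knots[i])
            return (1 - t) * amps[i] + t * amps[i + 1]
    raise ValueError("m2 outside control-point range")
```

In the fit, the real and imaginary parts (or magnitude and phase) at each knot are free parameters, so the S-wave shape is determined by the data rather than by a resonance model.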

  11. Unilateral canine crossbite correction in adults using the Invisalign method: a case report.

    Science.gov (United States)

    Giancotti, Aldo; Mampieri, Gianluca

    2012-01-01

    The aim of this paper is to present and discuss the treatment of a unilateral canine crossbite using clear aligners (Invisalign). The possibility of combining partial fixed appliances with removable elastics to optimize the final outcome is also described. The advantage of the protected movement provided by the aligners in jumping the occlusion during crossbite correction is also highlighted.

  12. Experiences in Building Python Automation Framework for Verification and Data Collections

    Directory of Open Access Journals (Sweden)

    2010-09-01

    Full Text Available

    This paper describes our experiences in building a Python automation framework. Specifically, the automation framework is used to support verification and data collection scripts. The scripts control various test equipment in addition to the device under test (DUT) to characterize a specific performance with a specific configuration or to evaluate the correctness of the behaviour of the DUT. The specific focus of this paper is on documenting our experiences in building an automation framework using Python: on the purposes, goals and the benefits, rather than on a tutorial of how to build such a framework.
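A minimal sketch of such a framework, assuming a registry of verification steps run against a simulated DUT, might look like this (the class and method names are invented for illustration, not taken from the paper):

```python
# Minimal verification-framework sketch: scripts register steps, the runner
# executes them against a device under test and collects pass/fail results.

class AutomationFramework:
    def __init__(self):
        self.steps = []

    def step(self, func):
        """Decorator registering a verification step."""
        self.steps.append(func)
        return func

    def run(self, dut):
        """Execute every registered step against the DUT; collect results."""
        results = {}
        for func in self.steps:
            try:
                func(dut)
                results[func.__name__] = "PASS"
            except AssertionError:
                results[func.__name__] = "FAIL"
        return results
```

A real framework would add equipment drivers, logging and data collection, but the register-then-run structure is the part that makes scripts reusable.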

  13. Popular conceptions of nationhood in old and new European member states: Partial support for the ethnic-civic framework

    NARCIS (Netherlands)

    Janmaat, J.G.

    2006-01-01

    One of the most influential theories in the study of nationalism has been the ethnic-East/civic-West framework developed by Hans Kohn. Using the 2002 Eurobarometer survey on national identity and building on earlier survey studies, this article examines whether the Kohn framework is valid at the

  14. SAMPL5: 3D-RISM partition coefficient calculations with partial molar volume corrections and solute conformational sampling

    Science.gov (United States)

    Luchko, Tyler; Blinov, Nikolay; Limon, Garrett C.; Joyce, Kevin P.; Kovalenko, Andriy

    2016-11-01

    Implicit solvent methods for classical molecular modeling are frequently used to provide fast, physics-based hydration free energies of macromolecules. Less commonly considered is the transferability of these methods to other solvents. The Statistical Assessment of Modeling of Proteins and Ligands 5 (SAMPL5) distribution coefficient dataset and the accompanying explicit solvent partition coefficient reference calculations provide a direct test of solvent model transferability. Here we use the 3D reference interaction site model (3D-RISM) statistical-mechanical solvation theory, with a well tested water model and a new united atom cyclohexane model, to calculate partition coefficients for the SAMPL5 dataset. The cyclohexane model performed well in training and testing (R=0.98 for amino acid neutral side chain analogues) but only if a parameterized solvation free energy correction was used. In contrast, the same protocol, using single solute conformations, performed poorly on the SAMPL5 dataset, obtaining R=0.73 compared to the reference partition coefficients, likely due to the much larger solute sizes. Including solute conformational sampling through molecular dynamics coupled with 3D-RISM (MD/3D-RISM) improved agreement with the reference calculation to R=0.93. Since our initial calculations only considered partition coefficients and not distribution coefficients, solute sampling provided little benefit comparing against experiment, where ionized and tautomer states are more important. Applying a simple pK_a correction improved agreement with experiment from R=0.54 to R=0.66, despite a small number of outliers. Better agreement is possible by accounting for tautomers and improving the ionization correction.
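The ionization correction mentioned at the end follows the standard relation between partition (log P) and distribution (log D) coefficients: for a monoprotic acid, only the neutral fraction partitions, so log D = log P − log10(1 + 10^(pH − pKa)). A sketch using this textbook formula (the function name is ours, not the authors' code):

```python
import math

def log_d_acid(log_p, pka, ph=7.4):
    """log D of a monoprotic acid: only the neutral species partitions.

    Standard Henderson-Hasselbalch-based correction; for a base the
    exponent sign flips to (pka - ph).
    """
    return log_p - math.log10(1.0 + 10.0 ** (ph - pka))
```

When pH equals pKa, half the solute is ionized and log D sits exactly log10(2) ≈ 0.30 below log P; far below the pKa the correction vanishes.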

  15. SAMPL5: 3D-RISM partition coefficient calculations with partial molar volume corrections and solute conformational sampling.

    Science.gov (United States)

    Luchko, Tyler; Blinov, Nikolay; Limon, Garrett C; Joyce, Kevin P; Kovalenko, Andriy

    2016-11-01

    Implicit solvent methods for classical molecular modeling are frequently used to provide fast, physics-based hydration free energies of macromolecules. Less commonly considered is the transferability of these methods to other solvents. The Statistical Assessment of Modeling of Proteins and Ligands 5 (SAMPL5) distribution coefficient dataset and the accompanying explicit solvent partition coefficient reference calculations provide a direct test of solvent model transferability. Here we use the 3D reference interaction site model (3D-RISM) statistical-mechanical solvation theory, with a well tested water model and a new united atom cyclohexane model, to calculate partition coefficients for the SAMPL5 dataset. The cyclohexane model performed well in training and testing (R=0.98 for amino acid neutral side chain analogues) but only if a parameterized solvation free energy correction was used. In contrast, the same protocol, using single solute conformations, performed poorly on the SAMPL5 dataset, obtaining R=0.73 compared to the reference partition coefficients, likely due to the much larger solute sizes. Including solute conformational sampling through molecular dynamics coupled with 3D-RISM (MD/3D-RISM) improved agreement with the reference calculation to R=0.93. Since our initial calculations only considered partition coefficients and not distribution coefficients, solute sampling provided little benefit comparing against experiment, where ionized and tautomer states are more important. Applying a simple pK_a correction improved agreement with experiment from R=0.54 to R=0.66, despite a small number of outliers. Better agreement is possible by accounting for tautomers and improving the ionization correction.

  16. Bound states on the lattice with partially twisted boundary conditions

    International Nuclear Information System (INIS)

    Agadjanov, D.; Guo, F.-K.; Ríos, G.; Rusetsky, A.

    2015-01-01

    We propose a method to study the nature of exotic hadrons by determining the wave function renormalization constant Z from lattice simulations. It is shown that, instead of studying the volume-dependence of the spectrum, one may investigate the dependence of the spectrum on the twisting angle, imposing twisted boundary conditions on the fermion fields on the lattice. In certain cases, e.g., the case of the DK bound state which is addressed in detail, it is demonstrated that the partial twisting is equivalent to the full twisting up to exponentially small corrections.

  17. FACET – a “Flexible Artifact Correction and Evaluation Toolbox” for concurrently recorded EEG/fMRI data

    Science.gov (United States)

    2013-01-01

    Background In concurrent EEG/fMRI recordings, EEG data are impaired by the fMRI gradient artifacts, which exceed the EEG signal by several orders of magnitude. While several algorithms exist to correct the EEG data, these algorithms lack the flexibility to either leave out or add new steps. The open-source MATLAB toolbox FACET presented here is a modular toolbox for the fast and flexible correction and evaluation of imaging artifacts from concurrently recorded EEG datasets. It consists of an Analysis, a Correction and an Evaluation framework, allowing the user to choose from different artifact correction methods with various pre- and post-processing steps to form flexible combinations. The quality of the chosen correction approach can then be evaluated and compared to different settings. Results FACET was evaluated on a dataset provided with the FMRIB plugin for EEGLAB using two different correction approaches: Averaged Artifact Subtraction (AAS, Allen et al., NeuroImage 12(2):230–239, 2000) and the FMRI Artifact Slice Template Removal (FASTR, Niazy et al., NeuroImage 28(3):720–737, 2005). The obtained results were compared to the FASTR algorithm implemented in the EEGLAB plugin FMRIB. No differences were found between the FACET implementation of FASTR and the original algorithm across all gradient-artifact-relevant performance indices. Conclusion The FACET toolbox not only provides facilities for all three modalities: data analysis, artifact correction as well as evaluation and documentation of the results, but it also offers an easily extendable framework for development and evaluation of new approaches. PMID:24206927
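Averaged Artifact Subtraction, one of the two correction approaches evaluated, reduces to building an average artifact template over trigger-aligned epochs and subtracting it from each epoch. A minimal sketch with invented names (FACET itself is MATLAB; Python is used here only for illustration):

```python
# Averaged Artifact Subtraction (AAS) sketch: the gradient artifact repeats
# with each slice/volume, so averaging epochs cancels the (uncorrelated)
# EEG and leaves an artifact template to subtract.

def aas_correct(signal, epoch_len):
    """Return (corrected signal, artifact template) for trigger-aligned epochs."""
    n_epochs = len(signal) // epoch_len
    template = [
        sum(signal[e * epoch_len + i] for e in range(n_epochs)) / n_epochs
        for i in range(epoch_len)
    ]
    corrected = list(signal)
    for e in range(n_epochs):
        for i in range(epoch_len):
            corrected[e * epoch_len + i] -= template[i]
    return corrected, template
```

Real implementations add alignment, drift handling and sliding-window templates, which is exactly the kind of pre- and post-processing FACET lets the user mix and match.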

  18. A Numerical Approximation Framework for the Stochastic Linear Quadratic Regulator on Hilbert Spaces

    Energy Technology Data Exchange (ETDEWEB)

    Levajković, Tijana, E-mail: tijana.levajkovic@uibk.ac.at, E-mail: t.levajkovic@sf.bg.ac.rs; Mena, Hermann, E-mail: hermann.mena@uibk.ac.at [University of Innsbruck, Department of Mathematics (Austria); Tuffaha, Amjad, E-mail: atufaha@aus.edu [American University of Sharjah, Department of Mathematics (United Arab Emirates)

    2017-06-15

    We present an approximation framework for computing the solution of the stochastic linear quadratic control problem on Hilbert spaces. We focus on the finite horizon case and the related differential Riccati equations (DREs). Our approximation framework is concerned with the so-called “singular estimate control systems” (Lasiecka in Optimal control problems and Riccati equations for systems with unbounded controls and partially analytic generators: applications to boundary and point control problems, 2004) which model certain coupled systems of parabolic/hyperbolic mixed partial differential equations with boundary or point control. We prove that the solutions of the approximate finite-dimensional DREs converge to the solution of the infinite-dimensional DRE. In addition, we prove that the optimal state and control of the approximate finite-dimensional problem converge to the optimal state and control of the corresponding infinite-dimensional problem.

  19. Correction of Depolarizing Resonances in ELSA

    Science.gov (United States)

    Steier, C.; Husmann, D.

    1997-05-01

    The 3.5 GeV electron stretcher ring ELSA (ELectron Stretcher Accelerator) at Bonn University has been operational since 1987, both as a continuous beam facility for external fixed-target experiments and as a partially dedicated synchrotron light source. For the external experiments an upgrade to polarized electrons is under way. One source of polarized electrons (GaAs crystal, photoeffect using circularly polarized laser light) is operational. Studies of minimizing the loss of polarization due to the crossing of depolarizing resonances, which necessarily exist in circular accelerators (storage rings), started recently. Calculations concerning different correction schemes for the depolarizing resonances in ELSA are presented, and first results of measurements are shown (done by means of a Møller polarimeter in one of the external beamlines).

  20. Oxygen-Partial-Pressure Sensor for Aircraft Oxygen Mask

    Science.gov (United States)

    Kelly, Mark; Pettit, Donald

    2003-01-01

    A device that generates an alarm when the partial pressure of oxygen decreases to less than a preset level has been developed to help prevent hypoxia in a pilot or other crewmember of a military or other high-performance aircraft. Loss of oxygen partial pressure can be caused by poor fit of the mask or failure of a hose or other component of an oxygen distribution system. The deleterious physical and mental effects of hypoxia cause the loss of a military aircraft and crew every few years. The device is installed in the crewmember's oxygen mask and is powered via communication wiring already present in all such oxygen masks. The device (see figure) includes an electrochemical sensor, the output potential of which is proportional to the partial pressure of oxygen. The output of the sensor is amplified and fed to the input of a comparator circuit. A reference potential that corresponds to the amplified sensor output at the alarm oxygen-partial-pressure level is fed to the second input of the comparator. When the sensed partial pressure of oxygen falls below the minimum acceptable level, the output of the comparator goes from the low state (a few millivolts) to the high state (near the supply potential, which is typically 6.8 V for microphone power). The switching of the comparator output to the high state triggers a tactile alarm in the form of a vibration in the mask, generated by a small 1.3-Vdc pager motor spinning an eccentric mass at a rate between 8,000 and 10,000 rpm. The sensation of the mask vibrating against the crewmember's nose is very effective at alerting the crewmember, who may already be groggy from hypoxia and is immersed in an environment that is saturated with visual cues and sounds. Indeed, the sensation is one of rudeness, but such rudeness could be what is needed to stimulate the crewmember to take corrective action in a life-threatening situation.

  1. Accounting for partiality in serial crystallography using ray-tracing principles.

    Science.gov (United States)

    Kroon-Batenburg, Loes M J; Schreurs, Antoine M M; Ravelli, Raimond B G; Gros, Piet

    2015-09-01

    Serial crystallography generates `still' diffraction data sets that are composed of single diffraction images obtained from a large number of crystals arbitrarily oriented in the X-ray beam. Estimation of the reflection partialities, which accounts for the expected observed fractions of diffraction intensities, has so far been problematic. In this paper, a method is derived for modelling the partialities by making use of the ray-tracing diffraction-integration method EVAL. The method estimates partialities based on crystal mosaicity, beam divergence, wavelength dispersion, crystal size and the interference function, accounting for crystallite size. It is shown that modelling of each reflection by a distribution of interference-function weighted rays yields a `still' Lorentz factor. Still data are compared with a conventional rotation data set collected from a single lysozyme crystal. Overall, the presented still integration method improves the data quality markedly. The R factor of the still data compared with the rotation data decreases from 26% using a Monte Carlo approach to 12% after applying the Lorentz correction, to 5.3% when estimating partialities by EVAL and finally to 4.7% after post-refinement. The merging R(int) factor of the still data improves from 105 to 56% but remains high. This suggests that the accuracy of the model parameters could be further improved. However, with a multiplicity of around 40 and an R(int) of ∼50% the merged still data approximate the quality of the rotation data. The presented integration method suitably accounts for the partiality of the observed intensities in still diffraction data, which is a critical step to improve data quality in serial crystallography.

  2. Karect: accurate correction of substitution, insertion and deletion errors for next-generation sequencing data

    KAUST Repository

    Allam, Amin; Kalnis, Panos; Solovyev, Victor

    2015-01-01

    …accurate than previous methods, both in terms of correcting individual-base errors (up to 10% increase in accuracy gain) and post de novo assembly quality (up to 10% increase in NGA50). We also introduce an improved framework for evaluating the quality of error correction.

  3. On the Interpretation of Gravitational Corrections to Gauge Couplings

    CERN Document Server

    Ellis, John

    2012-01-01

    Several recent papers discuss gravitational corrections to gauge couplings that depend quadratically on the energy. In the framework of the background-field approach, these correspond in general to adding to the effective action terms quadratic in the field strength but with higher-order space-time derivatives. We observe that such terms can be removed by appropriate local field redefinitions, and do not contribute to physical scattering-matrix elements. We illustrate this observation in the context of open string theory, where the effective action includes, among other terms, the well-known Born-Infeld form of non-linear electrodynamics. We conclude that the quadratically energy-dependent gravitational corrections are not physical in the sense of contributing to the running of a physically-measurable gauge coupling, or of unifying couplings as in string theory.

  4. Automation of one-loop QCD corrections

    CERN Document Server

    Hirschi, Valentin; Frixione, Stefano; Garzelli, Maria Vittoria; Maltoni, Fabio; Pittau, Roberto

    2011-01-01

    We present the complete automation of the computation of one-loop QCD corrections, including UV renormalization, to an arbitrary scattering process in the Standard Model. This is achieved by embedding the OPP integrand reduction technique, as implemented in CutTools, into the MadGraph framework. By interfacing the tool so constructed, which we dub MadLoop, with MadFKS, the fully automatic computation of any infrared-safe observable at the next-to-leading order in QCD is attained. We demonstrate the flexibility and the reach of our method by calculating the production rates for a variety of processes at the 7 TeV LHC.

  5. Surgical correction of Peyronie's disease via tunica albuginea plication or partial plaque excision with pericardial graft: long-term follow up.

    Science.gov (United States)

    Taylor, Frederick L; Levine, Laurence A

    2008-09-01

    Limited publications exist regarding long-term outcomes of surgical correction for Peyronie's Disease (PD). To report on long-term postoperative parameters including rigidity, curvature, length, sensation, function, and patient satisfaction in men with PD treated surgically via Tunica Albuginea Plication (TAP) or Partial Plaque Excision with Tutoplast Human Pericardial Grafting (PEG). Objective and subjective data regarding patients who underwent either TAP or PEG. We report on 142 patients (61 TAP and 81 PEG) with both objective data and subjective patient reports on their postoperative experience. Patients underwent either TAP or PEG following our previously published algorithm. Data were collected via chart review and an internally generated survey, in which patients were asked about their rigidity, straightness, penile length, sensation, sexual function and satisfaction. Average follow-up for TAP patients was 72 months (range 8-147) and 58 months (range 6-185) for PEG patients. At survey time, 93% of TAP and 91% of PEG patients reported curvatures of less than 30°. Rigidity was reportedly as good as or better than preoperative in 81% of TAP and 68% of PEG patients, and was adequate for coitus in 90% of TAP and 79% of PEG patients with or without the use of PDE5i. Objective flaccid stretched penile length measurements obtained pre- and postoperatively show an average overall length gain of 0.6 cm (range -3.5-3.5) for TAP and 0.2 cm (range -1.5-2.0) for PEG patients. Sensation was reportedly as good as or better than preoperative in 69% of both TAP and PEG patients; 98% of TAP patients and 90% of PEG patients are able to achieve orgasm. 82% of TAP patients and 75% of PEG patients were either very satisfied or satisfied. Our long-term results support both TAP and PEG as durable surgical therapy for men with clinically significant PD.

  6. Total, partial and differential ionization cross sections in proton-hydrogen collisions at low energy

    Energy Technology Data Exchange (ETDEWEB)

    Zou, Shiyang [Graduate University for Advanced Studies, School of Mathematical and Physical Science, Toki, Gifu (Japan); Pichl, Lukas [University of Aizu, Foundation of Computer Science Laboratory, Aizuwakamatsu, Fukushima (Japan); Kimura, Mineo [Yamaguchi Univ., Graduate School of Science and Engineering, Ube, Yamaguchi (Japan); Kato, Takako [National Inst. for Fusion Science, Toki, Gifu (Japan)

    2003-01-01

    Single-differential, partial and total ionization cross sections for the proton-hydrogen collision system at low energy range (0.1-10 keV/amu) are determined by using the electron translation factor corrected molecular-orbital close-coupling method. Full convergence of ionization cross sections as a function of H₂⁺ molecular basis size is achieved by including up to 10 bound states and 11 continuum partial waves. The present cross sections are in excellent agreement with the recent experiments of Shah et al., but decrease more rapidly than the cross sections measured by Pieksma et al. with decreasing energy. The calculated cross section data are included in this report. (author)

  7. 75 FR 17955 - Public Land Order No. 7736; Partial Revocation of the Bureau of Reclamation Order Dated February...

    Science.gov (United States)

    2010-04-08

    ... DEPARTMENT OF THE INTERIOR Bureau of Land Management [LLCA930000; CACA 7817] Public Land Order No. 7736; Partial Revocation of the Bureau of Reclamation Order Dated February 19, 1952; California AGENCY: Bureau of Land Management. ACTION: Correction. SUMMARY: The Bureau of Land Management published a...

  8. Revisionist integral deferred correction with adaptive step-size control

    KAUST Repository

    Christlieb, Andrew

    2015-03-27

    © 2015 Mathematical Sciences Publishers. Adaptive step-size control is a critical feature for the robust and efficient numerical solution of initial-value problems in ordinary differential equations. In this paper, we show that adaptive step-size control can be incorporated within a family of parallel time integrators known as revisionist integral deferred correction (RIDC) methods. The RIDC framework allows for various strategies to implement step-size control, and we report results from exploring a few of them.
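A common way to implement the kind of step-size control the paper explores is the classic error-based controller h_new = h · fac · (tol/err)^(1/(p+1)), clipped to a safe range. The sketch below is a generic controller of this form, with invented parameter names, not the RIDC implementation itself:

```python
# Generic error-based step-size controller (illustrative, not the RIDC code).

def new_step_size(h, err, tol, order, fac=0.9, fac_min=0.2, fac_max=5.0):
    """Propose the next step size from the current local error estimate.

    order -- order of the method, so the error scales like h**(order + 1)
    fac   -- safety factor; fac_min/fac_max clip aggressive changes
    """
    if err == 0:
        return h * fac_max
    scale = fac * (tol / err) ** (1.0 / (order + 1))
    return h * min(fac_max, max(fac_min, scale))
```

In a parallel integrator like RIDC the interesting question, which the paper studies, is where in the predictor/corrector pipeline this controller is applied.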

  9. Karect: accurate correction of substitution, insertion and deletion errors for next-generation sequencing data

    KAUST Repository

    Allam, Amin

    2015-07-14

    Motivation: Next-generation sequencing generates large amounts of data affected by errors in the form of substitutions, insertions or deletions of bases. Error correction based on the high-coverage information typically improves de novo assembly. Most existing tools can correct substitution errors only; some support insertions and deletions, but accuracy in many cases is low. Results: We present Karect, a novel error correction technique based on multiple alignment. Our approach supports substitution, insertion and deletion errors. It can handle non-uniform coverage as well as moderately covered areas of the sequenced genome. Experiments with data from Illumina, 454 FLX and Ion Torrent sequencing machines demonstrate that Karect is more accurate than previous methods, both in terms of correcting individual-base errors (up to 10% increase in accuracy gain) and post de novo assembly quality (up to 10% increase in NGA50). We also introduce an improved framework for evaluating the quality of error correction.
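A crude stand-in for alignment-based correction is a per-column majority vote over reads aligned to the same region: high coverage lets the consensus outvote random errors. This conveys the idea but handles substitutions only, unlike Karect's multiple-alignment approach:

```python
# Consensus correction sketch: majority vote per alignment column.
# Handles substitutions only; real tools like Karect also model indels.
from collections import Counter

def majority_correct(aligned_reads):
    """Return the column-wise majority consensus of equal-length aligned reads."""
    corrected = []
    for col in zip(*aligned_reads):
        base, _ = Counter(col).most_common(1)[0]
        corrected.append(base)
    return "".join(corrected)
```

Supporting insertions and deletions requires a true multiple alignment with gap columns, which is where most of the difficulty (and Karect's contribution) lies.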

  10. When structure affects function--the need for partial volume effect correction in functional and resting state magnetic resonance imaging studies.

    Science.gov (United States)

    Dukart, Juergen; Bertolino, Alessandro

    2014-01-01

    Both functional and, more recently, resting state magnetic resonance imaging have become established tools to investigate functional brain networks. Most studies use these tools to compare different populations without controlling for potential differences in underlying brain structure which might affect the functional measurements of interest. Here, we adapt a simulation approach combined with evaluation of real resting state magnetic resonance imaging data to investigate the potential impact of partial volume effects on established functional and resting state magnetic resonance imaging analyses. We demonstrate that differences in the underlying structure lead to a significant increase in detected functional differences in both types of analyses. The largest increases in functional differences are observed for the highest signal-to-noise ratios and when signal with the lowest amount of partial volume effects is compared to any other partial volume effect constellation. In real data, structural information explains about 25% of within-subject variance observed in degree centrality--an established resting state connectivity measurement. Controlling this measurement for structural information can substantially alter correlational maps obtained in group analyses. Our results question current approaches of evaluating these measurements in diseased populations with known structural changes without controlling for potential differences in these measurements.
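Controlling a functional measurement for structure, as the authors advocate, amounts to regressing the structural covariate out and analysing the residuals. A minimal single-covariate sketch (illustrative only; real pipelines use voxel-wise multiple regression):

```python
# Regress a structural covariate (e.g. grey-matter volume) out of a
# functional metric (e.g. degree centrality) and keep the residuals.

def residualize(metric, covariate):
    """Residuals of metric after removing its linear dependence on covariate."""
    n = len(metric)
    mx = sum(covariate) / n
    my = sum(metric) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(covariate, metric))
         / sum((x - mx) ** 2 for x in covariate))
    a = my - b * mx
    return [y - (a + b * x) for x, y in zip(covariate, metric)]
```

Group comparisons run on the residuals then reflect functional differences beyond what the structural covariate already explains.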

  11. Specification Search for Identifying the Correct Mean Trajectory in Polynomial Latent Growth Models

    Science.gov (United States)

    Kim, Minjung; Kwok, Oi-Man; Yoon, Myeongsun; Willson, Victor; Lai, Mark H. C.

    2016-01-01

    This study investigated the optimal strategy for model specification search under the latent growth modeling (LGM) framework, specifically on searching for the correct polynomial mean or average growth model when there is no a priori hypothesized model in the absence of theory. In this simulation study, the effectiveness of different starting…

  12. Solving nonlinear, high-order partial differential equations using a high-performance isogeometric analysis framework

    KAUST Repository

    Cortes, Adriano Mauricio; Vignal, Philippe; Sarmiento, Adel; García, Daniel O.; Collier, Nathan; Dalcin, Lisandro; Calo, Victor M.

    2014-01-01

    In this paper we present PetIGA, a high-performance implementation of Isogeometric Analysis built on top of PETSc. We show its use in solving nonlinear and time-dependent problems, such as phase-field models, by taking advantage of the high-continuity of the basis functions granted by the isogeometric framework. In this work, we focus on the Cahn-Hilliard equation and the phase-field crystal equation.
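A minimal 1-D sketch of the kind of phase-field problem mentioned: a stabilized semi-implicit pseudo-spectral step for the Cahn-Hilliard equation. PetIGA's actual approach (implicit time integration with high-continuity isogeometric bases on top of PETSc) is far more general; all parameters here are illustrative:

```python
import numpy as np

# Cahn-Hilliard in 1-D, periodic domain: c_t = (c^3 - c - eps^2 c_xx)_xx
n, L, eps, dt = 128, 2 * np.pi, 0.1, 1e-3
x = np.linspace(0, L, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)   # spectral wavenumbers
c = 0.1 * np.cos(x)                          # small perturbation of c = 0

for _ in range(2000):
    ch = np.fft.fft(c)
    nl = np.fft.fft(c**3)
    # Semi-implicit update: the stiff biharmonic term is treated implicitly
    ch = (ch * (1 + dt * k**2) - dt * k**2 * nl) / (1 + dt * eps**2 * k**4)
    c = np.fft.ifft(ch).real

# The mixture should separate toward the stable phases c = +/-1
print(np.isfinite(c).all(), round(float(np.max(np.abs(c))), 2))
```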

  13. MOOSE: A parallel computational framework for coupled systems of nonlinear equations

    International Nuclear Information System (INIS)

    Gaston, Derek; Newman, Chris; Hansen, Glen; Lebrun-Grandie, Damien

    2009-01-01

    Systems of coupled, nonlinear partial differential equations (PDEs) often arise in simulation of nuclear processes. MOOSE: Multiphysics Object Oriented Simulation Environment, a parallel computational framework targeted at the solution of such systems, is presented. As opposed to traditional data-flow oriented computational frameworks, MOOSE is instead founded on the mathematical principle of Jacobian-free Newton-Krylov (JFNK). Utilizing the mathematical structure present in JFNK, physics expressions are modularized into 'Kernels,' allowing for rapid production of new simulation tools. In addition, systems are solved implicitly and fully coupled, employing physics-based preconditioning, which provides great flexibility even with large variance in time scales. A summary of the mathematics, an overview of the structure of MOOSE, and several representative solutions from applications built on the framework are presented.

  14. An approximation theory for nonlinear partial differential equations with applications to identification and control

    Science.gov (United States)

    Banks, H. T.; Kunisch, K.

    1982-01-01

    Approximation results from linear semigroup theory are used to develop a general framework for convergence of approximation schemes in parameter estimation and optimal control problems for nonlinear partial differential equations. These ideas are used to establish theoretical convergence results for parameter identification using modal (eigenfunction) approximation techniques. Results from numerical investigations of these schemes for both hyperbolic and parabolic systems are given.
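A tiny illustration of the modal idea in a parameter-estimation setting (a synthetic example, not the paper's hyperbolic or parabolic test problems): for the heat equation u_t = a u_xx on (0,1) with homogeneous Dirichlet data and initial condition sin(pi x), the solution stays in the first eigenfunction, so the unknown diffusivity a can be recovered from the decay of that single modal amplitude:

```python
import numpy as np

# Exact first-mode amplitude: exp(-a pi^2 t); observe it with 1% noise
a_true = 0.3
t = np.linspace(0.1, 1.0, 10)
rng = np.random.default_rng(4)
obs = np.exp(-a_true * np.pi**2 * t) * (1 + 0.01 * rng.normal(size=t.size))

# Log-linear least squares on the modal amplitude recovers the parameter
a_hat = -np.polyfit(t, np.log(obs), 1)[0] / np.pi**2
print(round(float(a_hat), 3))
```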

  15. Truncation correction for oblique filtering lines

    International Nuclear Information System (INIS)

    Hoppe, Stefan; Hornegger, Joachim; Lauritsch, Guenter; Dennerlein, Frank; Noo, Frederic

    2008-01-01

    State-of-the-art filtered backprojection (FBP) algorithms often define the filtering operation to be performed along oblique filtering lines in the detector. A limited scan field of view leads to the truncation of those filtering lines, which causes artifacts in the final reconstructed volume. In contrast to the case where filtering is performed solely along the detector rows, no methods are available for the case of oblique filtering lines. In this work, the authors present two novel truncation correction methods which effectively handle data truncation in this case. Method 1 (basic approach) handles data truncation in two successive preprocessing steps by applying a hybrid data extrapolation method, which is a combination of a water cylinder extrapolation and a Gaussian extrapolation. It is independent of any specific reconstruction algorithm. Method 2 (kink approach) uses similar concepts for data extrapolation as the basic approach but needs to be integrated into the reconstruction algorithm. Experiments are presented from simulated data of the FORBILD head phantom, acquired along a partial-circle-plus-arc trajectory. The theoretically exact M-line algorithm is used for reconstruction. Although the discussion is focused on theoretically exact algorithms, the proposed truncation correction methods can be applied to any FBP algorithm that exposes oblique filtering lines.
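The flavor of the preprocessing in the basic approach can be illustrated with a 1-D stand-in: extend each truncated filtering line beyond the measured field of view with a smooth tail so the filtering step sees no hard cutoff. The actual method combines a water-cylinder fit with a Gaussian extrapolation; the sketch below uses only an illustrative Gaussian decay:

```python
import numpy as np

def extrapolate_row(row, pad, sigma=10.0):
    """Pad a truncated detector row with smooth Gaussian tails.

    Each side decays from the edge value toward zero; 'sigma' (in
    samples) is an illustrative decay width, not a fitted parameter.
    """
    left = row[0] * np.exp(-0.5 * (np.arange(pad, 0, -1) / sigma) ** 2)
    right = row[-1] * np.exp(-0.5 * (np.arange(1, pad + 1) / sigma) ** 2)
    return np.concatenate([left, row, right])

row = np.full(64, 1.0)            # truncated projection of a uniform object
padded = extrapolate_row(row, pad=32)
print(padded.shape)               # → (128,)
```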

  16. Subroutine MLTGRD: a multigrid algorithm based on multiplicative correction and implicit non-stationary iteration

    International Nuclear Information System (INIS)

    Barry, J.M.; Pollard, J.P.

    1986-11-01

    A FORTRAN subroutine MLTGRD is provided to solve efficiently the large systems of linear equations arising from a five-point finite difference discretisation of some elliptic partial differential equations. MLTGRD is a multigrid algorithm which provides multiplicative correction to iterative solution estimates from successively reduced systems of linear equations. It uses the method of implicit non-stationary iteration for all grid levels
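The coarse-grid-correction idea behind such solvers can be sketched for a 1-D model problem. This is a two-grid additive correction scheme with weighted-Jacobi smoothing; MLTGRD itself uses multiplicative corrections and implicit non-stationary iteration, which this sketch does not reproduce:

```python
import numpy as np

def jacobi(u, f, h, iters, w=2.0 / 3.0):
    """Weighted-Jacobi smoothing for -u'' = f, zero boundary values."""
    for _ in range(iters):
        un = u.copy()
        un[1:-1] = u[1:-1] + 0.5 * w * (
            u[:-2] + u[2:] + h * h * f[1:-1] - 2 * u[1:-1]
        )
        u = un
    return u

def two_grid(f, cycles=10):
    n = f.size                       # grid includes both boundary points
    h = 1.0 / (n - 1)
    u = np.zeros(n)
    for _ in range(cycles):
        u = jacobi(u, f, h, 3)                                     # pre-smooth
        r = np.zeros(n)
        r[1:-1] = f[1:-1] + (u[:-2] - 2 * u[1:-1] + u[2:]) / h**2  # residual
        rc = np.zeros((n + 1) // 2)                                # restrict
        rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
        m = rc.size - 2
        A = (2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / (2 * h) ** 2
        ec = np.zeros_like(rc)
        ec[1:-1] = np.linalg.solve(A, rc[1:-1])                    # coarse solve
        e = np.interp(np.linspace(0, 1, n), np.linspace(0, 1, rc.size), ec)
        u = u + e                                                  # correction
        u = jacobi(u, f, h, 3)                                     # post-smooth
    return u

n = 65
x = np.linspace(0.0, 1.0, n)
f = np.pi**2 * np.sin(np.pi * x)     # -u'' = f has solution u = sin(pi x)
u = two_grid(f)
print(float(np.max(np.abs(u - np.sin(np.pi * x)))))   # ~ discretization error
```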

  17. Augmenting SCA project management and automation framework

    Science.gov (United States)

    Iyapparaja, M.; Sharma, Bhanupriya

    2017-11-01

    In our daily life we need to keep records of things in order to manage them more efficiently. Our company manufactures semiconductor chips and sells them to buyers. Sometimes it manufactures the entire product, sometimes only part of it, and sometimes it sells an intermediary product obtained during manufacturing, so for better management of the entire process there is a need to keep track of all the entities involved. Materials and Methods: To address this problem, the need arose to develop a framework for project maintenance and for automation testing. The project management framework provides an architecture which supports managing the project by maintaining records of all requirements, the test cases created for testing each unit of the software, and defects raised in past years. Through this the quality of the project can be maintained. Results: The automation framework provides the architecture which supports the development and implementation of automation test scripts for the software testing process. Conclusion: To implement the project management framework, the HP product Application Lifecycle Management is used, which provides a central repository to maintain the project.

  18. Functional restoration of penis with partial defect by scrotal skin flap.

    Science.gov (United States)

    Zhao, Yue-Qiang; Zhang, Jie; Yu, Mo-Sheng; Long, Dao-Chou

    2009-11-01

    We investigated a reconstructive method with better sensory and erectile function for partial penile defects and report our long-term results of surgical correction using scrotal skin flaps. We retrospectively analyzed the records of 18 patients with penile defects referred to us between 1992 and 2007. All cases were treated with a scrotal skin flap initially to repair the secondary defect after penile elongation. Of the 18 cases treated during the 15-year period, the mechanism of primary injury was circumcision in 3, animal bite in 9 and penile tumor dissection in 6. Penile elongation, division of the suspensory ligament and scrotal skin flaps achieved penile augmentation and enhancement. Six cases were treated with a bilateral scrotal skin flap supplied by the anterior scrotal artery and 12 were repaired with a total anterior scrotal skin flap supplied by the anterior and posterior scrotal arteries. Penile length in the flaccid and erectile states was obviously increased postoperatively (p <0.05). All patients were followed 1 to 9 years (mean 2.3) postoperatively. Deep and superficial sensation recovered and erectile function was retained. Of the 18 patients, 15 reported satisfactory sexual intercourse during the 0.5 to 5-year follow-up. The method of correcting partial penile defect using scrotal skin flaps is effective and simple according to our long-term experience. This method achieves reasonable cosmesis and penile length in most cases with better sensory and erectile function.

  19. Image transfer with spatial coherence for aberration corrected transmission electron microscopes

    International Nuclear Information System (INIS)

    Hosokawa, Fumio; Sawada, Hidetaka; Shinkawa, Takao; Sannomiya, Takumi

    2016-01-01

    The formula of spatial coherence involving an aberration up to six-fold astigmatism is derived for aberration-corrected transmission electron microscopy. Transfer functions for linear imaging are calculated using the newly derived formula with several residual aberrations. Depending on the symmetry and origin of an aberration, the calculated transfer function shows characteristic symmetries. The aberrations that originate from the field’s components, having uniformity along the z direction, namely, the n-fold astigmatism, show rotational symmetric damping of the coherence. The aberrations that originate from the field’s derivatives with respect to z, such as coma, star, and three lobe, show non-rotational symmetric damping. It is confirmed that the odd-symmetric wave aberrations have influences on the attenuation of an image via spatial coherence. Examples of image simulations of haemoglobin and Si [211] are shown by using the spatial coherence for an aberration-corrected electron microscope. - Highlights: • The formula of partial coherence for aberration corrected TEM is derived. • Transfer functions are calculated with several residual aberrations. • The calculated transfer function shows the characteristic damping. • The odd-symmetric wave aberrations can cause the attenuation of image via coherence. • The examples of aberration corrected TEM image simulations are shown.

  20. Image transfer with spatial coherence for aberration corrected transmission electron microscopes

    Energy Technology Data Exchange (ETDEWEB)

    Hosokawa, Fumio, E-mail: hosokawa@bio-net.co.jp [BioNet Ltd., 2-3-28 Nishikityo, Tachikwa, Tokyo (Japan); Tokyo Institute of Technology, 4259 Nagatsuta, Midoriku, Yokohama 226-8503 (Japan); Sawada, Hidetaka [JEOL (UK) Ltd., JEOL House, Silver Court, Watchmead, Welwyn Garden City, Herts AL7 1LT (United Kingdom); Shinkawa, Takao [BioNet Ltd., 2-3-28 Nishikityo, Tachikwa, Tokyo (Japan); Sannomiya, Takumi [Tokyo Institute of Technology, 4259 Nagatsuta, Midoriku, Yokohama 226-8503 (Japan)

    2016-08-15

    The formula of spatial coherence involving an aberration up to six-fold astigmatism is derived for aberration-corrected transmission electron microscopy. Transfer functions for linear imaging are calculated using the newly derived formula with several residual aberrations. Depending on the symmetry and origin of an aberration, the calculated transfer function shows characteristic symmetries. The aberrations that originate from the field’s components, having uniformity along the z direction, namely, the n-fold astigmatism, show rotational symmetric damping of the coherence. The aberrations that originate from the field’s derivatives with respect to z, such as coma, star, and three lobe, show non-rotational symmetric damping. It is confirmed that the odd-symmetric wave aberrations have influences on the attenuation of an image via spatial coherence. Examples of image simulations of haemoglobin and Si [211] are shown by using the spatial coherence for an aberration-corrected electron microscope. - Highlights: • The formula of partial coherence for aberration corrected TEM is derived. • Transfer functions are calculated with several residual aberrations. • The calculated transfer function shows the characteristic damping. • The odd-symmetric wave aberrations can cause the attenuation of image via coherence. • The examples of aberration corrected TEM image simulations are shown.
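For the rotationally symmetric part of the aberration function, the standard linear-imaging damping envelope due to partial spatial coherence can be computed directly. The parameters below are illustrative; the papers' contribution is the generalization of such envelopes to non-round aberrations up to six-fold astigmatism, which this sketch does not include:

```python
import numpy as np

lam = 1.96e-12       # electron wavelength at 300 kV, m (illustrative)
Cs = 1.0e-6          # residual spherical aberration, m
df = -50.0e-9        # defocus, m
theta_c = 0.5e-3     # illumination semi-convergence angle, rad

g = np.linspace(0.0, 10e9, 256)        # spatial frequency, 1/m

# Textbook spatial-coherence envelope for round aberrations:
# E_s(g) = exp[-(pi*theta_c/lam)^2 (df*lam*g + Cs*lam^3*g^3)^2]
Es = np.exp(
    -((np.pi * theta_c / lam) ** 2) * (df * lam * g + Cs * lam**3 * g**3) ** 2
)

print(float(Es[0]), round(float(Es[-1]), 3))   # no damping at g = 0
```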

  1. Long, partial-short, and special conformal fields

    Energy Technology Data Exchange (ETDEWEB)

    Metsaev, R.R. [Department of Theoretical Physics, P.N. Lebedev Physical Institute,Leninsky prospect 53, Moscow 119991 (Russian Federation)

    2016-05-17

    In the framework of the metric-like approach, totally symmetric arbitrary-spin bosonic conformal fields propagating in flat space-time are studied. Depending on the values of conformal dimension, spin, and dimension of space-time, we classify all conformal fields as long, partial-short, short, and special conformal fields. An ordinary-derivative (second-derivative) Lagrangian formulation for such conformal fields is obtained. The ordinary-derivative Lagrangian formulation is realized by using double-traceless gauge fields, Stueckelberg fields, and auxiliary fields. A gauge-fixed Lagrangian invariant under global BRST transformations is obtained. The gauge-fixed BRST Lagrangian is used for the computation of partition functions for all conformal fields. Using the result for the partition functions, the numbers of propagating D.o.F. for the conformal fields are also found.

  2. Hidden physics models: Machine learning of nonlinear partial differential equations

    Science.gov (United States)

    Raissi, Maziar; Karniadakis, George Em

    2018-03-01

    While there is currently a lot of enthusiasm about "big data", useful data is usually "small" and expensive to acquire. In this paper, we present a new paradigm of learning partial differential equations from small data. In particular, we introduce hidden physics models, which are essentially data-efficient learning machines capable of leveraging the underlying laws of physics, expressed by time dependent and nonlinear partial differential equations, to extract patterns from high-dimensional data generated from experiments. The proposed methodology may be applied to the problem of learning, system identification, or data-driven discovery of partial differential equations. Our framework relies on Gaussian processes, a powerful tool for probabilistic inference over functions, that enables us to strike a balance between model complexity and data fitting. The effectiveness of the proposed approach is demonstrated through a variety of canonical problems, spanning a number of scientific domains, including the Navier-Stokes, Schrödinger, Kuramoto-Sivashinsky, and time dependent linear fractional equations. The methodology provides a promising new direction for harnessing the long-standing developments of classical methods in applied mathematics and mathematical physics to design learning machines with the ability to operate in complex domains without requiring large quantities of data.
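The probabilistic backbone of the approach, Gaussian-process regression, fits in a few lines. This is plain GP interpolation of noisy samples; the hidden physics models additionally encode the differential operator in the covariance structure, which this sketch omits, and all hyperparameters here are hand-picked rather than optimized:

```python
import numpy as np

def rbf(a, b, ell=1.0, sf=1.0):
    """Squared-exponential covariance between two sets of 1-D inputs."""
    d = a[:, None] - b[None, :]
    return sf**2 * np.exp(-0.5 * (d / ell) ** 2)

# "Small data": a dozen noisy observations of a smooth function
rng = np.random.default_rng(1)
x = np.linspace(0, 5, 12)
y = np.sin(x) + 0.05 * rng.normal(size=x.size)

# GP posterior mean at test points (noise variance 0.05^2 on the diagonal)
xs = np.linspace(0, 5, 101)
K = rbf(x, x) + 0.05**2 * np.eye(x.size)
mean = rbf(xs, x) @ np.linalg.solve(K, y)

print(round(float(np.max(np.abs(mean - np.sin(xs)))), 3))
```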

  3. Determination of partial molar volumes from free energy perturbation theory

    Science.gov (United States)

    Vilseck, Jonah Z.; Tirado-Rives, Julian

    2016-01-01

    Partial molar volume is an important thermodynamic property that gives insights into molecular size and intermolecular interactions in solution. Theoretical frameworks for determining the partial molar volume (V°) of a solvated molecule generally apply Scaled Particle Theory or Kirkwood–Buff theory. With the current abilities to perform long molecular dynamics and Monte Carlo simulations, more direct methods are gaining popularity, such as computing V° directly as the difference in computed volume from two simulations, one with a solute present and another without. Thermodynamically, V° can also be determined as the pressure derivative of the free energy of solvation in the limit of infinite dilution. Both approaches are considered herein with the use of free energy perturbation (FEP) calculations to compute the necessary free energies of solvation at elevated pressures. Absolute and relative partial molar volumes are computed for benzene and benzene derivatives using the OPLS-AA force field. The mean unsigned error for all molecules is 2.8 cm3 mol−1. The present methodology should find use in many contexts such as the development and testing of force fields for use in computer simulations of organic and biomolecular systems, as a complement to related experimental studies, and to develop a deeper understanding of solute–solvent interactions. PMID:25589343

  4. Determination of partial molar volumes from free energy perturbation theory.

    Science.gov (United States)

    Vilseck, Jonah Z; Tirado-Rives, Julian; Jorgensen, William L

    2015-04-07

    Partial molar volume is an important thermodynamic property that gives insights into molecular size and intermolecular interactions in solution. Theoretical frameworks for determining the partial molar volume (V°) of a solvated molecule generally apply Scaled Particle Theory or Kirkwood-Buff theory. With the current abilities to perform long molecular dynamics and Monte Carlo simulations, more direct methods are gaining popularity, such as computing V° directly as the difference in computed volume from two simulations, one with a solute present and another without. Thermodynamically, V° can also be determined as the pressure derivative of the free energy of solvation in the limit of infinite dilution. Both approaches are considered herein with the use of free energy perturbation (FEP) calculations to compute the necessary free energies of solvation at elevated pressures. Absolute and relative partial molar volumes are computed for benzene and benzene derivatives using the OPLS-AA force field. The mean unsigned error for all molecules is 2.8 cm(3) mol(-1). The present methodology should find use in many contexts such as the development and testing of force fields for use in computer simulations of organic and biomolecular systems, as a complement to related experimental studies, and to develop a deeper understanding of solute-solvent interactions.
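The pressure-derivative route described above is easy to sketch numerically: with free energies of solvation computed (e.g., by FEP) at a few pressures, V° follows from a finite difference. The numbers below are invented for illustration, chosen so the answer lands near benzene's experimental V° of roughly 89 cm³/mol; they are not the papers' data:

```python
import numpy as np

P = np.array([1.0, 500.0, 1000.0])       # pressure, bar
dG = np.array([-3.10, 1.34, 5.79])       # hypothetical FEP dG_solv, kJ/mol

# V = (d dG_solv / dP)_T : central difference across the pressure range
slope = (dG[2] - dG[0]) / (P[2] - P[0])  # kJ mol^-1 bar^-1

# Units: 1 kJ mol^-1 bar^-1 = 1000 J / 1e5 Pa = 0.01 m^3 = 1e4 cm^3 mol^-1
V = slope * 1.0e4
print(round(float(V), 1))                # → 89.0  (cm^3 mol^-1)
```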

  5. Partial tooth gear bearings

    Science.gov (United States)

    Vranish, John M. (Inventor)

    2010-01-01

    A partial gear bearing including an upper half, comprising peak partial teeth, and a lower, or bottom, half, comprising valley partial teeth. The upper half also has an integrated roller section between each of the peak partial teeth with a radius equal to the gear pitch radius of the radially outwardly extending peak partial teeth. Conversely, the lower half has an integrated roller section between each of the valley half teeth with a radius also equal to the gear pitch radius of the peak partial teeth. The valley partial teeth extend radially inwardly from its roller section. The peak and valley partial teeth are exactly out of phase with each other, as are the roller sections of the upper and lower halves. Essentially, the end roller bearing of the typical gear bearing has been integrated into the normal gear tooth pattern.

  6. Effects of Four-Month Exercise Program on Correction of Body Posture of Persons with Different Visual Impairment

    Directory of Open Access Journals (Sweden)

    Damira Vranesic-Hadzimehmedovic

    2018-04-01

    Full Text Available The aim of this study was to determine the effect of a four-month specific exercise program on correcting the posture of persons with different visual impairments. The sample consisted of 20 elementary-school students with a visual impairment diagnosis, 11 boys and 9 girls aged 9-14 (12±0.6). The examinees were classified according to the established degree of visual impairment: 10 blind persons and 10 partially sighted persons. The pupils voluntarily participated in the exercise program, which was structured in two phases: exercise on dry land and exercise in water. A total of 36 exercise units were completed during the four-month period. Seven tests were used to evaluate body posture, based on the determination of segmental dimensions and the visual projection of marked points. The contents of the program were performed with the aim of preventing and correcting the observed irregularities of body posture. The t-test scores indicated statistically significant differences between the two measurements (p<0.05, p<0.01. It can be concluded that elementary movements, performed through dry-land and especially water exercises, had a good effect on correcting the body posture of blind and partially sighted persons.
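The statistical step reported, comparing the two measurement occasions, is a paired t-test; with synthetic pre/post scores (invented numbers, not the study's data) it reads:

```python
import numpy as np
from scipy import stats

# Hypothetical pre/post posture scores for 20 pupils (higher = worse)
rng = np.random.default_rng(3)
pre = rng.normal(60, 8, size=20)
post = pre - rng.normal(5, 2, size=20)   # built-in average improvement

# Paired (repeated-measures) t-test across the two occasions
t, p = stats.ttest_rel(pre, post)
print(p < 0.05)
```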

  7. Incorporation of QCD effects in basic corrections of the electroweak theory

    CERN Document Server

    Fanchiotti, Sergio; Sirlin, Alberto; Fanchiotti, Sergio; Kniehl, Bernd; Sirlin, Alberto

    1993-01-01

    We study the incorporation of QCD effects in the basic electroweak corrections Δr̂, Δr̂_W, and Δr. They include perturbative O(αα_s) contributions and t t-bar threshold effects. The latter are studied in the resonance and Green-function approaches, in the framework of dispersion relations that automatically satisfy relevant Ward identities. Refinements in the treatment of the electroweak corrections, in both the MS-bar and the on-shell schemes of renormalization, are introduced, including the decoupling of the top quark in certain amplitudes, its effect on ê²(m_Z) and sin²θ̂_W(m_Z), the incorporation of recent results on the leading irreducible O(α²) corrections, and simple expressions for the residual, i.e. "non-electromagnetic", parts of Δr̂, Δr̂_W, and Δr. The results are used to obtain accurate values for m_W and sin²θ̂_W(m_Z), as functions of m_t and m_H. The higher-order effects induce shifts in these parameters comparable to the expected experimental accuracy, a...

  8. Partial Fourier techniques in single-shot cross-term spatiotemporal encoded MRI.

    Science.gov (United States)

    Zhang, Zhiyong; Frydman, Lucio

    2018-03-01

    Cross-term spatiotemporal encoding (xSPEN) is a single-shot approach with exceptional immunity to field heterogeneities, the images of which faithfully deliver 2D spatial distributions without requiring a priori information or using postacquisition corrections. xSPEN, however, suffers from signal-to-noise ratio penalties due to its non-Fourier nature and due to diffusion losses, especially when seeking high resolution. This study explores partial Fourier transform approaches that, acting along either the readout or the spatiotemporally encoded dimensions, reduce these penalties. xSPEN uses an orthogonal (e.g., z) gradient to read, in direct space, the low-bandwidth (e.g., y) dimension. This substantially changes the nature of partial Fourier acquisitions vis-à-vis conventional imaging counterparts. A suitable theoretical analysis is derived to implement these procedures, along either the spatiotemporally encoded or readout axes. Partial Fourier single-shot xSPEN images were recorded on preclinical and human scanners. Owing to their reduction in the experiments' acquisition times, this approach provided substantial sensitivity gains vis-à-vis previous implementations for a given targeted in-plane resolution. The physical origins of these gains are explained. Partial Fourier approaches, particularly when implemented along the low-bandwidth spatiotemporal dimension, provide several-fold sensitivity advantages at minimal costs to the execution and processing of the single-shot experiments. Magn Reson Med 79:1506-1514, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
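The principle that partial Fourier acquisition exploits, conjugate symmetry of k-space for a real-valued object, can be shown in 1-D. Zero phase is assumed here; real data need a phase correction (e.g., homodyne-style) before the symmetry holds, and, as the abstract notes, the spatiotemporally encoded axis of xSPEN behaves differently from a conventional Fourier axis:

```python
import numpy as np

n = 128
img = np.zeros(n)
img[40:88] = 1.0                        # real-valued 1-D "object"

k = np.fft.fft(img)                     # full k-space

# Acquire only 5/8 of k-space; for a real object the rest is redundant
kept = int(5 / 8 * n)                   # indices 0 .. kept-1 are measured
k_pf = np.zeros(n, dtype=complex)
k_pf[:kept] = k[:kept]

# Fill the unmeasured tail by conjugate symmetry: k[n-m] = conj(k[m])
miss = np.arange(kept, n)
k_pf[miss] = np.conj(k_pf[(n - miss) % n])

recon = np.fft.ifft(k_pf).real
print(round(float(np.max(np.abs(recon - img))), 12))   # → 0.0
```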

  9. Student Beliefs towards Written Corrective Feedback: The Case of Filipino High School Students

    Science.gov (United States)

    Balanga, Roselle A.; Fidel, Irish Van B.; Gumapac, Mone Virma Ginry P.; Ho, Howell T.; Tullo, Riza Mae C.; Villaraza, Patricia Monette L.; Vizconde, Camilla J.

    2016-01-01

    The study identified the beliefs of high school students toward Written Corrective Feedback (WCF), based on the framework of Anderson (2010). It also investigated the most common errors that students commit in writing stories and the type of WCF students receive from teachers. Data in the form of stories which were checked by teachers were…

  10. Alkylamine functionalized metal-organic frameworks for composite gas separations

    Science.gov (United States)

    Long, Jeffrey R.; McDonald, Thomas M.; D'Alessandro, Deanna M.

    2018-01-09

    Functionalized metal-organic framework adsorbents with ligands containing basic nitrogen groups, such as alkylamines and alkyldiamines, appended to the metal centers, and a method of isolating carbon dioxide from a stream of combined gases at carbon dioxide partial pressures between approximately 1 and 1000 mbar. The adsorption material has an isosteric heat of carbon dioxide adsorption of greater than -60 kJ/mol at zero coverage using a dual-site Langmuir model.
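A dual-site Langmuir isotherm of the kind used to characterize such adsorption is straightforward to evaluate (parameter values below are illustrative, not fitted to the patent's materials):

```python
import numpy as np

def dual_site_langmuir(p, q1, b1, q2, b2):
    """Loading q(p) for two independent adsorption site populations.

    q1, q2: site capacities (mmol/g); b1, b2: affinities (1/bar).
    Illustrative parameters only.
    """
    return q1 * b1 * p / (1 + b1 * p) + q2 * b2 * p / (1 + b2 * p)

# CO2 partial pressures spanning the 1-1000 mbar range of interest
p = np.array([0.001, 0.01, 0.1, 1.0])    # bar
q = dual_site_langmuir(p, q1=2.5, b1=500.0, q2=3.0, b2=0.5)
print(np.round(q, 3))
```

The strong-affinity site (b1) saturates at low partial pressure while the weak site keeps filling, which is the shape that motivates a two-site fit.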

  11. Framework for successfully implementing an inaugural GRI reporting process

    OpenAIRE

    Dudik, Anna

    2012-01-01

    Project submitted as partial requirement for the conferral of Master in International Management. This thesis is a corporate project analyzing the Global Reporting Initiative (GRI) reporting process. Its main objective is to propose a practical framework to guide organizations that plan to engage in first-time voluntary sustainability reporting using GRI's Sustainability Reporting Guidelines. The thesis provides insight into the exact tasks involved in each stage of the GRI repo...

  12. Corrective Jaw Surgery

    Medline Plus

    Full Text Available Orthognathic surgery is performed to correct the misalignment of jaws ...

  13. The petroleum products financial system in the law of finances for 2003 and the correcting law of finances for 2002

    International Nuclear Information System (INIS)

    2003-03-01

    To analyze the petroleum products financial system within the framework of the law of finances for 2003 and the correcting law of finances for 2002, the document presents the simplification measures, the measures for planning and the prorogation of past dispositions, juridical references, the biofuels, and some elements of the macroeconomic framework. (A.L.B.)

  14. Towards quantitative PET/MRI: a review of MR-based attenuation correction techniques.

    Science.gov (United States)

    Hofmann, Matthias; Pichler, Bernd; Schölkopf, Bernhard; Beyer, Thomas

    2009-03-01

    Positron emission tomography (PET) is a fully quantitative technology for imaging metabolic pathways and dynamic processes in vivo. Attenuation correction of raw PET data is a prerequisite for quantification and is typically based on separate transmission measurements. In PET/CT attenuation correction, however, is performed routinely based on the available CT transmission data. Recently, combined PET/magnetic resonance (MR) has been proposed as a viable alternative to PET/CT. Current concepts of PET/MRI do not include CT-like transmission sources and, therefore, alternative methods of PET attenuation correction must be found. This article reviews existing approaches to MR-based attenuation correction (MR-AC). Most groups have proposed MR-AC algorithms for brain PET studies and more recently also for torso PET/MR imaging. Most MR-AC strategies require the use of complementary MR and transmission images, or morphology templates generated from transmission images. We review and discuss these algorithms and point out challenges for using MR-AC in clinical routine. MR-AC is work-in-progress with potentially promising results from a template-based approach applicable to both brain and torso imaging. While efforts are ongoing in making clinically viable MR-AC fully automatic, further studies are required to realize the potential benefits of MR-based motion compensation and partial volume correction of the PET data.

  15. Towards quantitative PET/MRI: a review of MR-based attenuation correction techniques

    International Nuclear Information System (INIS)

    Hofmann, Matthias; Pichler, Bernd; Schoelkopf, Bernhard; Beyer, Thomas

    2009-01-01

    Positron emission tomography (PET) is a fully quantitative technology for imaging metabolic pathways and dynamic processes in vivo. Attenuation correction of raw PET data is a prerequisite for quantification and is typically based on separate transmission measurements. In PET/CT attenuation correction, however, is performed routinely based on the available CT transmission data. Recently, combined PET/magnetic resonance (MR) has been proposed as a viable alternative to PET/CT. Current concepts of PET/MRI do not include CT-like transmission sources and, therefore, alternative methods of PET attenuation correction must be found. This article reviews existing approaches to MR-based attenuation correction (MR-AC). Most groups have proposed MR-AC algorithms for brain PET studies and more recently also for torso PET/MR imaging. Most MR-AC strategies require the use of complementary MR and transmission images, or morphology templates generated from transmission images. We review and discuss these algorithms and point out challenges for using MR-AC in clinical routine. MR-AC is work-in-progress with potentially promising results from a template-based approach applicable to both brain and torso imaging. While efforts are ongoing in making clinically viable MR-AC fully automatic, further studies are required to realize the potential benefits of MR-based motion compensation and partial volume correction of the PET data. (orig.)

  16. Towards quantitative PET/MRI: a review of MR-based attenuation correction techniques

    Energy Technology Data Exchange (ETDEWEB)

    Hofmann, Matthias [Max Planck Institute for Biological Cybernetics, Tuebingen (Germany); University of Tuebingen, Laboratory for Preclinical Imaging and Imaging Technology of the Werner Siemens-Foundation, Department of Radiology, Tuebingen (Germany); University of Oxford, Wolfson Medical Vision Laboratory, Department of Engineering Science, Oxford (United Kingdom); Pichler, Bernd [University of Tuebingen, Laboratory for Preclinical Imaging and Imaging Technology of the Werner Siemens-Foundation, Department of Radiology, Tuebingen (Germany); Schoelkopf, Bernhard [Max Planck Institute for Biological Cybernetics, Tuebingen (Germany); Beyer, Thomas [University Hospital Duisburg-Essen, Department of Nuclear Medicine, Essen (Germany); Cmi-Experts GmbH, Zurich (Switzerland)

    2009-03-15

    Positron emission tomography (PET) is a fully quantitative technology for imaging metabolic pathways and dynamic processes in vivo. Attenuation correction of raw PET data is a prerequisite for quantification and is typically based on separate transmission measurements. In PET/CT attenuation correction, however, is performed routinely based on the available CT transmission data. Recently, combined PET/magnetic resonance (MR) has been proposed as a viable alternative to PET/CT. Current concepts of PET/MRI do not include CT-like transmission sources and, therefore, alternative methods of PET attenuation correction must be found. This article reviews existing approaches to MR-based attenuation correction (MR-AC). Most groups have proposed MR-AC algorithms for brain PET studies and more recently also for torso PET/MR imaging. Most MR-AC strategies require the use of complementary MR and transmission images, or morphology templates generated from transmission images. We review and discuss these algorithms and point out challenges for using MR-AC in clinical routine. MR-AC is work-in-progress with potentially promising results from a template-based approach applicable to both brain and torso imaging. While efforts are ongoing in making clinically viable MR-AC fully automatic, further studies are required to realize the potential benefits of MR-based motion compensation and partial volume correction of the PET data. (orig.)
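One family of MR-AC methods mentioned above, segmentation-based approaches, reduces to assigning a fixed linear attenuation coefficient per MR-derived tissue class (class labels and mu values below are illustrative; template- and atlas-based methods are more involved):

```python
import numpy as np

# Hypothetical tissue classes from an MR segmentation (0 = air,
# 1 = soft tissue, 2 = bone); illustrative 511-keV coefficients in 1/cm.
MU_511KEV = {0: 0.0, 1: 0.0975, 2: 0.151}

def segmentation_to_mumap(labels):
    """Build a PET attenuation map by assigning a fixed mu per MR class."""
    mumap = np.zeros(labels.shape, dtype=float)
    for cls, mu in MU_511KEV.items():
        mumap[labels == cls] = mu
    return mumap

labels = np.array([[0, 1, 1],
                   [1, 2, 1]])
print(segmentation_to_mumap(labels))
```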

  17. Inverse problems for partial differential equations

    CERN Document Server

    Isakov, Victor

    2017-01-01

This third edition expands upon the earlier edition by adding nearly 40 pages of new material reflecting the analytical and numerical progress in inverse problems over the last 10 years. As in the second edition, the emphasis is on new ideas and methods rather than technical improvements. These new ideas include the use of the stationary phase method in two-dimensional elliptic problems and of multi-frequency/temporal data to improve stability and numerical resolution. There are also numerous corrections and improvements of the exposition throughout. This book is intended for mathematicians working with partial differential equations and their applications, physicists, geophysicists, and financial, electrical, and mechanical engineers involved with nondestructive evaluation, seismic exploration, remote sensing, and various kinds of tomography. Review of the second edition: "The first edition of this excellent book appeared in 1998 and became a standard reference for everyone interested in analysis and numerics of...

  18. A Study of Corrective Feedback and Learner's Uptake in Classroom Interactions

    Directory of Open Access Journals (Sweden)

    Fatemeh Esmaeili

    2014-07-01

The present study examines corrective feedback and learner uptake in classroom interactions. Inspired by Lyster and Ranta's corrective feedback framework (1997), it describes and analyzes the patterns of corrective feedback used by Iranian teachers, along with learners' uptake and repair of those errors. To this end, 400 minutes of classroom interaction from three elementary EFL classes comprising 29 EFL learners were audiotaped and transcribed. The learners were within the age range of 16-29 and were native speakers of Turkish. The teachers were within the 26-31 age range, had 3-4 years of teaching experience, and held MA degrees in TOEFL. The analysis covered the frequency of the six different feedback types used by the three teachers, as well as the distribution of learner uptake following each feedback type. The findings indicated that, among the six corrective feedback types, recast was the most frequently used by teachers, although it did not lead to a high amount of learner uptake. Metalinguistic feedback, elicitation, and clarification requests led to higher levels of uptake. It was also found that explicit feedback was more effective than implicit feedback in promoting learner uptake.

  19. Framework for Shared Drinking Water Risk Assessment.

    Energy Technology Data Exchange (ETDEWEB)

    Lowry, Thomas Stephen [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Tidwell, Vincent C. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Peplinski, William John [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Mitchell, Roger [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Binning, David [AEM Corp., Herndon, VA (United States); Meszaros, Jenny [AEM Corp., Herndon, VA (United States)

    2017-01-01

Central to protecting our nation's critical infrastructure is the development of methodologies for prioritizing action and supporting resource-allocation decisions associated with risk-reduction initiatives. Toward this need, a web-based risk assessment framework that promotes the anonymous sharing of results among water utilities is demonstrated. Anonymous sharing of results offers a number of potential advantages, such as assistance in recognizing and correcting bias, identification of 'unknown unknowns', self-assessment and benchmarking for the local utility, treatment of assets and/or threats shared across multiple utilities, and prioritization of actions beyond the scale of a single utility. The constructed framework was demonstrated for three water utilities. Demonstration results were then compared to risk assessment results developed using a different risk assessment application by a different set of analysts.

  20. Unified chiral analysis of the vector meson spectrum from lattice QCD

    Energy Technology Data Exchange (ETDEWEB)

    Wes Armour; Chris Allton; Derek Leinweber; Anthony Thomas; Ross Young

    2005-10-13

    The chiral extrapolation of the vector meson mass calculated in partially-quenched lattice simulations is investigated. The leading one-loop corrections to the vector meson mass are derived for partially-quenched QCD. A large sample of lattice results from the CP-PACS Collaboration is analysed, with explicit corrections for finite lattice spacing artifacts. To incorporate the effect of the opening decay channel as the chiral limit is approached, the extrapolation is studied using a necessary phenomenological extension of chiral effective field theory. This chiral analysis also provides a quantitative estimate of the leading finite volume corrections. It is found that the discretisation, finite-volume and partial quenching effects can all be very well described in this framework, producing an extrapolated value of $M_\\rho$ in excellent agreement with experiment. This procedure is also compared with extrapolations based on polynomial forms, where the results are much less enlightening.

  1. Three tesla magnetic resonance imaging of the anterior cruciate ligament of the knee: can we differentiate complete from partial tears?

    Energy Technology Data Exchange (ETDEWEB)

    Dyck, Pieter van; Gielen, Jan L.; Parizel, Paul M. [University Hospital Antwerp and University of Antwerp, Department of Radiology, Antwerp (Edegem) (Belgium); Vanhoenacker, Filip M. [University Hospital Antwerp and University of Antwerp, Department of Radiology, Antwerp (Edegem) (Belgium); AZ St-Maarten Duffel/Mechelen, Department of Radiology, Duffel (Belgium); Dossche, Lieven; Gestel, Jozef van [University Hospital Antwerp and University of Antwerp, Department of Orthopedics, Antwerp (Edegem) (Belgium); Wouters, Kristien [University Hospital Antwerp and University of Antwerp, Department of Scientific Coordination and Biostatistics, Antwerp (Edegem) (Belgium)

    2011-06-15

    To determine the ability of 3.0T magnetic resonance (MR) imaging to identify partial tears of the anterior cruciate ligament (ACL) and to allow distinction of complete from partial ACL tears. One hundred seventy-two patients were prospectively studied by 3.0T MR imaging and arthroscopy in our institution. MR images were interpreted in consensus by two experienced reviewers, and the ACL was diagnosed as being normal, partially torn, or completely torn. Diagnostic accuracy of 3.0T MR for the detection of both complete and partial tears of the ACL was calculated using arthroscopy as the standard of reference. There were 132 patients with an intact ACL, 17 had a partial, and 23 had a complete tear of the ACL seen at arthroscopy. Sensitivity, specificity, and accuracy of 3.0T MR for complete ACL tears were 83, 99, and 97%, respectively, and, for partial ACL tears, 77, 97, and 95%, respectively. Five of 40 ACL lesions (13%) could not correctly be identified as complete or partial ACL tears. MR imaging at 3.0T represents a highly accurate method for identifying tears of the ACL. However, differentiation between complete and partial ACL tears and identification of partial tears of this ligament remains difficult, even at 3.0T. (orig.)
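The sensitivity, specificity, and accuracy figures above follow directly from a binary confusion matrix. The sketch below shows the standard definitions; the individual cell counts are hypothetical (the abstract reports only the arthroscopy totals and the final percentages), chosen so that they are consistent with the reported 83/99/97% for complete tears.

```python
# Sensitivity, specificity, and accuracy from a binary confusion matrix.
# Cell counts below are hypothetical but consistent with the reported values.

def diagnostic_metrics(tp, fn, tn, fp):
    """Return (sensitivity, specificity, accuracy) as fractions."""
    sensitivity = tp / (tp + fn)            # true-positive rate
    specificity = tn / (tn + fp)            # true-negative rate
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# Hypothetical example: 19 of the 23 complete tears detected, with 2 false
# positives among the 149 knees without a complete tear (172 patients total).
sens, spec, acc = diagnostic_metrics(tp=19, fn=4, tn=147, fp=2)
print(f"sensitivity={sens:.0%} specificity={spec:.0%} accuracy={acc:.0%}")
```

With these counts the rounded percentages come out to 83%, 99%, and 97%, matching the abstract's figures for complete ACL tears.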

  2. Three tesla magnetic resonance imaging of the anterior cruciate ligament of the knee: can we differentiate complete from partial tears?

    International Nuclear Information System (INIS)

    Dyck, Pieter van; Gielen, Jan L.; Parizel, Paul M.; Vanhoenacker, Filip M.; Dossche, Lieven; Gestel, Jozef van; Wouters, Kristien

    2011-01-01

    To determine the ability of 3.0T magnetic resonance (MR) imaging to identify partial tears of the anterior cruciate ligament (ACL) and to allow distinction of complete from partial ACL tears. One hundred seventy-two patients were prospectively studied by 3.0T MR imaging and arthroscopy in our institution. MR images were interpreted in consensus by two experienced reviewers, and the ACL was diagnosed as being normal, partially torn, or completely torn. Diagnostic accuracy of 3.0T MR for the detection of both complete and partial tears of the ACL was calculated using arthroscopy as the standard of reference. There were 132 patients with an intact ACL, 17 had a partial, and 23 had a complete tear of the ACL seen at arthroscopy. Sensitivity, specificity, and accuracy of 3.0T MR for complete ACL tears were 83, 99, and 97%, respectively, and, for partial ACL tears, 77, 97, and 95%, respectively. Five of 40 ACL lesions (13%) could not correctly be identified as complete or partial ACL tears. MR imaging at 3.0T represents a highly accurate method for identifying tears of the ACL. However, differentiation between complete and partial ACL tears and identification of partial tears of this ligament remains difficult, even at 3.0T. (orig.)

  3. NLO QCD corrections to electroweak Higgs boson plus three jet production at the LHC

    Energy Technology Data Exchange (ETDEWEB)

    Campanario, Francisco [Valencia-CSIC Univ. (Spain). IFIC; Figy, Terrance M. [Manchester Univ. (United Kingdom). School of Physics and Astronomy; Plaetzer, Simon [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany). Theory Group; Sjoedahl, Malin [Lund Univ. (Sweden). Dept. of Astronomy and Theoretical Physics

    2013-11-15

    The implementation of the full next-to-leading order (NLO) QCD corrections to electroweak Higgs boson plus three jet production at hadron colliders such as the LHC within the Matchbox NLO framework of the Herwig++ event generator is discussed. We present numerical results for integrated cross sections and kinematic distributions.

  4. Reconsidering harmonic and anharmonic coherent states: Partial differential equations approach

    Energy Technology Data Exchange (ETDEWEB)

    Toutounji, Mohamad, E-mail: Mtoutounji@uaeu.ac.ae

    2015-02-15

This article presents a new approach to dealing with time-dependent quantities, such as the autocorrelation function of harmonic and anharmonic systems, using coherent states and partial differential equations. The approach normally used to evaluate dynamical quantities involves formidable operator algebra. That operator algebra becomes insurmountable when employing Morse oscillator coherent states, and the problem is further complicated by the divergent dynamics the Morse oscillator tends to exhibit. The present approach employs linear partial differential equations, some of which may be solved exactly and analytically, thereby avoiding the cumbersome noncommutative algebra required to manipulate coherent states of the Morse oscillator. Additionally, the integrals arising in the present method are stable and numerically efficient. The correctness, applicability, and utility of the approach are tested by reproducing the partition function and optical autocorrelation function of the harmonic oscillator. A closed-form expression for the equilibrium canonical partition function of the Morse oscillator is derived using its coherent states and partial differential equations. A nonequilibrium autocorrelation function expression for weak electron–phonon coupling in condensed systems is also derived for a displaced Morse oscillator in the electronic state. Finally, the utility of the method is demonstrated by further simplifying the Morse oscillator partition function and autocorrelation function expressions reported by other researchers in unevaluated form as second-order derivatives of exponentials. Comparison with exact dynamics shows identical results.
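A canonical partition function for the Morse oscillator can always be cross-checked numerically, since the oscillator has a finite number of bound states with a standard closed-form spectrum. The sketch below sums the Boltzmann factors directly; it uses the textbook Morse energy levels, not the authors' closed-form expression, and works in units where the harmonic quantum is 1.

```python
import math

# Direct-summation check of the Morse oscillator canonical partition function.
# Units: hbar*omega = 1; D is the dissociation energy in those units.

def morse_levels(D):
    """Bound-state energies E_n = (n + 1/2) - (n + 1/2)^2 / (4D)."""
    n_max = math.floor(2 * D - 0.5)      # index of the highest bound level
    return [(n + 0.5) - (n + 0.5) ** 2 / (4 * D) for n in range(n_max + 1)]

def morse_partition_function(D, beta):
    """Z(beta) = sum over bound states of exp(-beta * E_n)."""
    return sum(math.exp(-beta * E) for E in morse_levels(D))

# Anharmonicity lowers every level, so Z exceeds the harmonic-oscillator sum
# truncated to the same number of states.
Z = morse_partition_function(D=20.0, beta=1.0)
```

Any closed-form result of the kind derived in the paper should agree with this finite sum for the bound-state contribution.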

  5. Regional estimation of rainfall intensity-duration-frequency curves using generalized least squares regression of partial duration series statistics

    DEFF Research Database (Denmark)

    Madsen, H.; Mikkelsen, Peter Steen; Rosbjerg, Dan

    2002-01-01

    A general framework for regional analysis and modeling of extreme rainfall characteristics is presented. The model is based on the partial duration series (PDS) method that includes in the analysis all events above a threshold level. In the PDS model the average annual number of exceedances...
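The PDS construction described above is simple to state in code: retain every event exceeding a threshold and summarize the series by its average annual number of exceedances. The sketch below uses made-up event data purely for illustration; it is not the regional generalized least squares model of the paper.

```python
# Minimal partial-duration-series (PDS) extraction: keep all events above a
# threshold and estimate the average annual number of exceedances.
# Event data and threshold are illustrative only.

def partial_duration_series(events, threshold):
    """events: list of (year, magnitude); return events above the threshold."""
    return [(year, x) for year, x in events if x > threshold]

def mean_annual_exceedances(pds, n_years):
    """Average number of threshold exceedances per year of record."""
    return len(pds) / n_years

events = [(2000, 12.1), (2000, 25.3), (2001, 8.7), (2001, 30.2),
          (2002, 22.9), (2003, 15.5), (2003, 41.0), (2004, 19.8)]
pds = partial_duration_series(events, threshold=18.0)
rate = mean_annual_exceedances(pds, n_years=5)
```

In a regional analysis, statistics such as this exceedance rate would then be related to site characteristics, as the abstract describes.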

  6. A regional and nonstationary model for partial duration series of extreme rainfall

    DEFF Research Database (Denmark)

    Gregersen, Ida Bülow; Madsen, Henrik; Rosbjerg, Dan

    2017-01-01

    as the explanatory variables in the regional and temporal domain, respectively. Further analysis of partial duration series with nonstationary and regional thresholds shows that the mean exceedances also exhibit a significant variation in space and time for some rainfall durations, while the shape parameter is found...... of extreme rainfall. The framework is built on a partial duration series approach with a nonstationary, regional threshold value. The model is based on generalized linear regression solved by generalized estimation equations. It allows a spatial correlation between the stations in the network and accounts...... furthermore for variable observation periods at each station and in each year. Marginal regional and temporal regression models solved by generalized least squares are used to validate and discuss the results of the full spatiotemporal model. The model is applied on data from a large Danish rain gauge network...

  7. The lowest order total electromagnetic correction to the deep inelastic scattering of polarized leptons on polarized nucleons

    International Nuclear Information System (INIS)

    Shumeiko, N.M.; Timoshin, S.I.

    1991-01-01

Compact formulae for the total 1-loop electromagnetic corrections, including the contribution of electromagnetic hadron effects, to the deep inelastic scattering of polarized leptons on polarized nucleons have been obtained in the quark-parton model. The cases of longitudinal and transverse nucleon polarization are considered in detail. A thorough numerical calculation of corrections to cross sections and polarization asymmetries at muon (electron) energies over the range of 200-2000 GeV (10-16 GeV) has been made. It has been established that the contribution of corrections to the hadron current considerably affects the behaviour of the longitudinal asymmetry. A satisfactory agreement is found between the model calculations of corrections to the lepton current and the phenomenological calculation results, which makes it possible to find the total 1-loop correction within the framework of a common approach. (Author)

  8. Use of theoretical and conceptual frameworks in qualitative research.

    Science.gov (United States)

    Green, Helen Elise

    2014-07-01

To debate the definition and use of theoretical and conceptual frameworks in qualitative research. There is a paucity of literature to help the novice researcher to understand what theoretical and conceptual frameworks are and how they should be used. This paper acknowledges the interchangeable usage of these terms and researchers' confusion about the differences between the two. It discusses how researchers have used theoretical and conceptual frameworks and the notion of conceptual models. Detail is given about how one researcher incorporated a conceptual framework throughout a research project, the purpose for doing so and how this led to a resultant conceptual model. Concepts from Abbott (1988) and Witz (1992) were used to provide a framework for research involving two case study sites. The framework was used to determine research questions and give direction to interviews and discussions to focus the research. Some research methods do not overtly use a theoretical framework or conceptual framework in their design, but this is implicit and underpins the method design, for example in grounded theory. Other qualitative methods use one or the other to frame the design of a research project or to explain the outcomes. An example is given of how a conceptual framework was used throughout a research project. Theoretical and conceptual frameworks are terms that are regularly used in research but rarely explained. Textbooks should discuss what they are and how they can be used, so novice researchers understand how they can help with research design. Theoretical and conceptual frameworks need to be more clearly understood by researchers and correct terminology used to ensure clarity for novice researchers.

  9. Quality Management Framework for Total Diet Study centres in Europe.

    Science.gov (United States)

    Pité, Marina; Pinchen, Hannah; Castanheira, Isabel; Oliveira, Luisa; Roe, Mark; Ruprich, Jiri; Rehurkova, Irena; Sirot, Veronique; Papadopoulos, Alexandra; Gunnlaugsdóttir, Helga; Reykdal, Ólafur; Lindtner, Oliver; Ritvanen, Tiina; Finglas, Paul

    2018-02-01

    A Quality Management Framework to improve quality and harmonization of Total Diet Study practices in Europe was developed within the TDS-Exposure Project. Seventeen processes were identified and hazards, Critical Control Points and associated preventive and corrective measures described. The Total Diet Study process was summarized in a flowchart divided into planning and practical (sample collection, preparation and analysis; risk assessment analysis and publication) phases. Standard Operating Procedures were developed and implemented in pilot studies in five organizations. The flowchart was used to develop a quality framework for Total Diet Studies that could be included in formal quality management systems. Pilot studies operated by four project partners were visited by project assessors who reviewed implementation of the proposed framework and identified areas that could be improved. The quality framework developed can be the starting point for any Total Diet Study centre and can be used within existing formal quality management approaches. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. Renormalization group in the theory of fully developed turbulence. Problem of the infrared relevant corrections to the Navier-Stokes equation

    International Nuclear Information System (INIS)

    Antonov, N.V.; Borisenok, S.V.; Girina, V.I.

    1996-01-01

    Within the framework of the renormalization group approach to the theory of fully developed turbulence we consider the problem of possible IR relevant corrections to the Navier-Stokes equation. We formulate an exact criterion of the actual IR relevance of the corrections. In accordance with this criterion we verify the IR relevance for certain classes of composite operators. 17 refs., 2 tabs

  11. Pressure of a partially ionized hydrogen gas : numerical results from exact low temperature expansions

    OpenAIRE

    Alastuey , Angel; Ballenegger , Vincent

    2010-01-01

We consider a partially ionized hydrogen gas at low densities, where it reduces almost to an ideal mixture made of hydrogen atoms in their ground state, ionized protons, and ionized electrons. By performing systematic low-temperature expansions within the physical picture, in which the system is described as a quantum electron-proton plasma interacting via the Coulomb potential, exact formulae for the first five leading corrections to the ideal Saha equation ...

  12. Quantum Corrected Non-Thermal Radiation Spectrum from the Tunnelling Mechanism

    Directory of Open Access Journals (Sweden)

    Subenoy Chakraborty

    2015-06-01

The tunnelling mechanism is today considered a popular and widely used method of describing Hawking radiation. However, in relation to black hole (BH) emission, this mechanism is mostly used to obtain the Hawking temperature by comparing the probability of emission of an outgoing particle with the Boltzmann factor. On the other hand, Banerjee and Majhi reformulated the tunnelling framework, deriving a black-body spectrum through the density matrix for the outgoing modes for both the Bose-Einstein and the Fermi-Dirac distributions. In contrast, Parikh and Wilczek introduced a correction term by performing an exact calculation of the action for a tunnelling spherically symmetric particle; as a result, the probability of emission of an outgoing particle corresponds to a non-strictly thermal radiation spectrum. Recently, one of us (C. Corda) introduced a BH effective state and was able to obtain a non-strictly black-body spectrum from the tunnelling mechanism corresponding to the probability of emission of an outgoing particle found by Parikh and Wilczek. The present work introduces the quantum corrected effective temperature, and the corresponding quantum corrected effective metric is written using Hawking's periodicity arguments. Thus, we obtain further corrections to the non-strictly thermal BH radiation spectrum, as the final distributions take into account both the BH dynamical geometry during the emission of the particle and the quantum corrections to the semiclassical Hawking temperature.

  13. Network topology of olivine-basalt partial melts

    Science.gov (United States)

    Skemer, Philip; Chaney, Molly M.; Emmerich, Adrienne L.; Miller, Kevin J.; Zhu, Wen-lu

    2017-07-01

The microstructural relationship between melt and solid grains in partially molten rocks influences many physical properties, including permeability, rheology, electrical conductivity, and seismic wave speeds. In this study, the connectivity of melt networks in the olivine-basalt system is explored using a systematic survey of 3-D X-ray microtomographic data. Experimentally synthesized samples with 2 and 5 vol.% melt are analysed as a series of melt tubules intersecting at nodes. Each node is characterized by a coordination number (CN), which is the number of melt tubules that intersect at that location. Statistically representative volumes are described by coordination number distributions (CNDs). Polyhedral grains can be packed in many configurations yielding different CNDs; however, widely accepted theory predicts that systems with small dihedral angles, such as olivine-basalt, should exhibit a predominant CN of four. In this study, melt objects are identified with CN = 2-8, but more than 50 per cent have CN = 4, providing experimental verification of this theoretical prediction. A conceptual model that considers the role of heterogeneity in local grain size and melt fraction is proposed to explain the formation of nodes with CN ≠ 4. Correctly identifying the melt network topology is essential to understanding the relationship between permeability and porosity, and hence the transport properties of partially molten mantle rocks.
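Once a melt network has been reduced to nodes connected by tubules, the coordination number distribution is just a histogram of node degrees. The sketch below illustrates that computation on a made-up toy network, not on the tomography data from the study.

```python
from collections import Counter

# Coordination-number distribution (CND) of a melt network: each node is
# characterized by how many melt tubules (edges) meet there.
# The edge list is an illustrative toy network.

def coordination_number_distribution(edges):
    """edges: list of (node_a, node_b) tubule connections.
    Returns {CN: number of nodes with that coordination number}."""
    degree = Counter()
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    return Counter(degree.values())

edges = [(0, 1), (0, 2), (0, 3), (0, 4),   # node 0 has CN = 4
         (1, 2), (1, 3), (1, 4),           # node 1 has CN = 4
         (2, 3)]
cnd = coordination_number_distribution(edges)
```

For this toy network the CND is {4: 2, 3: 2, 2: 1}; in the study, a predominance of CN = 4 over a statistically representative volume is the signature predicted for small dihedral angles.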

  14. Neutral current Drell-Yan with combined QCD and electroweak corrections in the POWHEG BOX

    CERN Document Server

Barzè, Luca; Nason, Paolo; Nicrosini, Oreste; Piccinini, Fulvio; Vicini, Alessandro

    2013-01-01

    Following recent work on the combination of electroweak and strong radiative corrections to single W-boson hadroproduction in the POWHEG BOX framework, we generalize the above treatment to cover the neutral current Drell-Yan process. According to the POWHEG method, we combine both the next-to-leading order (NLO) electroweak and QED multiple photon corrections with the native NLO and Parton Shower QCD contributions. We show comparisons with the predictions of the electroweak generator HORACE, to validate the reliability and accuracy of the approach. We also present phenomenological results obtained with the new tool for physics studies at the LHC.

  15. A simple correction to remove the bias of the gini coefficient due to grouping

    NARCIS (Netherlands)

    T.G.M. van Ourti (Tom); Ph. Clarke (Philip)

    2011-01-01

We propose a first-order bias correction term for the Gini index to reduce the bias due to grouping. It depends only on the number of individuals in each group and is derived from a measurement error framework. We also provide a formula for the remaining second-order bias. Both
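The direction of the grouping bias is easy to demonstrate: replacing each group's members by the group mean removes within-group inequality, so a Gini computed from grouped data understates the microdata Gini. The sketch below shows this with illustrative incomes; the paper's first-order correction formula itself is not reproduced here.

```python
# Gini index from microdata versus grouped data. Grouping by group means
# removes within-group inequality, biasing the Gini downward -- the bias the
# proposed first-order term corrects. Incomes below are illustrative.

def gini(values):
    """Standard Gini formula: G = 2*sum(i*x_(i))/(n*sum(x)) - (n+1)/n."""
    xs = sorted(values)
    n = len(xs)
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * cum) / (n * sum(xs)) - (n + 1) / n

incomes = [3, 5, 8, 12, 15, 21, 30, 44, 70, 112]

# Group individuals into pairs and assign each member the group mean.
grouped = []
for i in range(0, len(incomes), 2):
    pair = incomes[i:i + 2]
    grouped += [sum(pair) / 2] * 2

# The grouped Gini understates the full-sample Gini here.
```

The size of the gap between `gini(incomes)` and `gini(grouped)` depends on the within-group dispersion, which is why the correction term depends on the number of individuals per group.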

  16. Operator quantum error-correcting subsystems for self-correcting quantum memories

    International Nuclear Information System (INIS)

    Bacon, Dave

    2006-01-01

The most general method for encoding quantum information is not to encode the information into a subspace of a Hilbert space, but to encode information into a subsystem of a Hilbert space. Recently this notion has led to a more general notion of quantum error correction known as operator quantum error correction. In standard quantum error-correcting codes, one requires the ability to apply a procedure which exactly reverses on the error-correcting subspace any correctable error. In contrast, for operator error-correcting subsystems, the correction procedure need not undo the error which has occurred, but instead one must perform corrections only modulo the subsystem structure. This does not lead to codes which differ from subspace codes, but does lead to recovery routines which explicitly make use of the subsystem structure. Here we present two examples of such operator error-correcting subsystems. These examples are motivated by simple spatially local Hamiltonians on square and cubic lattices. In three dimensions we provide evidence, in the form of a simple mean-field theory, that our Hamiltonian gives rise to a system which is self-correcting. Such a system would be a natural high-temperature quantum memory, robust to noise without external intervening quantum error-correction procedures

  17. Correction of incomplete penoscrotal transposition by a modified Glenn-Anderson technique

    Directory of Open Access Journals (Sweden)

    Saleh Amin

    2010-01-01

Purpose: Penoscrotal transposition may be partial or complete, resulting in variable degrees of positional exchange between the penis and the scrotum. Repairs of penoscrotal transposition rely on the creation of rotational flaps to mobilise the scrotum downwards or to transpose the penis to a new opening created in the skin of the mons pubis. All known techniques result in a complete circular incision around the root of the penis, causing severe and massive oedema of the penile skin, which delays correction of the associated hypospadias and increases the incidence of complications, as the skin vascularity and lymphatics are impaired by the incision. A new design that prevents this post-operative oedema, allows early correction of the associated hypospadias, and lowers the incidence of possible complications was used, and its results were compared with those of other methods of correction. Materials and Methods: Ten patients with incomplete penoscrotal transposition were corrected by designing rotational flaps that push the scrotum back while the penile skin remains attached by a small strip to the skin of the mons pubis. Results: All patients showed an excellent cosmetic outcome. There was minimal post-operative oedema and no vascular compromise to the penile or scrotal skin. Correction of the associated hypospadias can be performed in the same sitting or in a later sitting, with no or minimal complications. Conclusion: This modification, which keeps the penile skin connected to the skin of the lower abdomen by a small strip of skin during correction of penoscrotal transposition, prevents post-operative oedema and improves healing with an excellent cosmetic appearance, allows one-stage repair with minimal complications, and reduces post-operative complications such as urinary fistula and flap necrosis.

  18. On the decoding process in ternary error-correcting output codes.

    Science.gov (United States)

    Escalera, Sergio; Pujol, Oriol; Radeva, Petia

    2010-01-01

A common way to model multiclass classification problems is to design a set of binary classifiers and to combine them. Error-Correcting Output Codes (ECOC) represent a successful framework to deal with these types of problems. Recent works in the ECOC framework showed significant performance improvements by means of new problem-dependent designs based on the ternary ECOC framework. The ternary framework contains a larger set of binary problems because of the use of a "do not care" symbol that allows us to ignore some classes by a given classifier. However, there are no proper studies that analyze the effect of the new symbol at the decoding step. In this paper, we present a taxonomy that embeds all binary and ternary ECOC decoding strategies into four groups. We show that the zero symbol introduces two kinds of biases that require redefinition of the decoding design. A new type of decoding measure is proposed, and two novel decoding strategies are defined. We evaluate the state-of-the-art coding and decoding strategies over a set of UCI Machine Learning Repository data sets and on a real traffic-sign categorization problem. The experimental results show that, following the new decoding strategies, the performance of the ECOC design is significantly improved.
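The role of the zero symbol can be illustrated with a tiny decoding sketch. Each class has a codeword over {-1, 0, +1}, where 0 marks positions a classifier ignores; masking the zeros out of the distance computation is one simple way to avoid counting "do not care" positions as disagreements. The coding matrix below is hypothetical, and this masked Hamming distance is only illustrative, not one of the decoding measures proposed in the paper.

```python
import numpy as np

# Ternary ECOC decoding sketch with a hypothetical 3-class coding matrix.
# Rows are class codewords over {-1, 0, +1}; columns are binary classifiers.
coding = np.array([[+1, +1,  0],    # class 0
                   [-1,  0, +1],    # class 1
                   [ 0, -1, -1]])   # class 2

def decode(predictions, M):
    """predictions: +/-1 outputs of the binary classifiers.
    Returns the class whose codeword disagrees least, ignoring zeros."""
    dists = []
    for codeword in M:
        mask = codeword != 0                       # skip "do not care" bits
        dists.append(int(np.sum(predictions[mask] != codeword[mask])))
    return int(np.argmin(dists))

label = decode(np.array([+1, +1, -1]), coding)     # agrees with class 0
```

A naive distance that treats zeros as ordinary symbols would systematically penalize codewords with many zeros, which is one of the biases the paper's taxonomy makes explicit.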

  19. Optimized difference schemes for multidimensional hyperbolic partial differential equations

    Directory of Open Access Journals (Sweden)

    Adrian Sescu

    2009-04-01

In numerical solutions to hyperbolic partial differential equations in multiple dimensions, in addition to dispersion and dissipation errors, there is a grid-related error (referred to as isotropy error or numerical anisotropy) that affects the directional dependence of the wave propagation. Difference schemes are mostly analyzed and optimized in one dimension, where the anisotropy correction may not be effective enough. In this work, optimized multidimensional difference schemes with arbitrary order of accuracy are designed to have improved isotropy compared to conventional schemes. The derivation is based on Taylor series expansion and Fourier analysis. The schemes are restricted to equally spaced Cartesian grids, so the generalized curvilinear transformation method and Cartesian grid methods are good candidates.
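Numerical anisotropy is visible already in the Fourier symbols of standard stencils. The sketch below compares the 5-point and compact 9-point Laplacian stencils: the 5-point truncation error depends noticeably on the propagation angle, while the 9-point stencil is isotropic at leading order. This is only a compact demonstration of the grid-related error being discussed; the paper itself treats hyperbolic equations, not the Laplacian.

```python
import math

# Fourier symbols of two Laplacian stencils on a uniform grid (spacing h).

def symbol_5pt(kx, ky, h=1.0):
    """Standard 5-point stencil."""
    return -(4 - 2 * math.cos(kx * h) - 2 * math.cos(ky * h)) / h**2

def symbol_9pt(kx, ky, h=1.0):
    """Compact 9-point (leading-order isotropic) stencil."""
    cx, cy = math.cos(kx * h), math.cos(ky * h)
    return -(20 - 8 * cx - 8 * cy - 4 * cx * cy) / (6 * h**2)

def angle_errors(symbol, k=0.5):
    """Truncation error along the grid axis versus along the diagonal."""
    exact = -k**2                        # symbol of the continuous Laplacian
    e_axis = symbol(k, 0.0) - exact
    e_diag = symbol(k / math.sqrt(2), k / math.sqrt(2)) - exact
    return e_axis, e_diag

# 5-point: the two errors differ by roughly a factor of two;
# 9-point: the two errors are nearly identical (isotropic leading error).
```

Optimized multidimensional schemes of the kind derived in the paper push this direction-dependence down further, for wave operators rather than the Laplacian shown here.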

  20. Black holes in higher dimensional gravity theory with corrections quadratic in curvature

    International Nuclear Information System (INIS)

    Frolov, Valeri P.; Shapiro, Ilya L.

    2009-01-01

Static spherically symmetric black holes are discussed in the framework of higher dimensional gravity with terms quadratic in curvature. Such terms naturally arise as a result of quantum corrections induced by quantum fields propagating in the gravitational background. We focus our attention on the correction of the form $C^2 = C_{\alpha\beta\gamma\delta}C^{\alpha\beta\gamma\delta}$. The Gauss-Bonnet equation in four-dimensional spacetime enables one to reduce this term in the action to terms quadratic in the Ricci tensor and scalar curvature. As a result, the Schwarzschild solution, which is Ricci flat, is also a solution of the theory with the Weyl scalar $C^2$ correction. An important new feature of spaces with dimension D>4 is that in the presence of the Weyl curvature-squared term the vacuum solution differs from the corresponding 'classical' vacuum Tangherlini metric. This difference is related to the presence of secondary or induced hair. We explore how the Tangherlini solution is modified by 'quantum corrections', assuming that the gravitational radius $r_0$ is much larger than the scale of the quantum corrections. We also demonstrate that finding a general solution beyond the perturbation method can be reduced to solving a single third-order ordinary differential equation (master equation).
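The four-dimensional reduction invoked above follows from standard curvature identities; a sketch (textbook relations, not the authors' derivation):

```latex
% Weyl-squared and Gauss-Bonnet invariants in D = 4:
\begin{align}
C^{2} &\equiv C_{\alpha\beta\gamma\delta}C^{\alpha\beta\gamma\delta}
       = R_{\alpha\beta\gamma\delta}R^{\alpha\beta\gamma\delta}
         - 2\,R_{\alpha\beta}R^{\alpha\beta} + \tfrac{1}{3}R^{2}, \\
G &= R_{\alpha\beta\gamma\delta}R^{\alpha\beta\gamma\delta}
         - 4\,R_{\alpha\beta}R^{\alpha\beta} + R^{2}.
\end{align}
% Since G integrates to a topological invariant in four dimensions,
\begin{equation}
C^{2} \simeq 2\,R_{\alpha\beta}R^{\alpha\beta} - \tfrac{2}{3}R^{2}
\quad \text{(modulo a total derivative)},
\end{equation}
```

so a Ricci-flat metric such as Schwarzschild extremizes the $C^2$-corrected 4D action, whereas for D>4 the Gauss-Bonnet combination is no longer topological and the Tangherlini metric is modified.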

  1. Color correction with blind image restoration based on multiple images using a low-rank model

    Science.gov (United States)

    Li, Dong; Xie, Xudong; Lam, Kin-Man

    2014-03-01

We present a method that can handle the color correction of multiple photographs with blind image restoration simultaneously and automatically. We prove that the local colors of a set of images of the same scene exhibit the low-rank property locally, both before and after a color-correction operation. This property allows us to correct all kinds of errors in an image under a low-rank matrix model without particular priors or assumptions. The possible errors may be caused by changes of viewpoint, large illumination variations, gross pixel corruptions, partial occlusions, etc. Furthermore, a new iterative soft-segmentation method is proposed for local color transfer using color influence maps. Because the correct color information and the spatial information of images can be recovered using the low-rank model, more precise color correction and many other image-restoration tasks, including image denoising, image deblurring, and gray-scale image colorizing, can be performed simultaneously. Experiments have verified that our method achieves consistent and promising results on uncontrolled real photographs acquired from the Internet and that it outperforms current state-of-the-art methods.
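The low-rank property at the heart of the method can be illustrated with synthetic data: if each photograph applies its own (roughly linear) color map to the same underlying scene colors, then stacking the corresponding color samples from all photographs yields a matrix whose rank is bounded by the color dimension, not by the number of photographs. The construction below is a made-up illustration of that observation, not the paper's recovery algorithm.

```python
import numpy as np

# Corresponding local colors across photos of one scene stack into a
# low-rank matrix: each "photo" applies its own 3x3 linear color map to
# the same underlying RGB scene colors. Synthetic data for illustration.

rng = np.random.default_rng(0)
scene = rng.random((3, 200))                # RGB values of 200 scene points
photos = [rng.random((3, 3)) @ scene for _ in range(5)]  # per-photo color maps
stacked = np.vstack(photos)                 # 15 x 200 matrix

rank = np.linalg.matrix_rank(stacked)       # bounded by 3, not 15
```

Real photographs add gross errors (occlusions, corruptions) on top of this low-rank structure, which is why the paper recovers the colors with a robust low-rank matrix model rather than a plain SVD.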

  2. Skew-t partially linear mixed-effects models for AIDS clinical studies.

    Science.gov (United States)

    Lu, Tao

    2016-01-01

    We propose partially linear mixed-effects models with asymmetry and missingness to investigate the relationship between two biomarkers in clinical studies. The proposed models take into account irregular time effects commonly observed in clinical studies under a semiparametric model framework. In addition, the commonly assumed symmetric distributions for model errors are replaced by asymmetric distributions to account for skewness. Further, an informative missing-data mechanism is accounted for. A Bayesian approach is developed to perform parameter estimation simultaneously. The proposed model and method are applied to an AIDS dataset, and comparisons with alternative models are performed.

  3. Experts' understanding of partial derivatives using the Partial Derivative Machine

    OpenAIRE

    Roundy, David; Dorko, Allison; Dray, Tevian; Manogue, Corinne A.; Weber, Eric

    2014-01-01

    Partial derivatives are used in a variety of different ways within physics. Most notably, thermodynamics uses partial derivatives in ways that students often find confusing. As part of a collaboration with mathematics faculty, we are at the beginning of a study of the teaching of partial derivatives, a goal of better aligning the teaching of multivariable calculus with the needs of students in STEM disciplines. As a part of this project, we have performed a pilot study of expert understanding...

  4. Hyperon resonances in SU(3) soliton models

    International Nuclear Information System (INIS)

    Scoccola, N.N.

    1990-01-01

    Hyperon resonances excited in kaon-nucleon scattering are investigated in the framework of an SU(3) soliton model in which kaon degrees of freedom are treated as small fluctuations around an SU(2) soliton. For partial waves l≥2 the model predicts correctly the quantum numbers and average excitation energies of most of the experimentally observed Λ and Σ resonances. Some disagreements are found for lower partial waves. (orig.)

  5. Gamma-Ray Emission Tomography: Modeling and Evaluation of Partial-Defect Testing Capabilities

    International Nuclear Information System (INIS)

    Jacobsson Svard, S.; Jansson, P.; Davour, A.; Grape, S.; White, T.A.; Smith, L.E.; Deshmukh, N.; Wittman, R.S.; Mozin, V.; Trellue, H.

    2015-01-01

    Gamma emission tomography (GET) for spent nuclear fuel verification is the subject of IAEA MSP project JNT1955. In line with the IAEA Safeguards R&D plan 2012-2023, the aim of this effort is to ''develop more sensitive and less intrusive alternatives to existing NDA instruments to perform partial defect test on spent fuel assembly prior to transfer to difficult to access storage''. The current viability study constitutes the first phase of three, with evaluation and decision points between each phase. Two verification objectives have been identified: (1) counting of fuel pins in tomographic images without any a priori knowledge of the fuel assembly under study, and (2) quantitative measurements of pin-by-pin properties, e.g., burnup, for the detection of anomalies and/or verification of operator-declared data. Previous measurements performed in Sweden and Finland have proven GET highly promising for detecting removed or substituted fuel rods in BWR and VVER-440 fuel assemblies, even down to the individual fuel rod level. The current project adds to previous experiences by pursuing a quantitative assessment of the capabilities of GET for partial defect detection, across a broad range of potential IAEA applications, fuel types and fuel parameters. A modelling and performance-evaluation framework has been developed to provide quantitative GET performance predictions, incorporating burnup and cooling-time calculations, Monte Carlo radiation-transport and detector-response modelling, GET instrument definitions (existing and notional) and tomographic reconstruction algorithms, which use recorded gamma-ray intensities to produce images of the fuel's internal source distribution or conclusive rod-by-rod data. The framework also comprises image-processing algorithms and performance metrics that recognize the inherent tradeoff between the probability of detecting missing pins and the false-alarm rate. Here, the modelling and analysis framework is

  6. Iterative CT shading correction with no prior information

    Science.gov (United States)

    Wu, Pengwei; Sun, Xiaonan; Hu, Hongjie; Mao, Tingyu; Zhao, Wei; Sheng, Ke; Cheung, Alice A.; Niu, Tianye

    2015-11-01

    Shading artifacts in CT images are caused by scatter contamination, the beam-hardening effect, and other non-ideal imaging conditions. The purpose of this study is to propose a novel and general correction framework to eliminate low-frequency shading artifacts in CT images (e.g. cone-beam CT, low-kVp CT) without relying on prior information. The method is based on the general knowledge that the CT number distribution within one tissue component is relatively uniform. The CT image is first segmented to construct a template image in which each structure is filled with the CT number of a specific tissue type. Then, by subtracting the ideal template from the CT image, the residual image from various error sources is generated. Since forward projection is an integration process, non-continuous shading artifacts in the image become continuous signals in a line integral. Thus, the residual image is forward projected and its line integral is low-pass filtered in order to estimate the error that causes shading artifacts. A compensation map is reconstructed from the filtered line-integral error using a standard FDK algorithm and added back to the original image for shading correction. As the segmented image does not accurately depict a shaded CT image, the proposed scheme is iterated until the variation of the residual image is minimized. The proposed method is evaluated using cone-beam CT images of a Catphan©600 phantom and a pelvis patient, and low-kVp CT angiography images for carotid artery assessment. Compared with the CT image without correction, the proposed method reduces the overall CT number error from over 200 HU to less than 30 HU and increases the spatial uniformity by a factor of 1.5. Low-contrast objects are faithfully retained after the proposed correction. An effective iterative algorithm for shading correction in CT imaging is proposed that is only assisted by general anatomical information without relying on prior knowledge. The proposed method is thus practical
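The segment/template/filter/compensate loop described above can be sketched in a simplified image-domain form. This is a hedged toy: the paper estimates the error on forward-projected line integrals and reconstructs the compensation map with FDK, whereas this sketch low-pass filters the residual directly; the tissue values and phantom are invented.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def correct_shading(img, tissue_values, n_iter=5, sigma=20.0):
    """Toy template-based shading correction: segment to the nearest
    ideal tissue value, low-pass the residual as the shading estimate,
    compensate, and iterate until the shading field shrinks."""
    out = img.copy()
    tv = np.asarray(tissue_values, dtype=float)
    for _ in range(n_iter):
        # crude segmentation: each pixel snaps to its nearest tissue value
        template = tv[np.argmin(np.abs(out[..., None] - tv), axis=-1)]
        low_freq = gaussian_filter(out - template, sigma)  # keep low frequencies
        out = out - low_freq
    return out

# Phantom: two tissues (0 and 1000 HU) plus a smooth shading field.
yy, xx = np.mgrid[0:64, 0:64]
truth = np.where((xx - 32) ** 2 + (yy - 32) ** 2 < 15 ** 2, 1000.0, 0.0)
shading = 150.0 * np.sin(xx / 64 * np.pi)
corrected = correct_shading(truth + shading, tissue_values=[0.0, 1000.0])
print(np.abs(corrected - truth).mean() < np.abs(shading).mean())
```

The iteration works here because the shading amplitude is small compared to the tissue separation, so the segmentation stays correct; the paper's projection-domain filtering handles the case where shading is discontinuous in the image but smooth in the line integrals.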

  7. In Vivo Gene Therapy of Hemophilia B: Sustained Partial Correction in Factor IX-Deficient Dogs

    Science.gov (United States)

    Kay, Mark A.; Rothenberg, Steven; Landen, Charles N.; Bellinger, Dwight A.; Leland, Frances; Toman, Carol; Finegold, Milton; Thompson, Arthur R.; Read, M. S.; Brinkhous, Kenneth M.; Woo, Savio L. C.

    1993-10-01

    The liver represents a model organ for gene therapy. A method has been developed for hepatic gene transfer in vivo by the direct infusion of recombinant retroviral vectors into the portal vasculature, which results in the persistent expression of exogenous genes. To determine if these technologies are applicable for the treatment of hemophilia B patients, preclinical efficacy studies were done in a hemophilia B dog model. When the canine factor IX complementary DNA was transduced directly into the hepatocytes of affected dogs in vivo, the animals constitutively expressed low levels of canine factor IX for more than 5 months. Persistent expression of the clotting factor resulted in reductions of the whole blood clotting and partial thromboplastin times of the treated animals. Thus, long-term treatment of hemophilia B patients may be feasible by direct hepatic gene therapy in vivo.

  8. Supramolecular Isomers of Metal-Organic Frameworks Derived from a Partially Flexible Ligand with Distinct Binding Motifs

    KAUST Repository

    Abdul Halim, Racha Ghassan

    2016-01-04

    Three novel metal-organic frameworks (MOFs) were isolated upon reacting a heterofunctional ligand, 4-(pyrimidin-5-yl)benzoic acid (4,5-pmbc), with mixed-valence Cu(I,II) under solvothermal conditions. X-ray crystal structural analysis reveals that the first compound is a layered structure composed of one type of inorganic building block, the dinuclear paddlewheel [Cu2(O2C–)4], linked through 4,5-pmbc ligands. The two other supramolecular isomers are composed of the same Cu(II) dinuclear paddlewheel and a dinuclear Cu2I2 cluster, which are linked via the 4,5-pmbc linkers to yield two different 3-periodic frameworks with underlying topologies related to lvt and nbo. The observed structural diversity in these structures is due to the distinct coordination modes of the two coordinating moieties (the carboxylate group on the phenyl ring and the N-donor atoms of the pyrimidine moiety).

  9. Supramolecular Isomers of Metal-Organic Frameworks Derived from a Partially Flexible Ligand with Distinct Binding Motifs

    KAUST Repository

    AbdulHalim, Rasha; Shkurenko, Aleksander; Al Kordi, Mohamed; Eddaoudi, Mohamed

    2016-01-01

    Three novel metal-organic frameworks (MOFs) were isolated upon reacting a heterofunctional ligand, 4-(pyrimidin-5-yl)benzoic acid (4,5-pmbc), with mixed-valence Cu(I,II) under solvothermal conditions. X-ray crystal structural analysis reveals that the first compound is a layered structure composed of one type of inorganic building block, the dinuclear paddlewheel [Cu2(O2C–)4], linked through 4,5-pmbc ligands. The two other supramolecular isomers are composed of the same Cu(II) dinuclear paddlewheel and a dinuclear Cu2I2 cluster, which are linked via the 4,5-pmbc linkers to yield two different 3-periodic frameworks with underlying topologies related to lvt and nbo. The observed structural diversity in these structures is due to the distinct coordination modes of the two coordinating moieties (the carboxylate group on the phenyl ring and the N-donor atoms of the pyrimidine moiety).

  10. Proving the correctness of unfold/fold program transformations using bisimulation

    DEFF Research Database (Denmark)

    Hamilton, Geoff W.; Jones, Neil

    2011-01-01

    This paper shows that a bisimulation approach can be used to prove the correctness of unfold/fold program transformation algorithms. As an illustration, we show how our approach can be used to prove the correctness of positive supercompilation (due to Sørensen et al). Traditional program equivalence is modelled by a labelled transition system whose bisimilarity relation is a congruence that coincides with contextual equivalence. Labelled transition systems are well-suited to represent global program behaviour. On the other hand, unfold/fold program transformations use generalization and folding, and neither is easy to describe contextually, due to the use of non-local information. We show that weak bisimulation on labelled transition systems gives an elegant framework to prove contextual equivalence of original and transformed programs. One reason is that folds can be seen in the context of corresponding unfolds.

  11. Meson exchange corrections in deep inelastic scattering on deuteron

    International Nuclear Information System (INIS)

    Kaptari, L.P.; Titov, A.I.

    1989-01-01

    Starting from the general equations of motion of nucleons interacting with mesons, the one-particle Schroedinger-like equation for the nucleon wave function and the deep inelastic scattering amplitude with meson-exchange currents are obtained. Effective pion, sigma, and omega meson exchanges are considered. It is found that the mesonic corrections only partially (by about 60%) restore the energy sum rule, which is broken because of nucleon off-mass-shell effects in nuclei. This result contradicts the prediction based on a calculation of the energy sum rule limited to second order in the nucleon-meson vertex and the static approximation. 17 refs.; 3 figs

  12. An improved approach to reduce partial volume errors in brain SPET

    International Nuclear Information System (INIS)

    Hatton, R.L.; Hatton, B.F.; Michael, G.; Barnden, L.; QUT, Brisbane, QLD; The Queen Elizabeth Hospital, Adelaide, SA

    1999-01-01

    Full text: Limitations in SPET resolution give rise to significant partial volume error (PVE) in small brain structures. We have investigated a previously published method (Muller-Gartner et al., J Cereb Blood Flow Metab 1992;16: 650-658) to correct PVE in grey matter using MRI. An MRI is registered and segmented to obtain a grey matter tissue volume, which is then smoothed to obtain resolution matched to the corresponding SPET. By dividing the original SPET by this correction map, structures can be corrected for PVE on a pixel-by-pixel basis. Since this approach is limited by space-invariant filtering, a modification was made by estimating projections for the segmented MRI and reconstructing these using parameters identical to those of the SPET. The methods were tested on simulated brain scans reconstructed with the ordered subsets EM algorithm (8, 16, 32, 64 equivalent EM iterations). The new method provided better recovery visually. For 32 EM iterations, recovery coefficients were calculated for grey matter regions. The effects of potential errors in the method were examined. Mean recovery was unchanged with a one-pixel registration error, the maximum error found in most registration programs. Segmentation errors > 2 pixels result in loss of accuracy for small structures. The method promises to be useful for reducing PVE in brain SPET.
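The original smoothed-mask correction that the authors modify can be sketched in a few lines, assuming a space-invariant Gaussian PSF (the very limitation their projection-based modification addresses); the mask geometry and FWHM are invented for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Muller-Gartner-style grey-matter PVE correction, minimal sketch:
# smooth the segmented grey-matter mask to the SPET resolution and
# divide the SPET image by the resulting recovery map.
fwhm_pix = 4.0
sigma = fwhm_pix / 2.355            # Gaussian FWHM -> sigma

gm_mask = np.zeros((64, 64))
gm_mask[28:36, 20:44] = 1.0          # small grey-matter structure
true_activity = 100.0 * gm_mask

spet = gaussian_filter(true_activity, sigma)     # resolution loss -> PVE
recovery_map = gaussian_filter(gm_mask, sigma)   # resolution-matched mask
corrected = np.where(recovery_map > 0.1,
                     spet / np.maximum(recovery_map, 0.1), 0.0)

centre_before = spet[32, 32] / 100.0   # < 1: counts spilled out
centre_after = corrected[32, 32] / 100.0
print(centre_before < 1.0, abs(centre_after - 1.0) < 0.05)
```

In this toy the division is exact because the activity is perfectly uniform in grey matter and zero elsewhere; real data violate both assumptions, which is why the registration and segmentation errors examined in the abstract matter.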

  13. Solution of Nonlinear Partial Differential Equations by New Laplace Variational Iteration Method

    Directory of Open Access Journals (Sweden)

    Eman M. A. Hilal

    2014-01-01

    Full Text Available The aim of this study is to give a good strategy for solving some linear and nonlinear partial differential equations in engineering and physics fields, by combining the Laplace transform and the modified variational iteration method. This method is based on the variational iteration method, Laplace transforms, and the convolution integral, introducing an alternative Laplace correction functional and expressing the integral as a convolution. Some examples in physical engineering are provided to illustrate the simplicity and reliability of this method. The solutions of these examples depend only on the initial conditions.
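The construction described can be written generically as follows (a hedged sketch of the standard variational-iteration correction functional; λ is the Lagrange multiplier, 𝓛 and 𝓝 the linear and nonlinear operators, g the source term):

```latex
u_{n+1}(x,t) = u_n(x,t)
  + \int_0^{t} \lambda(t-\tau)\,
    \bigl[\mathcal{L}\,u_n(x,\tau) + \mathcal{N}\,\tilde{u}_n(x,\tau) - g(x,\tau)\bigr]\,d\tau
```

Because the correction term is a convolution in t, taking the Laplace transform turns it into a product, L{u_{n+1}} = L{u_n} + L{λ}·L{𝓛u_n + 𝓝ũ_n − g}, which is the simplification the combined method exploits.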

  14. PROMO – Real-time Prospective Motion Correction in MRI using Image-based Tracking

    Science.gov (United States)

    White, Nathan; Roddey, Cooper; Shankaranarayanan, Ajit; Han, Eric; Rettmann, Dan; Santos, Juan; Kuperman, Josh; Dale, Anders

    2010-01-01

    Artifacts caused by patient motion during scanning remain a serious problem in most MRI applications. The prospective motion correction technique attempts to address this problem at its source by keeping the measurement coordinate system fixed with respect to the patient throughout the entire scan process. In this study, a new image-based approach for prospective motion correction is described, which utilizes three orthogonal 2D spiral navigator acquisitions (SP-Navs) along with a flexible image-based tracking method based on the Extended Kalman Filter (EKF) algorithm for online motion measurement. The SP-Nav/EKF framework offers the advantages of image-domain tracking within patient-specific regions-of-interest and reduced sensitivity to off-resonance-induced corruption of rigid-body motion estimates. The performance of the method was tested using offline computer simulations and online in vivo head motion experiments. In vivo validation results covering a broad range of staged head motions indicate a steady-state error of the SP-Nav/EKF motion estimates of less than 10 % of the motion magnitude, even for large compound motions that included rotations over 15 degrees. A preliminary in vivo application in 3D inversion recovery spoiled gradient echo (IR-SPGR) and 3D fast spin echo (FSE) sequences demonstrates the effectiveness of the SP-Nav/EKF framework for correcting 3D rigid-body head motion artifacts prospectively in high-resolution 3D MRI scans. PMID:20027635

  15. Improving Terminology Mapping in Clinical Text with Context-Sensitive Spelling Correction.

    Science.gov (United States)

    Dziadek, Juliusz; Henriksson, Aron; Duneld, Martin

    2017-01-01

    The mapping of unstructured clinical text to an ontology facilitates meaningful secondary use of health records but is non-trivial due to lexical variation and the abundance of misspellings in hurriedly produced notes. Here, we apply several spelling correction methods to Swedish medical text and evaluate their impact on SNOMED CT mapping, first in a controlled evaluation using medical literature text with induced errors, followed by a partial evaluation on clinical notes. It is shown that the best-performing method is context-sensitive, taking into account trigram frequencies and utilizing a corpus-based dictionary.
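The context-sensitive idea can be sketched as follows (a toy with an invented mini-corpus; the paper's method for Swedish clinical text is more elaborate): candidates within edit distance 1 of the misspelling are ranked by the frequency of the word trigram they would complete, so context picks between otherwise equally plausible fixes.

```python
from collections import Counter

# Invented mini-corpus; trigram counts stand in for a corpus-based model.
corpus = "heavy rain fell today . patient reported severe pain today .".split()
trigrams = Counter(zip(corpus, corpus[1:], corpus[2:]))
vocab = set(corpus)

def edits1(word):
    """All strings within one delete/replace/insert of `word`."""
    letters = "abcdefghijklmnopqrstuvwxyz"
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [a + b[1:] for a, b in splits if b]
    replaces = [a + c + b[1:] for a, b in splits if b for c in letters]
    inserts = [a + c + b for a, b in splits for c in letters]
    return set(deletes + replaces + inserts)

def correct(prev, word, nxt, vocab):
    # keep only in-vocabulary candidates, rank by trigram-in-context count
    candidates = edits1(word) & vocab
    return max(candidates, key=lambda c: trigrams[(prev, c, nxt)])

# "gain" is one edit from both "rain" and "pain"; context decides.
print(correct("severe", "gain", "today", vocab))  # -> pain
```

A context-free ranker would have no basis for preferring "pain" over "rain" here; the trigram context supplies exactly the signal the abstract credits for the best-performing method.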

  16. 'TrueCoinc' software utility for calculation of the true coincidence correction

    International Nuclear Information System (INIS)

    Sudar, S.

    2002-01-01

    The true coincidence correction plays an important role in the overall accuracy of γ-ray spectrometry, especially in the case of present-day high-volume detectors. The calculation of true coincidence corrections needs detailed nuclear structure information. These data have recently become available in computerized form from the Nuclear Data Centers through the Internet or on a CD-ROM of the Table of Isotopes. The aim has been to develop software for this calculation, using available databases for the level data. The user has to supply only the parameters of the detector to be used. The new computer program runs under the Windows 95/98 operating system. In the framework of the project, a new formula was derived for calculating the summing-out correction and the intensity of alias lines (sum peaks). A file converter for reading ENSDF-2 type files was completed, and reading and converting the original ENSDF files was added to the program. A computer-accessible database of X-ray energies and intensities was created, and X-ray emissions are taken into account in the summing-out calculation. Calculation of the true coincidence summing-in correction was also implemented. The output is arranged to show the two types of corrections independently and to calculate the final correction as the product of the two. A minimal intensity threshold can be set to show the final list only for the strongest lines; the calculation takes into account all transitions, independently of the threshold. The program calculates the intensities of X rays (K, L lines), the true coincidence corrections for X rays, and the intensities of the alias γ lines. (author)
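The simplest summing-out case illustrates the arithmetic (a hedged toy, not the TrueCoinc algorithm; the numbers are invented): in a two-step cascade γ1 → γ2, the γ1 full-energy peak loses an event whenever γ2 deposits any energy in the detector at the same time, so the observed peak area must be divided by (1 − ε_total(γ2)).

```python
def summing_out_factor(eps_total_coincident):
    """Correction factor for the simplest two-gamma cascade: the peak
    survives only when the coincident gamma deposits nothing."""
    return 1.0 / (1.0 - eps_total_coincident)

observed_area = 9000.0
eps_total_g2 = 0.10   # total (not full-energy-peak) efficiency for gamma2
corrected_area = observed_area * summing_out_factor(eps_total_g2)
print(round(corrected_area))  # -> 10000
```

Real decay schemes involve many branches, angular correlations, and X-ray cascades, which is why the program needs full level-scheme data rather than a single efficiency number.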

  17. Outcomes in patients with esotropic duane retraction syndrome and a partially accommodative component

    Directory of Open Access Journals (Sweden)

    Ramesh Kekunnaya

    2013-01-01

    Full Text Available Background: The management of Duane retraction syndrome (DRS) is challenging and may become more difficult if an associated accommodative component due to high hyperopia is present. The purpose of this study is to review clinical features and outcomes in patients with partially accommodative esotropia and DRS. Setting and Design: Retrospective, non-comparative case series. Materials and Methods: Six cases of DRS with high hyperopia were reviewed. Results: The mean age at presentation was 1.3 years (range: 0.5-2.5 years). The mean amount of hyperopia was +5 D (range: 3.50-8.50) in both eyes. The mean follow-up period was 7 years (range: 4 months-12 years). Five cases were unilateral while one was bilateral. Four cases underwent vertical rectus muscle transposition (VRT) and one had medial rectus recession prior to presentation; all were given optical correction. Two (50%) of the four patients who underwent vertical rectus transposition developed consecutive exotropia, one of whom did not have spectacles prescribed pre-operatively. All other cases (four) had minimal residual esotropia and face turn at the last follow-up with spectacle correction. Conclusion: Patients with Duane syndrome can have an accommodative component to their esotropia, which is crucial to detect and correct prior to surgery to decrease the risk of long-term over-correction. Occasionally, torticollis in Duane syndrome can be satisfactorily corrected with spectacles alone.

  18. A QoS Framework with Traffic Request in Wireless Mesh Network

    Science.gov (United States)

    Fu, Bo; Huang, Hejiao

    In this paper, we consider major issues in ensuring greater Quality-of-Service (QoS) in Wireless Mesh Networks (WMNs), specifically with regard to reliability and delay. To this end, we use traffic requests to record the QoS requirements of data flows. In order to achieve the required QoS for all data flows efficiently and with high portability, we develop the Network State Update Algorithm. All assumptions, definitions, and algorithms are made exclusively with WMNs in mind, guaranteeing the portability of our framework to various environments in WMNs. The simulation results confirm that our framework is correct.

  19. Topological order and memory time in marginally-self-correcting quantum memory

    Science.gov (United States)

    Siva, Karthik; Yoshida, Beni

    2017-03-01

    We examine two proposals for marginally-self-correcting quantum memory: the cubic code by Haah and the welded code by Michnicki. In particular, we prove explicitly that they lack topological order above zero temperature, as their Gibbs ensembles can be prepared via a short-depth quantum circuit from classical ensembles. Our proof technique naturally gives rise to the notion of free energy associated with excitations. Further, we develop a framework for an ergodic decomposition of Davies generators in CSS codes which enables formal reduction to simpler classical memory problems. We then show that the memory time in the welded code is doubly exponential in inverse temperature via the Peierls argument. These results introduce further connections between thermal topological order and self-correction from the viewpoint of free energy and quantum circuit depth.

  20. Cardiac motion correction based on partial angle reconstructed images in x-ray CT

    International Nuclear Information System (INIS)

    Kim, Seungeon; Chang, Yongjin; Ra, Jong Beom

    2015-01-01

    Purpose: Cardiac x-ray CT imaging is still challenging due to heart motion, which cannot be ignored even with the current rotation speed of the equipment. In response, many algorithms have been developed to compensate remaining motion artifacts by estimating the motion using projection data or reconstructed images. In these algorithms, accurate motion estimation is critical to the compensated image quality. In addition, since the scan range is directly related to the radiation dose, it is preferable to minimize the scan range in motion estimation. In this paper, the authors propose a novel motion estimation and compensation algorithm using a sinogram with a rotation angle of less than 360°. The algorithm estimates the motion of the whole heart area using two opposite 3D partial angle reconstructed (PAR) images and compensates the motion in the reconstruction process. Methods: A CT system scans the thoracic area including the heart over an angular range of 180° + α + β, where α and β denote the detector fan angle and an additional partial angle, respectively. The obtained cone-beam projection data are converted into cone-parallel geometry via row-wise fan-to-parallel rebinning. Two conjugate 3D PAR images, whose center projection angles are separated by 180°, are then reconstructed with an angular range of β, which is considerably smaller than a short scan range of 180° + α. Although these images include limited view angle artifacts that disturb accurate motion estimation, they have considerably better temporal resolution than a short scan image. Hence, after preprocessing these artifacts, the authors estimate a motion model during a half rotation for a whole field of view via nonrigid registration between the images. Finally, motion-compensated image reconstruction is performed at a target phase by incorporating the estimated motion model. The target phase is selected as that corresponding to a view angle that is orthogonal to the center view angles of

  1. Cardiac motion correction based on partial angle reconstructed images in x-ray CT

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Seungeon; Chang, Yongjin; Ra, Jong Beom, E-mail: jbra@kaist.ac.kr [Department of Electrical Engineering, KAIST, Daejeon 305-701 (Korea, Republic of)

    2015-05-15

    Purpose: Cardiac x-ray CT imaging is still challenging due to heart motion, which cannot be ignored even with the current rotation speed of the equipment. In response, many algorithms have been developed to compensate remaining motion artifacts by estimating the motion using projection data or reconstructed images. In these algorithms, accurate motion estimation is critical to the compensated image quality. In addition, since the scan range is directly related to the radiation dose, it is preferable to minimize the scan range in motion estimation. In this paper, the authors propose a novel motion estimation and compensation algorithm using a sinogram with a rotation angle of less than 360°. The algorithm estimates the motion of the whole heart area using two opposite 3D partial angle reconstructed (PAR) images and compensates the motion in the reconstruction process. Methods: A CT system scans the thoracic area including the heart over an angular range of 180° + α + β, where α and β denote the detector fan angle and an additional partial angle, respectively. The obtained cone-beam projection data are converted into cone-parallel geometry via row-wise fan-to-parallel rebinning. Two conjugate 3D PAR images, whose center projection angles are separated by 180°, are then reconstructed with an angular range of β, which is considerably smaller than a short scan range of 180° + α. Although these images include limited view angle artifacts that disturb accurate motion estimation, they have considerably better temporal resolution than a short scan image. Hence, after preprocessing these artifacts, the authors estimate a motion model during a half rotation for a whole field of view via nonrigid registration between the images. Finally, motion-compensated image reconstruction is performed at a target phase by incorporating the estimated motion model. The target phase is selected as that corresponding to a view angle that is orthogonal to the center view angles of

  2. Dominant two-loop electroweak corrections to the hadroproduction of a pseudoscalar Higgs boson and its photonic decay

    International Nuclear Information System (INIS)

    Brod, J.; Kniehl, B.A.

    2008-01-01

    We present the dominant two-loop electroweak corrections to the partial decay widths to gluon jets and prompt photons of the neutral CP-odd Higgs boson A^0, with mass M_{A^0} < 2M_W, in the two-Higgs-doublet model for low to intermediate values of the ratio tan β = v_2/v_1 of the vacuum expectation values. They apply as they stand to the production cross sections in hadronic and two-photon collisions, at the Tevatron, the LHC, and a future photon collider. The appearance of three γ_5 matrices in closed fermion loops requires special care in the dimensional regularization of ultraviolet divergences. The corrections are negative and amount to several percent, so that they fully compensate or partly screen the enhancement due to QCD corrections. (orig.)

  3. Calculation and measurement of radiation corrections for plasmon resonances in nanoparticles

    Science.gov (United States)

    Hung, L.; Lee, S. Y.; McGovern, O.; Rabin, O.; Mayergoyz, I.

    2013-08-01

    The problem of plasmon resonances in metallic nanoparticles can be formulated as an eigenvalue problem under the condition that the wavelengths of the incident radiation are much larger than the particle dimensions. As the nanoparticle size increases, the quasistatic condition is no longer valid. For this reason, the accuracy of the electrostatic approximation may be compromised and appropriate radiation corrections for the calculation of resonance permittivities and resonance wavelengths are needed. In this paper, we present the radiation corrections in the framework of the eigenvalue method for plasmon mode analysis and demonstrate that the computational results accurately match analytical solutions (for nanospheres) and experimental data (for nanorings and nanocubes). We also demonstrate that the optical spectra of silver nanocube suspensions can be fully assigned to dipole-type resonance modes when radiation corrections are introduced. Finally, our method is used to predict the resonance wavelengths for face-to-face silver nanocube dimers on glass substrates. These results may be useful for the indirect measurements of the gaps in the dimers from extinction cross-section observations.
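For reference, the analytical benchmark mentioned for nanospheres is the quasistatic eigenvalue condition (a standard textbook result, not specific to this paper): mode l of a sphere of permittivity ε embedded in a host of permittivity ε_h resonates when

```latex
\epsilon_l \;=\; -\,\frac{l+1}{l}\,\epsilon_h , \qquad l = 1, 2, \dots
```

so the dipole (l = 1) mode of a sphere in vacuum sits at ε = −2, the classic Fröhlich condition. Radiation corrections shift these resonance permittivities and wavelengths as the particle size grows, which is the regime the eigenvalue method above is extended to cover.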

  4. NLO QCD corrections to Higgs pair production including dimension-6 operators

    Energy Technology Data Exchange (ETDEWEB)

    Groeber, Ramona [INFN, Sezione di Roma Tre, Roma (Italy); Muehlleitner, Margarete; Streicher, Juraj [Karlsruher Institut fuer Technologie (KIT), Karlsruhe (Germany). Institut fuer Theoretische Physik; Spira, Michael [Paul Scherrer Institut, Villigen (Switzerland)

    2016-07-01

    The role of the Higgs boson has developed from that of a long-sought particle into a tool for exploring beyond-Standard-Model (BSM) physics. While the Higgs boson signal strengths are close to the values predicted in the Standard Model (SM), the trilinear Higgs self-coupling can still deviate significantly from the SM expectations in some BSM scenarios. The Effective Field Theory (EFT) framework provides a way to describe these deviations in a rather model-independent way, by including higher-dimensional operators which modify the Higgs boson couplings and induce novel couplings not present in the SM. The trilinear Higgs self-coupling is accessible in Higgs pair production, for which gluon fusion is the dominant production channel. The next-to-leading order (NLO) QCD corrections to this process are important for a proper prediction of the cross section and are known in the limit of heavy top quark masses. In our work, we provide the NLO QCD corrections in the large top quark mass limit to Higgs pair production including dimension-6 operators. The various higher-dimensional contributions are affected differently by the QCD corrections, leading to deviations in the relative NLO QCD corrections of several percent, while modifying the cross section by up to an order of magnitude.

  5. Hybrid Cascading Outage Analysis of Extreme Events with Optimized Corrective Actions

    Energy Technology Data Exchange (ETDEWEB)

    Vallem, Mallikarjuna R.; Vyakaranam, Bharat GNVSR; Holzer, Jesse T.; Samaan, Nader A.; Makarov, Yuri V.; Diao, Ruisheng; Huang, Qiuhua; Ke, Xinda

    2017-10-19

    Power systems are vulnerable to extreme contingencies (like an outage of a major generating substation) that can cause significant generation and load loss and can lead to further cascading outages of other transmission facilities and generators in the system. Some cascading outages occur within minutes following a major contingency and may not be captured using dynamic simulation of the power system alone. Utilities plan for contingencies based on either dynamic or steady-state analysis separately, which may not accurately capture the impact of one process on the other. We address this gap in cascading outage analysis by developing the Dynamic Contingency Analysis Tool (DCAT), which can analyze hybrid dynamic and steady-state behavior of the power system, including protection system models in dynamic simulations, and simulate corrective actions in post-transient steady-state conditions. One of the important implemented steady-state processes mimics operator corrective actions to mitigate aggravated states caused by dynamic cascading. This paper presents an Optimal Power Flow (OPF) based formulation for selecting corrective actions that utility operators can take during a major contingency, thus automating the hybrid dynamic-steady-state cascading outage process. The improved DCAT framework with OPF-based corrective actions is demonstrated on the IEEE 300-bus test system.
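An OPF-based corrective-action step of the kind described reduces, in its simplest DC form, to a small linear program. A toy two-bus sketch with invented numbers (scipy's linprog stands in for the production OPF solver): bus 2 carries a 100 MW load, the surviving line into it limits the cheap remote generator G1 to 80 MW, and load shedding is priced very high so it is used only as a last resort.

```python
from scipy.optimize import linprog

# Variables: [P_G1, P_G2, load_shed], all in MW.
c = [1.0, 5.0, 1000.0]                  # $/MW: remote G1, local G2, shed penalty
A_eq = [[1.0, 1.0, 1.0]]                # power balance: P1 + P2 + shed = load
b_eq = [100.0]
bounds = [(0, 80),                      # line limit folded into G1's bound
          (0, 30),                      # local generator capacity
          (0, 100)]                     # can shed at most the whole load
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
print([round(v, 1) for v in res.x])     # -> [80.0, 20.0, 0.0]
```

The solver loads the line to its 80 MW limit, covers the remainder with the expensive local unit, and sheds nothing; a real DCAT-style OPF adds network flow equations, many contingencies, and discrete switching actions on top of the same skeleton.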

  6. Type-Directed Partial Evaluation

    DEFF Research Database (Denmark)

    Danvy, Olivier

    1998-01-01

    Type-directed partial evaluation uses a normalization function to achieve partial evaluation. These lecture notes review its background, foundations, practice, and applications. Of specific interest is the modular technique of offline and online type-directed partial evaluation in Standard ML...

  8. Fabrication of a Customized Ball Abutment to Correct a Nonparallel Implant Abutment for a Mandibular Implant-Supported Removable Partial Prosthesis: A Case Report

    OpenAIRE

    Hossein Dasht; Mohammadreza Nakhaei; Nafiseh teimouri

    2017-01-01

    Introduction: While using an implant-supported removable partial prosthesis, the implant abutments should be parallel to one another along the path of insertion. If the implants and their attachments are placed vertically on a similar occlusal plane, not only is the retention improved, but the prosthesis will also be maintained for a longer period. Case Report: A 65-year-old male patient was referred to the School of Dentistry in Mashhad, Iran, with complaints of discomfort with the removable partial ...

  9. The usefulness and the problems of attenuation correction using simultaneous transmission and emission data acquisition method. Studies on normal volunteers and phantom

    International Nuclear Information System (INIS)

    Kijima, Tetsuji; Kumita, Shin-ichiro; Mizumura, Sunao; Cho, Keiichi; Ishihara, Makiko; Toba, Masahiro; Kumazaki, Tatsuo; Takahashi, Munehiro.

    1997-01-01

    Attenuation correction using simultaneous transmission (TCT) and emission (ECT) data acquisition was applied to 201Tl myocardial SPECT in ten normal adults and a phantom in order to validate the efficacy of attenuation correction using this method. The normal adult studies demonstrated improved 201Tl accumulation in the septal and posterior walls of the left ventricle and relatively decreased activities in the lateral wall with attenuation correction. High 201Tl uptake in organs such as the liver and the stomach pushed up the activities in the septal and posterior walls. Cardiac dynamic phantom studies showed that the partial volume effect due to cardiac motion contributed to under-correction at the apex, which might be overcome using gated SPECT. Although simultaneous TCT and ECT acquisition was conceived of as an advantageous method for attenuation correction, mis-correction of specific myocardial segments should be taken into account when assessing attenuation-corrected images. (author)

  10. Beyond WKB quantum corrections to Hamilton-Jacobi theory

    International Nuclear Information System (INIS)

    Jurisch, Alexander

    2007-01-01

    In this paper, we develop the quantum mechanics of quasi-one-dimensional systems within the framework of quantum-mechanical Hamilton-Jacobi theory. We show that the Schroedinger point of view and the Hamilton-Jacobi point of view are fully equivalent in their description of physical systems, but differ in their descriptive manner. As a main result, a wavefunction in Hamilton-Jacobi theory can be decomposed into travelling waves at any point in space, not only asymptotically. Using the quasi-linearization technique, we derive quantum correction functions in every order of h-bar. The quantum correction functions remove the turning-point singularity that plagues the WKB series expansion already in zeroth order and thus provide an extremely good approximation to the full solution of the Schroedinger equation. In the language of the quantum action it is also possible to solve the connection problem elegantly, without asymptotic approximations. The use of the quantum action further allows us to derive an equation from which the Maslov index is directly calculable without any approximations. Stationary quantum trajectories are also considered and thoroughly discussed.

  11. Restoration-Guided Implant Rehabilitation of the Complex Partial Edentulism: a Clinical Report

    Directory of Open Access Journals (Sweden)

    Nikitas Sykaras

    2010-01-01

    Background: The hard and soft tissue deficiency is a limiting factor for the prosthetic restoration, and any surgical attempt to correct the anatomic foundation needs to be precisely executed for optimal results. The purpose of this paper is to describe the clinical steps that are needed to confirm the treatment plan and allow its proper execution. Methods: Team work and basic principles are emphasized in a step-by-step description of clinical methods and techniques. This clinical report describes the interdisciplinary approach in the rehabilitation of a partially edentulous patient. The importance of the transitional restoration, which sets the guidelines for the proper execution of the treatment plan, is especially emphasized, along with all the steps that have to be followed. Results: The clinical report describes the diagnostic arrangement of teeth, the ridge augmentation based on the diagnostic evaluation of the removable prosthesis, the implant placement with a surgical guide in the form of a removable partial denture duplicate and, finally, the special 2-piece design of the final fixed prosthesis. Conclusions: The clinical approach and prosthesis design described above offer a predictable way to restore partial edentulism with a fixed yet retrievable prosthesis, restoring soft tissue and teeth and avoiding an implant-supported overdenture.

  12. Including gauge corrections to thermal leptogenesis

    International Nuclear Information System (INIS)

    Huetig, Janine

    2013-01-01

    This thesis provides the first systematic inclusion of gauge corrections at leading order in the ansatz of thermal leptogenesis. We have derived a complete expression for the integrated lepton number matrix including all required resummations. For this purpose, a new class of diagram has been introduced, the cylindrical diagram, which allows diverse investigations into leptogenesis, such as the case of resonant leptogenesis. After a brief introduction to the baryon asymmetry of the universe and a discussion of its most promising explanations, together with their advantages and disadvantages, we present our framework of thermal leptogenesis. An effective model is described, together with the associated Feynman rules. The basis for using nonequilibrium quantum field theory is built in chapter 3: first the main definitions of equilibrium thermal field theory are presented; afterwards we discuss the Kadanoff-Baym equations for systems out of equilibrium, using the example of the Majorana neutrino. These equations are then solved in the context of leptogenesis in chapter 4. Since gauge corrections play a crucial role throughout this thesis, we also repeat the naive ansatz of replacing the free equilibrium propagator by propagators that include thermal damping rates due to the Standard Model damping widths for lepton and Higgs fields. It is shown that this leads to a result comparable to the solutions of the Boltzmann equations for thermal leptogenesis. It thus becomes obvious that Standard Model corrections are not negligible for thermal leptogenesis and therefore need to be included systematically from first principles. To achieve this, we discuss the calculation of ladder rung diagrams for Majorana neutrinos using the HTL and CTL approaches in chapter 5. All gauge corrections are included in this framework, which thus becomes the basis for the following considerations.

  14. Simultaneous Mean and Covariance Correction Filter for Orbit Estimation.

    Science.gov (United States)

    Wang, Xiaoxu; Pan, Quan; Ding, Zhengtao; Ma, Zhengya

    2018-05-05

    This paper proposes a novel filtering design, from the viewpoint of identification rather than the conventional nonlinear estimation schemes (NESs), to improve the performance of orbit state estimation for a space target. First, the nonlinear perturbation is viewed or modeled as an unknown input (UI) coupled with the orbit state, to avoid the intractable nonlinear perturbation integral (INPI) required by NESs. Second, a simultaneous mean and covariance correction filter (SMCCF), based on a two-stage expectation maximization (EM) framework, is proposed to simply and analytically fit or identify the first two moments (FTM) of the perturbation (viewed as a UI), instead of directly computing the INPI as in NESs. Orbit estimation performance is greatly improved by utilizing the fitted UI-FTM to simultaneously correct the state estimate and its covariance. Third, provided enough information is mined, SMCCF should outperform existing NESs and standard identification algorithms (which view the UI as a constant independent of the state and utilize only the identified UI mean to correct the state estimate, regardless of its covariance), since it further incorporates the useful covariance information in addition to the mean of the UI. Finally, our simulations demonstrate the superior performance of SMCCF in an orbit estimation example.

  15. An energy estimation framework for event-based methods in Non-Intrusive Load Monitoring

    International Nuclear Information System (INIS)

    Giri, Suman; Bergés, Mario

    2015-01-01

    Highlights: • Energy estimation in NILM has not yet accounted for the complexity of appliance models. • We present a data-driven framework for appliance modeling in supervised NILM. • We test the framework on 3 houses and report average accuracies of 5.9–22.4%. • Appliance models facilitate the estimation of the energy consumed by each appliance. - Abstract: Non-Intrusive Load Monitoring (NILM) is a set of techniques used to estimate the electricity consumed by individual appliances in a building from measurements of the total electrical consumption. Most commonly, NILM works by first attributing any significant change in the total power consumption (also known as an event) to a specific load and subsequently using these attributions (i.e., the labels for the events) to estimate energy for each load. For this last step, most published work in the field makes simplifying assumptions to make the problem more tractable. In this paper, we present a framework for creating appliance models based on classification labels and aggregate power measurements that can help relax many of these assumptions. Our framework automatically builds appliance models to perform energy estimation. It relies on feature extraction, clustering via affinity propagation, perturbation of extracted states to ensure that they mimic appliance behavior, creation of finite state models, correction of any classification errors that violate the model, and estimation of energy based on the corrected labels. We evaluate our framework on 3 houses from standard datasets in the field and show that it can learn data-driven models from event labels and use them to estimate energy with lower error margins (e.g., 1.1–42.3%) than when using the heuristic models used by others.
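
    The final energy-estimation step of such an event-based pipeline can be sketched as follows, assuming a simplified two-state (on/off) appliance model rather than the clustered finite state models the framework actually learns; the event labels and power values below are made up.

```python
# Minimal sketch of event-based energy estimation in NILM: each labeled
# event carries a timestamp and the change in aggregate power attributed
# to an appliance. A two-state (on/off) model is assumed here; the
# framework described above learns richer multi-state models.

def estimate_energy_wh(events):
    """events: time-ordered list of (time_s, label, delta_watts).
    Pairs each ON event (positive delta) with the next OFF event
    (negative delta) per label; returns watt-hours per appliance."""
    on_since = {}   # label -> (start time, steady-state power)
    energy = {}
    for t, label, dp in events:
        if dp > 0:                       # ON event
            on_since[label] = (t, dp)
        elif label in on_since:          # matching OFF event
            t0, p = on_since.pop(label)
            energy[label] = energy.get(label, 0.0) + p * (t - t0) / 3600.0
    return energy

events = [(0, "fridge", 120.0), (1800, "kettle", 2000.0),
          (1980, "kettle", -2000.0), (3600, "fridge", -120.0)]
print(estimate_energy_wh(events))
# fridge: 120 W for 1 h = 120 Wh; kettle: 2000 W for 180 s = 100 Wh
```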

  16. An analytic, approximate method for modeling steady, three-dimensional flow to partially penetrating wells

    Science.gov (United States)

    Bakker, Mark

    2001-05-01

    An analytic, approximate solution is derived for modeling three-dimensional flow to partially penetrating wells. The solution is written as a correction on the solution for a fully penetrating well and is obtained by dividing the aquifer up, locally, into a number of aquifer layers. The resulting system of differential equations is solved by application of the theory for multiaquifer flow. The presented approach has three major benefits. First, the solution may be applied to any groundwater model that can simulate flow to a fully penetrating well; the solution may be superimposed onto the solution for the fully penetrating well to simulate the local three-dimensional drawdown and flow field. Second, the approach is applicable to isotropic, anisotropic, and stratified aquifers and to both confined and unconfined flow. Third, the solution extends over only a small area around the well; outside this area the three-dimensional effect of the partially penetrating well is negligible, and no correction to the fully penetrating well is needed. A number of comparisons are made to existing three-dimensional analytic solutions, including radial confined and unconfined flow and a well in a uniform flow field. It is shown that a subdivision into three layers is accurate for many practical cases; very accurate solutions are obtained with more layers.

  17. Perturbative corrections for approximate inference in gaussian latent variable models

    DEFF Research Database (Denmark)

    Opper, Manfred; Paquet, Ulrich; Winther, Ole

    2013-01-01

    Expectation Propagation (EP) provides a framework for approximate inference. When the model under consideration is over a latent Gaussian field, with the approximation being Gaussian, we show how these approximations can systematically be corrected. A perturbative expansion is made of the exact b...... illustrate on tree-structured Ising model approximations. Furthermore, they provide a polynomial-time assessment of the approximation error. We also provide both theoretical and practical insights on the exactness of the EP solution. © 2013 Manfred Opper, Ulrich Paquet and Ole Winther....

  18. The cosmological perturbation theory in loop cosmology with holonomy corrections

    International Nuclear Information System (INIS)

    Wu, Jian-Pin; Ling, Yi

    2010-01-01

    In this paper we investigate the scalar mode of first-order metric perturbations over spatially flat FRW spacetime when the holonomy correction is taken into account in the semi-classical framework of loop quantum cosmology. By means of a Hamiltonian derivation, the cosmological perturbation equations are obtained in the longitudinal gauge. It turns out that in the presence of metric perturbations the holonomy effects influence both the background and the perturbations, and contribute the non-trivial terms S_h1 and S_h2 to the cosmological perturbation equations.

  19. Fission track dating of volcanic glass: experimental evidence for the validity of the Size-Correction Method

    International Nuclear Information System (INIS)

    Bernardes, C.; Hadler Neto, J.C.; Lattes, C.M.G.; Araya, A.M.O.; Bigazzi, G.; Cesar, M.F.

    1986-01-01

    Two techniques may be employed for correcting thermally lowered fission track ages of glass material: the so-called 'size-correction method' and the 'plateau method'. Several results from fission track dating of obsidian were analysed in order to compare the model underlying the size-correction method with experimental evidence. The results of this work can be summarized as follows: 1) The assumption that the mean sizes of spontaneous and induced etched tracks are equal in samples unaffected by partial fading is supported by experimental results. If reactor effects exist, such as an enhancement of the etching rate in the irradiated fraction due to radiation damage and/or to the fact that induced fission releases a quantity of energy slightly greater than spontaneous fission, their influence on the size-correction method is very small. 2) The above two correction techniques produce concordant results. 3) Several samples from the same obsidian, affected to different degrees by 'instantaneous' as well as 'continuous' natural fading, were analysed: the curve showing the decrease of spontaneous track mean size vs. the fraction of spontaneous tracks lost by fading is in close agreement with the correction curve constructed for the same obsidian by imparting artificial thermal treatments to induced tracks. From the above points one can conclude that the assumptions on which the size-correction method is based are well supported, at least to first approximation. (Author)

  20. Offset Free Tracking Predictive Control Based on Dynamic PLS Framework

    Directory of Open Access Journals (Sweden)

    Jin Xin

    2017-10-01

    This paper develops an offset-free tracking model predictive control scheme based on a dynamic partial least squares (PLS) framework. First, a state-space model is used as the inner model of the PLS to describe the dynamic system, and a subspace identification method is used to identify this inner model. Based on the obtained model, multiple independent model predictive control (MPC) controllers are designed. Due to the decoupling character of PLS, these controllers run separately, which is suitable for a distributed control framework. In addition, the increment of the inner model output is considered in the cost function of the MPC, which introduces integral action into the controller and hence guarantees offset-free tracking performance. The results of a simulation with an industrial background demonstrate the effectiveness of the proposed method.

  1. Program Correctness, Verification and Testing for Exascale (Corvette)

    Energy Technology Data Exchange (ETDEWEB)

    Sen, Koushik [Univ. of California, Berkeley, CA (United States); Iancu, Costin [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Demmel, James W [UC Berkeley

    2018-01-26

    The goal of this project is to provide tools to assess the correctness of parallel programs written using hybrid parallelism. There is a dire lack of both theoretical and engineering know-how in the area of finding bugs in hybrid or large scale parallel programs, which our research aims to change. In the project we have demonstrated novel approaches in several areas: 1. Low overhead automated and precise detection of concurrency bugs at scale. 2. Using low overhead bug detection tools to guide speculative program transformations for performance. 3. Techniques to reduce the concurrency required to reproduce a bug using partial program restart/replay. 4. Techniques to provide reproducible execution of floating point programs. 5. Techniques for tuning the floating point precision used in codes.

  2. Parton distribution functions with QED corrections in the valon model

    Science.gov (United States)

    Mottaghizadeh, Marzieh; Taghavi Shahri, Fatemeh; Eslami, Parvin

    2017-10-01

    The parton distribution functions (PDFs) with QED corrections are obtained by solving the QCD⊗QED DGLAP evolution equations in the framework of the "valon" model at next-to-leading order in QCD and leading order in QED. Our results for the PDFs with QED corrections in this phenomenological model are in good agreement with the recent CT14QED global fit code [Phys. Rev. D 93, 114015 (2016), 10.1103/PhysRevD.93.114015] and the APFEL (NNPDF2.3QED) program [Comput. Phys. Commun. 185, 1647 (2014), 10.1016/j.cpc.2014.03.007] over the wide range x = [10^-5, 1] and Q^2 = [0.283, 10^8] GeV^2. The model calculations agree rather well with those codes. Finally, we propose a new method for studying the symmetry breaking of the sea quark distribution functions inside the proton.

  3. Corrections to the leading eikonal amplitude for high-energy scattering and quasipotential approach

    International Nuclear Information System (INIS)

    Nguyen Suan Hani; Nguyen Duy Hung

    2003-12-01

    The asymptotic behaviour of the scattering amplitude for two scalar particles at high energy and fixed momentum transfer is reconsidered in quantum field theory. In the framework of the quasipotential approach and the modified perturbation theory, a systematic scheme for finding the leading eikonal scattering amplitude and its corrections is developed and constructed. The connection between the solutions obtained by the quasipotential and functional approaches is also discussed. (author)

  4. Topology optimization for optical microlithography with partially coherent illumination

    DEFF Research Database (Denmark)

    Zhou, Mingdong; Lazarov, Boyan Stefanov; Sigmund, Ole

    2017-01-01

    This article revisits a topology optimization design approach for micro-manufacturing and extends it to optical microlithography with partially coherent illumination. The solution is based on a combination of two technologies, topology optimization and proximity error correction, in microlithography/nanolithography. The key steps include (i) modeling the physical inputs of the fabrication process, including the ultraviolet light illumination source and the mask, as the design variables in optimization, and (ii) applying physical filtering and Heaviside projection for topology optimization. Meanwhile, the performance of the device is optimized and robust with respect to process variations, such as dose/photo-resist variations and lens defocus. A compliant micro-gripper design example is considered to demonstrate the applicability of this approach.

  5. Liquid nitrogen enhancement of partially annealed fission tracks in glass

    International Nuclear Information System (INIS)

    Pilione, L.J.; Gold, D.P.

    1976-01-01

    It is known that the number density of fission tracks in solids is reduced if the sample is heated before chemical etching, and the effect of annealing must be allowed for before an age can be assigned to the sample. The extent of annealing can be determined by measuring the reduction of track parameters (diameter and/or length) and comparison with unannealed tracks. Correct ages can be obtained by careful calibration studies of track density reduction against track diameter or length reduction at different annealing temperatures and times. For crystallised minerals, however, the resulting correction techniques are not generally valid. In the experimental work described, glass samples were partially annealed and then immersed in liquid N2 for various periods, and it was shown that the properties of the glass and the track parameters could be altered so as to observe tracks that would normally be erased by annealing. The results of track density measurements against liquid N2 immersion times are shown graphically. A gain of about 40% was achieved after 760 hours immersion time. The size of the tracks was not noticeably affected by the immersion. It was thought that thermal shock might be the cause of the track enhancement, but it was found that repeated immersion for about 2 hours did not lead to an increase in track density. Other studies suggest that the mechanism that erases the tracks through annealing may be partially reversed when the temperature of the sample is significantly lowered for a sufficient length of time. Further work is under way to find whether or not the process of enhancement is a reversal of the annealing process. Similar enhancement effects using liquid N2 have been observed for d-particle tracks in polycarbonate detectors. (U.K.)

  6. Modeling Phase-transitions Using a High-performance, Isogeometric Analysis Framework

    KAUST Repository

    Vignal, Philippe

    2014-06-06

    In this paper, we present a high-performance framework for solving partial differential equations using Isogeometric Analysis, called PetIGA, and show how it can be used to solve phase-field problems. We specifically chose the Cahn-Hilliard equation, and the phase-field crystal equation as test cases. These two models allow us to highlight some of the main advantages that we have access to while using PetIGA for scientific computing.
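
    As a rough illustration of the kind of phase-field model mentioned, the following is a minimal explicit finite-difference sketch of the 1-D Cahn-Hilliard equation, dc/dt = Lap(c^3 - c - gamma*Lap(c)), on a periodic grid. PetIGA itself uses isogeometric (B-spline) discretizations and PETSc solvers; all parameters here are made up. A useful sanity check is that the scheme conserves total mass.

```python
# Explicit finite-difference sketch of the 1-D Cahn-Hilliard equation on a
# periodic grid. Parameters (grid, dt, gamma) are illustrative only; a real
# solver (e.g. PetIGA) would use higher-order discretizations and implicit
# time stepping.
import random

def laplacian(u, dx):
    """Second-order central difference with periodic boundaries."""
    n = len(u)
    return [(u[(i - 1) % n] - 2 * u[i] + u[(i + 1) % n]) / dx ** 2
            for i in range(n)]

def cahn_hilliard_step(c, dt, dx, gamma):
    """One explicit Euler step of dc/dt = Lap(c^3 - c - gamma*Lap(c))."""
    lap_c = laplacian(c, dx)
    mu = [c[i] ** 3 - c[i] - gamma * lap_c[i] for i in range(len(c))]
    lap_mu = laplacian(mu, dx)
    return [c[i] + dt * lap_mu[i] for i in range(len(c))]

random.seed(0)
n, dx, dt, gamma = 32, 1.0, 0.01, 1.0
c = [0.1 * (random.random() - 0.5) for _ in range(n)]  # small random mix
mass0 = sum(c)
for _ in range(500):
    c = cahn_hilliard_step(c, dt, dx, gamma)
print(abs(sum(c) - mass0) < 1e-9)  # total mass is conserved
```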

  7. Essays on partial retirement

    NARCIS (Netherlands)

    Kantarci, T.

    2012-01-01

    The five essays in this dissertation address a range of topics in the micro-economic literature on partial retirement. The focus is on the labor market behavior of older age groups. The essays examine the economic and non-economic determinants of partial retirement behavior, the effect of partial

  8. Enhancing generalisation in biofeedback intervention using the challenge point framework: A case study

    Science.gov (United States)

    HITCHCOCK, ELAINE R.; BYUN, TARA McALLISTER

    2014-01-01

    Biofeedback intervention can help children achieve correct production of a treatment-resistant error sound, but generalisation is often limited. This case study suggests that generalisation can be enhanced when biofeedback intervention is structured in accordance with a “challenge point” framework for speech-motor learning. The participant was an 11-year-old with residual /r/ misarticulation who had previously attained correct /r/ production through a structured course of ultrasound biofeedback treatment but did not generalise these gains beyond the word level. Treatment difficulty was adjusted in an adaptive manner following predetermined criteria for advancing, maintaining, or moving back a level in a multidimensional hierarchy of functional task complexity. The participant achieved and maintained virtually 100% accuracy in producing /r/ at both word and sentence levels. These preliminary results support the efficacy of a semi-structured implementation of the challenge point framework as a means of achieving generalisation and maintenance of treatment gains. PMID:25216375

  9. Quantum size correction to the work function and centroid of excess charge in positively ionized simple metal clusters

    International Nuclear Information System (INIS)

    Payami, M.

    2004-01-01

    In this work, we have shown the important role of the finite-size correction to the work function in predicting the correct position of the centroid of excess charge in positively charged simple metal clusters with different r_s values (2 ≤ r_s ≤ 7). For this purpose, we first calculated the self-consistent Kohn-Sham energies of neutral and singly-ionized clusters with sizes 2 ≤ N ≤ 100 in the framework of the local spin-density approximation and the stabilized jellium model, as well as the simple jellium model with rigid jellium. Second, we fitted our results to the asymptotic ionization formulas both with and without the size correction to the work function. The results of the fittings show that the formula containing the size correction predicts a correct position of the centroid, inside the jellium, while the other predicts a false position, outside the jellium sphere.
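
    The fitting step can be illustrated with a small least-squares sketch: cluster ionization energies are fit to an asymptotic form IP(N) = W + a·N^(-1/3), since the cluster radius scales as N^(1/3). The data below are synthetic, generated from hypothetical constants, not values from the paper.

```python
# Illustrative least-squares fit of cluster ionization energies to the
# asymptotic form ip(N) = W + a * N**(-1/3). The data are synthetic; the
# actual work fits self-consistent Kohn-Sham energies and compares
# formulas with and without the size correction to the work function W.

def fit_asymptotic(ns, ips):
    """Ordinary least squares for ip = W + a * n**(-1/3); returns (W, a)."""
    xs = [n ** (-1.0 / 3.0) for n in ns]
    m = len(xs)
    sx, sy = sum(xs), sum(ips)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ips))
    a = (m * sxy - sx * sy) / (m * sxx - sx * sx)
    w = (sy - a * sx) / m
    return w, a

# Synthetic data generated from hypothetical W = 2.9 eV, a = 5.3 eV:
ns = [2, 8, 20, 40, 100]
ips = [2.9 + 5.3 * n ** (-1.0 / 3.0) for n in ns]
w, a = fit_asymptotic(ns, ips)
print(round(w, 6), round(a, 6))  # recovers the generating constants
```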

  10. A Governance and Management Framework for Green IT

    Directory of Open Access Journals (Sweden)

    J. David Patón-Romero

    2017-09-01

    In recent years, Green Information Technology (IT) has grown enormously and has become an increasingly important and essential area, providing multiple benefits to the organizations that focus on it. It is for this reason that an increasing number of organizations are embracing the idea of Green IT. However, Green IT is a very young field, and each organization implements it according to its own criteria. That is why it is extremely important to develop the bases, or best practices, of governance and management that allow organizations to implement Green IT practices correctly and to standardize them. In this article, we propose the "Governance and Management Framework for Green IT", establishing the characteristics needed to carry out the governance and management of Green IT in an organization and to perform audits in this area. This framework is based on COBIT 5, which is a general framework for the control and audit of different areas related to IT. The results obtained through different validations demonstrate the validity and usefulness of the framework in the field of Green IT, providing organizations with a complete guide in their efforts to implement, control and/or improve Green IT practices in their processes and day-to-day operations.

  11. Brivaracetam: review of its pharmacology and potential use as adjunctive therapy in patients with partial onset seizures [Corrigendum

    OpenAIRE

    Russo, Emilio; Mumoli,Laura; Palleria,Caterina; Gasparini,Sara; Citraro,Rita; Labate,Angelo; Ferlazzo,Edoardo; Gambardella,Antonio; De Sarro,Giovambattista

    2015-01-01

    Brivaracetam: review of its pharmacology and potential use as adjunctive therapy in patients with partial onset seizures [Corrigendum] Mumoli L, Palleria C, Gasparini S, et al. Drug Des Devel Ther. 2015;9:5719–5725. The authors report several errors in the paper, which are corrected in this corrigendum. View the original article by Mumoli et al.

  14. The equilibrium structures of the 90° partial dislocation in silicon

    International Nuclear Information System (INIS)

    Valladares, Alexander; Sutton, A P

    2005-01-01

    We consider the free energies of the single-period (SP) and double-period (DP) core reconstructions of the straight 90° partial dislocation in silicon. The vibrational contributions are calculated with a harmonic model. It is found that it leads to a diminishing difference between the free energies of the two core reconstructions with increasing temperature. The question of the relative populations of SP and DP reconstructions in a single straight 90° partial dislocation is solved by mapping the problem onto a one-dimensional Ising model in a magnetic field. The model contains only two parameters and is solved analytically. It leads to the conclusion that for the majority of the published energy differences between the SP and DP reconstructions the equilibrium core structure is dominated by the DP reconstruction at all temperatures up to the melting point. We review whether it is possible to distinguish between the SP and DP reconstructions experimentally, both in principle and in practice. We conclude that aberration corrected transmission electron microscopy should be able to distinguish between these two core reconstructions, but published high resolution micrographs do not allow the distinction to be made.
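The 1D Ising model in a field has a closed-form solution, so the population estimate the abstract mentions is a one-liner. The sketch below uses our own sign convention and parameter names (h: half the SP-DP energy difference, with h > 0 favouring DP; J: coupling between neighbouring core sites), not the authors' notation.

```python
# Sketch of the Ising mapping: each core site is SP (-1) or DP (+1).
# The exact magnetization of the 1D Ising chain in a field gives the
# equilibrium fraction of DP sites along the dislocation core.
import math

def dp_fraction(h, j, kt):
    """Fraction of double-period sites at temperature kt (energies in
    the same units as kt); h > 0 means DP is the lower-energy core."""
    b = 1.0 / kt
    m = math.sinh(b * h) / math.sqrt(math.sinh(b * h) ** 2 + math.exp(-4.0 * b * j))
    return 0.5 * (1.0 + m)
```

For any h > 0 the DP fraction tends to 1 as the temperature drops, consistent with the abstract's conclusion that DP dominates whenever it is the lower-energy reconstruction.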

  15. Understanding Decision-Making in Specialized Domestic Violence Courts: Can Contemporary Theoretical Frameworks Help Guide These Decisions?

    Science.gov (United States)

    Pinchevsky, Gillian M

    2016-05-22

    This study fills a gap in the literature by exploring the utility of contemporary courtroom theoretical frameworks (uncertainty avoidance, causal attribution, and focal concerns) for explaining decision-making in specialized domestic violence courts. Using data from two specialized domestic violence courts, this study explores the predictors of prosecutorial and judicial decision-making and the extent to which these factors are congruent with theoretical frameworks often used in studies of court processing. Findings suggest that these theoretical frameworks only partially help explain decision-making in the courts under study. A discussion of the findings and implications for future research is provided. © The Author(s) 2016.

  16. Amniotic membrane transplantation for reconstruction of corneal epithelial surface in cases of partial limbal stem cell deficiency.

    Directory of Open Access Journals (Sweden)

    Sangwan Virender

    2004-01-01

    Full Text Available Purpose: To assess the efficacy of amniotic membrane for treatment of partial limbal stem cell deficiency (LSCD). Methods: Medical records of four patients with partial LSCD who underwent pannus resection and amniotic membrane transplantation (AMT) were reviewed for ocular surface stability and improvement in visual acuity. Clinico-histopathological correlation was done with the resected pannus tissue. Results: All the eyes exhibited a stable corneal epithelial surface by an average of 7 weeks postoperatively, with improvement in subjective symptoms. Best corrected visual acuity improved from preoperative (range: 6/9p-6/120) to postoperative (range: 6/6p-6/15) by an average of 4.5 lines on Snellen visual acuity charts. Histopathological examination of excised tissue showed features of conjunctivalisation. Conclusion: Amniotic membrane transplantation appears to be an effective means of reconstructing the corneal epithelial surface and for visual rehabilitation of patients with partial limbal stem cell deficiency. It may be considered as an alternative primary procedure to limbal transplantation in these cases.

  17. Integration of laboratory bioassays into the risk-based corrective action process

    International Nuclear Information System (INIS)

    Edwards, D.; Messina, F.; Clark, J.

    1995-01-01

    Recent data generated by the Gas Research Institute (GRI) and others indicate that residual hydrocarbon may be bound/sequestered in soil such that it is unavailable for microbial degradation, and thus possibly not bioavailable to human/ecological receptors. A reduction in bioavailability would directly equate to reduced exposure and, therefore, potentially less-conservative risk-based soil cleanup goals. Laboratory bioassays which measure bioavailability/toxicity can be cost-effectively integrated into the risk-based corrective action process. However, in order to maximize the cost-effective application of bioassays, several site-specific parameters should be addressed up front. This paper discusses (1) the evaluation of parameters impacting the application of bioassays to soils contaminated with metals and/or petroleum hydrocarbons and (2) the cost-effective integration of bioassays into a tiered ASTM-type framework for risk-based corrective action.

  18. LOCAL TEXTURE DESCRIPTION FRAMEWORK FOR TEXTURE BASED FACE RECOGNITION

    Directory of Open Access Journals (Sweden)

    R. Reena Rose

    2014-02-01

    Full Text Available Texture descriptors have an important role in recognizing face images. However, almost all the existing local texture descriptors use nearest neighbors to encode a texture pattern around a pixel. But in face images, most of the pixels have characteristics similar to those of their nearest neighbors, because skin covers a large area of a face and the skin tone at neighboring regions is the same. Therefore this paper presents a general framework called the Local Texture Description Framework that uses only eight pixels which are at a certain distance apart, either circular or elliptical, from the referenced pixel. Local texture description can be done using the foundation of any existing local texture descriptor. In this paper, the performance of the proposed framework is verified with three existing local texture descriptors, Local Binary Pattern (LBP), Local Texture Pattern (LTP) and Local Tetra Patterns (LTrPs), for five issues, viz. facial expression, partial occlusion, illumination variation, pose variation and general recognition. Five benchmark databases, JAFFE, Essex, Indian faces, AT&T and Georgia Tech, are used for the experiments. Experimental results demonstrate that even with a smaller number of patterns, the proposed framework achieves higher recognition accuracy than the base models.
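The framework's core idea, sampling eight pixels at a fixed distance rather than the immediate neighbours, can be sketched with an LBP-style encoding. This is our illustration under a circular-sampling, nearest-pixel-rounding assumption, not the authors' code; the elliptical variant and the LTP/LTrP encodings follow the same sampling pattern.

```python
# Minimal LBP-style descriptor using eight pixels on a circle of radius r
# around the reference pixel, instead of the 8 immediate neighbours.
import math

def lbp_code(image, y, x, r=2):
    """8-bit local binary pattern from circular neighbours at distance r."""
    center = image[y][x]
    code = 0
    for k in range(8):
        ang = 2.0 * math.pi * k / 8.0
        ny = y + int(round(r * math.sin(ang)))   # nearest-pixel sampling
        nx = x + int(round(r * math.cos(ang)))
        code |= (1 << k) if image[ny][nx] >= center else 0
    return code
```

A histogram of these codes over image blocks would then serve as the face descriptor, exactly as with conventional LBP.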

  19. Motion-corrected whole-heart PET-MR for the simultaneous visualisation of coronary artery integrity and myocardial viability: an initial clinical validation.

    Science.gov (United States)

    Munoz, Camila; Kunze, Karl P; Neji, Radhouene; Vitadello, Teresa; Rischpler, Christoph; Botnar, René M; Nekolla, Stephan G; Prieto, Claudia

    2018-05-12

    Cardiac PET-MR has shown potential for the comprehensive assessment of coronary heart disease. However, image degradation due to physiological motion remains a challenge that could hinder the adoption of this technology in clinical practice. The purpose of this study was to validate a recently proposed respiratory motion-corrected PET-MR framework for the simultaneous visualisation of myocardial viability (18F-FDG PET) and coronary artery anatomy (coronary MR angiography, CMRA) in patients with chronic total occlusion (CTO). A cohort of 14 patients was scanned with the proposed PET-CMRA framework. PET and CMRA images were reconstructed with and without the proposed motion correction approach for comparison purposes. Metrics of image quality including visible vessel length and sharpness were obtained for CMRA for both the right and left anterior descending coronary arteries (RCA, LAD), and the relative increase in 18F-FDG PET signal after motion correction was computed for standard 17-segment polar maps. The resulting coronary anatomy by CMRA and myocardial integrity by PET were visually compared against X-ray angiography and conventional Late Gadolinium Enhancement (LGE) MRI, respectively. Motion correction increased CMRA visible vessel length by 49.9% and 32.6% (RCA, LAD) and vessel sharpness by 12.3% and 18.9% (RCA, LAD) on average compared to uncorrected images. Coronary lumen delineation on motion-corrected CMRA images was in good agreement with X-ray angiography findings. For PET, motion correction resulted in an average 8% increase in 18F-FDG signal in the inferior and inferolateral segments of the myocardial wall. An improved delineation of myocardial viability defects and reduced noise in the 18F-FDG PET images was observed, improving correspondence to subendocardial LGE-MRI findings compared to uncorrected images. The feasibility of the PET-CMRA framework for simultaneous cardiac PET-MR imaging in a short and predictable scan time (~11 min) has been demonstrated.

  20. Lyapunov stability and thermal stability of partially relaxed fluids and plasmas

    International Nuclear Information System (INIS)

    Elsaesser, K.; Spiess, P.

    1996-01-01

    The relation between the Lyapunov stability of a Hamiltonian system and the thermal stability of a fluid whose temperature is controlled from outside is explored: The free energy as a functional of the correct variables (specific volume, local entropy, and some Clebsch potentials of the velocity) may serve as a Lyapunov functional, depending on the "Casimirs" as exchanged quantities. For a multi-species plasma one obtains a sufficient condition for stability: γ(v²/c_s²) - 1 < 0, where c_s is the sound speed. Some features of partially relaxed (T = const) cylindrical plasmas are also discussed. Copyright 1996 American Institute of Physics.

  1. Correction of magnetization sextupole and decapole in a 5 centimeter bore SSC dipole using passive superconductor

    International Nuclear Information System (INIS)

    Green, M.A.

    1991-05-01

    Higher multipoles due to magnetization of the superconductor in four and five centimeter bore Superconducting Super Collider (SSC) superconducting dipole magnets have been observed. The use of passive superconductor to correct out the magnetization sextupole has been demonstrated on two dipoles built by the Lawrence Berkeley Laboratory (LBL). This report shows how passive correction can be applied to the five centimeter SSC dipoles to remove the sextupole and decapole caused by magnetization of the dipole superconductor. Two passive superconductor corrector options will be presented. The change in magnetization sextupole and decapole due to flux creep decay of the superconductor during injection can be partially compensated for using the passive superconductor. 9 refs; 5 figs

  2. An interactive 3D framework for anatomical education

    OpenAIRE

    Vázquez Alcocer, Pere Pau; Götzelmann, Timo; Hartmann, Knut; Nürnberger, Andreas

    2008-01-01

    Object: This paper presents a 3D framework for Anatomy teaching. We are mainly concerned with the proper understanding of human anatomical 3D structures. Materials and methods: The main idea of our approach is taking an electronic book such as Henry Gray’s Anatomy of the human body, and a set of 3D models properly labeled, and constructing the correct linking that allows users to perform mutual searches between both media. Results: We implemented a system where learners can intera...

  3. Partial breaking of N = 1, D = 10 supersymmetry

    International Nuclear Information System (INIS)

    Bellucci, S.

    1999-01-01

    This paper describes the spontaneous partial breaking of N = 1, D = 10 supersymmetry to N = (1, 0), d = 6, and its dimensionally-reduced versions, in the framework of nonlinear realizations. The basic Goldstone superfield is an N = (1, 0), d = 6 hypermultiplet superfield satisfying a nonlinear generalization of the standard hypermultiplet constraint. The generalized constraint is interpreted here as the manifestly worldvolume supersymmetric form of the equations of motion of the type I super 5-brane in D = 10. The related issues addressed here are the possible existence of brane extensions of off-shell hypermultiplet actions, the possibility of utilizing the vector N = (1, 0), d = 6 supermultiplet as the Goldstone one, and the description of 1/4 breaking of N = 1, D = 11 supersymmetry.

  4. A simple and efficient dispersion correction to the Hartree-Fock theory (2): Incorporation of a geometrical correction for the basis set superposition error.

    Science.gov (United States)

    Yoshida, Tatsusada; Hayashi, Takahisa; Mashima, Akira; Chuman, Hiroshi

    2015-10-01

    One of the most challenging problems in computer-aided drug discovery is the accurate prediction of the binding energy between a ligand and a protein. For accurate estimation of the net binding energy ΔEbind in the framework of the Hartree-Fock (HF) theory, it is necessary to estimate two additional energy terms: the dispersion interaction energy (Edisp) and the basis set superposition error (BSSE). We previously reported a simple and efficient dispersion correction, Edisp, to the Hartree-Fock theory (HF-Dtq). In the present study, an approximation procedure for estimating BSSE proposed by Kruse and Grimme, a geometrical counterpoise correction (gCP), was incorporated into HF-Dtq (HF-Dtq-gCP). The relative weights of the Edisp (Dtq) and BSSE (gCP) terms were determined to reproduce ΔEbind calculated with CCSD(T)/CBS or /aug-cc-pVTZ (HF-Dtq-gCP (scaled)). The performance of HF-Dtq-gCP (scaled) was compared with that of B3LYP-D3(BJ)-bCP (dispersion-corrected B3LYP with the Boys and Bernardi counterpoise correction (bCP)), taking ΔEbind (CCSD(T)-bCP) of small non-covalent complexes as 'a golden standard'. As a critical test, HF-Dtq-gCP (scaled)/6-31G(d) and B3LYP-D3(BJ)-bCP/6-31G(d) were applied to the complex model for HIV-1 protease and its potent inhibitor, KNI-10033. The present results demonstrate that HF-Dtq-gCP (scaled) is a useful and powerful remedy for accurately and promptly predicting ΔEbind between a ligand and a protein, albeit a simple correction procedure. Copyright © 2015 Elsevier Ltd. All rights reserved.
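The way the corrected binding energy is assembled is simple arithmetic. In this sketch the weights c_disp and c_gcp are hypothetical placeholders for the scaled coefficients the authors fit against CCSD(T) references; the function only illustrates the bookkeeping, not the fitted model.

```python
# Illustrative arithmetic only: combine the raw HF interaction energy with
# a scaled dispersion term (Edisp) and a scaled geometrical counterpoise
# (gCP) estimate of the BSSE. Coefficients here are placeholders.
def corrected_binding_energy(e_hf, e_disp, e_gcp, c_disp=1.0, c_gcp=1.0):
    """dE_bind = E_HF + c_disp * E_disp + c_gcp * E_gCP (same units in/out)."""
    return e_hf + c_disp * e_disp + c_gcp * e_gcp
```

Note the signs: dispersion is attractive (negative) while the gCP/BSSE term removes a spurious stabilization (positive), so the two corrections pull in opposite directions.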

  5. Doubly robust estimation of generalized partial linear models for longitudinal data with dropouts.

    Science.gov (United States)

    Lin, Huiming; Fu, Bo; Qin, Guoyou; Zhu, Zhongyi

    2017-12-01

    We develop a doubly robust estimation of generalized partial linear models for longitudinal data with dropouts. Our method extends the highly efficient aggregate unbiased estimating function approach proposed in Qu et al. (2010) to a doubly robust one in the sense that under missing at random (MAR), our estimator is consistent when either the linear conditional mean condition is satisfied or a model for the dropout process is correctly specified. We begin with a generalized linear model for the marginal mean, and then move forward to a generalized partial linear model, allowing for nonparametric covariate effect by using the regression spline smoothing approximation. We establish the asymptotic theory for the proposed method and use simulation studies to compare its finite sample performance with that of Qu's method, the complete-case generalized estimating equation (GEE) and the inverse-probability weighted GEE. The proposed method is finally illustrated using data from a longitudinal cohort study. © 2017, The International Biometric Society.

  6. Database of normal human cerebral blood flow measured by SPECT: II. Quantification of I-123-IMP studies with ARG method and effects of partial volume correction.

    Science.gov (United States)

    Inoue, Kentaro; Ito, Hiroshi; Shidahara, Miho; Goto, Ryoi; Kinomura, Shigeo; Sato, Kazunori; Taki, Yasuyuki; Okada, Ken; Kaneta, Tomohiro; Fukuda, Hiroshi

    2006-02-01

    The limited spatial resolution of SPECT causes a partial volume effect (PVE) and can lead to the significant underestimation of regional tracer concentration in the small structures surrounded by a low tracer concentration, such as the cortical gray matter of an atrophied brain. The aim of the present study was to determine, using 123I-IMP and SPECT, normal CBF of elderly subjects with and without PVE correction (PVC), and to determine regional differences in the effect of PVC and their association with the regional tissue fraction of the brain. Quantitative CBF SPECT using 123I-IMP was performed in 33 healthy elderly subjects (18 males, 15 females, 54-74 years old) using the autoradiographic method. We corrected CBF for PVE using segmented MR images, and analyzed quantitative CBF and regional differences in the effect of PVC using tissue fractions of gray matter (GM) and white matter (WM) in regions of interest (ROIs) placed on the cortical and subcortical GM regions and deep WM regions. The mean CBF in GM-ROIs were 31.7 +/- 6.6 and 41.0 +/- 8.1 ml/100 g/min for males and females, and in WM-ROIs, 18.2 +/- 0.7 and 22.9 +/- 0.8 ml/100 g/min for males and females, respectively. The mean CBF in GM-ROIs after PVC were 50.9 +/- 12.8 and 65.8 +/- 16.1 ml/100 g/min for males and females, respectively. There were statistically significant differences in the effect of PVC among ROIs, but not between genders. The effect of PVC was small in the cerebellum and parahippocampal gyrus, and it was large in the superior frontal gyrus, superior parietal lobule and precentral gyrus. Quantitative CBF in GM recovered significantly, but did not reach values as high as those obtained by invasive methods or in the H2(15)O PET study that used PVC. There were significant regional differences in the effect of PVC, which were considered to result from regional differences in GM tissue fraction, which is more reduced in the frontoparietal regions in the atrophied brain of the elderly.
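The abstract does not spell out the correction formula. A common two-compartment, Müller-Gärtner-style GM correction, shown here as our assumption of what such a PVC step looks like, subtracts the white-matter spill-in and divides by the GM tissue fraction from the segmented MRI.

```python
# Hedged sketch of a GM partial-volume correction for one ROI.
# observed: measured CBF; gm_frac/wm_frac: tissue fractions from segmented
# MRI; cbf_wm: assumed WM CBF used to remove spill-in (ml/100 g/min).
def pvc_gm(observed, gm_frac, wm_frac, cbf_wm):
    """Return PVE-corrected GM CBF for one region of interest."""
    if gm_frac <= 0.0:
        raise ValueError("GM fraction must be positive")
    return (observed - wm_frac * cbf_wm) / gm_frac
```

Because the observed value is divided by a GM fraction below one, corrected GM CBF is always higher than the uncorrected value, and the smaller the GM fraction (the more atrophied the region), the larger the correction, matching the regional differences the study reports.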

  7. Database of normal human cerebral blood flow measured by SPECT. II. Quantification of I-123-IMP studies with ARG method and effects of partial volume correction

    International Nuclear Information System (INIS)

    Inoue, Kentaro; Ito, Hiroshi; Shidahara, Miho

    2006-01-01

    The limited spatial resolution of SPECT causes a partial volume effect (PVE) and can lead to the significant underestimation of regional tracer concentration in the small structures surrounded by a low tracer concentration, such as the cortical gray matter of an atrophied brain. The aim of the present study was to determine, using 123I-IMP and SPECT, normal cerebral blood flow (CBF) of elderly subjects with and without PVE correction (PVC), and to determine regional differences in the effect of PVC and their association with the regional tissue fraction of the brain. Quantitative CBF SPECT using 123I-IMP was performed in 33 healthy elderly subjects (18 males, 15 females, 54-74 years old) using the autoradiographic method. We corrected CBF for PVE using segmented MR images, and analyzed quantitative CBF and regional differences in the effect of PVC using tissue fractions of gray matter (GM) and white matter (WM) in regions of interest (ROIs) placed on the cortical and subcortical GM regions and deep WM regions. The mean CBF in GM-ROIs were 31.7±6.6 and 41.0±8.1 ml/100 g/min for males and females, and in WM-ROIs, 18.2±0.7 and 22.9±0.8 ml/100 g/min for males and females, respectively. The mean CBF in GM-ROIs after PVC were 50.9±12.8 and 65.8±16.1 ml/100 g/min for males and females, respectively. There were statistically significant differences in the effect of PVC among ROIs, but not between genders. The effect of PVC was small in the cerebellum and parahippocampal gyrus, and it was large in the superior frontal gyrus, superior parietal lobule and precentral gyrus. Quantitative CBF in GM recovered significantly, but did not reach values as high as those obtained by invasive methods or in the H2(15)O PET study that used PVC. There were significant regional differences in the effect of PVC, which were considered to result from regional differences in GM tissue fraction, which is more reduced in the frontoparietal regions in the atrophied brain of the elderly.

  8. Towards New Empirical Versions of Financial and Accounting Models Corrected for Measurement Errors

    OpenAIRE

    Francois-Éric Racicot; Raymond Théoret; Alain Coen

    2006-01-01

    In this paper, we propose a new empirical version of the Fama and French Model based on the Hausman (1978) specification test and aimed at discarding measurement errors in the variables. The proposed empirical framework is general enough to be used for correcting other financial and accounting models of measurement errors. Removing measurement errors is important at many levels, such as information disclosure, corporate governance and the protection of investors.

  9. A model-based risk management framework

    Energy Technology Data Exchange (ETDEWEB)

    Gran, Bjoern Axel; Fredriksen, Rune

    2002-08-15

    The ongoing research activity addresses these issues through two co-operative activities. The first is the IST-funded research project CORAS, in which Institutt for energiteknikk takes part as responsible for the risk analysis work package. The main objective of the CORAS project is to develop a framework to support risk assessment of security-critical systems. The second, called the Halden Open Dependability Demonstrator (HODD), is established in cooperation between Oestfold University College, local companies and HRP. The objective of HODD is to provide an open-source test bed for testing, teaching and learning about risk analysis methods, risk analysis tools, and fault tolerance techniques. The Inverted Pendulum Control System (IPCON), whose main task is to keep a pendulum balanced and controlled, is the first system that has been established. In order to make a risk assessment, one needs to know what a system does, or is intended to do. Furthermore, the risk assessment requires correct descriptions of the system, its context and all relevant features. A basic assumption is that a precise model of this knowledge, based on formal or semi-formal descriptions, such as UML, will facilitate a systematic risk assessment. It is also necessary to have a framework to integrate the different risk assessment methods. The experiences so far support this hypothesis. This report presents CORAS and the CORAS model-based risk management framework, including a preliminary guideline for model-based risk assessment. The CORAS framework for model-based risk analysis offers a structured and systematic approach to identify and assess security issues of ICT systems. From the initial assessment of IPCON, we also believe that the framework is applicable in a safety context. Further work on IPCON, as well as the experiences from the CORAS trials, will provide insight and feedback for further improvements. (Author)

  10. Long-range correlation in synchronization and syncopation tapping: a linear phase correction model.

    Directory of Open Access Journals (Sweden)

    Didier Delignières

    Full Text Available We propose in this paper a model accounting for the increase in long-range correlations observed in asynchrony series in syncopation tapping, as compared with synchronization tapping. Our model is an extension of the linear phase correction model for synchronization tapping. We suppose that the timekeeper represents a fractal source in the system, and that a process of estimation of the half-period of the metronome, obeying a random-walk dynamics, combines with the linear phase correction process. Comparing experimental and simulated series, we show that our model accounts for the experimentally observed pattern of serial dependence. This model completes previous modeling solutions proposed for self-paced and synchronization tapping, providing a unifying framework for event-based timing.
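The linear phase correction skeleton can be simulated in a few lines. This toy version is a sketch under simplifying assumptions: it uses a white-noise timekeeper and omits the fractal source and the random-walk half-period estimator that are the paper's actual contributions.

```python
# Toy simulation of linear phase correction in synchronization tapping:
# each new asynchrony is the uncorrected fraction (1 - alpha) of the
# previous one plus the timekeeper's deviation from the metronome period.
import random

def simulate_asynchronies(n, alpha=0.3, period=500.0, sd_timekeeper=20.0, seed=1):
    """Return n successive tap-metronome asynchronies (ms)."""
    rng = random.Random(seed)
    asyn = [0.0]
    for _ in range(n - 1):
        interval = period + rng.gauss(0.0, sd_timekeeper)  # timekeeper interval
        asyn.append((1.0 - alpha) * asyn[-1] + (interval - period))
    return asyn
```

With 0 < alpha < 2 the correction keeps the asynchrony series bounded; the paper's extension replaces the white-noise timekeeper with a fractal one to reproduce the long-range correlations seen in syncopation.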

  11. Quantum-corrected drift-diffusion models for transport in semiconductor devices

    International Nuclear Information System (INIS)

    De Falco, Carlo; Gatti, Emilio; Lacaita, Andrea L.; Sacco, Riccardo

    2005-01-01

    In this paper, we propose a unified framework for Quantum-corrected drift-diffusion (QCDD) models in nanoscale semiconductor device simulation. QCDD models are presented as a suitable generalization of the classical drift-diffusion (DD) system, each particular model being identified by the constitutive relation for the quantum-correction to the electric potential. We examine two special, and relevant, examples of QCDD models; the first one is the modified DD model named Schroedinger-Poisson-drift-diffusion, and the second one is the quantum-drift-diffusion (QDD) model. For the decoupled solution of the two models, we introduce a functional iteration technique that extends the classical Gummel algorithm widely used in the iterative solution of the DD system. We discuss the finite element discretization of the various differential subsystems, with special emphasis on their stability properties, and illustrate the performance of the proposed algorithms and models on the numerical simulation of nanoscale devices in two spatial dimensions
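The Gummel-type functional iteration mentioned above alternates between subsystems with the other unknowns frozen. A scalar toy problem illustrates the idea; real QCDD solvers iterate over discretized Poisson and continuity blocks, which this sketch does not attempt.

```python
# Gummel-style functional iteration in miniature: freeze one unknown,
# solve for the other, and repeat until self-consistency. A scalar toy
# equation v = tanh(1 - v) stands in for the coupled PDE blocks.
import math

def gummel_toy(v_init=0.0, tol=1e-12, max_iter=200):
    """Solve v = tanh(1 - v) by decoupled fixed-point iteration."""
    v = v_init
    for _ in range(max_iter):
        n = math.tanh(1.0 - v)   # "continuity" step with frozen potential
        if abs(n - v) < tol:
            return n
        v = n                     # "Poisson" update
    return v
```

The iteration converges here because the map is a contraction; in device simulation the same decoupling is typically combined with damping to retain that property.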

  12. Text Mining Metal-Organic Framework Papers.

    Science.gov (United States)

    Park, Sanghoon; Kim, Baekjun; Choi, Sihoon; Boyd, Peter G; Smit, Berend; Kim, Jihan

    2018-02-26

    We have developed a simple text mining algorithm that allows us to identify surface areas and pore volumes of metal-organic frameworks (MOFs) using manuscript html files as inputs. The algorithm searches for common units (e.g., m2/g, cm3/g) associated with these two quantities to facilitate the search. From the sample set data of over 200 MOFs, the algorithm managed to identify 90% and 88.8% of the correct surface area and pore volume values, respectively. Further application to a test set of randomly chosen MOF html files yielded 73.2% and 85.1% accuracies for the two respective quantities. Most of the errors stem from unorthodox sentence structures that made it difficult to identify the correct data, as well as bolded notations of MOFs (e.g., 1a) that made it difficult to identify their real names. These types of tools will become useful when it comes to discovering structure-property relationships among MOFs, as well as collecting a large set of reference data.
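A unit-anchored search of the kind described can be sketched with two regular expressions. The patterns below are illustrative assumptions; the published algorithm parses manuscript HTML and handles many more unit spellings and notations.

```python
# Minimal sketch of unit-anchored text mining: find numbers immediately
# followed by the characteristic units for surface area (m2/g) and pore
# volume (cm3/g). Patterns are illustrative, not the paper's.
import re

SURFACE = re.compile(r"(\d+(?:\.\d+)?)\s*m\s*2\s*/\s*g", re.IGNORECASE)
PORE_VOL = re.compile(r"(\d+(?:\.\d+)?)\s*cm\s*3\s*/\s*g", re.IGNORECASE)

def extract_mof_properties(text):
    """Return (surface areas, pore volumes) found next to their units."""
    areas = [float(m) for m in SURFACE.findall(text)]
    volumes = [float(m) for m in PORE_VOL.findall(text)]
    return areas, volumes
```

The failure modes the abstract mentions are visible even in this sketch: a value separated from its unit by an unusual sentence structure simply never matches.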

  13. PET motion correction in context of integrated PET/MR: Current techniques, limitations, and future projections.

    Science.gov (United States)

    Gillman, Ashley; Smith, Jye; Thomas, Paul; Rose, Stephen; Dowson, Nicholas

    2017-12-01

    Patient motion is an important consideration in modern PET image reconstruction. Advances in PET technology mean motion has an increasingly important influence on resulting image quality. Motion-induced artifacts can have adverse effects on clinical outcomes, including missed diagnoses and oversized radiotherapy treatment volumes. This review aims to summarize the wide variety of motion correction techniques available in PET and combined PET/CT and PET/MR, with a focus on the latter. A general framework for the motion correction of PET images is presented, consisting of acquisition, modeling, and correction stages. Methods for measuring, modeling, and correcting motion and associated artifacts, both in literature and commercially available, are presented, and their relative merits are contrasted. Identified limitations of current methods include modeling of aperiodic and/or unpredictable motion, attaining adequate temporal resolution for motion correction in dynamic kinetic modeling acquisitions, and maintaining availability of the MR in PET/MR scans for diagnostic acquisitions. Finally, avenues for future investigation are discussed, with a focus on improvements that could improve PET image quality, and that are practical in the clinical environment. © 2017 American Association of Physicists in Medicine.

  14. SU-F-I-13: Correction Factor Computations for the NIST Ritz Free Air Chamber for Medium-Energy X Rays

    International Nuclear Information System (INIS)

    Bergstrom, P

    2016-01-01

    Purpose: The National Institute of Standards and Technology (NIST) uses 3 free-air chambers to establish primary standards for radiation dosimetry at x-ray energies. For medium-energy x rays, the Ritz free-air chamber is the main measurement device. In order to convert the charge or current collected by the chamber to the radiation quantities air kerma or air kerma rate, a number of correction factors specific to the chamber must be applied. Methods: We used the Monte Carlo codes EGSnrc and PENELOPE. Results: Among these correction factors are the diaphragm correction (which accounts for interactions of photons from the x-ray source in the beam-defining diaphragm of the chamber), the scatter correction (which accounts for the effects of photons scattered out of the primary beam), the electron-loss correction (which accounts for electrons that only partially expend their energy in the collection region), the fluorescence correction (which accounts for ionization due to reabsorption of fluorescence photons) and the bremsstrahlung correction (which accounts for the reabsorption of bremsstrahlung photons). We have computed monoenergetic corrections for the NIST Ritz chamber for the 1 cm, 3 cm and 7 cm collection plates. Conclusion: We find good agreement with others' results for the 7 cm plate. The data used to obtain these correction factors will be used to establish air kerma and its uncertainty in the standard NIST x-ray beams.

  15. SU-F-I-13: Correction Factor Computations for the NIST Ritz Free Air Chamber for Medium-Energy X Rays

    Energy Technology Data Exchange (ETDEWEB)

    Bergstrom, P [National Institute of Standards and Technology, 100 Bureau Drive, Gaithersburg, MD 20899 (United States)

    2016-06-15

    Purpose: The National Institute of Standards and Technology (NIST) uses 3 free-air chambers to establish primary standards for radiation dosimetry at x-ray energies. For medium-energy x rays, the Ritz free-air chamber is the main measurement device. In order to convert the charge or current collected by the chamber to the radiation quantities air kerma or air kerma rate, a number of correction factors specific to the chamber must be applied. Methods: We used the Monte Carlo codes EGSnrc and PENELOPE. Results: Among these correction factors are the diaphragm correction (which accounts for interactions of photons from the x-ray source in the beam-defining diaphragm of the chamber), the scatter correction (which accounts for the effects of photons scattered out of the primary beam), the electron-loss correction (which accounts for electrons that only partially expend their energy in the collection region), the fluorescence correction (which accounts for ionization due to reabsorption of fluorescence photons) and the bremsstrahlung correction (which accounts for the reabsorption of bremsstrahlung photons). We have computed monoenergetic corrections for the NIST Ritz chamber for the 1 cm, 3 cm and 7 cm collection plates. Conclusion: We find good agreement with others' results for the 7 cm plate. The data used to obtain these correction factors will be used to establish air kerma and its uncertainty in the standard NIST x-ray beams.

  16. Patients' perceptions and experiences of cardiovascular disease and diabetes prevention programmes: A systematic review and framework synthesis using the Theoretical Domains Framework.

    Science.gov (United States)

    Shaw, Rachel L; Holland, Carol; Pattison, Helen M; Cooke, Richard

    2016-05-01

    This review provides a worked example of 'best fit' framework synthesis using the Theoretical Domains Framework (TDF) of health psychology theories as an a priori framework in the synthesis of qualitative evidence. Framework synthesis works best with 'policy urgent' questions. The review question selected was: what are patients' experiences of prevention programmes for cardiovascular disease (CVD) and diabetes? The significance of these conditions is clear: CVD claims more deaths worldwide than any other; diabetes is a risk factor for CVD and leading cause of death. A systematic review and framework synthesis were conducted. This novel method for synthesizing qualitative evidence aims to make health psychology theory accessible to implementation science and advance the application of qualitative research findings in evidence-based healthcare. Findings from 14 original studies were coded deductively into the TDF and subsequently an inductive thematic analysis was conducted. Synthesized findings produced six themes relating to: knowledge, beliefs, cues to (in)action, social influences, role and identity, and context. A conceptual model was generated illustrating combinations of factors that produce cues to (in)action. This model demonstrated interrelationships between individual (beliefs and knowledge) and societal (social influences, role and identity, context) factors. Several intervention points were highlighted where factors could be manipulated to produce favourable cues to action. However, a lack of transparency of behavioural components of published interventions needs to be corrected and further evaluations of acceptability in relation to patient experience are required. Further work is needed to test the comprehensiveness of the TDF as an a priori framework for 'policy urgent' questions using 'best fit' framework synthesis. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. Gas adsorption in Mg-porphyrin-based porous organic frameworks: A computational simulation by first-principles derived force field.

    Science.gov (United States)

    Pang, Yujia; Li, Wenliang; Zhang, Jingping

    2017-09-15

    A novel type of porous organic framework based on Mg-porphyrin, with diamond-like topology and named POF-Mgs, is computationally designed, and the uptakes of CO2, H2, N2, and H2O in POF-Mgs are investigated by grand canonical Monte Carlo simulations based on a first-principles derived force field (FF). The FF, which describes the interactions between POF-Mgs and the gases, is fitted to dispersion-corrected double-hybrid density functional theory, B2PLYP-D3. The good agreement between the obtained FF and the first-principles energy data confirms the reliability of the FF. Furthermore, our simulations show that the presence of a small amount of H2O (≤ 0.01 kPa) does not much affect the adsorption quantity of CO2, but a higher partial pressure of H2O (≥ 0.1 kPa) causes the CO2 adsorption to decrease significantly. The good performance of POF-Mgs in the simulation motivates the experimental design of novel porous materials for gas adsorption and purification. © 2017 Wiley Periodicals, Inc.
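For readers unfamiliar with the method, the grand canonical Monte Carlo moves underlying such uptake simulations follow the standard textbook acceptance rules. The sketch below is illustrative only; the constants are placeholders and the paper's first-principles force field is not reproduced here:

```python
import math

# Textbook GCMC acceptance probabilities for particle insertion/deletion
# (illustrative sketch; not the POF-Mg force field of the paper).

def accept_insertion(N, V, beta, mu, dU, lam):
    """Probability of accepting the insertion of one molecule.
    N: current particle count, V: simulation volume, beta: 1/kT,
    mu: chemical potential (fixed by the gas partial pressure),
    dU: energy change of the trial move, lam: thermal de Broglie wavelength."""
    pref = V / (lam**3 * (N + 1))
    return min(1.0, pref * math.exp(beta * mu - beta * dU))

def accept_deletion(N, V, beta, mu, dU, lam):
    """Probability of accepting the deletion of one molecule."""
    pref = lam**3 * N / V
    return min(1.0, pref * math.exp(-beta * mu - beta * dU))

# A favourable insertion (dU < 0) into an empty framework is always accepted.
p = accept_insertion(N=0, V=1000.0, beta=1.0, mu=0.0, dU=-1.0, lam=1.0)
```

Because the chemical potential is fixed by the gas-phase partial pressure, scans such as the H2O partial-pressure study in the abstract amount to repeating this sampling at different values of mu for each component.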

  18. Indirect transformation in reciprocal space: desmearing of small-angle scattering data from partially ordered systems

    International Nuclear Information System (INIS)

    Glatter, O.; Gruber, K.

    1993-01-01

    Indirect Fourier transformation is a widely used technique for the desmearing of instrumental broadening effects, for data smoothing and for Fourier transformation of small-angle scattering data. This technique, however, can only be applied to scattering curves with a band-limited Fourier transform, i.e. separated and noninteracting scattering centers. It can therefore not be used for scattering data from partially ordered systems. In this paper, a modified technique for partially ordered systems working in reciprocal space is presented. A peak-recognition technique allows its application to scattering functions with narrow peaks, such as the scattering functions of layered systems like lamellar stacks or strongly interacting particles. Arbitrary geometry effects and wavelength effects can be corrected. Examples of simulations show the merits and limits of this new method. One example shows its applicability to real data. (orig.)

  19. Tag-KEM from Set Partial Domain One-Way Permutations

    Science.gov (United States)

    Abe, Masayuki; Cui, Yang; Imai, Hideki; Kurosawa, Kaoru

    Recently a framework called Tag-KEM/DEM was introduced to construct efficient hybrid encryption schemes. Although it is known that the generic encode-then-encrypt construction of chosen-ciphertext-secure public-key encryption also applies to secure Tag-KEM construction, and known encoding methods such as OAEP can be used for this purpose, it is worth pursuing more efficient encoding methods dedicated to Tag-KEM construction. This paper proposes an encoding method that yields efficient Tag-KEM schemes when combined with set partial domain one-way permutations such as RSA and Rabin's encryption scheme. To our knowledge, this leads to the most practical hybrid encryption scheme of this type. We also present an efficient Tag-KEM which is CCA-secure under the general factoring assumption rather than the Blum factoring assumption.

  20. Finite volume for three-flavour Partially Quenched Chiral Perturbation Theory through NNLO in the meson sector

    International Nuclear Information System (INIS)

    Bijnens, Johan; Rössler, Thomas

    2015-01-01

    We present a calculation of the finite volume corrections to meson masses and decay constants in three flavour Partially Quenched Chiral Perturbation Theory (PQChPT) through two-loop order in the chiral expansion for the flavour-charged (or off-diagonal) pseudoscalar mesons. The analytical results are obtained for three sea quark flavours with one, two or three different masses. We reproduce the known infinite volume results and the finite volume results in the unquenched case. The calculation has been performed using the supersymmetric formulation of PQChPT as well as with a quark flow technique. Partial analytical results can be found in the appendices. Some examples of cases relevant to lattice QCD are studied numerically. Numerical programs for all results are available as part of the CHIRON package.

  2. A DSP-based neural network non-uniformity correction algorithm for IRFPA

    Science.gov (United States)

    Liu, Chong-liang; Jin, Wei-qi; Cao, Yang; Liu, Xiu

    2009-07-01

    An effective neural network non-uniformity correction (NUC) algorithm based on a DSP is proposed in this paper. The non-uniform response of infrared focal plane array (IRFPA) detectors produces corrupted images with fixed-pattern noise (FPN). We introduce and analyze the artificial neural network scene-based non-uniformity correction (SBNUC) algorithm. A design of a DSP-based NUC development platform for IRFPA is described. The DSP hardware platform is of low power consumption, with the 32-bit fixed-point DSP TMS320DM643 as the kernel processor. The dependability and expansibility of the software have been improved by the DSP/BIOS real-time operating system and Reference Framework 5. In order to achieve real-time performance, the calibration parameter update is set at a lower task priority than video input and output in DSP/BIOS. In this way, updating the calibration parameters does not affect the video streams. The workflow of the system and the strategy of real-time realization are introduced. Experiments on real infrared imaging sequences demonstrate that this algorithm requires only a few frames to obtain high-quality corrections. It is computationally efficient and suitable for all kinds of non-uniformity.

  3. The Conical Singularity and Quantum Corrections to Entropy of Black Hole

    International Nuclear Information System (INIS)

    Solodukhin, S.N.

    1994-01-01

    It is well known that at a temperature different from the Hawking temperature there appears a conical singularity in the Euclidean classical solution of the gravitational equations. The method of regularizing the cone by a regular surface is used to determine the curvature tensors for such metrics. This allows one to calculate the one-loop matter effective action and the corresponding one-loop quantum corrections to the entropy in the framework of the path-integral approach of Gibbons and Hawking. The two-dimensional and four-dimensional cases are considered. The entropy of the Rindler space is shown to diverge logarithmically in two dimensions and quadratically in four dimensions, in agreement with results obtained earlier. For the eternal 2D black hole we observe a finite, mass-dependent correction to the entropy. The entropy of the 4D Schwarzschild black hole is shown to possess an additional (in comparison to the 4D Rindler space) logarithmically divergent correction which does not vanish in the limit of infinite black hole mass. We argue that the infinities of the entropy in four dimensions are renormalized with the renormalization of the gravitational coupling. (author). 35 refs

  4. Factors influencing the provision of removable partial dentures by dentists in Ireland.

    Science.gov (United States)

    Allen, Finbarr

    2010-01-01

    Factors influencing clinical treatment of partially dentate patients are varied, and there is a need to identify factors influencing success in the provision of removable partial dentures. The aim of this study was to assess the attitudes of general dental practitioners (GDPs) in Ireland towards tooth replacement and use of RPDs, in partially dentate older adults. The sample frame was the Register of Dentists in Ireland; data were also collected from a sample of dentists practising under NHS regulations in Northern Ireland. Validated questionnaires were sent to all dentists on the Register of Dentists in the Republic of Ireland, and dentists working under NHS regulations registered with the Central Services Agency in Northern Ireland. Content of the questionnaire included details of the dentist themselves, their dental practice and the profile of partial denture provision. They were also asked to give their views on factors influencing the success or failure of an RPD, the process of providing RPDs and their attitudes to RPD provision. A total of 1,143 responses were received, a response rate of 45%. A mean number of 61 RPDs per annum were provided, with 75% of dentures provided being acrylic based. Respondents indicate their belief that cobalt-chromium based dentures had a longer prognosis than acrylic dentures, but less than half (46%) claim to design the frameworks themselves. Patients' attitudes are considered influential in the success of RPD provision, and their influence on appearance is considered the most important factor influencing success. The most important factors influencing failure are: the patient not requesting a denture; an RPD restoring unbounded saddles; and, lower RPDs. Although considered important, approximately 60% of the sample do not routinely organise follow-up appointments for patients provided with RPDs. The fee structures in the DTSS and DTBS are considered a barrier to quality in the provision of partial dentures.

  7. Molr - A delegation framework for accelerator commissioning

    CERN Document Server

    Valliappan, Nachiappan

    2017-01-01

    Accelerator commissioning is the process of preparing an accelerator for beam operations. A typical commissioning period at CERN involves running thousands of tests on many complex systems and machinery to ensure smooth beam operations and the correct functioning of the machine protection systems. AccTesting is a software framework that helps orchestrate the commissioning of CERN's accelerators and their equipment systems. This involves running and managing tests provided by various commissioning tools and analyzing their outcomes. Currently, AccTesting only supports a specific set of commissioning tools. In this project, we aim to widen the spectrum of commissioning tools supported by AccTesting by developing a generic and programmable integration framework called Molr, which enables the integration of further commissioning tools with AccTesting. In this report, we summarize the work done during the summer student project and give a brief overview of the current status and next steps for Molr.

  8. Poles of the Zagreb analysis partial-wave T matrices

    Science.gov (United States)

    Batinić, M.; Ceci, S.; Švarc, A.; Zauner, B.

    2010-09-01

    The Zagreb analysis partial-wave T matrices included in the Review of Particle Physics [by the Particle Data Group (PDG)] contain Breit-Wigner parameters only. As the advantages of pole parameters over Breit-Wigner parameters in quantifying scattering-matrix resonant states are becoming indisputable, we supplement the original solution with the pole parameters. Because of an already reported numerical error in the S11 analytic continuation [Batinić, Phys. Rev. C 57, 1004(E) (1997); arXiv:nucl-th/9703023], we declare the old BATINIC 95 solution, presently included by the PDG, invalid. Instead, we offer two new solutions: (A) a corrected BATINIC 95 and (B) a new solution with an improved S11 πN elastic input. We endorse solution (B).

  9. TUnfold, an algorithm for correcting migration effects in high energy physics

    Energy Technology Data Exchange (ETDEWEB)

    Schmitt, Stefan

    2012-07-15

    TUnfold is a tool for correcting migration and background effects in high-energy physics for multi-dimensional distributions. It is based on a least-squares fit with Tikhonov regularisation and an optional area constraint. For determining the strength of the regularisation parameter, the L-curve method and scans of global correlation coefficients are implemented. The algorithm supports background subtraction and error propagation of statistical and systematic uncertainties, in particular those originating from limited knowledge of the response matrix. The program is interfaced to the ROOT analysis framework.
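The regularised least-squares fit at the heart of such an algorithm can be sketched in a few lines. This is an illustrative toy with hypothetical function names, not the actual TUnfold code, and it omits the area constraint, background subtraction and error propagation:

```python
import numpy as np

def unfold(A, y, tau, L=None):
    """Tikhonov-regularised least-squares unfolding (illustrative sketch).

    Solves  min_x ||A x - y||^2 + tau^2 ||L x||^2  via the normal equations.
    A : response (migration) matrix, shape (m, n)
    y : measured distribution, shape (m,)
    tau : regularisation strength
    L : regularisation matrix (identity if None)
    """
    m, n = A.shape
    if L is None:
        L = np.eye(n)
    lhs = A.T @ A + tau**2 * (L.T @ L)
    rhs = A.T @ y
    return np.linalg.solve(lhs, rhs)

# Toy example: a diagonal-dominant migration matrix smears a true spectrum.
A = np.array([[0.8, 0.2, 0.0],
              [0.2, 0.6, 0.2],
              [0.0, 0.2, 0.8]])
x_true = np.array([10.0, 20.0, 15.0])
y = A @ x_true
x_hat = unfold(A, y, tau=0.0)   # noise-free, invertible case: exact recovery
x_reg = unfold(A, y, tau=0.1)   # with regularisation, mildly biased but stabler
```

In practice tau is not chosen by hand; the L-curve method mentioned in the abstract selects it by balancing the residual norm against the regularisation term.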

  10. The Z decay width in the SMEFT: y_t and λ corrections at one loop

    Energy Technology Data Exchange (ETDEWEB)

    Hartmann, Christine [Niels Bohr International Academy, University of Copenhagen,Blegdamsvej 17, DK-2100 Copenhagen (Denmark); Shepherd, William [Niels Bohr International Academy, University of Copenhagen,Blegdamsvej 17, DK-2100 Copenhagen (Denmark); Institut für Physik, Johannes-Gutenberg-Universität Mainz,Staudingerweg 7, D-55128 Mainz (Germany); Trott, Michael [Niels Bohr International Academy, University of Copenhagen,Blegdamsvej 17, DK-2100 Copenhagen (Denmark)

    2017-03-10

    We calculate one-loop y_t- and λ-dependent corrections to Γ̄_Z, R̄_f^0 and the partial Z widths due to dimension-six operators in the Standard Model Effective Field Theory (SMEFT), including finite terms. We assume CP symmetry and a U(3)^5 symmetry in the UV matching onto the dimension-six operators, dominantly broken by the Standard Model Yukawa matrices. Corrections to these observables are predicted using the input parameters {α̂_ew, M̂_Z, Ĝ_F, m̂_t, m̂_h} extracted with one-loop corrections in the same limit. We show that at one loop the number of SMEFT parameters contributing to the precise LEPI pseudo-observables exceeds the number of measurements. As a result, the SMEFT parameters contributing to LEP data are formally unbounded once the size of the loop corrections is reached, until other data are considered in a global analysis. The size of these loop effects is generically a percent-level correction to leading effects in the SMEFT, but we find multiple large numerical coefficients in our calculation at this order. We use an MS-bar scheme, modified for the SMEFT, for renormalization. Some subtleties involving novel evanescent scheme dependence present in this result are explained.

  11. NLO corrections to the photon impact factor: Combining real and virtual corrections

    International Nuclear Information System (INIS)

    Bartels, J.; Colferai, D.; Kyrieleis, A.; Gieseke, S.

    2002-08-01

    In this third part of our calculation of the QCD NLO corrections to the photon impact factor we combine our previous results for the real corrections with the singular pieces of the virtual corrections and present finite analytic expressions for the quark-antiquark-gluon intermediate state inside the photon impact factor. We begin with a list of the infrared-singular pieces of the virtual corrections, obtained in the first step of our program. We then list the complete results for the real corrections (longitudinal and transverse photon polarization). In the next step we define, for the real corrections, the collinear and soft singular regions and calculate their contributions to the impact factor. We then subtract the contribution due to the central region. Finally, we combine the real corrections with the singular pieces of the virtual corrections and obtain our finite results. (orig.)

  12. Anatomic partial nephrectomy: technique evolution.

    Science.gov (United States)

    Azhar, Raed A; Metcalfe, Charles; Gill, Inderbir S

    2015-03-01

    Partial nephrectomy provides long-term oncologic outcomes equivalent to, and functional outcomes superior to, radical nephrectomy for T1a renal masses. Herein, we review the various vascular clamping techniques employed during minimally invasive partial nephrectomy, describe the evolution of our partial nephrectomy technique and provide an update on contemporary thinking about the impact of ischemia on renal function. Recently, partial nephrectomy surgical technique has shifted away from main-artery clamping and towards minimizing or eliminating global renal ischemia during partial nephrectomy. Supported by high-fidelity three-dimensional imaging, novel anatomic-based partial nephrectomy techniques have recently been developed, wherein partial nephrectomy can now be performed with segmental, minimal or zero global ischemia to the renal remnant. Sequential innovations have included early unclamping, segmental clamping and super-selective clamping, culminating in anatomic zero-ischemia surgery. By eliminating the 'under-the-gun' time pressure of ischemia for the surgeon, these techniques allow an unhurried, tightly contoured tumour excision with point-specific sutured haemostasis. Recent data indicate that zero-ischemia partial nephrectomy may provide better functional outcomes by minimizing or eliminating global ischemia and preserving a greater volume of vascularized kidney. Contemporary partial nephrectomy includes a spectrum of surgical techniques ranging from conventional clamped to novel zero-ischemia approaches. Technique selection should be tailored to each individual case on the basis of tumour characteristics, surgical feasibility, surgeon experience, patient demographics and baseline renal function.

  13. Corrective Jaw Surgery

    Medline Plus

    Full Text Available ... Cleft Lip/Palate and Craniofacial Surgery: A cleft lip may require one or more ... find out more. Corrective Jaw Surgery: Orthognathic surgery is performed to correct the misalignment ...

  14. Radiative corrections of O(α) to B^- → V^0 l^- ν̄_l decays

    Energy Technology Data Exchange (ETDEWEB)

    Tostado, S.L. [Centro de Investigacion y de Estudios Avanzados del Instituto Politecnico Nacional, Departamento de Fisica, Mexico, D.F. (Mexico); Castro, G.L. [Centro de Investigacion y de Estudios Avanzados del Instituto Politecnico Nacional, Departamento de Fisica, Mexico, D.F. (Mexico); CSIC- Universitat de Valencia, Instituto de Fisica Corpuscular, Valencia (Spain)

    2016-09-15

    The O(α) electromagnetic radiative corrections to the B^- → V^0 l^- ν̄_l (V is a vector meson and l a charged lepton) decay rates are evaluated using the cutoff method to regularize virtual corrections and incorporating intermediate resonance states in the real-photon amplitude to extend the region of validity of the soft-photon approximation. The electromagnetic and weak form factors of hadrons are assumed to vary smoothly over the energies of virtual and real photons under consideration. The cutoff dependence of the radiative corrections upon the scale Λ that separates the long- and short-distance regimes is found to be mild and is considered an uncertainty of the calculation. Owing to partial cancellations of electromagnetic corrections evaluated over the three- and four-body regions of phase space, the photon-inclusive corrected rates are found to be dominated by the short-distance contribution. These corrections will be relevant for a precise determination of the b-quark mixing angles by testing isospin symmetry when measurements of semileptonic rates of charged and neutral B mesons at the few-percent level become available. For completeness, we also provide numerical values of the radiative corrections in the three-body region of the Dalitz plot distributions of these decays. (orig.)

  15. Tutorial on Online Partial Evaluation

    Directory of Open Access Journals (Sweden)

    William R. Cook

    2011-09-01

    Full Text Available This paper is a short tutorial introduction to online partial evaluation. We show how to write a simple online partial evaluator for a simple, pure, first-order, functional programming language. In particular, we show that the partial evaluator can be derived as a variation on a compositionally defined interpreter. We demonstrate the use of the resulting partial evaluator for program optimization in the context of model-driven development.
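To give a flavour of the idea, an online partial evaluator reduces the parts of a program that are static (known at specialisation time) and residualises the rest. The toy below uses a hypothetical tuple representation, far simpler than the tutorial's language, to specialise arithmetic expressions:

```python
# Minimal online partial evaluator for arithmetic expressions
# (illustrative sketch; the tutorial's language and definitions differ).
# Expressions: numbers, variable names (str), or ('+'|'*', e1, e2).

def peval(expr, env):
    """Evaluate what is static, residualise what is dynamic."""
    if isinstance(expr, (int, float)):
        return expr
    if isinstance(expr, str):                  # variable
        return env.get(expr, expr)             # known -> value, unknown -> residual
    op, e1, e2 = expr
    r1, r2 = peval(e1, env), peval(e2, env)
    if isinstance(r1, (int, float)) and isinstance(r2, (int, float)):
        return r1 + r2 if op == '+' else r1 * r2   # fold constants
    return (op, r1, r2)                        # rebuild residual expression

# Specialise  x*3 + y  with x known to be 4: the static part folds to 12,
# the dynamic variable y remains in the residual program.
residual = peval(('+', ('*', 'x', 3), 'y'), {'x': 4})   # → ('+', 12, 'y')
```

Note how `peval` mirrors an ordinary interpreter: the only change is the residualising fallback when an operand is unknown, which is the sense in which the tutorial derives the partial evaluator as a variation on a compositionally defined interpreter.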

  16. A comparison of fit of CNC-milled titanium and zirconia frameworks to implants.

    Science.gov (United States)

    Abduo, Jaafar; Lyons, Karl; Waddell, Neil; Bennani, Vincent; Swain, Michael

    2012-05-01

    Computer numeric controlled (CNC) milling has been shown to be a predictable method for fabricating accurately fitting titanium implant frameworks. However, no data are available regarding the fit of CNC-milled zirconia implant frameworks. The aim was to compare the precision of fit of implant frameworks milled from titanium and zirconia and relate it to peri-implant strain development after framework fixation. A partially edentulous epoxy resin model received two Branemark implants in the areas of the lower left second premolar and second molar. From this model, 10 identical frameworks were fabricated by means of CNC milling. Half of them were made from titanium and the other half from zirconia. Strain gauges were mounted close to the implants to qualitatively and quantitatively assess strain development as a result of framework fitting. In addition, the fit of the framework-implant interface was measured using an optical microscope, both when only one screw was tightened (passive fit) and when all screws were tightened (vertical fit). The data were statistically analyzed using the Mann-Whitney test. All frameworks produced measurable amounts of peri-implant strain. The zirconia frameworks produced significantly less strain than titanium. Combining the qualitative and quantitative information indicates that the implants were under vertical rather than horizontal displacement. The vertical fit was similar for zirconia (3.7 µm) and titanium (3.6 µm) frameworks; however, the zirconia frameworks exhibited a significantly finer passive fit (5.5 µm) than the titanium frameworks (13.6 µm). CNC milling produced zirconia and titanium frameworks with high accuracy. The difference between the two materials in terms of fit is expected to be of minimal clinical significance. The strain developed around the implants was related more to the framework fit than to the framework material. © 2011 Wiley Periodicals, Inc.

  17. Exotic muon-to-positron conversion in nuclei: partial transition sum evaluation by using shell model

    International Nuclear Information System (INIS)

    Divari, P.C.; Vergados, J.D.; Kosmas, T.S.; Skouras, L.D.

    2001-01-01

    A comprehensive study of the exotic (μ⁻, e⁺) conversion in ²⁷Al, ²⁷Al(μ⁻,e⁺)²⁷Na, is presented. The relevant operators are deduced assuming one-pion and two-pion modes in the framework of intermediate neutrino mixing models, paying special attention to the light-neutrino case. The total rate is calculated by summing over partial transition strengths for all kinematically accessible final states derived with s-d shell model calculations employing the well-known Wildenthal realistic interaction

  18. On higher-order corrections in M theory

    International Nuclear Information System (INIS)

    Howe, P.S.; Tsimpis, D.

    2003-01-01

    A theoretical analysis of higher-order corrections to D=11 supergravity is given in a superspace framework. It is shown that any deformation of D=11 supergravity for which the lowest-dimensional component of the four-form G_4 vanishes is trivial. This implies that the equations of motion of D=11 supergravity are specified by an element of a certain spinorial cohomology group and generalises previous results obtained using spinorial or pure spinor cohomology to the fully non-linear theory. The first deformation of the theory is given by an element of a different spinorial cohomology group with coefficients which are local tensorial functions of the massless supergravity fields. The four-form Bianchi identities are solved, to first order and at dimension -1/2, in the case that the lowest-dimensional component of G_4 is non-zero. Moreover, it is shown how one can calculate the first-order correction to the dimension-zero torsion and thus to the supergravity equations of motion given an explicit expression for this object in terms of the supergravity fields. The version of the theory with both a four-form and a seven-form is discussed in the presence of the five-brane anomaly-cancelling term. It is shown that the supersymmetric completion of this term exists and it is argued that it is the unique anomaly-cancelling invariant at this dimension which is at least quartic in the fields. This implies that the first deformation of the theory is completely determined by the anomaly term from which one can, in principle, read off the corrections to all of the superspace field strength tensors. (author)

  19. Measurements of passive correction of magnetization higher multipoles in one meter long dipoles

    International Nuclear Information System (INIS)

    Green, M.A.; Althaus, R.F.; Barale, P.J.; Benjegerdes, R.W.; Gilbert, W.S.; Green, M.I.; Scanlan, R.M.; Taylor, C.E.

    1990-09-01

    The use of passive superconductor to correct the magnetization sextupole and decapole in SSC dipoles appears to be promising. This paper presents the results of a series of experiments of passive superconductor correctors in one meter long dipole magnets. Reduction of the magnetization sextupole by a factor of five to ten has been achieved using the passive superconductor correctors. The magnetization decapole was also reduced. The passive superconductor correctors reduced the sextupole temperature sensitivity by an order of magnitude. Flux creep decay was partially compensated for by the correctors. 13 refs., 7 figs

  20. Partial volume effect (PVE) on the arterial input function (AIF) in T1-weighted perfusion imaging and limitations of the multiplicative rescaling approach

    DEFF Research Database (Denmark)

    Hansen, Adam Espe; Pedersen, Henrik; Rostrup, Egill

    2009-01-01

    The partial volume effect (PVE) on the arterial input function (AIF) remains a major obstacle to absolute quantification of cerebral blood flow (CBF) using MRI. This study evaluates the validity and performance of a commonly used multiplicative rescaling of the AIF to correct for the PVE. In a gr...

  1. Paper recycling framework, the "Wheel of Fiber".

    Science.gov (United States)

    Ervasti, Ilpo; Miranda, Ruben; Kauranen, Ilkka

    2016-06-01

    At present, there is no reliable method in use that unequivocally describes paper industry material flows and makes it possible to compare geographical regions with each other. A functioning paper industry Material Flow Account (MFA) that uses uniform terminology and standard definitions for terms and structures is necessary. Many of the presently used general level MFAs, which are called frameworks in this article, stress the importance of input and output flows but do not provide a uniform picture of material recycling. Paper industry is an example of a field in which recycling plays a key role. Additionally, terms related to paper industry recycling, such as collection rate, recycling rate, and utilization rate, are not defined uniformly across regions and time. Thus, reliably comparing material recycling activity between geographical regions or calculating any regional summaries is difficult or even impossible. The objective of this study is to give a partial solution to the problem of not having a reliable method in use that unequivocally describes paper industry material flows. This is done by introducing a new material flow framework for paper industry in which the flow and stage structure supports the use of uniform definitions for terms related to paper recycling. This new framework is termed the Detailed Wheel of Fiber. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. Energy Sharing Framework for Microgrid-Powered Cellular Base Stations

    KAUST Repository

    Farooq, Muhammad Junaid

    2017-02-07

    Cellular base stations (BSs) are increasingly becoming equipped with renewable energy generators to reduce operational expenditures and carbon footprint of wireless communications. Moreover, advancements in the traditional electricity grid allow two-way power flow and metering that enable the integration of distributed renewable energy generators at BS sites into a microgrid. In this paper, we develop an optimized energy management framework for microgrid-connected cellular BSs that are equipped with renewable energy generators and finite battery storage to minimize energy cost. The BSs share excess renewable energy with others to reduce the dependency on the conventional electricity grid. Three cases are investigated where the renewable energy generation is unknown, perfectly known, and partially known ahead of time. For the partially known case where only the statistics of renewable energy generation are available, stochastic programming is used to achieve a conservative solution. Results show the time varying energy management behaviour of the BSs and the effect of energy sharing between them.
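
As a toy illustration of the sharing idea (not the paper's optimization model, which uses stochastic programming over battery state and time-varying generation), a single-period greedy pass can pool surplus renewable energy and serve base stations in deficit before any grid purchase; all names and the single-period setting are assumptions:

```python
def share_energy(renewable, demand):
    """Greedy single-period sketch: BSs with surplus renewable energy
    export to BSs in deficit; any remaining deficit is bought from the
    conventional grid. Returns grid energy bought per BS."""
    surplus = [max(r - d, 0.0) for r, d in zip(renewable, demand)]
    deficit = [max(d - r, 0.0) for r, d in zip(renewable, demand)]
    pool = sum(surplus)           # total shareable energy
    grid = []
    for need in deficit:
        used = min(need, pool)    # draw from the shared pool first
        pool -= used
        grid.append(need - used)  # shortfall met by the grid
    return grid

# BS 1 has 3 units of surplus, BS 2 needs 2: sharing avoids any grid buy.
print(share_energy([5.0, 1.0], [2.0, 3.0]))
```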

  3. Partially Observed Mixtures of IRT Models: An Extension of the Generalized Partial-Credit Model

    Science.gov (United States)

    Von Davier, Matthias; Yamamoto, Kentaro

    2004-01-01

    The generalized partial-credit model (GPCM) is used frequently in educational testing and in large-scale assessments for analyzing polytomous data. Special cases of the generalized partial-credit model are the partial-credit model--or Rasch model for ordinal data--and the two parameter logistic (2PL) model. This article extends the GPCM to the…
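
The GPCM category probabilities follow a standard logistic form. This sketch uses the common parameterization with a single item discrimination a and step parameters b_v (the article's notation may differ); setting a = 1 recovers the partial-credit/Rasch special case mentioned in the abstract:

```python
import math

def gpcm_probs(theta, a, b):
    """Category response probabilities under the generalized
    partial-credit model for ability theta, discrimination a,
    and step parameters b[0..m-1] (categories 0..m)."""
    # cumulative logits: z_0 = 0, z_k = sum_{v=1}^{k} a * (theta - b_v)
    z = [0.0]
    for bv in b:
        z.append(z[-1] + a * (theta - bv))
    denom = sum(math.exp(zk) for zk in z)
    return [math.exp(zk) / denom for zk in z]

# Two categories, one step at b = 0, theta = 0: both categories
# are equally likely.
print(gpcm_probs(0.0, 1.0, [0.0]))
```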

  4. Partial lesions of the intratemporal segment of the facial nerve: graft versus partial reconstruction.

    Science.gov (United States)

    Bento, Ricardo F; Salomone, Raquel; Brito, Rubens; Tsuji, Robinson K; Hausen, Mariana

    2008-09-01

    In cases of partial lesions of the intratemporal segment of the facial nerve, should the surgeon perform an intraoperative partial reconstruction, or partially remove the injured segment and place a graft? We present results from partial lesion reconstruction on the intratemporal segment of the facial nerve. A retrospective study of 42 patients who presented with partial lesions of the intratemporal segment of the facial nerve was performed between 1988 and 2005. The patients were divided into 3 groups based on the procedure used: interposition of a partial graft on the injured area of the nerve (group 1; 12 patients); keeping the preserved part and performing tubulization (group 2; 8 patients); and dividing the parts of the injured nerve (proximal and distal) and placing a total graft of the sural nerve (group 3; 22 patients). Fracture of the temporal bone was the most frequent cause of the lesion in all groups, followed by iatrogenic causes. The best treatment for a partial lesion of the facial nerve is still questionable. Among these 42 patients, the best results were those from the total graft of the facial nerve.

  5. Role of shell corrections in the phenomenon of cluster radioactivity

    Science.gov (United States)

    Kaur, Mandeep; Singh, Bir Bikram; Sharma, Manoj K.

    2018-05-01

    A detailed investigation has been carried out to explore the role of shell corrections in the decay of various radioactive parent nuclei in the trans-lead region, specifically those which lead to the doubly magic 208Pb daughter nucleus through emission of clusters such as 14C, 18,20O, 22,24,26Ne, 28,30Mg and 34Si. The fragmentation potential comprises the binding energies (BE), Coulomb potential (Vc) and nuclear or proximity potential (Vp) of the decaying fragments (or clusters). It is relevant to mention here that the contributions of VLDM (T=0) and δU (T=0) to the BE have been analysed within the Strutinsky renormalization procedure. In the framework of the quantum mechanical fragmentation theory (QMFT), we have investigated the above-mentioned cluster decays with and without inclusion of shell corrections in the fragmentation potential for spherical as well as non-compact oriented nuclei. We find that the experimentally observed clusters 14C, 18,20O, 22,24,26Ne, 28,30Mg and 34Si, which leave a doubly magic 208Pb daughter nucleus, are not strongly minimized in the fragmentation potential without shell corrections; they become so only after the inclusion of shell corrections. The nuclear structure information carried by the shell corrections has been explored via these calculations, within the collective clusterisation process of QMFT, in the study of the ground-state decay of radioactive nuclei. The role of the different parts of the fragmentation potential, VLDM, δU, Vc and Vp, is duly analysed for a better understanding of radioactive cluster decay.

  6. Quantifying structural uncertainty on fault networks using a marked point process within a Bayesian framework

    Science.gov (United States)

    Aydin, Orhun; Caers, Jef Karel

    2017-08-01

    Faults are one of the building blocks for subsurface modeling studies. Incomplete observations of subsurface fault networks lead to uncertainty pertaining to the location, geometry and existence of faults. In practice, gaps in incomplete fault network observations are filled based on tectonic knowledge and the interpreter's intuition pertaining to fault relationships. Modeling fault network uncertainty with realistic models that represent tectonic knowledge is still a challenge. Although methods exist that address specific sources of fault network uncertainty and the complexities of fault modeling, a unifying framework is still lacking. In this paper, we propose a rigorous approach to quantify fault network uncertainty. Fault pattern and intensity information are expressed by means of a marked point process, the marked Strauss point process. Fault network information is constrained to fault surface observations (complete or partial) within a Bayesian framework. A structural prior model is defined to quantitatively express fault patterns, geometries and relationships within the Bayesian framework. Structural relationships between faults, in particular fault abutting relations, are represented with a level-set based approach. A Markov chain Monte Carlo sampler is used to sample posterior fault network realizations that reflect tectonic knowledge and honor fault observations. We apply the methodology to a field study from the Nankai Trough & Kumano Basin. The target for uncertainty quantification is a deep site with attenuated seismic data, with only partially visible faults and many faults missing from the survey or interpretation. A structural prior model is built from shallow analog sites that are believed to have undergone similar tectonics compared to the site of study. Fault network uncertainty for the field is quantified with fault network realizations that are conditioned to structural rules, tectonic information and partially observed fault surfaces. We show the proposed
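
The Strauss-process ingredient can be made concrete with its unnormalized log-density, which is all a Metropolis-Hastings sampler needs in its acceptance ratio. This is a generic, unmarked sketch under assumed parameter names; the paper's marked variant additionally attaches geometric attributes (marks) to each point:

```python
import math

def strauss_log_density(points, beta, gamma, r):
    """Unnormalized log-density of a Strauss point process,
    f(x) proportional to beta^n * gamma^s, where n is the number of
    points and s the number of point pairs closer than radius r.
    gamma < 1 penalizes close pairs (regular/inhibited patterns)."""
    n = len(points)
    s = sum(1 for i in range(n) for j in range(i + 1, n)
            if math.dist(points[i], points[j]) < r)
    return n * math.log(beta) + s * math.log(gamma)

# Two points closer than r contribute one interacting pair:
# log f = 2*log(beta) + 1*log(gamma).
print(strauss_log_density([(0.0, 0.0), (0.1, 0.0)], 2.0, 0.5, 0.5))
```

An MCMC sampler would propose point births, deaths, or moves and accept them with probability given by the ratio of these densities, so the intractable normalizing constant cancels.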

  7. Ground-state properties of ordered, partially ordered, and random Cu-Au and Ni-Pt alloys

    DEFF Research Database (Denmark)

    Ruban, Andrei; Abrikosov, I. A.; Skriver, Hans Lomholt

    1995-01-01

    We have studied the ground-state properties of ordered, partially ordered, and random Cu-Au and Ni-Pt alloys at the stoichiometric 1/4, 1/2, and 3/4 compositions in the framework of the multisublattice single-site (SS) coherent potential approximation (CPA). Charge-transfer effects in the random and the partially ordered alloys are included in the screened impurity model. The prefactor in the Madelung energy is determined by the requirement that the total energy obtained in direct SS CPA calculations should equal the total energy given by the Connolly-Williams expansion based on Green's functions. The results for the ordered alloys are in good agreement with experimental data. For all the alloys the calculated ordering energies and the equilibrium lattice parameters are found to be almost exact quadratic functions of the long-range-order parameter.

  8. Net improvement of correct answers to therapy questions after pubmed searches: pre/post comparison.

    Science.gov (United States)

    McKibbon, Kathleen Ann; Lokker, Cynthia; Keepanasseril, Arun; Wilczynski, Nancy L; Haynes, R Brian

    2013-11-08

    Clinicians search PubMed for answers to clinical questions although it is time consuming and not always successful. To determine if PubMed used with its Clinical Queries feature to filter results based on study quality would improve search success (more correct answers to clinical questions related to therapy). We invited 528 primary care physicians to participate, 143 (27.1%) consented, and 111 (21.0% of the total and 77.6% of those who consented) completed the study. Participants answered 14 yes/no therapy questions and were given 4 of these (2 originally answered correctly and 2 originally answered incorrectly) to search using either the PubMed main screen or PubMed Clinical Queries narrow therapy filter via a purpose-built system with identical search screens. Participants also picked 3 of the first 20 retrieved citations that best addressed each question. They were then asked to re-answer the original 14 questions. We found no statistically significant differences in the rates of correct or incorrect answers using the PubMed main screen or PubMed Clinical Queries. The rate of correct answers increased from 50.0% to 61.4% (95% CI 55.0%-67.8%) for the PubMed main screen searches and from 50.0% to 59.1% (95% CI 52.6%-65.6%) for Clinical Queries searches. These net absolute increases of 11.4% and 9.1%, respectively, included previously correct answers changing to incorrect at a rate of 9.5% (95% CI 5.6%-13.4%) for PubMed main screen searches and 9.1% (95% CI 5.3%-12.9%) for Clinical Queries searches, combined with increases in the rate of being correct of 20.5% (95% CI 15.2%-25.8%) for PubMed main screen searches and 17.7% (95% CI 12.7%-22.7%) for Clinical Queries searches. PubMed can assist clinicians answering clinical questions with an approximately 10% absolute rate of improvement in correct answers. This small increase includes more correct answers partially offset by a decrease in previously correct answers.
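
The "net improvement" arithmetic in the abstract is simply gains minus losses. The sketch below illustrates it with the reported rates; note that the published net figures (11.4% and 9.1%) were computed from the underlying answer counts, so the rounded component rates do not subtract to them exactly:

```python
def net_change(newly_correct, newly_incorrect):
    """Net absolute change in the percentage of correct answers:
    rate of answers that became correct after searching, minus the
    rate of previously correct answers that flipped to incorrect."""
    return newly_correct - newly_incorrect

# PubMed main screen: 20.5% gained, 9.5% lost -> about 11% net.
print(net_change(20.5, 9.5))
```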

  9. Phase holograms in PMMA with proximity effect correction

    Science.gov (United States)

    Maker, Paul D.; Muller, R. E.

    1993-01-01

    Complex computer-generated phase holograms (CGPHs) have been fabricated in PMMA by partial e-beam exposure and subsequent partial development. The CGPH was encoded as a sequence of phase delay pixels and written by the JEOL JBX-5D2 e-beam lithography system, a different dose being assigned to each value of phase delay. Following carefully controlled partial development, the pattern appeared rendered in relief in the PMMA, which then acts as the phase-delay medium. The exposure dose was in the range 20-200 µC/cm², and very aggressive development in pure acetone led to low contrast. This enabled etch-depth control to better than ±λvis/60. That result was obtained by exposing isolated 50 µm square patches and measuring resist removal over the central area where the proximity-effect dose was uniform and related only to the local exposure. For complex CGPHs with pixel size of the order of the e-beam proximity-effect radius, the patterns must be corrected for the extra exposure caused by electrons scattered back up out of the substrate. This has been accomplished by deconvolving the two-dimensional dose deposition function with the desired dose pattern. The deposition function, which plays much the same role as an instrument response function, was carefully measured under the exact conditions used to expose the samples. The devices fabricated were designed with 16 equal phase steps per retardation cycle, were up to 1 cm square, and consisted of up to 100 million 0.3-2.0 µm square pixels. Data files were up to 500 MB long and exposure times ranged to tens of hours. A Fresnel phase lens was fabricated that had diffraction-limited optical performance with better than 85 percent efficiency.
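
The deconvolution step described above can be sketched as a regularized Fourier-domain inverse filter: divide the desired dose pattern by the transform of the measured dose-deposition (point-spread) function. This is a generic approach under assumed names; the abstract does not specify the authors' exact algorithm or regularization:

```python
import numpy as np

def proximity_correct(target_dose, psf, eps=1e-3):
    """Naive Fourier deconvolution of the desired dose pattern by the
    measured dose-deposition function, with Tikhonov-style damping
    (eps) to avoid dividing by near-zero spectral components."""
    D = np.fft.fft2(target_dose)
    H = np.fft.fft2(np.fft.ifftshift(psf))  # center PSF at the origin
    corrected = np.fft.ifft2(D * np.conj(H) / (np.abs(H) ** 2 + eps))
    return np.real(corrected)

# With a delta-function PSF (no scattering), the corrected pattern is
# essentially the target pattern itself.
target = np.zeros((8, 8)); target[4, 4] = 1.0
psf = np.zeros((8, 8)); psf[4, 4] = 1.0
out = proximity_correct(target, psf)
```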

  10. Partial twisting for scalar mesons

    International Nuclear Information System (INIS)

    Agadjanov, Dimitri; Meißner, Ulf-G.; Rusetsky, Akaki

    2014-01-01

    The possibility of imposing partially twisted boundary conditions is investigated for the scalar sector of lattice QCD. According to the commonly shared belief, the presence of quark-antiquark annihilation diagrams in the intermediate state generally hinders the use of partial twisting. Using effective field theory techniques in a finite volume, and studying the scalar sector of QCD with total isospin I=1, we however demonstrate that partial twisting can still be performed, despite the fact that annihilation diagrams are present. The reason for this is a set of delicate cancellations, which emerge due to the graded symmetry in partially quenched QCD with valence, sea and ghost quarks. The modified Lüscher equation in the case of partial twisting is given.

  11. How does imaging frequency and soft tissue motion affect the PTV margin size in partial breast and boost radiotherapy?

    International Nuclear Information System (INIS)

    Harris, Emma J.; Donovan, Ellen M.; Coles, Charlotte E.; Boer, Hans C.J. de; Poynter, Andrew; Rawlings, Christine; Wishart, Gordon C.; Evans, Philip M.

    2012-01-01

    Purpose: This study investigates (i) the effect of verification protocols on treatment accuracy and PTV margins for partial breast and boost breast radiotherapy with a short fractionation schema (15 fractions), (ii) the effect of deformation of the excision cavity (EC) on PTV margin size, and (iii) the imaging dose required to achieve specific PTV margins. Methods and materials: Verification images using implanted EC markers were studied in 36 patients. Target motion was estimated for a 15-fraction partial breast regimen using imaging protocols based on on-line and off-line motion correction strategies (the No Action Level (NAL) and extended NAL (eNAL) protocols). Target motion was used to estimate a PTV margin for each protocol. To evaluate treatment errors due to deformation of the excision cavity, individual marker positions were obtained from 11 patients. The mean clip displacement and the daily variation in clip position during radiotherapy were determined, and the contribution of these errors to the PTV margin was calculated. Published imaging dose data were used to estimate the total dose for each protocol. Finally, the number of images required to obtain a specific PTV margin was evaluated and hence the relationship between PTV margins and imaging dose was investigated. Results: The PTV margin required to account for excision cavity motion varied between 2.4 and 10.2 mm depending on the correction strategy used. Average clip movement was 0.8 mm and average variation in clip position during treatment was 0.4 mm. The contribution to the PTV margin from deformation was estimated to be small, less than 0.2 mm for both off-line and on-line correction protocols. Conclusion: A boost or partial breast PTV margin of ∼10 mm is possible with zero imaging dose and workload; however, patients receiving boost radiotherapy may benefit from a margin reduction of ∼4 mm with imaging doses from 0.4 cGy to 25 cGy using an eNAL protocol. PTV margin contributions from deformation errors are likely
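
Margin recipes of the kind used in such studies typically combine a systematic error component (Σ) with a random one (σ). The widely used van Herk population formula is one such recipe, shown here as an illustrative assumption; the abstract does not state which margin calculus the authors applied:

```python
def ptv_margin(sigma_systematic, sigma_random):
    """van Herk population margin recipe (mm):
    M = 2.5 * Sigma + 0.7 * sigma, where Sigma is the SD of systematic
    errors and sigma the SD of random errors across the population."""
    return 2.5 * sigma_systematic + 0.7 * sigma_random

# Reducing the systematic component (e.g. via an eNAL-style off-line
# correction protocol) shrinks the margin 2.5x faster than reducing
# the random component by the same amount.
print(ptv_margin(2.0, 2.0))
```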

  12. Context-Sensitive Spelling Correction of Consumer-Generated Content on Health Care.

    Science.gov (United States)

    Zhou, Xiaofang; Zheng, An; Yin, Jiaheng; Chen, Rudan; Zhao, Xianyang; Xu, Wei; Cheng, Wenqing; Xia, Tian; Lin, Simon

    2015-07-31

    Consumer-generated content, such as postings on social media websites, can serve as an ideal source of information for studying health care from a consumer's perspective. However, consumer-generated content on health care topics often contains spelling errors, which, if not corrected, will be obstacles for downstream computer-based text analysis. In this study, we proposed a framework with a spelling correction system designed for consumer-generated content and a novel ontology-based evaluation system which was used to efficiently assess the correction quality. Additionally, we emphasized the importance of context sensitivity in the correction process, and demonstrated why correction methods designed for electronic medical records (EMRs) failed to perform well with consumer-generated content. First, we developed our spelling correction system based on Google Spell Checker. The system processed postings acquired from MedHelp, a biomedical bulletin board system (BBS), and saved misspelled words (eg, sertaline) and corresponding corrected words (eg, sertraline) into two separate sets. Second, to reduce the number of words needing manual examination in the evaluation process, we respectively matched the words in the two sets with terms in two biomedical ontologies: RxNorm and the Systematized Nomenclature of Medicine -- Clinical Terms (SNOMED CT). The ratio of words which could be matched and appropriately corrected was used to evaluate the correction system's overall performance. Third, we categorized the misspelled words according to the types of spelling errors. Finally, we calculated the ratio of abbreviations in the postings, which remarkably differed between EMRs and consumer-generated content and could largely influence the overall performance of spelling checkers. An uncorrected word and the corresponding corrected word were called a spelling pair, and the two words in the spelling pair were its members. In our study, there were 271 spelling pairs detected, among
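
The ontology-based evaluation reduces manual review to a simple ratio: the share of corrected words that match a term in RxNorm or SNOMED CT. A minimal sketch of that idea follows; the function name and the exact-match (case-insensitive) criterion are assumptions, since the paper's matching rules are more involved:

```python
def correction_quality(corrected_words, ontology_terms):
    """Share of corrected words that match a biomedical ontology term --
    a proxy for correction quality that limits manual examination to
    the unmatched remainder."""
    if not corrected_words:
        return 0.0
    terms = {t.lower() for t in ontology_terms}
    hits = sum(1 for w in corrected_words if w.lower() in terms)
    return hits / len(corrected_words)

# Two of three corrections match drug terms in the (toy) ontology.
print(correction_quality(["sertraline", "aspirin", "xyz"],
                         ["Sertraline", "Aspirin"]))
```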

  13. Partial order infinitary term rewriting

    DEFF Research Database (Denmark)

    Bahr, Patrick

    2014-01-01

    We study an alternative model of infinitary term rewriting. Instead of a metric on terms, a partial order on partial terms is employed to formalise convergence of reductions. We consider both a weak and a strong notion of convergence and show that the metric model of convergence coincides with the partial order model restricted to total terms. In contrast to the metric setting, orthogonal systems are both infinitarily confluent and infinitarily normalising in the partial order setting. The unique infinitary normal forms that the partial order model admits are Böhm trees.

  14. Beginning partial differential equations

    CERN Document Server

    O'Neil, Peter V

    2011-01-01

    A rigorous, yet accessible, introduction to partial differential equations-updated in a valuable new edition Beginning Partial Differential Equations, Second Edition provides a comprehensive introduction to partial differential equations (PDEs) with a special focus on the significance of characteristics, solutions by Fourier series, integrals and transforms, properties and physical interpretations of solutions, and a transition to the modern function space approach to PDEs. With its breadth of coverage, this new edition continues to present a broad introduction to the field, while also addres

  15. Framework Architecture Enabling an Agent-Based Inter-Company Integration with XML

    Directory of Open Access Journals (Sweden)

    Klement Fellner

    2000-11-01

    Full Text Available More and more cooperating companies utilize the World Wide Web (WWW) to federate and further integrate their heterogeneous business application systems. At the same time, innovative business strategies, like virtual organizations, supply chain management or one-to-one marketing, as well as trendsetting competitive strategies, like mass customisation, become realisable. Both the necessary integration and the innovative concepts demand software that supports the automation of communication as well as coordination across system boundaries. In this paper, we describe a framework architecture for inter-company integration of business processes based on commonly accepted and (partially) standardized concepts and techniques. Furthermore, it is shown how the framework architecture helps to automate procurement processes and how a cost-saving black-box re-use is achieved following a component-oriented implementation paradigm.

  16. Goldmann tonometry tear film error and partial correction with a shaped applanation surface.

    Science.gov (United States)

    McCafferty, Sean J; Enikov, Eniko T; Schwiegerling, Jim; Ashley, Sean M

    2018-01-01

    The aim of the study was to quantify the isolated tear film adhesion error in a Goldmann applanation tonometer (GAT) prism and in a correcting applanation tonometry surface (CATS) prism. The separation force of a tonometer prism adhered by a tear film to a simulated cornea was measured to quantify an isolated tear film adhesion force. Acrylic hemispheres (7.8 mm radius) used as corneas were lathed over the apical 3.06 mm diameter to simulate full applanation contact with the prism surface for both GAT and CATS prisms. Tear film separation measurements were completed with both artificial tear and fluorescein solutions as a fluid bridge. The applanation mire thicknesses were measured and correlated with the tear film separation measurements. Human cadaver eyes were used to validate the simulated cornea tear film separation measurement differences between the GAT and CATS prisms. The CATS prism tear film adhesion error (2.74±0.21 mmHg) was significantly less than that of the GAT prism (4.57±0.18 mmHg). Tear film adhesion error was independent of applanation mire thickness (R²=0.09, p=0.04). Fluorescein produces more tear film error than artificial tears (+0.51±0.04 mmHg). In human cadaver eyes, the CATS prism tear film adhesion error (1.40±0.51 mmHg) was significantly less than that of the GAT prism (3.30±0.38 mmHg; p=0.002). Measured GAT tear film adhesion error is more than previously predicted. The CATS prism significantly reduced tear film adhesion error, by ~41%. Fluorescein solution increases tear film adhesion compared to artificial tears, while mire thickness has a negligible effect.

  17. Interventions for replacing missing teeth: partially absent dentition.

    Science.gov (United States)

    Abt, Elliot; Carr, Alan B; Worthington, Helen V

    2012-02-15

    another. With fixed dental prostheses (FDPs), there was no evidence that high gold alloys are better or worse than other alloys, nor that gold alloys or frameworks are better or worse than titanium. There is insufficient evidence to determine whether zirconia is better or worse than other FDP materials, that ceramic abutments are better or worse than titanium, or that one cement was better or worse than another in retaining FDPs. There is insufficient evidence to determine the relative effectiveness of FDPs and RDPs in patients with a shortened dental arch, or to determine the relative advantages of implant-supported FDPs versus tooth/implant-supported FDPs. Based on trials meeting the inclusion criteria for this review, there is insufficient evidence to recommend a particular method of tooth replacement for partially edentulous patients.

  18. A Comprehensive Diagnostic Framework for Evaluating Business Intelligence and Analytics Effectiveness

    Directory of Open Access Journals (Sweden)

    Neil Foshay

    2015-09-01

    Full Text Available Business intelligence and analytics (BIA) initiatives are costly, complex and experience high failure rates. Organizations require effective approaches to evaluate their BIA capabilities in order to develop strategies for their evolution. In this paper, we employ a design science paradigm to develop a comprehensive BIA effectiveness diagnostic (BIAED) framework that can be easily operationalized. We propose that a useful BIAED framework must assess the correct factors, should be deployed in the proper process context, and must acquire the appropriate input from different constituencies within an organization. Drawing on the BIAED framework, we further develop an online diagnostic toolkit that includes a comprehensive survey instrument. We subsequently deploy the diagnostic mechanism within three large organizations in North America (involving over 1500 participants) and use the results to inform BIA strategy formulation. Feedback from participating organizations indicates that the BIA diagnostic toolkit provides insights that are essential inputs to strategy development. This work addresses a significant research gap in the area of BIA effectiveness assessment.

  19. Error suppression and error correction in adiabatic quantum computation: non-equilibrium dynamics

    International Nuclear Information System (INIS)

    Sarovar, Mohan; Young, Kevin C

    2013-01-01

    While adiabatic quantum computing (AQC) has some robustness to noise and decoherence, it is widely believed that encoding, error suppression and error correction will be required to scale AQC to large problem sizes. Previous works have established at least two different techniques for error suppression in AQC. In this paper we derive a model for describing the dynamics of encoded AQC and show that previous constructions for error suppression can be unified with this dynamical model. In addition, the model clarifies the mechanisms of error suppression and allows the identification of its weaknesses. In the second half of the paper, we utilize our description of non-equilibrium dynamics in encoded AQC to construct methods for error correction in AQC by cooling local degrees of freedom (qubits). While this is shown to be possible in principle, we also identify the key challenge to this approach: the requirement of high-weight Hamiltonians. Finally, we use our dynamical model to perform a simplified thermal stability analysis of concatenated-stabilizer-code encoded many-body systems for AQC or quantum memories. This work is a companion paper to ‘Error suppression and error correction in adiabatic quantum computation: techniques and challenges (2013 Phys. Rev. X 3 041013)’, which provides a quantum information perspective on the techniques and limitations of error suppression and correction in AQC. In this paper we couch the same results within a dynamical framework, which allows for a detailed analysis of the non-equilibrium dynamics of error suppression and correction in encoded AQC. (paper)

  20. Hyperbolic partial differential equations

    CERN Document Server

    Witten, Matthew

    1986-01-01

    Hyperbolic Partial Differential Equations III is a refereed journal issue that explores the applications, theory, and/or applied methods related to hyperbolic partial differential equations, or problems arising out of hyperbolic partial differential equations, in any area of research. This journal issue is interested in all types of articles in terms of review, mini-monograph, standard study, or short communication. Some studies presented in this journal include discretization of ideal fluid dynamics in the Eulerian representation; a Riemann problem in gas dynamics with bifurcation; periodic M