Maximizing results for lipofilling in facial reconstruction.
Barret, Juan P; Sarobe, Neus; Grande, Nelida; Vila, Delia; Palacin, Jose M
2009-07-01
Lipostructure (also known as structural fat grafts, lipofilling, or fat grafting) has become a technique with a good reputation and reproducible results. The application of this technology in patients undergoing reconstruction is a novel surgical alternative. Obtaining good results in this patient population is very difficult, but the application of small fat grafts with a strict Coleman technique produces long-term cosmetic effects. Adult-derived stem cells have been identified as important effectors of this regenerative technology, and future research should focus on this direction.
Reconstruction of phylogenetic trees of prokaryotes using maximal common intervals.
Heydari, Mahdi; Marashi, Sayed-Amir; Tusserkani, Ruzbeh; Sadeghi, Mehdi
2014-10-01
One of the fundamental problems in bioinformatics is phylogenetic tree reconstruction, which can be used for classifying living organisms into different taxonomic clades. The classical approach to this problem is based on a marker such as 16S ribosomal RNA. Since evolutionary events like genomic rearrangements are not included in reconstructions of phylogenetic trees based on single genes, much effort has been made in recent years to find other characteristics for phylogenetic reconstruction. With the increasing availability of completely sequenced genomes, gene order can be considered as a new solution for this problem. In the present work, we applied maximal common intervals (MCIs) in two or more genomes to infer their distance and to reconstruct their evolutionary relationship. Additionally, measures based on uncommon segments (UCSs), i.e., those genomic segments which are not detected as part of any of the MCIs, are also used for phylogenetic tree reconstruction. We applied these two types of measures for reconstructing the phylogenetic tree of 63 prokaryotes with known COG (clusters of orthologous groups) families. Similarity between the MCI-based (resp. UCS-based) reconstructed phylogenetic trees and the phylogenetic tree obtained from the NCBI taxonomy browser is as high as 93.1% (resp. 94.9%). We show that in the case of this diverse dataset of prokaryotes, tree reconstruction based on MCIs and UCSs outperforms most of the currently available methods based on gene orders, including breakpoint distance and DCJ. We additionally tested our new measures on a dataset of 13 closely-related bacteria from the genus Prochlorococcus. In this case, distances like rearrangement distance, breakpoint distance and DCJ proved to be useful, while our new measures are still appropriate for phylogenetic reconstruction. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
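To make the interval idea concrete, here is a small illustrative sketch (our own simplification, not the authors' implementation: it counts all common intervals of two genomes given as permutations of a shared gene set, not only maximal ones, and ignores COG-family details):

```python
def common_interval_count(a, b):
    """Count intervals of genome `a` (length >= 2) whose gene set also
    occupies a contiguous block in genome `b`. Naive O(n^2) scan."""
    pos = {gene: i for i, gene in enumerate(b)}  # position of each gene in b
    count = 0
    for i in range(len(a)):
        lo = hi = pos[a[i]]
        for j in range(i + 1, len(a)):
            p = pos[a[j]]
            lo, hi = min(lo, p), max(hi, p)
            # a[i..j] is a common interval iff its positions in b span j - i
            if hi - lo == j - i:
                count += 1
    return count


def interval_distance(a, b):
    """Dissimilarity in [0, 1]: 0 when every interval is shared."""
    max_count = len(a) * (len(a) - 1) // 2
    return 1.0 - common_interval_count(a, b) / max_count


print(interval_distance([1, 2, 3, 4], [1, 2, 3, 4]))  # 0.0 (identical gene order)
```

A pairwise distance matrix built this way over all genomes can then be fed to a standard distance-based tree builder such as neighbor joining.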
Arctic Sea Level Reconstruction
DEFF Research Database (Denmark)
Svendsen, Peter Limkilde
Reconstruction of historical Arctic sea level is very difficult due to the limited coverage and quality of tide gauge and altimetry data in the area. This thesis addresses many of these issues, and discusses strategies to help achieve a stable and plausible reconstruction of Arctic sea level from 1950 to today. The primary record of historical sea level, on the order of several decades to a few centuries, is tide gauges. Tide gauge records from around the world are collected in the Permanent Service for Mean Sea Level (PSMSL) database, which includes data along the Arctic coasts. A reasonable amount of data is available along the Norwegian and Russian coasts since 1950, and most published research on Arctic sea level extends cautiously from these areas. Very little tide gauge data is available elsewhere in the Arctic, and records of a length of several decades, as generally recommended for sea…
National Oceanic and Atmospheric Administration, Department of Commerce — Records of past lake levels, mostly related to changes in moisture balance (evaporation-precipitation). Parameter keywords describe what was measured in this data...
International Nuclear Information System (INIS)
Choi, Joonsung; Kim, Dongchan; Oh, Changhyun; Han, Yeji; Park, HyunWook
2013-01-01
In MRI (magnetic resonance imaging), signal sampling along a radial k-space trajectory is preferred in certain applications due to its distinct advantages such as robustness to motion, and the radial sampling can be beneficial for reconstruction algorithms such as parallel MRI (pMRI) due to the incoherency. For radial MRI, the image is usually reconstructed from projection data using analytic methods such as filtered back-projection or Fourier reconstruction after gridding. However, the quality of the reconstructed image from these analytic methods can be degraded when the number of acquired projection views is insufficient. In this paper, we propose a novel reconstruction method based on the expectation maximization (EM) method, where the EM algorithm is remodeled for MRI so that complex images can be reconstructed. Then, to optimize the proposed method for radial pMRI, a reconstruction method that uses coil sensitivity information of multichannel RF coils is formulated. Experiment results from synthetic and in vivo data show that the proposed method produces better reconstructed images than the analytic methods, even from highly subsampled data, and provides monotonic convergence properties compared to the conjugate gradient based reconstruction method.
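The multiplicative EM update underlying such reconstruction methods can be sketched for a plain nonnegative linear model (an illustrative toy of ours; the paper's algorithm is remodeled for complex-valued MRI data and multichannel coil sensitivities, which this sketch omits):

```python
import numpy as np

def mlem(A, y, n_iter=200):
    """Classic MLEM for y ~ A x with nonnegative A and y:
    x <- x * A^T(y / (A x)) / (A^T 1)."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])        # sensitivity (column sums of A)
    for _ in range(n_iter):
        proj = np.maximum(A @ x, 1e-12)     # forward projection, clipped for safety
        x *= (A.T @ (y / proj)) / sens      # multiplicative update
    return x

A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
x_true = np.array([2.0, 3.0])
print(np.round(mlem(A, A @ x_true), 3))     # [2. 3.]
```

The update preserves nonnegativity automatically, which is one reason EM-type iterations are popular for tomographic problems.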
International Nuclear Information System (INIS)
Zhang Jin; Shi Daxin; Anastasio, Mark A; Sillanpaa, Jussi; Chang Jenghwa
2005-01-01
We propose and investigate weighted expectation maximization (EM) algorithms for image reconstruction in x-ray tomography. The development of the algorithms is motivated by the respiratory-gated megavoltage tomography problem, in which the acquired asymmetric cone-beam projections are limited in number and unevenly sampled over view angle. In these cases, images reconstructed by use of the conventional EM algorithm can contain ring- and streak-like artefacts that are attributable to a combination of data inconsistencies and truncation of the projection data. By use of computer-simulated and clinical gated fan-beam megavoltage projection data, we demonstrate that the proposed weighted EM algorithms effectively mitigate such image artefacts.
Wobbling and LSF-based maximum likelihood expectation maximization reconstruction for wobbling PET
International Nuclear Information System (INIS)
Kim, Hang-Keun; Son, Young-Don; Kwon, Dae-Hyuk; Joo, Yohan; Cho, Zang-Hee
2016-01-01
Positron emission tomography (PET) is a widely used imaging modality; however, the PET spatial resolution is not yet satisfactory for precise anatomical localization of molecular activities. Detector size is the most important factor because it determines the intrinsic resolution, which is approximately half of the detector size and determines the ultimate PET resolution. Detector size, however, cannot be made too small because both the decreased detection efficiency and the increased septal penetration effect degrade the image quality. A wobbling and line spread function (LSF)-based maximum likelihood expectation maximization (WL-MLEM) algorithm, which combined the MLEM iterative reconstruction algorithm with wobbled sampling and LSF-based deconvolution using the system matrix, was proposed for improving the spatial resolution of PET without reducing the scintillator or detector size. The new algorithm was evaluated using a simulation, and its performance was compared with that of the existing algorithms, such as conventional MLEM and LSF-based MLEM. Simulations demonstrated that the WL-MLEM algorithm yielded higher spatial resolution and image quality than the existing algorithms. The WL-MLEM algorithm with wobbling PET yielded substantially improved resolution compared with conventional algorithms with stationary PET. The algorithm can be easily extended to other iterative reconstruction algorithms, such as maximum a posteriori (MAP) and ordered subset expectation maximization (OSEM). The WL-MLEM algorithm with wobbling PET may offer improvements in both sensitivity and resolution, the two most sought-after features in PET design. Highlights:
• This paper proposed the WL-MLEM algorithm for PET and demonstrated its performance.
• The WL-MLEM algorithm effectively combined wobbling and line spread function based MLEM.
• WL-MLEM provided improvements in the spatial resolution and the PET image quality.
• WL-MLEM can be easily extended to other iterative reconstruction algorithms.
Avoiding Optimal Mean ℓ2,1-Norm Maximization-Based Robust PCA for Reconstruction.
Luo, Minnan; Nie, Feiping; Chang, Xiaojun; Yang, Yi; Hauptmann, Alexander G; Zheng, Qinghua
2017-04-01
Robust principal component analysis (PCA) is one of the most important dimension-reduction techniques for handling high-dimensional data with outliers. However, most existing robust PCA methods presuppose that the mean of the data is zero and incorrectly use the average of the data as the optimal mean of robust PCA. In fact, this assumption holds only for the squared ℓ2-norm-based traditional PCA. In this letter, we equivalently reformulate the objective of conventional PCA and learn the optimal projection directions by maximizing the sum of the projected differences between each pair of instances based on the ℓ2,1-norm. The proposed method is robust to outliers and also invariant to rotation. More important, the reformulated objective not only automatically avoids the calculation of the optimal mean and makes the assumption of centered data unnecessary, but also theoretically connects to the minimization of reconstruction error. To solve the proposed nonsmooth problem, we exploit an efficient optimization algorithm that softens the contributions from outliers by reweighting each data point iteratively. We theoretically analyze the convergence and computational complexity of the proposed algorithm. Extensive experimental results on several benchmark data sets illustrate the effectiveness and superiority of the proposed method.
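The iterative reweighting strategy the abstract describes can be sketched as follows (our own illustration of one common reweighting scheme for ℓ2,1-norm maximization, not necessarily the authors' exact algorithm): pairs with large projected differences, typically involving outliers, receive small weights, and each round reduces to a weighted eigenproblem, with no mean estimation anywhere.

```python
import numpy as np

def robust_pca_pairwise(X, k, n_iter=30, eps=1e-8):
    """Find a (d, k) projection W maximizing sum_{i,j} ||W^T (x_i - x_j)||_2
    by iterative reweighting; no data centering or mean estimation is needed."""
    n, d = X.shape
    rng = np.random.default_rng(0)
    W = np.linalg.qr(rng.standard_normal((d, k)))[0]    # random orthonormal init
    diffs = X[:, None, :] - X[None, :, :]               # (n, n, d) pairwise differences
    for _ in range(n_iter):
        norms = np.linalg.norm(diffs @ W, axis=2)       # ||W^T (x_i - x_j)||_2
        s = 1.0 / (2.0 * np.maximum(norms, eps))        # outlier pairs get small weight
        M = np.einsum('ij,ijp,ijq->pq', s, diffs, diffs)  # weighted pairwise scatter
        vals, vecs = np.linalg.eigh(M)
        W = vecs[:, -k:]                                # top-k eigenvectors of M
    return W
```

Because only pairwise differences enter the objective, any constant shift of the data cancels out, which is exactly how the reformulation sidesteps the optimal-mean problem.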
Maximal sustained levels of energy expenditure in humans during exercise.
Cooper, Jamie A; Nguyen, David D; Ruby, Brent C; Schoeller, Dale A
2011-12-01
Migrating birds have been able to sustain an energy expenditure (EE) that is five times their basal metabolic rate. Although humans can readily reach these levels, it is not yet clear what levels can be sustained for several days. The study's purposes were 1) to determine the upper limits of human EE and whether or not those levels can be sustained without inducing catabolism of body tissues and 2) to determine whether initial body weight is related to the levels that can be sustained. We compiled data on documented EE as measured by doubly labeled water during high levels of physical activity (minimum of five consecutive days). We calculated the physical activity level (PAL) of each individual studied (PAL = total EE / basal metabolic rate) from the published data. Correlations were run to examine the relationship of initial body weight and body weight lost with both total EE and PAL. The uppermost limit of EE was a peak PAL of 6.94 that was sustained for 10 consecutive days of a 95-d race. Only two studies reported PALs above 5.0; however, significant decreases in body mass were found in each study (0.45-1.39 kg·wk(-1) of weight loss). To test whether initial weight affects the ability to sustain high PALs, we found a significant positive correlation between TEE and initial body weight (r = 0.46, P < 0.05), whereas the correlation between PAL and initial body weight was not statistically significant (r = 0.27). Some elite humans are able to sustain PALs above 5.0 for a minimum of 10 d. Although significant decreases in body weight occur at this level, catabolism of body tissue may be preventable in situations with proper energy intake. Further, initial body weight does not seem to affect the sustainability of PALs.
High level waste at Hanford: Potential for waste loading maximization
International Nuclear Information System (INIS)
Hrma, P.R.; Bailey, A.W.
1995-09-01
The loading of Hanford nuclear waste in borosilicate glass is limited by phase-related phenomena, such as crystallization or formation of immiscible liquids, and by breakdown of the glass structure because of an excessive concentration of modifiers. The phase-related phenomena cause both processing and product-quality problems. The deterioration of product durability determines the ultimate waste-loading limit if all processing problems are resolved. Concrete examples and mass-balance-based calculations show that a substantial potential exists for increasing the waste loading of high-level wastes that contain a large fraction of refractory components.
International Nuclear Information System (INIS)
Endrizzi, M.; Delogu, P.; Oliva, P.
2014-01-01
An expectation maximization method is applied to the reconstruction of X-ray tube spectra from transmission measurements in the energy range 7–40 keV. A semiconductor single-photon counting detector, ionization chambers and a scintillator-based detector are used for the experimental measurement of the transmission. The number of iterations required to reach an approximate solution is estimated on the basis of the measurement error, according to the discrepancy principle. The effectiveness of the stopping rule is studied on simulated data and validated with experiments. The quality of the reconstruction depends on the information available on the source itself, and the possibility of adding this knowledge to the solution process is investigated. The method can produce good approximations provided that the amount of noise in the data can be estimated. Highlights:
• An expectation maximization method was used together with the discrepancy principle.
• The discrepancy principle is a suitable criterion for stopping the iteration.
• The method can be applied to a variety of detectors/experimental conditions.
• The minimum information required is the amount of noise that affects the data.
• Improved results are achieved by inserting more information when available.
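The discrepancy-principle stopping rule is simple to state: iterate only until the data misfit falls to the estimated noise level, since pushing further just fits noise. A minimal sketch (ours, written for a generic nonnegative linear model rather than the spectral setup of the paper):

```python
import numpy as np

def em_with_discrepancy(A, y, noise_level, max_iter=5000):
    """Multiplicative EM iterations stopped by the discrepancy principle:
    quit as soon as the residual ||A x - y|| falls to the noise level."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])        # column sums of A
    for k in range(max_iter):
        proj = np.maximum(A @ x, 1e-12)
        if np.linalg.norm(proj - y) <= noise_level:
            return x, k                     # stopped by the discrepancy test
        x *= (A.T @ (y / proj)) / sens      # EM update
    return x, max_iter
```

The only input beyond the data is `noise_level`, matching the abstract's point that an estimate of the noise is the minimum information the method requires.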
Papaconstadopoulos, P; Levesque, I R; Maglieri, R; Seuntjens, J
2016-02-07
Direct determination of the source intensity distribution of clinical linear accelerators is still a challenging problem for small field beam modeling. Current techniques most often involve special equipment and are difficult to implement in the clinic. In this work we present a maximum-likelihood expectation-maximization (MLEM) approach to the source reconstruction problem utilizing small fields and a simple experimental set-up. The MLEM algorithm iteratively ray-traces photons from the source plane to the exit plane and extracts corrections based on photon fluence profile measurements. The photon fluence profiles were determined by dose profile film measurements in air using a high density thin foil as build-up material and an appropriate point spread function (PSF). The effect of other beam parameters and scatter sources was minimized by using the smallest field size ([Formula: see text] cm(2)). The source occlusion effect was reproduced by estimating the position of the collimating jaws during this process. The method was first benchmarked against simulations for a range of typical accelerator source sizes. The sources were reconstructed with an accuracy better than 0.12 mm in the full width at half maximum (FWHM) to the respective electron sources incident on the target. The estimated jaw positions agreed within 0.2 mm with the expected values. The reconstruction technique was also tested against measurements on a Varian Novalis Tx linear accelerator and compared to a previously commissioned Monte Carlo model. The reconstructed FWHM of the source agreed within 0.03 mm and 0.11 mm to the commissioned electron source in the crossplane and inplane orientations respectively. The impact of the jaw positioning, experimental and PSF uncertainties on the reconstructed source distribution was evaluated with the former presenting the dominant effect.
A comparison of maximal torque levels of the different planes of ...
African Journals Online (AJOL)
It is often assumed that because different sports require specific skills, the torque levels differ from sport to sport. The purpose of this study was to establish whether there were significant differences in the maximal torque levels of the different planes of movement of the shoulder-girdle complex for different types of sport.
DEFF Research Database (Denmark)
Hurst, Laurence D.; Ghanbarian, Avazeh T.; Forrest, Alistair R R
2015-01-01
…that the trend is shaped by the maximal expression level not the breadth of expression per se. Importantly, a limit to the maximal expression level explains biased tissue of expression profiles of X-linked genes. Tissues whose tissue-specific genes are very highly expressed (e.g., secretory tissues, tissues abundant in structural proteins) are also tissues in which gene expression is relatively rare on the X chromosome. These trends cannot be fully accounted for in terms of alternative models of biased expression. In conclusion, the notion that it is hard for genes on the Therian X to be highly expressed…
Hurst, Laurence D.; Ghanbarian, Avazeh T.; Forrest, Alistair R. R.; Huminiecki, Lukasz
2015-01-01
to the ancestral rate (per promoter) prior to the X chromosome formation, then the X is not a tolerable environment for genes with very high maximal net levels of expression, owing to transcriptional traffic jams. We test this hypothesis using The Encyclopedia
Directory of Open Access Journals (Sweden)
Sumiaki Maeo
Antagonistic muscle pairs cannot be fully activated simultaneously, even with maximal effort, under conditions of voluntary co-contraction, and their muscular activity levels are always below those during agonist contraction with maximal voluntary effort (MVE). Whether the muscular activity level during the task has trainability remains unclear. The present study examined this issue by comparing the muscular activity level during maximal voluntary co-contraction for highly experienced bodybuilders, who frequently perform voluntary co-contraction in their training programs, with that for untrained individuals (nonathletes). The electromyograms (EMGs) of biceps brachii and triceps brachii muscles during maximal voluntary co-contraction of elbow flexors and extensors were recorded in 11 male bodybuilders and 10 nonathletes, and normalized to the values obtained during the MVE of agonist contraction for each of the corresponding muscles (% EMGMVE). The involuntary coactivation level in antagonist muscle during the MVE of agonist contraction was also calculated. In both muscles, % EMGMVE values during the co-contraction task for bodybuilders were significantly higher (P<0.01) than those for nonathletes (biceps brachii: 66±14% in bodybuilders vs. 46±13% in nonathletes, triceps brachii: 74±16% vs. 57±9%). There was a significant positive correlation between the length of bodybuilding experience and muscular activity level during the co-contraction task (r = 0.653, P = 0.03). Involuntary antagonist coactivation level during MVE of agonist contraction was not different between the two groups. The current result indicates that long-term participation in voluntary co-contraction training progressively enhances muscular activity during maximal voluntary co-contraction.
Maeo, Sumiaki; Takahashi, Takumi; Takai, Yohei; Kanehisa, Hiroaki
2013-01-01
Antagonistic muscle pairs cannot be fully activated simultaneously, even with maximal effort, under conditions of voluntary co-contraction, and their muscular activity levels are always below those during agonist contraction with maximal voluntary effort (MVE). Whether the muscular activity level during the task has trainability remains unclear. The present study examined this issue by comparing the muscular activity level during maximal voluntary co-contraction for highly experienced bodybuilders, who frequently perform voluntary co-contraction in their training programs, with that for untrained individuals (nonathletes). The electromyograms (EMGs) of biceps brachii and triceps brachii muscles during maximal voluntary co-contraction of elbow flexors and extensors were recorded in 11 male bodybuilders and 10 nonathletes, and normalized to the values obtained during the MVE of agonist contraction for each of the corresponding muscles (% EMGMVE). The involuntary coactivation level in antagonist muscle during the MVE of agonist contraction was also calculated. In both muscles, % EMGMVE values during the co-contraction task for bodybuilders were significantly higher (P<0.01) than those for nonathletes (biceps brachii: 66±14% in bodybuilders vs. 46±13% in nonathletes, triceps brachii: 74±16% vs. 57±9%). There was a significant positive correlation between the length of bodybuilding experience and muscular activity level during the co-contraction task (r = 0.653, P = 0.03). Involuntary antagonist coactivation level during MVE of agonist contraction was not different between the two groups. The current result indicates that long-term participation in voluntary co-contraction training progressively enhances muscular activity during maximal voluntary co-contraction. PMID:24260233
Directory of Open Access Journals (Sweden)
Laurence D Hurst
2015-12-01
X chromosomes are unusual in many regards, not least of which is their nonrandom gene content. The causes of this bias are commonly discussed in the context of sexual antagonism and the avoidance of activity in the male germline. Here, we examine the notion that, at least in some taxa, functionally biased gene content may more profoundly be shaped by limits imposed on gene expression owing to haploid expression of the X chromosome. Notably, if the X, as in primates, is transcribed at rates comparable to the ancestral rate (per promoter) prior to the X chromosome formation, then the X is not a tolerable environment for genes with very high maximal net levels of expression, owing to transcriptional traffic jams. We test this hypothesis using The Encyclopedia of DNA Elements (ENCODE) and data from the Functional Annotation of the Mammalian Genome (FANTOM5) project. As predicted, the maximal expression of human X-linked genes is much lower than that of genes on autosomes: on average, maximal expression is three times lower on the X chromosome than on autosomes. Similarly, autosome-to-X retroposition events are associated with lower maximal expression of retrogenes on the X than seen for X-to-autosome retrogenes on autosomes. Also as expected, X-linked genes have a lesser degree of increase in gene expression than autosomal ones (compared to the human/Chimpanzee common ancestor) if highly expressed, but not if lowly expressed. The traffic jam model also explains the known lower breadth of expression for genes on the X (and the Z of birds), as genes with broad expression are, on average, those with high maximal expression. As then further predicted, highly expressed tissue-specific genes are also rare on the X and broadly expressed genes on the X tend to be lowly expressed, both indicating that the trend is shaped by the maximal expression level not the breadth of expression per se. Importantly, a limit to the maximal expression level explains biased tissue of expression profiles of X-linked genes.
Hurst, Laurence D.
2015-12-18
X chromosomes are unusual in many regards, not least of which is their nonrandom gene content. The causes of this bias are commonly discussed in the context of sexual antagonism and the avoidance of activity in the male germline. Here, we examine the notion that, at least in some taxa, functionally biased gene content may more profoundly be shaped by limits imposed on gene expression owing to haploid expression of the X chromosome. Notably, if the X, as in primates, is transcribed at rates comparable to the ancestral rate (per promoter) prior to the X chromosome formation, then the X is not a tolerable environment for genes with very high maximal net levels of expression, owing to transcriptional traffic jams. We test this hypothesis using The Encyclopedia of DNA Elements (ENCODE) and data from the Functional Annotation of the Mammalian Genome (FANTOM5) project. As predicted, the maximal expression of human X-linked genes is much lower than that of genes on autosomes: on average, maximal expression is three times lower on the X chromosome than on autosomes. Similarly, autosome-to-X retroposition events are associated with lower maximal expression of retrogenes on the X than seen for X-to-autosome retrogenes on autosomes. Also as expected, X-linked genes have a lesser degree of increase in gene expression than autosomal ones (compared to the human/Chimpanzee common ancestor) if highly expressed, but not if lowly expressed. The traffic jam model also explains the known lower breadth of expression for genes on the X (and the Z of birds), as genes with broad expression are, on average, those with high maximal expression. As then further predicted, highly expressed tissue-specific genes are also rare on the X and broadly expressed genes on the X tend to be lowly expressed, both indicating that the trend is shaped by the maximal expression level not the breadth of expression per se. Importantly, a limit to the maximal expression level explains biased tissue of expression profiles of X-linked genes.
Vera, Jesús; Jiménez, Raimundo; Madinabeitia, Iker; Masiulis, Nerijus; Cárdenas, David
2017-10-01
Fitness level modulates the physiological responses to exercise for a variety of indices. While intense bouts of exercise have been demonstrated to increase tear osmolarity (Tosm), it is not known whether fitness level can affect the Tosm response to acute exercise. This study aims to compare the effect of a maximal incremental test on Tosm between trained and untrained military helicopter pilots. Nineteen military helicopter pilots (ten trained and nine untrained) performed a maximal incremental test on a treadmill. A tear sample was collected before and after physical effort to determine the exercise-induced changes in Tosm. The Bayesian statistical analysis demonstrated that Tosm significantly increased from 303.72 ± 6.76 to 310.56 ± 8.80 mmol/L after performance of a maximal incremental test. However, while the untrained group showed an acute Tosm rise (12.33 mmol/L of increment), the trained group maintained a stable Tosm after physical effort (1.45 mmol/L of increment). There was a significant positive linear association between fat indices and Tosm changes (correlation coefficients [r] range: 0.77-0.89), whereas the Tosm changes displayed a negative relationship with cardiorespiratory capacity (VO2 max; r = -0.75) and performance parameters (r = -0.75 for velocity, and r = -0.67 for time to exhaustion). The findings from this study provide evidence that fitness level is a major determinant of the Tosm response to maximal incremental physical effort, showing a fairly linear association with several indices related to fitness level. A high fitness level seems to be beneficial in avoiding Tosm changes as a consequence of intense exercise. Copyright © 2017 Elsevier Inc. All rights reserved.
Confidence and sensitivity of sea-level reconstructions
DEFF Research Database (Denmark)
Svendsen, Peter Limkilde
For the last two decades, satellite altimetry has provided a near-global view of spatial and temporal patterns in sea surface height (SSH). When combined with records from tide gauges, a historical reconstruction of sea level can be obtained; while tide gauge records span up to 200 years back… …nature of the data fields. We examine the sensitivity of a reconstruction with respect to the length of calibration time series, and the spatial distribution of tide gauges or other proxy data. In addition, we consider the effect of isolating certain physical phenomena (e.g. ENSO) and annual signals and modelling these outside the reconstruction. The implementation is currently based on data from compound satellite datasets (i.e., two decades of altimetry), and the Simple Ocean Data Assimilation (SODA) model, an existing reconstruction, where a calibration period can be easily extracted and our model…
Processing for maximizing the level of crystallinity in linear aromatic polyimides
St. Clair, Terry L. (Inventor)
1991-01-01
The process of the present invention includes first treating a polyamide acid (such as LARC-TPI polyamide acid) in an amide-containing solvent (such as N-methyl pyrrolidone) with an aprotic organic base (such as triethylamine), followed by dehydrating with an organic dehydrating agent (such as acetic anhydride). The level of crystallinity in the linear aromatic polyimide so produced is maximized without any degradation in the molecular weight thereof.
A Practical Algorithm for Reconstructing Level-1 Phylogenetic Networks
K.T. Huber; L.J.J. van Iersel (Leo); S.M. Kelk (Steven); R. Suchecki
2010-01-01
Recently, much attention has been devoted to the construction of phylogenetic networks, which generalize phylogenetic trees in order to accommodate complex evolutionary processes. Here we present an efficient, practical algorithm for reconstructing level-1 phylogenetic networks - a type of network…
A practical algorithm for reconstructing level-1 phylogenetic networks
Huber, K.T.; Iersel, van L.J.J.; Kelk, S.M.; Suchecki, R.
2011-01-01
Recently, much attention has been devoted to the construction of phylogenetic networks which generalize phylogenetic trees in order to accommodate complex evolutionary processes. Here, we present an efficient, practical algorithm for reconstructing level-1 phylogenetic networks - a type of network…
Arctic sea-level reconstruction analysis using recent satellite altimetry
DEFF Research Database (Denmark)
Svendsen, Peter Limkilde; Andersen, Ole Baltazar; Nielsen, Allan Aasbjerg
2014-01-01
We present a sea-level reconstruction for the Arctic Ocean using recent satellite altimetry data. The model, forced by historical tide gauge data, is based on empirical orthogonal functions (EOFs) from a calibration period; for this purpose, newly retracked satellite altimetry from ERS-1 and -2 and Envisat has been used. Despite the limited coverage of these datasets, we have made a reconstruction up to 82 degrees north for the period 1950–2010. We place particular emphasis on determining appropriate preprocessing for the tide gauge data, and on validation of the model, including the ability…
Sea level reconstruction from satellite altimetry and tide gauge data
DEFF Research Database (Denmark)
Svendsen, Peter Limkilde; Andersen, Ole Baltazar; Nielsen, Allan Aasbjerg
2012-01-01
Ocean satellite altimetry has provided global sets of sea level data for the last two decades, allowing determination of spatial patterns in global sea level. For reconstructions going back further than this period, tide gauge data can be used as a proxy. We examine different methods of combining satellite altimetry and tide gauge data using optimal weighting of tide gauge data, linear regression and EOFs, including automatic quality checks of the tide gauge time series. We attempt to augment the model using various proxies such as climate indices like the NAO and PDO, and investigate alternative… …for better sensitivity analysis with respect to spatial distribution, and tide gauge data are available around the Arctic Ocean, which may be important for a later high-latitude reconstruction.
Skull defect reconstruction based on a new hybrid level set.
Zhang, Ziqun; Zhang, Ran; Song, Zhijian
2014-01-01
Skull defect reconstruction is an important aspect of surgical repair. Historically, a skull defect prosthesis was created by the mirroring technique, surface fitting, or formed templates. These methods are not based on the anatomy of the individual patient's skull, and therefore the prosthesis cannot precisely correct the defect. This study presented a new hybrid level set model that takes into account both the global region information for optimization and the local edge information for accuracy, while avoiding re-initialization during the evolution of the level set function. Based on the new method, a skull defect was reconstructed, and the skull prosthesis was produced by rapid prototyping technology. The resulting prosthesis matched the skull defect well, with excellent individual adaptation.
Experiments in Reconstructing Twentieth-Century Sea Levels
Ray, Richard D.; Douglas, Bruce C.
2011-01-01
One approach to reconstructing historical sea level from the relatively sparse tide-gauge network is to employ Empirical Orthogonal Functions (EOFs) as interpolatory spatial basis functions. The EOFs are determined from independent global data, generally sea-surface heights from either satellite altimetry or a numerical ocean model. The problem is revisited here for sea level since 1900. A new approach to handling the tide-gauge datum problem by direct solution offers possible advantages over the method of integrating sea-level differences, with the potential of eventually adjusting datums into the global terrestrial reference frame. The resulting time series of global mean sea levels appears fairly insensitive to the adopted set of EOFs. In contrast, charts of regional sea level anomalies and trends are very sensitive to the adopted set of EOFs, especially for the sparser network of gauges in the early 20th century. The reconstructions appear especially suspect before 1950 in the tropical Pacific. While this limits some applications of the sea-level reconstructions, the sensitivity does appear adequately captured by formal uncertainties. All our solutions show regional trends over the past five decades to be fairly uniform throughout the global ocean, in contrast to trends observed over the shorter altimeter era. Consistent with several previous estimates, the global sea-level rise since 1900 is 1.70 +/- 0.26 mm/yr. The global trend since 1995 exceeds 3 mm/yr which is consistent with altimeter measurements, but this large trend was possibly also reached between 1935 and 1950.
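The "direct solution" of the tide-gauge datum problem can be illustrated by augmenting the least-squares system with one unknown constant offset per gauge, instead of differencing the records. The matrix layout below is a hypothetical sketch, not the authors' solver; note that a time-constant part of the EOF amplitudes is absorbed into the datums, so only amplitude anomalies are determined.

```python
import numpy as np

def fit_with_datums(E, obs):
    """Jointly estimate EOF amplitudes and per-gauge datum offsets by least squares.

    E:   (n_gauges, n_modes) EOF values at the gauge locations.
    obs: (n_times, n_gauges) gauge records, each offset by an unknown constant datum.
    """
    n_times, n_gauges = obs.shape
    n_modes = E.shape[1]
    A = np.zeros((n_times * n_gauges, n_times * n_modes + n_gauges))
    for t in range(n_times):
        rows = slice(t * n_gauges, (t + 1) * n_gauges)
        A[rows, t * n_modes:(t + 1) * n_modes] = E      # this epoch's amplitudes
        A[rows, n_times * n_modes:] = np.eye(n_gauges)  # shared datum offsets
    x, *_ = np.linalg.lstsq(A, obs.ravel(), rcond=None)
    amps = x[:n_times * n_modes].reshape(n_times, n_modes)
    return amps, x[n_times * n_modes:]

rng = np.random.default_rng(1)
E = rng.normal(size=(4, 2))
obs = rng.normal(size=(10, 2)) @ E.T + rng.normal(size=4)   # synthetic offset records
amps, datums = fit_with_datums(E, obs)
print(np.allclose(amps @ E.T + datums, obs))                # True: exact fit of a consistent system
```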
Multi-level damage identification with response reconstruction
Zhang, Chao-Dong; Xu, You-Lin
2017-10-01
Damage identification through finite element (FE) model updating usually forms an inverse problem. Solving the inverse identification problem for complex civil structures is very challenging since the dimension of potential damage parameters in a complex civil structure is often very large. Aside from the enormous computational effort needed in iterative updating, the ill-conditioning and non-global-identifiability features of the inverse problem can hinder the realization of model-updating-based damage identification for large civil structures. Following a divide-and-conquer strategy, a multi-level damage identification method is proposed in this paper. The entire structure is decomposed into several manageable substructures, and each substructure is further condensed into a macro element using the component mode synthesis (CMS) technique. The damage identification is performed at two levels: the first at the macro-element level, to locate the potentially damaged region, and the second over the suspicious substructures, to further locate as well as quantify the damage severity. In each level's identification, the damage search space over which model updating is performed is notably narrowed down, not only reducing the amount of computation but also increasing damage identifiability. Besides, Kalman filter-based response reconstruction is performed at the second level to reconstruct the response of the suspicious substructure for exact damage quantification. Numerical studies and laboratory tests are both conducted on a simply supported overhanging steel beam for conceptual verification. The results demonstrate that the proposed multi-level damage identification via response reconstruction considerably improves the accuracy of damage localization and quantification.
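The Kalman filter-based response reconstruction used at the second level can be illustrated on a toy two-story shear model: only the first floor is measured, and the filter reconstructs the unmeasured second-floor response. All matrices, noise levels, and parameters below are invented for illustration, not taken from the paper's beam experiment.

```python
import numpy as np
from scipy.linalg import expm

# Toy 2-DOF shear-building model; state = [x1, x2, v1, v2].
m, k, c = 1.0, 100.0, 0.4
K = np.array([[2 * k, -k], [-k, k]])
C = np.array([[2 * c, -c], [-c, c]])
M = np.eye(2) * m
Ac = np.block([[np.zeros((2, 2)), np.eye(2)],
               [-np.linalg.inv(M) @ K, -np.linalg.inv(M) @ C]])
dt = 0.01
A = expm(Ac * dt)                       # discrete-time transition matrix
H = np.array([[1.0, 0, 0, 0]])          # sensor: 1st-floor displacement only
Q, R = 1e-6 * np.eye(4), np.array([[1e-4]])

rng = np.random.default_rng(2)
x = np.array([1.0, 0.5, 0.0, 0.0])      # true initial state
xe, P = np.zeros(4), np.eye(4)          # filter starts ignorant
err = []
for _ in range(2000):
    x = A @ x + rng.multivariate_normal(np.zeros(4), Q)
    z = H @ x + rng.normal(0, 1e-2, size=1)
    # Kalman predict / update
    xe = A @ xe
    P = A @ P @ A.T + Q
    S = H @ P @ H.T + R
    Kg = P @ H.T @ np.linalg.inv(S)
    xe = xe + (Kg @ (z - H @ xe)).ravel()
    P = (np.eye(4) - Kg @ H) @ P
    err.append(abs(x[1] - xe[1]))       # error in the unmeasured 2nd-floor displacement
print(np.mean(err[-500:]) < np.mean(err[:500]))   # True: the reconstruction converges
```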
Divide and conquer: intermediate levels of population fragmentation maximize cultural accumulation.
Derex, Maxime; Perreault, Charles; Boyd, Robert
2018-04-05
Identifying the determinants of cumulative cultural evolution is a key issue in the interdisciplinary field of cultural evolution. A widely held view is that large and well-connected social networks facilitate cumulative cultural evolution because they promote the spread of useful cultural traits and prevent the loss of cultural knowledge through factors such as drift. This view stems from models that focus on the transmission of cultural information, without considering how new cultural traits actually arise. In this paper, we review the literature from various fields that suggest that, under some circumstances, increased connectedness can decrease cultural diversity and reduce innovation rates. Incorporating this idea into an agent-based model, we explore the effect of population fragmentation on cumulative culture and show that, for a given population size, there exists an intermediate level of population fragmentation that maximizes the rate of cumulative cultural evolution. This result is explained by the fact that fully connected, non-fragmented populations are able to maintain complex cultural traits but produce insufficient variation and so lack the cultural diversity required to produce highly complex cultural traits. Conversely, highly fragmented populations produce a variety of cultural traits but cannot maintain complex ones. In populations with intermediate levels of fragmentation, cultural loss and cultural diversity are balanced in a way that maximizes cultural complexity. Our results suggest that population structure needs to be taken into account when investigating the relationship between demography and cumulative culture. This article is part of the theme issue 'Bridging cultural gaps: interdisciplinary studies in human cultural evolution'. © 2018 The Author(s).
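An agent-based sketch of the fragmentation setup looks roughly as below. This is illustrative scaffolding only: the copying-error, innovation, and migration rules are invented stand-ins, and the intermediate-fragmentation optimum reported by the authors emerges only under model details (trait recombination, structured loss) that this toy does not reproduce.

```python
import random

def simulate(n_agents=120, n_groups=1, steps=2000, migrate=0.01, seed=0):
    """Toy cumulative-culture model (not the authors' implementation).

    Each agent carries a trait level.  Agents copy the best model in their
    group with an error that grows with trait complexity (so high levels are
    hard to maintain), occasionally innovate, and occasionally migrate
    between groups, mixing diversity across fragments."""
    rng = random.Random(seed)
    size = n_agents // n_groups
    groups = [[0.0] * size for _ in range(n_groups)]
    for _ in range(steps):
        for g in groups:
            best = max(g)
            for i in range(len(g)):
                if rng.random() < 0.1:       # social learning event
                    err = rng.expovariate(1.0 / (0.1 + 0.02 * best))
                    g[i] = max(g[i], best - err)
                if rng.random() < 0.01:      # innovation attempt
                    g[i] += rng.random()
        if n_groups > 1 and rng.random() < migrate:   # migration between fragments
            a, b = rng.sample(range(n_groups), 2)
            i, j = rng.randrange(size), rng.randrange(size)
            groups[a][i], groups[b][j] = groups[b][j], groups[a][i]
    return max(max(g) for g in groups)

for n_groups in (1, 4, 12):
    print(n_groups, round(simulate(n_groups=n_groups), 2))
```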
Tao, R.; Tang, H.
Chocolate is one of the most popular food types and flavors in the world. Unfortunately, at present, chocolate products contain too much fat, contributing to obesity. For example, a typical molding chocolate contains up to 40% fat in total, and chocolate for covering ice cream contains 50–60% fat. Especially because children are the leading chocolate consumers, reducing the fat level in chocolate products to make them healthier is important and urgent. While this issue was called to attention and elaborated in articles and books decades ago, and led to some patent applications, unfortunately no actual solution was found. Why is reducing fat in chocolate so difficult? What is the underlying physical mechanism? We have found that this issue is deeply related to the basic science of soft matter, especially to viscosity and the maximally random jammed (MRJ) density φx. All chocolate production handles liquid chocolate, a suspension of cocoa solid particles in melted fat, mainly cocoa butter. The fat level cannot be lower than 1 − φx if the liquid chocolate is to flow. Here we show that with the application of an electric field to liquid chocolate, we can aggregate the suspended particles into prolate spheroids. This microstructure change reduces the liquid chocolate's viscosity along the flow direction and increases its MRJ density significantly. Hence the fat level in chocolate can be effectively reduced. We look forward to a new class of healthier and tasty chocolate coming to the market soon. Dept. of Physics, Temple Univ, Philadelphia, PA 19122.
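The flow constraint quoted above, that the fat fraction must be at least 1 − φx, is easy to tabulate; the φx values below are illustrative (roughly spheres versus field-aggregated spheroids), not measured values from the work.

```python
def min_fat_fraction(phi_mrj):
    """Minimum liquid (fat) volume fraction for the suspension to flow,
    given the maximally random jammed density phi_mrj of the solid particles."""
    return 1.0 - phi_mrj

# Illustrative phi_x values only (assumed, not measured):
for phi in (0.64, 0.70, 0.74):
    print(f"phi_x = {phi:.2f} -> minimum fat fraction = {min_fat_fraction(phi):.0%}")
# e.g. the first line prints "phi_x = 0.64 -> minimum fat fraction = 36%"
```

Raising φx from 0.64 to 0.74 thus lowers the minimum fat level from 36% to 26%, which is the lever the electric-field aggregation exploits.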
Effect of Acute Maximal Exercise on Circulating Levels of Interleukin-12 during Ramadan Fasting.
Abedelmalek, Salma; Souissi, Nizar; Takayuki, Akimoto; Hadouk, Sami; Tabka, Zouhair
2011-09-01
The purpose of this study was to examine the effects of Ramadan fasting on circulating levels of interleukin-12 (IL-12) after a brief maximal exercise. Nine subjects performed a Wingate test on three different occasions: (i) the first week of Ramadan (1WR), (ii) the fourth week of Ramadan (4WR), and (iii) three weeks after Ramadan (AR). Blood samples were taken before, immediately after, and 60 min after the exercise. Plasma concentrations of IL-12 were measured using enzyme-linked immunosorbent assay. Variance analysis revealed no significant effect of Ramadan on P(peak) and P(mean) during the three testing periods. Considering the effect of Ramadan on plasma concentrations of IL-12, analysis of variance revealed a significant Ramadan effect (F(2,16) = 66.27) and a significant time effect (F(2,16) = 120.66), but no significant (Ramadan × time) interaction (F(4,32) = 2.40; P > 0.05). For all measures, IL-12 levels were lower during 1WR and 4WR in comparison with AR. Considering time effects, IL-12 levels measured immediately after the exercise were significantly higher than those measured before and at 60 minutes after the exercise.
Directory of Open Access Journals (Sweden)
Anu Raisanen
2014-05-01
Physical inactivity is a modifiable risk factor for cardiovascular (CV) and metabolic disorders. VO2max is the best method to assess cardio-respiratory fitness level, but it is poorly adopted in clinical practice. Sudomotor dysfunction may develop early in metabolic diseases. This study aimed at comparing established CV risk evaluation techniques with SUDOSCAN, a quick and non-invasive method to assess sudomotor function. A questionnaire was filled in; a physical examination and VO2max estimation using a maximal test on a bicycle ergometer were performed on active Finnish workers. Hand and foot electrochemical skin conductance (ESC) were measured to assess sudomotor function. Subjects with the lowest fitness level were involved in a 12-month training program with recording of their weekly physical activity and a final fitness level evaluation. Significant differences in BMI, waist and body fat were seen according to SUDOSCAN risk score classification. Correlation between the risk score and estimated VO2max was r = −0.57, p < 0.0001 for women and r = −0.48, p < 0.0001 for men. A significant increase in estimated VO2max, in hand and foot ESC and in risk score was observed after lifestyle intervention and was more important in subjects with the highest weekly activity. SUDOSCAN could be used to assess cardio-metabolic disease risk status in a working population and to follow individual lifestyle interventions.
DEFF Research Database (Denmark)
Hove, Jens D; Rasmussen, Rune; Freiberg, Jacob
2008-01-01
BACKGROUND: The purpose of this study was to investigate the quantitative properties of ordered-subset expectation maximization (OSEM) on kinetic modeling with nitrogen 13 ammonia compared with filtered backprojection (FBP) in healthy subjects. METHODS AND RESULTS: Cardiac N-13 ammonia positron emission tomography (PET) studies from 20 normal volunteers at rest and during dipyridamole stimulation were analyzed. Image data were reconstructed with either FBP or OSEM. FBP- and OSEM-derived input functions and tissue curves were compared together with the myocardial blood flow and spillover values. … Differences between FBP and OSEM flow values were observed, with a flow underestimation of 45% (rest/dipyridamole) in the septum and of 5% (rest) and 15% (dipyridamole) in the lateral myocardial wall. CONCLUSIONS: OSEM reconstruction of myocardial perfusion images with N-13 ammonia and PET produces high-quality images for visual…
Karakatsanis, Nicolas A.; Casey, Michael E.; Lodge, Martin A.; Rahmim, Arman; Zaidi, Habib
2016-01-01
Whole-body (WB) dynamic PET has recently demonstrated its potential in translating the quantitative benefits of parametric imaging to the clinic. Post-reconstruction standard Patlak (sPatlak) WB graphical analysis utilizes multi-bed multi-pass PET acquisition to produce quantitative WB images of the…
DEFF Research Database (Denmark)
Bradley, Paul S; Mohr, Magni; Bendiksen, Mads
2011-01-01
The aims of this study were to (1) determine the reproducibility of sub-maximal and maximal versions of the Yo-Yo intermittent endurance test level 2 (Yo-Yo IE2 test), (2) assess the relationship between the Yo-Yo IE2 test and match performance and (3) quantify the sensitivity of the Yo-Yo IE2 test to detect test-retest changes and discriminate between performance for different playing standards and positions in elite soccer. Elite (n = 148) and sub-elite male (n = 14) soccer players carried out the Yo-Yo IE2 test on several occasions over consecutive seasons. Test-retest coefficients of variation (CV) in Yo-Yo IE2 test performance and heart rate after 6 min were 3.9% (n = 37) and 1.4% (n = 32), respectively. Elite male senior and youth U19 players' Yo-Yo IE2 performances were better (P …
Optimization of hybrid iterative reconstruction level in pediatric body CT.
Karmazyn, Boaz; Liang, Yun; Ai, Huisi; Eckert, George J; Cohen, Mervyn D; Wanner, Matthew R; Jennings, S Gregory
2014-02-01
The objective of our study was to attempt to optimize the level of hybrid iterative reconstruction (HIR) in pediatric body CT. One hundred consecutive chest or abdominal CT examinations were selected. For each examination, six series were obtained: one filtered back projection (FBP) and five HIR (iDose(4)) series, levels 2-6. Two pediatric radiologists, blinded to noise measurements, independently chose the optimal HIR level and then rated series quality. We measured CT number (mean in Hounsfield units) and noise (SD in Hounsfield units) changes by placing regions of interest in the liver, muscles, subcutaneous fat, and aorta. A mixed-model analysis-of-variance test was used to analyze the correlation of noise reduction at the optimal HIR level with baseline FBP noise. One hundred CT examinations were performed of 88 patients (52 females and 36 males) with a mean age of 8.5 years (range, 19 days-18 years); 12 patients had both chest and abdominal CT studies. The radiologists agreed to within one level of HIR in 92 of 100 studies. The mean quality rating was significantly higher for HIR than for FBP (3.6 vs 3.3, respectively; p …) when the optimal HIR level was used (p …); … was optimal for most studies. The optimal HIR level was less effective in reducing liver noise in children with lower baseline noise.
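The ROI measurements behind the comparison, mean CT number and noise as the SD in Hounsfield units, can be sketched as below, on a synthetic image rather than patient data; the ROI placement and HU values are invented.

```python
import numpy as np

def roi_stats(image, center, radius):
    """Mean (HU) and noise (SD of HU) in a circular region of interest,
    as used to compare FBP with iterative-reconstruction series (sketch)."""
    yy, xx = np.indices(image.shape)
    mask = (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius ** 2
    vals = image[mask]
    return float(vals.mean()), float(vals.std(ddof=1))

# Synthetic "liver" patch: mean 60 HU, noise SD 15 HU (illustrative numbers).
rng = np.random.default_rng(3)
liver_fbp = 60 + 15 * rng.standard_normal((256, 256))
mean_hu, noise = roi_stats(liver_fbp, center=(128, 128), radius=20)
print(round(mean_hu), round(noise))
```

Repeating the same measurement on each reconstruction series (FBP and the HIR levels) gives the per-level noise-reduction numbers the study analyzes.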
Carmel, Liran; Wolf, Yuri I; Rogozin, Igor B; Koonin, Eugene V
2010-01-01
Evolutionary binary characters are features of species or genes, indicating the absence (value zero) or presence (value one) of some property. Examples include eukaryotic gene architecture (the presence or absence of an intron in a particular locus), gene content, and morphological characters. In many studies, the acquisition of such binary characters is assumed to represent a rare evolutionary event, and consequently, their evolution is analyzed using various flavors of parsimony. However, when gain and loss of the character are not rare enough, a probabilistic analysis becomes essential. Here, we present a comprehensive probabilistic model to describe the evolution of binary characters on a bifurcating phylogenetic tree. A fast software tool, EREM, is provided, using maximum likelihood to estimate the parameters of the model and to reconstruct ancestral states (presence and absence in internal nodes) and events (gain and loss events along branches).
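The maximum-likelihood machinery a tool like EREM relies on can be illustrated for a single binary character on a tiny hard-coded tree: a two-state continuous-time Markov model with gain and loss rates, Felsenstein pruning for the likelihood, and a posterior for the root state. The tree, rates, and stationary prior below are invented for illustration and are not EREM's actual interface.

```python
import numpy as np

def pmat(t, gain, loss):
    """2-state transition matrix: pmat(t)[i, j] = Pr(state j after time t | state i)."""
    s = gain + loss
    e = np.exp(-s * t)
    return np.array([[(loss + gain * e) / s, gain * (1 - e) / s],
                     [loss * (1 - e) / s, (gain + loss * e) / s]])

def prune(node, gain, loss):
    """Felsenstein pruning: partial likelihoods [L(absent), L(present)] at a node.
    A leaf is an int (0 = absent, 1 = present); an internal node is
    ((child, branch_length), (child, branch_length))."""
    if isinstance(node, int):
        like = np.zeros(2)
        like[node] = 1.0
        return like
    (left, tl), (right, tr) = node
    return (pmat(tl, gain, loss) @ prune(left, gain, loss)) * \
           (pmat(tr, gain, loss) @ prune(right, gain, loss))

# Hypothetical tree ((A=1, B=1), (C=0, D=1)) with unit branch lengths.
tree = ((((1, 1.0), (1, 1.0)), 1.0), (((0, 1.0), (1, 1.0)), 1.0))
gain, loss = 0.2, 1.0                         # gain rare, loss common (invented rates)
like = prune(tree, gain, loss)
pi = np.array([loss, gain]) / (gain + loss)   # stationary prior at the root
post = pi * like / (pi * like).sum()
print(round(float(post[1]), 3))               # posterior Pr(character present at root)
```

Maximizing the total log-likelihood over `gain` and `loss` across many characters is what turns this into parameter estimation; the posterior above is the ancestral-state reconstruction for one node.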
Maximal doses of atorvastatin and rosuvastatin are highly effective in lowering low-density lipoprotein (LDL) cholesterol and triglyceride levels; however, rosuvastatin has been shown to be significantly more effective than atorvastatin in lowering LDL cholesterol and in increasing high-density lipoprotein (HDL) cholesterol…
Level-set-based reconstruction algorithm for EIT lung images: first clinical results.
Rahmati, Peyman; Soleimani, Manuchehr; Pulletz, Sven; Frerichs, Inéz; Adler, Andy
2012-05-01
We show the first clinical results using the level-set-based reconstruction algorithm for electrical impedance tomography (EIT) data. The level-set-based reconstruction method (LSRM) allows the reconstruction of non-smooth interfaces between image regions, which are typically smoothed by traditional voxel-based reconstruction methods (VBRMs). We develop a time difference formulation of the LSRM for 2D images. The proposed reconstruction method is applied to reconstruct clinical EIT data of a slow flow inflation pressure-volume manoeuvre in lung-healthy and adult lung-injury patients. Images from the LSRM and the VBRM are compared. The results show comparable reconstructed images, but with an improved ability to reconstruct sharp conductivity changes in the distribution of lung ventilation using the LSRM.
Maximal violation of Clauser-Horne-Shimony-Holt inequality for four-level systems
International Nuclear Information System (INIS)
Fu Libin; Chen Jingling; Chen Shigang
2004-01-01
The Clauser-Horne-Shimony-Holt inequality for bipartite systems of four dimensions is studied in detail by employing unbiased eight-port beam-splitter measurements. Uniform formulas for the maximum and minimum values of this inequality for such measurements are obtained. Based on these formulas, we show that an optimal nonmaximally entangled state is about 6% more resistant to noise than the maximally entangled one. We also give the optimal state and the optimal angles, which are important for experimental realization.
Directory of Open Access Journals (Sweden)
Rodrigo Luiz Vancini
2015-01-01
Objective: To investigate the correlation between cardiorespiratory fitness and mood state in individuals with temporal lobe epilepsy (TLE). Method: Individuals with TLE (n = 20) and healthy control subjects (C, n = 20) were evaluated. Self-rating questionnaires were used to assess mood (POMS) and habitual physical activity (BAECKE). Cardiorespiratory fitness was evaluated by a maximal incremental test. Results: People with TLE presented lower cardiorespiratory fitness, higher levels of mood disorders, and lower levels of vigor when compared to healthy control subjects. A significant negative correlation was observed between levels of tension-anxiety and maximal aerobic power. Conclusion: Low levels of cardiorespiratory fitness may modify the health status of individuals with TLE and may be considered a risk factor for the development of mood disorders.
Kolstein, M.; De Lorenzo, G.; Chmeissani, M.
2014-04-01
The Voxel Imaging PET (VIP) Pathfinder project intends to show the advantages of using pixelated solid-state technology for nuclear medicine applications. It proposes designs for Positron Emission Tomography (PET), Positron Emission Mammography (PEM) and Compton gamma camera detectors with a large number of signal channels (of the order of 10^6). For the Compton camera, especially with a large number of readout channels, image reconstruction presents a big challenge. In this work, results are presented for the List-Mode Ordered Subset Expectation Maximization (LM-OSEM) image reconstruction algorithm on simulated data with the VIP Compton camera design. For the simulation, all realistic contributions to the spatial resolution are taken into account, including the Doppler broadening effect. The results show that even with a straightforward implementation of LM-OSEM, good images can be obtained for the proposed Compton camera design. Results are shown for various phantoms, including extended sources, and with a distance between the field of view and the first detector plane equal to 100 mm, which corresponds to a realistic nuclear medicine environment.
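The expectation-maximization update at the heart of LM-OSEM can be sketched in its plain MLEM form, i.e. a single subset and a binned rather than list-mode formulation, on a 1D toy problem; the system matrix and phantom below are invented.

```python
import numpy as np

def mlem(sysmat, counts, n_iter=200):
    """Plain MLEM (OSEM with a single subset) - illustrative sketch.

    sysmat: (n_detectors, n_voxels), sysmat[d, v] = Pr(detect in d | emit in v).
    counts: (n_detectors,) measured counts."""
    img = np.ones(sysmat.shape[1])
    sens = sysmat.sum(axis=0)                 # sensitivity image
    for _ in range(n_iter):
        fwd = sysmat @ img                    # forward projection of current estimate
        ratio = np.where(fwd > 0, counts / np.maximum(fwd, 1e-12), 0.0)
        img *= (sysmat.T @ ratio) / np.maximum(sens, 1e-12)
    return img

# Toy 1D problem: 2 hot voxels among 8, with a blurred detector response.
rng = np.random.default_rng(4)
truth = np.zeros(8); truth[[2, 5]] = 100.0
A = np.eye(8) + 0.3 * np.eye(8, k=1) + 0.3 * np.eye(8, k=-1)
counts = rng.poisson(A @ truth)
est = mlem(A, counts)
print(np.argsort(est)[-2:])                   # indices of the two hottest voxels
```

The list-mode variant applies the same multiplicative update per detected event instead of per detector bin, which is what makes it tractable for ~10^6-channel Compton data.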
Grant, K.; Rohling, E. J.; Amies, J.
2017-12-01
Sea-level (SL) reconstructions over glacial-interglacial timeframes are critical for understanding the equilibrium response of ice sheets to sustained warming. In particular, continuous and high-resolution SL records are essential for accurately quantifying 'natural' rates of SL rise. Global SL changes are well-constrained since the last glacial maximum (~20,000 years ago; 20 ky) by radiometrically-dated corals and paleoshoreline data, and fairly well-constrained over the last glacial cycle (~150 ky). Prior to that, however, studies of ice-volume:SL relationships tend to rely on benthic δ18O, as geomorphological evidence is far more sparse and less reliably dated. An alternative SL reconstruction method (the 'marginal basin' approach) was developed for the Red Sea over 500 ky, and recently attempted for the Mediterranean over 5 My (Rohling et al., 2014, Nature). This method exploits the strong sensitivity of seawater δ18O in these basins to SL changes in the relatively narrow and shallow straits which connect the basins with the open ocean. However, the initial Mediterranean SL method did not resolve sea-level highstands during Northern Hemisphere insolation maxima, when African monsoon run-off - strongly depleted in δ18O - reached the Mediterranean. Here, we present improvements to the 'marginal basin' sea-level reconstruction method. These include a new 'Med-Red SL stack', which combines new probabilistic Mediterranean and Red Sea sea-level stacks spanning the last 500 ky. We also show how a box model-data comparison of water-column δ18O changes over a monsoon interval allows us to quantify the monsoon versus SL δ18O imprint on Mediterranean foraminiferal carbonate δ18O records. This paves the way for a more accurate and fully continuous SL reconstruction extending back through the Pliocene.
DEFF Research Database (Denmark)
Datta, Pameli; Philipsen, Peter A.; Olsen, Peter
2016-01-01
Vitamin D influences skeletal health as well as other aspects of human health. Even when the most obvious sources of variation such as solar UVB exposure, latitude, season, clothing habits, skin pigmentation and ethnicity are selected for, variation in the serum 25-hydroxyvitamin D (25(OH)D) response to UVB remains extensive and unexplained. Our study assessed the inter-personal variation in the 25(OH)D response to UVR and the maximal obtainable 25(OH)D level in 22 healthy participants (220 samples) with similar skin pigmentation during winter, with negligible ambient UVB. Participants received identical UVB doses on identical body areas until a maximal level of 25(OH)D was reached. Major inter-personal variation in both the maximal obtainable UVB-induced 25(OH)D level (range 85–216 nmol l−1, mean 134 nmol l−1) and the total increase in 25(OH)D (range 3–139 nmol l−1, mean 48 nmol l−1) was found…
The Red Sea during the Last Glacial Maximum: implications for sea level reconstructions
Gildor, H.; Biton, E.; Peltier, W. R.
2006-12-01
The Red Sea (RS) is a semi-enclosed basin connected to the Indian Ocean via a narrow and shallow strait, surrounded by arid areas, and exhibits high sensitivity to atmospheric changes and sea-level reduction. We have used the MIT GCM to investigate the changes in the hydrography and circulation of the RS in response to reduced sea level, variability in the Indian monsoons, and changes in atmospheric temperature and humidity that occurred during the Last Glacial Maximum (LGM). The model results show high sensitivity to sea-level reduction, especially in the salinity field (salinity increasing with the reduction in sea level), together with a mild atmospheric impact. Sea-level reduction decreases the stratification, increases subsurface temperatures, and alters the circulation pattern at the Strait of Bab el Mandab, which experiences a transition from submaximal to maximal flow. The reduction in sea level at the LGM alters the location of deep water formation, which shifts to an open-sea convective site in the northern part of the RS, compared to the present-day situation in which deep water is formed from the Gulf of Suez outflow. Our main result, based both on the GCM and on a simple hydraulic control model that takes into account mixing processes at the Strait of Bab el Mandab, is that sea level was reduced by only ~100 m in the Bab el Mandab region during the LGM, i.e. the water depth at the Hanish sill (the shallowest part of the Strait of Bab el Mandab) was around 34 m. This result agrees with the recent reconstruction of the LGM low stand of the sea in this region based upon the ICE-5G (VM2) model of Peltier (2004).
Directory of Open Access Journals (Sweden)
Thiago M Pais
2013-06-01
The yeast Saccharomyces cerevisiae is able to accumulate ≥17% ethanol (v/v) by fermentation in the absence of cell proliferation. The genetic basis of this unique capacity is unknown. Up to now, all research has focused on tolerance of yeast cell proliferation to high ethanol levels. Comparison of maximal ethanol accumulation capacity and ethanol tolerance of cell proliferation in 68 yeast strains showed a poor correlation, but higher ethanol tolerance of cell proliferation clearly increased the likelihood of superior maximal ethanol accumulation capacity. We have applied pooled-segregant whole-genome sequence analysis to identify the polygenic basis of these two complex traits using segregants from a cross of a haploid derivative of the sake strain CBS1585 and the lab strain BY. From a total of 301 segregants, 22 superior segregants accumulating ≥17% ethanol in small-scale fermentations and 32 superior segregants growing in the presence of 18% ethanol were separately pooled and sequenced. Plotting SNP variant frequency against chromosomal position revealed eleven and eight Quantitative Trait Loci (QTLs) for the two traits, respectively, and showed that the genetic basis of the two traits is partially different. Fine-mapping and Reciprocal Hemizygosity Analysis identified ADE1, URA3, and KIN3, encoding a protein kinase involved in DNA damage repair, as specific causative genes for maximal ethanol accumulation capacity. These genes, as well as the previously identified MKT1 gene, were not linked in this genetic background to tolerance of cell proliferation to high ethanol levels. The superior KIN3 allele contained two SNPs, which are absent in all yeast strains sequenced up to now. This work provides the first insight into the genetic basis of maximal ethanol accumulation capacity in yeast and reveals for the first time the importance of DNA damage repair in yeast ethanol tolerance.
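The "SNP variant frequency against chromosomal position" step can be sketched as follows: smooth pooled-segregant variant frequencies along a chromosome and flag windows deviating from the 50% expected when a locus is unlinked to the selected trait. The windowing, threshold, and synthetic data are illustrative, not the study's statistics.

```python
import numpy as np

def qtl_regions(freq, window=50, expected=0.5, z=5.0):
    """Flag positions where the windowed mean SNP variant frequency deviates
    from the unlinked expectation of 50% (sketch of pooled-segregant mapping)."""
    kernel = np.ones(window) / window
    smooth = np.convolve(freq, kernel, mode="valid")
    centers = np.arange(window // 2, window // 2 + smooth.size)
    se = freq.std() / np.sqrt(window)        # rough SE of a windowed mean
    return centers[np.abs(smooth - expected) > z * se]

rng = np.random.default_rng(5)
freq = rng.normal(0.5, 0.05, size=2000)      # synthetic unlinked SNP frequencies
freq[800:900] += 0.3                         # simulated QTL-linked region
hits = qtl_regions(freq)
print(hits.min(), hits.max())                # flagged span near positions 800-900
```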
Idea Sharing: How to Maximize Participation in a Mixed-Level English Class
Carlson, Gordon D.
2015-01-01
Teaching a class of mixed EFL/ESL levels can be problematic for both instructors and students. The disparate levels of ability often mean that some students are not challenged enough while others struggle to keep pace. Drawing on experience in the university classroom in Japan, this practice promotes good preparation, self-reliance, inclusiveness,…
Hydrological forecast of maximal water level in Lepenica river basin and flood control measures
Directory of Open Access Journals (Sweden)
Milanović Ana
2006-01-01
The Lepenica river basin has become an axis of economic and urban development in the Šumadija district. However, given the disordered regime of the Lepenica River and its tributaries, there is insufficient water for water supply and irrigation, while, on the other hand, the area suffers major flood and torrent damage (especially the Kragujevac basin). The paper presents flood problems in the river basin, maximum water level forecasts, and the flood control measures carried out so far. Potential solutions for achieving effective flood control are suggested as well.
Principles and reconstruction of the ancient sea levels during the Quaternary
International Nuclear Information System (INIS)
Martin, L.; Flexor, J.M.; Suguio, K.
1986-01-01
This work focuses on the multiple aspects of reconstructing ancient sea levels during the Quaternary. Relative sea-level fluctuations are produced by true variations of the sea level (eustasy) and by changes in land level (tectonism and isostasy). Changes in relative level are reconstructed from several lines of evidence of these fluctuations, recognized in time and space. To place them in space, it is necessary to know their present altitude relative to their original altitude, that is, to determine their position relative to sea level at the time of their formation or sedimentation. Their situation in time is determined by dating the moment of their formation or sedimentation, using isotopic, archaeological, and other methods. When numerous ancient levels can be reconstructed across a considerable time interval, it is possible to delineate the sea-level fluctuation curve for that period. (C.D.G.) [pt]
Sea level reconstructions from altimetry and tide gauges using independent component analysis
Brunnabend, Sandra-Esther; Kusche, Jürgen; Forootan, Ehsan
2017-04-01
Many reconstructions of global and regional sea level rise derived from tide gauges and satellite altimetry have used the method of empirical orthogonal functions (EOF) to reduce noise, improve the spatial resolution of the reconstructed outputs, and investigate the different signals in climate time series. However, the second-order EOF method has some limitations, e.g. in the separation of individual physical signals into different modes of sea level variations and in the capability to physically interpret the different modes, as they are assumed to be orthogonal. Therefore, we investigate the use of the more advanced statistical signal decomposition technique called independent component analysis (ICA) to reconstruct global and regional sea level change from satellite altimetry and tide gauge records. Our results indicate that the method used has almost no influence on the reconstruction of global mean sea level change (1.6 mm/yr from 1960-2010 and 2.9 mm/yr from 1993-2013); only different numbers of modes are needed for the reconstruction. Using the ICA method is advantageous for separating independent climate variability signals from regional sea level variations, as the mixing problem of the EOF method is strongly reduced. As an example, the modes most dominated by the El Niño-Southern Oscillation (ENSO) signal are compared. Regional sea level changes near Tianjin, China, Los Angeles, USA, and Majuro, Marshall Islands are reconstructed and the contributions from ENSO are identified.
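The EOF-versus-ICA distinction can be demonstrated on synthetic signals: PCA (the EOF analogue) returns orthogonal modes that typically remain mixtures of the sources, while FastICA recovers the independent sources up to sign and order. This uses scikit-learn and invented signals, not the sea-level data.

```python
import numpy as np
from sklearn.decomposition import FastICA, PCA

rng = np.random.default_rng(6)
t = np.linspace(0, 8, 2000)
s1 = np.sin(2 * t)                         # e.g. an ENSO-like oscillation
s2 = np.sign(np.sin(3 * t))                # a second, independent signal
S = np.c_[s1, s2]
A = np.array([[1.0, 0.6], [0.4, 1.0]])     # mixing: each "region" sees both sources
X = S @ A.T + 0.01 * rng.standard_normal((2000, 2))

S_ica = FastICA(n_components=2, random_state=0,
                whiten="unit-variance").fit_transform(X)   # independent components
S_pca = PCA(n_components=2).fit_transform(X)               # orthogonal (EOF-like) modes

def best_abs_corr(est, true):
    c = np.corrcoef(est.T, true.T)[:2, 2:]
    return np.abs(c).max(axis=0)           # best |correlation| per true source

print(best_abs_corr(S_ica, S))             # near 1: sources separated
print(best_abs_corr(S_pca, S))             # often lower: modes stay mixed
```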
Zhang, Zhengfang; Chen, Weifeng
2018-05-01
Maximization of the smallest eigenfrequency of the linearized elasticity system under an area constraint is investigated. The elasticity system is extended into a large background domain, but the void is vacuum and not filled with ersatz material. The piecewise constant level set (PCLS) method is applied to represent the two regions, the original material region and the void region. A quadratic PCLS function is proposed to represent the characteristic function. Consequently, the functional derivative of the smallest eigenfrequency with respect to the PCLS function takes a nonzero value in the original material region and zero in the void region. A penalty gradient algorithm is proposed, which initializes the whole background domain with the original material and decreases the area of the original material region until the area constraint is satisfied. 2D and 3D numerical examples are presented, illustrating the validity of the proposed algorithm.
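The quantity being maximized, the smallest eigenfrequency of a structure whose void is removed from the assembly rather than filled with ersatz material, can be illustrated with a 1D finite-element bar: a toy stand-in for the paper's 2D/3D elasticity system, with all parameters invented.

```python
import numpy as np
from scipy.linalg import eigh

def smallest_eigenfrequency(chi, E=1.0, rho=1.0, L=1.0):
    """Smallest eigenfrequency of a fixed-fixed elastic bar whose elements are
    material (chi = 1) or void (chi = 0).  Void elements are simply not
    assembled (no ersatz material); void-only DOFs are dropped."""
    n = len(chi)
    h = L / n
    K = np.zeros((n + 1, n + 1)); M = np.zeros((n + 1, n + 1))
    ke = E / h * np.array([[1, -1], [-1, 1]])          # element stiffness
    me = rho * h / 6 * np.array([[2, 1], [1, 2]])      # consistent element mass
    for e, c in enumerate(chi):
        if c:
            K[e:e + 2, e:e + 2] += ke
            M[e:e + 2, e:e + 2] += me
    keep = [i for i in range(1, n) if M[i, i] > 0]     # drop fixed ends and void DOFs
    w2 = eigh(K[np.ix_(keep, keep)], M[np.ix_(keep, keep)], eigvals_only=True)
    return float(np.sqrt(w2[0]))

full = smallest_eigenfrequency(np.ones(40, dtype=int))
print(round(full, 3))    # close to the exact value pi for E = rho = L = 1
```

A gradient algorithm like the paper's would repeatedly evaluate this eigenfrequency (and its derivative with respect to the indicator) while shrinking the material area toward the constraint.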
International Nuclear Information System (INIS)
Mito, Suzuko; Magota, Keiichi; Arai, Hiroshi; Omote, Hidehiko; Katsuura, Hidenori; Suzuki, Kotaro; Kubo, Naoki
2005-01-01
Image registration is becoming an increasingly important tool in SPECT. Recently, software based on mutual-information maximization has been developed for automatic multimodality image registration, and its accuracy is important for its application to image registration. During SPECT reconstruction, the projection data are pre-filtered in order to reduce Poisson noise, commonly using a Butterworth filter. We have investigated the dependence of the absolute accuracy of MRI-SPECT registration on the cut-off frequencies of a range of Butterworth filters. This study used a 3D Hoffman phantom (Model No. 9000, Data Spectrum Co.). For the reference volume, a magnetization prepared rapid gradient echo (MPRAGE) sequence was performed on a Vision MRI scanner (Siemens, 1.5 T). For the floating volumes, SPECT data of a phantom containing 99mTc at 85 kBq/mL were acquired with a GCA-9300 (Toshiba Medical Systems Co.). During SPECT, the orbito-meatal (OM) line of the phantom was tilted by 5 deg and 15 deg to mimic the incline of a patient's head. The projection data were pre-filtered with Butterworth filters (cut-off frequency varied from 0.24 to 0.94 cycles/cm in steps of 0.02, order 8). The automated registrations were performed using iNRT β version software (Nihon Medi. Co.), and the rotation angles of SPECT for registration were noted. In this study, the registrations of all SPECT data were successful. Plots of registration rotation angle against cut-off frequency were scattered and showed no correlation between the two. With changing cut-off frequency, the registration rotation angles ranged from -0.4 deg to +3.8 deg at a 5 deg tilt and from +12.7 deg to +19.6 deg at a 15 deg tilt, and showed variation even for slight differences in cut-off frequency. The absolute errors were a few degrees for any cut-off frequency. Regardless of the cut-off frequency, automatic registration using this software provides similar results. (author)
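The Butterworth pre-filter varied in the study can be sketched as a frequency-domain window with amplitude response |H(f)| = 1/sqrt(1 + (f/fc)^(2n)); the projection profile, count level, and pixel size below are illustrative assumptions, not the study's data.

```python
import numpy as np

# Frequency-domain Butterworth low-pass of a 1-D projection profile,
# as commonly applied before SPECT reconstruction (order 8 as in the study).
def butterworth_lowpass(profile, cutoff, order=8, pixel_cm=1.0):
    """cutoff in cycles/cm; amplitude response 1/sqrt(1 + (f/fc)^(2*order))."""
    freqs = np.abs(np.fft.fftfreq(profile.size, d=pixel_cm))   # cycles/cm
    window = 1.0 / np.sqrt(1.0 + (freqs / cutoff) ** (2 * order))
    return np.real(np.fft.ifft(np.fft.fft(profile) * window))

rng = np.random.default_rng(1)
clean = np.exp(-0.5 * ((np.arange(64) - 32) / 6.0) ** 2)   # smooth activity profile
noisy = rng.poisson(200 * clean) / 200.0                   # Poisson counting noise
smoothed = butterworth_lowpass(noisy, cutoff=0.24)

err_noisy = np.mean((noisy - clean) ** 2)
err_smooth = np.mean((smoothed - clean) ** 2)
print(f"MSE noisy={err_noisy:.5f}, filtered={err_smooth:.5f}")
```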
Barca, Patrizio; Giannelli, Marco; Fantacci, Maria Evelina; Caramella, Davide
2018-06-01
Computed tomography (CT) is a useful and widely employed imaging technique, which represents the largest source of population exposure to ionizing radiation in industrialized countries. Adaptive Statistical Iterative Reconstruction (ASIR) is an iterative reconstruction algorithm with the potential to allow reduction of radiation exposure while preserving diagnostic information. The aim of this phantom study was to assess the performance of ASIR, in terms of a number of image quality indices, when different reconstruction blending levels are employed. CT images of the Catphan-504 phantom were reconstructed using conventional filtered back-projection (FBP) and ASIR with reconstruction blending levels of 20, 40, 60, 80, and 100%. Noise, noise power spectrum (NPS), contrast-to-noise ratio (CNR) and modulation transfer function (MTF) were estimated for different scanning parameters and contrast objects. With increasing blending level of reconstruction, noise decreased and CNR increased non-linearly, by up to 50% and 100%, respectively. ASIR also modified the shape of the NPS curve. The MTF of ASIR-reconstructed images depended on tube load/contrast and decreased with increasing blending level of reconstruction. In particular, for low radiation exposure and low contrast acquisitions, ASIR showed lower performance than FBP in terms of spatial resolution for all blending levels of reconstruction. CT image quality varies substantially with the blending level of reconstruction. ASIR has the potential to reduce noise whilst maintaining diagnostic information in low radiation exposure CT imaging. Given the opposite variation of CNR and spatial resolution with the blending level of reconstruction, an optimal value of this parameter should be used for each specific clinical application.
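Two of the reported indices, noise and CNR, admit a minimal ROI-based sketch (assumed definitions on synthetic data; the authors' exact measurement protocol on the Catphan-504 is not reproduced here):

```python
import numpy as np

# Assumed ROI-based definitions:
#   noise = standard deviation in a uniform background ROI
#   CNR   = |mean(object) - mean(background)| / std(background)
rng = np.random.default_rng(2)
img = rng.normal(100.0, 5.0, size=(64, 64))    # uniform phantom slice, noise sd 5
obj = np.zeros((64, 64), dtype=bool)
obj[20:30, 20:30] = True                       # low-contrast insert ROI
img[obj] += 20.0                               # add 20 units of contrast
bg = ~obj

noise = img[bg].std()
cnr_val = abs(img[obj].mean() - img[bg].mean()) / noise
print(f"noise = {noise:.2f}, CNR = {cnr_val:.2f}")
```

An iterative reconstruction that halves the background noise would, by this definition, roughly double the CNR of the same insert, which is the trade-off the abstract quantifies against the MTF loss.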
Stable reconstruction of Arctic sea level for the 1950-2010 period
Svendsen, Peter Limkilde; Andersen, Ole Baltazar; Nielsen, Allan Aasbjerg
2016-01-01
Reconstruction of historical Arctic sea level is generally difficult due to the limited coverage and quality of both tide gauge and altimetry data in the area. Here a strategy to achieve a stable and plausible reconstruction of Arctic sea level from 1950 to today is presented. This work is based on the combination of tide gauge records and a new 20-year reprocessed satellite altimetry derived sea level pattern. Hence the study is limited to the area covered by satellite altimetry (68°N and 82...
Bhattacharya, Joydeep; Pereda, Ernesto; Ioannou, Christos
2018-02-01
Maximal information coefficient (MIC) is a recently introduced information-theoretic measure of functional association with promising potential for application to high-dimensional complex data sets. Here, we applied MIC to reveal the nature of the functional associations between different brain regions during the perception of binaural beat (BB); BB is an auditory illusion occurring when two sinusoidal tones of slightly different frequency are presented separately to each ear and an illusory beat at the difference frequency is perceived. We recorded sixty-four-channel EEG from two groups of participants, musicians and non-musicians, during the presentation of BB, and systematically varied the frequency difference from 1 Hz to 48 Hz. Participants were also presented with non-binaural beat (NBB) stimuli, in which the same frequency was presented to both ears. Across groups, as compared to NBB, (i) BB conditions produced the most robust changes in the MIC values at the whole-brain level when the frequency differences were in the classical alpha range (8-12 Hz), and (ii) the number of electrode pairs showing nonlinear associations decreased gradually with increasing frequency difference. Between groups, significant effects were found for BBs in the broad gamma frequency range (34-48 Hz), but no such effects were observed during NBB. Altogether, these results revealed the nature of functional associations at the whole-brain level during binaural beat perception and demonstrated the usefulness of MIC in characterizing interregional neural dependencies.
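The original MIC maximizes normalized mutual information over adaptively optimized grid partitions with a grid-size cap; the sketch below is a simplified MIC-like statistic using only equal-width bins over small grids, on made-up data, to show why it detects the nonlinear associations mentioned in the abstract.

```python
import numpy as np

def binned_mi(x, y, bins_x, bins_y):
    """Mutual information (nats) estimated from a 2-D histogram."""
    pxy, _, _ = np.histogram2d(x, y, bins=(bins_x, bins_y))
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

def mic_sketch(x, y, max_bins=8):
    """Max over small equal-width grids of MI normalized by log(min(bins))."""
    best = 0.0
    for bx in range(2, max_bins + 1):
        for by in range(2, max_bins + 1):
            best = max(best, binned_mi(x, y, bx, by) / np.log(min(bx, by)))
    return best

rng = np.random.default_rng(3)
x = rng.uniform(-1, 1, 2000)
y_dep = x ** 2 + 0.05 * rng.normal(size=2000)   # nonlinear association
y_ind = rng.uniform(-1, 1, 2000)                 # no association

mic_dep = mic_sketch(x, y_dep)
mic_ind = mic_sketch(x, y_ind)
print(f"MIC-like score: dependent={mic_dep:.3f}, independent={mic_ind:.3f}")
```

A linear correlation coefficient would be near zero for `y_dep` as well; the grid-based score is what lets MIC flag such nonlinear dependencies between electrode pairs.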
International Nuclear Information System (INIS)
Lind, B.K.; Mavroidis, P.; Hyoedynmaa, S.; Kappas, C.
1999-01-01
During the past decade, tumor and normal tissue reactions after radiotherapy have been increasingly quantified in radiobiological terms. For this purpose, response models describing the dependence of tumor and normal tissue reactions on the irradiated volume, heterogeneity of the delivered dose distribution and cell sensitivity variations can be taken into account. The probability of achieving a good treatment outcome can be increased by using an objective function such as P+, the probability of complication-free tumor control. A new procedure is presented, which quantifies P+ from the dose delivery on 2D surfaces and 3D volumes and helps the user of any treatment planning system (TPS) to select the best beam orientations, the best beam modalities and the most suitable beam energies. The final step of selecting the prescribed dose level is made by a renormalization of the entire dose plan until the value of P+ is maximized. The index P+ makes use of clinically established dose-response parameters for tumors and normal tissues of interest, in order to improve its clinical relevance. The results using P+ are compared against the assessments of experienced medical physicists and radiation oncologists for two clinical cases. It is observed that when the absorbed dose level for a given treatment plan is increased, the treatment outcome first improves rapidly. As the dose approaches the tolerance of normal tissues, the complication-free curve begins to drop. The optimal dose level is often just below this point, and it depends on the geometry of each patient and target volume. Furthermore, a more conformal dose delivery to the target results in a higher control rate for the same complication level. This effect can be quantified by the increased value of the P+ parameter. (orig.)
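The dose-renormalization behavior described here can be illustrated with generic sigmoid dose-response curves; the logistic form, D50, and gamma values below are hypothetical placeholders, and P+ is taken as TCP·(1−NTCP) under an independence assumption rather than the paper's exact formulation.

```python
import numpy as np

# Illustrative sketch only: P+ as a function of the prescribed dose level,
# with logistic dose-response curves for tumor control (TCP) and
# normal-tissue complication (NTCP). Parameters are hypothetical.
def logistic_response(dose, d50, gamma50):
    """Sigmoid with P(d50) = 0.5 and normalized slope gamma50 at d50."""
    return 1.0 / (1.0 + np.exp(4.0 * gamma50 * (1.0 - dose / d50)))

doses = np.linspace(30.0, 90.0, 601)          # candidate dose levels, Gy
tcp = logistic_response(doses, d50=55.0, gamma50=2.0)
ntcp = logistic_response(doses, d50=75.0, gamma50=3.0)
p_plus = tcp * (1.0 - ntcp)                   # independence assumption

d_opt = doses[np.argmax(p_plus)]
print(f"optimal prescribed dose ~ {d_opt:.1f} Gy, P+ = {p_plus.max():.3f}")
```

The curve reproduces the qualitative behavior in the abstract: P+ first rises with dose, then drops as the normal-tissue tolerance is approached, with the optimum just below that point.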
Coastal barrier stratigraphy for Holocene high-resolution sea-level reconstruction.
Costas, Susana; Ferreira, Óscar; Plomaritis, Theocharis A; Leorri, Eduardo
2016-12-08
The uncertainties surrounding present and future sea-level rise have revived the debate around sea-level changes through the deglaciation and mid- to late Holocene, from which arises a need for high-quality reconstructions of regional sea level. Here, we explore the stratigraphy of a sandy barrier to identify the best sea-level indicators and provide a new sea-level reconstruction for the central Portuguese coast over the past 6.5 ka. The selected indicators represent morphological features extracted from coastal barrier stratigraphy, beach berm and dune-beach contact. These features were mapped from high-resolution ground penetrating radar images of the subsurface and transformed into sea-level indicators through comparison with modern analogs and a chronology based on optically stimulated luminescence ages. Our reconstructions document a continuous but slow sea-level rise after 6.5 ka with an accumulated change in elevation of about 2 m. In the context of SW Europe, our results show good agreement with previous studies, including the Tagus isostatic model, with minor discrepancies that demand further improvement of regional models. This work reinforces the potential of barrier indicators to accurately reconstruct high-resolution mid- to late Holocene sea-level changes through simple approaches.
Reconstructing sea level from paleo and projected temperatures 200 to 2100 AD
DEFF Research Database (Denmark)
Grinsted, Aslak; Moore, John; Jevrejeva, Svetlana
2010-01-01
-proxy reconstructions assuming that the established relationship between temperature and sea level holds from 200 to 2100 AD. Over the last 2,000 years minimum sea level (-19 to -26 cm) occurred around 1730 AD, maximum sea level (12–21 cm) around 1150 AD. Sea level 2090–2099 is projected to be 0.9 to 1.3 m for the A1B...
Directory of Open Access Journals (Sweden)
M.G. Bara Filho
2008-01-01
Strength and flexibility are common components of a training program, and their maximal values are obtained through specific tests. However, little is known about the muscle damage caused by these procedures. Objective: to verify serum CK changes 24 h after a submaximal stretching routine and after static flexibility and maximal strength tests. Methods: the sample comprised 14 subjects (men and women, 28 ± 6 yr, physical education students). The volunteers were divided into a control group (CG) and an experimental group (EG) that was submitted to a stretching routine (EG-ST), a maximal static flexibility test (EG-FLEX) and a 1-RM test (EG-1RM), with a one-week interval between tests. Anthropometric characteristics were obtained with a digital scale with stadiometer (Filizola, São Paulo, Brazil, 2002). Serum CK was measured using the IFCC method (reference values 26-155 U/L). The De Lorme and Watkins technique was used to assess maximal strength through the bench press and leg press. The maximal flexibility test consisted of three 20-second sets to the point of maximal discomfort. Stretching was done within the normal range of motion for 6 seconds. Results: baseline and 24 h post CK values in the CG and EG (ST, FLEX and 1RM) were, respectively, 195.0 ± 129.5 vs. 202.1 ± 124.2; 213.3 ± 133.2 vs. 174.7 ± 115.8; 213.3 ± 133.2 vs. 226.6 ± 126.7; and 213.3 ± 133.2 vs. 275.9 ± 157.2. A significant difference (p = 0.02) between pre and post values was observed only in EG-1RM. Conclusion: only maximal dynamic strength exercise was capable of causing skeletal muscle damage.
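The pre/post comparison behind the reported p = 0.02 is a paired design; a sketch with invented CK values (not the study's raw data) and a hard-coded two-sided 5% critical value for df = 13:

```python
import numpy as np

# Paired t-test sketch: baseline vs. 24 h post-1RM CK in the same 14 subjects.
# The values below are made up for illustration only.
baseline = np.array([213.3, 180.0, 150.0, 95.0, 310.0, 240.0, 130.0,
                     420.0, 160.0, 200.0, 175.0, 260.0, 120.0, 330.0])
post_1rm = baseline + np.array([75.0, 40.0, 55.0, 30.0, 90.0, 60.0, 45.0,
                                110.0, 35.0, 70.0, 50.0, 85.0, 25.0, 105.0])

d = post_1rm - baseline
t_stat = d.mean() / (d.std(ddof=1) / np.sqrt(d.size))
t_crit = 2.160   # two-sided 5% critical value of Student's t, df = 13
verdict = "significant" if abs(t_stat) > t_crit else "not significant"
print(f"paired t = {t_stat:.2f} (critical 2.16): {verdict}")
```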
Klimek, Andrzej T; Lubkowska, Anna; Szyguła, Zbigniew; Frączek, Barbara; Chudecka, Monika
2011-06-01
The objective of this work was to determine the dynamics of maximal anaerobic power (MAP) of the lower limbs, following a single whole body cryostimulation treatment (WBC), in relation to the temperature of the thigh muscles. The subjects included 15 men and 15 women with an average age (± SD) of 21.6 ± 1.2 years. To evaluate the level of anaerobic power, the Wingate test was applied. The subjects were submitted to 6 WBC treatments at -130°C once a day. After each session they performed a single Wingate test at 15, 30, 45, 60, 75 or 90 min after leaving the cryogenic chamber; the order of the tests was randomized. All Wingate tests were preceded by an evaluation of thigh surface temperature with a thermovisual camera. The average thigh surface temperature (T(av)) in both men and women dropped significantly after the whole body cryostimulation treatment and then increased gradually. In women T(av) remained decreased for 75 min, whereas in men it did not return to the basal level until the 90th min. A statistically insignificant decrease in MAP was observed in women after WBC; on the contrary, a non-significant increase in MAP was observed in men. The course of changes in MAP following the treatment was similar in both sexes to the changes in thigh surface temperature, with the exception of the period between the 15th and 30th min. A shorter time to obtain MAP, compared to the initial level, was observed up to the 90th min in women and the 45th min in men after WBC. A single whole body cryostimulation may have a minor influence on short-term physical performance of supramaximal intensity, but it leads to improvement of velocity during the start, as evidenced by the shorter time required to obtain MAP.
Indian Academy of Sciences (India)
Abstract. It is shown that (i) every probability density is the unique maximizer of relative entropy in an appropriate class, and (ii) in the class of all pdf f that satisfy ∫ f hᵢ dμ = λᵢ for i = 1, 2, …, k, the maximizer of entropy is an f₀ that is proportional to exp(∑ cᵢ hᵢ) for some choice of the cᵢ. An extension of this to a continuum of ...
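Result (ii) can be checked numerically for a single constraint: with h(x) = x on [0, ∞) and E[X] = λ, the entropy maximizer exp(c·x)/Z must reduce to the exponential density with mean λ, i.e. c = −1/λ. A bisection sketch (the grid and bracket are arbitrary choices):

```python
import numpy as np

grid = np.linspace(0.0, 60.0, 20001)
dx = grid[1] - grid[0]

def tilted_mean(c):
    """Mean of f0(x) proportional to exp(c*x), normalized on the grid."""
    w = np.exp(c * grid)
    w /= w.sum() * dx                  # Riemann-sum normalization
    return (grid * w).sum() * dx

lam = 2.0                              # target constraint E[X] = lam
lo, hi = -5.0, -0.01                   # bracket; tilted_mean is increasing in c
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if tilted_mean(mid) < lam:
        lo = mid
    else:
        hi = mid
c = 0.5 * (lo + hi)
print(f"c = {c:.4f}  (theory: -1/lam = {-1.0 / lam:.4f})")
```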
Improving Phylogeny Reconstruction at the Strain Level Using Peptidome Datasets.
Blanco-Míguez, Aitor; Meier-Kolthoff, Jan P; Gutiérrez-Jácome, Alberto; Göker, Markus; Fdez-Riverola, Florentino; Sánchez, Borja; Lourenço, Anália
2016-12-01
Typical bacterial strain differentiation methods are often challenged by high genetic similarity between strains. To address this problem, we introduce a novel in silico peptide fingerprinting method based on conventional wet-lab protocols that enables the identification of potential strain-specific peptides. These can be further investigated using in vitro approaches, laying a foundation for the development of biomarker detection and application-specific methods. This novel method aims at reducing large amounts of comparative peptide data to binary matrices while maintaining a high phylogenetic resolution. The underlying case study concerns the Bacillus cereus group, namely the differentiation of Bacillus thuringiensis, Bacillus anthracis and Bacillus cereus strains. Results show that trees based on cytoplasmic and extracellular peptidomes are only marginally in conflict with those based on whole proteomes, as inferred by the established Genome-BLAST Distance Phylogeny (GBDP) method. Hence, these results indicate that the two approaches can most likely be used complementarily even in other organismal groups. The obtained results confirm previous reports about the misclassification of many strains within the B. cereus group. Moreover, our method was able to separate the B. anthracis strains with high resolution, similarly to the GBDP results as benchmarked via Bayesian inference and both Maximum Likelihood and Maximum Parsimony. In addition to the presented phylogenomic applications, whole-peptide fingerprinting might also become a valuable complementary technique to digital DNA-DNA hybridization, notably for bacterial classification at the species and subspecies level in the future.
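The reduction to binary matrices can be sketched as follows; the strains and peptide columns are invented toy data, and the resulting Hamming distances would feed a standard tree builder (neighbor joining, UPGMA) rather than the simple closest-pair check shown.

```python
import numpy as np

# Hypothetical binary fingerprint matrix: rows are strains, columns are
# candidate peptides, 1 = peptide predicted present in silico.
fingerprints = np.array([
    [1, 1, 0, 1, 0, 1, 1, 0],   # "cereus_A"
    [1, 1, 0, 1, 0, 1, 0, 0],   # "cereus_B"   (one peptide apart)
    [0, 1, 1, 0, 1, 0, 1, 1],   # "anthracis"  (distant)
])
names = ["cereus_A", "cereus_B", "anthracis"]

n = len(fingerprints)
dist = np.zeros((n, n))
for p in range(n):
    for q in range(n):
        dist[p, q] = np.mean(fingerprints[p] != fingerprints[q])  # Hamming

a, b = divmod(np.argmin(dist + np.eye(n)), n)   # ignore the zero diagonal
print(f"closest pair: {names[a]} / {names[b]} (distance {dist[a, b]:.3f})")
```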
Schramm, Georg; Holler, Martin; Rezaei, Ahmadreza; Vunckx, Kathleen; Knoll, Florian; Bredies, Kristian; Boada, Fernando; Nuyts, Johan
2018-02-01
In this article, we evaluate Parallel Level Sets (PLS) and Bowsher's method as segmentation-free anatomical priors for regularized brain positron emission tomography (PET) reconstruction. We derive the proximity operators for two PLS priors and use the EM-TV algorithm in combination with the first order primal-dual algorithm by Chambolle and Pock to solve the non-smooth optimization problem for PET reconstruction with PLS regularization. In addition, we compare the performance of two PLS versions against the symmetric and asymmetric Bowsher priors with quadratic and relative difference penalty function. For this aim, we first evaluate reconstructions of 30 noise realizations of simulated PET data derived from a real hybrid positron emission tomography/magnetic resonance imaging (PET/MR) acquisition in terms of regional bias and noise. Second, we evaluate reconstructions of a real brain PET/MR data set acquired on a GE Signa time-of-flight PET/MR in a similar way. The reconstructions of simulated and real 3D PET/MR data show that all priors were superior to post-smoothed maximum likelihood expectation maximization with ordered subsets (OSEM) in terms of bias-noise characteristics in different regions of interest where the PET uptake follows anatomical boundaries. Our implementation of the asymmetric Bowsher prior showed slightly superior performance compared with the two versions of PLS and the symmetric Bowsher prior. At very high regularization weights, all investigated anatomical priors suffer from the transfer of non-shared gradients.
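Bowsher's prior is defined by a neighbor-selection rule; a 1-D sketch (window size and kept-neighbor count are arbitrary choices, not the paper's settings) showing that the selected neighborhoods do not cross an anatomical edge:

```python
import numpy as np

# Bowsher neighbor selection in 1-D: for each voxel, keep the `keep` spatial
# neighbors whose anatomical (MR) values are most similar; the PET penalty
# is then applied only over those neighbors.
def bowsher_neighbors(mr, radius=2, keep=2):
    selected = []
    for i in range(mr.size):
        idx = [j for j in range(max(0, i - radius), min(mr.size, i + radius + 1))
               if j != i]
        idx.sort(key=lambda j: abs(mr[j] - mr[i]))   # most MR-similar first
        selected.append(sorted(idx[:keep]))
    return selected

mr = np.array([10.0, 10.0, 10.0, 50.0, 50.0, 50.0])  # sharp anatomical edge
nbrs = bowsher_neighbors(mr)
print(nbrs)
```

Voxels on either side of the edge select neighbors only from their own side, which is why such priors preserve boundaries without explicit segmentation.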
Statistical selection of tide gauges for Arctic sea-level reconstruction
DEFF Research Database (Denmark)
Svendsen, Peter Limkilde; Andersen, Ole Baltazar; Nielsen, Allan Aasbjerg
2015-01-01
In this paper, we seek an appropriate selection of tide gauges for Arctic Ocean sea-level reconstruction based on a combination of empirical criteria and statistical properties (leverages). Tide gauges provide the only in situ observations of sea level prior to the altimetry era. However, tide...... the "influence" of each Arctic tide gauge on the EOF-based reconstruction through the use of statistical leverage and use this as an indication in selecting appropriate tide gauges, in order to procedurally identify poor-quality data while still including as much data as possible. To accommodate sparse...
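Statistical leverage as used here is the diagonal of the hat matrix of the regression of sea level onto the reconstruction basis; a toy sketch with synthetic gauges (not the paper's data):

```python
import numpy as np

# Leverage sketch: columns of X hold reconstruction basis (EOF mode) values
# at the tide-gauge locations; the hat-matrix diagonal flags gauges that
# pull the fit disproportionately.
rng = np.random.default_rng(4)
X = rng.normal(size=(12, 3))         # 12 gauges, 3 leading modes
X[0] *= 8.0                          # one gauge with extreme basis values

H = X @ np.linalg.solve(X.T @ X, X.T)   # hat matrix X (X'X)^{-1} X'
leverage = np.diag(H)
print(np.round(leverage, 3))
```

The leverages sum to the number of modes (the trace of the hat matrix), so a single gauge with leverage near 1 is dominating one mode and is a candidate for exclusion.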
Storm Surge Reconstruction and Return Water Level Estimation in Southeast Asia for the 20th Century
Cid, Alba; Wahl, Thomas; Chambers, Don P.; Muis, Sanne
2018-01-01
We present a methodology to reconstruct the daily maximum storm surge levels, obtained from tide gauges, based on the surrounding atmospheric conditions from an atmospheric reanalysis (20th Century Reanalysis-20CR). Tide gauge records in Southeast Asia are relatively short, so this area is often
Tung, Yu-Tang; Lin, Lei-Chen; Liu, Ya-Ling; Ho, Shang-Tse; Lin, Chi-Yang; Chuang, Hsiao-Li; Chiu, Chien-Chao; Huang, Chi-Chang; Wu, Jyh-Horng
2015-12-01
Some species of the genus Rhododendron have been used in traditional medicine for arthritis, acute and chronic bronchitis, asthma, pain, inflammation, rheumatism, hypertension and metabolic diseases, and many species of the genus contain large numbers of phenolic compounds with antioxidant properties that could be developed into pharmaceutical products. In this study, the antioxidative phytochemicals of Rhododendron oldhamii Maxim. leaves were detected by an online HPLC-DPPH method. In addition, the anti-hyperuricemic effect of the active phytochemicals from R. oldhamii leaf extracts was investigated using potassium oxonate (PO)-induced acute hyperuricemia. Six phytochemicals, including (2R, 3R)-epicatechin (1), (2R, 3R)-taxifolin (2), (2R, 3R)-astilbin (3), hyposide (4), guaijaverin (5), and quercitrin (6), were isolated using the developed screening method. Of these, compounds 3, 4, 5, and 6 were found to be the major bioactive phytochemicals, and their contents were determined to be 130.8 ± 10.9, 105.5 ± 8.5, 104.1 ± 4.7, and 108.6 ± 4.0 mg per gram of EtOAc fraction, respectively. In addition, the four major bioactive phytochemicals at the same dosage (100 mmol/kg) were administered intraperitoneally to potassium oxonate (PO)-induced hyperuricemic mice, and the serum uric acid level was measured 3 h after administration. H&E staining showed that PO-induced kidney injury caused renal tubular epithelium nuclear condensation in the cortex areas or the appearance of numerous hyaline casts in the medulla areas; treatment with 100 mmol/kg of the EtOAc fraction, (2R, 3R)-astilbin, hyposide, guaijaverin, or quercitrin significantly reduced kidney injury. In addition, the serum uric acid level was significantly suppressed, by 54.1, 35.1, 56.3, 56.3, and 53.2%, respectively, by the administration of 100 mmol/kg of the EtOAc fraction and the derived major phytochemicals (2R, 3R)-astilbin, hyposide, guaijaverin, and quercitrin, compared to the PO group. The administration
Directory of Open Access Journals (Sweden)
Sukanya Somprom
2016-07-01
The research focuses on an insurance model controlled by proportional reinsurance in the finite-time surplus process with a unit-equalized time interval. We prove the existence of the maximal retention level for independent and identically distributed claim processes under α-regulation, i.e., a model in which the insurance company has to keep its probability of insolvency at most α. In addition, we illustrate the maximal retention level for exponential claims by applying the bisection technique.
Online Reconstruction and Calibration with Feedback Loop in the ALICE High Level Trigger
Directory of Open Access Journals (Sweden)
Rohr David
2016-01-01
at the Large Hadron Collider (LHC) at CERN. The High Level Trigger (HLT) is an online computing farm which reconstructs events recorded by the ALICE detector in real time. The most computing-intensive task is the reconstruction of the particle trajectories. The main tracking devices in ALICE are the Time Projection Chamber (TPC) and the Inner Tracking System (ITS). The HLT uses a fast GPU-accelerated algorithm for the TPC tracking based on the Cellular Automaton principle and the Kalman filter. ALICE employs gaseous subdetectors which are sensitive to environmental conditions such as ambient pressure and temperature, and the TPC is one of these. A precise reconstruction of particle trajectories requires the calibration of these detectors. As our first topic, we present some recent optimizations to our GPU-based TPC tracking using the new GPU models we employ for the ongoing and upcoming data-taking period at the LHC. We also show our new approach to fast ITS standalone tracking. As our second topic, we present improvements to the HLT for facilitating online reconstruction, including a new flat data model and a new data flow chain. The calibration output is fed back to the reconstruction components of the HLT via a feedback loop. We conclude with an analysis of a first online calibration test under real conditions during the Pb-Pb run in November 2015, which was based on these new features.
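The Kalman-filter building block mentioned for track fitting can be sketched in one dimension (a constant-velocity toy with assumed noise levels, not the ALICE HLT code):

```python
import numpy as np

# 1-D Kalman filter over a sequence of position "hits" along a track,
# estimating position and slope (velocity) of a straight trajectory.
def kalman_track(measurements, dt=1.0, q=1e-3, r=0.25):
    F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition (pos, slope)
    H = np.array([[1.0, 0.0]])                 # only position is measured
    Q = q * np.eye(2)                          # process noise
    x = np.array([measurements[0], 0.0])       # initial state estimate
    P = np.eye(2)
    for z in measurements[1:]:
        x = F @ x                              # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + r                    # innovation covariance
        K = P @ H.T / S                        # Kalman gain (2x1)
        x = x + (K * (z - H @ x)).ravel()      # update with the new hit
        P = (np.eye(2) - K @ H) @ P
    return x

rng = np.random.default_rng(8)
true_pos = 0.5 * np.arange(40)                 # straight track, slope 0.5
hits = true_pos + rng.normal(0.0, 0.5, size=40)
x_final = kalman_track(hits)
print(f"fitted slope ~ {x_final[1]:.3f} (true 0.5)")
```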
International Nuclear Information System (INIS)
Liang, Z.; Jaszczak, R.; Coleman, R.; Johnson, V.
1991-01-01
A multinomial image model is proposed which uses intensity-level information for reconstruction of contiguous image regions. The intensity-level information assumes that image intensities are relatively constant within contiguous regions over the image-pixel array and that the intensity levels of these regions are determined either empirically or theoretically by information criteria. These conditions may be valid, for example, for cardiac blood-pool imaging, where the intensity levels (or radionuclide activities) of myocardium, blood-pool, and background regions are distinct and the activities within each region of muscle, blood, or background are relatively uniform. To test the model, a mathematical phantom over a 64 × 64 array was constructed. The phantom had three contiguous regions, each with a different intensity level. Measurements from the phantom were simulated using an emission-tomography geometry. Fifty projections were generated over 180°, with 64 equally spaced parallel rays per projection. Projection data were randomized to contain Poisson noise. Image reconstructions were performed using an iterative maximum a posteriori probability procedure. The contiguous regions corresponding to the three intensity levels were automatically segmented; simultaneously, the edges of the regions were sharpened. Noise in the reconstructed images was significantly suppressed. Convergence of the iterative procedure to the phantom was observed. Compared with maximum likelihood and filtered-backprojection approaches, the results obtained using maximum a posteriori probability with the intensity-level information demonstrated qualitative and quantitative improvement in localizing the regions of varying intensities.
Directory of Open Access Journals (Sweden)
Wang Quan
2012-06-01
Leiomyosarcoma of the inferior vena cava (IVCL) is a rare retroperitoneal tumor. We report two cases of level II (middle level, renal veins to hepatic veins) IVCL, who underwent en bloc resection with reconstruction of bilateral or left renal venous return using prosthetic grafts. In our cases the IVC was documented to be occluded preoperatively; therefore, radical resection of the tumor and/or right kidney was performed, and the distal end of the inferior vena cava was resected without caval reconstruction. Neither patient developed edema or acute renal failure postoperatively. After surgical resection, adjuvant radiation therapy was administered. The patients have been free of recurrence for 2 years and 3 months and for 9 months after surgery, respectively, indicating that complete surgical resection and radiotherapy contribute to better survival. Reconstruction of the inferior vena cava is not considered mandatory in level II IVCL if retroperitoneal venous collateral pathways have been established. In addition to the curative resection of the IVCL, the renal vascular reconstruction minimized the risk of procedure-related acute renal failure and was more physiologically preferable. This concept is reflected in the treatment of the two patients reported on.
Reconstructing Northern Hemisphere upper-level fields during World War II
Energy Technology Data Exchange (ETDEWEB)
Broennimann, S. [Lunar and Planetary Laboratory, University of Arizona, PO Box 210092, Tucson, AZ 85721-0092 (United States); Luterbacher, J. [Institute of Geography, University of Bern, Bern (Switzerland); NCCR Climate, University of Bern, Bern (Switzerland)
2004-05-01
Monthly mean fields of temperature and geopotential height (GPH) from 700 to 100 hPa were statistically reconstructed for the extratropical Northern Hemisphere for the World War II period. The reconstruction was based on several hundred predictor variables, comprising temperature series from meteorological stations and gridded sea level pressure data (1939-1947) as well as a large amount of historical upper-air data (1939-1944). Statistical models were fitted in a calibration period (1948-1994) using the NCEP/NCAR Reanalysis data set as predictand. The procedure consists of a weighting scheme, principal component analyses on both the predictor variables and the predictand fields and multiple regression models relating the two sets of principal component time series to each other. According to validation experiments, the reconstruction skill in the 1939-1944 period is excellent for GPH at all levels and good for temperature up to 500 hPa, but somewhat worse for 300 hPa temperature and clearly worse for 100 hPa temperature. Regionally, high predictive skill is found over the midlatitudes of Europe and North America, but a lower quality over Asia, the subtropics, and the Arctic. Moreover, the quality is considerably better in winter than in summer. In the 1945-1947 period, reconstructions are useful up to 300 hPa for GPH and, in winter, up to 500 hPa for temperature. The reconstructed fields are presented for selected months and analysed from a dynamical perspective. It is demonstrated that the reconstructions provide a useful tool for the analysis of large-scale circulation features as well as stratosphere-troposphere coupling in the late 1930s and early 1940s. (orig.)
Du, L.; Shi, H.; Zhang, S.
2017-12-01
Acting as typical shelf seas in the northwest Pacific Ocean, the China seas exhibit complicated, multiscale spatial-temporal sea level characteristics under global change. In this paper, sea level variability is investigated with tide gauge records, satellite altimetry data, reconstructed sea surface heights, and CMIP simulation fields. Sea level exhibits interannual variability superimposed on a remarkable rise in the China seas and coastal region, and its seasonal signals are as significant as in the global ocean. Sea level has risen faster during the satellite altimetry era, at nearly twice the rate observed over the last sixty years. AVISO data and reconstructed sea surface heights correlate well, with coefficients above 0.8. Interannual sea level variation is mainly modulated by the low-frequency variability of the wind field over the North Pacific through local and remote processes. Sea level also varies markedly with transport fluctuations and the bimodal path of the Kuroshio; this variability is possibly linked to internal variability of the ocean-atmosphere system influenced by ENSO. China sea levels rose during the 20th century and are projected to continue rising during this century, potentially reaching their highest extreme levels in the latter half of the 21st century. Modeled sea level, including regional projections combined with the IPCC climate scenarios, plays a significant role in assessing coastal storm surge evolution. The vulnerable regions along the ECS coast will suffer increasing storm damage as sea level rises.
Directory of Open Access Journals (Sweden)
W. C. Liu
2017-07-01
Shape and Albedo from Shading (SAfS) techniques recover pixel-wise surface details based on the relationship between terrain slopes, illumination and imaging geometry, and the energy response (i.e., image intensity) captured by the sensing system. Multiple images with different illumination geometries (i.e., photometric stereo) can provide better SAfS surface reconstruction due to the increase in observations. Photometric stereo SAfS is suitable for detailed surface reconstruction of the Moon and other extra-terrestrial bodies due to the availability of photometric stereo imagery and the less complex surface reflecting properties (i.e., albedo) of the target bodies as compared to the Earth. Considering only one photometric stereo pair (i.e., two images), pixel-variant albedo is still a major obstacle to satisfactory reconstruction and needs to be regulated by the SAfS algorithm. The illumination directional difference between the two images also becomes an important factor affecting the reconstruction quality. This paper presents a photometric stereo SAfS algorithm for pixel-level resolution lunar surface reconstruction. The algorithm includes a hierarchical optimization architecture for handling pixel-variant albedo and improving performance. With the use of Lunar Reconnaissance Orbiter Camera - Narrow Angle Camera (LROC NAC) photometric stereo images, the reconstructed topography (i.e., the DEM) is compared with the DEM produced independently by photogrammetric methods. This paper also addresses the effect of the illumination directional difference between one photometric stereo pair on the reconstruction quality of the proposed algorithm by both mathematical and experimental analysis; LROC NAC images under multiple illumination directions are utilized by the proposed algorithm for experimental comparison. The mathematical derivation suggests an illumination azimuthal difference of 90 degrees between two images is recommended to achieve
Energy Technology Data Exchange (ETDEWEB)
Razali, Azhani Mohd, E-mail: azhani@nuclearmalaysia.gov.my; Abdullah, Jaafar, E-mail: jaafar@nuclearmalaysia.gov.my [Plant Assessment Technology (PAT) Group, Industrial Technology Division, Malaysian Nuclear Agency, Bangi, 43000 Kajang (Malaysia)
2015-04-29
Single Photon Emission Computed Tomography (SPECT) is a well-known imaging technique used in medical application, and it is part of medical imaging modalities that made the diagnosis and treatment of disease possible. However, SPECT technique is not only limited to the medical sector. Many works are carried out to adapt the same concept by using high-energy photon emission to diagnose process malfunctions in critical industrial systems such as in chemical reaction engineering research laboratories, as well as in oil and gas, petrochemical and petrochemical refining industries. Motivated by vast applications of SPECT technique, this work attempts to study the application of SPECT on a Pebble Bed Reactor (PBR) using numerical phantom of pebbles inside the PBR core. From the cross-sectional images obtained from SPECT, the behavior of pebbles inside the core can be analyzed for further improvement of the PBR design. As the quality of the reconstructed image is largely dependent on the algorithm used, this work aims to compare two image reconstruction algorithms for SPECT, namely the Expectation Maximization Algorithm and the Exact Inversion Formula. The results obtained from the Exact Inversion Formula showed better image contrast and sharpness, and shorter computational time compared to the Expectation Maximization Algorithm.
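In emission tomography, the Expectation Maximization algorithm compared in this work typically takes the multiplicative MLEM form sketched below; the toy system matrix and data are invented for illustration, and the implementation actually used in the study may differ.

```python
import numpy as np

def mlem(A, y, n_iter=200):
    """Maximum-likelihood EM reconstruction for emission tomography.

    A : (m, n) system matrix mapping image voxels to detector bins.
    y : (m,) measured counts.
    Returns a non-negative image estimate of length n.
    """
    m, n = A.shape
    x = np.ones(n)                          # flat, strictly positive start
    sens = A.T @ np.ones(m)                 # sensitivity: back-projected ones
    for _ in range(n_iter):
        proj = A @ x                        # forward projection
        ratio = y / np.maximum(proj, 1e-12) # measured / estimated counts
        x = x / np.maximum(sens, 1e-12) * (A.T @ ratio)  # multiplicative update
    return x

# Tiny illustrative problem: 2 voxels seen by 3 detector bins.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
x_true = np.array([2.0, 5.0])
y = A @ x_true                              # noise-free data
x_hat = mlem(A, y)
```

The update preserves non-negativity automatically, which is one reason EM-type methods are popular for count data even though they converge more slowly than direct inversion.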
DEFF Research Database (Denmark)
Hove, Jens Dahlgaard; Rasmussen, R.; Freiberg, J.
2008-01-01
emission tomography (PET) studies from 20 normal volunteers at rest and during dipyridamole stimulation were analyzed. Image data were reconstructed with either FBP or OSEM. FBP- and OSEM-derived input functions and tissue curves were compared together with the myocardial blood flow and spillover values … and OSEM flow values were observed with a flow underestimation of 45% (rest/dipyridamole) in the septum and of 5% (rest) and 15% (dipyridamole) in the lateral myocardial wall. CONCLUSIONS: OSEM reconstruction of myocardial perfusion images with N-13 ammonia and PET produces high-quality images for visual interpretation. However, compared with FBP, OSEM is associated with substantial underestimation of perfusion on quantitative imaging. Our findings indicate that OSEM should be used with caution in clinical PET studies.
Svendsen, Peter Limkilde; Andersen, Ole Baltazar; Nielsen, Allan Aasbjerg
2015-01-01
Due to the sparsity and often poor quality of data, reconstructing Arctic sea level is highly challenging. We present a reconstruction of Arctic sea level covering 1950 to 2010, using the approaches from Church et al. (2004) and Ray and Douglas (2011). This involves decomposition of an altimetry calibration record into EOFs, and fitting these patterns to a historical tide gauge record.
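The reconstruction step in the Church et al. (2004) approach fits spatial EOF patterns, learned from the altimetry calibration record, to the sparse tide-gauge observations at each time step; a minimal least-squares sketch, in which the patterns, grid size, and gauge positions are invented for illustration:

```python
import numpy as np

def fit_eof_amplitudes(eofs, gauge_idx, gauge_obs):
    """Fit EOF amplitudes to sparse tide-gauge observations for one time step.

    eofs      : (k, n) spatial patterns from the altimetry calibration era.
    gauge_idx : grid indices where tide gauges exist.
    gauge_obs : (m,) observed sea level at those gauges.
    Returns amplitudes a such that a @ eofs approximates the full field.
    """
    G = eofs[:, gauge_idx].T                         # patterns sampled at gauges
    a, *_ = np.linalg.lstsq(G, gauge_obs, rcond=None)
    return a

# Synthetic check: a field built from two known patterns, seen at 4 gauges.
n = 10
x = np.linspace(0.0, np.pi, n)
eofs = np.vstack([np.sin(x), np.cos(x)])             # two spatial patterns
true_a = np.array([3.0, -1.5])
field = true_a @ eofs                                # "true" sea-level field
gauge_idx = [1, 4, 6, 8]
a = fit_eof_amplitudes(eofs, gauge_idx, field[gauge_idx])
recon = a @ eofs                                     # reconstructed full field
```

With more gauges than retained patterns and no noise, the fit recovers the field exactly; the practical difficulty in the Arctic, as the abstract notes, is that the gauges are few and of uneven quality.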
Reconstructing Common Era relative sea-level change on the Gulf Coast of Florida
Gerlach, Matthew J.; Engelhart, Simon E.; Kemp, Andrew C.; Moyer, Ryan P.; Smoak, Joseph M.; Bernhardt, Christopher E.; Cahill, Niamh
2017-01-01
To address a paucity of Common Era data in the Gulf of Mexico, we reconstructed ~ 1.1 m of relative sea-level (RSL) rise over the past ~ 2000 years at Little Manatee River (Gulf Coast of Florida, USA). We applied a regional-scale foraminiferal transfer function to fossil assemblages preserved in a core of salt-marsh peat and organic silt that was dated using radiocarbon and recognition of pollution, 137Cs and pollen chronohorizons. Our proxy reconstruction was combined with tide-gauge data from four nearby sites spanning 1913–2014 CE. Application of an Errors-in-Variables Integrated Gaussian Process (EIV-IGP) model to the combined proxy and instrumental dataset demonstrates that RSL fell from ~ 350 to 100 BCE, before rising continuously to present. This initial RSL fall was likely the result of local-scale processes (e.g., silting up of a tidal flat or shallow sub-tidal shoal) as salt-marsh development at the site began. Since ~ 0 CE, we consider the reconstruction to be representative of regional-scale RSL trends. We removed a linear rate of 0.3 mm/yr from the RSL record using the EIV-IGP model to estimate climate-driven sea-level trends and to facilitate comparison among sites. This analysis demonstrates that since ~ 0 CE sea level did not deviate significantly from zero until accelerating continuously from ~ 1500 CE to present. Sea level was rising at 1.33 mm/yr in 1900 CE and accelerated until 2014 CE when a rate of 2.02 mm/yr was attained, which is the fastest, century-scale trend in the ~ 2000-year record. Comparison to existing reconstructions from the Gulf coast of Louisiana and the Atlantic coast of northern Florida reveals similar sea-level histories at all three sites. We explored the influence of compaction and fluvial processes on our reconstruction and concluded that compaction was likely insignificant. Fluvial processes were also likely insignificant, but further proxy evidence is needed to fully test this hypothesis. Our results
Directory of Open Access Journals (Sweden)
Liu Lili
2013-06-01
Understanding how metabolic reactions translate the genome of an organism into its phenotype is a grand challenge in biology. Genome-wide association studies (GWAS) statistically connect genotypes to phenotypes, without any recourse to known molecular interactions, whereas a molecular mechanistic description ties gene function to phenotype through gene regulatory networks (GRNs), protein-protein interactions (PPIs) and molecular pathways. Integration of the different regulatory information levels of an organism is expected to provide a good way of mapping genotypes to phenotypes. However, the lack of a curated metabolic model of rice has blocked the exploration of genome-scale multi-level network reconstruction. Here, we have merged GRN, PPI and genome-scale metabolic network (GSMN) approaches into a single framework for rice via reconstruction and integration of omics regulatory information. Firstly, we reconstructed a genome-scale metabolic model, containing 4,462 function genes and 2,986 metabolites involved in 3,316 reactions, compartmentalized into ten subcellular locations. Furthermore, 90,358 pairs of protein-protein interactions, 662,936 pairs of gene regulations and 1,763 microRNA-target interactions were integrated into the metabolic model. Eventually, a database was developed for systematically storing and retrieving the genome-scale multi-level network of rice. This provides a reference for understanding the genotype-phenotype relationship of rice, and for analysis of its molecular regulatory network.
Quan, Haiyang; Wu, Fan; Hou, Xi
2015-10-01
A new method for reconstructing rotationally asymmetric surface deviation with pixel-level spatial resolution is proposed. It is based on a basic iterative scheme and accelerates the Gauss-Seidel method by introducing an acceleration parameter. This modified Successive Over-Relaxation (SOR) method is effective for solving the rotationally asymmetric components with pixel-level spatial resolution, without the use of a fitting procedure. Compared to the Jacobi and Gauss-Seidel methods, the modified SOR method with an optimal relaxation factor converges much faster and saves computational cost and memory without reducing accuracy. This has been verified by real experimental results.
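The acceleration idea, Gauss-Seidel sweeps plus an over-relaxation parameter, can be sketched for a generic linear system; this is textbook SOR, not the authors' pixel-level surface solver, and the test matrix below is illustrative.

```python
import numpy as np

def sor_solve(A, b, omega=1.0, tol=1e-10, max_iter=10000):
    """Successive over-relaxation for A x = b.

    omega = 1 recovers Gauss-Seidel; 1 < omega < 2 over-relaxes each
    update and, near the optimal value, can converge in fewer sweeps.
    """
    n = len(b)
    x = np.zeros(n)
    for it in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # Gauss-Seidel residual using the freshest values available.
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old, np.inf) < tol:
            return x, it + 1
    return x, max_iter

# Symmetric positive-definite test system (SOR converges for 0 < omega < 2).
A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([15.0, 10.0, 10.0])
x_sor, it_sor = sor_solve(A, b, omega=1.2)
x_gs, it_gs = sor_solve(A, b, omega=1.0)
```

Both runs reach the same solution; the advantage of a tuned relaxation factor grows with problem size, which is why the abstract reports large savings at pixel-level resolution.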
A High-Resolution Reconstruction of Late-Holocene Relative Sea Level in Rhode Island, USA
Stearns, R. B.; Engelhart, S. E.; Kemp, A.; Cahill, N.; Halavik, B. T.; Corbett, D. R.; Brain, M.; Hill, T. D.
2017-12-01
Studies on the US Atlantic and Gulf coasts have utilized salt-marsh peats and the macro- and microfossils preserved within them to reconstruct high-resolution records of relative sea level (RSL). We followed this approach to investigate spatial and temporal RSL variability in southern New England, USA, by reconstructing 3,300 years of RSL change in lower Narragansett Bay, Rhode Island. After reconnaissance of lower Narragansett Bay salt marshes, we recovered a 3.4 m core at Fox Hill Marsh on Conanicut Island. We enumerated foraminiferal assemblages at 3 cm intervals throughout the length of the core and we assessed trends in δ13C at 5 cm resolution. We developed a composite chronology (average resolution of ±50 years for a 1 cm slice) using 30 AMS radiocarbon dates and historical chronological markers of known age (137Cs, heavy metals, Pb isotopes, pollen). We assessed core compaction (mechanical compression) by collecting compaction-free basal-peat samples and using a published decompaction model. We employed fossil foraminifera and bulk sediment δ13C to estimate paleomarsh elevation using a Bayesian transfer function trained by a previously-published regional modern foraminiferal dataset. We combined the proxy RSL reconstruction and local tide-gauge measurements from Newport, Rhode Island (1931 CE to present) and estimated past rates of RSL change using an Errors-in-Variables Integrated Gaussian Process (EIV-IGP) model. Both basal peats and the decompaction model suggest that our RSL record is not significantly compacted. RSL rose from -3.9 m at 1250 BCE reaching -0.4 m at 1850 CE (1 mm/yr). We removed a Glacial Isostatic Adjustment (GIA) contribution of 0.9 mm/yr based on a local GPS site to facilitate comparison to regional records. The detrended sea-level reconstruction shows multiple departures from stable sea level (0 mm/yr) over the last 3,300 years and agrees with prior reconstructions from the US Atlantic coast showing evidence for sea-level changes that
Altazi, Baderaldeen A; Zhang, Geoffrey G; Fernandez, Daniel C; Montejo, Michael E; Hunt, Dylan; Werner, Joan; Biagioli, Matthew C; Moros, Eduardo G
2017-11-01
Site-specific investigations of the role of radiomics in cancer diagnosis and therapy are emerging. We evaluated the reproducibility of radiomic features extracted from Fluorine-18 fluorodeoxyglucose (18F-FDG) PET images for three parameters: manual versus computer-aided segmentation methods, gray-level discretization, and PET image reconstruction algorithms. Our cohort consisted of pretreatment PET/CT scans from 88 cervical cancer patients. Two board-certified radiation oncologists manually segmented the metabolic tumor volume (MTV1 and MTV2) for each patient. For comparison, we used a graphical-based method to generate semiautomated segmented volumes (GBSV). To address any perturbations in radiomic feature values, we down-sampled the tumor volumes into three gray-levels: 32, 64, and 128 from the original gray-level of 256. Finally, we analyzed the effect on radiomic features on PET images of eight patients due to four PET 3D-reconstruction algorithms: the maximum likelihood-ordered subset expectation maximization (OSEM) iterative reconstruction (IR) method, Fourier rebinning-ML-OSEM (FOREIR), FORE-filtered back projection (FOREFBP), and the 3D-Reprojection (3DRP) analytical method. We extracted 79 features for each segmentation method, gray-level of the down-sampled volumes, and PET reconstruction algorithm. The features were extracted using gray-level co-occurrence matrices (GLCM), gray-level size zone matrices (GLSZM), gray-level run-length matrices (GLRLM), neighborhood gray-tone difference matrices (NGTDM), shape-based features (SF), and intensity histogram features (IHF). We computed the Dice coefficient between each MTV and GBSV to measure segmentation accuracy. Coefficient values close to one indicate high agreement, and values close to zero indicate low agreement. We evaluated the effect on radiomic features by calculating the mean percentage differences (d̄) between feature values measured from each pair of parameter elements (i.e. segmentation methods: MTV
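The gray-level discretization step (down-sampling 256 levels to 32, 64, or 128 before computing texture matrices such as the GLCM) can be sketched as a simple uniform requantization; the binning rule below is a common radiomics convention and may differ in detail from the study's implementation.

```python
import numpy as np

def requantize(img, levels, src_levels=256):
    """Uniformly map integer gray values in [0, src_levels) to [0, levels).

    Coarser bins make texture matrices smaller and denser, at the cost
    of discarding fine intensity differences -- which is exactly why the
    choice of gray-level count can perturb radiomic feature values.
    """
    img = np.asarray(img, dtype=np.int64)
    return img * levels // src_levels

# 256-level intensities mapped onto 32 bins.
img = np.array([[0, 1, 128, 255]])
q = requantize(img, 32)    # bins 0, 0, 16, 31
```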
DEFF Research Database (Denmark)
Krustrup, Peter; Ørtenblad, Niels; Nielsen, Joachim
2011-01-01
The aim of this study was to examine maximal voluntary knee-extensor contraction force (MVC force), sarcoplasmic reticulum (SR) function and muscle glycogen levels in the days after a high-level soccer game when players ingested an optimised diet. Seven high-level male soccer players had a vastus lateralis muscle biopsy and a blood sample collected in a control situation and at 0, 24, 48 and 72 h after a competitive soccer game. MVC force, SR function, muscle glycogen, muscle soreness and plasma myoglobin were measured. MVC force sustained over 1 s was 11 and 10% lower (P …
Reconstructing Mid- to Late Holocene Sea-Level Change from Coral Microatolls, French Polynesia
Hallmann, N.; Camoin, G.; Eisenhauer, A.; Vella, C.; Samankassou, E.; Botella, A.; Milne, G. A.; Pothin, V.; Dussouillez, P.; Fleury, J.
2017-12-01
Coral microatolls are sensitive low-tide recorders, as their vertical accretion is limited by the mean low water springs level, and can be considered therefore as high-precision recorders of sea-level change. They are of pivotal importance to resolving the rates and amplitudes of millennial-to-century scale changes during periods of relative climate stability such as the Mid- to Late Holocene, which serves as an important baseline of natural variability prior to the Anthropocene. It provides therefore a unique opportunity to study coastal response to sea-level rise, even if the rates of sea-level rise during the Mid- to Late Holocene were lower than the current rates and those expected in the near future. Mid- to Late Holocene relative sea-level changes in French Polynesia encompassing the last 6,000 years were reconstructed based on the coupling between absolute U/Th dating of in situ coral microatolls and their precise positioning via GPS RTK (Real Time Kinematic) measurements. The twelve studied islands represent ideal settings for accurate sea-level studies because: 1) they can be regarded as tectonically stable during the relevant period (slow subsidence), 2) they are located far from former ice sheets (far-field), 3) they are characterized by a low tidal amplitude, and 4) they cover a wide range of latitudes which produces significantly improved constraints on GIA (Glacial Isostatic Adjustment) model parameters. A sea-level rise of less than 1 m is recorded between 6 and 3-3.5 ka, and is followed by a gradual fall in sea level that started around 2.5 ka and persisted until the past few centuries. In addition, growth pattern analysis of coral microatolls allows the reconstruction of low-amplitude, high-frequency sea-level change on centennial to sub-decadal time scales. The reconstructed sea-level curve extends the Tahiti last deglacial sea-level curve [Deschamps et al., 2012, Nature, 483, 559-564], and is in good agreement with a geophysical model tuned to
Reconstruction based finger-knuckle-print verification with score level adaptive binary fusion.
Gao, Guangwei; Zhang, Lei; Yang, Jian; Zhang, Lin; Zhang, David
2013-12-01
Recently, a new biometric identifier, namely the finger knuckle print (FKP), has been proposed for personal authentication with very interesting results. One of the advantages of FKP verification lies in its user friendliness in data collection. However, the user flexibility in positioning fingers also leads to a certain degree of pose variation in the collected query FKP images. The widely used Gabor filtering based competitive coding scheme is sensitive to such variations, resulting in many false rejections. We propose to alleviate this problem by reconstructing the query sample with a dictionary learned from the template samples in the gallery set. The reconstructed FKP image can largely reduce the enlarged matching distance caused by finger pose variations; however, both the intra-class and inter-class distances will be reduced. We then propose a score-level adaptive binary fusion rule to adaptively fuse the matching distances before and after reconstruction, aiming to reduce the false rejections without greatly increasing the false acceptances. Experimental results on the benchmark PolyU FKP database show that the proposed method significantly improves the FKP verification accuracy.
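The fusion idea can be sketched with a hypothetical rule (the threshold, weight, and function name below are illustrative assumptions, not the rule derived in the paper): trust the post-reconstruction distance only when it is confidently small, so genuine matches benefit while impostor distances are not artificially lowered.

```python
def adaptive_binary_fusion(d_before, d_after, t_low, weight=0.5):
    """Hypothetical score-level fusion of matching distances computed
    before and after query reconstruction.

    If the reconstructed distance is confidently small (likely a genuine
    match), blend the two scores; otherwise fall back to the original
    distance so that impostors gain nothing from reconstruction.
    """
    if d_after < t_low:
        return weight * d_before + (1.0 - weight) * d_after
    return d_before

# A likely-genuine comparison benefits from reconstruction...
fused_genuine = adaptive_binary_fusion(0.6, 0.2, t_low=0.3)   # blended, ~0.4
# ...while a likely-impostor comparison keeps its original distance.
fused_impostor = adaptive_binary_fusion(0.6, 0.5, t_low=0.3)  # unchanged, 0.6
```

The binary branch is what makes the rule "adaptive": the decision of whether to use the reconstructed score at all is taken per comparison, not globally.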
Jesus, Íncare Correa de; Alle, Lupe Furtado; Munhoz, Eva Cantalejo; Silva, Larissa Rosa da; Lopes, Wendell Arthur; Tureck, Luciane Viater; Purim, Katia Sheylla Malta; Titski, Ana Claudia Kapp; Leite, Neiva
2017-09-21
To analyze the association between the Trp64Arg polymorphism of the ADRB3 gene, maximal fat oxidation rates and lipid profile levels in non-obese adolescents. A total of 72 schoolchildren of both genders, aged between 11 and 17 years, participated in the study. The anthropometric and body composition variables, in addition to total cholesterol, HDL-c, LDL-c, triglycerides, insulin, and basal glycemia, were evaluated. The sample was divided into two groups according to the presence or absence of the polymorphism: non-carriers of the Arg64 allele, i.e., homozygous (Trp64Trp: n=54), and carriers of the Arg64 allele (Trp64Arg+Arg64Arg: n=18); the frequency of the Arg64 allele was 15.2%. The maximal oxygen uptake and peak oxygen uptake during exercise were obtained through the symptom-limited, submaximal treadmill test. Maximal fat oxidation was determined according to the ventilatory ratio proposed in Lusk's table. Adolescents carrying the less frequent allele (Trp64Arg and Arg64Arg) had higher LDL-c levels (p=0.031) and lower maximal fat oxidation rates (p=0.038) when compared with non-carriers (Trp64Trp). Although the physiological processes related to lipolysis and lipid metabolism are complex, the presence of the Arg64 allele was associated with lower rates of FATMAX during aerobic exercise, as well as with higher levels of LDL-c in adolescents. Copyright © 2017 Sociedade Brasileira de Pediatria. Published by Elsevier Editora Ltda. All rights reserved.
Stable reconstruction of Arctic sea level for the 1950-2010 period
DEFF Research Database (Denmark)
Svendsen, Peter Limkilde; Andersen, Ole Baltazar; Nielsen, Allan Aasbjerg
2016-01-01
on the combination of tide gauge records and a new 20-year reprocessed satellite altimetry derived sea level pattern. Hence the study is limited to the area covered by satellite altimetry (68°N to 82°N). It is found that timestep cumulative reconstruction as suggested by Church and White (2000) may yield widely … 1950 to 2010, between 68°N and 82°N. This value is in good agreement with the global mean trend of 1.8 +/- 0.3 mm/y over the same period as found by Church and White (2004).
Reconstruction of thin electromagnetic inclusions by a level-set method
International Nuclear Information System (INIS)
Park, Won-Kwang; Lesselier, Dominique
2009-01-01
In this contribution, we consider a technique of electromagnetic imaging (at a single, non-zero frequency) which uses the level-set evolution method for reconstructing a thin inclusion (possibly made of disconnected parts) with either dielectric or magnetic contrast with respect to the embedding homogeneous medium. Emphasis is on the proof of the concept, the scattering problem at hand being so far based on a two-dimensional scalar model. To do so, two level-set functions are employed; the first one describes location and shape, and the other one describes connectivity and length. Speeds of evolution of the level-set functions are calculated via the introduction of Fréchet derivatives of a least-square cost functional. Several numerical experiments, on noiseless as well as noisy data, illustrate how the proposed method behaves.
Buescu, Cristian Tudor; Onutu, Adela Hilda; Lucaciu, Dan Osvald; Todor, Adrian
2017-03-01
The objective of this study was to compare the pain levels and analgesic consumption after single-bundle ACL reconstruction with free quadriceps tendon autograft versus hamstring tendon autograft. A total of 48 patients scheduled for anatomic single-bundle ACL reconstruction were randomized into two groups: the free quadriceps tendon autograft group (24 patients) and the hamstring tendon autograft group (24 patients). A basic multimodal analgesic postoperative program was used for all patients, and rescue analgesia was provided with tramadol at pain scores over 30 on the Visual Analog Scale. The time to the first rescue analgesic, the number of doses of tramadol and pain scores were recorded. The results within the same group were compared with the Wilcoxon signed test. Supplementary analgesic drug administration proved significantly higher in the group of subjects with hamstring grafts, with a median (interquartile range) of 1 (1.3) dose, compared to the group of subjects treated with a quadriceps graft, median = 0.5 (0-1.25) (p = 0.009). A significantly higher number of subjects with a quadriceps graft did not require any supplementary analgesic drug (50%) as compared with subjects with a hamstring graft (13%; Z-statistic = 3.01, p = 0.002). The percentage of subjects who required a supplementary analgesic drug was 38% higher in the HT group compared with the FQT group. The use of the free quadriceps tendon autograft for ACL reconstruction leads to less pain and analgesic consumption in the immediate postoperative period compared with the use of hamstring autografts. Level I, therapeutic study. Copyright © 2017 Turkish Association of Orthopaedics and Traumatology. Production and hosting by Elsevier B.V. All rights reserved.
A Weighted Two-Level Bregman Method with Dictionary Updating for Nonconvex MR Image Reconstruction
Directory of Open Access Journals (Sweden)
Qiegen Liu
2014-01-01
Nonconvex optimization has shown that it needs substantially fewer measurements than l1 minimization for exact recovery under a fixed transform/overcomplete dictionary. In this work, two efficient numerical algorithms, unified under the name weighted two-level Bregman method with dictionary updating (WTBMDU), are proposed for solving lp optimization under the dictionary learning model while enforcing fidelity to the partial measurements. By incorporating the iteratively reweighted norm into the two-level Bregman iteration method with dictionary updating scheme (TBMDU), the modified alternating direction method (ADM) solves the model of pursuing the approximated lp-norm penalty efficiently. Specifically, the algorithms converge after a relatively small number of iterations, under the formulation of iteratively reweighted l1 and l2 minimization. Experimental results on MR image simulations and real MR data, under a variety of sampling trajectories and acceleration factors, consistently demonstrate that the proposed method can efficiently reconstruct MR images from highly undersampled k-space data and presents advantages over current state-of-the-art reconstruction approaches, in terms of higher PSNR and lower HFEN values.
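The iteratively reweighted norm at the heart of such schemes can be illustrated in its simplest setting: IRLS for a constrained lp problem, without the dictionary-learning and Bregman layers (the tiny system below is invented for illustration).

```python
import numpy as np

def irls_lp(A, y, p=0.5, n_iter=50, eps=1e-6):
    """Iteratively reweighted least squares for min ||x||_p^p s.t. A x = y.

    Each step solves a weighted minimum-norm problem
        x = W^-1 A^T (A W^-1 A^T)^-1 y,  with W = diag((x^2 + eps)^(p/2 - 1)),
    so small entries get large weights and are driven further toward zero.
    """
    x = np.linalg.lstsq(A, y, rcond=None)[0]    # minimum-norm start
    for _ in range(n_iter):
        w_inv = (x**2 + eps) ** (1 - p / 2)     # diagonal of W^-1
        AW = A * w_inv                           # A @ diag(w_inv)
        x = w_inv * (A.T @ np.linalg.solve(AW @ A.T, y))
    return x

# The sparsest solution of this tiny underdetermined system is [1, 0, 0].
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
y = np.array([1.0, 0.0])
x = irls_lp(A, y)
```

Starting from the dense minimum-l2 solution, the reweighting concentrates the mass onto a single coordinate, which is the behaviour that lets p < 1 penalties succeed with fewer measurements than l1.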
Analysis of sea-level reconstruction techniques for the Arctic Ocean
DEFF Research Database (Denmark)
Svendsen, Peter Limkilde; Andersen, Ole Baltazar; Nielsen, Allan Aasbjerg
Sea-level reconstructions spanning several decades have been examined in numerous studies for most of the world's ocean areas, where satellite missions such as TOPEX/Poseidon and Jason-1 and -2 have provided much-improved knowledge of variability and long-term changes in sea level. However, these dedicated oceanographic missions are limited in coverage to between ±66° latitude, and satellite altimeter data at higher latitudes is of substantially worse quality. Following the approach of Church et al. (2004), we apply a model based on empirical orthogonal functions (EOFs) to the Arctic Ocean, constrained by tide gauge records. A major challenge for this area is the sparsity of both satellite and tide gauge data beyond what can be covered with interpolation, necessitating a time-variable model and consideration of data preprocessing, including selection of appropriate tide gauges. In order to have
Bieberle, M; Hampel, U
2015-06-13
Tomographic image reconstruction is based on recovering an object distribution from its projections, which have been acquired from all angular views around the object. If the angular range is limited to less than 180° of parallel projections, typical reconstruction artefacts arise when using standard algorithms. To compensate for this, specialized algorithms using a priori information about the object need to be applied. The application behind this work is ultrafast limited-angle X-ray computed tomography of two-phase flows. Here, only a binary distribution of the two phases needs to be reconstructed, which reduces the complexity of the inverse problem. To solve it, a new reconstruction algorithm (LSR) based on the level-set method is proposed. It includes one force function term accounting for matching the projection data and one incorporating a curvature-dependent smoothing of the phase boundary. The algorithm has been validated using simulated as well as measured projections of known structures, and its performance has been compared to the algebraic reconstruction technique and a binary derivative of it. The validation as well as the application of the level-set reconstruction on a dynamic two-phase flow demonstrated its applicability and its advantages over other reconstruction algorithms. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
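The algebraic reconstruction technique (ART) used above as a comparison baseline performs cyclic Kaczmarz projections onto the ray-sum constraints; a toy binary two-phase example follows, in which the geometry and the simple thresholding step are illustrative assumptions, not the binary ART variant of the paper.

```python
import numpy as np

def art(A, y, n_sweeps=50, relax=1.0):
    """Algebraic reconstruction technique (cyclic Kaczmarz sweeps).

    Each inner step projects the estimate onto the hyperplane of one
    ray-sum equation; for a consistent system this converges to the
    minimum-norm solution when started from zero.
    """
    m, n = A.shape
    x = np.zeros(n)
    row_norms = (A * A).sum(axis=1)
    for _ in range(n_sweeps):
        for i in range(m):
            if row_norms[i] > 0:
                x += relax * (y[i] - A[i] @ x) / row_norms[i] * A[i]
    return x

# 2x2 binary "phase distribution" seen by four ray sums (rows and columns).
phantom = np.array([1.0, 1.0, 0.0, 0.0])      # flattened 2x2 object
A = np.array([[1, 1, 0, 0],                   # row sums
              [0, 0, 1, 1],
              [1, 0, 1, 0],                   # column sums
              [0, 1, 0, 1]], dtype=float)
y = A @ phantom
x = art(A, y)
binary = (x > 0.5).astype(float)              # naive binary thresholding step
```

For severely limited angular ranges this purely algebraic approach develops the typical artefacts the abstract mentions, which is what motivates adding shape priors via the level-set formulation.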
Reconstructing the CT number array from gray-level images and its application in PACS
Chen, Xu; Zhuang, Tian-ge; Wu, Wei
2001-08-01
Although DICOM-compliant computed tomography has been prevailing in medical fields nowadays, there are some non-compliant systems, from which we can hardly obtain the raw data or make a proper interpretation due to their proprietary image formats. Under such conditions, one usually uses frame grabbers to capture CT images, the results of which cannot be freely adjusted by radiologists as the original CT number array can. To alleviate this inflexibility, a new method is presented in this paper to reconstruct the array of CT numbers from several gray-level images acquired under different window settings. Its feasibility is investigated and a few tips are put forward to correct the errors caused respectively by the 'Border Effect' and some hardware problems. The accuracy analysis proves it a good substitute for original CT number array acquisition. This method has already been successfully used in our newly developed PACS and accepted by the radiologists in clinical use.
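The inversion underlying such a method can be sketched with the standard window/level display mapping: each capture is linear in the CT number over its window, so any non-saturated gray value can be inverted and the per-window estimates combined. The window settings and the averaging rule below are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def window_to_gray(hu, center, width):
    """Standard window/level mapping of CT numbers to 8-bit gray."""
    low = center - width / 2.0
    g = (hu - low) / width * 255.0
    return np.clip(np.round(g), 0, 255)

def gray_to_hu(gray, center, width):
    """Invert the mapping; only valid for non-saturated pixels."""
    low = center - width / 2.0
    return gray / 255.0 * width + low

def reconstruct_hu(grays, settings):
    """Combine several windowed captures: for each pixel, average the
    inverses from every setting where the pixel is not clipped (0 or 255)."""
    grays = np.asarray(grays, dtype=float)
    est = np.zeros(grays.shape[1:])
    cnt = np.zeros(grays.shape[1:])
    for g, (c, w) in zip(grays, settings):
        valid = (g > 0) & (g < 255)
        est[valid] += gray_to_hu(g[valid], c, w)
        cnt[valid] += 1
    return est / np.maximum(cnt, 1)

settings = [(40, 400), (300, 1500)]           # soft-tissue and bone windows
hu_true = np.array([-50.0, 60.0, 700.0])      # three example CT numbers
grays = [window_to_gray(hu_true, c, w) for c, w in settings]
hu_est = reconstruct_hu(np.stack(grays), settings)
```

Quantization to 255 gray steps bounds the per-window error at roughly width/255 HU, so narrow windows recover soft-tissue values accurately while a wide window covers the bone range the narrow window clips.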
Phylogeographic reconstruction of a bacterial species with high levels of lateral gene transfer
Pearson, T.; Giffard, P.; Beckstrom-Sternberg, S.; Auerbach, R.; Hornstra, H.; Tuanyok, A.; Price, E.P.; Glass, M.B.; Leadem, B.; Beckstrom-Sternberg, J. S.; Allan, G.J.; Foster, J.T.; Wagner, D.M.; Okinaka, R.T.; Sim, S.H.; Pearson, O.; Wu, Z.; Chang, J.; Kaul, R.; Hoffmaster, A.R.; Brettin, T.S.; Robison, R.A.; Mayo, M.; Gee, J.E.; Tan, P.; Currie, B.J.; Keim, P.
2009-01-01
Background: Phylogeographic reconstruction of some bacterial populations is hindered by low diversity coupled with high levels of lateral gene transfer. A comparison of recombination levels and diversity at seven housekeeping genes for eleven bacterial species, most of which are commonly cited as having high levels of lateral gene transfer, shows that the relative contribution of homologous recombination versus mutation for Burkholderia pseudomallei is over two times higher than for Streptococcus pneumoniae and is thus the highest value yet reported in bacteria. Despite the potential for homologous recombination to increase diversity, B. pseudomallei exhibits a relative lack of diversity at these loci. In these situations, whole genome genotyping of orthologous shared single nucleotide polymorphism loci, discovered using next generation sequencing technologies, can provide very large data sets capable of estimating core phylogenetic relationships. We compared and searched 43 whole genome sequences of B. pseudomallei and its closest relatives for single nucleotide polymorphisms in orthologous shared regions to use in phylogenetic reconstruction. Results: Bayesian phylogenetic analyses of >14,000 single nucleotide polymorphisms yielded completely resolved trees for these 43 strains with high levels of statistical support. These results enable a better understanding of a separate analysis of population differentiation among >1,700 B. pseudomallei isolates as defined by sequence data from seven housekeeping genes. We analyzed this larger data set for population structure and allele sharing that can be attributed to lateral gene transfer. Our results suggest that despite an almost panmictic population, we can detect two distinct populations of B. pseudomallei that conform to biogeographic patterns found in many plant and animal species, that is, separation along Wallace's Line, a biogeographic boundary between Southeast Asia and Australia. Conclusion: We describe an
Phylogeographic reconstruction of a bacterial species with high levels of lateral gene transfer
Directory of Open Access Journals (Sweden)
Kaul Rajinder
2009-11-01
Full Text Available Abstract Background Phylogeographic reconstruction of some bacterial populations is hindered by low diversity coupled with high levels of lateral gene transfer. A comparison of recombination levels and diversity at seven housekeeping genes for eleven bacterial species, most of which are commonly cited as having high levels of lateral gene transfer, shows that the relative contribution of homologous recombination versus mutation for Burkholderia pseudomallei is over two times higher than for Streptococcus pneumoniae and is thus the highest value yet reported in bacteria. Despite the potential for homologous recombination to increase diversity, B. pseudomallei exhibits a relative lack of diversity at these loci. In these situations, whole genome genotyping of orthologous shared single nucleotide polymorphism loci, discovered using next generation sequencing technologies, can provide very large data sets capable of estimating core phylogenetic relationships. We compared and searched 43 whole genome sequences of B. pseudomallei and its closest relatives for single nucleotide polymorphisms in orthologous shared regions to use in phylogenetic reconstruction. Results Bayesian phylogenetic analyses of >14,000 single nucleotide polymorphisms yielded completely resolved trees for these 43 strains with high levels of statistical support. These results enable a better understanding of a separate analysis of population differentiation among >1,700 B. pseudomallei isolates as defined by sequence data from seven housekeeping genes. We analyzed this larger data set for population structure and allele sharing that can be attributed to lateral gene transfer. Our results suggest that despite an almost panmictic population, we can detect two distinct populations of B. pseudomallei that conform to biogeographic patterns found in many plant and animal species, that is, separation along Wallace's Line, a biogeographic boundary between Southeast Asia and Australia
Murphy, Robyn M; Larkins, Noni T; Mollica, Janelle P; Beard, Nicole A; Lamb, Graham D
2009-01-15
Whilst calsequestrin (CSQ) is widely recognized as the primary Ca2+ buffer in the sarcoplasmic reticulum (SR) in skeletal muscle fibres, its total buffering capacity and importance have come into question. This study quantified the absolute amount of CSQ isoform 1 (CSQ1, the primary isoform) present in rat extensor digitorum longus (EDL) and soleus fibres, and related this to their endogenous and maximal SR Ca2+ content. Using Western blotting, the entire constituents of minute samples of muscle homogenates or segments of individual muscle fibres were compared with known amounts of purified CSQ1. The fidelity of the analysis was proven by examining the relative signal intensity when mixing muscle samples and purified CSQ1. The CSQ1 contents of EDL fibres, almost exclusively type II fibres, and soleus type I fibres [SOL (I)] were, respectively, 36 +/- 2 and 10 +/- 1 micromol (l fibre volume)(-1), quantitatively accounting for the maximal SR Ca2+ content of each. Soleus type II [SOL (II)] fibres (approximately 20% of soleus fibres) had an intermediate amount of CSQ1. Every SOL (I) fibre examined also contained some CSQ isoform 2 (CSQ2), which was absent in every EDL and other type II fibre except for trace amounts in one case. Every EDL and other type II fibre had a high density of SERCA1, the fast-twitch muscle sarco(endo)plasmic reticulum Ca2+-ATPase isoform, whereas there was virtually no SERCA1 in any SOL (I) fibre. Maximal SR Ca2+ content measured in skinned fibres increased with CSQ1 content, and the ratio of endogenous to maximal Ca2+ content was inversely correlated with CSQ1 content. The relative SR Ca2+ content that could be maintained in resting cytoplasmic conditions was found to be much lower in EDL fibres than in SOL (I) fibres (approximately 20 versus >60%). Leakage of Ca2+ from the SR in EDL fibres could be substantially reduced with a SR Ca2+ pump blocker and increased by adding creatine to buffer cytoplasmic [ADP] at a higher level, both results
Brain, Matthew J.; Kemp, Andrew C.; Hawkes, Andrea D.; Engelhart, Simon E.; Vane, Christopher H.; Cahill, Niamh; Hill, Troy D.; Donnelly, Jeffrey P.; Horton, Benjamin P.
2017-07-01
Salt-marsh sediments provide precise and near-continuous reconstructions of Common Era relative sea level (RSL). However, organic and low-density salt-marsh sediments are prone to compaction processes that cause post-depositional distortion of the stratigraphic column used to reconstruct RSL. We compared two RSL reconstructions from East River Marsh (Connecticut, USA) to assess the contribution of mechanical compression and biodegradation to compaction of salt-marsh sediments and their subsequent influence on RSL reconstructions. The first, existing reconstruction ('trench') was produced from a continuous sequence of basal salt-marsh sediment and is unaffected by compaction. The second, new reconstruction is from a compaction-susceptible core taken at the same location. We highlight that sediment compaction is the only feasible mechanism for explaining the observed differences in RSL reconstructed from the trench and core. Both reconstructions display long-term RSL rise of ∼1 mm/yr, followed by a ∼19th Century acceleration to ∼3 mm/yr. A statistically-significant difference between the records at ∼1100 to 1800 CE could not be explained by a compression-only geotechnical model. We suggest that the warmer and drier conditions of the Medieval Climate Anomaly (MCA) resulted in an increase in sediment compressibility during this time period. We adapted the geotechnical model by reducing the compressive strength of MCA sediments to simulate this softening of sediments. 'Decompaction' of the core reconstruction with this modified model accounted for the difference between the two RSL reconstructions. Our results demonstrate that compression-only geotechnical models may be inadequate for estimating compaction and post-depositional lowering of susceptible organic salt-marsh sediments in some settings. This has important implications for our understanding of the drivers of sea-level change. Further, our results suggest that future climate changes may make salt
Reconstruction of incomplete cell paths through a 3D-2D level set segmentation
Hariri, Maia; Wan, Justin W. L.
2012-02-01
Segmentation of fluorescent cell images has been a popular technique for tracking live cells. One challenge of segmenting cells from fluorescence microscopy is that cells in fluorescent images frequently disappear. When the images are stacked together to form a 3D image volume, the disappearance of the cells leads to broken cell paths. In this paper, we present a segmentation method that can reconstruct incomplete cell paths. The key idea of this model is to perform 2D segmentation in a 3D framework. The 2D segmentation captures the cells that appear in the image slices while the 3D segmentation connects the broken cell paths. The formulation is similar to the Chan-Vese level set segmentation which detects edges by comparing the intensity value at each voxel with the mean intensity values inside and outside of the level set surface. Our model, however, performs the comparison on each 2D slice with the means calculated by the 2D projected contour. The resulting effect is to segment the cells on each image slice. Unlike segmentation on each image frame individually, these 2D contours together form the 3D level set function. By enforcing minimum mean curvature on the level set surface, our segmentation model is able to extend the cell contours right before (and after) the cell disappears (and reappears) into the gaps, eventually connecting the broken paths. We will present segmentation results of C2C12 cells in fluorescent images to illustrate the effectiveness of our model qualitatively and quantitatively by different numerical examples.
Energy Technology Data Exchange (ETDEWEB)
Cichy, Krzysztof [DESY, Zeuthen (Germany). NIC; Adam Mickiewicz Univ., Poznan (Poland). Faculty of Physics; Jansen, Karl [DESY, Zeuthen (Germany). NIC; Korcyl, Piotr [DESY, Zeuthen (Germany). NIC; Jagiellonian Univ., Krakow (Poland). M. Smoluchowski Inst. of Physics
2012-07-15
We present results of a lattice QCD application of a coordinate space renormalization scheme for the extraction of renormalization constants for flavour non-singlet bilinear quark operators. The method consists in the analysis of the small-distance behaviour of correlation functions in Euclidean space and has several theoretical and practical advantages, in particular: it is gauge invariant, easy to implement and has relatively low computational cost. The values of renormalization constants in the X-space scheme can be converted to the MS scheme via 4-loop continuum perturbative formulae. Our results for N{sub f}=2 maximally twisted mass fermions with tree-level Symanzik improved gauge action are compared to the ones from the RI-MOM scheme and show full agreement with this method. (orig.)
International Nuclear Information System (INIS)
Cichy, Krzysztof; Adam Mickiewicz Univ., Poznan; Jansen, Karl; Korcyl, Piotr; Jagiellonian Univ., Krakow
2012-07-01
We present results of a lattice QCD application of a coordinate space renormalization scheme for the extraction of renormalization constants for flavour non-singlet bilinear quark operators. The method consists in the analysis of the small-distance behaviour of correlation functions in Euclidean space and has several theoretical and practical advantages, in particular: it is gauge invariant, easy to implement and has relatively low computational cost. The values of renormalization constants in the X-space scheme can be converted to the MS scheme via 4-loop continuum perturbative formulae. Our results for N_f = 2 maximally twisted mass fermions with tree-level Symanzik improved gauge action are compared to the ones from the RI-MOM scheme and show full agreement with this method. (orig.)
Higgins, M F; Tallis, J; Price, M J; James, R S
2013-05-01
This study examined the effects of elevated buffer capacity [~32 mM HCO₃(-)] through administration of sodium bicarbonate (NaHCO₃) on maximally stimulated isolated mouse soleus (SOL) and extensor digitorum longus (EDL) muscles undergoing cyclical length changes at 37 °C. The elevated buffering capacity was of an equivalent level to that achieved in humans with acute oral supplementation. We evaluated the acute effects of elevated [HCO₃(-)] on (1) maximal acute power output (PO) and (2) time to fatigue to 60 % of maximum control PO (TLIM60), the level of decline in muscle PO observed in humans undertaking similar exercise, using the work loop technique. Acute PO was on average 7.0 ± 4.8 % greater for NaHCO₃-treated EDL muscles (P < 0.001; ES = 2.0) and 3.6 ± 1.8 % greater for NaHCO₃-treated SOL muscles (P < 0.001; ES = 2.3) compared to CON. Increases in PO were likely due to greater force production throughout shortening. The acute effects of NaHCO₃ on EDL were significantly greater (P < 0.001; ES = 0.9) than on SOL. Treatment of EDL (P = 0.22; ES = 0.6) and SOL (P = 0.19; ES = 0.9) with NaHCO₃ did not alter the pattern of fatigue. Although significant differences were not observed in whole group data, the fatigability of muscle performance was variable, suggesting that there might be inter-individual differences in response to NaHCO₃ supplementation. These results present the best indication to date that NaHCO₃ has direct peripheral effects on mammalian skeletal muscle resulting in increased acute power output.
Directory of Open Access Journals (Sweden)
Marco Olivieri
2016-07-01
Full Text Available Exploiting the Delaunay interpolation, we present a newly implemented 2-D sea-level reconstruction from coastal sea-level observations to open seas, with the aim of characterizing the spatial variability of the rate of sea-level change. To test the strengths and weaknesses of this method and to determine its usefulness in sea-level interpolation, we consider the case studies of the Baltic Sea and of the Pacific Ocean. In the Baltic Sea, a small basin well sampled by tide gauges, our reconstructions are successfully compared with absolute sea-level observations from altimetry during 1993-2011. The regional variability of absolute sea level observed across the Pacific Ocean, however, cannot be reproduced. We interpret this result as the effect of the uneven and sparse tide gauge data set and of the composite vertical land movements in and around the region. Useful considerations arise that can serve as a basis for developing sophisticated approaches.
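Delaunay-based interpolation of tide-gauge rates amounts to linear interpolation over the triangles of the triangulation (in practice a routine such as SciPy's `LinearNDInterpolator` handles the full triangulation). A minimal sketch over a single triangle of gauges, using barycentric coordinates, is given below; the gauge positions and rates are invented for illustration.

```python
import numpy as np

def barycentric(p, tri):
    # Barycentric coordinates of point p within triangle tri (3x2 array).
    a, b, c = tri
    m = np.array([[b[0] - a[0], c[0] - a[0]],
                  [b[1] - a[1], c[1] - a[1]]])
    u, v = np.linalg.solve(m, np.asarray(p, dtype=float) - a)
    return np.array([1 - u - v, u, v])

def interp_rate(p, tri_pts, tri_rates):
    # Linear (Delaunay-style) interpolation of sea-level rates measured
    # at three tide gauges forming one triangle of the triangulation.
    w = barycentric(p, tri_pts)
    if np.any(w < -1e-9):
        return float("nan")   # outside the triangle: no extrapolation
    return float(w @ tri_rates)

gauges = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])  # lon, lat
rates = np.array([1.2, 2.0, 1.6])                          # mm/yr
```

Returning NaN outside the triangulation mirrors the method's limitation noted in the abstract: where gauges are sparse, as in the Pacific test case, large regions simply cannot be constrained.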
Directory of Open Access Journals (Sweden)
Xingli Liu
Full Text Available To determine the optimal dose reduction level of an iterative reconstruction technique for paediatric chest CT in pig models. 27 infant pigs underwent 640-slice volume chest CT with 80 kVp and different mAs. Automatic exposure control was used, and the noise index was set to SD10 (Group A, routine dose) and SD12.5, SD15, SD17.5 and SD20 (Groups B to E) to reduce dose respectively. Group A was reconstructed with filtered back projection (FBP), and Groups B to E were reconstructed using iterative reconstruction (IR). Objective and subjective image quality (IQ) among groups were compared to determine an optimal radiation reduction level. The noise and signal-to-noise ratio (SNR) in Group D had no statistically significant difference from those in Group A (P = 1.0). The subjective IQ scores in Group A were not significantly different from those in Group D (P > 0.05). There were no obvious statistical differences in the objective and subjective index values among the subgroups (small, medium and large) of Group D. The effective dose (ED) of Group D was 58.9% lower than that of Group A (0.20 ± 0.05 mSv vs 0.48 ± 0.10 mSv, P < 0.001). In infant pig chest CT, iterative reconstruction can provide diagnostic image quality while reducing the dose by 58.9%.
van der Meer, Douwe G.
2017-01-01
In this thesis, I aimed at searching for new ways of constraining paleo-geographic, -atmosphere and -sea level reconstructions, through an extensive investigation of mantle structure in seismic tomographic models. To this end, I explored evidence for paleo-subduction in these models and how this may
Collins, S. V.; Reinhardt, E. G.; Rissolo, D.; Chatters, J. C.; Nava Blank, A.; Luna Erreguerena, P.
2015-09-01
The skeletal remains of a Paleoamerican (Naia; HN5/48) and extinct megafauna were found at -40 to -43 mbsl in a submerged dissolution chamber named Hoyo Negro (HN) in the Sac Actun Cave System, Yucatan Peninsula, Mexico. The human remains were dated to between 12 and 13 Ka, making these remains the oldest securely dated in the Yucatan. Twelve sediment cores were used to reconstruct the Holocene flooding history of the now phreatic cave passages and cenotes (Ich Balam, Oasis) that connect to HN. Four facies were found: 1. bat guano and Seed (SF), 2. lime Mud (MF), 3. Calcite Rafts (CRF) and 4. Organic Matter/Calcite Rafts (OM/CRF) which were defined by their lithologic characteristics and ostracod, foraminifera and testate amoebae content. Basal radiocarbon ages (AMS) of aquatic sediments (SF) combined with cave bottom and ceiling height profiles determined the history of flooding in HN and when access was restricted for human and animal entry. Our results show that the bottom of HN was flooded at least by 9850 cal yr BP but likely earlier. We also found, that the pit became inaccessible for human and animal entry at ≈8100 cal yr BP, when water reaching the cave ceiling effectively prevented entry. Water level continued to rise between ≈6000 and 8100 cal yr BP, filling the cave passages and entry points to HN (Cenotes Ich Balam and Oasis). Analysis of cave facies revealed that both Holocene sea-level rise and cave ceiling height determined the configuration of airways and the deposition of floating and bat derived OM (guano and seeds). Calcite rafts, which form on the water surface, are also dependent on the presence of airways but can also form in isolated air domes in the cave ceiling that affect their loci of deposition on the cave bottom. These results indicated that aquatic cave sedimentation is transient in time and space, necessitating extraction of multiple cores to determine a limit after which flooding occurred.
Directory of Open Access Journals (Sweden)
Won-Kwang Park
2013-01-01
Full Text Available An inverse problem for reconstructing arbitrary-shaped thin penetrable electromagnetic inclusions concealed in a homogeneous material is considered in this paper. For this purpose, the level-set evolution method is adopted. The topological derivative concept is incorporated in order to evaluate the evolution speed of the level-set functions. The results of the corresponding numerical simulations with and without noise are presented in this paper.
Profit maximization mitigates competition
DEFF Research Database (Denmark)
Dierker, Egbert; Grodal, Birgit
1996-01-01
We consider oligopolistic markets in which the notion of shareholders' utility is well-defined and compare the Bertrand-Nash equilibria in case of utility maximization with those under the usual profit maximization hypothesis. Our main result states that profit maximization leads to less price competition than utility maximization. Since profit maximization tends to raise prices, it may be regarded as beneficial for the owners as a whole. Moreover, if profit maximization is a good proxy for utility maximization, then there is no need for a general equilibrium analysis that takes the distribution of profits among consumers fully into account, and partial equilibrium analysis suffices.
Gendreau, Keith; Cash, Webster; Gorenstein, Paul; Windt, David; Kaaret, Phil; Reynolds, Chris
2004-01-01
The Beyond Einstein Program in NASA's Office of Space Science Structure and Evolution of the Universe theme spells out the top level scientific requirements for a Black Hole Imager in its strategic plan. The MAXIM mission will provide better than one tenth of a microarcsecond imaging in the X-ray band in order to satisfy these requirements. We will overview the driving requirements to achieve these goals and ultimately resolve the event horizon of a supermassive black hole. We will present the current status of this effort that includes a study of a baseline design as well as two alternative approaches.
Ninković, Srđan; Avramov, Snežana; Harhaji, Vladimir; Obradović, Mirko; Vranješ, Miodrag; Milankov, Miroslav
2015-01-01
The goal of this study was to examine whether, and how, different levels of sports activity influence the quality of life of patients one year after reconstruction of the anterior cruciate ligament. The study included 185 patients operated on at the Department of Orthopedic Surgery and Traumatology of the Clinical Centre of Vojvodina, who were followed for twelve months. Data were collected using the modified Knee Injury and Osteoarthritis Outcome Score questionnaire, which included the Lysholm scale. This study included 146 male and 39 female subjects. The reconstruction of the anterior cruciate ligament was equally successful in both gender groups. In relation to different types of sports activity, there were no differences in the overall quality of life measured by the questionnaire and its subscales, regardless of the level (professional or recreational). However, regarding the level of sports activity, there were differences between the subjects engaged in sports at the national level and those engaged in sports at the recreational level, and particularly in comparison with the physically inactive population. No significant correlation was found when examining the relationship between the aforementioned sports activities. This study has shown that the overall quality of life one year after reconstruction of the anterior cruciate ligament does not differ in relation to either the gender of the subjects or the type of sports activity, while the level of sports activity does have some influence on the quality of life. Professional athletes proved to train significantly more intensively after this reconstruction than those engaged in sports recreationally.
Grain-size based sea-level reconstruction in the south Bohai Sea during the past 135 kyr
Yi, Liang; Chen, Yanping
2013-04-01
Future anthropogenic sea-level rise and its impact on coastal regions is an important issue facing human civilizations. Due to the short nature of the instrumental record of sea-level change, development of proxies for sea-level change prior to the advent of instrumental records is essential to reconstruct long-term background sea-level changes on local, regional and global scales. Two of the most widely used approaches for past sea-level changes are: (1) exploitation of dated geomorphologic features such as coastal sands (e.g. Mauz and Hassler, 2000), salt marsh (e.g. Madsen et al., 2007), terraces (e.g. Chappell et al., 1996), and other coastal sediments (e.g. Zong et al., 2003); and (2) sea-level transfer functions based on faunal assemblages such as testate amoebae (e.g. Charman et al., 2002), foraminifera (e.g. Chappell and Shackleton, 1986; Horton, 1997), and diatoms (e.g. Horton et al., 2006). While a variety of methods has been developed to reconstruct palaeo-changes in sea level, many regions, including the Bohai Sea, China, still lack detailed relative sea-level curves extending back to the Pleistocene (Yi et al., 2012). For example, coral terraces are absent in the Bohai Sea, and the poor preservation of faunal assemblages makes development of a transfer function for a relative sea-level reconstruction unfeasible. In contrast, frequent alternations between transgression and regression has presumably imprinted sea-level change on the grain size distribution of Bohai Sea sediments, which varies from medium silt to coarse sand during the late Quaternary (IOCAS, 1985). Advantages of grainsize-based relative sea-level transfer function approaches are that they require smaller sample sizes, allowing for replication, faster measurement and higher spatial or temporal resolution at a fraction of the cost of detail micro-palaeontological analysis (Yi et al., 2012). Here, we employ numerical methods to partition sediment grain size using a combined database of
Application of conifer needles in the reconstruction of Holocene CO2 levels
Kouwenberg, L.L.R.
1973-01-01
To clarify the nature of the link between CO2 and climate on relatively short time-scales, precise, high-resolution reconstructions of the pre-industrial evolution of atmospheric CO2 are required. Adjustment of stomatal frequency to changes in atmospheric CO2 allows plants of many species to retain
Directory of Open Access Journals (Sweden)
Magnus F Kaffarnik
Full Text Available To investigate the relationship between the degree of liver dysfunction, quantified by the maximal liver function capacity (LiMAx) test, and endothelin-1, TNF-α and IL-6 in septic surgical patients. 28 septic patients (8 female, 20 male, age range 35-80 y) were prospectively investigated in a surgical intensive care unit. Liver function, defined by the LiMAx test, and measurements of plasma levels of endothelin-1, TNF-α and IL-6 were carried out within the first 24 hours after onset of septic symptoms, followed by days 2, 5 and 10. Patients were divided into 2 groups (group A: LiMAx ≥100 μg/kg/h, moderate liver dysfunction; group B: LiMAx <100 μg/kg/h, severe liver dysfunction) for analysis and investigated regarding the correlation between endothelin-1 and the severity of liver failure, quantified by the LiMAx test. Group B showed significantly higher results for endothelin-1 than group A (P = 0.01, d5; 0.02, d10). For TNF-α, group B revealed higher results than group A, with a significant difference on day 10 (P = 0.005). IL-6 showed a non-significant trend towards higher results in group B. The Spearman's rank correlation coefficient revealed a significant correlation between LiMAx and endothelin-1 (-0.434; P < 0.001), TNF-α (-0.515; P < 0.001) and IL-6 (-0.590; P < 0.001). Sepsis-related hepatic dysfunction is associated with elevated plasma levels of endothelin-1, TNF-α and IL-6. Low LiMAx results combined with increased endothelin-1 and TNF-α, and a favourable correlation between LiMAx and cytokine values, support the findings of a crucial role of endothelin-1 and TNF-α in the development of septic liver failure.
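The Spearman's rank correlation used above to relate LiMAx to cytokine levels is simply the Pearson correlation computed on average ranks. A self-contained sketch (the data in the test are invented; the study's patient data are not reproduced here):

```python
import numpy as np

def rankdata(x):
    # Average ranks with tie handling (ties share the mean of their ranks).
    x = np.asarray(x, dtype=float)
    order = np.argsort(x, kind="stable")
    ranks = np.empty(len(x))
    sx = x[order]
    i = 0
    while i < len(x):
        j = i
        while j + 1 < len(x) and sx[j + 1] == sx[i]:
            j += 1
        ranks[order[i:j + 1]] = (i + j) / 2 + 1   # mean of ranks i+1..j+1
        i = j + 1
    return ranks

def spearman(x, y):
    # Pearson correlation of the rank-transformed data.
    rx, ry = rankdata(x), rankdata(y)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))
```

A negative coefficient, as reported for LiMAx against each cytokine, means that lower liver function capacity goes with higher mediator levels.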
Energy Technology Data Exchange (ETDEWEB)
Rohr, David [Frankfurt Institute for Advanced Studies, Frankfurt (Germany); Collaboration: ALICE-Collaboration
2016-07-01
ALICE is one of the four major experiments at the Large Hadron Collider (LHC) at CERN. Its main goal is the study of matter under extreme pressure and temperature as produced in heavy ion collisions at LHC. The ALICE High Level Trigger (HLT) is an online compute farm of around 200 nodes that performs a real time event reconstruction of the data delivered by the ALICE detectors. The HLT employs a fast FPGA based cluster finder algorithm as well as a GPU based track reconstruction algorithm and it is designed to process the maximum data rate expected from the ALICE detectors in real time. We present new features of the HLT for LHC Run 2 that started in 2015. A new fast standalone track reconstruction algorithm for the Inner Tracking System (ITS) enables the HLT to compute and report to LHC the luminous region of the interactions in real time. We employ a new dynamically reconfigurable histogram component that allows the visualization of characteristics of the online reconstruction using the full set of events measured by the detectors. This improves our monitoring and QA capabilities. During Run 2, we plan to deploy online calibration, starting with the calibration of the TPC (Time Projection Chamber) detector's drift time. First proof of concept tests were successfully performed using data-replay on our development cluster and during the heavy ion period at the end of 2015.
DEFF Research Database (Denmark)
Svendsen, Peter Limkilde; Andersen, Ole Baltazar; Nielsen, Allan Aasbjerg
2012-01-01
Ocean satellite altimetry has provided global sets of sea level data for the last two decades, allowing determination of spatial patterns in global sea level. For reconstructions going back further than this period, tide gauge data can be used as a proxy for the model. We examine different methods of combining satellite altimetry and tide gauge data using optimal weighting of tide gauge data, linear regression and EOFs, including automatic quality checks of the tide gauge time series. We attempt to augment the model using various proxies such as climate indices like the NAO and PDO, and investigate ... to spatial distribution; tide gauge data are available around the Arctic Ocean, which may be important for a later high-latitude reconstruction.
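The EOF-based variant of such reconstructions can be sketched in a few lines: spatial modes are extracted from the altimetry-era field, their amplitudes are fitted to the sparse tide-gauge subset by least squares, and the fit is projected back onto the full grid. The data below are synthetic and the setup (two exact modes, five gauges) is an illustrative assumption, not the authors' configuration.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic "altimetry" field (time x space), built from two spatial modes.
n_t, n_s = 120, 50
modes = rng.normal(size=(2, n_s))
amps = rng.normal(size=(n_t, 2))
field = amps @ modes

# EOFs (leading spatial modes) from the altimetry period via SVD.
_, _, vt = np.linalg.svd(field, full_matrices=False)
eofs = vt[:2]

# "Tide gauges": a sparse subset of grid locations.
gauge_idx = np.array([3, 11, 27, 40, 48])

def reconstruct(gauge_obs):
    # Least-squares fit of EOF amplitudes to the gauge observations,
    # then projection back onto the full spatial grid.
    a, *_ = np.linalg.lstsq(eofs[:, gauge_idx].T, gauge_obs, rcond=None)
    return a @ eofs

t = 7
rec = reconstruct(field[t, gauge_idx])
```

Because the synthetic field is exactly rank two, five gauges suffice to recover it; with real data the truncation level and the gauge weighting become the critical choices.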
Maximizing and customer loyalty: Are maximizers less loyal?
Directory of Open Access Journals (Sweden)
Linda Lai
2011-06-01
Full Text Available Despite their efforts to choose the best of all available solutions, maximizers seem to be more inclined than satisficers to regret their choices and to experience post-decisional dissonance. Maximizers may therefore be expected to change their decisions more frequently and hence exhibit lower customer loyalty to providers of products and services compared to satisficers. Findings from the study reported here (N = 1978) support this prediction. Maximizers reported significantly higher intentions to switch to another service provider (television provider) than satisficers. Maximizers' intentions to switch appear to be intensified and mediated by higher proneness to regret, increased desire to discuss relevant choices with others, higher levels of perceived knowledge of alternatives, and higher ego involvement in the end product, compared to satisficers. Opportunities for future research are suggested.
Wiedenmann, W; Baines, J T M; Bee, C P; Biglietti, M; Bogaerts, A; Boisvert, V; Bosman, M; Brandt, S; Caron, B; Casado, M P; Cataldi, G; Cavalli, D; Cervetto, M; Comune, G; Corso-Radu, A; Di Mattia, A; Díaz-Gómez, M; Dos Anjos, A; Drohan, J; Ellis, Nick; Elsing, M; Epp, B; Etienne, F; Falciano, S; Farilla, A; George, S; Ghete, V M; González, S; Grothe, M; Kaczmarska, A; Karr, K M; Khomich, A; Konstantinidis, N P; Krasny, W; Li, W; Lowe, A; Luminari, L; Meessen, C; Mello, A G; Merino, G; Morettini, P; Moyse, E; Nairz, A; Negri, A; Nikitin, N V; Nisati, A; Padilla, C; Parodi, F; Pérez-Réale, V; Pinfold, J L; Pinto, P; Polesello, G; Qian, Z; Resconi, S; Rosati, S; Scannicchio, D A; Schiavi, C; Schörner-Sadenius, T; Segura, E; De Seixas, J M; Shears, T G; Sivoklokov, S Yu; Smizanska, M; Soluk, R A; Stanescu, C; Tapprogge, Stefan; Touchard, F; Vercesi, V; Watson, A T; Wengler, T; Werner, P; Wheeler, S; Wickens, F J; Wielers, M; Zobernig, G; NSS-MIC 2003 - IEEE Nuclear Science Symposium and Medical Imaging Conference, Part 1
2004-01-01
The Atlas High Level Trigger's primary function of event selection will be accomplished with a Level-2 trigger farm and an Event Filter farm, both running software components developed in the Atlas offline reconstruction framework. While this approach provides a unified software framework for event selection, it poses strict requirements on offline components critical for the Level-2 trigger. A Level-2 decision in Atlas must typically be accomplished within 10 ms and with multiple event processing in concurrent threads. In order to address these constraints, prototypes have been developed that incorporate elements of the Atlas Data Flow -, High Level Trigger -, and offline framework software. To realize a homogeneous software environment for offline components in the High Level Trigger, the Level-2 Steering Controller was developed. With electron/gamma- and muon-selection slices it has been shown that the required performance can be reached, if the offline components used are carefully designed and optimized ...
Waubert de Puiseau, Berenike; Greving, Sven; Aßfalg, André; Musch, Jochen
2017-09-01
Aggregating information across multiple testimonies may improve crime reconstructions. However, different aggregation methods are available, and research on which method is best suited for aggregating multiple observations is lacking. Furthermore, little is known about how variance in the accuracy of individual testimonies impacts the performance of competing aggregation procedures. We investigated the superiority of aggregation-based crime reconstructions involving multiple individual testimonies and whether this superiority varied as a function of the number of witnesses and the degree of heterogeneity in witnesses' ability to accurately report their observations. Moreover, we examined whether heterogeneity in competence levels differentially affected the relative accuracy of two aggregation procedures: a simple majority rule, which ignores individual differences, and the more complex general Condorcet model (Romney et al., Am Anthropol 88(2):313-338, 1986; Batchelder and Romney, Psychometrika 53(1):71-92, 1988), which takes into account differences in competence between individuals. 121 participants viewed a simulated crime and subsequently answered 128 true/false questions about the crime. We experimentally generated groups of witnesses with homogeneous or heterogeneous competences. Both the majority rule and the general Condorcet model provided more accurate reconstructions of the observed crime than individual testimonies. The superiority of aggregated crime reconstructions involving multiple individual testimonies increased with an increasing number of witnesses. Crime reconstructions were most accurate when competences were heterogeneous and aggregation was based on the general Condorcet model. We argue that a formal aggregation should be considered more often when eyewitness testimonies have to be assessed and that the general Condorcet model provides a good framework for such aggregations.
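As an illustration of the two aggregation rules contrasted in this study, the following sketch compares a simple majority rule with a competence-weighted vote. The log-odds weighting shown is a standard simplification of competence-based aggregation, not the full general Condorcet model, which additionally estimates competences and response biases from the answer matrix itself; the answer data and competence values are invented for the example.

```python
import numpy as np

def majority_rule(answers):
    # answers: (n_witnesses, n_items) array of 0/1 true/false responses
    return (answers.mean(axis=0) > 0.5).astype(int)

def competence_weighted_vote(answers, competences):
    # Each witness votes with log-odds weight w_i = log(p_i / (1 - p_i)),
    # so a highly accurate witness can outvote several near-chance ones.
    w = np.log(competences / (1 - competences))
    score = (w[:, None] * (2 * answers - 1)).sum(axis=0)  # +w for "true", -w for "false"
    return (score > 0).astype(int)

truth = np.array([1, 0, 1, 1])
answers = np.array([
    [1, 0, 1, 1],  # accurate witness (p = 0.9)
    [1, 0, 0, 1],  # near-chance witness (p = 0.55)
    [0, 1, 0, 1],  # near-chance witness (p = 0.55)
])
competences = np.array([0.9, 0.55, 0.55])

maj_acc = (majority_rule(answers) == truth).mean()                          # 0.75
wtd_acc = (competence_weighted_vote(answers, competences) == truth).mean()  # 1.0
```

On item 3 (third column) the two near-chance witnesses outvote the accurate one under the majority rule, while the weighted vote recovers the correct answer, mirroring the paper's finding that competence-sensitive aggregation helps most when competences are heterogeneous.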
Maximally incompatible quantum observables
Energy Technology Data Exchange (ETDEWEB)
Heinosaari, Teiko, E-mail: teiko.heinosaari@utu.fi [Turku Centre for Quantum Physics, Department of Physics and Astronomy, University of Turku, FI-20014 Turku (Finland); Schultz, Jussi, E-mail: jussi.schultz@gmail.com [Dipartimento di Matematica, Politecnico di Milano, Piazza Leonardo da Vinci 32, I-20133 Milano (Italy); Toigo, Alessandro, E-mail: alessandro.toigo@polimi.it [Dipartimento di Matematica, Politecnico di Milano, Piazza Leonardo da Vinci 32, I-20133 Milano (Italy); Istituto Nazionale di Fisica Nucleare, Sezione di Milano, Via Celoria 16, I-20133 Milano (Italy); Ziman, Mario, E-mail: ziman@savba.sk [RCQI, Institute of Physics, Slovak Academy of Sciences, Dúbravská cesta 9, 84511 Bratislava (Slovakia); Faculty of Informatics, Masaryk University, Botanická 68a, 60200 Brno (Czech Republic)
2014-05-01
The existence of maximally incompatible quantum observables in the sense of a minimal joint measurability region is investigated. Employing the universal quantum cloning device it is argued that only infinite dimensional quantum systems can accommodate maximal incompatibility. It is then shown that two of the most common pairs of complementary observables (position and momentum; number and phase) are maximally incompatible.
Kumar, V.; Melet, A.; Meyssignac, B.; Ganachaud, A.; Kessler, W. S.; Singh, A.; Aucan, J.
2018-02-01
Rising sea levels are a critical concern in small island nations. The problem is especially serious in the western South Pacific, where the total sea level rise over the last 60 years has been up to 3 times the global average. In this study, we aim to reconstruct sea levels at selected sites in the region (Suva and Lautoka, Fiji; Nouméa, New Caledonia) as a multilinear regression (MLR) of atmospheric and oceanic variables. We focus on sea level variability at interannual-to-interdecadal time scales, and on the trend over the 1988-2014 period. Local sea levels are first expressed as a sum of steric and mass changes. Then a dynamical approach is used based on wind stress curl as a proxy for the thermosteric component, as wind stress curl anomalies can modulate the thermocline depth and resultant sea levels via Rossby wave propagation. Statistically significant predictors among wind stress curl, halosteric sea level, zonal/meridional wind stress components, and sea surface temperature are used to construct a MLR model simulating local sea levels. Although we are focusing on the local scale, the global mean sea level needs to be adjusted for. Our reconstructions provide insights on key drivers of sea level variability at the selected sites, showing that while local dynamics and the global signal modulate sea level to some extent, most of the variance is driven by regional factors. On average, the MLR model is able to reproduce 82% of the variance in island sea level, and could be used to derive local sea level projections via downscaling of climate models.
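The core fitting step described above can be sketched as an ordinary least-squares multilinear regression. This is a minimal illustration on synthetic data: the predictor names follow the abstract (wind stress curl, halosteric sea level, SST), but the series and coefficients are invented stand-ins, not the study's data or its predictor-selection procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 324  # e.g. monthly samples over 1988-2014

# Synthetic stand-ins for the predictors named in the study.
curl = rng.standard_normal(n)        # wind stress curl proxy
halosteric = rng.standard_normal(n)  # halosteric sea level
sst = rng.standard_normal(n)         # sea surface temperature
sea_level = 0.8 * curl + 0.4 * halosteric + 0.2 * sst + 0.1 * rng.standard_normal(n)

# MLR with an intercept column; lstsq solves min ||X beta - y||^2.
X = np.column_stack([np.ones(n), curl, halosteric, sst])
beta, *_ = np.linalg.lstsq(X, sea_level, rcond=None)
fitted = X @ beta
var_explained = 1 - np.var(sea_level - fitted) / np.var(sea_level)
```

The fraction of variance explained plays the role of the 82% figure quoted in the abstract; with the low noise level chosen here it is close to 1.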
3D road marking reconstruction from street-level calibrated stereo pairs
Soheilian, Bahman; Paparoditis, Nicolas; Boldo, Didier
This paper presents an automatic approach to road marking reconstruction using stereo pairs acquired by a mobile mapping system in a dense urban area. Two types of road markings were studied: zebra crossings (crosswalks) and dashed lines. These two types of road markings consist of strips having known shape and size. These geometric specifications are used to constrain the recognition of strips. In both cases (i.e. zebra crossings and dashed lines), the reconstruction method consists of three main steps. The first step extracts edge points from the left and right images of a stereo pair and computes 3D linked edges using a matching process. The second step comprises a filtering process that uses the known geometric specifications of road marking objects. The goal is to preserve linked edges that can plausibly belong to road markings and to filter others out. The final step uses the remaining linked edges to fit a theoretical model to the data. The method developed has been used for processing a large number of images. Road markings are successfully and precisely reconstructed in dense urban areas under real traffic conditions.
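The geometric filtering step in the second stage can be illustrated with a simple plausibility test on reconstructed quadrilaterals. The strip dimensions and tolerance below are hypothetical placeholders, not the specifications used in the paper, which depend on the road-marking standard in force.

```python
import numpy as np

# Hypothetical specification for a dashed-line strip (metres).
SPEC_LENGTH, SPEC_WIDTH, REL_TOL = 3.0, 0.15, 0.2

def plausible_strip(corners):
    """Accept a reconstructed 3D quadrilateral (4x3 array of ordered
    corners) only if its side lengths match the known strip
    specification within a relative tolerance."""
    sides = np.linalg.norm(np.roll(corners, -1, axis=0) - corners, axis=1)
    length, width = sides.max(), sides.min()
    return (abs(length - SPEC_LENGTH) / SPEC_LENGTH <= REL_TOL
            and abs(width - SPEC_WIDTH) / SPEC_WIDTH <= REL_TOL)

strip = np.array([[0, 0, 0], [3.0, 0, 0], [3.0, 0.15, 0], [0, 0.15, 0]])  # kept
blob = np.array([[0, 0, 0], [1.0, 0, 0], [1.0, 0.5, 0], [0, 0.5, 0]])     # filtered out
```

A real pipeline would apply such constraints to the 3D linked edges before model fitting, which is the role of the filtering step described above.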
Phenomenology of maximal and near-maximal lepton mixing
International Nuclear Information System (INIS)
Gonzalez-Garcia, M. C.; Pena-Garay, Carlos; Nir, Yosef; Smirnov, Alexei Yu.
2001-01-01
The possible existence of maximal or near-maximal lepton mixing constitutes an intriguing challenge for fundamental theories of flavor. We study the phenomenological consequences of maximal and near-maximal mixing of the electron neutrino with other (x = tau and/or muon) neutrinos. We describe the deviations from maximal mixing in terms of a parameter ε ≡ 1 − 2 sin²θ_ex and quantify the present experimental status for |ε|. The most significant information on ν_e mixing comes from solar neutrino experiments. We find that the global analysis of solar neutrino data allows maximal mixing with confidence level better than 99% for 10⁻⁸ eV² ≲ Δm² ≲ 10⁻⁷ eV². In the mass ranges Δm² ≳ 1.5×10⁻⁵ eV² and 4×10⁻¹⁰ eV² ≲ Δm² ≲ 10⁻⁷ eV², the full interval of |ε| is allowed. We also discuss the implications of maximal ν_e mixing for atmospheric neutrinos, supernova neutrinos, and neutrinoless double beta decay.
Reconstruction of epidemic curves for pandemic influenza A (H1N1 2009 at city and sub-city levels
Directory of Open Access Journals (Sweden)
Wong Ngai Sze
2010-11-01
To better describe the epidemiology of influenza at the local level, the time course of pandemic influenza A (H1N1) 2009 in the city of Hong Kong was reconstructed from notification data after a decomposition procedure and time series analysis. GIS (geographic information system) methodology was incorporated for assessing spatial variation. Between May and September 2009, a total of 24,415 cases were successfully geocoded, out of 25,473 (95.8%) reports in the original dataset. The reconstructed epidemic curve was characterized by a small initial peak and a nadir, followed by a rapid rise to the ultimate plateau. The full course of the epidemic lasted about 6 months. Despite the small geographic area of only 1,000 km², distinctive spatial variation was observed in the configuration of the curves across 6 geographic regions. Given the relatively uniform physical and climatic environment within Hong Kong, the temporo-spatial variability of influenza spread could only be explained by the heterogeneous population structure and mobility patterns. Our study illustrates how an epidemic curve can be reconstructed using regularly collected surveillance data, which would be useful in informing intervention at local levels.
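The decomposition idea behind such a reconstruction can be sketched with a centered moving average that removes a day-of-week reporting cycle from notification counts. This is a minimal stand-in for the paper's actual decomposition and time series procedure; the epidemic shape below is stylized to match the description (small initial peak, nadir, rise to a plateau).

```python
import numpy as np

def weekly_smooth(counts):
    # A centered 7-day moving average cancels any additive/multiplicative
    # weekly cycle exactly when the trend is locally constant.
    kernel = np.ones(7) / 7
    return np.convolve(counts, kernel, mode="same")

days = np.arange(180)
# Stylized true epidemic curve: initial peak near day 20, nadir, then a
# logistic rise to a plateau around 300 cases/day.
trend = 40 * np.exp(-((days - 20) / 8.0) ** 2) + 300 / (1 + np.exp(-(days - 90) / 10.0))
weekly = 1 + 0.3 * np.sin(2 * np.pi * days / 7)  # weekly reporting cycle
observed = trend * weekly

curve = weekly_smooth(observed)
err_raw = np.abs(observed[10:-10] - trend[10:-10]).mean()
err_smooth = np.abs(curve[10:-10] - trend[10:-10]).mean()
```

After smoothing, the reconstructed curve tracks the underlying trend far more closely than the raw notifications, which is the point of the decomposition step.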
International Nuclear Information System (INIS)
Smarda, M; Alexopoulou, E; Mazioti, A; Kordolaimi, S; Ploussi, A; Efstathopoulos, E; Priftis, K
2015-01-01
The purpose of the study is to determine the appropriate iterative reconstruction (IR) algorithm level that combines image quality and diagnostic confidence for pediatric patients undergoing high-resolution computed tomography (HRCT). During the last 2 years, a total of 20 children up to 10 years old with a clinical presentation of chronic bronchitis underwent HRCT in our department's 64-detector row CT scanner using the iDose IR algorithm, with almost identical image settings (80 kVp, 40-50 mAs). CT images were reconstructed with all iDose levels (levels 1 to 7) as well as with the filtered back projection (FBP) algorithm. Subjective image quality was evaluated by 2 experienced radiologists in terms of image noise, sharpness, contrast and diagnostic acceptability using a 5-point scale (1 = excellent image, 5 = non-acceptable image). The presence of artifacts was also noted. All mean scores from both radiologists corresponded to satisfactory image quality (score ≤3), even with the FBP algorithm. Almost excellent (score <2) overall image quality was achieved with iDose levels 5 to 7, but the oversmoothing artifacts appearing with iDose levels 6 and 7 affected diagnostic confidence. In conclusion, the use of iDose level 5 enables almost excellent image quality without considerable artifacts affecting the diagnosis. Further evaluation is needed in order to draw more precise conclusions. (paper)
DEFF Research Database (Denmark)
Noer, Ivan; Tønnesen, K H; Sager, P
1978-01-01
Preoperative measurements of direct femoral artery systolic pressure, indirect ankle systolic pressure and direct brachial artery systolic pressure were carried out in nine patients with severe ischemia and arterial occlusions both proximal and distal to the inguinal ligament. The pressure-rise at the ankle was estimated preoperatively by assuming that the ankle pressure would rise in proportion to the rise in femoral artery pressure. Thus it was predicted that reconstruction of the iliac obstruction with aorto-femoral pressure gradients from 44 to 96 mm Hg would result in a rise in ankle pressure of 16-54 mm Hg. The actual rise in ankle pressure one month after reconstruction of the iliac arteries ranged from 10 to 46 mm Hg and was well correlated with the preoperative estimations. In conclusion, with proper pressure measurements the run-off problem of multiple level arterial occlusions can ...
CUDA based Level Set Method for 3D Reconstruction of Fishes from Large Acoustic Data
DEFF Research Database (Denmark)
Sharma, Ojaswa; Anton, François
2009-01-01
Acoustic images present views of underwater dynamics, even at great depths. With multi-beam echo sounders (SONARs), it is possible to capture series of 2D high-resolution acoustic images. 3D reconstruction of the water column and subsequent estimation of fish abundance and fish species identification ... of suppressing threshold and show its convergence as the evolution proceeds. We also present a GPU-based streaming computation of the method using NVIDIA's CUDA framework to handle large volume datasets. Our implementation is optimised for memory usage to handle large volumes.
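The level set machinery underlying such a segmentation can be sketched on the CPU with NumPy. This is a minimal constant-speed front evolution (phi_t = -F|∇phi| with first-order differences), not the paper's CUDA implementation and without its threshold-suppression term; the grid and speed are illustrative.

```python
import numpy as np

h = 0.02
xs = np.arange(-1, 1 + h, h)
X, Y = np.meshgrid(xs, xs)
phi = np.sqrt(X**2 + Y**2) - 0.5  # signed distance to a circle of radius 0.5

def evolve(phi, speed, dt, steps, h):
    # phi_t = -speed * |grad phi|: a positive, constant speed moves the
    # zero contour outward along its normal by speed*dt per step.
    for _ in range(steps):
        gy, gx = np.gradient(phi, h)
        phi = phi - dt * speed * np.sqrt(gx**2 + gy**2)
    return phi

phi2 = evolve(phi, speed=1.0, dt=0.01, steps=10, h=h)

# Recover the front radius from the enclosed area: the circle should have
# grown from radius 0.5 to about 0.6 after moving 0.1 outward.
area = (phi2 < 0).sum() * h * h
radius = np.sqrt(area / np.pi)
```

A production implementation would add an upwind scheme, reinitialization, and a data-driven speed term, and (as in the paper) would stream tiles of the volume through the GPU to bound memory use.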
Energy Technology Data Exchange (ETDEWEB)
Kuettel, M.; Wanner, H. [University of Bern, Oeschger Centre for Climate Change Research (OCCR), and Institute of Geography, Climatology and Meteorology, Bern (Switzerland); Xoplaki, E. [University of Bern, Oeschger Centre for Climate Change Research (OCCR), and Institute of Geography, Climatology and Meteorology, Bern (Switzerland); EEWRC, The Cyprus Institute, Nicosia (Cyprus); Gallego, D. [Universidad Pablo de Olavide de Sevilla, Departamento de Sistemas Fisicos, Quimicos y Naturales, Sevilla (Spain); Luterbacher, J. [University of Bern, Oeschger Centre for Climate Change Research (OCCR), and Institute of Geography, Climatology and Meteorology, Bern (Switzerland); Justus-Liebig University of Giessen, Department of Geography, Climatology, Climate Dynamics and Climate Change, Giessen (Germany); Garcia-Herrera, R. [Universidad Complutense de Madrid, Departamento de Fisica de la Tierra II, Facultad de CC Fisicas, Madrid (Spain); Allan, R. [Met Office Hadley Centre, Exeter (United Kingdom); Barriendos, M. [University of Barcelona, Department of Modern History, Barcelona (Spain); Jones, P.D. [University of East Anglia, Climatic Research Unit, School of Environmental Sciences, Norwich (United Kingdom); Wheeler, D. [University of Sunderland, Faculty of Applied Sciences, Sunderland (United Kingdom)
2010-06-15
Local to regional climate anomalies are to a large extent determined by the state of the atmospheric circulation. The knowledge of large-scale sea level pressure (SLP) variations in former times is therefore crucial when addressing past climate changes across Europe and the Mediterranean. However, currently available SLP reconstructions lack data from the ocean, particularly in the pre-1850 period. Here we present a new statistically-derived 5° x 5° resolved gridded seasonal SLP dataset covering the eastern North Atlantic, Europe and the Mediterranean area (40°W-50°E; 20°N-70°N) back to 1750 using terrestrial instrumental pressure series and marine wind information from ship logbooks. For the period 1750-1850, the new SLP reconstruction provides a more accurate representation of the strength of the winter westerlies as well as the location and variability of the Azores High than currently available multiproxy pressure field reconstructions. These findings strongly support the potential of ship logbooks as an important source to determine past circulation variations especially for the pre-1850 period. This new dataset can be further used for dynamical studies relating large-scale atmospheric circulation to temperature and precipitation variability over the Mediterranean and Eurasia, for the comparison with outputs from GCMs as well as for detection and attribution studies. (orig.)
Andrew M. Parker; Wandi Bruine de Bruin; Baruch Fischhoff
2007-01-01
Our previous research suggests that people reporting a stronger desire to maximize obtain worse life outcomes (Bruine de Bruin et al., 2007). Here, we examine whether this finding may be explained by the decision-making styles of self-reported maximizers. Expanding on Schwartz et al. (2002), we find that self-reported maximizers are more likely to show problematic decision-making styles, as evidenced by self-reports of less behavioral coping, greater dependence on others when making decisions...
Maximal combustion temperature estimation
International Nuclear Information System (INIS)
Golodova, E; Shchepakina, E
2006-01-01
This work is concerned with the phenomenon of delayed loss of stability and the estimation of the maximal temperature of safe combustion. Using the qualitative theory of singular perturbations and canard techniques we determine the maximal temperature on the trajectories located in the transition region between the slow combustion regime and the explosive one. This approach is used to estimate the maximal temperature of safe combustion in multi-phase combustion models
On maximal massive 3D supergravity
Bergshoeff , Eric A; Hohm , Olaf; Rosseel , Jan; Townsend , Paul K
2010-01-01
We construct, at the linearized level, the three-dimensional (3D) N = 4 supersymmetric "general massive supergravity" and the maximally supersymmetric N = 8 "new massive supergravity". We also construct the maximally supersymmetric linearized N = 7 topologically massive supergravity, although we expect N = 6 to be maximal at the non-linear level.
International Nuclear Information System (INIS)
Yang, Lin; Liang, Changhong; Zhuang, Jian; Huang, Meiping; Liu, Hui
2017-01-01
Hybrid iterative reconstruction can reduce image noise and produce better image quality compared with filtered back-projection (FBP), but few reports describe optimization of the iteration level. We optimized the iteration level of iDose4 and evaluated image quality for pediatric cardiac CT angiography. Children (n = 160) with congenital heart disease were enrolled and divided into full-dose (n = 84) and half-dose (n = 76) groups. Four series were reconstructed using FBP and iDose4 levels 2, 4 and 6; we evaluated subjective quality of the series using a 5-grade scale and compared the series using a Kruskal-Wallis H test. For FBP and iDose4-optimal images, we compared contrast-to-noise ratios (CNR) and size-specific dose estimates (SSDE) using a Student's t-test. We also compared the diagnostic accuracy of each group using a Kruskal-Wallis H test. Mean scores for iDose4 level 4 were the best in both dose groups (all P < 0.05). CNR was improved in both groups with iDose4 level 4 as compared with FBP. Mean decrease in SSDE was 53% in the half-dose group. Diagnostic accuracy for the four datasets was in the range 92.6-96.2% (no statistical difference). iDose4 level 4 was optimal for both the full- and half-dose groups. Protocols with iDose4 level 4 allowed a 53% reduction in SSDE without significantly affecting image quality and diagnostic accuracy. (orig.)
Lewis, Stephen J; Mohanty, Chandan; Gazendam, Aaron M; Kato, So; Keshen, Sam G; Lewis, Noah D; Magana, Sofia P; Perlmutter, David; Cape, Jennifer
2018-03-01
To determine the incidence of pseudarthrosis at the osteotomy site after three-column spinal osteotomies (3-COs) with posterior column reconstruction. 82 consecutive adult 3-COs (66 patients) with a minimum of 2-year follow-up were retrospectively reviewed. All cases underwent posterior 3-COs with two-rod constructs. The inferior facets of the proximal level were reduced to the superior facets of the distal level. If that was not possible, a structural piece of bone graft either from the local resection or a local rib was slotted in the posterior column defect to re-establish continual structural posterior bone across the lateral margins of the resection. No interbody cages were used at the level of the osteotomy. There were 34 thoracic osteotomies, 47 lumbar osteotomies and one sacral osteotomy with a mean follow-up of 52 (24-126) months. All cases underwent posterior column reconstructions described above and the addition of interbody support or additional posterior rods was not performed for fusion at the osteotomy level. Among them, 29 patients underwent one or more revision surgeries. There were three definite cases of pseudarthrosis at the osteotomy site (4%). Six revisions were also performed for pseudarthrosis at other levels. Restoration of the structural integrity of the posterior column in three-column posterior-based osteotomies was associated with > 95% fusion rate at the level of the osteotomy. Pseudarthrosis at other levels was the second most common reason for revision following adjacent segment disease in the long-term follow-up.
International Nuclear Information System (INIS)
Verhaeghe, Jeroen; D'Asseler, Yves; Vandenberghe, Stefaan; Staelens, Steven; Lemahieu, Ignace
2007-01-01
The use of a temporal B-spline basis for the reconstruction of dynamic positron emission tomography data was investigated. Maximum likelihood (ML) reconstructions using an expectation maximization framework and maximum a posteriori (MAP) reconstructions using the generalized expectation maximization framework were evaluated. Different parameters of the B-spline basis, such as order, number of basis functions and knot placement, were investigated in a reconstruction task using simulated dynamic list-mode data. We found that a higher order basis reduced both the bias and the variance. Using a higher number of basis functions in the modeling of the time activity curves (TACs) allowed the algorithm to model faster changes of the TACs; however, the TACs became noisier. We compared ML, Gaussian postsmoothed ML and MAP reconstructions. The noise level in the ML reconstructions was controlled by varying the number of basis functions. The MAP algorithm penalized the integrated squared curvature of the reconstructed TAC. The postsmoothed ML was always outperformed in terms of bias and variance properties by the MAP and ML reconstructions. A simple adaptive knot placement strategy, based on an arc length redistribution scheme applied during the reconstruction, was also developed and evaluated. The free knot reconstruction allowed a more accurate reconstruction while reducing the noise level, especially for fast-changing TACs such as blood input functions. Limiting the number of temporal basis functions combined with the adaptive knot placement strategy is in this case advantageous for regularization purposes when compared to the other regularization techniques.
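The role of the temporal B-spline basis can be illustrated outside the tomographic setting by fitting a noisy time-activity curve with a clamped cubic spline basis by least squares. This is a sketch of the basis-representation idea only, not the ML/MAP list-mode reconstruction; the knot vector, TAC shape and noise level are invented for the example.

```python
import numpy as np
from scipy.interpolate import BSpline

t_end, k, n_interior = 60.0, 3, 5  # cubic basis, 5 interior knots (illustrative)
interior = np.linspace(0, t_end, n_interior + 2)[1:-1]
knots = np.r_[[0.0] * (k + 1), interior, [t_end] * (k + 1)]
n_basis = len(knots) - k - 1  # = 9

times = np.linspace(0, t_end, 120)
# Evaluating the spline with identity coefficients yields the design
# matrix of all basis functions at once: shape (len(times), n_basis).
basis = BSpline(knots, np.eye(n_basis), k)(times)

# Noisy synthetic time-activity curve (fast rise, slow washout).
rng = np.random.default_rng(1)
tac = times * np.exp(-times / 15.0)
noisy = tac + 0.05 * tac.max() * rng.standard_normal(times.size)

coef, *_ = np.linalg.lstsq(basis, noisy, rcond=None)
fit = basis @ coef
```

With only 9 temporal coefficients, the fitted curve lies much closer to the clean TAC than the raw samples, which is the regularizing effect of limiting the number of basis functions discussed in the abstract.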
Maximally multipartite entangled states
Facchi, Paolo; Florio, Giuseppe; Parisi, Giorgio; Pascazio, Saverio
2008-06-01
We introduce the notion of maximally multipartite entangled states of n qubits as a generalization of the bipartite case. These pure states have a bipartite entanglement that does not depend on the bipartition and is maximal for all possible bipartitions. They are solutions of a minimization problem. Examples for small n are investigated, both analytically and numerically.
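The defining property above (every bipartition maximally entangled, i.e. every reduced state maximally mixed) can be checked numerically for small n. The sketch below verifies it for the three-qubit GHZ state, where every bipartition is 1|2 and each single-qubit reduction has purity 1/2; the partial-trace helper is a generic construction, not code from the paper.

```python
import numpy as np

def reduced_purity(state, keep, n):
    """Tr(rho_A^2) for the reduction of an n-qubit pure state vector onto
    the qubit subset `keep`; a maximally mixed reduction has purity
    1 / 2**len(keep)."""
    psi = state.reshape([2] * n)
    traced = [i for i in range(n) if i not in keep]
    rho = np.tensordot(psi, psi.conj(), axes=(traced, traced))  # partial trace
    d = 2 ** len(keep)
    rho = rho.reshape(d, d)
    return float(np.real(np.trace(rho @ rho)))

# Three-qubit GHZ state (|000> + |111>) / sqrt(2).
ghz = np.zeros(8)
ghz[0] = ghz[-1] = 1 / np.sqrt(2)

purities = [reduced_purity(ghz, [q], 3) for q in range(3)]  # each 0.5
```

For larger n the same check over all bipartitions turns the definition into the minimization problem mentioned in the abstract: a state is maximally multipartite entangled when the worst-case bipartite purity is as small as possible.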
Blood glucose level reconstruction as a function of transcapillary glucose transport.
Koutny, Tomas
2014-10-01
A diabetic patient occasionally undergoes a detailed monitoring of their glucose levels. Over the course of a few days, a monitoring system provides a detailed track of their interstitial fluid glucose levels measured in their subcutaneous tissue. A discrepancy in the blood and interstitial fluid glucose levels is unimportant because the blood glucose levels are not measured continuously. Approximately five blood glucose level samples are taken per day, and the interstitial fluid glucose level is usually measured every 5 min. An increased frequency of blood glucose level sampling would cause discomfort for the patient; thus, there is a need for methods to estimate blood glucose levels from the glucose levels measured in subcutaneous tissue. The Steil-Rebrin model is widely used to describe the relationship between blood and interstitial fluid glucose dynamics. However, we measured glucose level patterns for which the Steil-Rebrin model does not hold. Therefore, we based our research on a different model that relates present blood and interstitial fluid glucose levels to future interstitial fluid glucose levels. Using this model, we derived an improved model for calculating blood glucose levels. In the experiments conducted, this model outperformed the Steil-Rebrin model while introducing no additional requirements for glucose sample collection. In subcutaneous tissue, 26.71% of the calculated blood glucose levels had absolute values of relative differences from smoothed measured blood glucose levels less than or equal to 5% using the Steil-Rebrin model. However, the same difference interval was achieved in 63.01% of the calculated blood glucose levels using the proposed model. In addition, 79.45% of the levels calculated with the Steil-Rebrin model, compared with 95.21% of the levels calculated with the proposed model, fell within the 20% difference interval. Copyright © 2014 Elsevier Ltd. All rights reserved.
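The difference-interval metric used above to compare the two models (the fraction of calculated levels within a given relative difference of the reference) is straightforward to compute. The glucose values below are illustrative, not data from the study.

```python
import numpy as np

def fraction_within(reference, estimated, tol):
    """Fraction of estimates whose absolute relative difference from the
    reference level is at most tol (tol=0.05 gives the 5% interval)."""
    rel = np.abs(estimated - reference) / reference
    return float((rel <= tol).mean())

ref = np.array([5.2, 6.0, 7.5, 9.1, 10.4])  # reference blood glucose, mmol/L (illustrative)
est = np.array([5.0, 6.2, 7.4, 10.2, 10.3])  # model-calculated levels

within5 = fraction_within(ref, est, 0.05)   # 0.8
within20 = fraction_within(ref, est, 0.20)  # 1.0
```

Applied to each model's output against the smoothed measured blood glucose, this yields exactly the 5% and 20% interval percentages quoted in the abstract.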
Directory of Open Access Journals (Sweden)
Andrew M. Parker
2007-12-01
Our previous research suggests that people reporting a stronger desire to maximize obtain worse life outcomes (Bruine de Bruin et al., 2007). Here, we examine whether this finding may be explained by the decision-making styles of self-reported maximizers. Expanding on Schwartz et al. (2002), we find that self-reported maximizers are more likely to show problematic decision-making styles, as evidenced by self-reports of less behavioral coping, greater dependence on others when making decisions, more avoidance of decision making, and a greater tendency to experience regret. Contrary to predictions, self-reported maximizers were more likely to report spontaneous decision making. However, the relationship between self-reported maximizing and worse life outcomes is largely unaffected by controls for measures of other decision-making styles, decision-making competence, and demographic variables.
Vigoda-Gadot, Eran; Beeri, Itai; Birman-Shemesh, Taly; Somech, Anit
2007-01-01
Purpose: Most writings on Organizational Citizenship Behavior (OCB) to date have focused on analysis at the individual level and paid less attention to other analytical frameworks at the group level (i.e., team, unit, or organization). This article approaches OCB from the less conventional perspective of group-level activities and uses it to…
Camacho-Cardenosa, Marta; Camacho-Cardenosa, Alba; Martínez Guardado, Ismael; Marcos-Serrano, Marta; Timon, Rafael; Olcina, Guillermo
2017-01-01
This pilot study aimed to determine the effects of a new dose of maximal-intensity interval training in hypoxia in active adults. Twenty-four university student volunteers were randomly assigned to three groups: hypoxia group, normoxia group or control group. The eight training sessions consisted of 2 sets of 5 repeated sprints of 10 seconds with a recovery of 20 seconds between sprints and a recovery period of 10 minutes between sets. Body composition was measured following standard procedures. A blood sample was taken for an immediate hematocrit (HCT) and hemoglobin (Hb) concentration assessment. An all-out 3-minute test was performed to evaluate ventilation parameters and power. HCT and Hb were significantly higher for the hypoxia group at the post-training and detraining assessments (P=0.01; P=0.03). Fat mass percentage was significantly lower for the hypoxia group in both assessments (P=0.05; P=0.05). The hypoxia group underwent a significant increase in mean power after the recovery period. A new dose of 8 sessions of maximal-intensity interval training in hypoxia is enough to decrease the percentage of fat mass and to improve HCT and Hb parameters and mean muscle power in healthy and active adults.
International Nuclear Information System (INIS)
Gronau, M.
1984-01-01
Two ambiguities are noted in the definition of the concept of maximal CP violation. The phase convention ambiguity is overcome by introducing a CP violating phase in the quark mixing matrix U which is invariant under rephasing transformations. The second ambiguity, related to the parametrization of U, is resolved by finding a single empirically viable definition of maximal CP violation when assuming that U does not single out one generation. Considerable improvement in the calculation of nonleptonic weak amplitudes is required to test the conjecture of maximal CP violation. 21 references
Arctic Sea Level Change over the altimetry era and reconstructed over the last 60 years
DEFF Research Database (Denmark)
Andersen, Ole Baltazar; Svendsen, Peter Limkilde; Nielsen, Allan Aasbjerg
The Arctic Ocean poses severe limitations on the use of altimetry and tide gauge data for sea level studies and prediction due to the presence of seasonal or permanent sea ice. In order to overcome this issue we reprocessed all altimetry data with editing tailored to Arctic conditions, hereby m... by Church and White (2004). We also find a significantly higher trend in the Beaufort Gyre region, showing an increase in sea level over the last decade up to 2011.
International Nuclear Information System (INIS)
Hou, Yang; Xu, Shu; Guo, Wenli; Vembar, Mani; Guo, Qiyong
2012-01-01
Aim: To assess the image quality (IQ) of an iterative reconstruction (IR) technique (iDose4) from prospective electrocardiography (ECG)-triggered coronary computed tomography angiography (coronary CTA) on a 256-slice multi-detector CT (MDCT) scanner and determine the optimal dose reduction using IR that can provide IQ comparable to filtered back projection (FBP). Method and materials: 110 consecutive patients (69 men, 41 women; age: 54 ± 10 years) underwent coronary CTA on a 256-slice MDCT (Brilliance iCT, Philips Healthcare). The control group (Group A, n = 21) was scanned using the conventional tube output (120 kVp, 210 mAs) and reconstructed using FBP. The other 4 groups were scanned with the same kVp but successively reduced tube output as follows: B (n = 15): 125 mAs; C (n = 22): 105 mAs; D (n = 36): 84 mAs; E (n = 16): 65 mAs, and reconstructed using IR levels of L3 (Group B), L4 (Group C) and L5 (Groups D and E), to compensate for the noise increase. All images were reconstructed using the same kernel (XCB). Two radiologists graded IQ in a blinded fashion on a 4-point scale (4 - excellent, 3 - good, 2 - fair and 1 - poor). Quantitative measurements of CT values, image noise and contrast-to-noise ratio (CNR) were made in each group. A receiver-operating characteristic (ROC) analysis was performed to determine a radiation reduction threshold up to which excellent IQ was maintained. Results: There were no significant differences in objective noise, SNR and CNR values among Groups A, B, C, D, and E (P = 0.14, 0.09, 0.17, respectively). There were no significant differences in the scores of the subjective IQ between Group A and Groups B, C, D, E (P = 0.23-0.97). Significant differences in image sharpness and study acceptability were observed between Groups A and E (P < 0.05). Using the criterion of excellent IQ (score 4), the ROC curve of dose levels and IQ acceptability established a reduction of 60% of tube output (Group D) as the optimum cutoff point (AUC
Hong, Sun Suk; Lee, Jong-Woong; Seo, Jeong Beom; Jung, Jae-Eun; Choi, Jiwon; Kweon, Dae Cheol
2013-12-01
The purpose of this research was to determine the adaptive statistical iterative reconstruction (ASIR) level that enables optimal image quality and dose reduction in a chest computed tomography (CT) protocol with ASIR. A chest phantom was scanned at 0-50 % ASIR levels, and the noise power spectrum (NPS), signal and noise, and the degree of distortion as peak signal-to-noise ratio (PSNR) and root-mean-square error (RMSE) were measured. In addition, the objectivity of the experiment was verified using the American College of Radiology (ACR) phantom. Moreover, on a qualitative basis, the resolution, latitude and degree of distortion of five lesions of the chest phantom were evaluated and the results compiled statistically. The NPS value decreased as the frequency increased. The lowest noise and deviation occurred at the 20 % ASIR level (mean 126.15 ± 22.21). In the distortion analysis, the signal-to-noise ratio and PSNR at the 20 % ASIR level were highest, at 31.0 and 41.52, while the maximum absolute error and RMSE showed the lowest values, 11.2 and 16. In the ACR phantom study, all ASIR levels were within the acceptable allowance of the guidelines. The 20 % ASIR level also performed best in the qualitative evaluation of the five chest-phantom lesions, with a resolution score of 4.3, latitude of 3.47 and degree of distortion of 4.25. The 20 % ASIR level thus proved best in all experiments: noise, distortion evaluation using ImageJ and qualitative evaluation of five lesions of a chest phantom. Therefore, optimal image quality as well as reduced radiation dose can be achieved when a 20 % ASIR level is applied in thoracic CT.
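The PSNR and RMSE figures of merit used in the abstract above are straightforward to compute. A minimal pure-Python sketch, with illustrative pixel values rather than the phantom data:

```python
import math

def rmse(ref, img):
    """Root-mean-square error between two equal-length pixel lists."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(ref, img)) / len(ref))

def psnr(ref, img, max_val=255.0):
    """Peak signal-to-noise ratio in dB, for a given peak pixel value."""
    err = rmse(ref, img)
    if err == 0:
        return float("inf")  # identical images
    return 20 * math.log10(max_val / err)

# Illustrative 8-bit pixel data (not from the phantom study)
reference = [100, 120, 130, 140]
noisy = [102, 118, 133, 139]
print(round(rmse(reference, noisy), 3), round(psnr(reference, noisy), 1))
```

Lower RMSE and higher PSNR both indicate a reconstruction closer to the reference, which is why the two metrics move together in the abstract's 20 % ASIR result.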
Branch, Leslie G; Crantford, John C; Thompson, James T; Tannan, Shruti C
2017-11-01
From 2004 to 2013, there were 9341 lawn mower injuries in children under 20 years old. The incidence of lawn mower injuries in children has not decreased since 1990 despite implementation of various prevention strategies. In this report, the authors review the results of pediatric lawn mower-related lower-extremity injuries treated at a tertiary care referral center as well as the overall literature. A retrospective review was performed at a level 1 trauma center over a 10-year period (2005-2015). Patients younger than 18 years who presented to the emergency room with lower-extremity lawn mower injuries were included. Of the 27 patients with lower-extremity lawn mower injuries during this period, the mean age at injury was 5.5 years and the mean Injury Severity Score was 7.2. Most (85%) patients were boys, and the predominant type of mower causing injury was a riding lawn mower (96%). Injured patients were bystanders in 78% of cases, passengers in 11%, and operators in 11%. Mean length of stay was 12.2 days, and mean time to reconstruction was 7.9 days. The mean number of surgical procedures per patient was 4.1. Amputations occurred in 15 (56%) cases, with the most common level of amputation being distal to the metatarsophalangeal joint (67%). Reconstructive procedures ranged from direct closure (41%) to free tissue transfer (7%). Major complications included infection (7%), wound dehiscence (11%), and delayed wound healing (15%). Mean follow-up was 23.6 months, and 100% of the patients were ambulatory after injury. The subgroup of patients with the most severe injuries, the highest number of amputations, and the greatest need for surgical procedures were patients aged 2 to 5 years. A review of the literature showed consistent findings. This study demonstrates the danger and morbidity that lawn mowers present to the pediatric population, particularly children aged 2 to 5 years. Every rung of the so-called reconstructive ladder is used in caring for these
Tree-ring reconstruction of the level of Great Salt Lake, USA
R. Justin DeRose; Shih-Yu Wang; Brendan M. Buckley; Matthew F. Bekker
2014-01-01
Utah's Great Salt Lake (GSL) is a closed-basin remnant of the larger Pleistocene-age Lake Bonneville. The modern instrumental record of the GSL-level (i.e. elevation) change is strongly modulated by Pacific Ocean coupled ocean/atmospheric oscillations at low frequency, and therefore reflects the decadal-scale wet/dry cycles that characterize the region. A within-...
AUC-Maximizing Ensembles through Metalearning.
LeDell, Erin; van der Laan, Mark J; Petersen, Maya
2016-05-01
Area Under the ROC Curve (AUC) is often used to measure the performance of an estimator in binary classification problems. An AUC-maximizing classifier can have significant advantages in cases where ranking correctness is valued or the outcome is rare. In a Super Learner ensemble, maximization of the AUC can be achieved by the use of an AUC-maximizing metalearning algorithm. We discuss an implementation of an AUC-maximization technique that is formulated as a nonlinear optimization problem. We also evaluate the effectiveness of a large number of different nonlinear optimization algorithms for maximizing the cross-validated AUC of the ensemble fit. The results provide evidence that AUC-maximizing metalearners can, and often do, outperform non-AUC-maximizing metalearning methods with respect to ensemble AUC. The results also demonstrate that as the level of imbalance in the training data increases, the Super Learner ensemble outperforms the top base algorithm by a larger degree.
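The core of an AUC-maximizing metalearner is choosing combination weights for base-learner predictions so that the ensemble's cross-validated AUC, rather than a likelihood, is maximized. A toy pure-Python sketch (a grid search over a single convex weight; not the nonlinear optimizers or Super Learner implementation discussed in the abstract, and the data are invented):

```python
def auc(labels, scores):
    """AUC as the probability that a random positive outranks a random
    negative (ties count as 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def best_convex_weight(labels, pred_a, pred_b, grid=101):
    """Grid-search the convex combination w*a + (1-w)*b maximizing AUC."""
    return max(
        (auc(labels, [w * a + (1 - w) * b for a, b in zip(pred_a, pred_b)]), w)
        for w in (i / (grid - 1) for i in range(grid))
    )  # returns (best_auc, best_weight)

# Toy cross-validated predictions from two hypothetical base learners
y  = [0, 0, 0, 1, 1, 1]
pa = [0.1, 0.4, 0.35, 0.8, 0.3, 0.9]   # learner A
pb = [0.2, 0.1, 0.5, 0.4, 0.6, 0.7]    # learner B
print(best_convex_weight(y, pa, pb))
```

For these toy data, neither base learner ranks perfectly on its own, but a range of mixing weights yields an ensemble AUC of 1.0, illustrating how an AUC-targeted metalearner can beat the top base algorithm.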
Rohling, Eelco J; Hibbert, Fiona D.; Williams, Felicity H.; Grant, Katharine M; Marino, Gianluca; Foster, Gavin L; Hennekam, Rick; de Lange, Gert J.; Roberts, Andrew P.; Yu, Jimin; Webster, Jody M.; Yokoyama, Yusuke
2017-01-01
Studies of past glacial cycles yield critical information about climate and sea-level (ice-volume) variability, including the sensitivity of climate to radiative change, and impacts of crustal rebound on sea-level reconstructions for past interglacials. Here we identify significant differences
An information maximization model of eye movements
Renninger, Laura Walker; Coughlan, James; Verghese, Preeti; Malik, Jitendra
2005-01-01
We propose a sequential information maximization model as a general strategy for programming eye movements. The model reconstructs high-resolution visual information from a sequence of fixations, taking into account the fall-off in resolution from the fovea to the periphery. From this framework we get a simple rule for predicting fixation sequences: after each fixation, fixate next at the location that minimizes uncertainty (maximizes information) about the stimulus. By comparing our model performance to human eye movement data and to predictions from a saliency and random model, we demonstrate that our model is best at predicting fixation locations. Modeling additional biological constraints will improve the prediction of fixation sequences. Our results suggest that information maximization is a useful principle for programming eye movements.
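The greedy rule described in the abstract, fixate next wherever the most uncertainty about the stimulus can be resolved, can be sketched in one dimension. A hypothetical sketch: the uncertainty map and the acuity fall-off are invented stand-ins for the model's foveated representation, not its actual implementation:

```python
def next_fixation(uncertainty, acuity):
    """Pick the fixation maximizing information gained, where fixating at f
    resolves the fraction acuity[|x - f|] of the uncertainty at location x
    (fovea-to-periphery fall-off)."""
    def gain(f):
        return sum(u * acuity[min(abs(x - f), len(acuity) - 1)]
                   for x, u in enumerate(uncertainty))
    return max(range(len(uncertainty)), key=gain)

def scanpath(uncertainty, acuity, steps):
    """Greedy fixation sequence; each fixation removes the information it gained."""
    u = list(uncertainty)
    path = []
    for _ in range(steps):
        f = next_fixation(u, acuity)
        path.append(f)
        for x in range(len(u)):
            u[x] *= 1 - acuity[min(abs(x - f), len(acuity) - 1)]
    return path

# 1-D toy stimulus: uncertainty peaks at both ends; acuity falls with eccentricity
u0 = [5.0, 1.0, 0.5, 1.0, 5.0]
acuity = [0.9, 0.5, 0.1]  # fraction resolved at eccentricity 0, 1, >=2
print(scanpath(u0, acuity, 2))
```

The greedy planner visits the two high-uncertainty ends first, mimicking how the model trades off foveal resolution against residual uncertainty when sequencing fixations.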
Harput, Gulcan; Ulusoy, Burak; Ozer, Hamza; Baltaci, Gul; Richards, Jim
2016-10-01
The objectives of this study were to investigate the effects of knee brace (KB) and kinesiotaping (KT) on functional performance and self-reported function in individuals six months post-ACLR who desired to return to their pre-injury activity levels but felt unable to do so due to kinesiophobia. This was a cross-sectional study involving 30 individuals six months post-ACLR with Tampa Kinesiophobia Scores >37. Individuals were tested under three conditions: no intervention, KB and KT in a randomized order. Isokinetic concentric quadriceps and hamstring strength tests, one leg hop test, star excursion balance test and global rating scale were assessed under the three conditions. The involved side showed that KT and KB significantly increased the hop distance (P=0.01, P=0.04) and improved balance (P=0.01, P=0.04), respectively, but only KB was found to increase the quadriceps and hamstring peak torques compared to no intervention (P<0.05). Individuals reported having better knee function with KB when compared to no intervention (P<0.001) and KT (P=0.03). Both KB and KT have positive effects in individuals post-ACLR which may assist in reducing kinesiophobia when returning to their pre-injury activity levels, with the KB appearing to offer the participants better knee function compared to KT. Copyright © 2016 Elsevier B.V. All rights reserved.
Maximal Entanglement in High Energy Physics
Directory of Open Access Journals (Sweden)
Alba Cervera-Lierta, José I. Latorre, Juan Rojo, Luca Rottoli
2017-11-01
We analyze how maximal entanglement is generated at the fundamental level in QED by studying correlations between helicity states in tree-level scattering processes at high energy. We demonstrate that two mechanisms for the generation of maximal entanglement are at work: (i) $s$-channel processes where the virtual photon carries equal overlaps of the helicities of the final-state particles, and (ii) the indistinguishable superposition between $t$- and $u$-channels. We then study whether requiring maximal entanglement constrains the coupling structure of QED and the weak interactions. In the case of photon-electron interactions unconstrained by gauge symmetry, we show how this requirement allows us to reproduce QED. For $Z$-mediated weak scattering, the maximal entanglement principle leads to non-trivial predictions for the value of the weak mixing angle $\theta_W$. Our results are a first step towards understanding the connections between maximal entanglement and the fundamental symmetries of high-energy physics.
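As a reminder of what "maximal entanglement" means for a two-particle helicity state, a Bell-type state of the kind the abstract's $s$-channel mechanism generates (a generic textbook illustration, not a result derived from the paper):

```latex
% Equal overlap of the two final-state helicity combinations gives a
% maximally entangled (Bell-type) state:
\[
  \lvert \Psi \rangle = \frac{1}{\sqrt{2}}
  \left( \lvert +\,-\rangle + \lvert -\,+\rangle \right),
  \qquad
  \rho_A = \operatorname{Tr}_B \lvert \Psi \rangle \langle \Psi \rvert
         = \tfrac{1}{2}\,\mathbb{1}_2 ,
\]
\[
  S(\rho_A) = -\operatorname{Tr}\left(\rho_A \log_2 \rho_A\right) = 1 .
\]
```

The reduced density matrix being maximally mixed (one bit of entanglement entropy) is the defining property of maximal two-qubit entanglement.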
Flexible event reconstruction software chains with the ALICE High-Level Trigger
International Nuclear Information System (INIS)
Ram, D; Breitner, T; Szostak, A
2012-01-01
The ALICE High-Level Trigger (HLT) has a large high-performance computing cluster at CERN whose main objective is to perform real-time analysis on the data generated by the ALICE experiment and scale it down to at most 4 GB/s, which is the current maximum mass-storage bandwidth available. Data flow in this cluster is controlled by a custom-designed software framework. It consists of a set of components which can communicate with each other via a common control interface. The software framework also supports the creation of different configurations based on the detectors participating in the HLT. These configurations define a logical data-processing “chain” of detector data-analysis components. Data flows through this software chain in a pipelined fashion so that several events can be processed at the same time. An instance of such a chain can run and manage a few thousand physics analysis and data-flow components. The HLT software and the configuration scheme used in the 2011 heavy-ion runs of ALICE are discussed in this contribution.
DEFF Research Database (Denmark)
Andersen, Klaus Ejner
1985-01-01
Guinea pig maximization tests (GPMT) with chlorocresol were performed to ascertain whether the sensitization rate was affected by minor changes in the Freund's complete adjuvant (FCA) emulsion used. Three types of emulsion were evaluated: the oil phase was mixed with propylene glycol, saline...
Hill, Malcolm S; Hill, April L; Lopez, Jose; Peterson, Kevin J; Pomponi, Shirley; Diaz, Maria C; Thacker, Robert W; Adamska, Maja; Boury-Esnault, Nicole; Cárdenas, Paco; Chaves-Fonnegra, Andia; Danka, Elizabeth; De Laine, Bre-Onna; Formica, Dawn; Hajdu, Eduardo; Lobo-Hajdu, Gisele; Klontz, Sarah; Morrow, Christine C; Patel, Jignasa; Picton, Bernard; Pisani, Davide; Pohlmann, Deborah; Redmond, Niamh E; Reed, John; Richey, Stacy; Riesgo, Ana; Rubin, Ewelina; Russell, Zach; Rützler, Klaus; Sperling, Erik A; di Stefano, Michael; Tarver, James E; Collins, Allen G
2013-01-01
Demosponges are challenging for phylogenetic systematics because of their plastic and relatively simple morphologies and many deep divergences between major clades. To improve understanding of the phylogenetic relationships within Demospongiae, we sequenced and analyzed seven nuclear housekeeping genes involved in a variety of cellular functions from a diverse group of sponges. We generated data from each of the four sponge classes (i.e., Calcarea, Demospongiae, Hexactinellida, and Homoscleromorpha), but focused on family-level relationships within demosponges. With data for 21 newly sampled families, our Maximum Likelihood and Bayesian-based approaches recovered previously phylogenetically defined taxa: Keratosa(p), Myxospongiae(p), Spongillida(p), Haploscleromorpha(p) (the marine haplosclerids) and Democlavia(p). We found conflicting results concerning the relationships of Keratosa(p) and Myxospongiae(p) to the remaining demosponges, but our results strongly supported a clade of Haploscleromorpha(p)+Spongillida(p)+Democlavia(p). In contrast to hypotheses based on mitochondrial genome and ribosomal data, nuclear housekeeping gene data suggested that freshwater sponges (Spongillida(p)) are sister to Haploscleromorpha(p) rather than part of Democlavia(p). Within Keratosa(p), we found equivocal results as to the monophyly of Dictyoceratida. Within Myxospongiae(p), Chondrosida and Verongida were monophyletic. A well-supported clade within Democlavia(p), Tetractinellida(p), composed of all sampled members of Astrophorina and Spirophorina (including the only lithistid in our analysis), was consistently revealed as the sister group to all other members of Democlavia(p). Within Tetractinellida(p), we did not recover monophyletic Astrophorina or Spirophorina. Our results also reaffirmed the monophyly of order Poecilosclerida (excluding Desmacellidae and Raspailiidae), and polyphyly of Hadromerida and Halichondrida. These results, using an independent nuclear gene set
Tri-maximal vs. bi-maximal neutrino mixing
International Nuclear Information System (INIS)
Scott, W.G
2000-01-01
It is argued that data from atmospheric and solar neutrino experiments point strongly to tri-maximal or bi-maximal lepton mixing. While ('optimised') bi-maximal mixing gives an excellent a posteriori fit to the data, tri-maximal mixing is an a priori hypothesis, which is not excluded, taking account of terrestrial matter effects
Thomas, P; Hayton, A; Beveridge, T; Marks, P; Wallace, A
2015-09-01
To assess the influence and significance of the use of iterative reconstruction (IR) algorithms on patient dose in CT in Australia. We examined survey data submitted to the Australian Radiation Protection and Nuclear Safety Agency (ARPANSA) National Diagnostic Reference Level Service (NDRLS) during 2013 and 2014. We compared median survey dose metrics with categorization by scan region and use of IR. The use of IR results in a reduction in volume CT dose index of between 17% and 44% and a reduction in dose-length product of between 14% and 34%, depending on the specific scan region. The reduction was highly significant (rank sum test) for all six scan regions included in the NDRLS. Overall, 69% (806/1167) of the surveys included in the analysis used IR. The use of IR in CT is achieving dose savings of 20-30% in routine practice in Australia. IR appears to be widely used by participants in the ARPANSA NDRLS, with approximately 70% of submitted surveys employing this technique. This study examines the impact of the use of IR on patient dose in CT on a national scale.
Savitha, D; Sejil, T V; Rao, Shwetha; Roshan, C J; Roshan, C J
2013-01-01
The purpose of the study was to investigate the effect of vocal and instrumental music on various physiological parameters during submaximal exercise. Each subject underwent three sessions of the exercise protocol: without music, with vocal music, and with an instrumental version of the same piece of music. The protocol consisted of 10 min of treadmill exercise at 70% HR(max) and 20 min of recovery. Minute-to-minute heart rate, breath-by-breath respiratory parameters, rate of energy expenditure and perceived exertion levels were measured. Music, irrespective of the presence or absence of lyrics, enabled the subjects to exercise at a significantly lower heart rate and oxygen consumption, and reduced the metabolic cost and perceived exertion levels of exercise (P < 0.05). Music, having a relaxant effect, could probably have increased parasympathetic activation, leading to these effects.
Cho, J; Overton, T R; Schwab, C G; Tauer, L W
2007-10-01
The profitability of feeding rumen-protected Met (RPMet) sources to produce milk protein was estimated using a 2-step procedure: First, the effect of Met in metabolizable protein (MP) on milk protein production was estimated by using a quadratic Box-Cox functional form. Then, using these estimation results, the amounts of RPMet supplement that corresponded to the optimal levels of Met in MP for maximizing milk protein production and profit on dairy farms were determined. The data used in this study were modified from data used to determine the optimal level of Met in MP for lactating cows in the Nutrient Requirements of Dairy Cattle (NRC, 2001). The data used in this study differ from that in the NRC (2001) data in 2 ways. First, because dairy feed generally contains 1.80 to 1.90% Met in MP, this study adjusts the reference production value (RPV) from 2.06 to 1.80 or 1.90%. Consequently, the milk protein production response is also modified to an RPV of 1.80 or 1.90% Met in MP. Second, because this study is especially interested in how much additional Met, beyond the 1.80 or 1.90% already contained in the basal diet, is required to maximize farm profits, the data used are limited to concentrations of Met in MP above 1.80 or 1.90%. This allowed us to calculate any additional cost to farmers based solely on the price of an RPMet supplement and eliminated the need to estimate the dollar value of each gram of Met already contained in the basal diet. Results indicated that the optimal level of Met in MP for maximizing milk protein production was 2.40 and 2.42%, where the RPV was 1.80 and 1.90%, respectively. These optimal levels were almost identical to the recommended level of Met in MP of 2.40% in the NRC (2001). The amounts of RPMet required to increase the percentage of Met in MP from each RPV to 2.40 and 2.42% were 21.6 and 18.5 g/d, respectively. On the other hand, the optimal levels of Met in MP for maximizing profit were 2.32 and 2.34%, respectively. The amounts
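The two-step logic of the abstract, fit a concave milk-protein response to Met in MP and then locate the production-maximizing and profit-maximizing levels, reduces to simple calculus for a plain quadratic. A sketch with hypothetical coefficients chosen only to reproduce the shape of the reported numbers (the paper used a quadratic Box-Cox form, not this quadratic, and these prices are invented):

```python
def optimal_met(beta1, beta2, price_protein, cost_met):
    """For a response protein(m) = beta0 + beta1*m + beta2*m**2 with beta2 < 0:
    the production maximum solves beta1 + 2*beta2*m = 0, while the profit
    maximum solves price_protein*(beta1 + 2*beta2*m) - cost_met = 0."""
    m_production = -beta1 / (2 * beta2)
    m_profit = (cost_met / price_protein - beta1) / (2 * beta2)
    return m_production, m_profit

# Hypothetical coefficients and prices (illustrative, not estimated from the data)
m_prod, m_prof = optimal_met(beta1=4.8, beta2=-1.0, price_protein=1.0, cost_met=0.16)
print(m_prod, m_prof)
```

Because the supplement is costly, the profit-maximizing level always sits below the production-maximizing level, matching the abstract's pattern of 2.32-2.34% versus 2.40-2.42% Met in MP.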
Balsalobre-Fernández, Carlos; Tejero-González, Carlos Mª; del Campo-Vecino, Juan; Alonso-Curiel, Dionisio
2013-01-01
The aim of this study was to determine the effects of a power training cycle on maximum strength, maximum power, vertical jump height and acceleration in seven high-level 400-meter hurdlers subjected to a specific training program twice a week for 10 weeks. Each training session consisted of five sets of eight jump-squats with the load at which each athlete produced his maximum power. The repetition maximum in the half squat position (RM), maximum power in the jump-squat (W), a squat jump (SJ), countermovement jump (CSJ), and a 30-meter sprint from a standing position were measured before and after the training program using an accelerometer, an infra-red platform and photo-cells. The results indicated the following statistically significant improvements: a 7.9% increase in RM (Z=−2.03, p=0.021, δc=0.39), a 2.3% improvement in SJ (Z=−1.69, p=0.045, δc=0.29), a 1.43% decrease in the 30-meter sprint (Z=−1.70, p=0.044, δc=0.12), and, where maximum power was produced, a change in the RM percentage from 56 to 62% (Z=−1.75, p=0.039, δc=0.54). As such, it can be concluded that strength training with a maximum power load is an effective means of increasing strength and acceleration in high-level hurdlers. PMID:23717361
Germino, Mary; Gallezot, Jean-Dominque; Yan, Jianhua; Carson, Richard E.
2017-07-01
Parametric images for dynamic positron emission tomography (PET) are typically generated by an indirect method, i.e. reconstructing a time series of emission images and then fitting a kinetic model to each voxel time activity curve. Alternatively, ‘direct reconstruction’ incorporates the kinetic model into the reconstruction algorithm itself, directly producing parametric images from projection data. Direct reconstruction has been shown to achieve parametric images with lower standard error than the indirect method. Here, we present direct reconstruction for brain PET using event-by-event motion correction of list-mode data, applied to two tracers. Event-by-event motion correction was implemented for direct reconstruction in the Parametric Motion-compensation OSEM List-mode Algorithm for Resolution-recovery reconstruction. The direct implementation was tested on simulated and human datasets with tracers [11C]AFM (serotonin transporter) and [11C]UCB-J (synaptic density), which follow the 1-tissue compartment model. Rigid head motion was tracked with the Vicra system. Parametric images of K1 and distribution volume (VT = K1/k2) were compared to those generated by the indirect method by regional coefficient of variation (CoV). Performance across count levels was assessed using sub-sampled datasets. For simulated and real datasets at high counts, the two methods estimated K1 and VT with comparable accuracy. At lower count levels, the direct method was substantially more robust to outliers than the indirect method. Compared to the indirect method, direct reconstruction reduced regional K1 CoV by 35-48% (simulated dataset), 39-43% ([11C]AFM dataset) and 30-36% ([11C]UCB-J dataset) across count levels (averaged over regions at matched iteration); VT CoV was reduced by 51-58%, 54-60% and 30-46%, respectively. Motion correction played an important role in the dataset with larger motion: correction increased regional VT by 51% on average in the [11C
Social group utility maximization
Gong, Xiaowen; Yang, Lei; Zhang, Junshan
2014-01-01
This SpringerBrief explains how to leverage mobile users' social relationships to improve the interactions of mobile devices in mobile networks. It develops a social group utility maximization (SGUM) framework that captures diverse social ties of mobile users and diverse physical coupling of mobile devices. Key topics include random access control, power control, spectrum access, and location privacy.This brief also investigates SGUM-based power control game and random access control game, for which it establishes the socially-aware Nash equilibrium (SNE). It then examines the critical SGUM-b
Entropy and transverse section reconstruction
International Nuclear Information System (INIS)
Gullberg, G.T.
1976-01-01
A new approach to the reconstruction of a transverse section using projection data from multiple views incorporates the concept of maximum entropy. The principle of maximizing information entropy embodies the assurance of minimizing bias or prejudice in the reconstruction. Maximum entropy is a necessary condition for the reconstructed image. This entropy criterion is most appropriate for 3-D reconstruction of objects from projections where the system is underdetermined or the data are limited statistically. This is the case in nuclear medicine, where time limitations in patient studies do not yield statistically sufficient projections
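A minimal concrete instance of entropy-maximizing reconstruction from projections is iterative proportional fitting over row and column sums, a simple relative of multiplicative ART: among all non-negative images consistent with the two projections, the fixed point reached from a uniform starting image is the maximum-entropy one. A toy sketch (a 2x2 "section" with two orthogonal views, purely illustrative):

```python
def maxent_from_projections(rows, cols, iters=50):
    """Maximum-entropy image consistent with row/column projections, via
    iterative proportional fitting starting from a uniform image."""
    n, m = len(rows), len(cols)
    img = [[1.0] * m for _ in range(n)]
    for _ in range(iters):
        for i in range(n):  # rescale each row to match its projection
            s = sum(img[i])
            img[i] = [v * rows[i] / s for v in img[i]]
        for j in range(m):  # rescale each column to match its projection
            s = sum(img[i][j] for i in range(n))
            for i in range(n):
                img[i][j] *= cols[j] / s
    return img

# Two orthogonal views of an unknown 2x2 section
img = maxent_from_projections(rows=[3.0, 1.0], cols=[2.0, 2.0])
```

With these marginals the iteration converges to the outer-product solution [[1.5, 1.5], [0.5, 0.5]], which spreads intensity as evenly (i.e. with as little prejudice) as the projection constraints allow.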
Three-dimensional dictionary-learning reconstruction of (23)Na MRI data.
Behl, Nicolas G R; Gnahm, Christine; Bachert, Peter; Ladd, Mark E; Nagel, Armin M
2016-04-01
To reduce noise and artifacts in (23)Na MRI with a Compressed Sensing reconstruction and a learned dictionary as sparsifying transform. A three-dimensional dictionary-learning compressed sensing reconstruction algorithm (3D-DLCS) for the reconstruction of undersampled 3D radial (23)Na data is presented. The dictionary used as the sparsifying transform is learned with a K-singular-value-decomposition (K-SVD) algorithm. The reconstruction parameters are optimized on simulated data, and the quality of the reconstructions is assessed with peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). The performance of the algorithm is evaluated in phantom and in vivo (23)Na MRI data of seven volunteers and compared with nonuniform fast Fourier transform (NUFFT) and other Compressed Sensing reconstructions. The reconstructions of simulated data have maximal PSNR and SSIM for an undersampling factor (USF) of 10 with numbers of averages equal to the USF. For 10-fold undersampling, the PSNR is increased by 5.1 dB compared with the NUFFT reconstruction, and the SSIM by 24%. These results are confirmed by phantom and in vivo (23)Na measurements in the volunteers that show markedly reduced noise and undersampling artifacts in the case of 3D-DLCS reconstructions. The 3D-DLCS algorithm enables precise reconstruction of undersampled (23)Na MRI data with markedly reduced noise and artifact levels compared with NUFFT reconstruction. Small structures are well preserved. © 2015 Wiley Periodicals, Inc.
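The role of the learned dictionary in such a Compressed Sensing reconstruction is to make image patches sparse over the dictionary's atoms. A tiny matching-pursuit sketch of that sparse-coding step (orthonormal toy atoms and a made-up signal; not a K-SVD-learned dictionary and not the 3D-DLCS algorithm itself):

```python
def matching_pursuit(signal, dictionary, k):
    """Greedy k-step sparse approximation of `signal` over unit-norm atoms,
    the sparsifying role a learned dictionary plays in CS reconstruction."""
    residual = list(signal)
    coeffs = {}
    for _ in range(k):
        # pick the atom most correlated with the current residual
        best = max(range(len(dictionary)),
                   key=lambda a: abs(sum(r * d for r, d
                                         in zip(residual, dictionary[a]))))
        c = sum(r * d for r, d in zip(residual, dictionary[best]))
        coeffs[best] = coeffs.get(best, 0.0) + c
        residual = [r - c * d for r, d in zip(residual, dictionary[best])]
    return coeffs, residual

# Unit-norm toy atoms; the signal is exactly 2*atom0 + 1*atom2
atoms = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
sig = [2.0, 0.0, 1.0]
coeffs, res = matching_pursuit(sig, atoms, 2)
```

In a dictionary-learning CS reconstruction such as 3D-DLCS, an analogous sparse code is computed over atoms learned from the data (here with K-SVD), and the algorithm typically alternates between this sparse approximation and enforcing consistency with the measured k-space data.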
International Nuclear Information System (INIS)
Russo, James K; Armeson, Kent E; Rhome, Ryan; Spanos, Michele; Harper, Jennifer L
2011-01-01
To define the dosimetric coverage of level I/II axillary volumes and the lung volume irradiated in postmastectomy radiotherapy (PMRT) following tissue expander placement. Twenty-three patients were identified who had undergone postmastectomy radiotherapy with tangent-only fields. All patients had pre-radiation tissue expander placement and expansion. Thirteen patients had bilateral expander reconstruction. The level I/II axillary volumes were contoured using the RTOG contouring atlas. The patient-specific variables of expander volume, superior-to-inferior location of expander, distance between expanders, expander angle and axillary volume were analyzed to determine their relationship to the axillary volume and lung volume dose. The mean coverage of the level I/II axillary volume by the 95% isodose line (VD95%) was 23.9% (range 0.3-65.4%). The mean ipsilateral lung VD50% was 8.8% (range 2.2-20.9%). Ipsilateral and contralateral expander volumes correlated with axillary VD95% in patients with bilateral reconstruction (p = 0.01 and 0.006, respectively) but not in those with ipsilateral-only reconstruction (p = 0.60). Ipsilateral lung VD50% correlated with the angle of the expander from midline (p = 0.05). In patients undergoing PMRT with tissue expanders, incidental doses delivered by tangents to the axilla, as defined by the RTOG contouring atlas, do not provide adequate coverage. The posterior-superior region of levels I and II is the region most commonly underdosed. Axillary volume coverage increased with increasing expander volumes in patients with bilateral reconstruction. Lung dose increased with increasing expander angle from midline. This information should be considered both when placing expanders and when designing PMRT tangent-only treatment plans by contouring and targeting the axilla volume when axillary treatment is indicated
DEFF Research Database (Denmark)
Bendiksen, Mads; Ahler, Thomas; Clausen, Helle
2013-01-01
We evaluated a sub-maximal and maximal version of the Yo-Yo IR1 children's test (YYIR1C) and the Andersen test for fitness and maximal HR assessments of children aged 6-10. Two repetitions of the YYIR1C and Andersen tests were carried out within one week by 6-7 and 8-9 year olds (grade 0...
International Nuclear Information System (INIS)
Pruthi, Ankur; Choudhury, Partha Sarathi; Gupta, Manoj; Taywade, Sameer
2015-01-01
The aim was to determine whether the intensity of incidental diffuse thyroid gland uptake on F-18 fluorodeoxyglucose (F-18 FDG) positron emission tomography/computed tomography (PET/CT) scans predicts the severity of hypothyroidism. A retrospective analysis was done of 3868 patients who underwent F-18 FDG PET/CT scans between October 2012 and June 2013 in our institution for various oncological indications. Of these, 106 (2.7%) patients (79 females, 27 males) presented with bilateral diffuse thyroid gland uptake as an incidental finding. These patients were investigated retrospectively, and various parameters such as age, sex, primary cancer site, maximal standardized uptake value (SUVmax), results of thyroid function tests (TFTs) and fine-needle aspiration cytology results were noted. The SUVmax values were correlated with serum thyroid stimulating hormone (S. TSH) levels using Pearson's correlation analysis. Clinical information and TFT (serum FT3, FT4 and TSH levels) results were available for 31 of the 106 patients (27 females, 4 males; mean age 51.5 years). Twenty-six of the 31 patients (84%) had abnormal TFTs, with abnormal TSH levels in 24/31 patients (mean S. TSH: 22.35 μIU/ml, median: 7.37 μIU/ml, range: 0.074-211 μIU/ml). Among the 7 patients with normal TSH levels, 2 patients demonstrated low FT3 and FT4 levels. No significant correlation was found between maximum standardized uptake value and TSH levels (r = 0.115, P > 0.05). Incidentally detected diffuse thyroid gland uptake on F-18 FDG PET/CT scan was usually associated with hypothyroidism, probably caused by autoimmune thyroiditis. Patients should be investigated promptly, irrespective of the intensity of FDG uptake, with TFTs to initiate replacement therapy and a USG examination to look for any suspicious nodules
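The statistical step in this abstract is a plain Pearson product-moment correlation between SUVmax and TSH. A self-contained sketch with made-up example pairs (not the study's data):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient between two samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Illustrative SUVmax / TSH pairs (invented, not the 31-patient dataset)
suvmax = [3.1, 4.2, 2.8, 5.0]
tsh = [7.4, 2.1, 22.0, 5.5]
r = pearson_r(suvmax, tsh)
```

An |r| as small as the reported 0.115 with P > 0.05 indicates essentially no linear relationship between uptake intensity and TSH level, which is the abstract's conclusion.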
Maximal Bell's inequality violation for non-maximal entanglement
International Nuclear Information System (INIS)
Kobayashi, M.; Khanna, F.; Mann, A.; Revzen, M.; Santana, A.
2004-01-01
Bell's inequality violation (BIQV) for correlations of polarization is studied for a product state of two two-mode squeezed vacuum (TMSV) states. The violation allowed is shown to attain its maximal limit for all values of the squeezing parameter, ζ. We show via an explicit example that a state whose entanglement is not maximal allows maximal BIQV. The Wigner function of the state is non-negative, and the average value of either polarization is nil
Stovall, A. E.; Shugart, H. H., Jr.
2017-12-01
Future NASA and ESA satellite missions plan to better quantify global carbon through detailed observations of forest structure, but ultimately rely on uncertain ground measurement approaches for calibration and validation. A significant amount of the uncertainty in estimating plot-level biomass can be attributed to inadequate and unrepresentative allometric relationships used to convert plot-level tree measurements to estimates of aboveground biomass. These allometric equations are known to have high errors and biases, particularly in carbon rich forests, because they were calibrated with small and often biased samples of destructively harvested trees. To overcome this issue, a non-destructive methodology for estimating tree and plot-level biomass has been proposed through the use of Terrestrial Laser Scanning (TLS). We investigated the potential for using TLS as a ground validation approach in LiDAR-based biomass mapping through virtual plot-level tree volume reconstruction and biomass estimation. Plot-level biomass estimates were compared on the Virginia-based Smithsonian Conservation Biology Institute's SIGEO forest with full 3D reconstruction, TLS allometry, and Jenkins et al. (2003) allometry. On average, full 3D reconstruction ultimately provided the lowest uncertainty estimate of plot-level biomass (9.6%), followed by TLS allometry (16.9%) and the national equations (20.2%). TLS offered modest improvements to the airborne LiDAR empirical models, reducing RMSE from 16.2% to 14%. Our findings suggest TLS plot acquisitions and non-destructive allometry can play a vital role in reducing uncertainty in calibration and validation data for biomass mapping in the upcoming NASA and ESA missions.
Maximally Symmetric Composite Higgs Models.
Csáki, Csaba; Ma, Teng; Shu, Jing
2017-09-29
Maximal symmetry is a novel tool for composite pseudo Goldstone boson Higgs models: it is a remnant of an enhanced global symmetry of the composite fermion sector involving a twisting with the Higgs field. Maximal symmetry has far-reaching consequences: it ensures that the Higgs potential is finite and fully calculable, and also minimizes the tuning. We present a detailed analysis of the maximally symmetric SO(5)/SO(4) model and comment on its observational consequences.
International Nuclear Information System (INIS)
Larsson, Joel; Båth, Magnus; Thilander-Klang, Anne; Ledenius, Kerstin; Caisander, Håkan
2016-01-01
The purpose of this study was to investigate the effect of different combinations of convolution kernel and the level of Adaptive Statistical iterative Reconstruction (ASiR™) on diagnostic image quality as well as visualisation of anatomical structures in paediatric abdominal computed tomography (CT) examinations. Thirty-five paediatric patients with abdominal pain with non-specified pathology undergoing abdominal CT were included in the study. Transaxial stacks of 5-mm-thick images were retrospectively reconstructed at various ASiR levels, in combination with three convolution kernels. Four paediatric radiologists rated the diagnostic image quality and the delineation of six anatomical structures in a blinded randomised visual grading study. Image quality at a given ASiR level was found to be dependent on the kernel, and a more edge-enhancing kernel benefited from a higher ASiR level. An ASiR level of 70 % together with the Soft™ or Standard™ kernel was suggested to be the optimal combination for paediatric abdominal CT examinations. (authors)
Principles of maximally classical and maximally realistic quantum ...
Indian Academy of Sciences (India)
Principles of maximally classical and maximally realistic quantum mechanics. S M ROY. Tata Institute of Fundamental Research, Homi Bhabha Road, Mumbai 400 005, India. Abstract. Recently Auberson, Mahoux, Roy and Singh have proved a long standing conjecture of Roy and Singh: In 2N-dimensional phase space, ...
International Nuclear Information System (INIS)
Guan, Huaiqun; Zhu, Yunping
1998-01-01
Although electronic portal imaging devices (EPIDs) are efficient tools for radiation therapy verification, they only provide images of overlapped anatomic structures. We investigated using a fluorescent screen/CCD-based EPID, coupled with a novel multi-level scheme algebraic reconstruction technique (MLS-ART), for a feasibility study of portal computed tomography (CT) reconstructions. The CT images might be useful for radiation treatment planning and verification. We used an EPID, set it to work in the linear dynamic range, and collimated 6 MV photons from a linear accelerator into a slit beam 1 cm wide and 25 cm long. We performed scans under a total of ∼200 monitor units (MUs) for several phantoms, in which we varied the number of projections and MUs per projection. The reconstructed images demonstrated that using the new MLS-ART technique, megavoltage portal CT with a total of 200 MUs can achieve a contrast detectability of ∼2.5% (object size 5 mm × 5 mm) and a spatial resolution of 2.5 mm. (author)
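For orientation, the single-level additive ART (Kaczmarz) update that MLS-ART builds on can be sketched on a toy system; the multi-level projection-ordering scheme of the paper is omitted, and the geometry below is illustrative only:

```python
import numpy as np

def art_reconstruct(A, b, n_sweeps=200, relax=1.0):
    """Plain additive ART (Kaczmarz): cycle over the projection rays,
    correcting the image estimate along one ray at a time. MLS-ART
    additionally orders the rays in a multi-level scheme; that
    refinement is not shown here."""
    x = np.zeros(A.shape[1])
    row_norms = (A ** 2).sum(axis=1)
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            if row_norms[i] == 0.0:
                continue
            x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x

# Toy geometry: a flattened 2x2 image measured along rows and columns.
phantom = np.array([1.0, 0.0, 0.0, 2.0])
A = np.array([[1.0, 1.0, 0.0, 0.0],   # row sums
              [0.0, 0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0, 0.0],   # column sums
              [0.0, 1.0, 0.0, 1.0]])
b = A @ phantom
x = art_reconstruct(A, b)              # fits all four ray measurements
```

Note that with only row and column sums the solution is not unique; Kaczmarz converges to the minimum-norm image consistent with the data.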
International Nuclear Information System (INIS)
Hamann, C.
1975-01-01
A report is given on the state of the research project to convert our whole-body counter with fixed geometry into a scanning type. The object is to develop a process-computer-controlled 'adaptive system'. The self-built scan mechanics are explained, and the advantages and problems of applying stepping motors are discussed. A coordinate control for the stepping motors is presented. As the planned scanner and the process computer form a digitally controlled system, all set-point and actual values as well as the control commands from the process computer must be directly controllable. A CAMAC system was not used for economic reasons; instead, the process periphery was made controllable by building interfaces to and from the computer in-house. As an example, the available multi-channel analyzers were converted to external control. The inexpensive and relatively simple self-built set-up is outlined, and an example is given of how a TELETYPE version is converted into a fast electronic interface. A BUS-MULTIPLEX system was developed which generates all necessary DI/DO interfaces from only one DI and one DO address of the process computer. The essential part of this system is given. (orig./LH) [de
Evaluation of bias and variance in low-count OSEM list mode reconstruction
International Nuclear Information System (INIS)
Jian, Y; Carson, R E; Planeta, B
2015-01-01
Statistical algorithms have been widely used in PET image reconstruction. The maximum likelihood expectation maximization reconstruction has been shown to produce bias in applications where images are reconstructed from a relatively small number of counts. In this study, image bias and variability in low-count OSEM reconstruction are investigated on images reconstructed with the MOLAR (motion-compensation OSEM list-mode algorithm for resolution-recovery reconstruction) platform. A human brain study ([11C]AFM) and a NEMA phantom are used in the simulation and real experiments, respectively, for the HRRT and Biograph mCT. Image reconstructions were repeated with different combinations of subsets and iterations. Regions of interest were defined on low-activity and high-activity regions to evaluate the bias and noise at matched effective iteration numbers (iterations × subsets). Minimal negative biases and no positive biases were found at moderate count levels, and less than 5% negative bias was found using extremely low levels of counts (0.2 M NEC). At any given count level, other factors, such as subset numbers and frame-based scatter correction, may introduce small biases (1–5%) in the reconstructed images. The observed bias was substantially lower than that reported in the literature, perhaps due to the use of the point spread function and/or other implementation methods in MOLAR. (paper)
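For orientation, the ML-EM update underlying OSEM (OSEM with a single subset) can be sketched on a toy noise-free system; this is a generic illustration, not the MOLAR implementation:

```python
import numpy as np

def mlem(A, y, n_iters=500):
    """Classical ML-EM update: x <- x * A^T(y / Ax) / A^T 1.
    Estimates stay non-negative by construction; at very low counts
    this constraint is one source of the bias discussed above."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])       # sensitivity image (A^T 1)
    for _ in range(n_iters):
        proj = np.maximum(A @ x, 1e-12)    # guard against divide-by-zero
        x *= (A.T @ (y / proj)) / sens
    return x

# Toy 3-detector / 2-voxel system with noise-free data.
A = np.array([[0.9, 0.1],
              [0.5, 0.5],
              [0.1, 0.9]])
x_true = np.array([10.0, 2.0])
y = A @ x_true
x_hat = mlem(A, y)   # converges to x_true for consistent data
```

OSEM applies the same update using only a subset of rows of A per sub-iteration, which is why bias and noise are compared at matched effective iterations (iterations × subsets).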
Anatomically-aided PET reconstruction using the kernel method.
Hutchcroft, Will; Wang, Guobao; Chen, Kevin T; Catana, Ciprian; Qi, Jinyi
2016-09-21
This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization algorithm.
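The kernel construction can be sketched as follows: the image is parameterized as x = Kα, with K built from anatomical feature vectors, so ML-EM runs on the coefficients α with effective system matrix AK (which is why the method stays amenable to ordered subsets). The scalar features, Gaussian weights and toy geometry below are illustrative assumptions, not the paper's exact construction:

```python
import numpy as np

def build_kernel(features, sigma=1.0, k_neighbors=2):
    """Row-normalised Gaussian kernel matrix over anatomical feature
    values, sparsified to each voxel's k nearest neighbours. Scalar
    features and this exact construction are simplifying assumptions."""
    n = len(features)
    K = np.zeros((n, n))
    for i in range(n):
        d2 = (features - features[i]) ** 2
        idx = np.argsort(d2)[:k_neighbors + 1]        # self + k nearest
        K[i, idx] = np.exp(-d2[idx] / (2 * sigma ** 2))
    return K / K.sum(axis=1, keepdims=True)

def kernel_mlem(A, K, y, n_iters=500):
    """ML-EM on kernel coefficients: image x = K @ alpha, so the
    effective system matrix is A @ K."""
    AK = A @ K
    alpha = np.ones(K.shape[1])
    sens = AK.T @ np.ones(A.shape[0])
    for _ in range(n_iters):
        proj = np.maximum(AK @ alpha, 1e-12)
        alpha *= (AK.T @ (y / proj)) / sens
    return K @ alpha

# Toy setting: 4 voxels whose anatomy defines two homogeneous regions.
features = np.array([0.0, 0.1, 5.0, 5.1])
K = build_kernel(features)
A = np.eye(4)                     # trivial "scanner" for illustration
x_true = np.array([10.0, 10.0, 2.0, 2.0])
x_hat = kernel_mlem(A, K, A @ x_true)
```

Because the kernel couples anatomically similar voxels, the reconstruction is regularized without any segmentation or edge extraction from the anatomical image.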
den Harder, Annemarie M; Willemink, Martin J; van Hamersvelt, Robbert W; Vonken, Evertjan P A; Schilham, Arnold M R; Lammers, Jan-Willem J; Luijk, Bart; Budde, Ricardo P J; Leiner, Tim; de Jong, Pim A
2016-01-01
The aim of the study was to determine the effects of dose reduction and iterative reconstruction (IR) on pulmonary nodule volumetry. In this prospective study, 25 patients scheduled for follow-up of pulmonary nodules were included. Computed tomography acquisitions were acquired at 4 dose levels with a median of 2.1, 1.2, 0.8, and 0.6 mSv. Data were reconstructed with filtered back projection (FBP), hybrid IR, and model-based IR. Volumetry was performed using semiautomatic software. At the highest dose level, more than 91% (34/37) of the nodules could be segmented, and at the lowest dose level, this was more than 83%. Thirty-three nodules were included for further analysis. Filtered back projection and hybrid IR did not lead to significant differences, whereas model-based IR resulted in lower volume measurements with a maximum difference of -11% compared with FBP at routine dose. Pulmonary nodule volumetry can be accurately performed at a submillisievert dose with both FBP and hybrid IR.
Implications of maximal Jarlskog invariant and maximal CP violation
International Nuclear Information System (INIS)
Rodriguez-Jauregui, E.; Universidad Nacional Autonoma de Mexico
2001-04-01
We argue here why the CP violating phase Φ in the quark mixing matrix is maximal, that is, Φ = 90°. In the Standard Model CP violation is related to the Jarlskog invariant J, which can be obtained from non-commuting Hermitian mass matrices. In this article we derive the conditions to have Hermitian mass matrices which give maximal Jarlskog invariant J and maximal CP violating phase Φ. We find that all squared moduli of the quark mixing elements have a singular point when the CP violation phase Φ takes the value Φ = 90°. This special feature of the Jarlskog invariant J and the quark mixing matrix is a clear and precise indication that the CP violating phase Φ is maximal in order to let nature treat democratically all of the quark mixing matrix moduli. (orig.)
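For context, a standard textbook relation (not specific to this article) connects J to the mixing angles and the phase Φ in the standard parametrization:

```latex
J \;=\; \operatorname{Im}\!\left(V_{us}\,V_{cb}\,V_{ub}^{*}\,V_{cs}^{*}\right)
  \;=\; c_{12}\,c_{23}\,c_{13}^{2}\,s_{12}\,s_{23}\,s_{13}\,\sin\Phi ,
```

so that, for fixed mixing angles, J is maximal exactly when Φ = 90°.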
Adaptive multiresolution method for MAP reconstruction in electron tomography
Energy Technology Data Exchange (ETDEWEB)
Acar, Erman, E-mail: erman.acar@tut.fi [Department of Signal Processing, Tampere University of Technology, P.O. Box 553, FI-33101 Tampere (Finland); BioMediTech, Tampere University of Technology, Biokatu 10, 33520 Tampere (Finland); Peltonen, Sari; Ruotsalainen, Ulla [Department of Signal Processing, Tampere University of Technology, P.O. Box 553, FI-33101 Tampere (Finland); BioMediTech, Tampere University of Technology, Biokatu 10, 33520 Tampere (Finland)
2016-11-15
3D image reconstruction with electron tomography poses problems due to the severely limited range of projection angles and the low signal-to-noise ratio of the acquired projection images. Maximum a posteriori (MAP) reconstruction methods have been successful in compensating for the missing information and suppressing noise with their intrinsic regularization techniques. There are two major problems in MAP reconstruction methods: (1) selection of the regularization parameter that controls the balance between the data fidelity and the prior information, and (2) long computation time. One aim of this study is to provide an adaptive solution to the regularization parameter selection problem without additional knowledge about the imaging environment and the sample. The other aim is to realize the reconstruction using sequences of resolution levels to shorten the computation time. The reconstructions were analyzed in terms of accuracy and computational efficiency using a simulated biological phantom and publicly available experimental datasets of electron tomography. The numerical and visual evaluations of the experiments show that the adaptive multiresolution method can provide more accurate results than weighted back projection (WBP), the simultaneous iterative reconstruction technique (SIRT), and the sequential MAP expectation maximization (sMAPEM) method. The method is superior to sMAPEM also in terms of computation time and usability, since it can reconstruct 3D images significantly faster without requiring any parameter to be set by the user. - Highlights: • An adaptive multiresolution reconstruction method is introduced for electron tomography. • The method provides more accurate results than the conventional reconstruction methods. • The missing wedge and noise problems can be compensated by the method efficiently.
Maximal quantum Fisher information matrix
International Nuclear Information System (INIS)
Chen, Yu; Yuan, Haidong
2017-01-01
We study the existence of the maximal quantum Fisher information matrix in the multi-parameter quantum estimation, which bounds the ultimate precision limit. We show that when the maximal quantum Fisher information matrix exists, it can be directly obtained from the underlying dynamics. Examples are then provided to demonstrate the usefulness of the maximal quantum Fisher information matrix by deriving various trade-off relations in multi-parameter quantum estimation and obtaining the bounds for the scalings of the precision limit. (paper)
Maximizing petrochemicals from refineries
Energy Technology Data Exchange (ETDEWEB)
Glover, B.; Foley, T.; Frey, S. [UOP, Des Plaines, IL (United States)
2007-07-01
New fuel quality requirements and high growth rates for petrochemicals are providing both challenges and opportunities for refineries. A key challenge in refineries today is to improve the value of the products from the FCC unit. In particular, light FCC naphtha and LCO are prime candidates for improved utilization. Processing options have been developed focusing on new opportunities for these traditional fuel components. The Total Petrochemicals/UOP Olefin Cracking Process cracks C4-C8 olefins to produce propylene and ethylene. This process can be integrated into FCC units running at all severity levels to produce valuable light olefins while reducing the olefin content of the light FCC naphtha. Integration of the Olefin Cracking Process with an FCC unit can be accomplished to allow a range of operating modes which can accommodate changing demand for propylene, cracked naphtha and alkylate. Other processes developed by UOP allow for upgrading LCO into a range of products including petrochemical grade xylenes, benzene, high cetane diesel and low sulfur high octane gasoline. Various processing options are available which allow the products from LCO conversion to be adjusted based on the needs and opportunities of an individual refinery, as well as the external petrochemical demand cycles. This presentation will examine recent refining and petrochemical trends and highlight new process technologies that can be used to generate additional revenue from petrochemical production while addressing evolving clean fuel demands. (orig.)
Lange, L. H.
1974-01-01
Five different methods for determining the maximizing condition for x(a - x) are presented. Included is the ancient Greek version and a method attributed to Fermat. None of the proofs use calculus. (LS)
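One of the calculus-free arguments can be sketched by completing the square:

```latex
x(a - x) \;=\; \frac{a^{2}}{4} - \left(x - \frac{a}{2}\right)^{2} \;\le\; \frac{a^{2}}{4},
```

with equality precisely at x = a/2, which gives the maximizing condition.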
Finding Maximal Quasiperiodicities in Strings
DEFF Research Database (Denmark)
Brodal, Gerth Stølting; Pedersen, Christian N. S.
2000-01-01
Apostolico and Ehrenfeucht defined the notion of a maximal quasiperiodic substring and gave an algorithm that finds all maximal quasiperiodic substrings in a string of length n in time O(n log² n). In this paper we give an algorithm that finds all maximal quasiperiodic substrings in a string of length n in time O(n log n) and space O(n). Our algorithm uses the suffix tree as the fundamental data structure combined with efficient methods for merging and performing multiple searches in search trees. Besides finding all maximal quasiperiodic substrings, our algorithm also marks the nodes in the suffix tree that have a superprimitive path-label.
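To make the object of study concrete: a string is quasiperiodic when some shorter cover (quasiperiod) spans it through overlapping occurrences. A brute-force check, far slower than the paper's O(n log n) suffix-tree algorithm:

```python
def is_cover(w, s):
    """True if the occurrences of w in s cover every position of s,
    i.e. w is a cover (quasiperiod) of s."""
    covered = [False] * len(s)
    start = s.find(w)
    while start != -1:
        for i in range(start, start + len(w)):
            covered[i] = True
        start = s.find(w, start + 1)
    return len(s) > 0 and all(covered)

def shortest_cover(s):
    """Smallest cover of s. Any cover must occur at position 0, so only
    prefixes need checking; this is quadratic brute force."""
    for length in range(1, len(s)):
        if is_cover(s[:length], s):
            return s[:length]
    return s

# "aba" covers "abaababaaba": its occurrences at positions 0, 3, 5 and 8
# overlap to span the whole string, so the string is quasiperiodic.
```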
Zaytseva, Olga O; Bogdanova, Vera S; Kosterin, Oleg E
2012-08-10
A phylogenetic analysis of the genus Pisum (peas), embracing diverse wild and cultivated forms which pose problems for species delimitation, was carried out based on a gene coding for histone H1, a protein that has a long and variable functional C-terminal domain. Phylogenetic trees were reconstructed on the basis of the coding sequence of the gene His5 of H1 subtype 5 in 65 pea accessions. Early separation of a clear-cut wild species, Pisum fulvum, is well supported, while the cultivated species Pisum abyssinicum appears as a small branch within Pisum sativum. Another robust branch within P. sativum includes some wild and almost all cultivated representatives of P. sativum. Other wild representatives form diverse but rather subtle branches. In a subset of accessions, the PsbA-trnH chloroplast intergenic spacer was also analysed and found to be less informative than His5. A number of accessions of cultivated peas from remote regions have a His5 allele of identical sequence, encoding an electrophoretically slow protein product, which earlier attracted attention as likely positively selected in harsh climate conditions. In PsbA-trnH, an 8 bp deletion was found, which marks cultivated representatives of P. sativum. Copyright © 2012 Elsevier B.V. All rights reserved.
Salvio, Alberto; Strumia, Alessandro; Urbano, Alfredo
2016-01-01
Motivated by the 750 GeV diphoton excess found at LHC, we compute the maximal width into $\gamma\gamma$ that a neutral scalar can acquire through a loop of charged fermions or scalars as a function of the maximal scale at which the theory holds, taking into account vacuum (meta)stability bounds. We show how an extra gauge symmetry can qualitatively weaken such bounds, and explore collider probes and connections with Dark Matter.
Koller, Heiko; Schmidt, Rene; Mayer, Michael; Hitzl, Wolfgang; Zenner, Juliane; Midderhoff, Stefan; Middendorf, Stefan; Graf, Nicolaus; Gräf, Nicolaus; Resch, H; Wilke, Hans-Joachim; Willke, Hans-Joachim
2010-12-01
Clinical studies reported frequent failure with anterior instrumented multilevel cervical corpectomies. Hence, posterior augmentation was recommended but necessitates a second approach. Thus, an author group evaluated the feasibility, pull-out characteristics, and accuracy of anterior transpedicular screw (ATPS) fixation. Although first success with clinical application of ATPS has already been reported, no data exist on biomechanical characteristics of an ATPS-plate system enabling transpedicular end-level fixation in advanced instabilities. Therefore, we evaluated biomechanical qualities of an ATPS prototype C4-C7 for reduction of range of motion (ROM) and primary stability in a non-destructive setup among five constructs: anterior plate, posterior all-lateral mass screw construct, posterior construct with lateral mass screws C5 + C6 and end-level fixation using pedicle screws unilaterally or bilaterally, and a 360° construct. 12 human spines C3-T1 were divided into two groups. Four constructs were tested in group 1 and three in group 2; the ATPS prototypes were tested in both groups. Specimens were subjected to flexibility test in a spine motion tester at intact state and after 2-level corpectomy C5-C6 with subsequent reconstruction using a distractable cage and one of the osteosynthesis mentioned above. ROM in flexion-extension, axial rotation, and lateral bending was reported as normalized values. All instrumentations but the anterior plate showed significant reduction of ROM for all directions compared to the intact state. The 360° construct outperformed all others in terms of reducing ROM. While there were no significant differences between the 360° and posterior constructs in flexion-extension and lateral bending, the 360° constructs were significantly more stable in axial rotation. Concerning primary stability of ATPS prototypes, there were no significant differences compared to posterior-only constructs in flexion-extension and axial rotation. The
International Nuclear Information System (INIS)
Burmistrov, V.R.
1979-01-01
The principle and program of introduction of data on γ-γ coincidences into the computer program are described. By analogy with the principle of accounting for γ-line intensities while constructing a system of levels according to the reference levels and the γ-line spectrum, the "leaving" γ-transitions are introduced as an artificial level parameter. This parameter is a list of γ-lines leaving the given level or the lower levels bound with it. As a result of introducing such parameters, accounting for the data on γ-γ coincidences amounts to comparing two tables of numbers: a table of γ-line coincidences (an experimental one) and a table of the "leaving" γ-transitions of every level. The program arranges the γ-lines in the preset system of levels with regard to the γ-line energies, their intensities and the data on γ-γ coincidences, and excludes false levels from consideration. The calculation results are printed out in tables [ru
Directory of Open Access Journals (Sweden)
A. Garmroodi Asil
2017-09-01
To further reduce the sulfur dioxide emission of the entire refining process, two scenarios of acid gas or air preheat are investigated, with either of them used simultaneously with the third enrichment scheme. The maximum overall sulfur recovery efficiency and the highest combustion chamber temperature are slightly higher for acid gas preheat, but air preheat is more favorable because it is more benign. To the best of our knowledge, optimization of the entire GTU + enrichment section and SRU processes has not been addressed previously.
National Research Council Canada - National Science Library
Dombrowski, Michael P
2006-01-01
...'s attempts at winning the peace. Recent operations in Afghanistan and Iraq illustrate the lack of interagency unity of effort at the operational level and cry out for an enduring solution similar to Goldwater-Nichols...
Directory of Open Access Journals (Sweden)
João P. Rosado
2012-08-01
OBJECTIVES: The aim of the present study was to perform a stereological and biochemical analysis of the foreskin of smoker subjects. MATERIALS AND METHODS: Foreskin samples were obtained from 20 young adults (mean = 27.2 years old) submitted to circumcision. Of the patients analyzed, one group (n = 10) had a previous history of chronic smoking (a half pack to 3 packs per day) for 3 to 13 years (mean = 5.8 ± 3.2). The control group included 10 nonsmoking patients. Masson's trichrome stain was used to quantify the foreskin vascular density. Weigert's resorcin-fuchsin stain was used to assess the elastic system fibers, and Picrosirius red stain was applied to study the collagen. Stereological analysis was performed using the Image J software to determine the volumetric densities. For biochemical analysis, the total collagen was determined as µg of hydroxyproline per mg of dry tissue. Means were compared using the unpaired t-test (p < 0.05). RESULTS: The density of elastic system fibers in smokers was 42.5% higher than in the control group (p = 0.002). In contrast, smooth muscle fibers (p = 0.42) and vascular density (p = 0.16) did not show any significant variation. Qualitative analysis using Picrosirius red stain with polarized light evidenced the presence of type I and III collagen in the foreskin tissue, without significant difference between the groups. Total collagen concentration also did not differ significantly between smokers and non-smokers (73.1 µg/mg ± 8.0 vs. 69.2 µg/mg ± 5.9, respectively, p = 0.23). CONCLUSIONS: The foreskin tissue of smoking patients had a significant increase of elastic system fibers. Elastic fibers play an important role in this tissue's turnover, and this high concentration in smokers possibly causes high extensibility of the foreskin. The structural alterations in smokers' foreskins could possibly explain the poor results in smoking patients submitted to foreskin fasciocutaneous flaps in urethral reconstruction surgery.
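The unpaired t-test used for the group comparisons reduces to the standard pooled-variance statistic; a stdlib-only sketch, with no study data:

```python
import math

def unpaired_t(sample1, sample2):
    """Two-sample (unpaired) Student t statistic with pooled variance,
    returned together with the degrees of freedom n1 + n2 - 2."""
    n1, n2 = len(sample1), len(sample2)
    m1, m2 = sum(sample1) / n1, sum(sample2) / n2
    v1 = sum((x - m1) ** 2 for x in sample1) / (n1 - 1)   # sample variances
    v2 = sum((x - m2) ** 2 for x in sample2) / (n2 - 1)
    pooled = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
    t = (m1 - m2) / math.sqrt(pooled * (1 / n1 + 1 / n2))
    return t, n1 + n2 - 2
```

The t statistic is then compared against the Student distribution with the returned degrees of freedom to obtain the p-value.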
Rovere, A.; Raymo, M.E.; Vacchi, M.; Lorscheid, T; Stocchi, P.; Gómez-Pujolf, L.; Harris, D.L.; Casella, E.; O'Leary, M.J.; Hearty, P.J.
2016-01-01
The Last Interglacial (MIS 5e, 128–116 ka) is among the most studied past periods in Earth's history. The climate at that time was warmer than today, primarily due to different orbital conditions, with smaller ice sheets and higher sea-level. Field evidence for MIS 5e sea-level was reported from
Maximizing Entropy over Markov Processes
DEFF Research Database (Denmark)
Biondi, Fabrizio; Legay, Axel; Nielsen, Bo Friis
2013-01-01
The channel capacity of a deterministic system with confidential data is an upper bound on the amount of bits of data an attacker can learn from the system. We encode all possible attacks to a system using a probabilistic specification, an Interval Markov Chain. Then the channel capacity...... as a reward function, a polynomial algorithm to verify the existence of a system maximizing entropy among those respecting a specification, a procedure for the maximization of reward functions over Interval Markov Chains and its application to synthesize an implementation maximizing entropy. We show how...... to use Interval Markov Chains to model abstractions of deterministic systems with confidential data, and use the above results to compute their channel capacity. These results are a foundation for ongoing work on computing channel capacity for abstractions of programs derived from code....
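As background for the entropy objective, the entropy rate of a fixed (fully specified) Markov chain is straightforward to compute; maximizing it over an Interval Markov Chain requires the synthesis procedure the abstract describes. A stdlib-only sketch of the fixed-chain case:

```python
import math

def entropy_rate(P, n_power_iters=10000, tol=1e-12):
    """Entropy rate H = sum_i pi_i * H(P[i]) of an ergodic Markov chain
    with transition matrix P, where pi is the stationary distribution,
    found here by power iteration from the uniform distribution."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(n_power_iters):
        new = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
        if max(abs(a - b) for a, b in zip(new, pi)) < tol:
            pi = new
            break
        pi = new
    return sum(-pi[i] * sum(p * math.log2(p) for p in P[i] if p > 0.0)
               for i in range(n))

# A fair-coin chain leaks one bit per step; a deterministic cycle leaks none.
h_fair = entropy_rate([[0.5, 0.5], [0.5, 0.5]])
h_cycle = entropy_rate([[0.0, 1.0], [1.0, 0.0]])
```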
Maximizing entropy over Markov processes
DEFF Research Database (Denmark)
Biondi, Fabrizio; Legay, Axel; Nielsen, Bo Friis
2014-01-01
The channel capacity of a deterministic system with confidential data is an upper bound on the amount of bits of data an attacker can learn from the system. We encode all possible attacks to a system using a probabilistic specification, an Interval Markov Chain. Then the channel capacity...... as a reward function, a polynomial algorithm to verify the existence of a system maximizing entropy among those respecting a specification, a procedure for the maximization of reward functions over Interval Markov Chains and its application to synthesize an implementation maximizing entropy. We show how...... to use Interval Markov Chains to model abstractions of deterministic systems with confidential data, and use the above results to compute their channel capacity. These results are a foundation for ongoing work on computing channel capacity for abstractions of programs derived from code. © 2014 Elsevier...
DEFF Research Database (Denmark)
Sander, Lasse; Hede, Mikkel Ulfeldt; Fruergaard, Mikkel
2016-01-01
Coastal lagoons and beach ridges are genetically independent, though non-continuous, sedimentary archives. We here combine the results from two recently published studies in order to produce an 8000-year-long record of Holocene relative sea-level changes on the island of Samsø, southern Kattegat,...
Quantitative SPECT reconstruction of iodine-123 data
International Nuclear Information System (INIS)
Gilland, D.R.; Jaszczak, R.J.; Greer, K.L.; Coleman, R.E.
1991-01-01
Many clinical and research studies in nuclear medicine require quantitation of iodine-123 (123I) distribution for the determination of kinetics or localization. The objective of this study was to implement several reconstruction methods designed for single-photon emission computed tomography (SPECT) using 123I and to evaluate their performance in terms of quantitative accuracy, image artifacts, and noise. The methods consisted of four attenuation and scatter compensation schemes incorporated into both the filtered backprojection/Chang (FBP) and maximum likelihood-expectation maximization (ML-EM) reconstruction algorithms. The methods were evaluated on data acquired of a phantom containing a hot sphere of 123I activity in a lower-level background 123I distribution and nonuniform density media. For both reconstruction algorithms, nonuniform attenuation compensation combined with either scatter subtraction or Metz filtering produced images that were quantitatively accurate to within 15% of the true value. The ML-EM algorithm demonstrated quantitative accuracy comparable to FBP and smaller relative noise magnitude for all compensation schemes.
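Of the compensation schemes mentioned, scatter subtraction is the simplest to sketch. The dual-energy-window form shown here, with the conventional k = 0.5 scaling, is an assumption for illustration; the paper's exact scheme is not specified in the abstract:

```python
def scatter_subtract(photopeak, scatter_window, k=0.5):
    """Dual-energy-window scatter subtraction: counts acquired in a
    lower scatter window, scaled by k, are subtracted from the
    photopeak projections, and negatives are clipped to zero.
    k = 0.5 is the conventional dual-window value, assumed here."""
    return [max(p - k * s, 0.0) for p, s in zip(photopeak, scatter_window)]

# Two projection bins: one modest, one scatter-dominated.
corrected = scatter_subtract([10.0, 8.0], [4.0, 20.0])
```

The corrected projections are then fed to FBP/Chang or ML-EM as usual.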
A maximum-likelihood reconstruction algorithm for tomographic gamma-ray nondestructive assay
International Nuclear Information System (INIS)
Prettyman, T.H.; Estep, R.J.; Cole, R.A.; Sheppard, G.A.
1994-01-01
A new tomographic reconstruction algorithm for nondestructive assay with high resolution gamma-ray spectroscopy (HRGS) is presented. The reconstruction problem is formulated using a maximum-likelihood approach in which the statistical structure of both the gross and continuum measurements used to determine the full-energy response in HRGS is precisely modeled. An accelerated expectation-maximization algorithm is used to determine the optimal solution. The algorithm is applied to safeguards and environmental assays of large samples (for example, 55-gal. drums) in which high continuum levels caused by Compton scattering are routinely encountered. Details of the implementation of the algorithm and a comparative study of the algorithm's performance are presented
Pau, Mauro; Reinbacher, Knut Ernst; Feichtinger, Matthias; Navysany, Kawe; Kärcher, Hans
2014-06-01
Panfacial fractures represent a challenge, even for experienced maxillofacial surgeons, because all references for reconstructing the facial skeleton are missing. Logical reconstructive sequencing based on a clear understanding of the correlation between projection and the widths and lengths of facial subunits should enable the surgeon to achieve correct realignment of the bony framework of the face and to prevent late deformity and functional impairment. Reconstruction is particularly challenging in patients presenting with concomitant fractures at the Le Fort I level and affecting the palate, condyles, and mandibular symphysis. In cases without bony loss and sufficient dentition, we believe that accurate fixation of the mandibular symphysis can represent the starting point of a reconstructive sequence that allows successful reconstruction at the Le Fort I level. Two patients were treated in our department by reconstruction starting in the occlusal area through repair of the mandibular symphysis. Both patients considered the postoperative facial shape and profile to be satisfactory and comparable to the pre-injury situation. Copyright © 2013 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
Boomerang flap reconstruction for the breast.
Baumholtz, Michael A; Al-Shunnar, Buthainah M; Dabb, Richard W
2002-07-01
The boomerang-shaped latissimus dorsi musculocutaneous flap for breast reconstruction offers a stable platform for breast reconstruction. It allows for maximal aesthetic results with minimal complications. The authors describe a skin paddle to obtain a larger volume than either the traditional elliptical skin paddle or the extended latissimus flap. There are three specific advantages to the boomerang design: large volume, conical shape (often lacking in the traditional skin paddle), and an acceptable donor scar. Thirty-eight flaps were performed. No reconstruction interfered with patients' ongoing oncological regimens. The most common complication was seroma, which is consistent with other latissimus reconstructions.
Chamaebatiaria millefolium (Torr.) Maxim.: fernbush
Nancy L. Shaw; Emerenciana G. Hurd
2008-01-01
Fernbush - Chamaebatiaria millefolium (Torr.) Maxim. - the only species in its genus, is endemic to the Great Basin, Colorado Plateau, and adjacent areas of the western United States. It is an upright, generally multistemmed, sweetly aromatic shrub 0.3 to 2 m tall. Bark of young branches is brown and becomes smooth and gray with age. Leaves are leathery, alternate,...
Olsson, Anna; Arlig, Asa; Carlsson, Gudrun Alm; Gustafsson, Agnetha
2007-09-01
The image quality of single photon emission computed tomography (SPECT) depends on the reconstruction algorithm used. The purpose of the present study was to evaluate parameters in ordered subset expectation maximization (OSEM) and to compare systematically with filtered back-projection (FBP) for reconstruction of regional cerebral blood flow (rCBF) SPECT, incorporating attenuation and scatter correction. The evaluation was based on the trade-off between contrast recovery and statistical noise using different sizes of subsets, number of iterations and filter parameters. Monte Carlo simulated SPECT studies of a digital human brain phantom were used. The contrast recovery was calculated as measured contrast divided by true contrast. Statistical noise in the reconstructed images was calculated as the coefficient of variation in pixel values. A constant contrast level was reached above 195 equivalent maximum likelihood expectation maximization iterations. The choice of subset size was not crucial as long as there were ≥2 projections per subset. The OSEM reconstruction was found to give 5-14% higher contrast recovery than FBP for all clinically relevant noise levels in rCBF SPECT. The Butterworth filter, power 6, achieved the highest stable contrast recovery level at all clinically relevant noise levels. The cut-off frequency should be chosen according to the noise level accepted in the image. Trade-off plots are shown to be a practical way of deciding the number of iterations and subset size for the OSEM reconstruction and can be used for other examination types in nuclear medicine.
Post-reconstruction full power and shut down level 2 PSA study for Unit 1 of Bohunice V1 NPP
International Nuclear Information System (INIS)
Kovacs, Z.
2003-01-01
The level 2 PSA model of the J. Bohunice V1 NPP was developed in the RISK SPECTRUM Professional code with the following objectives: to identify the ways in which radioactive releases from the plant can occur following core damage; to calculate the magnitudes and frequencies of the releases; to provide insights into plant behaviour during a severe accident; to provide a framework for understanding containment failure modes and the impact of the phenomena that could occur during and following core damage and have the potential to challenge the integrity of the confinement; and to support severe accident management and the development of SAMGs. The magnitudes of the release categories are calculated using the MAAP4/VVER code for reactor operation and shutdown mode with a closed reactor vessel, and the MELCOR code for shutdown mode with an open reactor vessel. In this paper an overview of the level 2 PSA methodology, a description of the confinement, the interface between the level 1 and 2 PSA, and accident progression analyses are presented. An evaluation of the confinement failure modes and construction of the confinement event trees, as well as definition of release categories, source term analysis and sensitivity analyses, are also discussed. The presented results indicate that: 1) for full power operation, there is a 25% probability that the confinement will successfully maintain its integrity and prevent an uncontrolled fission product release; the most likely mode of release from the confinement is a confinement bypass after SGTM, with a conditional probability of 30%; the conditional probability of confinement isolation failure without spray is 5%, of early confinement failure at vessel failure 4%, and of the other categories 1% or less; 2) for the shutdown operating modes, the shutdown risk is high for the open reactor vessel and open confinement; important severe accident sequences exist for release categories RC5.1, RC5.2 and RC6.2.
Butterworth, C J; Rogers, S N
2017-12-01
The aim of this report is to describe the development and evolution of a new surgical technique for immediate surgical reconstruction and rapid post-operative prosthodontic rehabilitation with a fixed dental prosthesis following low-level maxillectomy for malignant disease. The technique involves the use of a zygomatic oncology implant perforated micro-vascular soft tissue flap (ZIP flap) for the primary management of maxillary malignancy, with surgical closure of the resultant maxillary defect and the installation of osseointegrated support for a zygomatic implant-supported maxillary fixed dental prosthesis. The use of this technique facilitates extremely rapid oral and dental rehabilitation within a few weeks of resective surgery, providing rapid return to function and restoring appearance following low-level maxillary resection, even in cases where radiotherapy is required as an adjuvant treatment post-operatively. The ZIP flap technique has been adopted as a standard procedure in the unit for the management of low-level maxillary malignancy, and this report provides a detailed step-by-step approach to treatment and discusses modifications developed over the treatment of an initial cohort of patients.
Titanium template for scaphoid reconstruction.
Haefeli, M; Schaefer, D J; Schumacher, R; Müller-Gerbl, M; Honigmann, P
2015-06-01
Reconstruction of a non-united scaphoid with a humpback deformity involves resection of the non-union followed by bone grafting and fixation of the fragments. Intraoperative control of the reconstruction is difficult owing to the complex three-dimensional shape of the scaphoid and the other carpal bones overlying the scaphoid on lateral radiographs. We developed a titanium template that fits exactly to the surfaces of the proximal and distal scaphoid poles to define their position relative to each other after resection of the non-union. The templates were designed on three-dimensional computed tomography reconstructions and manufactured using selective laser melting technology. Ten conserved human wrists were used to simulate the reconstruction. The achieved precision measured as the deviation of the surface of the reconstructed scaphoid from its virtual counterpart was good in five cases (maximal difference 1.5 mm), moderate in one case (maximal difference 3 mm) and inadequate in four cases (difference more than 3 mm). The main problems were attributed to the template design and can be avoided by improved pre-operative planning, as shown in a clinical case. © The Author(s) 2014.
Directory of Open Access Journals (Sweden)
Yasufumi Iryu
2007-07-01
The timing and course of the last deglaciation (19,000–6,000 years BP) are essential components for understanding the dynamics of large ice sheets (Lindstrom and MacAyeal, 1993) and their effects on Earth's isostasy (Nakada and Lambeck, 1989; Lambeck, 1993; Peltier, 1994), as well as the complex relationship between freshwater fluxes to the ocean, thermohaline circulation, and, hence, global climate during the Late Pleistocene and the Holocene. Moreover, the last deglaciation is generally seen as a possible analogue for the environmental changes and increased sea level that Earth may experience because of the greenhouse effect, related thermal expansion of oceans, and the melting of polar ice sheets.
National Oceanic and Atmospheric Administration, Department of Commerce — The NOAA Paleoclimatology Program archives reconstructions of past climatic conditions derived from paleoclimate proxies, in addition to the Program's large holdings...
Cormie, Prue; McGuigan, Michael R; Newton, Robert U
2011-02-01
This series of reviews focuses on the most important neuromuscular function in many sport performances: the ability to generate maximal muscular power. Part 1, published in an earlier issue of Sports Medicine, focused on the factors that affect maximal power production while part 2 explores the practical application of these findings by reviewing the scientific literature relevant to the development of training programmes that most effectively enhance maximal power production. The ability to generate maximal power during complex motor skills is of paramount importance to successful athletic performance across many sports. A crucial issue faced by scientists and coaches is the development of effective and efficient training programmes that improve maximal power production in dynamic, multi-joint movements. Such training is referred to as 'power training' for the purposes of this review. Although further research is required in order to gain a deeper understanding of the optimal training techniques for maximizing power in complex, sports-specific movements and the precise mechanisms underlying adaptation, several key conclusions can be drawn from this review. First, a fundamental relationship exists between strength and power, which dictates that an individual cannot possess a high level of power without first being relatively strong. Thus, enhancing and maintaining maximal strength is essential when considering the long-term development of power. Second, consideration of movement pattern, load and velocity specificity is essential when designing power training programmes. Ballistic, plyometric and weightlifting exercises can be used effectively as primary exercises within a power training programme that enhances maximal power. The loads applied to these exercises will depend on the specific requirements of each particular sport and the type of movement being trained. The use of ballistic exercises with loads ranging from 0% to 50% of one-repetition maximum (1RM) and
Yaroshetskiy, A I; Protsenko, D N; Boytsov, P V; Chentsov, V B; Nistratov, S L; Kudlyakov, O N; Solov'ev, V V; Banova, Zh I; Shkuratova, N V; Rezenov, N A; Gel'fand, B R
2016-11-01
To determine the optimum level of positive end-expiratory pressure (PEEP) according to the balance between maximal end-expiratory lung volume (EELV) (more than predicted) and minimal decrease in exhaled carbon dioxide volume (VCO2), and then to develop an algorithm of gas exchange correction based on prognostic values of EELV, alveolar recruitability, PaO2/FiO2, static compliance (Cst) and VCO2. 27 mechanically ventilated patients with acute respiratory distress syndrome (ARDS) caused by influenza A (H1N1)pdm09 in Moscow municipal clinics' ICUs from January to March 2016 were included in the trial. At the beginning of the study patients had the following characteristics: duration of flu symptoms 5 (3-10) days, PaO2/FiO2 120 (70-50) mmHg, SOFA 7 (5-9), body mass index 30.1 (26.4-33.8) kg/m², static compliance of the respiratory system 35 (30-40) ml/mbar. Under sedation and paralysis we measured EELV, Cst, VCO2 and end-tidal carbon dioxide concentration (EtCO2) (for CO2 measurements we recorded short-term values 2 min after each PEEP level change) at PEEP 8, 11, 13, 15, 18 and 20 mbar consecutively, and, in case of good recruitability, at 22 and 24 mbar. After analysis of the obtained data we determined the PEEP value at which the increase in EELV was maximal (more than predicted), the depression of VCO2 was less than 20%, and the changes in mean blood pressure and heart rate were both less than 20% (measured at PEEP 8 mbar). We then set this level of PEEP and did not change it for 5 days. Comparison of predicted and measured EELV revealed two typical points of alveolar recruitment: the first at PEEP 11-15 mbar, the second at PEEP 20-22 mbar. EELV measured at PEEP 18 mbar appeared to be higher than predicted at PEEP 8 mbar by approximately 400 ml, which was a sign of alveolar recruitment (1536 (1020-1845) ml vs 1955 (1360-2320) ml, p=0.001, Friedman test). We did not find significant changes in VCO2 when PEEP was increased in the range from 8 to 15 mbar (p>0.05, Friedman test). PEEP increase from 15 to
IMNN: Information Maximizing Neural Networks
Charnock, Tom; Lavaux, Guilhem; Wandelt, Benjamin D.
2018-04-01
This software trains artificial neural networks to find non-linear functionals of data that maximize Fisher information: information maximizing neural networks (IMNNs). Compressing large data sets vastly simplifies both frequentist and Bayesian inference, but important information may be inadvertently missed. Likelihood-free inference based on automatically derived IMNN summaries produces summaries that are good approximations to sufficient statistics. IMNNs are robustly capable of automatically finding optimal, non-linear summaries of the data even in cases where linear compression fails: inferring the variance of a Gaussian signal in the presence of noise, inferring cosmological parameters from mock simulations of the Lyman-α forest in quasar spectra, and inferring frequency-domain parameters from LISA-like detections of gravitational waveforms. In this final case, the IMNN summary outperforms linear data compression by avoiding the introduction of spurious likelihood maxima.
International Nuclear Information System (INIS)
Ferrandis, Javier
2005-01-01
The current experimental determination of the absolute values of the CKM elements indicates that 2|V_ub/(V_cb V_us)| = (1 - z), with z given by z = 0.19 ± 0.14. This fact implies that irrespective of the form of the quark Yukawa matrices, the measured value of the SM CP phase β is approximately the maximum allowed by the measured absolute values of the CKM elements. This is β = (π/6 - z/3) for γ = (π/3 + z/3), which implies α = π/2. Alternatively, assuming that β is exactly maximal and using the experimental measurement sin(2β) = 0.726 ± 0.037, the phase γ is predicted to be γ = (π/2 - β) = 66.3° ± 1.7°. The maximality of β, if confirmed by near-future experiments, may give us some clues as to the origin of CP violation.
Formation Control for the MAXIM Mission
Luquette, Richard J.; Leitner, Jesse; Gendreau, Keith; Sanner, Robert M.
2004-01-01
Over the next twenty years, a wave of change is occurring in the space-based scientific remote sensing community. While the fundamental limits in the spatial and angular resolution achievable in spacecraft have been reached, based on today's technology, an expansive new technology base has appeared over the past decade in the area of Distributed Space Systems (DSS). A key subset of the DSS technology area is that which covers precision formation flying of space vehicles. Through precision formation flying, the baselines, previously defined by the largest monolithic structure which could fit in the largest launch vehicle fairing, are now virtually unlimited. Several missions including the Micro-Arcsecond X-ray Imaging Mission (MAXIM), and the Stellar Imager will drive the formation flying challenges to achieve unprecedented baselines for high resolution, extended-scene, interferometry in the ultraviolet and X-ray regimes. This paper focuses on establishing the feasibility for the formation control of the MAXIM mission. MAXIM formation flying requirements are on the order of microns, while Stellar Imager mission requirements are on the order of nanometers. This paper specifically addresses: (1) high-level science requirements for these missions and how they evolve into engineering requirements; and (2) the development of linearized equations of relative motion for a formation operating in an n-body gravitational field. Linearized equations of motion provide the ground work for linear formation control designs.
International Nuclear Information System (INIS)
Massie, B.M.; Wisneski, J.; Kramer, B.; Hollenberg, M.; Gertz, E.; Stern, D.
1982-01-01
Recently the quantitation of regional 201Tl clearance has been shown to increase the sensitivity of the scintigraphic detection of coronary disease. Although 201Tl clearance rates might be expected to vary with the degree of exercise, this relationship has not been explored. We therefore evaluated the rate of decrease in myocardial 201Tl activity following maximal and submaximal stress in seven normal subjects and 21 patients with chest pain, using the seven-pinhole tomographic reconstruction technique. In normals, the mean 201Tl clearance rate declined from 41% ± 7 over a 3-hr period with maximal exercise to 25% ± 5 after 3 hr at a submaximal level (p less than 0.001). Similar differences in clearance rates were found in the normally perfused regions of the left ventricle in patients with chest pain, depending on whether or not a maximal end point (defined as either the appearance of ischemia or reaching 85% of age-predicted heart rate) was achieved. In five patients who did not reach these end points, 3-hr clearance rates in uninvolved regions averaged 25% ± 2, in contrast to a mean of 38% ± 5 for such regions in 15 patients who exercised to ischemia or an adequate heart rate. These findings indicate that clearance criteria derived from normals can be applied to patients who are stressed maximally, even if the duration of exercise is limited, but that caution must be used in interpreting clearance rates in those who do not exercise to an accepted end point.
Strategy to maximize maintenance operation
Espinoza, Michael
2005-01-01
This project presents a strategic analysis to maximize maintenance operations in Alcan Kitimat Works in British Columbia. The project studies the role of maintenance in improving its overall maintenance performance. It provides strategic alternatives and specific recommendations addressing Kitimat Works key strategic issues and problems. A comprehensive industry and competitive analysis identifies the industry structure and its competitive forces. In the mature aluminium industry, the bargain...
Scalable Nonlinear AUC Maximization Methods
Khalid, Majdi; Ray, Indrakshi; Chitsaz, Hamidreza
2017-01-01
The area under the ROC curve (AUC) is a measure of interest in various machine learning and data mining applications. It has been widely used to evaluate classification performance on heavily imbalanced data. The kernelized AUC maximization machines have established a superior generalization ability compared to linear AUC machines because of their capability in modeling the complex nonlinear structure underlying most real world-data. However, the high training complexity renders the kernelize...
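As a minimal, illustrative sketch (toy scores only; the paper's kernelized AUC machines are not reproduced here), the AUC these methods maximize is the probability that a randomly chosen positive instance outscores a randomly chosen negative one:

```python
# Pairwise (Wilcoxon) form of the AUC; tied pairs count half.
def auc(pos_scores, neg_scores):
    wins = sum((p > n) + 0.5 * (p == n)
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

pos = [0.9, 0.8, 0.35]      # classifier scores of positive examples (toy)
neg = [0.7, 0.3, 0.2, 0.1]  # classifier scores of negative examples (toy)
score = auc(pos, neg)       # 11 of the 12 pairs are ranked correctly
```

Because only the pairwise ordering of scores matters, AUC is insensitive to class imbalance, which is why it is favoured for evaluating classifiers on heavily imbalanced data.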
Rissolo, D.; Reinhardt, E. G.; Collins, S.; Kovacs, S. E.; Beddows, P. A.; Chatters, J. C.; Nava Blank, A.; Luna Erreguerena, P.
2014-12-01
A massive pit deep within the now submerged cave system of Sac Actun, located along the central east coast of the Yucatan Peninsula, contains a diverse fossil assemblage of extinct megafauna as well as a nearly complete human skeleton. The inundated site of Hoyo Negro presents a unique and promising opportunity for interdisciplinary Paleoamerican and paleoenvironmental research in the region. Investigations have thus far revealed a range of associated features and deposits which make possible a multi-proxy approach to identifying and reconstructing the natural and cultural processes that have formed and transformed the site over millennia. Understanding water-level fluctuations (both related to, and independent from, eustatic sea level changes), with respect to cave morphology is central to understanding the movement of humans and animals into and through the cave system. Recent and ongoing studies involve absolute dating of human, faunal, macrobotanical, and geological samples; taphonomic analyses; and a characterization of site hydrogeology and sedimentological facies, including microfossil assemblages and calcite raft deposits.
FLOUTING MAXIMS IN INDONESIA LAWAK KLUB CONVERSATION
Directory of Open Access Journals (Sweden)
Rahmawati Sukmaningrum
2017-04-01
This study aims to identify the types of maxims flouted in conversation in the famous comedy show Indonesia Lawak Club. Likewise, it also tries to reveal the speakers' intentions in flouting the maxims during the show. The writers use a descriptive qualitative method in conducting this research. The data are taken from the dialogue of Indonesia Lawak Club and then analyzed based on Grice's cooperative principles. The researchers read the dialogue transcripts, identify the maxims, and interpret the data to find the speakers' intentions for flouting the maxims in the communication. The results show that four types of maxims are flouted in the dialogue: maxim of quality (23%), maxim of quantity (11%), maxim of manner (31%), and maxim of relevance (35%). Flouting the maxims in the conversations is intended to make the speakers feel uncomfortable with the conversation, show arrogance, show disagreement or agreement, and ridicule other speakers.
International Nuclear Information System (INIS)
Francisco, Oscar; Rangel, Murilo; Barter, William; Bursche, Albert; Potterat, Cedric; Coco, Victor
2012-01-01
The Large Hadron Collider (LHC) is the most powerful particle accelerator in the world. It has been designed to collide proton beams at an energy up to 14 TeV in the center of mass. In 2011, the data taking was done with a center of mass energy of 7 TeV, the instantaneous luminosity reached values greater than 4 × 10^32 cm^-2 s^-1, and the integrated luminosity reached 1.02 fb^-1 at LHCb. Jet reconstruction is fundamental to observe events that can be used to test perturbative QCD (pQCD). It also provides a way to observe standard model channels and to search for new physics such as SUSY. The anti-kt algorithm is a jet reconstruction algorithm based on the distance between particles in η × φ space and on the transverse momentum of particles. To maximize the energy resolution, all information from the trackers and the calorimeters is used in the LHCb experiment to create objects called particle flow objects, which are used as input to the anti-kt algorithm. LHCb is especially interesting for jet studies because its η region is complementary to those of the other main experiments at the LHC. We will present the first results of jet reconstruction using 2011 LHCb data. (author)
MR-guided dynamic PET reconstruction with the kernel method and spectral temporal basis functions
Novosad, Philip; Reader, Andrew J.
2016-06-01
Recent advances in dynamic positron emission tomography (PET) reconstruction have demonstrated that it is possible to achieve markedly improved end-point kinetic parameter maps by incorporating a temporal model of the radiotracer directly into the reconstruction algorithm. In this work we have developed a highly constrained, fully dynamic PET reconstruction algorithm incorporating both spectral analysis temporal basis functions and spatial basis functions derived from the kernel method applied to a co-registered T1-weighted magnetic resonance (MR) image. The dynamic PET image is modelled as a linear combination of spatial and temporal basis functions, and a maximum likelihood estimate for the coefficients can be found using the expectation-maximization (EM) algorithm. Following reconstruction, kinetic fitting using any temporal model of interest can be applied. Based on a BrainWeb T1-weighted MR phantom, we performed a realistic dynamic [18F]FDG simulation study with two noise levels, and investigated the quantitative performance of the proposed reconstruction algorithm, comparing it with reconstructions incorporating either spectral analysis temporal basis functions alone or kernel spatial basis functions alone, as well as with conventional frame-independent reconstruction. Compared to the other reconstruction algorithms, the proposed algorithm achieved superior performance, offering a decrease in spatially averaged pixel-level root-mean-square-error on post-reconstruction kinetic parametric maps in the grey/white matter, as well as in the tumours when they were present on the co-registered MR image. When the tumours were not visible in the MR image, reconstruction with the proposed algorithm performed similarly to reconstruction with spectral temporal basis functions and was superior to both conventional frame-independent reconstruction and frame-independent reconstruction with kernel spatial basis functions. Furthermore, we demonstrate that a joint spectral
Similarity-regulation of OS-EM for accelerated SPECT reconstruction
Vaissier, P. E. B.; Beekman, F. J.; Goorden, M. C.
2016-06-01
Ordered subsets expectation maximization (OS-EM) is widely used to accelerate image reconstruction in single photon emission computed tomography (SPECT). Speedup of OS-EM over maximum likelihood expectation maximization (ML-EM) is close to the number of subsets used. Although a high number of subsets can shorten reconstruction times significantly, it can also cause severe image artifacts such as improper erasure of reconstructed activity if projections contain few counts. We recently showed that such artifacts can be prevented by using a count-regulated OS-EM (CR-OS-EM) algorithm which automatically adapts the number of subsets for each voxel based on the estimated number of counts that the voxel contributed to the projections. While CR-OS-EM reached high speed-up over ML-EM in high-activity regions of images, speed in low-activity regions could still be very slow. In this work we propose similarity-regulated OS-EM (SR-OS-EM) as a much faster alternative to CR-OS-EM. SR-OS-EM also automatically and locally adapts the number of subsets, but it uses a different criterion for subset regulation: the number of subsets that is used for updating an individual voxel depends on how similar the reconstruction algorithm would update the estimated activity in that voxel with different subsets. Reconstructions of an image quality phantom and in vivo scans show that SR-OS-EM retains all of the favorable properties of CR-OS-EM, while reconstruction speed can be up to an order of magnitude higher in low-activity regions. Moreover our results suggest that SR-OS-EM can be operated with identical reconstruction parameters (including the number of iterations) for a wide range of count levels, which can be an additional advantage from a user perspective since users would only have to post-filter an image to present it at an appropriate noise level.
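A hedged sketch of the plain OS-EM iteration described above (toy system matrix and two subsets; the count- and similarity-regulation schemes themselves are not reproduced here):

```python
import numpy as np

# OS-EM: apply the ML-EM update to one ordered subset of projections per
# sub-iteration; speed-up over ML-EM is roughly the number of subsets.
def os_em(A, y, subsets, n_iter=200):
    x = np.ones(A.shape[1])
    for _ in range(n_iter):
        for rows in subsets:              # one multiplicative update per subset
            As, ys = A[rows], y[rows]
            proj = As @ x
            x *= (As.T @ (ys / np.maximum(proj, 1e-12))) / As.sum(axis=0)
    return x

A = np.array([[1.0, 0.2, 0.1],
              [0.1, 1.0, 0.2],
              [0.2, 0.1, 1.0],
              [0.3, 0.3, 0.3]])
x_true = np.array([3.0, 1.0, 2.0])
y = A @ x_true                            # noise-free, consistent projections
x_hat = os_em(A, y, subsets=[[0, 2], [1, 3]])
```

With few counts per projection the subset updates become erratic, which is the artifact regime that CR-OS-EM and SR-OS-EM address by locally reducing the effective number of subsets per voxel.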
International Nuclear Information System (INIS)
Ammendola, R.; Biagioni, A.; Cretaro, P.; Frezza, O.; Cicero, F. Lo; Lonardo, A.; Martinelli, M.; Paolucci, P.S.; Pastorelli, E.; Chiozzi, S.; Ramusino, A. Cotta; Fiorini, M.; Gianoli, A.; Neri, I.; Lorenzo, S. Di; Fantechi, R.; Piandani, R.; Pontisso, L.; Lamanna, G.; Piccini, M.
2017-01-01
This project aims to exploit the parallel computing power of a commercial Graphics Processing Unit (GPU) to implement fast pattern matching in the Ring Imaging Cherenkov (RICH) detector for the level 0 (L0) trigger of the NA62 experiment. In this approach, the ring-fitting algorithm is seedless, being fed with raw RICH data, with no previous information on the ring position from other detectors. Moreover, since the L0 trigger is provided with a more elaborated information than a simple multiplicity number, it results in a higher selection power. Two methods have been studied in order to reduce the data transfer latency from the readout boards of the detector to the GPU, i.e., the use of a dedicated NIC device driver with very low latency and a direct data transfer protocol from a custom FPGA-based NIC to the GPU. The performance of the system, developed through the FPGA approach, for multi-ring Cherenkov online reconstruction obtained during the NA62 physics runs is presented.
Carroll, Linda J; Rothe, J Peter
2010-09-01
As in other areas of health research, there has been increasing use of qualitative methods to study public health problems such as injuries and injury prevention. Likewise, the integration of qualitative and quantitative research (mixed methods) is beginning to assume a more prominent role in public health studies. Using mixed methods has great potential for gaining a broad and comprehensive understanding of injuries and their prevention. However, qualitative and quantitative research methods are based on two inherently different paradigms, and their integration requires a conceptual framework that permits the unity of these two methods. We present a theory-driven framework for viewing qualitative and quantitative research, which enables us to integrate them in a conceptually sound and useful manner. This framework has its foundation within the philosophical concept of complementarity, as espoused in the physical and social sciences, and draws on Bergson's metaphysical work on the 'ways of knowing'. Through understanding how data are constructed and reconstructed, and the different levels of meaning that can be ascribed to qualitative and quantitative findings, we can use a mixed-methods approach to gain a conceptually sound, holistic knowledge about injury phenomena that will enhance our development of relevant and successful interventions.
Hein, C. J.; Billy, J.; Robin, N.; FitzGerald, D.; Certain, R.
2017-12-01
rise of ca. 0.7 m during the past 700 years (+1.1 mm/yr). This study presents a moderate-resolution RSL curve for southern Newfoundland over the last 2500 years and a field demonstration of the utility of wave-built/aeolian stratigraphic contacts in beach ridges for sea-level reconstructions in mixed clastic systems.
Lawther, R
2018-01-01
In this work the author lets Φ be an irreducible root system, with Coxeter group W. He considers subsets of Φ which are abelian, meaning that no two roots in the set have sum in Φ ∪ {0}. He classifies all maximal abelian sets (i.e., abelian sets properly contained in no other) up to the action of W: for each W-orbit of maximal abelian sets we provide an explicit representative X, identify the (setwise) stabilizer W_X of X in W, and decompose X into W_X-orbits. Abelian sets of roots are closely related to abelian unipotent subgroups of simple algebraic groups, and thus to abelian p-subgroups of finite groups of Lie type over fields of characteristic p. Parts of the work presented here have been used to confirm the p-rank of E_8(p^n), and (somewhat unexpectedly) to obtain for the first time the 2-ranks of the Monster and Baby Monster sporadic groups, together with the double cover of the latter. Root systems of classical type are dealt with quickly here; the vast majority of the present work con...
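For intuition, the defining condition can be checked by brute force in a small classical case. The sketch below (hypothetical illustration, not the paper's method) enumerates the maximal abelian subsets of the A_3 root system, where the roots are the vectors e_i - e_j:

```python
import itertools

# Roots of A_3: e_i - e_j for i != j, as integer vectors in R^4
ROOTS = [tuple(int(k == i) - int(k == j) for k in range(4))
         for i in range(4) for j in range(4) if i != j]
ROOTSET = set(ROOTS)

def is_abelian(subset):
    """No two roots in the set may sum to a root or to zero."""
    for r, s in itertools.combinations(subset, 2):
        t = tuple(a + b for a, b in zip(r, s))
        if t in ROOTSET or not any(t):
            return False
    return True

def maximal_abelian_sets():
    abelian = [set(c) for k in range(1, len(ROOTS) + 1)
               for c in itertools.combinations(ROOTS, k) if is_abelian(c)]
    # maximal = not extendable by any further root
    return [S for S in abelian
            if not any(is_abelian(list(S) + [r]) for r in ROOTS if r not in S)]

maximal = maximal_abelian_sets()
```

For A_3 the maximal abelian sets are exactly the "rectangles" {e_i - e_j : i ∈ I, j ∉ I} for proper nonempty I, giving 14 sets of sizes 3 and 4; this is the kind of quick classical-type classification the abstract alludes to, whereas the exceptional types require the paper's detailed case analysis.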
Design of optimal linear antennas with maximally flat radiation patterns
Minkovich, B. M.; Mints, M. Ia.
1990-02-01
The paper presents an explicit solution to the problem of maximizing the aperture area utilization coefficient and obtaining the best approximation in the mean to the sectorial U-shaped radiation pattern of a linear antenna, when Butterworth flattening constraints are imposed on the approximating pattern. Constraints are established on the choice of the smallest and largest antenna dimensions that make it possible to obtain maximally flat patterns having a low sidelobe level and free of ripple within the main lobe.
International Nuclear Information System (INIS)
Lesavoy, M.A.
1985-01-01
Vaginal reconstruction can be an uncomplicated and straightforward procedure when attention to detail is maintained. The Abbe-McIndoe procedure of lining the neovaginal canal with split-thickness skin grafts has become standard. The use of the inflatable Heyer-Schulte vaginal stent provides comfort to the patient and ease to the surgeon in maintaining approximation of the skin graft. For large vaginal and perineal defects, myocutaneous flaps such as the gracilis island have been extremely useful for correction of radiation-damaged tissue of the perineum or for the reconstruction of large ablative defects. Minimal morbidity and scarring ensue because the donor site can be closed primarily. With all vaginal reconstruction, a compliant patient is a necessity. The patient must wear a vaginal obturator for a minimum of 3 to 6 months postoperatively and is encouraged to use intercourse as an excellent obturator. In general, vaginal reconstruction can be an extremely gratifying procedure for both the functional and emotional well-being of patients.
... in moderate exercise and recreational activities, or play sports that put less stress on the knees. ACL reconstruction is generally recommended if: You're an athlete and want to continue in your sport, especially if the sport involves jumping, cutting or ...
Maximizing benefits from resource development
International Nuclear Information System (INIS)
Skjelbred, B.
2002-01-01
The main objectives of Norwegian petroleum policy are to maximize the value creation for the country, develop a national oil and gas industry, and to be at the environmental forefront of long term resource management and coexistence with other industries. The paper presents a graph depicting production and net export of crude oil for countries around the world for 2002. Norway produced 3.41 mill b/d and exported 3.22 mill b/d. Norwegian petroleum policy measures include effective regulation and government ownership, research and technology development, and internationalisation. Research and development has focused on five priority areas: enhanced recovery, environmental protection, deep water recovery, small fields, and the gas value chain. The benefits of internationalisation include capitalizing on Norwegian competency, exploiting emerging markets, and assuring long-term value creation and employment. 5 figs
Maximizing synchronizability of duplex networks
Wei, Xiang; Emenheiser, Jeffrey; Wu, Xiaoqun; Lu, Jun-an; D'Souza, Raissa M.
2018-01-01
We study the synchronizability of duplex networks formed by two randomly generated network layers with different patterns of interlayer node connections. According to the master stability function, we use the smallest nonzero eigenvalue and the eigenratio between the largest and the second smallest eigenvalues of supra-Laplacian matrices to characterize synchronizability on various duplexes. We find that the interlayer linking weight and linking fraction have a profound impact on synchronizability of duplex networks. Increasing the interlayer coupling weight is found to cause either decreasing or constant synchronizability for different classes of network dynamics. In addition, negative node degree correlation across interlayer links outperforms positive degree correlation when most interlayer links are present. The reverse is true when few interlayer links are present. The numerical results and understanding based on these representative duplex networks are illustrative and instructive for building insights into maximizing synchronizability of more realistic multiplex networks.
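The two quantities the abstract relies on, the smallest nonzero eigenvalue and the eigenratio of the supra-Laplacian, can be computed for a small duplex with plain numpy. This is an illustrative sketch (the ring/star layers and one-to-one interlayer links are hypothetical choices, not the paper's networks):

```python
import numpy as np

def laplacian(A):
    return np.diag(A.sum(axis=1)) - A

def supra_laplacian(A1, A2, d):
    """Supra-Laplacian of a two-layer duplex on a shared node set,
    joined by one-to-one interlayer links of weight d."""
    n = A1.shape[0]
    I = np.eye(n)
    return np.block([[laplacian(A1) + d * I, -d * I],
                     [-d * I, laplacian(A2) + d * I]])

def sync_indices(L):
    """lambda_2 and the eigenratio lambda_max / lambda_2 used in
    master-stability-function analyses."""
    w = np.sort(np.linalg.eigvalsh(L))
    return w[1], w[-1] / w[1]

# Hypothetical duplex: a ring layer and a star layer on 6 nodes
n = 6
ring = np.zeros((n, n))
for i in range(n):
    ring[i, (i + 1) % n] = ring[(i + 1) % n, i] = 1.0
star = np.zeros((n, n))
star[0, 1:] = star[1:, 0] = 1.0
L = supra_laplacian(ring, star, d=1.0)
lam2, ratio = sync_indices(L)
```

Sweeping `d` in this sketch reproduces the kind of coupling-weight dependence the abstract describes: lambda_2 and the eigenratio change with the interlayer weight, and which one governs synchronizability depends on the class of node dynamics.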
VIOLATION OF CONVERSATION MAXIM ON TV ADVERTISEMENTS
Directory of Open Access Journals (Sweden)
Desak Putu Eka Pratiwi
2015-07-01
A maxim is a principle that must be obeyed by all participants, textually and interpersonally, in order to have a smooth communication process. Conversational maxims are divided into four types: maxim of quality, maxim of quantity, maxim of relevance, and maxim of manner. A maxim may be violated in a conversation when the information the speaker has is not delivered well to the speaking partner, resulting in an awkward impression. Examples of violations include information that is redundant, untrue, irrelevant, or convoluted. Advertisers often deliberately violate maxims to create unique and controversial advertisements. This study aims to examine the violation of maxims in conversations in TV ads. The source of data in this research is food advertisements aired on TV. Documentation and observation methods are applied to obtain qualitative data. The theory used in this study is the maxim theory proposed by Grice (1975). The results of the data analysis are presented informally. The results of this study show an interesting fact: the maxim violations found in the advertisements are exactly what make them very attractive and give them high value.
Siddeq, M. M.; Rodrigues, M. A.
2015-09-01
Image compression techniques are widely used for 2D images, 2D video, 3D images, and 3D video. There are many types of compression techniques, and among the most popular are JPEG and JPEG2000. In this research, we introduce a new compression method based on applying a two-level discrete cosine transform (DCT) and a two-level discrete wavelet transform (DWT) in connection with novel compression steps for high-resolution images. The proposed image compression algorithm consists of four steps: (1) transform an image by a two-level DWT followed by a DCT to produce two matrices, the DC-Matrix and AC-Matrix (the low- and high-frequency matrices, respectively); (2) apply a second-level DCT to the DC-Matrix to generate two arrays, namely a nonzero-array and a zero-array; (3) apply the Minimize-Matrix-Size algorithm to the AC-Matrix and to the other high frequencies generated by the second-level DWT; (4) apply arithmetic coding to the output of the previous steps. A novel decompression algorithm, the Fast-Match-Search (FMS) algorithm, is used to reconstruct all high-frequency matrices. The FMS algorithm computes the probabilities of all compressed data using a table of data, and then uses a binary search to find the decompressed data inside the table. Thereafter, all decoded DC-values are combined with the decoded AC-coefficients in one matrix, followed by an inverse two-level DCT and two-level DWT. The technique is tested by compression and reconstruction of 3D surface patches, and is compared with JPEG and JPEG2000 through the 2D and 3D root-mean-square error after reconstruction. The results demonstrate that the proposed compression method has better visual properties than JPEG and JPEG2000 and is able to reconstruct surface patches in 3D more accurately.
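The full pipeline (two-level DWT, DCT, Minimize-Matrix-Size, arithmetic coding, FMS) is specific to the paper; as a sketch of just the transform stage, here is a one-level 2-D Haar DWT with an orthonormal DCT applied to the low-frequency band. Everything here is a generic textbook construction, not the authors' code:

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar DWT: returns the LL band and (LH, HL, HH) details."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, (lh, hl, hh)

def haar_idwt2(ll, bands):
    """Exact inverse of haar_dwt2."""
    lh, hl, hh = bands
    a = np.empty((ll.shape[0], 2 * ll.shape[1]))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    out = np.empty((2 * a.shape[0], a.shape[1]))
    out[0::2, :], out[1::2, :] = a + d, a - d
    return out

def dct2(block):
    """Orthonormal 2-D DCT-II of a square block via the transform matrix."""
    n = block.shape[0]
    k, m = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    C = np.cos(np.pi * (2 * m + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    C[0, :] /= np.sqrt(2.0)
    return C @ block @ C.T

rng = np.random.default_rng(0)
img = rng.uniform(0.0, 255.0, (8, 8))
ll, bands = haar_dwt2(img)
coeffs = dct2(ll)          # low-frequency band, decorrelated for coding
rec = haar_idwt2(ll, bands)
```

In the paper's terms, `coeffs` plays the role of the DC-Matrix input to the later quantization/coding steps, while the detail bands feed the Minimize-Matrix-Size stage.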
Directory of Open Access Journals (Sweden)
Brown James
2007-12-01
This article aims to discuss the various defects that occur with maxillectomy, with a full review of the literature and discussion of the advantages and disadvantages of the various techniques described. Reconstruction of the maxilla can be relatively simple for the standard low maxillectomy that does not involve the orbital floor (Class 2). In this situation the structure of the face is less damaged and there are multiple reconstructive options for the restoration of the maxilla and dental alveolus. If the maxillectomy includes the orbit (Class 4), then problems involving the eye (enophthalmos, orbital dystopia, ectropion and diplopia) are avoided, which simplifies the reconstruction. Most controversy is associated with the maxillectomy that involves the orbital floor and dental alveolus (Class 3). A case is made for the use of the iliac crest with internal oblique as an ideal option, but there are other methods which may provide a similar result. A multidisciplinary approach to these patients is emphasised, which should include a prosthodontist with special expertise in these defects.
A heuristic statistical stopping rule for iterative reconstruction in emission tomography
International Nuclear Information System (INIS)
Ben Bouallegue, F.; Mariano-Goulart, D.; Crouzet, J.F.
2013-01-01
We propose a statistical stopping criterion for iterative reconstruction in emission tomography based on a heuristic statistical description of the reconstruction process. The method was assessed for maximum likelihood expectation maximization (MLEM) reconstruction. Based on Monte-Carlo numerical simulations and using a perfectly modeled system matrix, our method was compared with classical iterative reconstruction followed by low-pass filtering in terms of Euclidean distance to the exact object, noise, and resolution. The stopping criterion was then evaluated with realistic PET data of a Hoffman brain phantom produced using the Geant4 application in emission tomography (GATE) platform for different count levels. The numerical experiments showed that compared with the classical method, our technique yielded significant improvement of the noise-resolution tradeoff for a wide range of counting statistics compatible with routine clinical settings. When working with realistic data, the stopping rule allowed a qualitatively and quantitatively efficient determination of the optimal image. Our method appears to give a reliable estimation of the optimal stopping point for iterative reconstruction. It should thus be of practical interest as it produces images with similar or better quality than classical post-filtered iterative reconstruction with a mastered computation time. (author)
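The paper's contribution is the stopping rule itself, which is not reproduced here; the MLEM iteration that such a rule wraps can be sketched in numpy as follows, tracking the Poisson log-likelihood a stopping heuristic would monitor. The system matrix and data below are synthetic:

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """MLEM for the Poisson model y ~ Poisson(Ax); returns the estimate and
    the log-likelihood trace that a stopping rule could monitor."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                   # sensitivity image A^T 1
    loglik = []
    for _ in range(n_iter):
        proj = A @ x                       # forward projection
        x = x / sens * (A.T @ (y / proj))  # multiplicative EM update
        loglik.append(np.sum(y * np.log(A @ x) - A @ x))
    return x, loglik

rng = np.random.default_rng(1)
A = rng.uniform(0.1, 1.0, (40, 20))        # toy strictly positive system matrix
x_true = rng.uniform(0.5, 2.0, 20)
y = rng.poisson(A @ x_true).astype(float)
x_hat, loglik = mlem(A, y)
```

The log-likelihood increases monotonically while image noise also grows with iteration number, which is precisely why a principled stopping point (rather than likelihood convergence) is needed in emission tomography.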
Augusto, O
2012-01-01
The Large Hadron Collider (LHC) is the most powerful particle accelerator in the world. It has been designed to collide proton beams at an energy of up to 14 TeV in the center of mass. In 2011, data taking was done with a center-of-mass energy of 7 TeV; the instantaneous luminosity reached values greater than $4 \times 10^{32} cm^{-2} s^{-1}$ and the integrated luminosity reached 1.02 $fb^{-1}$ at LHCb. Jet reconstruction is fundamental for observing events that can be used to test perturbative QCD (pQCD). It also provides a way to observe standard model channels and to search for new physics such as SUSY. The anti-kt algorithm is a jet reconstruction algorithm based on the distance between particles in the $\eta \times \phi$ space and on the transverse momentum of the particles. To maximize the energy resolution, all information about the trackers and the calo...
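The anti-kt distance measure described above can be sketched as a sequential-recombination loop. This is an illustrative O(N^3) toy implementation, not the LHCb code: recombination is simplified to pt-weighted averaging (real reconstructions sum four-vectors), and the input particles are hypothetical:

```python
import math

def antikt(particles, R=0.5):
    """Toy anti-kt clustering over (pt, eta, phi) triples."""
    ps = [list(p) for p in particles]
    jets = []

    def dij(a, b):
        # anti-kt pairwise distance: min(1/pt_i^2, 1/pt_j^2) * dR^2 / R^2
        dphi = abs(a[2] - b[2])
        dphi = min(dphi, 2.0 * math.pi - dphi)
        dr2 = (a[1] - b[1]) ** 2 + dphi ** 2
        return min(a[0] ** -2, b[0] ** -2) * dr2 / R ** 2

    while ps:
        ib = min(range(len(ps)), key=lambda i: ps[i][0] ** -2)
        dmin, pair = ps[ib][0] ** -2, None      # smallest beam distance diB
        for i in range(len(ps)):
            for j in range(i + 1, len(ps)):
                d = dij(ps[i], ps[j])
                if d < dmin:
                    dmin, pair = d, (i, j)
        if pair is None:
            jets.append(ps.pop(ib))             # promote to final jet
        else:
            i, j = pair
            a, b = ps.pop(j), ps.pop(i)
            pt = a[0] + b[0]                    # simplified pt-weighted merge
            ps.append([pt,
                       (a[0] * a[1] + b[0] * b[1]) / pt,
                       (a[0] * a[2] + b[0] * b[2]) / pt])
    return jets

# Two collinear hard particles plus one isolated particle -> two jets
jets = antikt([(50.0, 0.0, 0.0), (10.0, 0.1, 0.1), (20.0, 2.0, 2.0)])
```

Because the 1/pt^2 weighting makes hard particles cluster first, anti-kt produces regular, cone-like jets around the hardest particles, which is why it is the standard choice at the LHC.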
Maximizing ROI (return on information)
Energy Technology Data Exchange (ETDEWEB)
McDonald, B.
2000-05-01
The role and importance of managing information are discussed, underscoring the importance by quoting from the report of the International Data Corporation, according to which Fortune 500 companies lost $12 billion in 1999 due to inefficiencies resulting from intellectual re-work, substandard performance, and inability to find knowledge resources. The report predicts that this figure will rise to $31.5 billion by 2003. Key impediments to implementing knowledge management systems are identified as: the cost and human resources requirements of deployment; the inflexibility of historical systems to adapt to change; and the difficulty of achieving corporate acceptance of inflexible software products that require changes in 'normal' ways of doing business. The author recommends the use of model-, document- and rule-independent systems with a document-centered interface (DCI), employing rapid application development (RAD), object technologies and visual model development, which eliminate these problems, making it possible for companies to maximize their return on information (ROI) and achieve substantial savings in implementation costs.
Maximizing the optical network capacity.
Bayvel, Polina; Maher, Robert; Xu, Tianhua; Liga, Gabriele; Shevchenko, Nikita A; Lavery, Domaniç; Alvarado, Alex; Killey, Robert I
2016-03-06
Most of the digital data transmitted are carried by optical fibres, forming the greater part of the national and international communication infrastructure. The information-carrying capacity of these networks has increased vastly over the past decades through the introduction of wavelength division multiplexing, advanced modulation formats, digital signal processing and improved optical fibre and amplifier technology. These developments sparked the communication revolution and the growth of the Internet, and have created an illusion of infinite capacity being available. But as the volume of data continues to increase, is there a limit to the capacity of an optical fibre communication channel? The optical fibre channel is nonlinear, and the intensity-dependent Kerr nonlinearity limit has been suggested as a fundamental limit to optical fibre capacity. Current research is focused on whether this is the case, and on linear and nonlinear techniques, both optical and electronic, to understand, unlock and maximize the capacity of optical communications in the nonlinear regime. This paper describes some of them and discusses future prospects for success in the quest for capacity. © 2016 The Authors.
Light Microscopy at Maximal Precision
Directory of Open Access Journals (Sweden)
Matthew Bierbaum
2017-10-01
Microscopy is the workhorse of the physical and life sciences, producing crisp images of everything from atoms to cells well beyond the capabilities of the human eye. However, the analysis of these images is frequently little more accurate than manual marking. Here, we revolutionize the analysis of microscopy images, extracting all the useful information theoretically contained in a complex microscope image. Using a generic, methodological approach, we extract the information by fitting experimental images with a detailed optical model of the microscope, a method we call parameter extraction from reconstructing images (PERI). As a proof of principle, we demonstrate this approach with a confocal image of colloidal spheres, improving measurements of particle positions and radii by 10–100 times over current methods and attaining the maximum possible accuracy. With this unprecedented accuracy, we measure nanometer-scale colloidal interactions in dense suspensions solely with light microscopy, a previously impossible feat. Our approach is generic and applicable to imaging methods from brightfield to electron microscopy, where we expect accuracies of 1 nm and 0.1 pm, respectively.
Light Microscopy at Maximal Precision
Bierbaum, Matthew; Leahy, Brian D.; Alemi, Alexander A.; Cohen, Itai; Sethna, James P.
2017-10-01
Microscopy is the workhorse of the physical and life sciences, producing crisp images of everything from atoms to cells well beyond the capabilities of the human eye. However, the analysis of these images is frequently little more accurate than manual marking. Here, we revolutionize the analysis of microscopy images, extracting all the useful information theoretically contained in a complex microscope image. Using a generic, methodological approach, we extract the information by fitting experimental images with a detailed optical model of the microscope, a method we call parameter extraction from reconstructing images (PERI). As a proof of principle, we demonstrate this approach with a confocal image of colloidal spheres, improving measurements of particle positions and radii by 10-100 times over current methods and attaining the maximum possible accuracy. With this unprecedented accuracy, we measure nanometer-scale colloidal interactions in dense suspensions solely with light microscopy, a previously impossible feat. Our approach is generic and applicable to imaging methods from brightfield to electron microscopy, where we expect accuracies of 1 nm and 0.1 pm, respectively.
REGEN: Ancestral Genome Reconstruction for Bacteria
Yang, Kuan; Heath, Lenwood S.; Setubal, João C.
2012-01-01
Ancestral genome reconstruction can be understood as a phylogenetic study with more details than a traditional phylogenetic tree reconstruction. We present a new computational system called REGEN for ancestral bacterial genome reconstruction at both the gene and replicon levels. REGEN reconstructs gene content, contiguous gene runs, and replicon structure for each ancestral genome. Along each branch of the phylogenetic tree, REGEN infers evolutionary events, including gene creation and deleti...
Interval-based reconstruction for uncertainty quantification in PET
Kucharczak, Florentin; Loquin, Kevin; Buvat, Irène; Strauss, Olivier; Mariano-Goulart, Denis
2018-02-01
A new directed interval-based tomographic reconstruction algorithm, called non-additive interval based expectation maximization (NIBEM), is presented. It uses non-additive modeling of the forward operator that provides intervals instead of single-valued projections. The detailed approach is an extension of the maximum-likelihood expectation-maximization algorithm based on intervals. The main motivation for this extension is that the resulting intervals have appealing properties for estimating the statistical uncertainty associated with the reconstructed activity values. After reviewing previously published theoretical concepts related to interval-based projectors, this paper describes the NIBEM algorithm and gives examples that highlight the properties and advantages of this interval-valued reconstruction.
Directory of Open Access Journals (Sweden)
Hirpara Kieran M
2011-07-01
Background: Tensioning of anterior cruciate ligament (ACL) reconstruction grafts affects the clinical outcome of the procedure. As yet, no consensus has been reached regarding the optimum initial tension in an ACL graft. Most surgeons rely on the maximal sustained one-handed pull technique for graft tension. We aim to determine if this technique is reproducible from patient to patient. Findings: We created a device to simulate ACL reconstruction surgery using Ilizarov components and porcine flexor tendons. Six experienced ACL reconstruction surgeons volunteered to tension porcine grafts using the device to see if they could produce a consistent tension. None of the surgeons involved were able to accurately reproduce graft tension over a series of repeat trials. Conclusions: We conclude that the maximal sustained one-handed pull technique of ACL graft tensioning is not reproducible from trial to trial. We also conclude that the initial tension placed on an ACL graft varies from surgeon to surgeon.
LENUS (Irish Health Repository)
O'Neill, Barry J
2011-07-20
Background: Tensioning of anterior cruciate ligament (ACL) reconstruction grafts affects the clinical outcome of the procedure. As yet, no consensus has been reached regarding the optimum initial tension in an ACL graft. Most surgeons rely on the maximal sustained one-handed pull technique for graft tension. We aim to determine if this technique is reproducible from patient to patient. Findings: We created a device to simulate ACL reconstruction surgery using Ilizarov components and porcine flexor tendons. Six experienced ACL reconstruction surgeons volunteered to tension porcine grafts using the device to see if they could produce a consistent tension. None of the surgeons involved were able to accurately reproduce graft tension over a series of repeat trials. Conclusions: We conclude that the maximal sustained one-handed pull technique of ACL graft tensioning is not reproducible from trial to trial. We also conclude that the initial tension placed on an ACL graft varies from surgeon to surgeon.
Reconstruction of multiple-pinhole micro-SPECT data using origin ensembles.
Lyon, Morgan C; Sitek, Arkadiusz; Metzler, Scott D; Moore, Stephen C
2016-10-01
The authors are currently developing a dual-resolution multiple-pinhole micro-SPECT imaging system based on three large NaI(Tl) gamma cameras. Two multiple-pinhole tungsten collimator tubes will be used sequentially for whole-body "scout" imaging of a mouse, followed by high-resolution (hi-res) imaging of an organ of interest, such as the heart or brain. Ideally, the whole-body image will be reconstructed in real time such that data need only be acquired until the area of interest can be visualized well enough to determine positioning for the hi-res scan. The authors investigated the utility of the origin ensemble (OE) algorithm for online and offline reconstructions of the scout data. This algorithm operates directly in image space, and can provide estimates of image uncertainty, along with reconstructed images. Techniques for accelerating the OE reconstruction were also introduced and evaluated. System matrices were calculated for our 39-pinhole scout collimator design. SPECT projections were simulated for a range of count levels using the MOBY digital mouse phantom. Simulated data were used for a comparison of OE and maximum-likelihood expectation maximization (MLEM) reconstructions. The OE algorithm convergence was evaluated by calculating the total-image entropy and by measuring the counts in a volume-of-interest (VOI) containing the heart. Total-image entropy was also calculated for simulated MOBY data reconstructed using OE with various levels of parallelization. For VOI measurements in the heart, liver, bladder, and soft-tissue, MLEM and OE reconstructed images agreed within 6%. Image entropy converged after ∼2000 iterations of OE, while the counts in the heart converged earlier at ∼200 iterations of OE. An accelerated version of OE completed 1000 iterations in <9 min for a 6.8M count data set, with some loss of image entropy performance, whereas the same dataset required ∼79 min to complete 1000 iterations of conventional OE. A combination of the two
Does mental exertion alter maximal muscle activation?
Directory of Open Access Journals (Sweden)
Vianney eRozand
2014-09-01
Mental exertion is known to impair endurance performance, but its effects on neuromuscular function remain unclear. The purpose of this study was to test the hypothesis that mental exertion reduces torque and muscle activation during intermittent maximal voluntary contractions of the knee extensors. Ten subjects performed, in randomized order, three separate mental exertion conditions lasting 27 minutes each: (i) high mental exertion (incongruent Stroop task), (ii) moderate mental exertion (congruent Stroop task), and (iii) low mental exertion (watching a movie). In each condition, mental exertion was combined with ten intermittent maximal voluntary contractions of the knee extensor muscles (one maximal voluntary contraction every 3 minutes). Neuromuscular function was assessed using electrical nerve stimulation. Maximal voluntary torque, maximal muscle activation and other neuromuscular parameters were similar across mental exertion conditions and did not change over time. These findings suggest that mental exertion does not affect neuromuscular function during intermittent maximal voluntary contractions of the knee extensors.
Galavis, Paulina E; Hollensen, Christian; Jallow, Ngoneh; Paliwal, Bhudatt; Jeraj, Robert
2010-10-01
Characterization of textural features (spatial distributions of image intensity levels) has been considered as a tool for automatic tumor segmentation. The purpose of this work is to study the variability of the textural features in PET images due to different acquisition modes and reconstruction parameters. Twenty patients with solid tumors underwent PET/CT scans on a GE Discovery VCT scanner, 45-60 minutes post-injection of 10 mCi of [18F]FDG. Scans were acquired in both 2D and 3D modes. For each acquisition the raw PET data was reconstructed using five different reconstruction parameters. Lesions were segmented on a default image using the threshold of 40% of maximum SUV. Fifty different texture features were calculated inside the tumors. The ranges of variation of the features were calculated with respect to the average value. The fifty textural features were classified based on the range of variation into three categories: small, intermediate and large variability. Features with small variability (range ≤ 5%) were entropy-first order, energy, maximal correlation coefficient (second-order feature) and low-gray level run emphasis (high-order feature). The features with intermediate variability (10% ≤ range ≤ 25%) were entropy-GLCM, sum entropy, high gray level run emphasis, gray level non-uniformity, small number emphasis, and entropy-NGL. The forty remaining features presented large variations (range > 30%). Textural features such as entropy-first order, energy, maximal correlation coefficient, and low-gray level run emphasis exhibited small variations due to different acquisition modes and reconstruction parameters. Features with a low level of variation are better candidates for reproducible tumor segmentation. Even though features such as contrast-NGTD, coarseness, homogeneity, and busyness have been previously used, our data indicated that these features presented large variations, therefore they could not be considered good candidates for tumor segmentation.
International Nuclear Information System (INIS)
Galavis, Paulina E.; Jallow, Ngoneh; Paliwal, Bhudatt; Jeraj, Robert; Hollensen, Christian
2010-01-01
Background. Characterization of textural features (spatial distributions of image intensity levels) has been considered as a tool for automatic tumor segmentation. The purpose of this work is to study the variability of the textural features in PET images due to different acquisition modes and reconstruction parameters. Material and methods. Twenty patients with solid tumors underwent PET/CT scans on a GE Discovery VCT scanner, 45-60 minutes post-injection of 10 mCi of [18F]FDG. Scans were acquired in both 2D and 3D modes. For each acquisition the raw PET data was reconstructed using five different reconstruction parameters. Lesions were segmented on a default image using the threshold of 40% of maximum SUV. Fifty different texture features were calculated inside the tumors. The ranges of variation of the features were calculated with respect to the average value. Results. The fifty textural features were classified based on the range of variation into three categories: small, intermediate and large variability. Features with small variability (range ≤ 5%) were entropy-first order, energy, maximal correlation coefficient (second-order feature) and low-gray level run emphasis (high-order feature). The features with intermediate variability (10% ≤ range ≤ 25%) were entropy-GLCM, sum entropy, high gray level run emphasis, gray level non-uniformity, small number emphasis, and entropy-NGL. The forty remaining features presented large variations (range > 30%). Conclusion. Textural features such as entropy-first order, energy, maximal correlation coefficient, and low-gray level run emphasis exhibited small variations due to different acquisition modes and reconstruction parameters. Features with a low level of variation are better candidates for reproducible tumor segmentation. Even though features such as contrast-NGTD, coarseness, homogeneity, and busyness have been previously used, our data indicated that these features presented large variations, therefore they could not be considered good candidates for tumor segmentation.
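The two first-order features reported as most stable, entropy and energy, reduce to simple statistics of the ROI intensity histogram. A sketch (the bin count and the synthetic "SUV" values are hypothetical choices, not the study's settings):

```python
import numpy as np

def first_order_features(roi, n_bins=32):
    """First-order entropy (bits) and energy from an ROI intensity histogram."""
    counts, _ = np.histogram(roi, bins=n_bins)
    p = counts / counts.sum()        # normalized histogram
    nz = p[p > 0]                    # avoid log(0)
    entropy = -np.sum(nz * np.log2(nz))
    energy = np.sum(p ** 2)
    return entropy, energy

rng = np.random.default_rng(2)
roi = rng.normal(5.0, 1.0, (16, 16, 8))  # hypothetical SUV values in a tumor VOI
H, E = first_order_features(roi)
```

Because these features depend only on the intensity distribution, not on spatial arrangement, they are comparatively insensitive to reconstruction-induced changes in local texture, consistent with the small variability reported above.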
Inclusive Fitness Maximization:An Axiomatic Approach
Okasha, Samir; Weymark, John; Bossert, Walter
2014-01-01
Kin selection theorists argue that evolution in social contexts will lead organisms to behave as if maximizing their inclusive, as opposed to personal, fitness. The inclusive fitness concept allows biologists to treat organisms as akin to rational agents seeking to maximize a utility function. Here we develop this idea and place it on a firm footing by employing a standard decision-theoretic methodology. We show how the principle of inclusive fitness maximization and a related principle of qu...
Activity versus outcome maximization in time management.
Malkoc, Selin A; Tonietto, Gabriela N
2018-04-30
Feeling time-pressed has become ubiquitous. Time management strategies have emerged to help individuals fit in more of their desired and necessary activities. We provide a review of these strategies. In doing so, we distinguish between two, often competing, motives people have in managing their time: activity maximization and outcome maximization. The emerging literature points to an important dilemma: a given strategy that maximizes the number of activities might be detrimental to outcome maximization. We discuss such factors that might hinder performance in work tasks and enjoyment in leisure tasks. Finally, we provide theoretically grounded recommendations that can help balance these two important goals in time management. Published by Elsevier Ltd.
On the maximal superalgebras of supersymmetric backgrounds
International Nuclear Information System (INIS)
Figueroa-O'Farrill, Jose; Hackett-Jones, Emily; Moutsopoulos, George; Simon, Joan
2009-01-01
In this paper we give a precise definition of the notion of a maximal superalgebra of certain types of supersymmetric supergravity backgrounds, including the Freund-Rubin backgrounds, and propose a geometric construction extending the well-known construction of its Killing superalgebra. We determine the structure of maximal Lie superalgebras and show that there is a finite number of isomorphism classes, all related via contractions from an orthosymplectic Lie superalgebra. We use the structure theory to show that maximally supersymmetric waves do not possess such a maximal superalgebra, but that the maximally supersymmetric Freund-Rubin backgrounds do. We perform the explicit geometric construction of the maximal superalgebra of AdS_4 × S^7 and find that it is isomorphic to osp(1|32). We propose an algebraic construction of the maximal superalgebra of any background asymptotic to AdS_4 × S^7 and we test this proposal by computing the maximal superalgebra of the M2-brane in its two maximally supersymmetric limits, finding agreement.
Task-oriented maximally entangled states
International Nuclear Information System (INIS)
Agrawal, Pankaj; Pradhan, B
2010-01-01
We introduce the notion of a task-oriented maximally entangled state (TMES). This notion depends on the task for which a quantum state is used as the resource. TMESs are the states that can be used to carry out the task maximally. This concept may be more useful than that of a general maximally entangled state in the case of a multipartite system. We illustrate this idea by giving an operational definition of maximally entangled states on the basis of communication tasks of teleportation and superdense coding. We also give examples and a procedure to obtain such TMESs for n-qubit systems.
Cartilage grafting in nasal reconstruction.
Immerman, Sara; White, W Matthew; Constantinides, Minas
2011-02-01
Nasal reconstruction after resection for cutaneous malignancies poses a unique challenge to facial plastic surgeons. The nose, a unique 3-D structure, not only must remain functional but also be aesthetically pleasing to patients. A complete understanding of all the layers of the nose and knowledge of available cartilage grafting material is necessary. Autogenous material, namely septal, auricular, and costal cartilage, is the most favored material in a free cartilage graft or a composite cartilage graft. All types of material have advantages and disadvantages that should guide the most appropriate selection to maximize the functional and cosmetic outcomes for patients. Copyright © 2011 Elsevier Inc. All rights reserved.
Wan, Chao; Hao, Zhixiu
2018-02-01
Graft tissues within bone tunnels remain mobile for a long time after anterior cruciate ligament (ACL) reconstruction. However, whether the graft-tunnel friction affects the finite element (FE) simulation of the ACL reconstruction is still unclear. Four friction coefficients (from 0 to 0.3) were simulated in the ACL-reconstructed joint model as well as two loading levels of anterior tibial drawer. The graft-tunnel friction did not affect joint kinematics and the maximal principal strain of the graft. By contrast, both the relative graft-tunnel motion and equivalent strain for the bone tunnels were altered, which corresponded to different processes of graft-tunnel integration and bone remodeling, respectively. It implies that the graft-tunnel friction should be defined properly for studying the graft-tunnel integration or bone remodeling after ACL reconstruction using numerical simulation.
International Nuclear Information System (INIS)
O'Sullivan, F.; Pawitan, Y.; Harrison, R.L.; Lewellen, T.K.
1990-01-01
In statistical terms, filtered backprojection can be viewed as smoothed least squares (LS). In this paper, the authors report on improvement in LS resolution by incorporating locally adaptive smoothers, imposing positivity, and using statistical methods for optimal selection of the resolution parameter. The resulting algorithm has high computational efficiency relative to more elaborate maximum likelihood (ML) type techniques (i.e., EM with sieves). Practical aspects of the procedure are discussed in the context of PET, and illustrations with computer-simulated and real tomograph data are presented. The relative recovery coefficients for a 9 mm sphere in a computer-simulated hot-spot phantom range from 0.3 to 0.6 when the number of counts ranges from 10,000 to 640,000, respectively. The authors will also present results illustrating the relative efficacy of ML and LS reconstruction techniques.
Transformation of bipartite non-maximally entangled states into a ...
Indian Academy of Sciences (India)
We present two schemes for transforming bipartite non-maximally entangled states into a W state in cavity QED system, by using highly detuned interactions and the resonant interactions between two-level atoms and a single-mode cavity field. A tri-atom W state can be generated by adjusting the interaction times between ...
Extract of Zanthoxylum bungeanum maxim seed oil reduces ...
African Journals Online (AJOL)
Purpose: To investigate the anti-hyperlipidaemic effect of extract of Zanthoxylum bungeanum Maxim. seed oil (EZSO) on high-fat diet (HFD)-induced hyperlipidemic hamsters. Methods: Following feeding with HFD for 30 days, hyperlipidemic hamsters were intragastrically treated with EZSO for 60 days. Serum levels of ...
How Managerial Ownership Affects Profit Maximization in Newspaper Firms.
Busterna, John C.
1989-01-01
Explores whether different levels of a manager's ownership of a newspaper affects the manager's profit maximizing attitudes and behavior. Finds that owner-managers tend to place less emphasis on profits than non-owner-controlled newspapers, contrary to economic theory and empirical evidence from other industries. (RS)
Maximizing the model for Discounted Stream of Utility from ...
African Journals Online (AJOL)
Osagiede et al. (2009) considered an analytic model for maximizing a discounted stream of utility from consumption when the rate of production is linear. A solution was provided to a level where methods of solving ordinary differential equations would be applied, but they left off there, as a result of the mathematical complexity ...
Maximally Entangled Multipartite States: A Brief Survey
International Nuclear Information System (INIS)
Enríquez, M; Wintrowicz, I; Życzkowski, K
2016-01-01
The problem of identifying maximally entangled quantum states of composite quantum systems is analyzed. We review some states of multipartite systems distinguished with respect to certain measures of quantum entanglement. Numerical results obtained for 4-qubit pure states illustrate the fact that the notion of maximally entangled state depends on the measure used. (paper)
Utility maximization and mode of payment
Koning, R.H.; Ridder, G.; Heijmans, R.D.H.; Pollock, D.S.G.; Satorra, A.
2000-01-01
The implications of stochastic utility maximization in a model of choice of payment are examined. Three types of compatibility with utility maximization are distinguished: global compatibility, local compatibility on an interval, and local compatibility on a finite set of points.
Corporate Social Responsibility and Profit Maximizing Behaviour
Becchetti, Leonardo; Giallonardo, Luisa; Tessitore, Maria Elisabetta
2005-01-01
We examine the behavior of a profit maximizing monopolist in a horizontal differentiation model in which consumers differ in their degree of social responsibility (SR) and consumers' SR is dynamically influenced by habit persistence. The model outlines parametric conditions under which (consumer driven) corporate social responsibility is an optimal choice compatible with profit maximizing behavior.
Inclusive fitness maximization: An axiomatic approach.
Okasha, Samir; Weymark, John A; Bossert, Walter
2014-06-07
Kin selection theorists argue that evolution in social contexts will lead organisms to behave as if maximizing their inclusive, as opposed to personal, fitness. The inclusive fitness concept allows biologists to treat organisms as akin to rational agents seeking to maximize a utility function. Here we develop this idea and place it on a firm footing by employing a standard decision-theoretic methodology. We show how the principle of inclusive fitness maximization and a related principle of quasi-inclusive fitness maximization can be derived from axioms on an individual's 'as if preferences' (binary choices) for the case in which phenotypic effects are additive. Our results help integrate evolutionary theory and rational choice theory, help draw out the behavioural implications of inclusive fitness maximization, and point to a possible way in which evolution could lead organisms to implement it. Copyright © 2014 Elsevier Ltd. All rights reserved.
A New Look at the Impact of Maximizing on Unhappiness: Two Competing Mediating Effects
Directory of Open Access Journals (Sweden)
Jiaxi Peng
2018-02-01
The current study aims to explore how the decision-making style of maximizing affects subjective well-being (SWB), focusing on confirmation of the mediating role of regret and the suppressing role of achievement motivation. A total of 402 Chinese undergraduate students participated in this study, in which they responded to the maximization, regret, and achievement motivation scales and SWB measures. Results suggested that maximizing significantly predicted SWB. Moreover, regret and achievement motivation (hope-for-success dimension) could completely mediate and suppress this effect. That is, two competing indirect pathways exist between maximizing and SWB. One pathway is through regret. Maximizing typically leads one to regret, which could negatively predict SWB. Alternatively, maximizing could lead to high levels of hope for success, which were positively correlated with SWB. Findings offered a complex method of thinking about the relationship between maximizing and SWB.
Connected Filtering by Reconstruction : Basis and New Advances
Wilkinson, Michael H.F.
2008-01-01
Openings-by-reconstruction are the oldest connected filters, and indeed, reconstruction methodology lies at the heart of many connected operators such as levelings. Starting out from the basic reconstruction principle of iterated geodesic dilations, extensions such as the use of reconstruction
Maximizing PTH Anabolic Osteoporosis Therapy
2014-09-01
MSPC utilizing a sensitive polymerase chain reaction (PCR)-based ELISA detection method (29). Consistently, higher levels of telomerase activity...
Breast Reconstruction After Mastectomy
What is breast reconstruction? How do surgeons use implants to reconstruct a woman’s breast? How do surgeons ...
Breast reconstruction - implants
Breast implants surgery; Mastectomy - breast reconstruction with implants; Breast cancer - breast reconstruction with implants ... harder to find a tumor if your breast cancer comes back. Getting breast implants does not take as long as breast reconstruction ...
Accelerated median root prior reconstruction for pinhole single-photon emission tomography (SPET)
Energy Technology Data Exchange (ETDEWEB)
Sohlberg, Antti [Department of Clinical Physiology and Nuclear Medicine, Kuopio University Hospital, PO Box 1777 FIN-70211, Kuopio (Finland); Ruotsalainen, Ulla [Institute of Signal Processing, DMI, Tampere University of Technology, PO Box 553 FIN-33101, Tampere (Finland); Watabe, Hiroshi [National Cardiovascular Center Research Institute, 5-7-1 Fujisihro-dai, Suita City, Osaka 565-8565 (Japan); Iida, Hidehiro [National Cardiovascular Center Research Institute, 5-7-1 Fujisihro-dai, Suita City, Osaka 565-8565 (Japan); Kuikka, Jyrki T [Department of Clinical Physiology and Nuclear Medicine, Kuopio University Hospital, PO Box 1777 FIN-70211, Kuopio (Finland)
2003-07-07
Pinhole collimation can be used to improve spatial resolution in SPET. However, the resolution improvement is achieved at the cost of reduced sensitivity, which leads to projection images with poor statistics. Images reconstructed from these projections using the maximum likelihood expectation maximization (ML-EM) algorithms, which have been used to reduce the artefacts generated by the filtered backprojection (FBP) based reconstruction, suffer from noise/bias trade-off: noise contaminates the images at high iteration numbers, whereas early abortion of the algorithm produces images that are excessively smooth and biased towards the initial estimate of the algorithm. To limit the noise accumulation we propose the use of the pinhole median root prior (PH-MRP) reconstruction algorithm. MRP is a Bayesian reconstruction method that has already been used in PET imaging and shown to possess good noise reduction and edge preservation properties. In this study the PH-MRP algorithm was accelerated with the ordered subsets (OS) procedure and compared to the FBP, OS-EM and conventional Bayesian reconstruction methods in terms of noise reduction, quantitative accuracy, edge preservation and visual quality. The results showed that the accelerated PH-MRP algorithm was very robust. It provided visually pleasing images with lower noise level than the FBP or OS-EM and with smaller bias and sharper edges than the conventional Bayesian methods.
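The ordered-subsets (OS) procedure used above to accelerate PH-MRP can be sketched generically: each sub-iteration updates the image from only a subset of the projections, so one pass over the data applies several multiplicative updates instead of one. The tiny system matrix and the two-subset split below are illustrative assumptions, not the paper's pinhole projector.

```python
import numpy as np

def os_em(A, y, subsets, n_iter=50):
    """OS-EM sketch: A is the system matrix, y the measured projections,
    subsets a partition of the projection rows."""
    x = np.ones(A.shape[1])                   # uniform initial estimate
    for _ in range(n_iter):
        for rows in subsets:                  # one EM-style update per subset
            As, ys = A[rows], y[rows]
            proj = As @ x                     # forward projection (subset only)
            x *= (As.T @ (ys / np.maximum(proj, 1e-12))) / As.sum(axis=0)
    return x

# Made-up 4-projection, 2-pixel example with noiseless, consistent data.
A = np.array([[1.0, 0.5], [0.2, 1.0], [0.7, 0.7], [0.4, 0.9]])
y = A @ np.array([2.0, 3.0])
print(os_em(A, y, subsets=[[0, 2], [1, 3]]))  # approaches [2, 3]
```

With S subsets, each full pass applies S updates, which is the source of the acceleration reported for OS-EM-type algorithms.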
Jet Vertex Charge Reconstruction
Nektarijevic, Snezana; The ATLAS collaboration
2015-01-01
A newly developed algorithm called the jet vertex charge tagger, aimed at identifying the sign of the charge of jets containing b-hadrons, referred to as b-jets, is presented. In addition to the well-established track-based jet charge determination, this algorithm introduces the so-called "jet vertex charge" reconstruction, which exploits the charge information associated with the displaced vertices within the jet. Furthermore, the charge of a soft muon contained in the jet is taken into account when available. All available information is combined into a multivariate discriminator. The algorithm has been developed on jets matched to generator-level b-hadrons in ttbar events simulated at √s = 13 TeV using the full ATLAS detector simulation and reconstruction.
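For orientation, a track-based jet charge of the kind the abstract builds on is commonly defined as a momentum-weighted sum of track charges. The sketch below assumes that generic definition; the weighting exponent kappa and the example tracks are illustrative choices, not the ATLAS tagger's actual inputs or tuning.

```python
def jet_charge(tracks, kappa=0.5):
    """Momentum-weighted jet charge.

    tracks: list of (charge, pT) pairs for the tracks in the jet.
    kappa:  weighting exponent (illustrative value, not a tuned one).
    """
    num = sum(q * pt**kappa for q, pt in tracks)
    den = sum(pt**kappa for _, pt in tracks)
    return num / den if den > 0 else 0.0

# Example: three hypothetical tracks; the sign of the result is the
# quantity a charge tagger would feed into its discriminant.
tracks = [(+1, 20.0), (-1, 5.0), (+1, 3.0)]
print(jet_charge(tracks))
```

Harder momentum weighting (larger kappa) emphasizes the leading track, which tends to carry more of the parent hadron's charge information.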
Development of regularized expectation maximization algorithms for fan-beam SPECT data
International Nuclear Information System (INIS)
Kim, Soo Mee; Lee, Jae Sung; Lee, Dong Soo; Lee, Soo Jin; Kim, Kyeong Min
2005-01-01
SPECT using a fan-beam collimator improves spatial resolution and sensitivity. For the reconstruction from fan-beam projections, it is necessary to implement direct fan-beam reconstruction methods without transforming the data into the parallel geometry. In this study, various fan-beam reconstruction algorithms were implemented and their performances were compared. The projector for fan-beam SPECT was implemented using a ray-tracing method. The direct reconstruction algorithms implemented for fan-beam projection data were FBP (filtered backprojection), EM (expectation maximization), OS-EM (ordered subsets EM) and MAP-EM OSL (maximum a posteriori EM using the one-step late method) with membrane and thin-plate models as priors. For comparison, the fan-beam projection data were also rebinned into the parallel data using various interpolation methods, such as the nearest neighbor, bilinear and bicubic interpolations, and reconstructed using the conventional EM algorithm for parallel data. Noiseless and noisy projection data from the digital Hoffman brain and Shepp/Logan phantoms were reconstructed using the above algorithms. The reconstructed images were compared in terms of a percent error metric. For the fan-beam data with Poisson noise, the MAP-EM OSL algorithm with the thin-plate prior showed the best result in both percent error and stability. Bilinear interpolation was the most effective method for rebinning from the fan-beam to parallel geometry when the accuracy and computation load were considered. Direct fan-beam EM reconstructions were more accurate than the standard EM reconstructions obtained from rebinned parallel data. Direct fan-beam reconstruction algorithms were implemented, which provided significantly improved reconstructions.
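The ML-EM update underlying the EM-family algorithms compared above can be sketched generically. The tiny system matrix below is a made-up stand-in for a real fan-beam (or parallel) projector, purely to show the multiplicative update.

```python
import numpy as np

def ml_em(A, y, n_iter=50):
    """Plain ML-EM for emission data: y ~ Poisson(A x)."""
    x = np.ones(A.shape[1])               # uniform initial estimate
    sens = A.sum(axis=0)                  # sensitivity image, A^T 1
    for _ in range(n_iter):
        proj = A @ x                      # forward projection
        ratio = y / np.maximum(proj, 1e-12)
        x *= (A.T @ ratio) / sens         # multiplicative EM update
    return x

# Illustrative 3-projection, 2-pixel system with noiseless data.
A = np.array([[1.0, 0.5], [0.2, 1.0], [0.7, 0.7]])
x_true = np.array([2.0, 3.0])
y = A @ x_true
print(ml_em(A, y, n_iter=200))            # converges toward x_true
```

The rebinning comparison in the abstract amounts to replacing A (a direct fan-beam projector) with an interpolated parallel-geometry projector before running the same update.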
Teleportation of an arbitrary two-qudit state based on the non-maximally four-qudit cluster state
Institute of Scientific and Technical Information of China (English)
2008-01-01
Two different schemes are presented for quantum teleportation of an arbitrary two-qudit state using a non-maximally four-qudit cluster state as the quantum channel. The first scheme is based on the Bell-basis measurements and the receiver may probabilistically reconstruct the original state by performing proper transformation on her particles and an auxiliary two-level particle; the second scheme is based on the generalized Bell-basis measurements and the probability of successfully teleporting the unknown state depends on those measurements, which are adjusted by Alice. A comparison of the two schemes shows that the latter has a smaller success probability than the former and, contrary to the former, the channel information and auxiliary qubit are not necessary for the receiver in the latter.
Optimal topologies for maximizing network transmission capacity
Chen, Zhenhao; Wu, Jiajing; Rong, Zhihai; Tse, Chi K.
2018-04-01
It has been widely demonstrated that the structure of a network is a major factor that affects its traffic dynamics. In this work, we try to identify the optimal topologies for maximizing the network transmission capacity, as well as to build a clear relationship between structural features of a network and the transmission performance in terms of traffic delivery. We propose an approach for designing optimal network topologies against traffic congestion by link rewiring and apply it to the Barabási-Albert scale-free, static scale-free and Internet Autonomous System-level networks. Furthermore, we analyze the optimized networks using complex network parameters that characterize the structure of networks, and our simulation results suggest that an optimal network for traffic transmission is more likely to have a core-periphery structure. However, assortative mixing and the rich-club phenomenon may have negative impacts on network performance. Based on the observations of the optimized networks, we propose an efficient method to improve the transmission capacity of large-scale networks.
Direct parametric reconstruction in dynamic PET myocardial perfusion imaging: in vivo studies
Petibon, Yoann; Rakvongthai, Yothin; El Fakhri, Georges; Ouyang, Jinsong
2017-05-01
side, which incorporates a quadratic penalty function. The parametric images were then calculated using voxel-wise weighted least-square fitting of the reconstructed myocardial PET TACs. For the direct method, parametric images were estimated directly from the dynamic PET sinograms using a maximum a posteriori (MAP) parametric reconstruction algorithm which optimizes an objective function comprised of the Poisson log-likelihood term, the kinetic model and a quadratic penalty function. Maximization of the objective function with respect to each set of parameters was achieved using a preconditioned conjugate gradient algorithm with a specifically developed pre-conditioner. The performance of the direct method was evaluated by comparing voxel- and segment-wise estimates of K1, the tracer transport rate (ml·min⁻¹·ml⁻¹), to those obtained using the indirect method applied to both OSEM and OSL-MAP dynamic reconstructions. The proposed direct reconstruction method produced K1 maps with visibly lower noise than the indirect method based on OSEM and OSL-MAP reconstructions. At normal count levels, the direct method was shown to outperform the indirect method based on OSL-MAP in the sense that at a matched level of bias, reduced regional noise levels were obtained. At lower count levels, the direct method produced K1 estimates with significantly lower standard deviation across noise realizations than the indirect method based on OSL-MAP at a matched bias level. In all cases, the direct method yielded lower noise and standard deviation than the indirect method based on OSEM. Overall, the proposed direct reconstruction offered a better bias-variance tradeoff than the indirect method applied to either OSEM or OSL-MAP. Direct parametric reconstruction as applied to in vivo dynamic PET MPI data is therefore a promising method for producing MBF maps with lower variance.
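The indirect method's voxel-wise weighted least-squares step can be sketched as follows. The one-parameter linear model (TAC ≈ K1 × integrated input) and the synthetic data are simplifying assumptions for illustration only, not the kinetic model used in the study.

```python
import numpy as np

def wls_fit(design, tac, weights):
    """Weighted least-squares estimate: solve (X^T W X) b = X^T W y."""
    W = np.diag(weights)
    lhs = design.T @ W @ design
    rhs = design.T @ W @ tac
    return np.linalg.solve(lhs, rhs)

# Hypothetical single-voxel example.
t = np.linspace(0.1, 5, 20)
design = t.reshape(-1, 1)        # stand-in for the integrated input function
k1_true = 0.8
tac = k1_true * t                # noiseless synthetic time-activity curve
w = np.ones_like(t)              # e.g. inverse-variance (frame-duration) weights
print(wls_fit(design, tac, w))   # recovers ~[0.8]
```

The direct method in the abstract replaces this two-stage pipeline (reconstruct TACs, then fit) with a single MAP optimization over the sinogram data.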
Bipartite Bell Inequality and Maximal Violation
International Nuclear Information System (INIS)
Li Ming; Fei Shaoming; Li-Jost Xian-Qing
2011-01-01
We present new Bell inequalities for arbitrary dimensional bipartite quantum systems. The maximal violation of the inequalities is computed. The Bell inequality is capable of detecting quantum entanglement of both pure and mixed quantum states more effectively. (general)
HEALTH INSURANCE: CONTRIBUTIONS AND REIMBURSEMENT MAXIMAL
HR Division
2000-01-01
Affected by both the salary adjustment index on 1.1.2000 and the evolution of the staff members and fellows population, the average reference salary, which is used as an index for fixed contributions and reimbursement maximal, has changed significantly. An adjustment of the amounts of the reimbursement maximal and the fixed contributions is therefore necessary, as from 1 January 2000. Reimbursement maximal: the revised reimbursement maximal will appear on the leaflet summarising the benefits for the year 2000, which will soon be available from the divisional secretariats and from the AUSTRIA office at CERN. Fixed contributions: the fixed contributions, applicable to some categories of voluntarily insured persons, are set as follows (amounts in CHF for monthly contributions): voluntarily insured member of the personnel, with complete coverage: 815,- (was 803,- in 1999); voluntarily insured member of the personnel, with reduced coverage: 407,- (was 402,- in 1999); voluntarily insured no longer dependent child: 326,- (was 321...
Maximal Inequalities for Dependent Random Variables
DEFF Research Database (Denmark)
Hoffmann-Jorgensen, Jorgen
2016-01-01
Maximal inequalities play a crucial role in many probabilistic limit theorems; for instance, the law of large numbers, the law of the iterated logarithm, the martingale limit theorem and the central limit theorem. Let X_1, X_2, ... be random variables with partial sums S_k = X_1 + ... + X_k. Then a maximal inequality gives conditions ensuring that the maximal partial sum M_n = max ...
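A classical instance of the maximal inequalities the abstract surveys is Kolmogorov's inequality for independent, mean-zero variables X_i with finite variance:

```latex
\Pr\Bigl(\max_{1\le k\le n} \lvert S_k\rvert \ge \lambda\Bigr)
\;\le\; \frac{\operatorname{Var}(S_n)}{\lambda^{2}},
\qquad \lambda > 0.
```

It bounds the maximal partial sum M_n by a quantity involving only the last partial sum S_n, which is exactly the kind of condition used to prove the limit theorems listed above.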
Maximizing Function through Intelligent Robot Actuator Control
National Aeronautics and Space Administration — Maximizing Function through Intelligent Robot Actuator Control Successful missions to Mars and beyond will only be possible with the support of high-performance...
An ethical justification of profit maximization
DEFF Research Database (Denmark)
Koch, Carsten Allan
2010-01-01
In much of the literature on business ethics and corporate social responsibility, it is more or less taken for granted that attempts to maximize profits are inherently unethical. The purpose of this paper is to investigate whether an ethical argument can be given in support of profit maximizing behaviour. It is argued that some form of consequential ethics must be applied, and that both profit seeking and profit maximization can be defended from a rule-consequential point of view. It is noted, however, that the result does not apply unconditionally, but requires that certain forms of profit (and utility) maximizing actions are ruled out, e.g., by behavioural norms or formal institutions.
A definition of maximal CP-violation
International Nuclear Information System (INIS)
Roos, M.
1985-01-01
The unitary matrix of quark flavour mixing is parametrized in a general way, permitting a mathematically natural definition of maximal CP violation. Present data turn out to violate this definition by 2-3 standard deviations. (orig.)
A cosmological problem for maximally symmetric supergravity
International Nuclear Information System (INIS)
German, G.; Ross, G.G.
1986-01-01
Under very general considerations it is shown that inflationary models of the universe based on maximally symmetric supergravity with flat potentials are unable to resolve the cosmological energy density (Polonyi) problem. (orig.)
Insulin resistance and maximal oxygen uptake
DEFF Research Database (Denmark)
Seibaek, Marie; Vestergaard, Henrik; Burchardt, Hans
2003-01-01
BACKGROUND: Type 2 diabetes, coronary atherosclerosis, and physical fitness all correlate with insulin resistance, but the relative importance of each component is unknown. HYPOTHESIS: This study was undertaken to determine the relationship between insulin resistance, maximal oxygen uptake, and the presence of either diabetes or ischemic heart disease. METHODS: The study population comprised 33 patients with and without diabetes and ischemic heart disease. Insulin resistance was measured by a hyperinsulinemic euglycemic clamp; maximal oxygen uptake was measured during a bicycle exercise test. RESULTS: There was a strong correlation between maximal oxygen uptake and insulin-stimulated glucose uptake (r = 0.7, p = 0.001), and maximal oxygen uptake was the only factor of importance for determining insulin sensitivity in a model which also included the presence of diabetes and ischemic heart disease. CONCLUSION...
Maximal supergravities and the E10 model
International Nuclear Information System (INIS)
Kleinschmidt, Axel; Nicolai, Hermann
2006-01-01
The maximal-rank hyperbolic Kac-Moody algebra e10 has been conjectured to play a prominent role in the unification of duality symmetries in string and M theory. We review some recent developments supporting this conjecture.
Energy Technology Data Exchange (ETDEWEB)
Kramberger, C., E-mail: Christian.Kramberger-Kaplan@univie.ac.at; Meyer, J.C., E-mail: Jannik.Meyer@univie.ac.at
2016-11-15
We investigate the recovery of structures from large-area, low-dose exposures that distribute the dose over many identical copies of an object. The reconstruction is done via a maximum likelihood approach that requires neither identifying nor aligning the individual particles. We also simulate small molecular adsorbates on graphene and demonstrate the retrieval of images with atomic resolution from large-area and extremely low-dose raw data. Doses as low as 5 e⁻/Å² are sufficient if all symmetries (translations, rotations and mirrors) of the supporting membrane are exploited to retrieve the structure of individual adsorbed molecules. We compare different optimization schemes, consider mixed molecules and adsorption sites, and requirements on the amount of data. We further demonstrate that the maximum likelihood approach is count-limited only, requiring at least three independent counts per entity. Finally, we demonstrate that the approach works with real experimental data and in the presence of aberrations.
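A toy version of the symmetry-exploiting averaging can be sketched as follows. It uses the square-lattice symmetries (the dihedral group D4) rather than graphene's hexagonal ones, and the image size, motif, and noise level are arbitrary illustrative choices.

```python
import numpy as np

def symmetrize_d4(img):
    """Average an image over the 8 operations of D4
    (4 rotations by 90 degrees, each with and without a mirror)."""
    ops = []
    for k in range(4):
        r = np.rot90(img, k)
        ops.append(r)
        ops.append(np.fliplr(r))
    return np.mean(ops, axis=0)

rng = np.random.default_rng(0)
clean = np.zeros((8, 8))
clean[3:5, 3:5] = 1.0                       # D4-symmetric motif
noisy = clean + rng.normal(0, 0.5, clean.shape)
avg = symmetrize_d4(noisy)
# Averaging the 8 symmetry copies reduces the noise (roughly by sqrt(8)
# away from symmetry-fixed pixels) without any additional dose.
print(np.std(noisy - clean), np.std(avg - clean))
```

The same principle, extended to translations and the full membrane symmetry group, is what lets the dose be spread over many copies of the adsorbate.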
International Nuclear Information System (INIS)
Jones, W.F.; Byars, L.G.; Casey, M.E.
1988-01-01
A digital electronic architecture for parallel processing of the expectation maximization (EM) algorithm for positron emission tomography (PET) image reconstruction is proposed. Rapid (0.2 second) EM iterations on high-resolution (256 x 256) images are supported. Arrays of two very large scale integration (VLSI) chips perform forward- and back-projection calculations. A description of the architecture is given, including data flow and partitioning relevant to EM and parallel processing. The EM images shown are produced with software simulating the proposed hardware reconstruction algorithm. The projected cost of the system is estimated to be small in comparison to the cost of current PET scanners.
Gaussian maximally multipartite-entangled states
Facchi, Paolo; Florio, Giuseppe; Lupo, Cosmo; Mancini, Stefano; Pascazio, Saverio
2009-12-01
We study maximally multipartite-entangled states in the context of Gaussian continuous variable quantum systems. By considering multimode Gaussian states with constrained energy, we show that perfect maximally multipartite-entangled states, which exhibit the maximum amount of bipartite entanglement for all bipartitions, only exist for systems containing n=2 or 3 modes. We further numerically investigate the structure of these states and their frustration for n≤7.
Neutrino mass textures with maximal CP violation
International Nuclear Information System (INIS)
Aizawa, Ichiro; Kitabayashi, Teruyuki; Yasue, Masaki
2005-01-01
We show three types of neutrino mass textures, which give maximal CP violation as well as maximal atmospheric neutrino mixing. These textures are described by six real mass parameters: one specified by two complex flavor neutrino masses and two constrained ones and the others specified by three complex flavor neutrino masses. In each texture, we calculate mixing angles and masses, which are consistent with observed data, as well as Majorana CP phases
Why firms should not always maximize profits
Kolstad, Ivar
2006-01-01
Though corporate social responsibility (CSR) is on the agenda of most major corporations, corporate executives still largely support the view that corporations should maximize the returns to their owners. There are two lines of defence for this position. One is the Friedmanian view that maximizing owner returns is the corporate social responsibility of corporations. The other is a position voiced by many executives, that CSR and profits go together. This paper argues that the first position i...
Maximally Informative Observables and Categorical Perception
Tsiang, Elaine
2012-01-01
We formulate the problem of perception in the framework of information theory, and prove that categorical perception is equivalent to the existence of an observable that has the maximum possible information on the target of perception. We call such an observable maximally informative. Regardless of whether categorical perception is real, maximally informative observables can form the basis of a theory of perception. We conclude with the implications of such a theory for the problem of speech per...
Bayesian image reconstruction for emission tomography based on median root prior
International Nuclear Information System (INIS)
Alenius, S.
1997-01-01
The aim of the present study was to investigate a new type of Bayesian one-step late reconstruction method which utilizes a median root prior (MRP). The method favours images which have locally monotonous radioactivity concentrations. The new reconstruction algorithm was applied to ideal simulated data, phantom data and some patient examinations with PET. The same projection data were reconstructed with filtered back-projection (FBP) and maximum likelihood-expectation maximization (ML-EM) methods for comparison. The MRP method provided good-quality images with a similar resolution to the FBP method with a ramp filter, and at the same time the noise properties were as good as with Hann-filtered FBP images. The typical artefacts seen in FBP reconstructed images outside of the object were completely removed, as was the grainy noise inside the object. Quantitatively, the resulting average regional radioactivity concentrations in a large region of interest in images produced by the MRP method corresponded to the FBP and ML-EM results, but at the pixel-by-pixel level the MRP method proved to be the most accurate of the tested methods. In contrast to other iterative reconstruction methods, e.g. ML-EM, the MRP method was not sensitive to the number of iterations nor to the adjustment of reconstruction parameters. Only the Bayesian parameter β had to be set. The proposed MRP method is much simpler to calculate than the methods described previously, both with regard to the parameter settings and in terms of general use. The new MRP reconstruction method was shown to produce high-quality quantitative emission images with only one parameter setting in addition to the number of iterations. (orig.)
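The one-step-late update with a median root prior described in this abstract can be sketched in a few lines. This is a minimal illustration, not the author's implementation: the flattened 1D layout, the 3-point circular median filter, and the small-epsilon guards are assumptions of the sketch.

```python
import numpy as np

def mrp_osl_step(lam, y, A, beta, eps=1e-12):
    """One ML-EM update followed by the median-root-prior (MRP)
    one-step-late correction, which favours locally monotonous
    activity estimates.  beta is the single Bayesian parameter."""
    proj = A @ lam                                # forward projection
    ratio = A.T @ (y / np.maximum(proj, eps))     # back-projected ratio
    sens = np.maximum(A.sum(axis=0), eps)         # sensitivity image
    em = lam * ratio / sens                       # plain ML-EM estimate
    # 3-point circular median of the current estimate acts as the "root".
    med = np.median(np.stack([np.roll(lam, -1), lam, np.roll(lam, 1)]), axis=0)
    med = np.maximum(med, eps)
    return em / (1.0 + beta * (lam - med) / med)
```

With beta = 0 the step reduces to a plain ML-EM update; increasing beta penalizes pixels that deviate from their local median, which is what removes grainy noise while preserving locally monotonous regions.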
Dong, Jian; Hayakawa, Yoshihiko; Kannenberg, Sven; Kober, Cornelia
2013-02-01
The objective of this study was to reduce metal-induced streak artifact on oral and maxillofacial x-ray computed tomography (CT) images by developing the fast statistical image reconstruction system using iterative reconstruction algorithms. Adjacent CT images often depict similar anatomical structures in thin slices. So, first, images were reconstructed using the same projection data of an artifact-free image. Second, images were processed by the successive iterative restoration method where projection data were generated from reconstructed image in sequence. Besides the maximum likelihood-expectation maximization algorithm, the ordered subset-expectation maximization algorithm (OS-EM) was examined. Also, small region of interest (ROI) setting and reverse processing were applied for improving performance. Both algorithms reduced artifacts instead of slightly decreasing gray levels. The OS-EM and small ROI reduced the processing duration without apparent detriments. Sequential and reverse processing did not show apparent effects. Two alternatives in iterative reconstruction methods were effective for artifact reduction. The OS-EM algorithm and small ROI setting improved the performance. Copyright © 2012 Elsevier Inc. All rights reserved.
Adaptive algebraic reconstruction technique
International Nuclear Information System (INIS)
Lu Wenkai; Yin Fangfang
2004-01-01
Algebraic reconstruction techniques (ART) are iterative procedures for reconstructing objects from their projections. It is proven that ART can be computationally efficient by carefully arranging the order in which the collected data are accessed during the reconstruction procedure and adaptively adjusting the relaxation parameters. In this paper, an adaptive algebraic reconstruction technique (AART), which adopts the same projection access scheme in multilevel scheme algebraic reconstruction technique (MLS-ART), is proposed. By introducing adaptive adjustment of the relaxation parameters during the reconstruction procedure, one-iteration AART can produce reconstructions with better quality, in comparison with one-iteration MLS-ART. Furthermore, AART outperforms MLS-ART with improved computational efficiency
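The building block that AART adapts is the relaxed row-action (Kaczmarz) sweep. The sketch below is illustrative only: the row ordering and per-row relaxation values are placeholders for whatever access scheme and adaptive rule a particular variant uses; it does not reproduce the MLS-ART ordering itself.

```python
import numpy as np

def art_sweep(x, A, b, relax, row_order=None):
    """One ART sweep: for each measurement row a_i, project the current
    estimate toward the hyperplane a_i . x = b_i, scaled by a relaxation
    parameter.  An adaptive variant would update `relax` between rows."""
    m = A.shape[0]
    relax = np.broadcast_to(np.asarray(relax, dtype=float), (m,))
    order = range(m) if row_order is None else row_order
    for i in order:
        ai = A[i]
        x = x + relax[i] * (b[i] - ai @ x) / (ai @ ai) * ai
    return x
```

Carefully chosen `row_order` (e.g. a multilevel scheme) and adaptive `relax` values are exactly the levers the abstract credits for one-iteration image quality.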
Hu, Chunying; Huang, Qiuchen; Yu, Lili; Ye, Miao
2016-07-01
[Purpose] The purpose of this study was to examine the immediate effects of robot-assisted therapy on functional activity level after anterior cruciate ligament reconstruction. [Subjects and Methods] Participants included 10 patients (8 males and 2 females) following anterior cruciate ligament reconstruction. The subjects participated in robot-assisted therapy and treadmill exercise on different days. The Timed Up-and-Go test, Functional Reach Test, surface electromyography of the vastus lateralis and vastus medialis, and maximal extensor strength of isokinetic movement of the knee joint were evaluated in both groups before and after the experiment. [Results] The results for the Timed Up-and-Go Test and the 10-Meter Walk Test improved in the robot-assisted rehabilitation group. Surface electromyography of the vastus medialis muscle showed significant increases in maximum and average discharge after the intervention. [Conclusion] The results suggest that walking ability and muscle strength can be improved by robotic training.
Shareholder, stakeholder-owner or broad stakeholder maximization
Mygind, Niels
2004-01-01
With reference to the discussion about shareholder versus stakeholder maximization it is argued that the normal type of maximization is in fact stakeholder-owner maximization. This means maximization of the sum of the value of the shares and stakeholder benefits belonging to the dominating stakeholder-owner. Maximization of shareholder value is a special case of owner-maximization, and only under quite restrictive assumptions is shareholder maximization larger or equal to stakeholder-owner...
Dynamical generation of maximally entangled states in two identical cavities
International Nuclear Information System (INIS)
Alexanian, Moorad
2011-01-01
The generation of entanglement between two identical coupled cavities, each containing a single three-level atom, is studied when the cavities exchange two coherent photons and are in the N=2,4 manifolds, where N represents the maximum number of photons possible in either cavity. The atom-photon state of each cavity is described by a qutrit for N=2 and a five-dimensional qudit for N=4. However, the conservation of the total value of N for the interacting two-cavity system limits the total number of states to only 4 states for N=2 and 8 states for N=4, rather than the usual 9 for two qutrits and 25 for two five-dimensional qudits. In the N=2 manifold, two-qutrit states dynamically generate four maximally entangled Bell states from initially unentangled states. In the N=4 manifold, two-qudit states dynamically generate maximally entangled states involving three or four states. The generation of these maximally entangled states occurs rather rapidly for large hopping strengths. The cavities function as a storage of periodically generated maximally entangled states.
Value maximizing maintenance policies under general repair
International Nuclear Information System (INIS)
Marais, Karen B.
2013-01-01
One class of maintenance optimization problems considers the notion of general repair maintenance policies where systems are repaired or replaced on failure. In each case the optimality is based on minimizing the total maintenance cost of the system. These cost-centric optimizations ignore the value dimension of maintenance and can lead to maintenance strategies that do not maximize system value. This paper applies these ideas to the general repair optimization problem using a semi-Markov decision process, discounted cash flow techniques, and dynamic programming to identify the value-optimal actions for any given time and system condition. The impact of several parameters on maintenance strategy, such as operating cost and revenue, system failure characteristics, repair and replacement costs, and the planning time horizon, is explored. This approach provides a quantitative basis on which to base maintenance strategy decisions that contribute to system value. These decisions are different from those suggested by traditional cost-based approaches. The results show (1) how the optimal action for a given time and condition changes as replacement and repair costs change, and identifies the point at which these costs become too high for profitable system operation; (2) that for shorter planning horizons it is better to repair, since there is no time to reap the benefits of increased operating profit and reliability; (3) how the value-optimal maintenance policy is affected by the system's failure characteristics, and hence whether it is worthwhile to invest in higher reliability; and (4) the impact of the repair level on the optimal maintenance policy. -- Highlights: •Provides a quantitative basis for maintenance strategy decisions that contribute to system value. •Shows how the optimal action for a given condition changes as replacement and repair costs change. •Shows how the optimal policy is affected by the system's failure characteristics. •Shows when it is
Vacua of maximal gauged D=3 supergravities
International Nuclear Information System (INIS)
Fischbacher, T; Nicolai, H; Samtleben, H
2002-01-01
We analyse the scalar potentials of maximal gauged three-dimensional supergravities which reveal a surprisingly rich structure. In contrast to maximal supergravities in dimensions D≥4, all these theories possess a maximally supersymmetric (N=16) ground state with negative cosmological constant Λ < 0, except for the SO(4,4)² gauged theory, whose maximally supersymmetric ground state has Λ = 0. We compute the mass spectra of bosonic and fermionic fluctuations around these vacua and identify the unitary irreducible representations of the relevant background (super)isometry groups to which they belong. In addition, we find several stationary points which are not maximally supersymmetric, and determine their complete mass spectra as well. In particular, we show that there are analogues of all stationary points found in higher dimensions, among them are de Sitter (dS) vacua in the theories with noncompact gauge groups SO(5,3)² and SO(4,4)², as well as anti-de Sitter (AdS) vacua in the compact gauged theory preserving 1/4 and 1/8 of the supersymmetries. All the dS vacua have tachyonic instabilities, whereas there do exist nonsupersymmetric AdS vacua which are stable, again in contrast to the D≥4 theories
Minimal and Maximal Operator Space Structures on Banach Spaces
P., Vinod Kumar; Balasubramani, M. S.
2014-01-01
Given a Banach space $X$, there are many operator space structures possible on $X$, which all have $X$ as their first matrix level. Blecher and Paulsen identified two extreme operator space structures on $X$, namely $Min(X)$ and $Max(X)$, which represent, respectively, the smallest and the largest operator space structures admissible on $X$. In this note, we consider the subspace and the quotient space structure of minimal and maximal operator spaces.
Statistical reconstruction for cosmic ray muon tomography.
Schultz, Larry J; Blanpied, Gary S; Borozdin, Konstantin N; Fraser, Andrew M; Hengartner, Nicolas W; Klimenko, Alexei V; Morris, Christopher L; Orum, Chris; Sossong, Michael J
2007-08-01
Highly penetrating cosmic ray muons constantly shower the earth at a rate of about 1 muon per cm² per minute. We have developed a technique which exploits the multiple Coulomb scattering of these particles to perform nondestructive inspection without the use of artificial radiation. In prior work [1]-[3], we have described heuristic methods for processing muon data to create reconstructed images. In this paper, we present a maximum likelihood/expectation maximization tomographic reconstruction algorithm designed for the technique. This algorithm borrows much from techniques used in medical imaging, particularly emission tomography, but the statistics of muon scattering dictates differences. We describe the statistical model for multiple scattering, derive the reconstruction algorithm, and present simulated examples. We also propose methods to improve the robustness of the algorithm to experimental errors and events departing from the statistical model.
Robust statistical reconstruction for charged particle tomography
Schultz, Larry Joe; Klimenko, Alexei Vasilievich; Fraser, Andrew Mcleod; Morris, Christopher; Orum, John Christopher; Borozdin, Konstantin N; Sossong, Michael James; Hengartner, Nicolas W
2013-10-08
Systems and methods for charged particle detection including statistical reconstruction of object volume scattering density profiles from charged particle tomographic data to determine the probability distribution of charged particle scattering using a statistical multiple scattering model and to determine a substantially maximum likelihood estimate of object volume scattering density using an expectation maximization (ML/EM) algorithm to reconstruct the object volume scattering density. The presence of and/or type of object occupying the volume of interest can be identified from the reconstructed volume scattering density profile. The charged particle tomographic data can be cosmic ray muon tomographic data from a muon tracker for scanning packages, containers, vehicles or cargo. The method can be implemented using a computer program which is executable on a computer.
Utility Maximization in Nonconvex Wireless Systems
Brehmer, Johannes
2012-01-01
This monograph formulates a framework for modeling and solving utility maximization problems in nonconvex wireless systems. First, a model for utility optimization in wireless systems is defined. The model is general enough to encompass a wide array of system configurations and performance objectives. Based on the general model, a set of methods for solving utility maximization problems is developed. The development is based on a careful examination of the properties that are required for the application of each method. The focus is on problems whose initial formulation does not allow for a solution by standard convex methods. Solution approaches that take into account the nonconvexities inherent to wireless systems are discussed in detail. The monograph concludes with two case studies that demonstrate the application of the proposed framework to utility maximization in multi-antenna broadcast channels.
Maximizing band gaps in plate structures
DEFF Research Database (Denmark)
Halkjær, Søren; Sigmund, Ole; Jensen, Jakob Søndergaard
2006-01-01
Band gaps, i.e., frequency ranges in which waves cannot propagate, can be found in elastic structures for which there is a certain periodic modulation of the material properties or structure. In this paper, we maximize the band gap size for bending waves in a Mindlin plate. We analyze an infinite periodic plate using Bloch theory, which conveniently reduces the maximization problem to that of a single base cell. Secondly, we construct a finite periodic plate using a number of the optimized base cells in a postprocessed version. The dynamic properties of the finite plate are investigated theoretically and experimentally and the issue of finite size effects is addressed.
Singularity Structure of Maximally Supersymmetric Scattering Amplitudes
DEFF Research Database (Denmark)
Arkani-Hamed, Nima; Bourjaily, Jacob L.; Cachazo, Freddy
2014-01-01
We present evidence that loop amplitudes in maximally supersymmetric (N=4) Yang-Mills theory (SYM) beyond the planar limit share some of the remarkable structures of the planar theory. In particular, we show that through two loops, the four-particle amplitude in full N=4 SYM has only logarithmic singularities and is free of any poles at infinity, properties closely related to uniform transcendentality and the UV finiteness of the theory. We also briefly comment on implications for maximal (N=8) supergravity theory (SUGRA).
Learning curves for mutual information maximization
International Nuclear Information System (INIS)
Urbanczik, R.
2003-01-01
An unsupervised learning procedure based on maximizing the mutual information between the outputs of two networks receiving different but statistically dependent inputs is analyzed [S. Becker and G. Hinton, Nature (London) 355, 161 (1992)]. For a generic data model, I show that in the large sample limit the structure in the data is recognized by mutual information maximization. For a more restricted model, where the networks are similar to perceptrons, I calculate the learning curves for zero-temperature Gibbs learning. These show that convergence can be rather slow, and a way of regularizing the procedure is considered
Finding Maximal Pairs with Bounded Gap
DEFF Research Database (Denmark)
Brodal, Gerth Stølting; Lyngsø, Rune B.; Pedersen, Christian N. S.
1999-01-01
In this paper we present methods for finding all maximal pairs under various constraints on the gap. In a string of length n we can find all maximal pairs with gap in an upper and lower bounded interval in time O(n log n + z), where z is the number of reported pairs. If the upper bound is removed, the time reduces to O(n + z). Since a tandem repeat is a pair where the gap is zero, our methods can be seen as a generalization of finding tandem repeats. The running time of our methods equals the running time of well known methods for finding tandem repeats.
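For intuition, maximal pairs can be enumerated by brute force; the sketch below checks left- and right-maximality directly and filters by the gap bounds. It is purely illustrative and makes no attempt at the O(n log n + z) bounds of the paper.

```python
def maximal_pairs(s, gap_min, gap_max):
    """Enumerate all maximal pairs (i, j, length) in s whose gap
    j - (i + length) lies in [gap_min, gap_max].  A pair is maximal
    when the matching substrings cannot be extended left or right."""
    n, out = len(s), []
    for i in range(n):
        for j in range(i + 1, n):
            if s[i] != s[j]:
                continue
            # Left-maximal: the match must not extend to the left.
            if i > 0 and s[i - 1] == s[j - 1]:
                continue
            length = 0
            while j + length < n and s[i + length] == s[j + length]:
                length += 1
            # The loop stops at the first mismatch, so the pair is
            # right-maximal by construction.
            gap = j - (i + length)
            if gap_min <= gap <= gap_max:
                out.append((i, j, length))
    return out
```

Setting both gap bounds to zero reproduces exactly the tandem repeats mentioned in the abstract, e.g. the repeated "ab" in "abab".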
REGEN: Ancestral Genome Reconstruction for Bacteria
Directory of Open Access Journals (Sweden)
João C. Setubal
2012-07-01
Ancestral genome reconstruction can be understood as a phylogenetic study with more details than a traditional phylogenetic tree reconstruction. We present a new computational system called REGEN for ancestral bacterial genome reconstruction at both the gene and replicon levels. REGEN reconstructs gene content, contiguous gene runs, and replicon structure for each ancestral genome. Along each branch of the phylogenetic tree, REGEN infers evolutionary events, including gene creation and deletion and replicon fission and fusion. The reconstruction can be performed by either a maximum parsimony or a maximum likelihood method. Gene content reconstruction is based on the concept of neighboring gene pairs. REGEN was designed to be used with any set of genomes that are sufficiently related, which will usually be the case for bacteria within the same taxonomic order. We evaluated REGEN using simulated genomes and genomes in the Rhizobiales order.
REGEN: Ancestral Genome Reconstruction for Bacteria.
Yang, Kuan; Heath, Lenwood S; Setubal, João C
2012-07-18
Energy Technology Data Exchange (ETDEWEB)
Saha, Krishnendu [Ohio Medical Physics Consulting, Dublin, Ohio 43017 (United States); Straus, Kenneth J.; Glick, Stephen J. [Department of Radiology, University of Massachusetts Medical School, Worcester, Massachusetts 01655 (United States); Chen, Yu. [Department of Radiation Oncology, Columbia University, New York, New York 10032 (United States)
2014-08-28
To maximize sensitivity, it is desirable that ring Positron Emission Tomography (PET) systems dedicated for imaging the breast have a small bore. Unfortunately, due to parallax error this causes substantial degradation in spatial resolution for objects near the periphery of the breast. In this work, a framework for computing and incorporating an accurate system matrix into iterative reconstruction is presented in an effort to reduce spatial resolution degradation towards the periphery of the breast. The GATE Monte Carlo Simulation software was utilized to accurately model the system matrix for a breast PET system. A strategy for increasing the count statistics in the system matrix computation and for reducing the system element storage space was used by calculating only a subset of matrix elements and then estimating the rest of the elements by using the geometric symmetry of the cylindrical scanner. To implement this strategy, polar voxel basis functions were used to represent the object, resulting in a block-circulant system matrix. Simulation studies using a breast PET scanner model with ring geometry demonstrated improved contrast at 45% reduced noise level and 1.5 to 3 times resolution performance improvement when compared to MLEM reconstruction using a simple line-integral model. The GATE based system matrix reconstruction technique promises to improve resolution and noise performance and reduce image distortion at FOV periphery compared to line-integral based system matrix reconstruction.
Solutions for autonomy and reconstruction
Energy Technology Data Exchange (ETDEWEB)
Wilming, Wilhelm
2011-07-01
Stand-alone systems, whether solar home or pico solar systems, have reached a cost level at which they are an increasingly interesting option for wide-area development in grid-remote regions or for reconstruction where the previous grid infrastructure has been destroyed. (orig.)
Lee, Young Sub; Kim, Jin Su; Kim, Kyeong Min; Kang, Joo Hyun; Lim, Sang Moo; Kim, Hee-Joung
2014-05-01
The Siemens Biograph TruePoint TrueV (B-TPTV) positron emission tomography (PET) scanner performs 3D PET reconstruction using a system matrix with point spread function (PSF) modeling (called the True X reconstruction). PET resolution was dramatically improved with the True X method. In this study, we assessed the spatial resolution and image quality on a B-TPTV PET scanner. In addition, we assessed the feasibility of animal imaging with a B-TPTV PET and compared it with a microPET R4 scanner. Spatial resolution was measured at the center and at 8 cm offset from the center in the transverse plane with warm background activity. True X, ordered subset expectation maximization (OSEM) without PSF modeling, and filtered back-projection (FBP) reconstruction methods were used. Percent contrast (% contrast) and percent background variability (% BV) were assessed according to NEMA NU2-2007. The recovery coefficient (RC), non-uniformity, spill-over ratio (SOR), and PET imaging of the Micro Deluxe Phantom were assessed to compare image quality of B-TPTV PET with that of the microPET R4. When True X reconstruction was used, spatial resolution improved. The RC with True X reconstruction was higher than that with the FBP method and the OSEM without PSF modeling method on the microPET R4. The non-uniformity with True X reconstruction was higher than that with FBP and OSEM without PSF modeling on the microPET R4. The SOR with True X reconstruction was better than that with FBP or OSEM without PSF modeling on the microPET R4. This study assessed the performance of the True X reconstruction. Spatial resolution with True X reconstruction was improved by 45%, and its % contrast was significantly improved compared to that with the conventional OSEM without PSF modeling reconstruction algorithm. The noise level was higher than that with the other reconstruction algorithms. Therefore, True X reconstruction should be used with caution when quantifying PET data.
Breast reconstruction - natural tissue
... flap; TRAM; Latissimus muscle flap with a breast implant; DIEP flap; DIEAP flap; Gluteal free flap; Transverse upper gracilis flap; TUG; Mastectomy - breast reconstruction with natural tissue; Breast cancer - breast reconstruction with natural tissue
Breast reconstruction after mastectomy
Directory of Open Access Journals (Sweden)
Daniel eSchmauss
2016-01-01
Breast cancer is the leading cause of cancer death in women worldwide. Its surgical approach has become less and less mutilating in the last decades. However, the overall number of breast reconstructions has significantly increased lately. Nowadays breast reconstruction should be individualized, first of all taking into consideration oncological aspects of the tumor, neo-/adjuvant treatment and genetic predisposition, but also its timing (immediate versus delayed breast reconstruction), as well as the patient's condition and wishes. This article gives an overview of the various possibilities of breast reconstruction, including implant- and expander-based reconstruction, flap-based reconstruction (vascularized autologous tissue), the combination of implant and flap, reconstruction using non-vascularized autologous fat, as well as refinement surgery after breast reconstruction.
Maximizing the Range of a Projectile.
Brown, Ronald A.
1992-01-01
Discusses solutions to the problem of maximizing the range of a projectile. Presents three references that solve the problem with and without the use of calculus. Offers a fourth solution suitable for introductory physics courses that relies more on trigonometry and the geometry of the problem. (MDH)
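The trigonometric core of such solutions is compact enough to check numerically: on level ground the range is R = v² sin(2θ)/g, and since sin(2θ) peaks at 2θ = 90°, the optimum launch angle is 45°. The sketch below states the formula and confirms the optimum by a scan over integer angles.

```python
import math

def projectile_range(v, theta_deg, g=9.81):
    """Level-ground range R = v^2 sin(2*theta) / g (no air resistance)."""
    return v * v * math.sin(2.0 * math.radians(theta_deg)) / g

# Scanning integer launch angles recovers the textbook 45-degree optimum.
best_angle = max(range(1, 90), key=lambda t: projectile_range(10.0, t))
```

The same scan makes the symmetry visible: angles θ and 90° − θ give equal ranges, which is why the maximum sits exactly between them.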
Robust Utility Maximization Under Convex Portfolio Constraints
International Nuclear Information System (INIS)
Matoussi, Anis; Mezghani, Hanen; Mnif, Mohamed
2015-01-01
We study a robust maximization problem from terminal wealth and consumption under convex constraints on the portfolio. We state the existence and the uniqueness of the consumption–investment strategy by studying the associated quadratic backward stochastic differential equation. We characterize the optimal control by using the duality method and deriving a dynamic maximum principle.
Ehrenfest's Lottery--Time and Entropy Maximization
Ashbaugh, Henry S.
2010-01-01
Successful teaching of the Second Law of Thermodynamics suffers from limited simple examples linking equilibrium to entropy maximization. I describe a thought experiment connecting entropy to a lottery that mixes marbles amongst a collection of urns. This mixing obeys diffusion-like dynamics. Equilibrium is achieved when the marble distribution is…
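The urn dynamics behind such a lottery are easy to simulate. The toy below (two urns, one uniformly chosen marble moved per step; these details are assumptions of the sketch, not taken from the article) shows the count relaxing from an all-in-one-urn start toward the entropy-maximizing fifty-fifty split.

```python
import random

def ehrenfest(n_marbles=100, steps=20000, seed=1):
    """Two-urn Ehrenfest lottery: each step, one marble chosen uniformly
    at random moves to the other urn.  Starting with every marble in
    urn A, the count in A relaxes to the binomial equilibrium around
    n/2, the distribution that maximizes the mixing entropy."""
    rng = random.Random(seed)
    in_a = n_marbles                # start far from equilibrium
    history = []
    for _ in range(steps):
        if rng.random() < in_a / n_marbles:   # chosen marble was in A
            in_a -= 1
        else:
            in_a += 1
        history.append(in_a)
    return history
```

A time average over the tail of the run sits near n/2, while the early steps fall steeply away from the initial state, which is the diffusion-like relaxation to equilibrium the thought experiment illustrates.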
Reserve design to maximize species persistence
Robert G. Haight; Laurel E. Travis
2008-01-01
We develop a reserve design strategy to maximize the probability of species persistence predicted by a stochastic, individual-based, metapopulation model. Because the population model does not fit exact optimization procedures, our strategy involves deriving promising solutions from theory, obtaining promising solutions from a simulation optimization heuristic, and...
Maximal indecomposable past sets and event horizons
International Nuclear Information System (INIS)
Krolak, A.
1984-01-01
The existence of maximal indecomposable past sets MIPs is demonstrated using the Kuratowski-Zorn lemma. A criterion for the existence of an absolute event horizon in space-time is given in terms of MIPs and a relation to black hole event horizon is shown. (author)
Maximization of eigenvalues using topology optimization
DEFF Research Database (Denmark)
Pedersen, Niels Leergaard
2000-01-01
to localized modes in low density areas. The topology optimization problem is formulated using the SIMP method. Special attention is paid to a numerical method for removing localized eigenmodes in low density areas. The method is applied to numerical examples of maximizing the first eigenfrequency. One example...
Maximizing Resource Utilization in Video Streaming Systems
Alsmirat, Mohammad Abdullah
2013-01-01
Video streaming has recently grown dramatically in popularity over the Internet, Cable TV, and wireless networks. Because of the resource-demanding nature of video streaming applications, maximizing resource utilization in any video streaming system is a key factor to increase the scalability and decrease the cost of the system. Resources to…
A THEORY OF MAXIMIZING SENSORY INFORMATION
Hateren, J.H. van
1992-01-01
A theory is developed on the assumption that early sensory processing aims at maximizing the information rate in the channels connecting the sensory system to more central parts of the brain, where it is assumed that these channels are noisy and have a limited dynamic range. Given a stimulus power
Maximizing scientific knowledge from randomized clinical trials
DEFF Research Database (Denmark)
Gustafsson, Finn; Atar, Dan; Pitt, Bertram
2010-01-01
Trialists have an ethical and financial responsibility to plan and conduct clinical trials in a manner that will maximize the scientific knowledge gained from the trial. However, the amount of scientific information generated by randomized clinical trials in cardiovascular medicine is highly vari...
A Model of College Tuition Maximization
Bosshardt, Donald I.; Lichtenstein, Larry; Zaporowski, Mark P.
2009-01-01
This paper develops a series of models for optimal tuition pricing for private colleges and universities. The university is assumed to be a profit maximizing, price discriminating monopolist. The enrollment decision of students is stochastic in nature. The university offers an effective tuition rate, comprised of stipulated tuition less financial…
Logit Analysis for Profit Maximizing Loan Classification
Watt, David L.; Mortensen, Timothy L.; Leistritz, F. Larry
1988-01-01
Lending criteria and loan classification methods are developed. Rating system breaking points are analyzed to present a method to maximize loan revenues. Financial characteristics of farmers are used as determinants of delinquency in a multivariate logistic model. Results indicate that debt-to-asset and operating ratios are most indicative of default.
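The break-even logic behind a profit-maximizing classification cutoff can be written down directly. Every number below (logit coefficients, margin, loss rate) is hypothetical, standing in for estimates from borrower data; only the structure, a logit delinquency score thresholded where expected margin equals expected loss, reflects the approach described.

```python
import math

def default_prob(debt_to_asset, operating_ratio, b0=-4.0, b1=3.5, b2=2.0):
    """Logit model of delinquency.  The coefficients are hypothetical
    placeholders for estimates fitted to borrower financials."""
    z = b0 + b1 * debt_to_asset + b2 * operating_ratio
    return 1.0 / (1.0 + math.exp(-z))

def classify_for_profit(p_default, margin=0.08, loss_given_default=0.35):
    """Approve a loan when expected revenue exceeds expected loss,
    i.e. (1 - p) * margin > p * loss.  The break-even probability
    margin / (margin + loss) is the revenue-maximizing cutoff."""
    cutoff = margin / (margin + loss_given_default)
    return p_default < cutoff
```

Moving the rating-system breaking point away from this break-even probability in either direction forgoes revenue, either by rejecting profitable borrowers or by accepting loss-making ones.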
Developing maximal neuromuscular power: Part 1--biological basis of maximal power production.
Cormie, Prue; McGuigan, Michael R; Newton, Robert U
2011-01-01
This series of reviews focuses on the most important neuromuscular function in many sport performances, the ability to generate maximal muscular power. Part 1 focuses on the factors that affect maximal power production, while part 2, which will follow in a forthcoming edition of Sports Medicine, explores the practical application of these findings by reviewing the scientific literature relevant to the development of training programmes that most effectively enhance maximal power production. The ability of the neuromuscular system to generate maximal power is affected by a range of interrelated factors. Maximal muscular power is defined and limited by the force-velocity relationship and affected by the length-tension relationship. The ability to generate maximal power is influenced by the type of muscle action involved and, in particular, the time available to develop force, storage and utilization of elastic energy, interactions of contractile and elastic elements, potentiation of contractile and elastic filaments as well as stretch reflexes. Furthermore, maximal power production is influenced by morphological factors including fibre type contribution to whole muscle area, muscle architectural features and tendon properties as well as neural factors including motor unit recruitment, firing frequency, synchronization and inter-muscular coordination. In addition, acute changes in the muscle environment (i.e. alterations resulting from fatigue, changes in hormone milieu and muscle temperature) impact the ability to generate maximal power. Resistance training has been shown to impact each of these neuromuscular factors in quite specific ways. Therefore, an understanding of the biological basis of maximal power production is essential for developing training programmes that effectively enhance maximal power production in the human.
Bioengineered human IAS reconstructs with functional and molecular properties similar to intact IAS
Singh, Jagmohan
2012-01-01
Because of its critical importance in rectoanal incontinence, we determined the feasibility of reconstructing the internal anal sphincter (IAS) from human IAS smooth muscle cells (SMCs) with functional and molecular attributes similar to the intact sphincter. The reconstructs were developed using SMCs from the circular smooth muscle layer of the human IAS, grown in smooth muscle differentiation media under sterile conditions in Sylgard-coated tissue culture plates with central Sylgard posts. The basal tone in the reconstructs and its changes were recorded following 0 Ca2+, KCl, bethanechol, isoproterenol, the protein kinase C (PKC) activator phorbol 12,13-dibutyrate, and the Rho kinase (ROCK) and PKC inhibitors Y-27632 and Gö-6850, respectively. Western blot (WB), immunofluorescence (IF), and immunocytochemical (IC) analyses were also performed. The reconstructs developed spontaneous tone (0.68 ± 0.26 mN). Bethanechol (a muscarinic agonist) and K+ depolarization produced contraction, whereas isoproterenol (a β-adrenoceptor agonist) and Y-27632 produced a concentration-dependent decrease in the tone. The maximal decrease in basal tone with Y-27632 and Gö-6850 (each 10⁻⁵ M) was 80.45 ± 3.29 and 17.76 ± 3.50%, respectively. WB data with the IAS constructs' SMCs revealed higher levels of RhoA/ROCK, protein kinase C-potentiated inhibitor or inhibitory phosphoprotein for myosin phosphatase (CPI-17), phospho-CPI-17, MYPT1, and 20-kDa myosin light chain vs. rectal smooth muscle. WB, IF, and IC studies of the original SMCs and of SMCs redispersed from the reconstructs, examining the relative distribution of different signal transduction proteins, confirmed the feasibility of reconstruction of IAS with functional properties similar to the intact IAS and demonstrated the development of myogenic tone with critical dependence on RhoA/ROCK. We conclude that it is feasible to bioengineer IAS constructs using human IAS SMCs that behave like the intact IAS. PMID:22790596
Understanding Violations of Gricean Maxims in Preschoolers and Adults
Directory of Open Access Journals (Sweden)
Mako Okanda
2015-07-01
This study used a revised Conversational Violations Test to examine Gricean maxim violations in 4- to 6-year-old Japanese children and adults. Participants' understanding of the following maxims was assessed: be informative (first maxim of quantity), avoid redundancy (second maxim of quantity), be truthful (maxim of quality), be relevant (maxim of relation), avoid ambiguity (second maxim of manner), and be polite (maxim of politeness). Sensitivity to violations of Gricean maxims increased with age: 4-year-olds' understanding of maxims was near chance, 5-year-olds understood some maxims (the first maxim of quantity and the maxims of quality, relation, and manner), and 6-year-olds and adults understood all maxims. Preschoolers acquired the maxim of relation first and had the greatest difficulty understanding the second maxim of quantity. Children and adults differed in their comprehension of the maxim of politeness. The development of the pragmatic understanding of Gricean maxims and implications for the construction of developmental tasks from early childhood to adulthood are discussed.
Distortion of maximal elevator activity by unilateral premature tooth contact
DEFF Research Database (Denmark)
Bakke, Merete; Møller, Eigild
1980-01-01
In four subjects the electrical activity in the anterior and posterior temporal and masseter muscles during maximal bite was recorded bilaterally with and without premature unilateral contact. Muscle activity was measured as the average level and the peak of the mean voltage with layers of strips of 0.05, 0.10, 0.15 and 2.0 mm, placed between first molars either on the left or the right side, and compared with the level of activity with undisturbed occlusion. Unilateral premature contact caused a significant asymmetry of action in all muscles under study, with stronger activity ipsilaterally...
Growth-Maximizing Public Debt under Changing Demographics
DEFF Research Database (Denmark)
Bokan, Nikola; Hougaard Jensen, Svend E.; Hallett, Andrew Hughes
2016-01-01
This paper develops an overlapping-generations model to study the growth-maximizing level of public debt under conditions of demographic change. It is shown that the optimal debt level depends on a positive marginal productivity of public capital. In general, it also depends on the demographic parameters. ... will have to adjust its fiscal plans to accommodate those changes, most likely downward, if growth is to be preserved. An advantage of this model is that it allows us to determine in advance the way in which fiscal policies need to adjust as demographic parameters change.
Energy Technology Data Exchange (ETDEWEB)
Chouvelon, T.; Caurant, F.; Mendez Fernandez, P.; Bustamante, P. [Littoral Environnement et Societes, La Rochelle (France); Spitz, J. [Centre de Recherche sur les Mammiferes Marins, La Rochelle (France)
2013-07-15
Assessing species' trophic level is one key aspect of ecosystem models, providing an indicator to monitor trophic links and ecosystem changes. The stable isotope ratios (SIR) of carbon and nitrogen provide longer term information on the average diet of consumers than the traditional stomach content method. However, using SIR in predators implies a good knowledge of the factors influencing prey species signature and lower trophic levels themselves, such as spatial and temporal variations. In this study, 129 species belonging to several taxa (i.e. crustaceans, molluscs, fish, marine mammals) from the Bay of Biscay were analysed for their isotopic signatures. Results confirmed the existence of several trophic food webs with probable different baseline signatures in this area, an essential consideration when using the isotopic tool for calculating species trophic level and potential evolution in space and time. Results demonstrated a spatial gradient from the shoreline to the oceanic domain for both carbon and nitrogen. (author)
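Nitrogen isotope data of this kind are commonly converted to a trophic-level estimate with the standard baseline formula TL = λ + (δ¹⁵N_consumer − δ¹⁵N_baseline)/TEF. The sketch below assumes the conventional enrichment factor of ~3.4‰ per trophic level and a baseline at level 2; the example values are illustrative, not numbers from this study.

```python
def trophic_level(d15n_consumer, d15n_baseline, tef=3.4, base_tl=2.0):
    """Trophic level from nitrogen stable isotope ratios (per-mil values).

    Assumes the baseline organism (e.g. a primary consumer) sits at trophic
    level `base_tl` and that d15N is enriched by `tef` per trophic level.
    """
    return base_tl + (d15n_consumer - d15n_baseline) / tef

# Hypothetical predator sitting two full levels above the baseline:
print(trophic_level(13.6, 6.8))  # -> 4.0
```

This is precisely where baseline variability matters: the same consumer δ¹⁵N yields different trophic levels if the food webs feeding into it have different baseline signatures, as the abstract notes.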
Lambeck, Kurt; Purcell, Anthony; Flemming, Nicholas. C.; Vita-Finzi, Claudio; Alsharekh, Abdullah M.; Bailey, Geoffrey N.
2011-12-01
The history of sea level within the Red Sea basin impinges on several areas of research. For archaeology and prehistory, past sea levels of the southern sector define possible pathways of human dispersal out of Africa. For tectonics, the interglacial sea levels provide estimates of rates for vertical tectonics. For global sea level studies, the Red Sea sediments contain a significant record of changing water chemistry with implications on the mass exchange between oceans and ice sheets during glacial cycles. And, because of its geometry and location, the Red Sea provides a test laboratory for models of glacio-hydro-isostasy. The Red Sea margins contain incomplete records of sea level for the Late Holocene, for the Last Glacial Maximum, for the Last Interglacial and for earlier interglacials. These are usually interpreted in terms of tectonics and ocean volume changes but it is shown here that the glacio-hydro-isostatic process is an additional important component with characteristic spatial variability. Through an iterative analysis of the Holocene and interglacial evidence a separation of the tectonic, isostatic and eustatic contributions is possible and we present a predictive model for palaeo-shorelines and water depths for a time interval encompassing the period proposed for migrations of modern humans out of Africa. Principal conclusions include the following. (i) Late Holocene sea level signals evolve along the length of the Red Sea, with characteristic mid-Holocene highstands not developing in the central part. (ii) Last Interglacial sea level signals are also location dependent and, in the absence of tectonics, are not predicted to occur more than 1-2 m above present sea level. (iii) For both periods, Red Sea levels at 'expected far-field' elevations are not necessarily indicative of tectonic stability and the evidence points to a long-wavelength tectonic uplift component along both the African and Arabian northern and central sides of the Red Sea. (iv) The
Track reconstruction in CMS high luminosity environment
Goetzmann, Christophe
2014-01-01
The CMS tracker is the largest silicon detector ever built, covering 200 square meters and providing an average of 14 high-precision measurements per track. Tracking is essential for the reconstruction of objects like jets, muons, electrons and tau leptons starting from the raw data from the silicon pixel and strip detectors. Track reconstruction is widely used also at trigger level, as it improves object tagging and resolution. The CMS tracking code is organized in several levels, known as iterative steps, each optimized to reconstruct a class of particle trajectories, such as those of particles originating from the primary vertex or displaced tracks from particles resulting from secondary vertices. Each iterative step consists of seeding, pattern recognition and fitting by a Kalman filter, and a final filtering and cleaning. Each subsequent step works on hits not yet associated to a reconstructed particle trajectory. The CMS tracking code is continuously evolving to make the reconstruction computing load compat...
Soelen, E.E. van; Lammertsma, E.I.; Cremer, H.; Donders, T.H.; Sangiorgi, F.; Brooks, G.R.; Larson, R.A.; Sinninghe Damsté, J.S.; Wagner-Cremer, F.; Reichart, G.J.
2010-01-01
A suite of organic geochemical, micropaleontological and palynological proxies was applied to sediments from Southwest Florida, to study the Holocene environmental changes associated with sea-level rise. Sediments were recovered from Hillsborough Bay, part of Tampa Bay, and studied using biomarkers,
Org.Lcsim: Event Reconstruction in Java
International Nuclear Information System (INIS)
Graf, Norman
2011-01-01
Maximizing the physics performance of detectors being designed for the International Linear Collider, while remaining sensitive to cost constraints, requires a powerful, efficient, and flexible simulation, reconstruction and analysis environment to study the capabilities of a large number of different detector designs. The preparation of Letters Of Intent for the International Linear Collider involved the detailed study of dozens of detector options, layouts and readout technologies; the final physics benchmarking studies required the reconstruction and analysis of hundreds of millions of events. We describe the Java-based software toolkit (org.lcsim) which was used for full event reconstruction and analysis. The components are fully modular and are available for tasks from digitization of tracking detector signals through to cluster finding, pattern recognition, track-fitting, calorimeter clustering, individual particle reconstruction, jet-finding, and analysis. The detector is defined by the same xml input files used for the detector response simulation, ensuring the simulation and reconstruction geometries are always commensurate by construction. We discuss the architecture as well as the performance.
DEFF Research Database (Denmark)
Emerich Souza, Priscila; Kroon, Aart; Nielsen, Lars
2018-01-01
Detailed topographical data and high-resolution ground-penetrating radar (GPR) reflection data are presented from the present-day beach and across successive raised beach-ridges at Itilleq (Disko, West Greenland). In the western part of our study area, the present low-tide level is well-marked ... beach-ridge GPR profiles. Most of them are located at the boundary between a unit with reflection characteristics representing palaeo foreshore deposits, and a deeper and more complex radar unit characterized by diffractions, which, however, is not penetrated to large depths by the GPR signals. Based...
Milker, Yvonne; Horton, Benjamin P.; Khan, Nicole S.; Nelson, Alan R.; Witter, Robert C.; Engelhart, Simon E.; Ewald, Michael; Brophy, Laura; Bridgeland, William T.
2016-04-01
Stratigraphic sequences beneath salt marshes along the U.S. Pacific Northwest coast preserve 7000 years of plate-boundary earthquakes at the Cascadia subduction zone. The sequences record rapid rises in relative sea level during regional coseismic subsidence caused by great earthquakes and gradual falls in relative sea level during interseismic uplift between earthquakes. These relative sea-level changes are commonly quantified using foraminiferal transfer functions with the assumption that foraminifera rapidly recolonize salt marshes and adjacent tidal flats following coseismic subsidence. The restoration of tidal inundation in the Ni-les'tun unit (NM unit) of the Bandon Marsh National Wildlife Refuge (Oregon), following extensive dike removal in August 2011, allowed us to directly observe changes in foraminiferal assemblages that occur during rapid "coseismic" (simulated by dike removal with sudden tidal flooding) and "interseismic" (stabilization of the marsh following flooding) relative sea-level changes analogous to those of past earthquake cycles. We analyzed surface sediment samples from 10 tidal stations at the restoration site (NM unit) from mudflat to high marsh, and 10 unflooded stations in the Bandon Marsh control site. Samples were collected shortly before and at 1- to 6-month intervals for 3 years after tidal restoration of the NM unit. Although tide gauge and grain-size data show rapid restoration of tides during approximately the first 3 months after dike removal, recolonization of the NM unit by foraminifera was delayed at least 10 months. Re-establishment of typical tidal foraminiferal assemblages, as observed at the control site, required 31 months after tidal restoration, with Miliammina fusca being the dominant pioneering species. If typical of past recolonizations, this delayed foraminiferal recolonization affects the accuracy of coseismic subsidence estimates during past earthquakes because significant postseismic uplift may shortly follow
Refined reservoir description to maximize oil recovery
International Nuclear Information System (INIS)
Flewitt, W.E.
1975-01-01
To assure maximized oil recovery from older pools, reservoir description has been advanced by fully integrating original open-hole logs and the recently introduced interpretive techniques made available through cased-hole wireline saturation logs. A refined reservoir description utilizing normalized original wireline porosity logs has been completed in the Judy Creek Beaverhill Lake "A" Pool, a reefal carbonate pool with current potential productivity of 100,000 BOPD and 188 active wells. Continuous porosity was documented within a reef rim and cap, while discontinuous porous lenses characterized an interior lagoon. With the use of pulsed neutron logs and production data, a separate water front and pressure response was recognized within discrete environmental units. The refined reservoir description aided reservoir simulation model studies and the quantification of pool performance. A pattern water flood has now replaced the original peripheral bottom water drive to maximize oil recovery.
Maximal frustration as an immunological principle.
de Abreu, F Vistulo; Mostardinha, P
2009-03-06
A fundamental problem in immunology is that of understanding how the immune system selects promptly which cells to kill without harming the body. This problem poses an apparent paradox. Strong reactivity against pathogens seems incompatible with perfect tolerance towards self. We propose a different view on cellular reactivity to overcome this paradox: effector functions should be seen as the outcome of cellular decisions which can be in conflict with other cells' decisions. We argue that if cellular systems are frustrated, then extensive cross-reactivity among the elements in the system can decrease the reactivity of the system as a whole and induce perfect tolerance. Using numerical and mathematical analyses, we discuss two simple models that perform optimal pathogenic detection with no autoimmunity if cells are maximally frustrated. This study strongly suggests that a principle of maximal frustration could be used to build artificial immune systems. It would be interesting to test this principle in the real adaptive immune system.
EM for phylogenetic topology reconstruction on nonhomogeneous data.
Ibáñez-Marcelo, Esther; Casanellas, Marta
2014-06-17
The reconstruction of the phylogenetic tree topology of four taxa is, still nowadays, one of the main challenges in phylogenetics. Its difficulties lie in considering not too restrictive evolutionary models, and correctly dealing with the long-branch attraction problem. The correct reconstruction of 4-taxon trees is crucial for making quartet-based methods work and being able to recover large phylogenies. We adapt the well-known expectation-maximization algorithm to evolutionary Markov models on phylogenetic 4-taxon trees. We then use this algorithm to estimate the substitution parameters, compute the corresponding likelihood, and infer the most likely quartet. In this paper we consider an expectation-maximization method for maximizing the likelihood of (time-nonhomogeneous) evolutionary Markov models on trees. We study its success on reconstructing 4-taxon topologies and its performance as an input method in quartet-based phylogenetic reconstruction methods such as QFIT and QuartetSuite. Our results show that the method proposed here outperforms neighbor-joining and the usual (time-homogeneous continuous-time) maximum likelihood methods on 4-leaved trees with among-lineage instantaneous rate heterogeneity, and performs similarly to the usual continuous-time maximum likelihood method when the data satisfy the assumptions of both methods. The method presented in this paper is well suited for reconstructing the topology of any number of taxa via quartet-based methods and is highly accurate, especially regarding largely divergent trees and time-nonhomogeneous data.
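As context for the quartet problem the paper tackles, here is a minimal baseline (not the paper's EM method): Jukes-Cantor corrected distances plus the four-point condition, which selects the pairing with the smallest sum of intra-pair distances. The sequences and taxon names are made up for illustration.

```python
import math
from itertools import combinations

def jc_distance(s1, s2):
    """Jukes-Cantor corrected distance between two aligned DNA sequences."""
    p = sum(a != b for a, b in zip(s1, s2)) / len(s1)
    return -0.75 * math.log(1 - 4 * p / 3)

def best_quartet(seqs):
    """Pick the quartet split minimizing the intra-pair distance sum
    (four-point condition). `seqs` maps taxon name -> aligned sequence."""
    taxa = sorted(seqs)
    d = {frozenset(pair): jc_distance(seqs[pair[0]], seqs[pair[1]])
         for pair in combinations(taxa, 2)}
    t1, t2, t3, t4 = taxa
    splits = [((t1, t2), (t3, t4)), ((t1, t3), (t2, t4)), ((t1, t4), (t2, t3))]
    return min(splits, key=lambda s: d[frozenset(s[0])] + d[frozenset(s[1])])

seqs = {
    "t1": "ACGTACGTACGTACGTACGT",
    "t2": "ACGTACGTACGTACGTACGA",
    "t3": "ACGAACGTTCGTACCTACGT",
    "t4": "ACGAACGTTCGTACCTACGA",
}
print(best_quartet(seqs))  # groups t1 with t2 and t3 with t4
```

Distance baselines of this kind are exactly what the paper's EM-based quartet inference is benchmarked against; unlike them, the EM approach estimates substitution parameters under time-nonhomogeneous Markov models rather than relying on a distance correction.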
Derivative pricing based on local utility maximization
Jan Kallsen
2002-01-01
This paper discusses a new approach to contingent claim valuation in general incomplete market models. We determine the neutral derivative price which occurs if investors maximize their local utility and if derivative demand and supply are balanced. We also introduce the sensitivity process of a contingent claim. This process quantifies the reliability of the neutral derivative price and it can be used to construct price bounds. Moreover, it allows one to calibrate market models in order to be co...
Control of Shareholders’ Wealth Maximization in Nigeria
A. O. Oladipupo; C. O. Okafor
2014-01-01
This research focuses on who controls shareholders' wealth maximization and how this affects firm performance in publicly quoted non-financial companies in Nigeria. The shareholder fund was the dependent variable, while the explanatory variables were firm size (proxied by log of turnover), retained earnings (representing management control) and dividend payment (representing a measure of shareholders' control). The data used for this study were obtained from the Nigerian Stock Exchange [NSE] fact book an...
Definable maximal discrete sets in forcing extensions
DEFF Research Database (Denmark)
Törnquist, Asger Dag; Schrittesser, David
2018-01-01
Let R be a Σ¹₁ binary relation, and recall that a set A is R-discrete if no two elements of A are related by R. We show that in the Sacks and Miller forcing extensions of L there is a Δ¹₂ maximal R-discrete set. We use this to answer in the negative the main question posed in [5] by showing...
Dynamic Convex Duality in Constrained Utility Maximization
Li, Yusong; Zheng, Harry
2016-01-01
In this paper, we study a constrained utility maximization problem following the convex duality approach. After formulating the primal and dual problems, we construct the necessary and sufficient conditions for both the primal and dual problems in terms of FBSDEs plus additional conditions. Such formulation then allows us to explicitly characterize the primal optimal control as a function of the adjoint process coming from the dual FBSDEs in a dynamic fashion and vice versa. Moreover, we also...
Single maximal versus combination punch kinematics.
Piorkowski, Barry A; Lees, Adrian; Barton, Gabor J
2011-03-01
The aim of this study was to determine the influence of punch type (Jab, Cross, Lead Hook and Reverse Hook) and punch modality (Single maximal, 'In-synch' and 'Out of synch' combination) on punch speed and delivery time. Ten competition-standard volunteers performed punches with markers placed on their anatomical landmarks for 3D motion capture with an eight-camera optoelectronic system. Speed and duration between key moments were computed. There were significant differences in contact speed between punch types (F(2,18,84.87) = 105.76, p = 0.001), with Lead and Reverse Hooks developing greater speed than Jab and Cross. There were significant differences in contact speed between punch modalities (F(2,64,102.87) = 23.52, p = 0.001), with the Single maximal (M ± SD: 9.26 ± 2.09 m/s) higher than 'Out of synch' (7.49 ± 2.32 m/s), 'In-synch' left (8.01 ± 2.35 m/s) or right lead (7.97 ± 2.53 m/s). Delivery times were significantly lower for Jab and Cross than Hook. Times were significantly lower 'In-synch' than in a Single maximal or 'Out of synch' combination mode. It is concluded that a defender may have more evasion time than previously reported. This research could be of use to performers and coaches when considering training preparations.
Gradient Dynamics and Entropy Production Maximization
Janečka, Adam; Pavelka, Michal
2018-01-01
We compare two methods for modeling dissipative processes, namely gradient dynamics and entropy production maximization. Both methods require similar physical inputs-how energy (or entropy) is stored and how it is dissipated. Gradient dynamics describes irreversible evolution by means of dissipation potential and entropy, it automatically satisfies Onsager reciprocal relations as well as their nonlinear generalization (Maxwell-Onsager relations), and it has statistical interpretation. Entropy production maximization is based on knowledge of free energy (or another thermodynamic potential) and entropy production. It also leads to the linear Onsager reciprocal relations and it has proven successful in thermodynamics of complex materials. Both methods are thermodynamically sound as they ensure approach to equilibrium, and we compare them and discuss their advantages and shortcomings. In particular, conditions under which the two approaches coincide and are capable of providing the same constitutive relations are identified. Besides, a commonly used but not often mentioned step in the entropy production maximization is pinpointed and the condition of incompressibility is incorporated into gradient dynamics.
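A compact way to state the relationship sketched above (the notation is assumed here for illustration, not drawn verbatim from the paper): gradient dynamics evolves the state x via a dissipation potential evaluated at the entropic conjugate variable, and the quadratic special case collapses to the linear Onsager form:

```latex
\dot{x} \;=\; \left.\frac{\partial \Xi^{*}}{\partial x^{*}}\right|_{x^{*} = \frac{\partial S}{\partial x}},
\qquad
\Xi^{*}(x, x^{*}) \;=\; \tfrac{1}{2}\, x^{*} \cdot M(x)\, x^{*}
\;\;\Longrightarrow\;\;
\dot{x} \;=\; M(x)\, \frac{\partial S}{\partial x},
```

with S the entropy and M(x) symmetric positive semi-definite, so that \(\dot{S} = \frac{\partial S}{\partial x} \cdot M(x) \frac{\partial S}{\partial x} \ge 0\) and the approach to equilibrium is guaranteed; nonquadratic choices of \(\Xi^{*}\) give the nonlinear generalizations mentioned in the abstract.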
LOAD THAT MAXIMIZES POWER OUTPUT IN COUNTERMOVEMENT JUMP
Directory of Open Access Journals (Sweden)
Pedro Jimenez-Reyes
2016-02-01
Introduction: One of the main problems faced by strength and conditioning coaches is how to objectively quantify and monitor the actual training load undertaken by athletes in order to maximize performance. It is well known that performance of explosive sports activities is largely determined by mechanical power. Objective: This study analysed the height at which maximal power output is generated and the corresponding load with which it is achieved in a group of trained male track and field athletes in the countermovement jump (CMJ) test with extra loads (CMJEL). Methods: Fifty national-level male athletes in sprinting and jumping performed a CMJ test with increasing loads up to a height of 16 cm. The relative load that maximized the mechanical power output (Pmax) was determined using a force platform synchronized with a linear encoder, estimating power by peak power, average power and flight time in the CMJ. Results: The load at which power output no longer increased corresponded to a height of 19.9 ± 2.35 cm, representing 99.1 ± 1% of the maximum power output. The load that maximizes power output was in all cases the load with which an athlete jumps a height of approximately 20 cm. Conclusion: These results highlight the importance of considering the height achieved in the CMJ with extra load instead of power, because maximum power is always attained at the same height. We advise preferential use of the height achieved in the CMJEL test, since it seems to be a valid indicator of an individual's actual neuromuscular potential, providing valid information for coaches and trainers when assessing the performance status of athletes and when quantifying and monitoring training loads, measuring only the height of the jump in the CMJEL exercise.
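Jump heights in such protocols are typically recovered from flight time under the standard ballistic assumption h = g·t²/8 (identical take-off and landing posture). The flight time below is an illustrative value chosen to land near the ~20 cm height discussed above, not study data.

```python
G = 9.81  # gravitational acceleration, m/s^2

def jump_height_from_flight_time(t_flight):
    """Jump height (m) from flight time (s), assuming symmetric ballistic
    flight: the body rises for t/2, so h = g * (t/2)**2 / 2 = g * t**2 / 8."""
    return G * t_flight ** 2 / 8.0

# A flight time of ~0.40 s corresponds to roughly the ~20 cm jump height
# at which maximal power output was located (illustrative value).
print(f"{jump_height_from_flight_time(0.40):.3f} m")  # -> 0.196 m
```

Monitoring only jump height with a given extra load, as the authors advise, is attractive precisely because this conversion needs nothing more than a timing mat or contact grid.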
The SRT reconstruction algorithm for semiquantification in PET imaging
Energy Technology Data Exchange (ETDEWEB)
Kastis, George A., E-mail: gkastis@academyofathens.gr [Research Center of Mathematics, Academy of Athens, Athens 11527 (Greece); Gaitanis, Anastasios [Biomedical Research Foundation of the Academy of Athens (BRFAA), Athens 11527 (Greece); Samartzis, Alexandros P. [Nuclear Medicine Department, Evangelismos General Hospital, Athens 10676 (Greece); Fokas, Athanasios S. [Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Cambridge CB30WA, United Kingdom and Research Center of Mathematics, Academy of Athens, Athens 11527 (Greece)
2015-10-15
Purpose: The spline reconstruction technique (SRT) is a new, fast algorithm based on a novel numerical implementation of an analytic representation of the inverse Radon transform. The mathematical details of this algorithm and comparisons with filtered backprojection were presented earlier in the literature. In this study, the authors present a comparison between SRT and the ordered-subsets expectation-maximization (OSEM) algorithm for determining contrast and semiquantitative indices of 18F-FDG uptake. Methods: The authors implemented SRT in the software for tomographic image reconstruction (STIR) open-source platform and evaluated this technique using simulated and real sinograms obtained from the GE Discovery ST positron emission tomography/computed tomography scanner. All simulations and reconstructions were performed in STIR. For OSEM, the authors used the clinical protocol of their scanner, namely, 21 subsets and two iterations. The authors also examined images at one, four, six, and ten iterations. For the simulation studies, the authors analyzed an image-quality phantom with cold and hot lesions. Two different versions of the phantom were employed at two different hot-sphere lesion-to-background ratios (LBRs), namely, 2:1 and 4:1. For each noiseless sinogram, 20 Poisson realizations were created at five different noise levels. In addition to making visual comparisons of the reconstructed images, the authors determined contrast and bias as a function of the background image roughness (IR). For the real-data studies, sinograms of an image-quality phantom simulating the human torso were employed. The authors determined contrast and LBR as a function of the background IR. Finally, the authors present plots of contrast as a function of IR after smoothing each reconstructed image with Gaussian filters of six different sizes. Statistical significance was determined by employing the Wilcoxon rank-sum test. Results: In both simulated and real studies, SRT
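The semiquantitative figures of merit compared in such studies are commonly defined as below. These are standard definitions assumed for illustration, and the ROI values are made up; the exact formulas used by the authors may differ in detail.

```python
import numpy as np

def roi_metrics(hot_roi, background_roi):
    """Common semiquantitative PET image-quality indices from two ROIs.

    contrast: relative excess of the hot-lesion ROI mean over background mean.
    IR (image roughness): coefficient of variation of the background ROI.
    LBR: lesion-to-background ratio of ROI means.
    """
    hot = np.asarray(hot_roi, dtype=float)
    bg = np.asarray(background_roi, dtype=float)
    contrast = (hot.mean() - bg.mean()) / bg.mean()
    ir = bg.std(ddof=1) / bg.mean()
    lbr = hot.mean() / bg.mean()
    return contrast, ir, lbr

# Illustrative ROI values for a nominal 4:1 lesion-to-background phantom:
c, ir, lbr = roi_metrics([4.1, 3.9, 4.0], [1.0, 1.1, 0.9])
print(f"contrast={c:.2f}, IR={ir:.2f}, LBR={lbr:.2f}")
```

Plotting contrast against background IR, as the authors do, amounts to sweeping the noise-resolution trade-off (iterations or filter width) and reading off which algorithm delivers more contrast at matched roughness.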
Wetter, Oliver; Tuttenuj, Daniel
2016-04-01
systematically analysed the period from 1446-1542 and could verify a large number of pre-instrumental flood events of the rivers Rhine, Birs, Birsig and Wiese in Basel. All in all, the weekly kept account books contained 54 Rhine flood events, whereas chroniclers and annalists only recorded seven floods during the same period. This is a ratio of almost eight to one. This large difference points to the significantly sharper "observation skills" of the account books towards smaller floods, which may be explained by the fact that bridges can be endangered by relatively small floods because of driftwood, whereas chroniclers and annalists predominantly focussed on spectacular (extreme) flood events. We [Oliver Wetter and Daniel Tuttenuj] are now able to present first preliminary results of reconstructed peak water levels and peak discharges of pre-instrumental river Aare, Emme, Limmat, Reuss, Rhine and Saane floods. These first results clearly show the strengths as well as the limits of the data and method used, depending mainly on the river types. Of the above-mentioned rivers, only the floods of river Emme could not be reconstructed, whereas the long-term development of peak water levels and peak discharges of the other rivers clearly correlates with major local and supra-regional Swiss flood corrections over time. PhD student Daniel Tuttenuj is going to present the results for river Emme and Saane (see Abstract Daniel Tuttenuj), whereas Dr Oliver Wetter is going to present the results for the other rivers and gives a first insight into long-term recurrence periods of smaller river Birs, Birsig, Rhine and Wiese flood events, based on the analysis of the weekly kept account books "Wochenausgabenbücher der Stadt Basel" (see also Abstract of Daniel Tuttenuj).
Tuttenuj, Daniel; Wetter, Oliver
2016-04-01
contained 54 Rhine flood events, whereas chroniclers and annalists only recorded seven floods during the same period. This is a ratio of almost eight to one. This large difference points to the significantly sharper "observation skills" of the account books towards smaller floods, which may be explained by the fact that bridges can be endangered by relatively small floods because of driftwood, whereas chroniclers and annalists are known to have focussed predominantly on spectacular (extreme) flood events. We [Oliver Wetter and Daniel Tuttenuj] are now able to present first preliminary results of reconstructed peak water levels and peak discharges of pre-instrumental river Aare, Emme, Limmat, Reuss, Rhine and Saane floods. These first results clearly show the strengths as well as the limits of the data and method used, depending mainly on the river types. Of the above-mentioned rivers, only the floods of river Emme could not be reconstructed, whereas the long-term development of peak water levels and peak discharges of the other rivers clearly correlates with major local and supra-regional Swiss flood corrections over time. PhD student Daniel Tuttenuj is going to present the results for river Emme and Saane, whereas Dr Oliver Wetter is going to present the results for the other rivers and gives a first insight into long-term recurrence periods of smaller river Birs, Birsig, Rhine and Wiese flood events based on the analysis of the weekly kept account books "Wochenausgabenbücher der Stadt Basel" (see Abstract Oliver Wetter).
International Nuclear Information System (INIS)
Tuna, U.; Johansson, J.; Ruotsalainen, U.
2014-01-01
The aim of the study was (1) to evaluate the reconstruction strategies with dynamic [11C]-raclopride human positron emission tomography (PET) studies acquired from the ECAT high-resolution research tomograph (HRRT) scanner and (2) to justify the selected gap-filling method for analytical reconstruction with simulated phantom data. A new transradial bicubic interpolation method has been implemented to enable faster analytical 3D-reprojection (3DRP) reconstructions for the ECAT HRRT PET scanner data. The transradial bicubic interpolation method was compared to the other gap-filling methods visually and quantitatively using the numerical Shepp-Logan phantom. The performance of the analytical 3DRP reconstruction method with this new gap-filling method was evaluated in comparison with the iterative statistical methods: ordinary Poisson ordered subsets expectation maximization (OPOSEM) and resolution-modeled OPOSEM. The image reconstruction strategies were evaluated using human data at different count statistics and consequently at different noise levels. In the assessments, 14 [11C]-raclopride dynamic PET studies (test-retest studies of 7 healthy subjects) acquired from the HRRT PET scanner were used. Besides the visual comparisons of the methods, we performed regional quantitative evaluations over the cerebellum, caudate and putamen structures. We compared the regional time-activity curves (TACs), areas under the TACs and binding potential (BPND) values. The results showed that the new gap-filling method preserves the linearity of the 3DRP method. Results with the 3DRP after gap-filling method exhibited hardly any dependency on the count statistics (noise levels) in the sinograms, while we observed changes in the quantitative results with the EM-based methods for different noise contamination in the data. With this study, we showed that 3DRP with the transradial bicubic gap-filling method is feasible for the reconstruction of high-resolution PET data with
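The idea of filling detector gaps in a sinogram by interpolating along the radial direction, as required before analytical reconstruction, can be sketched as follows; this uses simple 1-D linear interpolation on a smooth toy sinogram rather than the transradial bicubic scheme of the study, and all sizes and the gap location are made up.

```python
import numpy as np

n_ang, n_rad = 8, 32
r = np.linspace(-1.0, 1.0, n_rad)
# smooth toy "sinogram": one Gaussian radial profile repeated over angles
sino = np.tile(np.exp(-(3.0 * r) ** 2), (n_ang, 1))

gap = np.zeros(n_rad, dtype=bool)
gap[12:16] = True                   # pretend these radial bins fall in a detector gap

filled = sino.copy()
for a in range(n_ang):              # interpolate across the gap, angle by angle
    filled[a, gap] = np.interp(r[gap], r[~gap], sino[a, ~gap])
```

Bicubic (rather than linear) interpolation mainly reduces the error on curved profiles like this one; the point of the sketch is only that gap bins are estimated from their valid radial neighbours before handing the sinogram to 3DRP.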
Cardiorespiratory Coordination in Repeated Maximal Exercise
Directory of Open Access Journals (Sweden)
Sergi Garcia-Retortillo
2017-06-01
Increases in cardiorespiratory coordination (CRC) after training with no differences in performance and physiological variables have recently been reported using a principal component analysis approach. However, no research has yet evaluated the short-term effects of exercise on CRC. The aim of this study was to delineate the behavior of CRC under different physiological initial conditions produced by repeated maximal exercises. Fifteen participants performed two consecutive graded and maximal cycling tests. Test 1 was performed without any previous exercise, and Test 2 was performed 6 min after Test 1. Both tests started at 0 W and the workload was increased by 25 W/min in males and 20 W/min in females, until the participants were not able to maintain the prescribed cycling frequency of 70 rpm for more than 5 consecutive seconds. A principal component (PC) analysis of selected cardiovascular and cardiorespiratory variables (expired fraction of O2, expired fraction of CO2, ventilation, systolic blood pressure, diastolic blood pressure, and heart rate) was performed to evaluate the CRC, defined by the number of PCs, in both tests. In order to quantify the degree of coordination, the information entropy was calculated and the eigenvalues of the first PC (PC1) were compared between tests. Although no significant differences were found between the tests with respect to the performed maximal workload (Wmax), maximal oxygen consumption (VO2max), or ventilatory threshold (VT), an increase in the number of PCs and/or a decrease of the eigenvalues of PC1 (t = 2.95; p = 0.01; d = 1.08) was found in Test 2 compared to Test 1. Moreover, entropy was significantly higher (Z = 2.33; p = 0.02; d = 1.43) in the last test. In conclusion, despite the fact that no significant differences were observed in the conventionally explored maximal performance and physiological variables (Wmax, VO2max, and VT) between tests, a reduction of CRC was observed in Test 2. These results emphasize the interest of CRC
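The PCA-based quantification used here (number of PCs, and entropy of the explained-variance distribution) can be sketched on synthetic data; the six "variables" below are driven by one shared oscillation, so the analysis should find a single dominant PC. The 90% variance threshold and all signal parameters are illustrative assumptions, not the study's protocol.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 300)
shared = np.sin(2 * np.pi * 0.5 * t)              # common driver = high coordination
X = np.column_stack([shared + 0.1 * rng.standard_normal(t.size) for _ in range(6)])

Xz = (X - X.mean(axis=0)) / X.std(axis=0)         # standardize each variable
eigvals = np.linalg.eigvalsh(np.cov(Xz.T))[::-1]  # PC eigenvalues, descending
explained = eigvals / eigvals.sum()

n_pcs = int(np.searchsorted(np.cumsum(explained), 0.9) + 1)    # PCs needed for 90% variance
entropy = float(-(explained * np.log(explained)).sum())         # entropy of the PC spectrum
```

Lower coordination would show up as more PCs being needed and a higher entropy, which matches the direction of the changes reported for Test 2.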
Kobayashi, Yoshiomi; Shinozaki, Yoshio; Takahashi, Yohei; Takaishi, Hironari; Ogawa, Jun
2017-01-01
Intervertebral instability risks following L5-S1 transforaminal lumbar interbody fusion (TLIF) and causes of bony bridge formation on computed tomography (CT) remain largely unknown. We evaluated the temporal changes on plain radiographs and reconstructed CT images from 178 patients who had undergone single-level L5-S1 TLIF between February 2011 and February 2015. We statistically analyzed temporal changes in the L5-S1 angle on radiographs and intervertebral stability (IVS) at the last observation. Bony bridge formation between the L5-S1 vertebral bodies and titanium cage subsidence were analyzed using reconstructed CT. The preoperative L5-S1 angle in the non-IVS group was significantly greater than that in the IVS group. Cage subsidence was classified as follows: type A, both upper and lower endplates; type B, either endplate; or type C, no subsidence. Types B and C decreased over time, whereas type A increased after surgery. Bony bridges between vertebral bodies were found in 87.2% of patients, and 94.5% of all bony bridges were found only in the cage, not on the contralateral side. Our findings suggested that a high preoperative L5-S1 angle increased the risk of intervertebral instability after TLIF. The L5-S1 angle decreased over time with increasing type A subsidence, and almost all bony bridges were found only in the cage. These results suggest that the vertebral bodies were stabilized because of cage subsidence, and final bony bridges were created. Methods to improve bony bridge creation are needed to obtain reliable L5-S1 intervertebral bone union. Copyright © 2016 Elsevier Ltd. All rights reserved.
Karakatsanis, Nicolas A.; Tsoumpas, Charalampos; Zaidi, Habib
Bulk body motion may randomly occur during PET acquisitions, introducing blurring, attenuation-emission mismatches and, in dynamic PET, discontinuities in the measured time activity curves between consecutive frames. Meanwhile, dynamic PET scans are longer, thus increasing the probability of bulk
International Nuclear Information System (INIS)
Chouvelon, Tiphaine; Spitz, J.; Caurant, F.; Mendez Fernandez, P.; Bustamante, P.
2011-01-01
The Bay of Biscay is a very large bay opening onto the North-East Atlantic Ocean. The continental shelf covers over 220,000 km². The hydrological structure is influenced by two main river plumes (Loire and Gironde) and a continental slope indented by numerous canyons. The Bay of Biscay supports both numerous important fisheries and a rich fauna including many protected species (e.g., marine mammals). The management of this ecosystem, subjected to numerous anthropogenic impacts, depends notably on a good understanding of the structure of its food webs. Within each major group of taxa, spatial groups displayed significantly different δ13C and δ15N values (KW tests, all p < 0.05), and δ13C and δ15N values decreased from near-shore organisms to deep-sea or oceanic organisms. These results highlighted the existence of several food webs with distinct baseline signatures in the Bay of Biscay. Differences in δ15N values in particular are linked to processes occurring at the Dissolved Inorganic Nitrogen (DIN) level, and to nutrients and particulate organic matter available for primary production in general. Therefore, the contrasted hydrological landscapes of the Bay of Biscay probably impact the signatures of the primary producers in the different areas of the Bay.
Evaluating low pass filters on SPECT reconstructed cardiac orientation estimation
Dwivedi, Shekhar
2009-02-01
Low pass filters can affect the quality of clinical SPECT images by smoothing. Appropriate filter and parameter selection leads to optimum smoothing, and hence to better quantification followed by correct diagnosis and accurate interpretation by the physician. This study aims at evaluating low pass filters on SPECT reconstruction algorithms. The criterion for evaluating the filters is the estimation of the cardiac azimuth and elevation angles from the SPECT reconstruction. The low pass filters studied are Butterworth, Gaussian, Hamming, Hanning and Parzen. Experiments are conducted using three reconstruction algorithms, FBP (filtered back projection), MLEM (maximum likelihood expectation maximization) and OSEM (ordered subsets expectation maximization), on four gated cardiac patient projections (two patients with stress and rest projections). Each filter is applied with varying cutoff and order for each reconstruction algorithm (only Butterworth is used for MLEM and OSEM). The azimuth and elevation angles are calculated from the reconstructed volume, and the variation observed in the angles with varying filter parameters is reported. Our results demonstrate that the behavior of the Hamming, Hanning and Parzen filters (used with FBP) with varying cutoff is similar for all the datasets. The Butterworth filter (cutoff > 0.4) behaves in a similar fashion for all the datasets using all the algorithms, whereas with OSEM, for a cutoff < 0.4, it fails to generate the cardiac orientation due to oversmoothing, and gives an unstable response with FBP and MLEM. This study on evaluating the effect of low pass filter cutoff and order on cardiac orientation using three different reconstruction algorithms provides an interesting insight into optimal selection of filter parameters.
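A Butterworth low-pass of the kind varied in this study can be sketched directly in the frequency domain; the cutoff (in cycles/sample) and order below are illustrative, and the "projection" is a synthetic 1-D profile rather than SPECT data.

```python
import numpy as np

def butterworth_lowpass(signal, cutoff, order):
    """Frequency-domain Butterworth low-pass; cutoff in cycles/sample (0 to 0.5)."""
    f = np.fft.rfftfreq(signal.size)
    H = 1.0 / np.sqrt(1.0 + (f / cutoff) ** (2 * order))   # Butterworth magnitude response
    return np.fft.irfft(np.fft.rfft(signal) * H, n=signal.size)

rng = np.random.default_rng(2)
profile = np.sin(np.linspace(0, 2 * np.pi, 128))           # smooth low-frequency structure
noisy = profile + 0.3 * rng.standard_normal(128)           # broadband noise
smoothed = butterworth_lowpass(noisy, cutoff=0.1, order=5)
```

Raising the cutoff preserves more detail but less noise suppression; lowering it oversmooths, which is consistent with the orientation-estimation failures the study reports for OSEM at cutoffs below 0.4.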
Postactivation potentiation biases maximal isometric strength assessment.
Lima, Leonardo Coelho Rabello; Oliveira, Felipe Bruno Dias; Oliveira, Thiago Pires; Assumpção, Claudio de Oliveira; Greco, Camila Coelho; Cardozo, Adalgiso Croscato; Denadai, Benedito Sérgio
2014-01-01
Postactivation potentiation (PAP) is known to enhance force production. Maximal isometric strength assessment protocols usually consist of two or more maximal voluntary isometric contractions (MVCs). The objective of this study was to determine if PAP would influence isometric strength assessment. Healthy male volunteers (n = 23) performed two five-second MVCs separated by a 180-second interval. Changes in isometric peak torque (IPT), time to achieve it (tPTI), contractile impulse (CI), root mean square of the electromyographic signal during PTI (RMS), and rate of torque development (RTD), in different intervals, were measured. Significant increases in IPT (240.6 ± 55.7 N·m versus 248.9 ± 55.1 N·m), RTD (746 ± 152 N·m·s(-1) versus 727 ± 158 N·m·s(-1)), and RMS (59.1 ± 12.2% RMSMAX versus 54.8 ± 9.4% RMSMAX) were found on the second MVC. tPTI decreased significantly on the second MVC (2373 ± 1200 ms versus 2784 ± 1226 ms). We conclude that a first MVC leads to PAP that elicits significant enhancements in strength-related variables of a second MVC performed 180 seconds later. If disregarded, this phenomenon might bias maximal isometric strength assessment, overestimating some of these variables.
Gain maximization in a probabilistic entanglement protocol
di Lorenzo, Antonio; Esteves de Queiroz, Johnny Hebert
Entanglement is a resource. We can therefore define gain as a monotonic function of entanglement, G(E). If a pair with entanglement E is produced with probability P, the net gain is N = P G(E) - (1 - P) C, where C is the cost of a failed attempt. We study a protocol where a pair of quantum systems is produced in a maximally entangled state ρm with probability Pm, while it is produced in a partially entangled state ρp with the complementary probability 1 - Pm. We mix a fraction w of the partially entangled pairs with the maximally entangled ones, i.e. we take the state to be ρ = (ρm + w Uloc ρp Uloc†)/(1 + w), where Uloc is an appropriate unitary local operation designed to maximize the entanglement of ρ. This procedure on one hand reduces the entanglement E, and hence the gain, but on the other hand it increases the probability of success to P = Pm + w(1 - Pm); therefore the net gain N may increase. There may hence be, a priori, an optimal value for w, the fraction of failed attempts that we mix in. We show that, in the hypothesis of a linear gain G(E) = E, even assuming a vanishing cost C → 0, the net gain N is increasing with w, therefore the best strategy is to always mix the partially entangled states. Work supported by CNPq, Conselho Nacional de Desenvolvimento Científico e Tecnológico, proc. 311288/2014-6, and by FAPEMIG, Fundação de Amparo à Pesquisa de Minas Gerais, proc. IC-FAPEMIG2016-0269 and PPM-00607-16.
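The trade-off can be checked numerically under the stated hypotheses (linear gain G(E) = E, vanishing cost C = 0). The entanglement model for the mixture below, a simple convex combination with E_m = 1, is an illustrative assumption rather than the authors' calculation; with it, the net gain is monotonically increasing in w.

```python
import numpy as np

P_m, E_p = 0.5, 0.5                    # illustrative success probability and partial entanglement
w = np.linspace(0.0, 1.0, 101)         # fraction of partially entangled pairs mixed in
P = P_m + w * (1.0 - P_m)              # success probability after mixing
E = (1.0 + w * E_p) / (1.0 + w)        # assumed entanglement of the mixture (E_m = 1)
N = P * E                              # net gain N = P * G(E), with G(E) = E and C = 0
```

For these parameters N grows from 0.5 at w = 0 to 0.75 at w = 1, i.e. always mixing the partially entangled pairs is optimal, in line with the abstract's conclusion.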
Maximizing percentage depletion in solid minerals
International Nuclear Information System (INIS)
Tripp, J.; Grove, H.D.; McGrath, M.
1982-01-01
This article develops a strategy for maximizing percentage depletion deductions when extracting uranium or other solid minerals. The goal is to avoid losing percentage depletion deductions by staying below the 50% limitation on taxable income from the property. The article is divided into two major sections. The first section consists of depletion calculations that illustrate the problem and corresponding solutions. The last section deals with the feasibility of applying the strategy and complying with the Internal Revenue Code and appropriate regulations. Three separate strategies or appropriate situations are developed and illustrated. 13 references, 3 figures, 7 tables
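The 50% limitation can be made concrete with a toy calculation; the dollar amounts are hypothetical, and the 22% uranium rate is used only for illustration of the mechanism the article addresses.

```python
# Hypothetical property figures (illustrative only)
gross_income = 1_000_000.0
taxable_income_before_depletion = 300_000.0
depletion_rate = 0.22                    # percentage depletion rate assumed for uranium

percentage_depletion = depletion_rate * gross_income       # tentative deduction: 220,000
limitation = 0.5 * taxable_income_before_depletion         # 50% cap: 150,000
allowed = min(percentage_depletion, limitation)            # deduction actually allowed
lost = percentage_depletion - allowed                      # deduction lost to the cap
```

Here 70,000 of the tentative deduction is lost to the 50% cap; the article's strategy is to arrange income and deductions so that taxable income stays high enough to avoid this loss.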
What currency do bumble bees maximize?
Directory of Open Access Journals (Sweden)
Nicholas L Charlton
2010-08-01
In modelling bumble bee foraging, net rate of energetic intake has been suggested as the appropriate currency. The foraging behaviour of honey bees is better predicted by using efficiency, the ratio of energetic gain to expenditure, as the currency. We re-analyse several studies of bumble bee foraging and show that efficiency is as good a currency as net rate in terms of predicting behaviour. We suggest that future studies of the foraging of bumble bees should be designed to distinguish between net rate- and efficiency-maximizing behaviour in an attempt to discover which is the more appropriate currency.
DEFF Research Database (Denmark)
Lisonek, Petr
1996-01-01
A two-distance set in E^d is a point set X in the d-dimensional Euclidean space such that the distances between distinct points in X assume only two different non-zero values. Based on results from classical distance geometry, we develop an algorithm to classify, for a given dimension, all maximal (largest possible) two-distance sets in E^d. Using this algorithm we have completed the full classification for all dimensions less than or equal to 7, and we have found one set in E^8 whose maximality follows from Blokhuis' upper bound on sizes of s-distance sets. While in the dimensions less than or equal to 6...
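The defining property is easy to test computationally; the sketch below counts distinct pairwise distances (up to a tolerance) and checks the regular pentagon, a classical maximal two-distance set in the plane. The helper name and tolerance are ours.

```python
import numpy as np
from itertools import combinations

def distinct_distances(points, tol=1e-9):
    """Distinct pairwise distances of a point set, merging values within tol."""
    dists = sorted(float(np.linalg.norm(np.subtract(p, q)))
                   for p, q in combinations(points, 2))
    out = []
    for d in dists:
        if not out or d - out[-1] > tol:
            out.append(d)
    return out

# Regular pentagon: 5 points realizing only the side and the diagonal lengths
pentagon = [(np.cos(2 * np.pi * k / 5), np.sin(2 * np.pi * k / 5)) for k in range(5)]
n_distances = len(distinct_distances(pentagon))   # 2 -> a two-distance set
```

A classification algorithm of the kind described would search for the largest point configurations passing exactly this test in each dimension.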
Maximizing policy learning in international committees
DEFF Research Database (Denmark)
Nedergaard, Peter
2007-01-01
, this article demonstrates that valuable lessons can be learned about policy learning, in practice and theoretically, by analysing the cooperation in the OMC committees. Using the Advocacy Coalition Framework as the starting point of analysis, 15 hypotheses on policy learning are tested. Among other things, it is concluded that in order to maximize policy learning in international committees, empirical data should be made available to committees and provided by sources close to the participants (i.e. the Commission). In addition, the work in the committees should be made prestigious in order to attract well...
Pouliot type duality via a-maximization
International Nuclear Information System (INIS)
Kawano, Teruhiko; Ookouchi, Yutaka; Tachikawa, Yuji; Yagi, Futoshi
2006-01-01
We study four-dimensional N=1 Spin(10) gauge theory with a single spinor and N_Q vectors at the superconformal fixed point via the electric-magnetic duality and a-maximization. When gauge invariant chiral primary operators hit the unitarity bounds, we find that the theory with no superpotential is identical to the one with some superpotential at the infrared fixed point. The auxiliary field method in the electric theory offers a satisfying description of the infrared fixed point, which is consistent with the better picture in the magnetic theory. In particular, it gives a clear description of the emergence of new massless degrees of freedom in the electric theory.
Maximization techniques for oilfield development profits
International Nuclear Information System (INIS)
Lerche, I.
1999-01-01
In 1981 Nind provided a quantitative procedure for estimating the optimum number of development wells to emplace on an oilfield to maximize profit. Nind's treatment assumed that there was a steady selling price, that all wells were placed in production simultaneously, and that each well's production profile was identical and a simple exponential decline with time. This paper lifts these restrictions to allow for price fluctuations, time-varying emplacement of wells, and production rates that are more in line with actual production records than is a simple exponential decline curve. As a consequence, it is possible to design production rate strategies, correlated with price fluctuations, so as to maximize the present-day worth of a field. For price fluctuations that occur on a time-scale rapid compared to inflation rates, it is appropriate to have production rates correlate directly with such price fluctuations. The same strategy does not apply for price fluctuations occurring on a time-scale long compared to inflation rates where, for small amplitudes in the price fluctuations, it is best to sell as much product as early as possible to overcome inflation factors, while for large amplitude fluctuations the best strategy is to sell product as early as possible but to do so mainly on price upswings. Examples are provided to show how these generalizations of Nind's (1981) formula change the complexion of oilfield development optimization. (author)
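The present-day-worth comparison behind this strategy can be sketched numerically; the production schedule, seasonal price model, and discount rate below are made-up illustrations, not Nind's formula.

```python
import numpy as np

months = np.arange(36)
price = 60.0 + 10.0 * np.sin(2 * np.pi * months / 12)   # seasonal price fluctuation ($/bbl)
discount = 1.08 ** (-months / 12.0)                     # 8%/yr discount to present-day worth

def present_worth(production):
    """Discounted revenue of a monthly production schedule."""
    return float(np.sum(production * price * discount))

flat = np.full(36, 1000.0)                 # constant offtake (bbl/month)
# same total volume, but sales shifted toward price upswings
weighted = flat.sum() * price / price.sum()
```

Selling proportionally more on price upswings, at fixed total volume, raises the present-day worth relative to a flat schedule; this is the behaviour the generalized treatment is designed to exploit for rapid fluctuations.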
Woo, Kyong-Je; Paik, Joo-Myeong; Bang, Sa Ik; Mun, Goo-Hyun; Pyon, Jai-Kyong
2017-06-01
The question of whether expander inflation/deflation status has any bearing on surgical complications in the setting of adjuvant radiation (XRT) has not been addressed. The objective of this study is to investigate whether the inflation/deflation status of the expander at the time of XRT is associated with complications in immediate two-stage expander-implant breast reconstruction. A retrospective review of 49 consecutive patients who underwent immediate two-stage expander-implant breast reconstruction and received post-mastectomy XRT was conducted. Full deflation of the expanders was performed in the deflation group (20 patients), while the expanders remained inflated in the inflation group at the time of XRT (29 patients). XRT-related complications of each stage of reconstruction were compared between the two groups, and multivariable regression analysis was performed to identify risk factors for XRT-related complications. Overall XRT-related complications were significantly more frequent in the deflation group (65.0 vs. 6.9%). The most common cause of reconstruction failure in the deflation group was failure to re-expand due to skin fibrosis and contracture. In multivariable analysis, deflation of expanders was a significant risk factor for overall complications (odds = 94.4, p = 0.001) and reconstruction failures (odds = 9.09, p = 0.022) of the first-stage reconstructions. Maximal inflation without deflation before XRT can be an option to minimize XRT-related complications and reconstruction failure of the first-stage reconstructions. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266.
Image Reconstruction. Chapter 13
Energy Technology Data Exchange (ETDEWEB)
Nuyts, J. [Department of Nuclear Medicine and Medical Imaging Research Center, Katholieke Universiteit Leuven, Leuven (Belgium); Matej, S. [Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, PA (United States)
2014-12-15
This chapter discusses how 2‑D or 3‑D images of tracer distribution can be reconstructed from a series of so-called projection images acquired with a gamma camera or a positron emission tomography (PET) system [13.1]. This is often called an ‘inverse problem’. The reconstruction is the inverse of the acquisition. The reconstruction is called an inverse problem because making software to compute the true tracer distribution from the acquired data turns out to be more difficult than the ‘forward’ direction, i.e. making software to simulate the acquisition. There are basically two approaches to image reconstruction: analytical reconstruction and iterative reconstruction. The analytical approach is based on mathematical inversion, yielding efficient, non-iterative reconstruction algorithms. In the iterative approach, the reconstruction problem is reduced to computing a finite number of image values from a finite number of measurements. That simplification enables the use of iterative numerical inversion instead of direct mathematical inversion. Iterative inversion tends to require more computer power, but it can cope with more complex (and hopefully more accurate) models of the acquisition process.
Update on orbital reconstruction.
Chen, Chien-Tzung; Chen, Yu-Ray
2010-08-01
Orbital trauma is common and frequently complicated by ocular injuries. The recent literature on orbital fracture is analyzed with emphasis on epidemiological data assessment, surgical timing, method of approach and reconstruction materials. Computed tomographic (CT) scan has become a routine evaluation tool for orbital trauma, and mobile CT can be applied intraoperatively if necessary. Concomitant serious ocular injury should be carefully evaluated preoperatively. Patients presenting with nonresolving oculocardiac reflex, 'white-eyed' blowout fracture, or diplopia with a positive forced duction test and CT evidence of orbital tissue entrapment require early surgical repair. Otherwise, enophthalmos can be corrected by late surgery with a similar outcome to early surgery. The use of an endoscope-assisted approach for orbital reconstruction continues to grow, offering an alternative method. Advances in alloplastic materials have improved surgical outcome and shortened operating time. In this review of modern orbital reconstruction, several controversial issues such as surgical indication, surgical timing, method of approach and choice of reconstruction material are discussed. Preoperative fine-cut CT image and thorough ophthalmologic examination are key elements to determine surgical indications. The choice of surgical approach and reconstruction materials much depends on the surgeon's experience and the reconstruction area. Prefabricated alloplastic implants together with image software and stereolithographic models are significant advances that help to more accurately reconstruct the traumatized orbit. The recent evolution of orbit reconstruction improves functional and aesthetic results and minimizes surgical complications.
DD4Hep based event reconstruction
AUTHOR|(SzGeCERN)683529; Frank, Markus; Gaede, Frank-Dieter; Hynds, Daniel; Lu, Shaojun; Nikiforou, Nikiforos; Petric, Marko; Simoniello, Rosa; Voutsinas, Georgios Gerasimos
The DD4HEP detector description toolkit offers a flexible and easy-to-use solution for the consistent and complete description of particle physics detectors in a single system. The sub-component DDREC provides a dedicated interface to the detector geometry as needed for event reconstruction. With DDREC there is no need to define an additional, separate reconstruction geometry as is often done in HEP, but one can transparently extend the existing detailed simulation model to be also used for the reconstruction. Based on the extension mechanism of DD4HEP, DDREC allows one to attach user defined data structures to detector elements at all levels of the geometry hierarchy. These data structures define a high level view onto the detectors describing their physical properties, such as measurement layers, point resolutions, and cell sizes. For the purpose of charged particle track reconstruction, dedicated surface objects can be attached to every volume in the detector geometry. These surfaces provide the measuremen...
DEFF Research Database (Denmark)
Galavis, P.E.; Hollensen, Christian; Jallow, N.
2010-01-01
Background. Characterization of textural features (spatial distributions of image intensity levels) has been considered as a tool for automatic tumor segmentation. The purpose of this work is to study the variability of the textural features in PET images due to different acquisition modes and reconstruction parameters. Lesions were segmented on a default image using the threshold of 40% of maximum SUV. Fifty different texture features were calculated inside the tumors. The range of variation of the features was calculated with respect to the average value. Results. The fifty textural features were classified based on their range of variation into three categories: small (range < 5%), intermediate, and large (range > 30%) variability. Conclusion. Textural features such as entropy (first order), energy, maximal correlation coefficient, and low gray-level run emphasis exhibited small variability.
Dragonfly: an implementation of the expand-maximize-compress algorithm for single-particle imaging.
Ayyer, Kartik; Lan, Ti-Yen; Elser, Veit; Loh, N Duane
2016-08-01
Single-particle imaging (SPI) with X-ray free-electron lasers has the potential to change fundamentally how biomacromolecules are imaged. The structure would be derived from millions of diffraction patterns, each from a different copy of the macromolecule before it is torn apart by radiation damage. The challenges posed by the resultant data stream are staggering: millions of incomplete, noisy and un-oriented patterns have to be computationally assembled into a three-dimensional intensity map and then phase reconstructed. In this paper, the Dragonfly software package is described, based on a parallel implementation of the expand-maximize-compress reconstruction algorithm that is well suited for this task. Auxiliary modules to simulate SPI data streams are also included to assess the feasibility of proposed SPI experiments at the Linac Coherent Light Source, Stanford, California, USA.
Permutationally invariant state reconstruction
DEFF Research Database (Denmark)
Moroder, Tobias; Hyllus, Philipp; Tóth, Géza
2012-01-01
Feasible tomography schemes for large particle numbers must possess, besides an appropriate data acquisition protocol, an efficient way to reconstruct the density operator from the observed finite data set. Since state reconstruction typically requires the solution of a nonlinear large-scale optimization problem, this is a major challenge in the design of scalable tomography schemes. Here we present an efficient state reconstruction scheme for permutationally invariant quantum state tomography. It works for all common state-of-the-art reconstruction principles, including, in particular, maximum likelihood and least-squares methods, and can be cast as a convex optimization, which has clear advantages regarding speed, control and accuracy in comparison to commonly employed numerical routines. First prototype implementations easily allow reconstruction of a state of 20 qubits in a few minutes on a standard computer.
Han, Miaomiao; Guo, Zhirong; Liu, Haifeng; Li, Qinghua
2018-05-01
Tomographic Gamma Scanning (TGS) is a method used for the nondestructive assay of radioactive wastes. In TGS, the actual irregular edge voxels are treated as regular cubic voxels in the traditional method. In this study, in order to improve the performance of TGS, a novel edge treatment method is proposed that considers the actual shapes of these voxels. The two edge-voxel treatment methods were compared by computing the pixel-level relative errors and normalized mean square errors (NMSEs) between the reconstructed transmission images and the ideal images. Both methods were coupled with two different iterative algorithms: the Algebraic Reconstruction Technique (ART) with a non-negativity constraint, and Maximum Likelihood Expectation Maximization (MLEM). The results demonstrated that the traditional method for edge voxel treatment can introduce significant error, and that the real irregular edge voxel treatment method can improve the performance of TGS by producing better transmission reconstruction images. With the real irregular edge voxel treatment method, MLEM and ART are comparable when assaying homogeneous matrices, but MLEM is superior to ART when assaying heterogeneous matrices. Copyright © 2018 Elsevier Ltd. All rights reserved.
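Of the two iterative algorithms compared, ART with a non-negativity constraint has a particularly compact form; below is a generic Kaczmarz-style sketch on a random toy system, not the TGS transmission model, with all sizes and parameters chosen for illustration.

```python
import numpy as np

def art_nonneg(A, y, n_sweeps=200, relax=1.0):
    """Kaczmarz-style ART: project onto each row's hyperplane, then clip to x >= 0."""
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            a = A[i]
            x += relax * (y[i] - a @ x) / (a @ a) * a
            np.clip(x, 0.0, None, out=x)      # non-negativity constraint
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((40, 10))             # toy, well-conditioned system
x_true = rng.random(10)                       # non-negative ground truth
y = A @ x_true                                # consistent measurements
x_hat = art_nonneg(A, y)
```

MLEM replaces these additive row projections with multiplicative updates driven by measured/estimated projection ratios, which is one reason the two algorithms can respond differently to heterogeneous matrices.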
A Novel Kernel-Based Regularization Technique for PET Image Reconstruction
Directory of Open Access Journals (Sweden)
Abdelwahhab Boudjelal
2017-06-01
Positron emission tomography (PET) is an imaging technique that generates 3D detail of physiological processes at the cellular level. The technique requires a radioactive tracer, which decays and releases a positron that collides with an electron; consequently, annihilation photons are emitted, which can be measured. The purpose of PET is to use the measurement of photons to reconstruct the distribution of radioisotopes in the body. Currently, PET is undergoing a revamp, with advancements in data measurement instruments and in the computing methods used to create the images. These computational methods are required to solve the inverse problem of "image reconstruction from projection". This paper proposes a novel kernel-based regularization technique for maximum-likelihood expectation-maximization (κ-MLEM) to reconstruct the image. Compared to standard MLEM, the proposed algorithm is more robust and more effective in removing background noise, whilst preserving the edges; this suppresses image artifacts, such as out-of-focus slice blur.
Software Architecture Reconstruction Method, a Survey
Zainab Nayyar; Nazish Rafique
2014-01-01
Architecture reconstruction belongs to the reverse engineering process, in which we move from code to the architecture level to reconstruct the architecture. Software architectures are the blueprints of projects, which depict the external overview of the software system. Maintenance and testing often cause the software to deviate from its original architecture, because sometimes, to enhance the functionality of a system, the software deviates from its documented specifications, some new modules a...
Electron Reconstruction in the CMS Electromagnetic Calorimeter
Meschi, Emilio; Seez, Christopher; Vikas, Pratibha
2001-01-01
This note describes the reconstruction of electrons using the electromagnetic calorimeter (ECAL) alone. This represents the first step in the High Level Trigger reconstruction and selection chain. By making "super-clusters" (i.e. clusters of clusters) much of the energy radiated by bremsstrahlung in the tracker material can be recovered. Representative performance figures for energy and position resolution in the barrel are given.
Optimized image acquisition for breast tomosynthesis in projection and reconstruction space
International Nuclear Information System (INIS)
Chawla, Amarpreet S.; Lo, Joseph Y.; Baker, Jay A.; Samei, Ehsan
2009-01-01
Breast tomosynthesis has been an exciting new development in the field of breast imaging. While the diagnostic improvement via tomosynthesis is notable, the full potential of tomosynthesis has not yet been realized. This may be attributed to the dependency of the diagnostic quality of tomosynthesis on multiple variables, each of which needs to be optimized. These include dose, the number of angular projections, and the total angular span of those projections. In this study, the authors investigated the effects of these acquisition parameters on the overall diagnostic image quality of breast tomosynthesis in both the projection and reconstruction space. Five mastectomy specimens were imaged using a prototype tomosynthesis system. 25 angular projections of each specimen were acquired at 6.2 times the typical single-view clinical dose level. Images at lower dose levels were then simulated using a noise modification routine. Each projection image was supplemented with 84 simulated 3 mm 3D lesions embedded at the center of 84 nonoverlapping ROIs. The projection images were then reconstructed using a filtered backprojection algorithm at different combinations of acquisition parameters to investigate which of the many possible combinations maximizes the performance. Performance was evaluated in terms of a Laguerre-Gauss channelized Hotelling observer model-based measure of lesion detectability. The analysis was also performed without reconstruction by combining the model results from projection images using a Bayesian decision fusion algorithm. The effects of acquisition parameters on projection images and reconstructed slices were then compared to derive an optimization rule for tomosynthesis. The results indicated that projection images yield comparable, though slightly higher, performance than reconstructed images. Both modes, however, showed similar trends: performance improved with an increase in the total acquisition dose level and the angular span. Using a constant dose level and angular…
Robust reconstruction of a signal from its unthresholded recurrence plot subject to disturbances
Energy Technology Data Exchange (ETDEWEB)
Sipers, Aloys, E-mail: aloys.sipers@zuyd.nl [Department of Bèta Sciences and Technology, Zuyd University (Netherlands); Borm, Paul, E-mail: paul.borm@zuyd.nl [Department of Bèta Sciences and Technology, Zuyd University (Netherlands); Peeters, Ralf, E-mail: ralf.peeters@maastrichtuniversity.nl [Department of Data Science and Knowledge Engineering, Maastricht University (Netherlands)
2017-02-12
To make valid inferences from recurrence plots for time-delay embedded signals, two underlying key questions are: (1) to what extent does an unthresholded recurrence plot (URP) carry the same information as the signal that generated it, and (2) how does the information change when the URP gets distorted. We studied the first question in our earlier work, where it was shown that the URP admits the reconstruction of the underlying signal (up to its mean and a choice of sign) if and only if an associated graph is connected. Here we refine this result and give an explicit condition in terms of the embedding parameters and the discrete Fourier spectrum of the URP. We also develop a method for the reconstruction of the underlying signal which overcomes several drawbacks of earlier approaches. To address the second question we investigate the robustness of the proposed reconstruction method under disturbances. We carry out two simulation experiments, characterized by a broad band and a narrow band spectrum respectively. For each experiment we choose a noise level and two different pairs of embedding parameters. The conventional binary recurrence plot (RP) is obtained from the URP by thresholding and zero-one conversion, which can be viewed as severe distortion acting on the URP. Typically the reconstruction of the underlying signal from an RP is expected to be rather inaccurate. However, by introducing the concept of a multi-level recurrence plot (MRP) we propose to bridge the information gap between the URP and the RP, while still achieving a high data compression rate. We demonstrate the working of the proposed reconstruction procedure on MRPs, indicating that MRPs with just a few discretization levels can usually capture signal properties and morphologies more accurately than conventional RPs. - Highlights: • A recurrence plot is maximally informative if and only if the corresponding graph is connected. • The diameter of the connected graph is always…
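The objects discussed above are short to construct: the URP of a delay-embedded signal is its matrix of pairwise embedding distances, and an MRP quantizes that matrix to a few levels. This is a sketch under the usual definitions; the embedding parameters and level count below are illustrative, not the paper's choices.

```python
import numpy as np

def urp(x, dim=2, delay=1):
    """Unthresholded recurrence plot: pairwise distances between
    time-delay embedding vectors of the signal x."""
    n = len(x) - (dim - 1) * delay
    emb = np.column_stack([x[i * delay : i * delay + n] for i in range(dim)])
    return np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)

def mrp(u, levels=4):
    """Multi-level recurrence plot: quantize the URP to a few levels.
    Thresholding to two levels recovers the conventional binary RP."""
    edges = np.linspace(0.0, u.max(), levels + 1)
    return np.digitize(u, edges[1:-1])

t = np.linspace(0, 4 * np.pi, 200)
U = urp(np.sin(t), dim=3, delay=5)
M = mrp(U, levels=4)
```

The URP is symmetric with a zero diagonal by construction, which is why only the embedding parameters and the signal's spectrum govern how much information it retains.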
Shareholder, stakeholder-owner or broad stakeholder maximization
DEFF Research Database (Denmark)
Mygind, Niels
2004-01-01
With reference to the discussion about shareholder versus stakeholder maximization it is argued that the normal type of maximization is in fact stakeholder-owner maximization. This means maximization of the sum of the value of the shares and stakeholder benefits belonging to the dominating … including the shareholders of a company. Although it may be the ultimate goal for Corporate Social Responsibility to achieve this kind of maximization, broad stakeholder maximization is quite difficult to give a precise definition. There is no one-dimensional measure to add different stakeholder benefits … not traded on the market, and therefore there is no possibility for practical application. Broad stakeholder maximization instead in practical applications becomes satisfying certain stakeholder demands, so that the practical application will be stakeholder-owner maximization under constraints defined …
Dopaminergic balance between reward maximization and policy complexity
Directory of Open Access Journals (Sweden)
Naama Parush
2011-05-01
Previous reinforcement-learning models of the basal ganglia network have highlighted the role of dopamine in encoding the mismatch between prediction and reality. Far less attention has been paid to the computational goals and algorithms of the main axis (the actor). Here, we construct a top-down model of the basal ganglia with emphasis on the role of dopamine as both a reinforcement learning signal and a pseudo-temperature signal controlling the general level of basal ganglia excitability and the motor vigilance of the acting agent. We argue that the basal ganglia endow the thalamo-cortical networks with the optimal dynamic tradeoff between two constraints: minimizing the policy complexity (cost) and maximizing the expected future reward (gain). We show that this multi-dimensional optimization process results in an experience-modulated version of the softmax behavioral policy. Thus, as in classical softmax behavioral policies, the probabilities of actions are selected according to their estimated values and the pseudo-temperature, but in addition they also vary according to the frequency of previous choices of these actions. We conclude that the computational goal of the basal ganglia is not to maximize cumulative (positive and negative) reward. Rather, the basal ganglia aim at optimization of independent gain and cost functions. Unlike previously suggested single-variable maximization processes, this multi-dimensional optimization process leads naturally to a softmax-like behavioral policy. We suggest that beyond its role in the modulation of the efficacy of the cortico-striatal synapses, dopamine directly affects striatal excitability and thus provides a pseudo-temperature signal that modulates the trade-off between gain and cost. The resulting experience- and dopamine-modulated softmax policy can then serve as a theoretical framework to account for the broad range of behaviors and clinical states governed by the basal ganglia and dopamine systems.
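An experience-modulated softmax of the general kind described above can be sketched as follows. The abstract does not give the authors' exact modulation, so the log-frequency term and its weight `nu` are illustrative assumptions, not the paper's formula.

```python
import math

def modulated_softmax(values, counts, temperature=1.0, nu=0.5):
    """Softmax over action values where each action's score is also shifted
    by how often that action was chosen before (hypothetical modulation)."""
    total = sum(counts)
    scores = [v / temperature + nu * math.log((c + 1) / (total + len(counts)))
              for v, c in zip(values, counts)]
    exps = [math.exp(s) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

# two actions: higher-valued and more-often-chosen action 0 vs action 1
p = modulated_softmax(values=[1.0, 0.5], counts=[10, 2], temperature=0.5)
```

Lowering `temperature` sharpens the policy toward the best-scoring action, which is the "pseudo-temperature" role the abstract attributes to dopamine.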
Maximizing Lumen Gain With Directional Atherectomy.
Stanley, Gregory A; Winscott, John G
2016-08-01
To describe the use of a low-pressure balloon inflation (LPBI) technique to delineate intraluminal plaque and guide directional atherectomy in order to maximize lumen gain and achieve procedure success. The technique is illustrated in a 77-year-old man with claudication who underwent superficial femoral artery revascularization using a HawkOne directional atherectomy catheter. A standard angioplasty balloon was inflated to 1 to 2 atm during live fluoroscopy to create a 3-dimensional "lumenogram" of the target lesion. Directional atherectomy was performed only where plaque impinged on the balloon at a specific fluoroscopic orientation. The results of the LPBI technique were corroborated with multimodality diagnostic imaging, including digital subtraction angiography, intravascular ultrasound, and intra-arterial pressure measurements. With the LPBI technique, directional atherectomy can routinely achieve <10% residual stenosis, as illustrated in this case, thereby broadly supporting a no-stent approach to lower extremity endovascular revascularization. © The Author(s) 2016.
Primordial two-component maximally symmetric inflation
Enqvist, K.; Nanopoulos, D. V.; Quirós, M.; Kounnas, C.
1985-12-01
We propose a two-component inflation model, based on maximally symmetric supergravity, where the scales of reheating and the inflation potential at the origin are decoupled. This is possible because of the second-order phase transition from SU(5) to SU(3)×SU(2)×U(1) that takes place when φ ≅ φ_c, ending inflation at the global minimum and leading to a reheating temperature T_R ≅ (10^15–10^16) GeV. This makes it possible to generate baryon asymmetry in the conventional way without any conflict with experimental data on the proton lifetime. The mass of the gravitinos is m_3/2 ≅ 10^12 GeV, thus avoiding the gravitino problem. Monopoles are diluted by residual inflation in the broken phase below the cosmological bounds if φ_c…
Distributed-Memory Fast Maximal Independent Set
Energy Technology Data Exchange (ETDEWEB)
Kanewala Appuhamilage, Thejaka Amila J.; Zalewski, Marcin J.; Lumsdaine, Andrew
2017-09-13
The Maximal Independent Set (MIS) graph problem arises in many applications such as computer vision, information theory, molecular biology, and process scheduling. The growing scale of MIS problems suggests the use of distributed-memory hardware as a cost-effective approach to providing the necessary compute and memory resources. Luby proposed four randomized algorithms to solve the MIS problem. All of those algorithms were designed for shared-memory machines and are analyzed using the PRAM model; they do not have direct, efficient distributed-memory implementations. In this paper, we extend two of Luby’s seminal MIS algorithms, “Luby(A)” and “Luby(B),” to distributed-memory execution, and we evaluate their performance. We compare our results with the “Filtered MIS” implementation in the Combinatorial BLAS library for two types of synthetic graph inputs.
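The random-priority rounds of Luby(A) can be sketched sequentially on a toy graph. This models only the randomized rounds, not the distributed-memory machinery that is the paper's contribution; the graph below is illustrative.

```python
import random

def luby_mis(adj, seed=0):
    """One run of Luby(A)-style MIS: in each round every active vertex draws
    a random priority; local minima join the MIS and are removed together
    with their neighbors. Expected O(log n) rounds."""
    rng = random.Random(seed)
    active = set(adj)
    mis = set()
    while active:
        prio = {v: rng.random() for v in active}
        winners = {v for v in active
                   if all(prio[v] < prio[u] for u in adj[v] if u in active)}
        mis |= winners
        for v in winners:          # remove winners and their neighborhoods
            active -= adj[v]
        active -= winners
    return mis

# a 5-vertex path graph: 0-1-2-3-4
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
mis = luby_mis(path)
```

Progress is guaranteed because the vertex with the globally smallest priority always wins its round; the result is independent (no two MIS vertices adjacent) and maximal (every other vertex has an MIS neighbor).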
Quench dynamics of topological maximally entangled states.
Chung, Ming-Chiang; Jhu, Yi-Hao; Chen, Pochung; Mou, Chung-Yu
2013-07-17
We investigate the quench dynamics of the one-particle entanglement spectra (OPES) for systems with topologically nontrivial phases. By using dimerized chains as an example, it is demonstrated that the evolution of OPES for the quenched bipartite systems is governed by an effective Hamiltonian which is characterized by a pseudospin in a time-dependent pseudomagnetic field S(k,t). The existence and evolution of the topological maximally entangled states (tMESs) are determined by the winding number of S(k,t) in the k-space. In particular, the tMESs survive only if nontrivial Berry phases are induced by the winding of S(k,t). In the infinite-time limit the equilibrium OPES can be determined by an effective time-independent pseudomagnetic field Seff(k). Furthermore, when tMESs are unstable, they are destroyed by quasiparticles within a characteristic timescale in proportion to the system size.
Maximizing policy learning in international committees
DEFF Research Database (Denmark)
Nedergaard, Peter
2007-01-01
In the voluminous literature on the European Union's open method of coordination (OMC), no one has hitherto analysed on the basis of scholarly examination the question of what contributes to the learning processes in the OMC committees. On the basis of a questionnaire sent to all participants, this article demonstrates that valuable lessons can be learned about policy learning, in practice and theoretically, by analysing the cooperation in the OMC committees. Using the Advocacy Coalition Framework as the starting point of analysis, 15 hypotheses on policy learning are tested. Among other things, it is concluded that in order to maximize policy learning in international committees, empirical data should be made available to committees and provided by sources close to the participants (i.e. the Commission). In addition, the work in the committees should be made prestigious in order to attract well…
Lovelock black holes with maximally symmetric horizons
Energy Technology Data Exchange (ETDEWEB)
Maeda, Hideki; Willison, Steven; Ray, Sourya, E-mail: hideki@cecs.cl, E-mail: willison@cecs.cl, E-mail: ray@cecs.cl [Centro de Estudios CientIficos (CECs), Casilla 1469, Valdivia (Chile)
2011-08-21
We investigate some properties of n (≥ 4)-dimensional spacetimes having symmetries corresponding to the isometries of an (n - 2)-dimensional maximally symmetric space in Lovelock gravity under the null or dominant energy condition. The well-posedness of the generalized Misner-Sharp quasi-local mass proposed in a past study is shown. Using this quasi-local mass, we clarify the basic properties of the dynamical black holes defined by a future outer trapping horizon under certain assumptions on the Lovelock coupling constants. The C² vacuum solutions are classified into four types: (i) Schwarzschild-Tangherlini-type solution; (ii) Nariai-type solution; (iii) special degenerate vacuum solution; and (iv) exceptional vacuum solution. The conditions for the realization of the last two solutions are clarified. The Schwarzschild-Tangherlini-type solution is studied in detail. We prove the first law of black-hole thermodynamics and present the expressions for the heat capacity and the free energy.
MAXIMIZING THE BENEFITS OF ERP SYSTEMS
Directory of Open Access Journals (Sweden)
Paulo André da Conceição Menezes
2010-04-01
The ERP (Enterprise Resource Planning) systems have been consolidated in companies of different sizes and sectors, allowing their real benefits to be definitively evaluated. In this study, several interactions have been studied in different phases, such as the strategic priorities and strategic planning defined as ERP Strategy; the business process review and ERP selection in the pre-implementation phase; project management and ERP adaptation in the implementation phase; and ERP revision and integration efforts in the post-implementation phase. Through rigorous use of the case study methodology, this research led to developing and testing a framework for maximizing the benefits of ERP systems, and seeks to contribute to the generation of ERP initiatives to optimize their performance.
Maximal energy extraction under discrete diffusive exchange
Energy Technology Data Exchange (ETDEWEB)
Hay, M. J., E-mail: hay@princeton.edu [Department of Astrophysical Sciences, Princeton University, Princeton, New Jersey 08544 (United States); Schiff, J. [Department of Mathematics, Bar-Ilan University, Ramat Gan 52900 (Israel); Fisch, N. J. [Department of Astrophysical Sciences, Princeton University, Princeton, New Jersey 08544 (United States); Princeton Plasma Physics Laboratory, Princeton, New Jersey 08543 (United States)
2015-10-15
Waves propagating through a bounded plasma can rearrange the densities of states in the six-dimensional velocity-configuration phase space. Depending on the rearrangement, the wave energy can either increase or decrease, with the difference taken up by the total plasma energy. In the case where the rearrangement is diffusive, only certain plasma states can be reached. It turns out that the set of reachable states through such diffusive rearrangements has been described in very different contexts. Building upon those descriptions, and making use of the fact that the plasma energy is a linear functional of the state densities, the maximal extractable energy under diffusive rearrangement can then be addressed through linear programming.
Maximizing profitability in a hospital outpatient pharmacy.
Jorgenson, J A; Kilarski, J W; Malatestinic, W N; Rudy, T A
1989-07-01
This paper describes the strategies employed to increase the profitability of an existing ambulatory pharmacy operated by the hospital. Methods to generate new revenue, including implementation of a home parenteral therapy program, a home enteral therapy program, a durable medical equipment service, and home care disposable sales, are described. Programs to maximize existing revenue sources, such as increasing the capture rate on discharge prescriptions, increasing "walk-in" prescription traffic, and increasing HMO prescription volumes, are discussed. A method utilized to reduce drug expenditures is also presented. By minimizing expenses and increasing the revenues of the ambulatory pharmacy operation, net profit increased from $26,000 to over $140,000 in one year.
Maximizing the benefits of a dewatering system
International Nuclear Information System (INIS)
Matthews, P.; Iverson, T.S.
1999-01-01
The use of dewatering systems in the mining, industrial sludge, and sewage waste treatment industries is discussed, along with some of the problems that have been encountered while using drilling fluid dewatering technology. The technology is an acceptable drilling waste handling alternative, but it has had problems associated with recycled fluid incompatibility, high chemical costs, and system inefficiencies. This paper discusses the following five action areas that can maximize the benefits and help reduce the costs of a dewatering project: (1) co-ordinate all services; (2) choose equipment that fits the drilling program; (3) match the chemical treatment with the drilling fluid types; (4) determine recycled fluid compatibility requirements; and (5) determine the disposal requirements before project start-up. 2 refs., 5 figs
Mixtures of maximally entangled pure states
Energy Technology Data Exchange (ETDEWEB)
Flores, M.M., E-mail: mflores@nip.up.edu.ph; Galapon, E.A., E-mail: eric.galapon@gmail.com
2016-09-15
We study the conditions when mixtures of maximally entangled pure states remain entangled. We found that the resulting mixed state remains entangled when the number of entangled pure states to be mixed is less than or equal to the dimension of the pure states. For the latter case of mixing a number of pure states equal to their dimension, we found that the mixed state is entangled provided that the entangled pure states to be mixed are not equally weighted. We also found that one can restrict the set of pure states that one can mix from in order to ensure that the resulting mixed state is genuinely entangled. Also, we demonstrate how these results could be applied as a way to detect entanglement in mixtures of the entangled pure states with noise.
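The two-qubit instance of the claim above (a mixture of two Bell states is entangled exactly when the weights differ) can be checked numerically with the Peres-Horodecki partial-transpose test, which is conclusive for 2×2 systems. This is an independent sketch of that check, not the paper's method.

```python
import numpy as np

def is_entangled_2x2(rho):
    """Peres-Horodecki criterion: a two-qubit state is entangled iff its
    partial transpose (over the second qubit) has a negative eigenvalue."""
    pt = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
    return bool(np.linalg.eigvalsh(pt).min() < -1e-12)

def bell_mix(p):
    """p |Phi+><Phi+| + (1-p) |Phi-><Phi-|, with |Phi±> = (|00> ± |11>)/sqrt(2)."""
    phi_p = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
    phi_m = np.array([1.0, 0.0, 0.0, -1.0]) / np.sqrt(2)
    return p * np.outer(phi_p, phi_p) + (1 - p) * np.outer(phi_m, phi_m)

unbalanced = is_entangled_2x2(bell_mix(0.7))  # weights differ
balanced = is_entangled_2x2(bell_mix(0.5))    # equal weights
```

The equally weighted mixture reduces to (|00⟩⟨00| + |11⟩⟨11|)/2, a separable state, while any imbalance leaves a surviving coherence that the partial transpose turns into a negative eigenvalue.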
Maximally reliable Markov chains under energy constraints.
Escola, Sean; Eisele, Michael; Miller, Kenneth; Paninski, Liam
2009-07-01
Signal-to-noise ratios in physical systems can be significantly degraded if the outputs of the systems are highly variable. Biological processes for which highly stereotyped signal generations are necessary features appear to have reduced their signal variabilities by employing multiple processing steps. To better understand why this multistep cascade structure might be desirable, we prove that the reliability of a signal generated by a multistate system with no memory (i.e., a Markov chain) is maximal if and only if the system topology is such that the process steps irreversibly through each state, with transition rates chosen such that an equal fraction of the total signal is generated in each state. Furthermore, our result indicates that by increasing the number of states, it is possible to arbitrarily increase the reliability of the system. In a physical system, however, an energy cost is associated with maintaining irreversible transitions, and this cost increases with the number of such transitions (i.e., the number of states). Thus, an infinite-length chain, which would be perfectly reliable, is infeasible. To model the effects of energy demands on the maximally reliable solution, we numerically optimize the topology under two distinct energy functions that penalize either irreversible transitions or incommunicability between states, respectively. In both cases, the solutions are essentially irreversible linear chains, but with upper bounds on the number of states set by the amount of available energy. We therefore conclude that a physical system for which signal reliability is important should employ a linear architecture, with the number of states (and thus the reliability) determined by the intrinsic energy constraints of the system.
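The core claim about irreversible linear chains is easy to illustrate: with n irreversible steps at equal rates, the total completion time is a sum of n i.i.d. exponential dwell times (an Erlang distribution), so its coefficient of variation is 1/√n and shrinks as states are added. This is an illustrative simulation, not the paper's topology optimization.

```python
import random
import statistics

def completion_cv(n_states, rate=1.0, trials=20000, seed=1):
    """Coefficient of variation (std/mean) of the time for an irreversible
    linear chain to pass through n_states states at equal transition rates."""
    rng = random.Random(seed)
    times = [sum(rng.expovariate(rate) for _ in range(n_states))
             for _ in range(trials)]
    return statistics.stdev(times) / statistics.mean(times)

cv5 = completion_cv(5)    # theory: 1/sqrt(5) ≈ 0.447
cv20 = completion_cv(20)  # theory: 1/sqrt(20) ≈ 0.224
```

This is the sense in which longer chains are more reliable, and why the energy cost of maintaining each irreversible transition is what bounds the usable chain length.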
A Criterion to Identify Maximally Entangled Four-Qubit State
International Nuclear Information System (INIS)
Zha Xinwei; Song Haiyang; Feng Feng
2011-01-01
Paolo Facchi, et al. [Phys. Rev. A 77 (2008) 060304(R)] presented a maximally multipartite entangled state (MMES). Here, we give a criterion for the identification of maximally entangled four-qubit states. Using this criterion, we not only identify some existing maximally entangled four-qubit states in the literature, but also find several new maximally entangled four-qubit states as well. (general)
Maximal lattice free bodies, test sets and the Frobenius problem
DEFF Research Database (Denmark)
Jensen, Anders Nedergaard; Lauritzen, Niels; Roune, Bjarke Hammersholt
Maximal lattice free bodies are maximal polytopes without interior integral points. Scarf initiated the study of maximal lattice free bodies relative to the facet normals in a fixed matrix. In this paper we give an efficient algorithm for computing the maximal lattice free bodies of an integral m… The method is inspired by the novel algorithm by Einstein, Lichtblau, Strzebonski and Wagon and the Groebner basis approach by Roune.
Mirro, Amy E.; Brady, Samuel L.; Kaufman, Robert. A.
2016-01-01
Purpose To implement the maximum level of statistical iterative reconstruction that can be used to establish dose-reduced head CT protocols in a primarily pediatric population. Methods Select head examinations (brain, orbits, sinus, maxilla and temporal bones) were investigated. Dose-reduced head protocols using adaptive statistical iterative reconstruction (ASiR) were compared for image quality with the original filtered back projection (FBP) reconstructed protocols in phantom using the following metrics: image noise frequency (change in perceived appearance of noise texture), image noise magnitude, contrast-to-noise ratio (CNR), and spatial resolution. Dose reduction estimates were based on computed tomography dose index (CTDIvol) values. Patient CTDIvol and image noise magnitude were assessed in 737 pre- and post-dose-reduction examinations. Results Image noise texture was acceptable up to 60% ASiR for the Soft reconstruction kernel (at both 100 and 120 kVp), and up to 40% ASiR for the Standard reconstruction kernel. Implementation of 40% and 60% ASiR led to an average reduction in CTDIvol of 43% for brain, 41% for orbits, 30% for maxilla, 43% for sinus, and 42% for temporal bone protocols for patients between 1 month and 26 years, while maintaining an average noise magnitude difference of 0.1% (range: −3% to 5%), improving the CNR of low-contrast soft tissue targets, and improving the spatial resolution of high-contrast bony anatomy, as compared to FBP. Conclusion This study demonstrates a methodology for maximizing patient dose reduction while maintaining image quality using statistical iterative reconstruction for a primarily pediatric population undergoing head CT examination. PMID:27056425
Iterative reconstruction reduces abdominal CT dose
International Nuclear Information System (INIS)
Martinsen, Anne Catrine Trægde; Sæther, Hilde Kjernlie; Hol, Per Kristian; Olsen, Dag Rune; Skaane, Per
2012-01-01
Objective: In medical imaging, lowering radiation dose from computed tomography scanning, without reducing diagnostic performance is a desired achievement. Iterative image reconstruction may be one tool to achieve dose reduction. This study reports the diagnostic performance using a blending of 50% statistical iterative reconstruction (ASIR) and filtered back projection reconstruction (FBP) compared to standard FBP image reconstruction at different dose levels for liver phantom examinations. Methods: An anthropomorphic liver phantom was scanned at 250, 185, 155, 140, 120 and 100 mA s, on a 64-slice GE Lightspeed VCT scanner. All scans were reconstructed with ASIR and FBP. Four readers evaluated independently on a 5-point scale 21 images, each containing 32 test sectors. In total 672 areas were assessed. ROC analysis was used to evaluate the differences. Results: There was a difference in AUC between the 250 mA s FBP images and the 120 and 100 mA s FBP images. ASIR reconstruction gave a significantly higher diagnostic performance compared to standard reconstruction at 100 mA s. Conclusion: A blending of 50–90% ASIR and FBP may improve image quality of low dose CT examinations of the liver, and thus give a potential for reducing radiation dose.
Hybrid spectral CT reconstruction.
Directory of Open Access Journals (Sweden)
Darin P Clark
Current photon counting x-ray detector (PCD) technology faces limitations associated with spectral fidelity and photon starvation. One strategy for addressing these limitations is to supplement PCD data with high-resolution, low-noise data acquired with an energy-integrating detector (EID). In this work, we propose an iterative, hybrid reconstruction technique which combines the spectral properties of PCD data with the resolution and signal-to-noise characteristics of EID data. Our hybrid reconstruction technique is based on an algebraic model of data fidelity which substitutes the EID data into the data fidelity term associated with the PCD reconstruction, resulting in a joint reconstruction problem. Within the split Bregman framework, these data fidelity constraints are minimized subject to additional constraints on spectral rank and on joint intensity-gradient sparsity measured between the reconstructions of the EID and PCD data. Following a derivation of the proposed technique, we apply it to the reconstruction of a digital phantom which contains realistic concentrations of iodine, barium, and calcium encountered in small-animal micro-CT. The results of this experiment suggest reliable separation and detection of iodine at concentrations ≥ 5 mg/ml and barium at concentrations ≥ 10 mg/ml in 2-mm features for EID and PCD data reconstructed with inherent spatial resolutions of 176 μm and 254 μm, respectively (point spread function, FWHM). Furthermore, hybrid reconstruction is demonstrated to enhance spatial resolution within material decomposition results and to improve low-contrast detectability by as much as 2.6 times relative to reconstruction with PCD data only. The parameters of the simulation experiment are based on an in vivo micro-CT experiment conducted in a mouse model of soft-tissue sarcoma. Material decomposition results produced from this in vivo data demonstrate the feasibility of distinguishing two K-edge contrast agents with…
Vidovič, L.; Milanič, M.; Majaron, B.
2013-09-01
Pulsed photothermal radiometry (PPTR) allows for noninvasive determination of the laser-induced temperature depth profile in strongly scattering samples, including human skin. In a recent experimental study, we demonstrated that such information can be used to derive rather accurate predictions of the maximal safe radiant exposure on an individual patient basis. This has important implications for the efficacy and safety of several laser applications in dermatology and aesthetic surgery, which are often compromised by the risk of adverse side effects (e.g., scarring and dyspigmentation) resulting from nonselective absorption of strong laser light in epidermal melanin. In this study, the differences between the individual maximal safe radiant exposure values as predicted from PPTR temperature depth profiling performed with a commercial mid-IR thermal camera (as used to acquire the original patient data) and with our customized PPTR setup are analyzed. To this end, the latter has been used to acquire 17 PPTR records from three healthy volunteers, using 1 ms laser irradiation at 532 nm and a signal sampling rate of 20 000 … The laser-induced temperature profiles are reconstructed first from the intact PPTR signals, and then by binning the data to imitate the lower sampling rate of the IR camera (1000 fps). Using either the initial temperature profile in a dedicated numerical model of heat transfer or protein denaturation dynamics, the predicted levels of epidermal thermal damage and the corresponding maximal safe radiant exposures are compared. A similar analysis is also performed with regard to the differences between the noise characteristics of the two PPTR setups.
Maximizing Macromolecule Crystal Size for Neutron Diffraction Experiments
Judge, R. A.; Kephart, R.; Leardi, R.; Myles, D. A.; Snell, E. H.; vanderWoerd, M.; Curreri, Peter A. (Technical Monitor)
2002-01-01
A challenge in neutron diffraction experiments is growing large (greater than 1 mm³) macromolecule crystals. In taking up this challenge we have used statistical experiment design techniques to quickly identify crystallization conditions under which the largest crystals grow. These techniques provide the maximum information for minimal experimental effort, allowing optimal screening of crystallization variables in a simple experimental matrix, using the minimum amount of sample. Analysis of the results quickly tells the investigator which conditions are the most important for the crystallization. These can then be used to maximize the crystallization results in terms of reducing crystal numbers and providing large crystals of suitable habit. We have used these techniques to grow large crystals of glucose isomerase, an industrial enzyme used extensively in the food industry for the conversion of glucose to fructose. The aim of this study is the elucidation of the enzymatic mechanism at the molecular level. The accurate determination of hydrogen positions, which is critical for this, is a requirement that neutron diffraction is uniquely suited to meet. Preliminary neutron diffraction experiments with these crystals conducted at the Institut Laue-Langevin (Grenoble, France) reveal diffraction to beyond 2.5 angstrom. Macromolecular crystal growth is a process involving many parameters, and statistical experimental design is naturally suited to this field. These techniques are sample independent and provide an experimental strategy to maximize crystal volume and habit for neutron diffraction studies.
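The screening strategy described above — a compact design matrix from which main effects identify the most important variables — can be sketched with a generic two-level full factorial design. The three factors and the toy response are invented for illustration, not the authors' actual matrix:

```python
from itertools import product

def full_factorial(levels_per_factor):
    """Enumerate every combination of coded factor levels."""
    return list(product(*levels_per_factor))

def main_effect(design, response, factor):
    """Mean response at the high level minus mean response at the low level."""
    hi = [r for row, r in zip(design, response) if row[factor] == 1]
    lo = [r for row, r in zip(design, response) if row[factor] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

# Three crystallization variables (e.g. precipitant, pH, temperature) at two
# coded levels each (-1 = low, +1 = high): 2^3 = 8 runs.
design = full_factorial([[-1, 1]] * 3)
response = [sum(row) for row in design]  # toy "crystal size" response
effects = [main_effect(design, response, f) for f in range(3)]
```

Ranking the absolute effects tells the investigator which variables to carry into a follow-up optimization, which is the screening role the abstract describes.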
Consumer-driven profit maximization in broiler production and processing
Directory of Open Access Journals (Sweden)
Ecio de Farias Costa
2004-01-01
Increased emphasis on consumer markets in broiler profit-maximizing modeling generates results that differ from those of traditional profit-maximization models. This approach reveals that the adoption of step pricing and the consideration of marketing options (examples of responsiveness to consumers) affect the optimal feed formulation levels and the types of broiler production that generate maximum profitability. The adoption of step pricing attests that higher profits can be obtained for targeted weights only if premium prices for broiler products are contracted.
On maximal surfaces in asymptotically flat space-times
International Nuclear Information System (INIS)
Bartnik, R.; Chrusciel, P.T.; O Murchadha, N.
1990-01-01
Existence of maximal and 'almost maximal' hypersurfaces in asymptotically flat space-times is established under boundary conditions weaker than those considered previously. We show in particular that every vacuum evolution of asymptotically flat data for Einstein equations can be foliated by slices maximal outside a spatially compact set and that every (strictly) stationary asymptotically flat space-time can be foliated by maximal hypersurfaces. Amongst other uniqueness results, we show that maximal hypersurface can be used to 'partially fix' an asymptotic Poincare group. (orig.)
Time-of-flight PET image reconstruction using origin ensembles
Wülker, Christian; Sitek, Arkadiusz; Prevrhal, Sven
2015-03-01
The origin ensemble (OE) algorithm is a novel statistical method for minimum-mean-square-error (MMSE) reconstruction of emission tomography data. This method allows one to perform reconstruction entirely in the image domain, i.e. without the use of forward and backprojection operations. We have investigated the OE algorithm in the context of list-mode (LM) time-of-flight (TOF) PET reconstruction. In this paper, we provide a general introduction to MMSE reconstruction, and a statistically rigorous derivation of the OE algorithm. We show how to efficiently incorporate TOF information into the reconstruction process, and how to correct for random coincidences and scattered events. To examine the feasibility of LM-TOF MMSE reconstruction with the OE algorithm, we applied MMSE-OE and standard maximum-likelihood expectation-maximization (ML-EM) reconstruction to LM-TOF phantom data with a count number typically registered in clinical PET examinations. We analyzed the convergence behavior of the OE algorithm, and compared reconstruction time and image quality to that of the EM algorithm. In summary, during the reconstruction process, MMSE-OE contrast recovery (CRV) remained approximately the same, while background variability (BV) gradually decreased with an increasing number of OE iterations. The final MMSE-OE images exhibited lower BV and a slightly lower CRV than the corresponding ML-EM images. The reconstruction time of the OE algorithm was approximately 1.3 times longer. At the same time, the OE algorithm can inherently provide a comprehensive statistical characterization of the acquired data. This characterization can be utilized for further data processing, e.g. in kinetic analysis and image registration, making the OE algorithm a promising approach in a variety of applications.
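The standard ML-EM baseline against which the OE algorithm is compared above has a compact multiplicative update. A hedged sketch with a dense toy system matrix (generic emission-tomography EM, not the authors' list-mode TOF code):

```python
def ml_em(A, y, n_iter=20):
    """ML-EM: x_j <- (x_j / s_j) * sum_i A_ij * y_i / (A x)_i, with s_j = sum_i A_ij."""
    m, n = len(A), len(A[0])
    x = [1.0] * n                                          # uniform initial image
    sens = [sum(A[i][j] for i in range(m)) for j in range(n)]
    for _ in range(n_iter):
        proj = [sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]   # forward
        ratio = [y[i] / proj[i] if proj[i] > 0 else 0.0 for i in range(m)]
        back = [sum(A[i][j] * ratio[i] for i in range(m)) for j in range(n)]  # backproject
        x = [x[j] * back[j] / sens[j] if sens[j] > 0 else 0.0 for j in range(n)]
    return x

# With a trivial identity system matrix the fixed point is the data itself.
x = ml_em([[1.0, 0.0], [0.0, 1.0]], [3.0, 5.0])
```

Note the contrast with OE: ML-EM alternates forward and back projections, which is exactly the operation the OE algorithm avoids by working entirely in the image domain.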
Overview of image reconstruction
International Nuclear Information System (INIS)
Marr, R.B.
1980-04-01
Image reconstruction (or computerized tomography, etc.) is any process whereby a function, f, on R^n is estimated from empirical data pertaining to its integrals, ∫f(x) dx, for some collection of hyperplanes of dimension k < n. The paper begins with background information on how image reconstruction problems have arisen in practice, and describes some of the application areas of past or current interest; these include radioastronomy, optics, radiology and nuclear medicine, electron microscopy, acoustical imaging, geophysical tomography, nondestructive testing, and NMR zeugmatography. Then the various reconstruction algorithms are discussed in five classes: summation, or simple back-projection; convolution, or filtered back-projection; Fourier and other functional transforms; orthogonal function series expansion; and iterative methods. Certain more technical mathematical aspects of image reconstruction are considered from the standpoint of uniqueness, consistency, and stability of solution. The paper concludes by presenting certain open problems. 73 references
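The first algorithm class listed above, summation or simple back-projection, can be illustrated on a 2×2 image from its horizontal and vertical ray sums (purely didactic; the blurring it produces is precisely why the filtered variants in the second class exist):

```python
def backproject_2x2(row_sums, col_sums):
    """Each pixel receives the average of the two ray sums passing through it."""
    return [[(row_sums[i] + col_sums[j]) / 2.0 for j in range(2)] for i in range(2)]

image = [[1.0, 2.0], [3.0, 4.0]]
rows = [sum(r) for r in image]                        # horizontal ray sums [3.0, 7.0]
cols = [image[0][j] + image[1][j] for j in range(2)]  # vertical ray sums [4.0, 6.0]
print(backproject_2x2(rows, cols))  # [[3.5, 4.5], [5.5, 6.5]] -- a blurred copy
```

The relative ordering of pixel values survives, but contrast is lost; filtered back-projection applies a ramp filter to the projections first to undo this blurring.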
The evolving breast reconstruction
DEFF Research Database (Denmark)
Thomsen, Jørn Bo; Gunnarsson, Gudjon Leifur
2014-01-01
The aim of this editorial is to give an update on the use of the propeller thoracodorsal artery perforator flap (TAP/TDAP-flap) within the field of breast reconstruction. The TAP-flap can be dissected by a combined use of a monopolar cautery and a scalpel. Microsurgical instruments are generally...... not needed. The propeller TAP-flap can be designed in different ways, three of these have been published: (I) an oblique upwards design; (II) a horizontal design; (III) an oblique downward design. The latissimus dorsi-flap is a good and reliable option for breast reconstruction, but has been criticized...... for oncoplastic and reconstructive breast surgery and will certainly become an invaluable addition to breast reconstructive methods....
Forging Provincial Reconstruction Teams
National Research Council Canada - National Science Library
Honore, Russel L; Boslego, David V
2007-01-01
The Provincial Reconstruction Team (PRT) training mission completed by First U.S. Army in April 2006 was a joint Service effort to meet a requirement from the combatant commander to support goals in Afghanistan...
Breast Reconstruction with Implants
A Note of Caution on Maximizing Entropy
Directory of Open Access Journals (Sweden)
Richard E. Neapolitan
2014-07-01
The Principle of Maximum Entropy is often used to update probabilities due to evidence instead of performing Bayesian updating using Bayes’ Theorem, and its use often has efficacious results. However, in some circumstances the results seem unacceptable and unintuitive. This paper discusses some of these cases, and discusses how to identify some of the situations in which this principle should not be used. The paper starts by reviewing three approaches to probability, namely the classical approach, the limiting frequency approach, and the Bayesian approach. It then introduces maximum entropy and shows its relationship to the three approaches. Next, through examples, it shows that maximizing entropy sometimes can stand in direct opposition to Bayesian updating based on reasonable prior beliefs. The paper concludes that if we take the Bayesian approach that probability is about reasonable belief based on all available information, then we can resolve the conflict between the maximum entropy approach and the Bayesian approach that is demonstrated in the examples.
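A maximum-entropy distribution of the kind discussed above can be computed by solving for the Lagrange multiplier of a mean constraint, with p_i ∝ exp(λ·x_i). The support and target means below are made up for illustration (a minimal sketch, not taken from the paper's examples):

```python
import math

def maxent_mean(values, target_mean, lo=-50.0, hi=50.0, tol=1e-12):
    """Max-entropy pmf on `values` with a fixed mean: p_i proportional to exp(lam * x_i).
    The multiplier lam is found by bisection (the mean is increasing in lam)."""
    def mean(lam):
        w = [math.exp(lam * v) for v in values]
        z = sum(w)
        return sum(v * wi for v, wi in zip(values, w)) / z
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if mean(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    w = [math.exp(((lo + hi) / 2.0) * v) for v in values]
    z = sum(w)
    return [wi / z for wi in w]

p = maxent_mean([1, 2, 3], 2.0)
print([round(x, 3) for x in p])  # [0.333, 0.333, 0.333] -- symmetric constraint, uniform pmf
```

A Bayesian, by contrast, would combine the same constraint with a prior; when the prior is informative, the resulting posterior need not match the max-ent solution, which is the tension the paper examines.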
New features of the maximal abelian projection
International Nuclear Information System (INIS)
Bornyakov, V.G.; Polikarpov, M.I.; Syritsyn, S.N.; Schierholz, G.; Suzuki, T.
2005-12-01
After fixing the maximal abelian gauge in SU(2) lattice gauge theory we decompose the nonabelian gauge field into the so-called monopole field and the modified nonabelian field with monopoles removed. We then calculate the respective static potentials and find that the potential due to the modified nonabelian field is nonconfining while, as is well known, the monopole field potential is linear. Furthermore, we show that the sum of these potentials approximates the nonabelian static potential with 5% or better precision at all distances considered. We conclude that at large distances the monopole field potential describes the classical energy of the hadronic string while the modified nonabelian field potential describes the string fluctuations. A similar decomposition was observed to work for the adjoint static potential. A check was also made of the center projection in the direct center gauge. Two static potentials, determined by the projected Z_2 field and by the modified nonabelian field without the Z_2 component, were calculated. It was found that their sum is a substantially worse approximation of the SU(2) static potential than that found in the monopole case. It is further demonstrated that a similar decomposition can be made for the flux tube action/energy density. (orig.)
Maximization of fructose esters synthesis by response surface methodology.
Neta, Nair Sampaio; Peres, António M; Teixeira, José A; Rodrigues, Ligia R
2011-07-01
Enzymatic synthesis of fructose fatty acid ester was performed in organic solvent media, using a purified lipase from Candida antarctica B immobilized in acrylic resin. Response surface methodology with a central composite rotatable design based on five levels was implemented to optimize three experimental operating conditions (temperature, agitation and reaction time). A statistically significant cubic model was established. Temperature and reaction time were found to be the most significant parameters. The optimum operational conditions for maximizing the synthesis of fructose esters were 57.1°C, 100 rpm and 37.8 h. The model was validated in the identified optimal conditions to check its adequacy and accuracy, and an experimental esterification percentage of 88.4% (±0.3%) was obtained. These results showed that an improvement of the enzymatic synthesis of fructose esters was obtained under the optimized conditions. Copyright © 2011 Elsevier B.V. All rights reserved.
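Response-surface optimization of this kind reduces, in a single variable, to fitting a low-order polynomial and locating its stationary point. A hedged single-factor sketch (the temperature/yield numbers are invented for illustration and are not the paper's data, which used a three-factor cubic model):

```python
def fit_quadratic(ts, ys):
    """Least-squares fit y ~ c0 + c1*t + c2*t^2 via the normal equations."""
    X = [[1.0, t, t * t] for t in ts]
    A = [[sum(X[k][i] * X[k][j] for k in range(len(ts))) for j in range(3)]
         for i in range(3)]
    b = [sum(X[k][i] * ys[k] for k in range(len(ts))) for i in range(3)]
    for i in range(3):                     # Gauss-Jordan elimination
        piv = A[i][i]
        A[i] = [v / piv for v in A[i]]
        b[i] /= piv
        for r in range(3):
            if r != i:
                f = A[r][i]
                A[r] = [vr - f * vi for vr, vi in zip(A[r], A[i])]
                b[r] -= f * b[i]
    return b

# Invented single-factor data: esterification (%) versus temperature (deg C).
c = fit_quadratic([30.0, 40.0, 50.0, 60.0, 70.0], [60.0, 75.0, 84.0, 87.0, 82.0])
t_opt = -c[1] / (2.0 * c[2])  # stationary point of the fitted parabola
```

With a negative quadratic coefficient the stationary point is a maximum; central composite designs extend this idea to several factors so that interaction and curvature terms can be estimated from few runs.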
Maximizing Information from Residential Measurements of Volatile Organic Compounds
Energy Technology Data Exchange (ETDEWEB)
Maddalena, Randy [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Li, Na [Berkeley Analytical Associates, Richmond, CA (United States); Hodgson, Alfred [Berkeley Analytical Associates, Richmond, CA (United States); Offermann, Francis [Indoor Environmental Engineering, San Francisco, CA (United States); Singer, Brett [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
2013-02-01
Continually changing materials used in home construction and finishing can introduce new chemicals or changes in the VOC profile of residential air, and the trend towards tighter homes can lead to higher exposure concentrations for many indoor sources. However, the complex mixture of VOCs in residential air makes it difficult to discover emerging contaminants and/or trends in pollutant profiles. The purpose of this study is to prepare a comprehensive library of chemicals found in homes, along with a semi-quantitative approach to maximize the information gained from VOC measurements. We carefully reviewed data from 108 new California homes and identified 238 individual compounds. The majority of the identified VOCs originated indoors. Only 31% were found to have relevant health-based exposure guidelines, and less than 10% had a chronic reference exposure level (CREL). This finding highlights the importance of extending IAQ studies to include a wider range of VOCs.
On Maximal Hard-Core Thinnings of Stationary Particle Processes
Hirsch, Christian; Last, Günter
2018-02-01
The present paper studies existence and distributional uniqueness of subclasses of stationary hard-core particle systems arising as thinnings of stationary particle processes. These subclasses are defined by natural maximality criteria. We investigate two specific criteria, one related to the intensity of the hard-core particle process, the other one being a local optimality criterion on the level of realizations. In fact, the criteria are equivalent under suitable moment conditions. We show that stationary hard-core thinnings satisfying such criteria exist and are frequently distributionally unique. More precisely, distributional uniqueness holds in subcritical and barely supercritical regimes of continuum percolation. Additionally, based on the analysis of a specific example, we argue that fluctuations in grain sizes can play an important role for establishing distributional uniqueness at high intensities. Finally, we provide a family of algorithmically constructible approximations whose volume fractions are arbitrarily close to the maximum.
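The local optimality criterion characterized above — no particle can be added back without violating the hard-core constraint — can be illustrated with a greedy sequential thinning of a finite point set. This is only a deterministic toy on hypothetical points, not the paper's stationary construction:

```python
def greedy_hardcore(points, r):
    """Keep a point iff it lies at distance >= r from every point kept so far.
    The result is maximal: no rejected point can be re-inserted without conflict."""
    keep = []
    for p in points:
        if all((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 >= r * r for q in keep):
            keep.append(p)
    return keep

pts = [(0.0, 0.0), (0.05, 0.0), (1.0, 1.0), (1.02, 1.0)]
print(greedy_hardcore(pts, 0.1))  # [(0.0, 0.0), (1.0, 1.0)]
```

The paper's question is what survives of this picture for infinite stationary processes, where the scan order above is unavailable and maximality must be phrased via intensity or local optimality.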
An Expectation-Maximization Method for Calibrating Synchronous Machine Models
Energy Technology Data Exchange (ETDEWEB)
Meng, Da; Zhou, Ning; Lu, Shuai; Lin, Guang
2013-07-21
The accuracy of a power system dynamic model is essential to its secure and efficient operation. Lower confidence in model accuracy usually leads to conservative operation and lowers asset usage. To improve model accuracy, this paper proposes an expectation-maximization (EM) method to calibrate the synchronous machine model using phasor measurement unit (PMU) data. First, an extended Kalman filter (EKF) is applied to estimate the dynamic states using measurement data. Then, the parameters are calculated based on the estimated states using maximum likelihood estimation (MLE) method. The EM method iterates over the preceding two steps to improve estimation accuracy. The proposed EM method’s performance is evaluated using a single-machine infinite bus system and compared with a method where both state and parameters are estimated using an EKF method. Sensitivity studies of the parameter calibration using EM method are also presented to show the robustness of the proposed method for different levels of measurement noise and initial parameter uncertainty.
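The alternation described above — estimate the hidden states given current parameters (E-step), then re-fit the parameters by maximum likelihood (M-step) — is the generic EM pattern. A hedged sketch on a textbook problem (a two-component Gaussian mixture; the synchronous-machine model with its EKF state estimator is far more involved):

```python
import math

def em_two_means(data, mu0, mu1, sigma=1.0, n_iter=50):
    """EM for an equal-weight mixture of two Gaussians with known, equal variance."""
    for _ in range(n_iter):
        # E-step: posterior responsibility of component 1 for each sample.
        r = []
        for x in data:
            w0 = math.exp(-0.5 * ((x - mu0) / sigma) ** 2)
            w1 = math.exp(-0.5 * ((x - mu1) / sigma) ** 2)
            r.append(w1 / (w0 + w1))
        # M-step: responsibility-weighted maximum-likelihood mean updates.
        mu1 = sum(ri * x for ri, x in zip(r, data)) / sum(r)
        mu0 = sum((1 - ri) * x for ri, x in zip(r, data)) / (len(data) - sum(r))
    return mu0, mu1

m0, m1 = em_two_means([0.0, 0.1, -0.1, 5.0, 5.1, 4.9], -1.0, 6.0)
```

In the paper's setting the E-step is played by the EKF state estimate and the M-step by the MLE parameter update; the convergence logic is the same.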
Optimal Energy Management for a Smart Grid using Resource-Aware Utility Maximization
Abegaz, Brook W.; Mahajan, Satish M.; Negeri, Ebisa O.
2016-06-01
Heterogeneous energy prosumers are aggregated to form a smart grid based energy community managed by a central controller which could maximize their collective energy resource utilization. Using the central controller and distributed energy management systems, various mechanisms that harness the power profile of the energy community are developed for optimal, multi-objective energy management. The proposed mechanisms include resource-aware, multi-variable energy utility maximization objectives, namely: (1) maximizing the net green energy utilization, (2) maximizing the prosumers' level of comfortable, high quality power usage, and (3) maximizing the economic dispatch of energy storage units that minimize the net energy cost of the energy community. Moreover, an optimal energy management solution that combines the three objectives has been implemented by developing novel techniques of optimally flexible (un)certainty projection and appliance based pricing decomposition in an IBM ILOG CPLEX studio. A real-world, per-minute data from an energy community consisting of forty prosumers in Amsterdam, Netherlands is used. Results show that each of the proposed mechanisms yields significant increases in the aggregate energy resource utilization and welfare of prosumers as compared to traditional peak-power reduction methods. Furthermore, the multi-objective, resource-aware utility maximization approach leads to an optimal energy equilibrium and provides a sustainable energy management solution as verified by the Lagrangian method. The proposed resource-aware mechanisms could directly benefit emerging energy communities in the world to attain their energy resource utilization targets.
Rapisarda, E; Bettinardi, V; Thielemans, K; Gilardi, M C
2010-07-21
The interest in positron emission tomography (PET), and particularly in hybrid integrated PET/CT systems, has significantly increased in the last few years due to the improved quality of the obtained images. Nevertheless, one of the most important limits of the PET imaging technique is still its poor spatial resolution, due to several physical factors originating both at the emission level (e.g. positron range, photon non-collinearity) and at the detection level (e.g. scatter inside the scintillating crystals, finite dimensions of the crystals and depth of interaction). One possible way to improve the spatial resolution of the images is to measure the point spread function (PSF) of the system and then account for it inside the reconstruction algorithm. In this work, the system response of the GE Discovery STE operating in 3D mode has been characterized by acquiring ²²Na point sources in different positions of the scanner field of view. An image-based model of the PSF was then obtained by fitting asymmetric two-dimensional Gaussians to the ²²Na images reconstructed with small pixel sizes. The PSF was then incorporated, at the image level, in a three-dimensional ordered-subset maximum likelihood expectation maximization (OS-MLEM) reconstruction algorithm. A qualitative and quantitative validation of the algorithm accounting for the PSF has been performed on phantom and clinical data, showing improved spatial resolution, higher contrast and lower noise compared with the corresponding images obtained using the standard OS-MLEM algorithm.
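The image-based PSF model above rests on two simple ingredients that can be sketched directly: an asymmetric 2D Gaussian profile, and the conversion between its width parameter and the FWHM usually quoted for scanner resolution. The fitting step itself (least squares over amplitude, centre and widths) is omitted here:

```python
import math

def gauss2d(x, y, amp, x0, y0, sx, sy):
    """Asymmetric 2D Gaussian: independent widths sx, sy along the two axes."""
    return amp * math.exp(-0.5 * (((x - x0) / sx) ** 2 + ((y - y0) / sy) ** 2))

def fwhm(sigma):
    """Full width at half maximum of a Gaussian profile."""
    return 2.0 * math.sqrt(2.0 * math.log(2.0)) * sigma

# At the centre the profile equals its amplitude; one sigma away it drops to exp(-1/2).
peak = gauss2d(0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 2.0)
print(round(fwhm(1.0), 4))  # 2.3548
```

Fitting such profiles to reconstructed point-source images at several field-of-view positions yields the spatially varying PSF that the OS-MLEM update then deconvolves at the image level.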
Patients' Aesthetic Concerns After Horizontally Placed Abdominal Free Flap Breast Reconstruction.
Kim, Eun Key; Suh, Young Chul; Maldonado, Andrés A; Yun, Jiyoung; Lee, Taik Jong
2015-10-01
The present study aimed to analyze patients' aesthetic concerns after breast reconstruction with abdominal free flap by reporting secondary cosmetic procedures performed based on the patients' request, and analyzed the effect of adjuvant therapies and other variables on such outcomes. All patients who underwent unilateral immediate reconstruction were enrolled prospectively. Free abdominal flaps were placed horizontally with little manipulation. Secondary procedures were actively recommended during the follow-up period to meet the patients' aesthetic concerns. The numbers and types of the secondary procedures and the effects of various factors were analyzed. 150 patients met the eligibility criteria. The average number of overall secondary surgeries per patient was 1.25. Patients with skin-sparing mastectomy required a significantly higher number of secondary surgeries compared with those who underwent nipple-areolar skin-sparing mastectomy. When confined to the cosmetic procedures, 58 (38.7%) patients underwent 75 operations. The most common procedures were flank dog ear revision, fat injection of the reconstructed breast, and breast liposuction. None of the radiated patients underwent liposuction of the flap. The most commonly liposuctioned regions were the central-lateral and lower-lateral, while fat was most commonly injected into the upper-medial and upper-central parts of the breast. The present study delineated the numbers and types of the secondary operations after horizontally placed abdominal free flap transfer, with analysis of the influence of various factors. Addressing such issues during the primary reconstruction would help to reduce the need for and extent of the secondary operations and to maximize the aesthetic outcome.
International Nuclear Information System (INIS)
Zeng, G.L.; Weng, Y.; Gullberg, G.T.
1997-01-01
Single photon emission computed tomography (SPECT) imaging with cone-beam collimators provides improved sensitivity and spatial resolution for imaging small objects with large field-of-view detectors. It is known that Tuy's cone-beam data sufficiency condition must be met to obtain artifact-free reconstructions. Even though Tuy's condition was derived for an attenuation-free situation, the authors hypothesize that an artifact-free reconstruction can be obtained even if the cone-beam data are attenuated, provided the imaging orbit satisfies Tuy's condition and the exact attenuation map is known. In the authors' studies, emission data are acquired using nonplanar circle-and-line orbits to acquire cone-beam data for tomographic reconstructions. An extended iterative ML-EM (maximum likelihood-expectation maximization) reconstruction algorithm is derived and used to reconstruct projection data with either a pre-acquired or assumed attenuation map. Quantitative accuracy of the attenuation corrected emission reconstruction is significantly improved
POLITENESS MAXIM OF MAIN CHARACTER IN SECRET FORGIVEN
Directory of Open Access Journals (Sweden)
Sang Ayu Isnu Maharani
2017-06-01
Maxims of politeness are an interesting subject of study, since politeness is instilled in us from childhood. We are obliged to be polite to everyone, in both speech and action. We often manage to show politeness in our spoken expressions even when our intentions are not so polite; for example, we must appreciate another's opinion even when we object to it. In this article the analysis of politeness is based on the maxims proposed by Leech, who distinguished six types of politeness maxim. The discussion shows that the main characters (Kristen and Kami) use all types of maxim in their conversations. The most commonly used are the approbation maxim and the agreement maxim.
Industrial dynamic tomographic reconstruction
International Nuclear Information System (INIS)
Oliveira, Eric Ferreira de
2016-01-01
The state of the art in methods applied to industrial processes is currently based on the principles of classical tomographic reconstruction developed for static distributions, or is limited to cases of low variability of the density distribution function of the tomographed object. Noise and motion artifacts are the main problems caused by mismatches among data from views acquired at different instants. These add to the known fact that using a limited amount of data can result in noise, artifacts and inconsistencies with the distribution under study. One objective of the present work is to discuss the difficulties that arise when reconstruction algorithms originally developed for static distributions are applied to dynamic tomography. Another objective is to propose solutions that aim at reducing the temporal information loss caused by employing regular acquisition systems for dynamic processes. With respect to dynamic image reconstruction, a comparison was conducted between different static reconstruction methods, such as MART and FBP, when used for dynamic scenarios. This comparison was based on an MCNPx simulation, on an analytical setup of an aluminum cylinder that moves along the section of a riser during the acquisition process, and on cross-section images from CFD techniques. As for adapting current tomographic acquisition systems to dynamic processes, this work established a sequence of tomographic views in a just-in-time fashion for visualization purposes, a way of displaying density information as soon as it becomes amenable to image reconstruction. A third contribution was to take advantage of the triple color channel used to display colored images on most displays, so that, by appropriately scaling the acquired values of each view in the linear system of the reconstruction, it was possible to imprint a temporal trace into the regularly
International Nuclear Information System (INIS)
Watanabe, Shuichi; Kudo, Hiroyuki; Saito, Tsuneo
1993-01-01
In this paper, we propose a new reconstruction algorithm based on MAP (maximum a posteriori probability) estimation principle for emission tomography. To improve noise suppression properties of the conventional ML-EM (maximum likelihood expectation maximization) algorithm, direct three-dimensional reconstruction that utilizes intensity correlations between adjacent transaxial slices is introduced. Moreover, to avoid oversmoothing of edges, a priori knowledge of RI (radioisotope) distribution is represented by using a doubly-stochastic image model called the compound Gauss-Markov random field. The a posteriori probability is maximized by using the iterative GEM (generalized EM) algorithm. Computer simulation results are shown to demonstrate validity of the proposed algorithm. (author)
Alternative reconstruction after pancreaticoduodenectomy
Directory of Open Access Journals (Sweden)
Cooperman Avram M
2008-01-01
Full Text Available Abstract Background Pancreaticoduodenectomy is the procedure of choice for tumors of the head of the pancreas and periampulla. Despite advances in surgical technique and postoperative care, the procedure continues to carry a high morbidity rate. One of the most common morbidities is delayed gastric emptying with rates of 15%–40%. Following two prolonged cases of delayed gastric emptying, we altered our reconstruction to avoid this complication altogether. Subsequently, our patients underwent a classic pancreaticoduodenectomy with an undivided Roux-en-Y technique for reconstruction. Methods We reviewed the charts of our last 13 Whipple procedures evaluating them for complications, specifically delayed gastric emptying. We compared the outcomes of those patients to a control group of 15 patients who underwent the Whipple procedure with standard reconstruction. Results No instances of delayed gastric emptying occurred in patients who underwent an undivided Roux-en-Y technique for reconstruction. There was 1 wound infection (8%, 1 instance of pneumonia (8%, and 1 instance of bleeding from the gastrojejunal staple line (8%. There was no operative mortality. Conclusion Use of the undivided Roux-en-Y technique for reconstruction following the Whipple procedure may decrease the incidence of delayed gastric emptying. In addition, it has the added benefit of eliminating bile reflux gastritis. Future randomized control trials are recommended to further evaluate the efficacy of the procedure.
Maximizers versus satisficers: Decision-making styles, competence, and outcomes
Andrew M. Parker; Wändi Bruine de Bruin; Baruch Fischhoff
2007-01-01
Our previous research suggests that people reporting a stronger desire to maximize obtain worse life outcomes (Bruine de Bruin et al., 2007). Here, we examine whether this finding may be explained by the decision-making styles of self-reported maximizers. Expanding on Schwartz et al. (2002), we find that self-reported maximizers are more likely to show problematic decision-making styles, as evidenced by self-reports of less behavioral coping, greater dependence on others when making decision...
Scattering amplitudes over finite fields and multivariate functional reconstruction
International Nuclear Information System (INIS)
Peraro, Tiziano
2016-01-01
Several problems in computer algebra can be efficiently solved by reducing them to calculations over finite fields. In this paper, we describe an algorithm for the reconstruction of multivariate polynomials and rational functions from their evaluation over finite fields. Calculations over finite fields can in turn be efficiently performed using machine-size integers in statically-typed languages. We then discuss the application of the algorithm to several techniques related to the computation of scattering amplitudes, such as the four- and six-dimensional spinor-helicity formalism, tree-level recursion relations, and multi-loop integrand reduction via generalized unitarity. The method has good efficiency and scales well with the number of variables and the complexity of the problem. As an example combining these techniques, we present the calculation of full analytic expressions for the two-loop five-point on-shell integrands of the maximal cuts of the planar penta-box and the non-planar double-pentagon topologies in Yang-Mills theory, for a complete set of independent helicity configurations.
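The core primitive behind such functional reconstruction — recovering a polynomial from its values at sample points over a prime field — is Lagrange interpolation mod p. A univariate sketch (the paper's multivariate and rational-function machinery builds on this idea but is considerably more involved):

```python
def inv_mod(a, p):
    """Inverse in F_p via Fermat's little theorem (p prime)."""
    return pow(a, p - 2, p)

def interpolate_mod(points, p):
    """Coefficients (lowest degree first) of the unique polynomial through `points` over F_p."""
    n = len(points)
    coeffs = [0] * n
    for i, (xi, yi) in enumerate(points):
        basis, denom = [1], 1                 # Lagrange basis numerator and denominator
        for j, (xj, _) in enumerate(points):
            if j == i:
                continue
            new = [0] * (len(basis) + 1)
            for k, c in enumerate(basis):     # multiply basis polynomial by (x - xj)
                new[k] = (new[k] - xj * c) % p
                new[k + 1] = (new[k + 1] + c) % p
            basis, denom = new, denom * (xi - xj) % p
        scale = yi * inv_mod(denom % p, p) % p
        for k, c in enumerate(basis):
            coeffs[k] = (coeffs[k] + scale * c) % p
    return coeffs

# f(x) = 3x^2 + 2x + 1 sampled at x = 0, 1, 2 over F_101:
print(interpolate_mod([(0, 1), (1, 6), (2, 17)], 101))  # [1, 2, 3]
```

Because all arithmetic stays in machine-size integers mod p, the evaluations are cheap; the exact rational coefficients are then recovered from several primes, which is the efficiency argument the abstract makes.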
International Nuclear Information System (INIS)
Wetterich, C.
1999-01-01
The naturalness of maximal mixing between muon- and tau-neutrinos is investigated. A spontaneously broken nonabelian generation symmetry can explain a small parameter which governs the deviation from maximal mixing. In many cases all three neutrino masses are almost degenerate. Maximal ν_μ-ν_τ mixing suggests that the leading contribution to the light neutrino masses arises from the expectation value of a heavy weak triplet rather than from the seesaw mechanism. In this scenario the deviation from maximal mixing is predicted to be less than about 1%. (Copyright (c) 1999 Elsevier Science B.V., Amsterdam. All rights reserved.)
On the way towards a generalized entropy maximization procedure
International Nuclear Information System (INIS)
Bagci, G. Baris; Tirnakli, Ugur
2009-01-01
We propose a generalized entropy maximization procedure, which takes into account the generalized averaging procedures and information gain definitions underlying the generalized entropies. This novel generalized procedure is then applied to Renyi and Tsallis entropies. The generalized entropy maximization procedure for Renyi entropies results in the exponential stationary distribution asymptotically for q ∈ (0,1], in contrast to the stationary distribution of the inverse power law obtained through the ordinary entropy maximization procedure. Another result of the generalized entropy maximization procedure is that one can naturally obtain all the possible stationary distributions associated with the Tsallis entropies by employing either ordinary or q-generalized Fourier transforms in the averaging procedure.
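For contrast, the ordinary (Shannon-Jaynes) maximization that the generalized procedure extends can be sketched; with a normalization and a linear mean constraint the stationary distribution is exponential (standard textbook derivation, notation mine):

```latex
\begin{align}
\mathcal{L}[p] &= -\int p(x)\ln p(x)\,dx
  \;-\; \alpha\!\left(\int p(x)\,dx - 1\right)
  \;-\; \beta\!\left(\int x\,p(x)\,dx - \mu\right),\\
\frac{\delta \mathcal{L}}{\delta p(x)} &= -\ln p(x) - 1 - \alpha - \beta x = 0
  \;\;\Longrightarrow\;\;
  p(x) = \frac{1}{Z}\,e^{-\beta x}, \qquad Z = e^{1+\alpha}.
\end{align}
```

The generalized procedure replaces the linear averaging in the second constraint with the q-deformed averages appropriate to Renyi and Tsallis entropies, which is what changes the resulting stationary distribution.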
Violating Bell inequalities maximally for two d-dimensional systems
International Nuclear Information System (INIS)
Chen Jingling; Wu Chunfeng; Oh, C. H.; Kwek, L. C.; Ge Molin
2006-01-01
We show the maximal violation of Bell inequalities for two d-dimensional systems by using the method of the Bell operator. The maximal violation corresponds to the maximal eigenvalue of the Bell operator matrix. The eigenvectors corresponding to these eigenvalues are described by asymmetric entangled states. We estimate the maximum value of the eigenvalue for large dimension. A family of elegant entangled states |Ψ⟩_app that violate Bell inequality more strongly than the maximally entangled state but are somewhat close to these eigenvectors is presented. These approximate states can potentially be useful for quantum cryptography as well as many other important fields of quantum information.
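In the simplest case d = 2 the maximal eigenvalue of the Bell operator can be checked numerically. The sketch below uses the standard CHSH operator at Tsirelson-optimal measurement settings (the paper's general d-dimensional construction is not reproduced here); its largest eigenvalue is 2√2:

```python
import numpy as np

# CHSH Bell operator for two qubits: B = A1⊗(B1+B2) + A2⊗(B1−B2),
# with the measurement settings chosen at the Tsirelson-optimal angles.
Z = np.array([[1, 0], [0, -1]], dtype=float)
X = np.array([[0, 1], [1, 0]], dtype=float)

A1, A2 = Z, X
B1 = (Z + X) / np.sqrt(2)
B2 = (Z - X) / np.sqrt(2)

B = np.kron(A1, B1 + B2) + np.kron(A2, B1 - B2)
eigvals = np.linalg.eigvalsh(B)
print(eigvals.max())  # 2*sqrt(2) ≈ 2.828, the maximal quantum violation
```

The classical bound is 2, so the ratio eigvals.max()/2 ≈ 1.414 quantifies the violation; the corresponding eigenvector is the maximally entangled Bell state in this d = 2 case.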
Lindström, Elin; Sundin, Anders; Trampal, Carlos; Lindsjö, Lars; Ilan, Ezgi; Danfors, Torsten; Antoni, Gunnar; Sörensen, Jens; Lubberink, Mark
2018-02-15
Resolution and quantitative accuracy of positron emission tomography (PET) are highly influenced by the reconstruction method. Penalized likelihood estimation algorithms allow for fully convergent iterative reconstruction, generating a higher image contrast while limiting noise compared to ordered subsets expectation maximization (OSEM). In this study, block-sequential regularized expectation maximization (BSREM) was compared to time-of-flight OSEM (TOF-OSEM). Various strengths of the noise penalization factor β were tested along with scan durations and transaxial fields of view (FOVs) with the aim to evaluate the performance and clinical use of BSREM for 18F-FDG PET-computed tomography (CT), both in quantitative terms and in a qualitative visual evaluation. Methods: Eleven clinical whole-body 18F-FDG-PET/CT examinations acquired on a digital TOF PET/CT scanner were included. The data were reconstructed using BSREM with point spread function (PSF) recovery and β = 133, 267, 400 and 533, and TOF-OSEM with PSF, for various acquisition times/bed position (bp) and FOVs. Noise, signal-to-noise ratio (SNR), signal-to-background ratio (SBR), and standardized uptake values (SUVs) were analysed. A blinded visual image quality evaluation, rating several aspects, performed by two nuclear medicine physicians complemented the analysis. Results: The lowest levels of noise were reached with the highest β, resulting in the highest SNR, which in turn resulted in the lowest SBR. Noise equivalence to TOF-OSEM was found with β = 400, which produced a significant increase of SUVmax (11%), SNR (22%) and SBR (12%) compared to TOF-OSEM. BSREM with β = 533 at decreased acquisition time (2 min/bp) was comparable to TOF-OSEM at full acquisition duration (3 min/bp). Reconstructed FOV had an impact on BSREM outcome measures: SNR increased while SBR decreased when shifting FOV from 70 to 50 cm. The visual image quality evaluation resulted in similar scores for the reconstructions, although β = 400 obtained the
International Nuclear Information System (INIS)
Yeong, C.L.; Torquato, S.
1998-01-01
We formulate a procedure to reconstruct the structure of general random heterogeneous media from limited morphological information by extending the methodology of Rintoul and Torquato [J. Colloid Interface Sci. 186, 467 (1997)] developed for dispersions. The procedure has the advantages that it is simple to implement and generally applicable to multidimensional, multiphase, and anisotropic structures. Furthermore, an extremely useful feature is that it can incorporate any type and number of correlation functions in order to provide as much morphological information as is necessary for accurate reconstruction. We consider a variety of one- and two-dimensional reconstructions, including periodic and random arrays of rods, various distributions of disks, Debye random media, and a Fontainebleau sandstone sample. We also use our algorithm to construct heterogeneous media from specified hypothetical correlation functions, including an exponentially damped, oscillating function as well as physically unrealizable ones. copyright 1998 The American Physical Society
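The procedure minimizes the discrepancy between target and trial correlation functions by stochastic pixel swaps. A minimal 1-D sketch of this idea (toy periodic target; the cooling schedule and all parameters are illustrative choices, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

def s2(field):
    """Two-point correlation S2(r) of a periodic 1-D binary field via FFT."""
    f = np.fft.fft(field)
    return np.real(np.fft.ifft(f * np.conj(f))) / field.size

# Target medium and a random initial guess with the same volume fraction.
n = 64
target = (np.arange(n) % 8 < 4).astype(float)   # periodic "rods"
s2_target = s2(target)
trial = rng.permutation(target.copy())

def energy(field):
    return np.sum((s2(field) - s2_target) ** 2)

# Simulated annealing: swap a 1-pixel with a 0-pixel (preserving the volume
# fraction) and accept with the Metropolis rule under decreasing temperature.
E0 = energy(trial)
E, T = E0, 1e-3
for step in range(20000):
    i = rng.choice(np.flatnonzero(trial == 1))
    j = rng.choice(np.flatnonzero(trial == 0))
    trial[i], trial[j] = 0.0, 1.0
    E_new = energy(trial)
    if E_new < E or rng.random() < np.exp((E - E_new) / T):
        E = E_new
    else:
        trial[i], trial[j] = 1.0, 0.0    # revert the swap
    T *= 0.9995

print(E0, E)  # the S2 discrepancy decreases as the annealing proceeds
```

Incorporating additional correlation functions, as the abstract emphasizes, amounts to adding further squared-difference terms to the same energy functional.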
Delayed breast implant reconstruction
DEFF Research Database (Denmark)
Hvilsom, Gitte B.; Hölmich, Lisbet R.; Steding-Jessen, Marianne
2012-01-01
We evaluated the association between radiation therapy and severe capsular contracture or reoperation after 717 delayed breast implant reconstruction procedures (288 1- and 429 2-stage procedures) identified in the prospective database of the Danish Registry for Plastic Surgery of the Breast during...... of radiation therapy was associated with a non-significantly increased risk of reoperation after both 1-stage (HR = 1.4; 95% CI: 0.7-2.5) and 2-stage (HR = 1.6; 95% CI: 0.9-3.1) procedures. Reconstruction failure was highest (13.2%) in the 2-stage procedures with a history of radiation therapy. Breast...... reconstruction approaches other than implants should be seriously considered among women who have received radiation therapy....
Task complexity and maximal isometric strength gains through motor learning
McGuire, Jessica; Green, Lara A.; Gabriel, David A.
2014-01-01
This study compared the effects of a simple versus complex contraction pattern on the acquisition, retention, and transfer of maximal isometric strength gains and reductions in force variability. A control group (N = 12) performed simple isometric contractions of the wrist flexors. An experimental group (N = 12) performed complex proprioceptive neuromuscular facilitation (PNF) contractions consisting of maximal isometric wrist extension immediately reversing force direction to wrist flexion within a single trial. Ten contractions were completed on three consecutive days with a retention and transfer test 2 weeks later. For the retention test, the groups performed their assigned contraction pattern followed by a transfer test that consisted of the other contraction pattern for a cross-over design. Both groups exhibited comparable increases in strength (20.2%, P < 0.01) and reductions in mean torque variability (26.2%, P < 0.01), which were retained and transferred. There was a decrease in the coactivation ratio (antagonist/agonist muscle activity) for both groups, which was retained and transferred (35.2%, P < 0.01). The experimental group exhibited a linear decrease in variability of the torque‐ and sEMG‐time curves, indicating transfer to the simple contraction pattern (P < 0.01). The control group underwent a decrease in variability of the torque‐ and sEMG‐time curves from the first day of training to retention, but participants returned to baseline levels during the transfer condition (P < 0.01). However, the difference between torque RMS error versus the variability in torque‐ and sEMG‐time curves suggests the demands of the complex task were transferred, but could not be achieved in a reproducible way. PMID:25428951
Fast Dictionary-Based Reconstruction for Diffusion Spectrum Imaging
Bilgic, Berkin; Chatnuntawech, Itthi; Setsompop, Kawin; Cauley, Stephen F.; Yendiki, Anastasia; Wald, Lawrence L.; Adalsteinsson, Elfar
2015-01-01
Diffusion Spectrum Imaging (DSI) reveals detailed local diffusion properties at the expense of substantially long imaging times. It is possible to accelerate acquisition by undersampling in q-space, followed by image reconstruction that exploits prior knowledge on the diffusion probability density functions (pdfs). Previously proposed methods impose this prior in the form of sparsity under wavelet and total variation (TV) transforms, or under adaptive dictionaries that are trained on example datasets to maximize the sparsity of the representation. These compressed sensing (CS) methods require full-brain processing times on the order of hours using Matlab running on a workstation. This work presents two dictionary-based reconstruction techniques that use analytical solutions, and are two orders of magnitude faster than the previously proposed dictionary-based CS approach. The first method generates a dictionary from the training data using Principal Component Analysis (PCA), and performs the reconstruction in the PCA space. The second proposed method applies reconstruction using pseudoinverse with Tikhonov regularization with respect to a dictionary. This dictionary can either be obtained using the K-SVD algorithm, or it can simply be the training dataset of pdfs without any training. All of the proposed methods achieve reconstruction times on the order of seconds per imaging slice, and have reconstruction quality comparable to that of dictionary-based CS algorithm. PMID:23846466
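The second method described above amounts to a single regularized linear solve. A self-contained sketch with synthetic data (the dictionary size, measurement operator, and λ below are illustrative stand-ins, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(1)

# Dictionary D: columns are example pdfs (random surrogates here). A new
# signal y is an undersampled view M @ x of an unknown pdf x assumed to be
# well represented as a combination of dictionary atoms.
n, atoms, m = 100, 40, 30
D = rng.normal(size=(n, atoms))
M = rng.normal(size=(m, n))          # undersampling/measurement operator
coef_true = rng.normal(size=atoms)
x_true = D @ coef_true
y = M @ x_true

# Tikhonov-regularized pseudoinverse with respect to the dictionary:
#   min_c ||y - (M D) c||^2 + lam ||c||^2  =>  c = (A^T A + lam I)^{-1} A^T y
A = M @ D
lam = 1e-3
c = np.linalg.solve(A.T @ A + lam * np.eye(atoms), A.T @ y)
x_rec = D @ c                        # reconstructed pdf in signal space
```

Because this is a closed-form solve rather than an iterative sparse optimization, the per-slice cost is dominated by one small linear system, which is the source of the speedup over the CS approach.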
The Functional Impact of Breast Reconstruction: An Overview and Update
Directory of Open Access Journals (Sweden)
Jonas A. Nelson, MD
2018-03-01
Full Text Available As rates of bilateral mastectomy and immediate reconstruction rise, the aesthetic and psychosocial benefits of breast reconstruction are increasingly well understood. However, an understanding of functional outcome and its optimization is still lacking. This endpoint is critical to maximizing postoperative quality of life. All reconstructive modalities have possible functional consequences. Studies demonstrate that implant-based reconstruction impacts subjective movement, but patients’ day-to-day function may not be objectively hindered despite self-reported disability. For latissimus dorsi flap reconstruction, patients also report some dysfunction at the donor site, but this does not seem to result in significant, long-lasting limitation of daily activity. Athletic and other vigorous activities are most affected. For abdominal free flaps, patient perception of postoperative disability is generally not significant, despite the varying degrees of objective disadvantage that have been identified depending on the extent of rectus muscle sacrifice. With these functional repercussions in mind, a broader perspective on the attempt to ensure minimal functional decline after breast surgery should focus not only on surgical technique but also on postoperative rehabilitation. Early directed physical therapy may be an instrumental element in facilitating return to baseline function. With the patient’s optimal quality of life as an overarching objective, a multifaceted approach to functional preservation may be the answer to this continued challenge. This review will examine these issues in depth in an effort to better understand postoperative functional outcomes with a focus on the younger, active breast reconstruction patient.
Fast dictionary-based reconstruction for diffusion spectrum imaging.
Bilgic, Berkin; Chatnuntawech, Itthi; Setsompop, Kawin; Cauley, Stephen F; Yendiki, Anastasia; Wald, Lawrence L; Adalsteinsson, Elfar
2013-11-01
Diffusion spectrum imaging reveals detailed local diffusion properties at the expense of substantially long imaging times. It is possible to accelerate acquisition by undersampling in q-space, followed by image reconstruction that exploits prior knowledge on the diffusion probability density functions (pdfs). Previously proposed methods impose this prior in the form of sparsity under wavelet and total variation transforms, or under adaptive dictionaries that are trained on example datasets to maximize the sparsity of the representation. These compressed sensing (CS) methods require full-brain processing times on the order of hours using MATLAB running on a workstation. This work presents two dictionary-based reconstruction techniques that use analytical solutions, and are two orders of magnitude faster than the previously proposed dictionary-based CS approach. The first method generates a dictionary from the training data using principal component analysis (PCA), and performs the reconstruction in the PCA space. The second proposed method applies reconstruction using pseudoinverse with Tikhonov regularization with respect to a dictionary. This dictionary can either be obtained using the K-SVD algorithm, or it can simply be the training dataset of pdfs without any training. All of the proposed methods achieve reconstruction times on the order of seconds per imaging slice, and have reconstruction quality comparable to that of dictionary-based CS algorithm.
Optimization of Second Fault Detection Thresholds to Maximize Mission POS
Anzalone, Evan
2018-01-01
both magnitude and time. As such, the Navigation team is taking advantage of the INS's capability to schedule and change fault detection thresholds in flight. These values are optimized along a nominal trajectory in order to maximize probability of mission success, and reducing the probability of false positives (defined as when the INS would report a second fault condition resulting in loss of mission, but the vehicle would still meet insertion requirements within system-level margins). This paper will describe an optimization approach using Genetic Algorithms to tune the threshold parameters to maximize vehicle resilience to second fault events as a function of potential fault magnitude and time of fault over an ascent mission profile. The analysis approach, and performance assessment of the results will be presented to demonstrate the applicability of this process to second fault detection to maximize mission probability of success.
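A genetic-algorithm threshold search of the kind described can be sketched with a toy surrogate fitness standing in for the mission-success probability (the flight model, parameter ranges, and GA settings below are all hypothetical):

```python
import random

random.seed(42)

# Toy surrogate: "mission success probability" as a function of two fault
# detection thresholds. Purely illustrative; a real model would fly the
# trajectory and score insertion success against fault magnitude and time.
def fitness(th):
    t1, t2 = th
    return -(t1 - 3.0) ** 2 - (t2 - 1.5) ** 2   # peak at (3.0, 1.5)

def evolve(pop_size=30, gens=60, sigma=0.3):
    pop = [(random.uniform(0, 5), random.uniform(0, 5)) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]           # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = tuple(
                random.choice(pair) + random.gauss(0, sigma)  # crossover + mutation
                for pair in zip(a, b)
            )
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(best)  # near the optimum thresholds (3.0, 1.5)
```

In the setting of the abstract, the fitness evaluation would be the expensive step (a Monte Carlo sweep over fault magnitudes and times along the ascent profile), which is why a derivative-free population method is a natural fit.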
HEEL BONE RECONSTRUCTIVE OSTEOSYNTHESIS
Directory of Open Access Journals (Sweden)
A. N. Svetashov
2010-01-01
Full Text Available To identify the variants of reconstructive osteosynthesis most appropriate to the severity of heel bone injury, the treatment results of 56 patients were analyzed. In 15 (26.8%) patients classic surgical methods were applied; in 41 (73.2%) cases porous implants were used to restore the defect. Osteosynthesis without plastic restoration of the heel bone was ineffective in 60% of patients in the control group. The reconstructive osteosynthesis method ensures a good long-term functional effect of rehabilitation in 96.4% of patients in the basic group.
International Nuclear Information System (INIS)
Chabanat, E.; D'Hondt, J.; Estre, N.; Fruehwirth, R.; Prokofiev, K.; Speer, T.; Vanlaer, P.; Waltenberger, W.
2005-01-01
Due to the high track multiplicity in the final states expected in proton collisions at the LHC experiments, novel vertex reconstruction algorithms are required. The vertex reconstruction problem can be decomposed into a pattern recognition problem ('vertex finding') and an estimation problem ('vertex fitting'). Starting from least-squares methods, robustifications of the classical algorithms are discussed and the statistical properties of the novel methods are shown. A whole set of different approaches for the vertex finding problem is presented and compared in relevant physics channels
Chabanat, E; D'Hondt, J; Vanlaer, P; Prokofiev, K; Speer, T; Frühwirth, R; Waltenberger, W
2005-01-01
Because of the high track multiplicity in the final states expected in proton collisions at the LHC experiments, novel vertex reconstruction algorithms are required. The vertex reconstruction problem can be decomposed into a pattern recognition problem ("vertex finding") and an estimation problem ("vertex fitting"). Starting from least-squares methods, ways to render the classical algorithms more robust are discussed and the statistical properties of the novel methods are shown. A whole set of different approaches for the vertex finding problem is presented and compared in relevant physics channels.
Oral and Oropharyngeal Reconstruction with a Free Flap.
Jeong, Woo Shik; Oh, Tae Suk
2016-06-01
Extensive surgical resection of the aerodigestive tract can result in a large and complex defect of the oropharynx, which represents a significant reconstructive challenge for the plastic surgeon. Development of microsurgical techniques has allowed for free flap reconstruction of oropharyngeal defects, with superior outcomes as well as decreases in postoperative complications. The reconstructive goals for oral and oropharyngeal defects are to restore the anatomy, to maintain continuity of the intraoral surface and oropharynx, to protect vital structures such as carotid arteries, to cover exposed portions of internal organs in preparation for adjuvant radiation, and to preserve complex functions of the oral cavity and oropharynx. Oral and oropharyngeal cancers should be treated with consideration of functional recovery. Multidisciplinary treatment strategies are necessary for maximizing disease control and preserving the natural form and function of the oropharynx.
Quantitation of PET data with the EM reconstruction technique
International Nuclear Information System (INIS)
Rosenqvist, G.; Dahlbom, M.; Erikson, L.; Bohm, C.; Blomqvist, G.
1989-01-01
The expectation maximization (EM) algorithm offers high spatial resolution and excellent noise reduction with low-statistics PET data, since it incorporates the Poisson nature of the data. The main difficulties are long computation times, and the difficulty of finding appropriate criteria for terminating the reconstruction and for quantifying the resulting image data. In the present work a modified EM algorithm has been implemented on a VAX 11/780. Its capability to quantify image data has been tested in phantom studies and in two clinical applications: cerebral blood flow studies and dopamine D2-receptor studies. Data from phantom studies indicate the superiority of images reconstructed with the EM technique compared to images reconstructed with the conventional filtered back-projection (FB) technique in areas with low statistics. At higher statistics the noise characteristics of the two techniques coincide. Clinical data support these findings.
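The EM update underlying such reconstructions (the Shepp-Vardi MLEM iteration) is compact enough to sketch on a synthetic system; the dimensions and system matrix below are illustrative, not the VAX implementation described:

```python
import numpy as np

rng = np.random.default_rng(2)

# Tiny synthetic emission system: y ~ Poisson(A @ x), with A the system
# matrix mapping emission densities (pixels) to detector-bin expectations.
bins, pixels = 50, 20
A = rng.uniform(0.0, 1.0, size=(bins, pixels))
x_true = rng.uniform(1.0, 10.0, size=pixels)
y = rng.poisson(A @ x_true).astype(float)

# MLEM update: x <- x / (A^T 1) * A^T (y / (A x)).
# Each iteration increases the Poisson log-likelihood; positivity of the
# estimate is preserved automatically by the multiplicative form.
sens = A.sum(axis=0)               # sensitivity image A^T 1
x = np.ones(pixels)
for _ in range(200):
    proj = A @ x                   # forward projection
    x = x / sens * (A.T @ (y / proj))

print(float(np.abs(A @ x - y).mean()))  # fitted projections approach the data
```

The lack of a natural stopping point visible here, where the fit keeps improving while noise is progressively amplified, is exactly the termination-criterion difficulty the abstract mentions.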
Bayesian image reconstruction for improving detection performance of muon tomography.
Wang, Guobao; Schultz, Larry J; Qi, Jinyi
2009-05-01
Muon tomography is a novel technology that is being developed for detecting high-Z materials in vehicles or cargo containers. Maximum likelihood methods have been developed for reconstructing the scattering density image from muon measurements. However, the instability of maximum likelihood estimation often results in noisy images and low detectability of high-Z targets. In this paper, we propose using regularization to improve the image quality of muon tomography. We formulate the muon reconstruction problem in a Bayesian framework by introducing a prior distribution on scattering density images. An iterative shrinkage algorithm is derived to maximize the log posterior distribution. At each iteration, the algorithm obtains the maximum a posteriori update by shrinking an unregularized maximum likelihood update. Inverse quadratic shrinkage functions are derived for generalized Laplacian priors and inverse cubic shrinkage functions are derived for generalized Gaussian priors. Receiver operating characteristic studies using simulated data demonstrate that the Bayesian reconstruction can greatly improve the detection performance of muon tomography.
Evaluation of anti-hyperglycemic effect of Actinidia kolomikta (Maxim. et Rupr.) Maxim. root extract.
Hu, Xuansheng; Cheng, Delin; Wang, Linbo; Li, Shuhong; Wang, Yuepeng; Li, Kejuan; Yang, Yingnan; Zhang, Zhenya
2015-05-01
This study aimed to evaluate the anti-hyperglycemic effect of ethanol extract from Actinidia kolomikta (Maxim. et Rupr.) Maxim. root (AKE). An in vitro evaluation was performed by using rat intestinal α-glucosidase (maltase and sucrase), the key enzymes linked with type 2 diabetes, and an in vivo evaluation was performed by loading maltose, sucrose, and glucose to normal rats. As a result, AKE showed concentration-dependent inhibition of rat intestinal maltase and rat intestinal sucrase, with IC50 values of 1.83 and 1.03 mg/mL, respectively. In normal rats loaded with maltose, sucrose, or glucose, administration of AKE significantly reduced postprandial hyperglycemia, an effect similar to that of acarbose, a clinically used anti-diabetic drug. High contents of total phenolics (80.49 ± 0.05 mg GAE/g extract) and total flavonoids (430.69 ± 0.91 mg RE/g extract) were detected in AKE. In conclusion, AKE possesses anti-hyperglycemic effects, and the possible mechanisms are associated with its inhibition of α-glucosidase and improvement of insulin release and/or insulin sensitivity. The anti-hyperglycemic activity of AKE may be attributable to its high contents of phenolic and flavonoid compounds.
Alternative approaches to maximally supersymmetric field theories
International Nuclear Information System (INIS)
Broedel, Johannes
2010-01-01
The central objective of this work is the exploration and application of alternative possibilities to describe maximally supersymmetric field theories in four dimensions: N=4 super Yang-Mills theory and N=8 supergravity. While twistor string theory has proven very useful in the context of N=4 SYM, no analogous formulation for N=8 supergravity is available. In addition to the part describing N=4 SYM theory, twistor string theory contains vertex operators corresponding to the states of N=4 conformal supergravity. Those vertex operators have to be altered in order to describe (non-conformal) Einstein supergravity. A modified version of the known open twistor string theory, including a term which breaks the conformal symmetry for the gravitational vertex operators, has been proposed recently. In the first part of the thesis, structural aspects and consistency of the modified theory are discussed. Unfortunately, the majority of amplitudes cannot be constructed, which can be traced back to the fact that the dimension of the moduli space of algebraic curves in twistor space is reduced in an inconsistent manner. The issue of a possible finiteness of N=8 supergravity is closely related to the question of the existence of valid counterterms in the perturbation expansion of the theory. In particular, the coefficient in front of the so-called R^4 counterterm candidate has been shown to vanish by explicit calculation. This behavior points in the direction of a symmetry not taken into account, for which the hidden on-shell E_7(7) symmetry is the prime candidate. The validity of the so-called double-soft scalar limit relation is a necessary condition for a theory exhibiting E_7(7) symmetry. By calculating the double-soft scalar limit for amplitudes derived from an N=8 supergravity action modified by an additional R^4 counterterm, one can test for possible constraints originating in the E_7(7) symmetry. In the second part of the thesis, the appropriate amplitudes are calculated.
Maximizing the Adjacent Possible in Automata Chemistries.
Hickinbotham, Simon; Clark, Edward; Nellis, Adam; Stepney, Susan; Clarke, Tim; Young, Peter
2016-01-01
Automata chemistries are good vehicles for experimentation in open-ended evolution, but they are by necessity complex systems whose low-level properties require careful design. To aid the process of designing automata chemistries, we develop an abstract model that classifies the features of a chemistry from a physical (bottom up) perspective and from a biological (top down) perspective. There are two levels: things that can evolve, and things that cannot. We equate the evolving level with biology and the non-evolving level with physics. We design our initial organisms in the biology, so they can evolve. We design the physics to facilitate evolvable biologies. This architecture leads to a set of design principles that should be observed when creating an instantiation of the architecture. These principles are Everything Evolves, Everything's Soft, and Everything Dies. To evaluate these ideas, we present experiments in the recently developed Stringmol automata chemistry. We examine the properties of Stringmol with respect to the principles, and so demonstrate the usefulness of the principles in designing automata chemistries.
Selected event reconstruction algorithms for the CBM experiment at FAIR
International Nuclear Information System (INIS)
Lebedev, Semen; Höhne, Claudia; Lebedev, Andrey; Ososkov, Gennady
2014-01-01
Development of fast and efficient event reconstruction algorithms is an important and challenging task in the Compressed Baryonic Matter (CBM) experiment at the future FAIR facility. The event reconstruction algorithms have to process terabytes of input data produced in particle collisions. In this contribution, several event reconstruction algorithms are presented. Optimization of the algorithms in the following CBM detectors is discussed: Ring Imaging Cherenkov (RICH) detector, Transition Radiation Detectors (TRD) and Muon Chamber (MUCH). The ring reconstruction algorithm in the RICH is discussed. In TRD and MUCH, track reconstruction algorithms are based on track following and Kalman filter methods. All algorithms were significantly optimized to achieve maximum speed-up and minimum memory consumption. Obtained results showed that a significant speed-up factor was achieved for all algorithms while the reconstruction efficiency remains at a high level.
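The track-following step with a Kalman filter can be illustrated with a minimal 1-D straight-line track model (the geometry, noise levels, and state parametrization below are illustrative, not the CBM implementation):

```python
import numpy as np

# Minimal Kalman filter for track following: state (position, slope),
# propagated between detector planes and updated with each measured hit.
def kalman_track(z_planes, hits, meas_var=0.01):
    x = np.array([hits[0], 0.0])       # initial state from the first hit
    P = np.diag([meas_var, 1.0])       # initial covariance (loose slope prior)
    H = np.array([[1.0, 0.0]])         # we measure position only
    R = np.array([[meas_var]])
    for k in range(1, len(z_planes)):
        dz = z_planes[k] - z_planes[k - 1]
        F = np.array([[1.0, dz], [0.0, 1.0]])   # straight-line propagation
        x = F @ x                                # predict
        P = F @ P @ F.T
        S = H @ P @ H.T + R                      # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
        x = x + (K @ (hits[k] - H @ x)).ravel()  # update with the hit
        P = (np.eye(2) - K @ H) @ P
    return x                                     # fitted (position, slope)

rng = np.random.default_rng(4)
z = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
true_slope = 0.5
hits = true_slope * z + rng.normal(0, 0.1, size=z.size)
x_fit = kalman_track(z, hits)
print(x_fit)  # slope estimate near 0.5
```

A real track follower adds process noise for multiple scattering and a hit-selection step at each plane; the recursive predict/update structure shown here is what makes the method fast enough for high-multiplicity events.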
New weighting methods for phylogenetic tree reconstruction using multiple loci.
Misawa, Kazuharu; Tajima, Fumio
2012-08-01
Efficient determination of evolutionary distances is important for the correct reconstruction of phylogenetic trees. The performance of the pooled distance required for reconstructing a phylogenetic tree can be improved by applying large weights to appropriate distances for reconstructing phylogenetic trees and small weights to inappropriate distances. We developed two weighting methods, the modified Tajima-Takezaki method and the modified least-squares method, for reconstructing phylogenetic trees from multiple loci. By computer simulations, we found that both of the new methods were more efficient in reconstructing correct topologies than the no-weight method. Hence, we reconstructed hominoid phylogenetic trees from mitochondrial DNA using our new methods, and found that the levels of bootstrap support were significantly increased by the modified Tajima-Takezaki and by the modified least-squares method.
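The idea of weighting per-locus distances can be illustrated with generic inverse-variance pooling (the paper's modified Tajima-Takezaki and modified least-squares weights are not reproduced here):

```python
import numpy as np

# Per-locus distance estimates between a pair of taxa, with estimated
# variances; the pooled distance weights the more precise loci heavily.
d = np.array([0.12, 0.15, 0.09, 0.30])     # distances from 4 loci
var = np.array([0.01, 0.02, 0.01, 0.20])   # their estimated variances

w = 1.0 / var                              # inverse-variance weights
pooled = np.sum(w * d) / np.sum(w)
print(round(float(pooled), 4))             # 0.1176
```

The unweighted mean of these distances is 0.165; down-weighting the noisy fourth locus pulls the pooled estimate toward the precise loci, which is the effect the abstract's weighting schemes aim for.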
Kinetic theory in maximal-acceleration invariant phase space
International Nuclear Information System (INIS)
Brandt, H.E.
1989-01-01
A vanishing directional derivative of a scalar field along particle trajectories in maximal acceleration invariant phase space is identical in form to the ordinary covariant Vlasov equation in curved spacetime in the presence of both gravitational and nongravitational forces. A natural foundation is thereby provided for a covariant kinetic theory of particles in maximal-acceleration invariant phase space. (orig.)
IIB solutions with N>28 Killing spinors are maximally supersymmetric
International Nuclear Information System (INIS)
Gran, U.; Gutowski, J.; Papadopoulos, G.; Roest, D.
2007-01-01
We show that all IIB supergravity backgrounds which admit more than 28 Killing spinors are maximally supersymmetric. In particular, we find that for all N>28 backgrounds the supercovariant curvature vanishes, and that the quotients of maximally supersymmetric backgrounds either preserve all 32 or N<29 supersymmetries.
Muscle mitochondrial capacity exceeds maximal oxygen delivery in humans
DEFF Research Database (Denmark)
Boushel, Robert Christopher; Gnaiger, Erich; Calbet, Jose A L
2011-01-01
Across a wide range of species and body mass a close matching exists between maximal conductive oxygen delivery and mitochondrial respiratory rate. In this study we investigated in humans how closely in-vivo maximal oxygen consumption (VO(2) max) is matched to state 3 muscle mitochondrial respira...
Pace's Maxims for Homegrown Library Projects. Coming Full Circle
Pace, Andrew K.
2005-01-01
This article discusses six maxims by which to run library automation. The following maxims are discussed: (1) Solve only known problems; (2) Avoid changing data to fix display problems; (3) Aut viam inveniam aut faciam; (4) If you cannot make it yourself, buy something; (5) Kill the alligator closest to the boat; and (6) Just because yours is…
Optimal ranking regime analysis of TreeFlow dendrohydrological reconstructions
The Optimal Ranking Regime (ORR) method was used to identify 6-100 year time windows containing significant ranking sequences in 55 western U.S. streamflow reconstructions, and reconstructions of the level of the Great Salt Lake and San Francisco Bay salinity during 1500-2007. The method’s ability t...
Socioeconomic position and breast reconstruction in Danish women
DEFF Research Database (Denmark)
Hvilsom, Gitte B; Hölmich, Lisbet R; Frederiksen, Kirsten Skovsgaard
2011-01-01
Few studies have been conducted on the socioeconomic position of women undergoing breast reconstruction, and none have been conducted in the Danish population. We investigated the association between educational level and breast reconstruction in a nationwide cohort of Danish women with breast...
Reconstructing Neutrino Mass Spectrum
Smirnov, A. Yu.
1999-01-01
Reconstruction of the neutrino mass spectrum and lepton mixing is one of the fundamental problems of particle physics. In this connection we consider two central topics: (i) the origin of large lepton mixing, (ii) possible existence of new (sterile) neutrino states. We discuss also possible relation between large mixing and existence of sterile neutrinos.
Position reconstruction in LUX
Akerib, D. S.; Alsum, S.; Araújo, H. M.; Bai, X.; Bailey, A. J.; Balajthy, J.; Beltrame, P.; Bernard, E. P.; Bernstein, A.; Biesiadzinski, T. P.; Boulton, E. M.; Brás, P.; Byram, D.; Cahn, S. B.; Carmona-Benitez, M. C.; Chan, C.; Currie, A.; Cutter, J. E.; Davison, T. J. R.; Dobi, A.; Druszkiewicz, E.; Edwards, B. N.; Fallon, S. R.; Fan, A.; Fiorucci, S.; Gaitskell, R. J.; Genovesi, J.; Ghag, C.; Gilchriese, M. G. D.; Hall, C. R.; Hanhardt, M.; Haselschwardt, S. J.; Hertel, S. A.; Hogan, D. P.; Horn, M.; Huang, D. Q.; Ignarra, C. M.; Jacobsen, R. G.; Ji, W.; Kamdin, K.; Kazkaz, K.; Khaitan, D.; Knoche, R.; Larsen, N. A.; Lenardo, B. G.; Lesko, K. T.; Lindote, A.; Lopes, M. I.; Manalaysay, A.; Mannino, R. L.; Marzioni, M. F.; McKinsey, D. N.; Mei, D.-M.; Mock, J.; Moongweluwan, M.; Morad, J. A.; Murphy, A. St. J.; Nehrkorn, C.; Nelson, H. N.; Neves, F.; O'Sullivan, K.; Oliver-Mallory, K. C.; Palladino, K. J.; Pease, E. K.; Rhyne, C.; Shaw, S.; Shutt, T. A.; Silva, C.; Solmaz, M.; Solovov, V. N.; Sorensen, P.; Sumner, T. J.; Szydagis, M.; Taylor, D. J.; Taylor, W. C.; Tennyson, B. P.; Terman, P. A.; Tiedt, D. R.; To, W. H.; Tripathi, M.; Tvrznikova, L.; Uvarov, S.; Velan, V.; Verbus, J. R.; Webb, R. C.; White, J. T.; Whitis, T. J.; Witherell, M. S.; Wolfs, F. L. H.; Xu, J.; Yazdani, K.; Young, S. K.; Zhang, C.
2018-02-01
The (x, y) position reconstruction method used in the analysis of the complete exposure of the Large Underground Xenon (LUX) experiment is presented. The algorithm is based on a statistical test that makes use of an iterative method to recover the photomultiplier tube (PMT) light response directly from the calibration data. The light response functions make use of a two dimensional functional form to account for the photons reflected on the inner walls of the detector. To increase the resolution for small pulses, a photon counting technique was employed to describe the response of the PMTs. The reconstruction was assessed with calibration data including 83mKr (releasing a total energy of 41.5 keV) and 3H (β- with Q = 18.6 keV) decays, and a deuterium-deuterium (D-D) neutron beam (2.45 MeV). Within the detector's fiducial volume, the reconstruction has achieved an (x, y) position uncertainty of σ = 0.82 cm and σ = 0.17 cm for events of only 200 and 4,000 detected electroluminescence photons respectively. Such signals are associated with electron recoils of energies ~0.25 keV and ~10 keV, respectively. The reconstructed position of the smallest events with a single electron emitted from the liquid surface (22 detected photons) has a horizontal (x, y) uncertainty of 2.13 cm.
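The statistical approach described above can be illustrated with a toy sketch (this is not the LUX algorithm): given per-PMT photon counts and an assumed light-response model, the event position is the grid point that maximizes the Poisson log-likelihood. The response function, PMT layout, and grid below are all invented for illustration.

```python
import math

def expected_photons(x, y, pmt_positions, total):
    """Invented light-response model: each PMT's expected signal falls
    off with squared distance from the event (purely illustrative)."""
    w = [1.0 / (1.0 + (x - px) ** 2 + (y - py) ** 2)
         for px, py in pmt_positions]
    z = sum(w)
    return [total * wi / z for wi in w]

def reconstruct_xy(observed, pmt_positions, grid, total):
    """Grid search for the position maximizing the Poisson
    log-likelihood (up to a constant) of the per-PMT counts."""
    best, best_ll = None, -math.inf
    for x in grid:
        for y in grid:
            mu = expected_photons(x, y, pmt_positions, total)
            ll = sum(n * math.log(m) - m for n, m in zip(observed, mu))
            if ll > best_ll:
                best, best_ll = (x, y), ll
    return best

pmts = [(-1.0, -1.0), (-1.0, 1.0), (1.0, -1.0), (1.0, 1.0)]
grid = [i * 0.1 for i in range(-15, 16)]
truth = expected_photons(0.5, -0.3, pmts, 200)  # noiseless counts
print(tuple(round(v, 2) for v in reconstruct_xy(truth, pmts, grid, 200)))
# → (0.5, -0.3)
```

With noiseless counts the likelihood is maximized exactly at the true position; real data would add Poisson noise and a finer, iterative search.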
Energy Technology Data Exchange (ETDEWEB)
Maughan, N [Washington University in Saint Louis, Saint Louis, MO (United States); Conti, M [Siemens Healthcare Molecular Imaging, Knoxville, TN (United States); Parikh, P [Washington Univ. School of Medicine, Saint Louis, MO (United States); Faul, D [Siemens Healthcare, New York, NY (United States); Laforest, R [Washington University School of Medicine, Saint Louis, MO (United States)
2015-06-15
Purpose: Imaging Y-90 microspheres with PET/MRI following hepatic radioembolization has the potential for predicting treatment outcome and, in turn, improving patient care. The positron decay branching ratio, however, is very small (32 ppm), yielding images with poor statistics even when therapy doses are used. Our purpose is to find PET reconstruction parameters that maximize the PET recovery coefficients and minimize noise. Methods: An initial 7.5 GBq of Y-90 chloride solution was used to fill an ACR phantom for measurements with a PET/MRI scanner (Siemens Biograph mMR). Four hot cylinders and a warm background activity volume of the phantom were filled with a 10:1 ratio. Phantom attenuation maps were derived from scaled CT images of the phantom and included the MR phased array coil. The phantom was imaged at six time points between 7.5–1.0 GBq total activity over a period of eight days. PET images were reconstructed via OP-OSEM with 21 subsets and varying iteration number (1–5), post-reconstruction filter size (5–10 mm), and either absolute or relative scatter correction. Recovery coefficients, SNR, and noise were measured as well as total activity in the phantom. Results: For the 120 different reconstructions, recovery coefficients ranged from 0.1–0.6 and improved with increasing iteration number and reduced post-reconstruction filter size. SNR, however, improved substantially with lower iteration numbers and larger post-reconstruction filters. From the phantom data, we found that performing 2 iterations, 21 subsets, and applying a 5 mm Gaussian post-reconstruction filter provided optimal recovery coefficients at a moderate noise level for a wide range of activity levels. Conclusion: The choice of reconstruction parameters for Y-90 PET images greatly influences both the accuracy of measurements and image quality. We have found reconstruction parameters that provide optimal recovery coefficients with minimized noise. Future work will include the effects
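The parameter sweep described above amounts to trading recovery against noise. A minimal sketch of that selection logic, with invented numbers (not the study's data): among parameter sets whose noise stays under a cap, keep the one with the best recovery coefficient.

```python
def recovery_coefficient(measured, true):
    """RC = measured activity concentration / true concentration."""
    return measured / true

def pick_parameters(runs, noise_cap):
    """Among runs whose noise is below the cap, pick the one with the
    highest recovery coefficient."""
    ok = [r for r in runs if r["noise"] <= noise_cap]
    return max(ok, key=lambda r: r["rc"])

# Hypothetical sweep over iteration number and post-filter size
runs = [
    {"iters": 1, "filter_mm": 10, "rc": 0.10, "noise": 0.05},
    {"iters": 2, "filter_mm": 5,  "rc": 0.45, "noise": 0.12},
    {"iters": 5, "filter_mm": 5,  "rc": 0.60, "noise": 0.30},
]
print(pick_parameters(runs, noise_cap=0.15)["iters"])  # → 2
```

The selected setting (2 iterations, 5 mm filter) mirrors the study's conclusion that mid-range parameters balance recovery and noise.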
Mammogram segmentation using maximal cell strength updation in cellular automata.
Anitha, J; Peter, J Dinesh
2015-08-01
Breast cancer is the most frequently diagnosed type of cancer among women. Mammography is one of the most effective tools for early detection of breast cancer. Various computer-aided systems have been introduced to detect breast cancer from mammogram images. In a computer-aided diagnosis system, detection and segmentation of breast masses from the background tissues is an important issue. In this paper, an automatic segmentation method is proposed to identify and segment the suspicious mass regions of mammograms using a modified transition rule named maximal cell strength updation in cellular automata (CA). In coarse-level segmentation, the proposed method performs adaptive global thresholding based on histogram peak analysis to obtain a rough region of interest. An automatic seed point selection is proposed using the gray-level co-occurrence matrix-based sum average feature in the coarse segmented image. Finally, the method utilizes CA with the identified initial seed point and the modified transition rule to segment the mass region. The proposed approach is evaluated on a dataset of 70 mammograms with masses from the mini-MIAS database. Experimental results show that the proposed approach yields promising results in segmenting the mass region, with a sensitivity of 92.25% and an accuracy of 93.48%.
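As a rough illustration of the cell-strength idea, here is a simplified grow-cut-style cellular automaton (this is not the paper's exact transition rule or seed-selection procedure): each cell adopts the label of any neighbor whose attenuated strength beats its own, and its strength is updated to that maximal attacking strength.

```python
def ca_segment(image, seeds, iters=50):
    """Simplified CA region growing on an intensity image in [0, 1].
    seeds maps (row, col) -> label; unlabeled cells start at strength 0."""
    h, w = len(image), len(image[0])
    label = [[0] * w for _ in range(h)]
    strength = [[0.0] * w for _ in range(h)]
    for (r, c), lab in seeds.items():
        label[r][c], strength[r][c] = lab, 1.0
    for _ in range(iters):
        changed = False
        for r in range(h):
            for c in range(w):
                for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < h and 0 <= nc < w:
                        # Attenuate by intensity difference, then attack
                        g = 1.0 - abs(image[nr][nc] - image[r][c])
                        attack = g * strength[nr][nc]
                        if attack > strength[r][c]:
                            strength[r][c] = attack
                            label[r][c] = label[nr][nc]
                            changed = True
        if not changed:
            break
    return label

img = [[0.1, 0.1, 0.9, 0.9],
       [0.1, 0.1, 0.9, 0.9]]
seeds = {(0, 0): 1, (0, 3): 2}   # background seed, mass seed
print(ca_segment(img, seeds))    # → [[1, 1, 2, 2], [1, 1, 2, 2]]
```

The sharp intensity edge attenuates attacks across it, so each seed captures its own homogeneous region.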
Gang, Grace J; Siewerdsen, Jeffrey H; Stayman, J Webster
2017-12-01
This paper presents a joint optimization of dynamic fluence field modulation (FFM) and regularization in quadratic penalized-likelihood reconstruction that maximizes a task-based imaging performance metric. We adopted a task-driven imaging framework for prospective design of the imaging parameters. A maximin objective function was adopted to maximize the minimum detectability index (d′) throughout the image. The optimization algorithm alternates between FFM (represented by low-dimensional basis functions) and local regularization (including the regularization strength and directional penalty weights). The task-driven approach was compared with three FFM strategies commonly proposed for FBP reconstruction (as well as a task-driven TCM strategy) for a discrimination task in an abdomen phantom. The task-driven FFM assigned more fluence to less attenuating anteroposterior views and yielded approximately constant fluence behind the object. The optimal regularization was almost uniform throughout the image. Furthermore, the task-driven FFM strategy redistributed fluence across detector elements in order to prescribe more fluence to the more attenuating central region of the phantom. Compared with all strategies, the task-driven FFM strategy not only improved the minimum d′ by at least 17.8%, but yielded higher d′ over a large area inside the object. The optimal FFM was highly dependent on the amount of regularization, indicating the importance of a joint optimization. Sample reconstructions of simulated data generally support the performance estimates based on computed d′. The improvements in detectability show the potential of the task-driven imaging framework to improve imaging performance at a fixed dose, or, equivalently, to provide a similar level of performance at reduced dose.
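The maximin objective can be sketched independently of the imaging model: among candidate acquisition settings, choose the one whose minimum detectability index d′ over all image locations is largest. The candidate names and d′ values below are invented for illustration.

```python
def pick_maximin(candidates):
    """Maximin selection: maximize the worst-case (minimum) d'."""
    return max(candidates, key=lambda c: min(c["dprime_map"]))

# Hypothetical d' maps (one value per image location) for three settings
candidates = [
    {"name": "uniform fluence", "dprime_map": [2.1, 0.8, 1.9]},
    {"name": "bowtie filter",   "dprime_map": [1.8, 1.2, 1.7]},
    {"name": "task-driven FFM", "dprime_map": [1.6, 1.5, 1.6]},
]
print(pick_maximin(candidates)["name"])  # → task-driven FFM
```

Note how the maximin winner is not the setting with the highest peak d′ but the one with the best worst case, which is exactly the behavior the abstract reports.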
Chan, Harley; Gilbert, Ralph W.; Pagedar, Nitin A.; Daly, Michael J.; Irish, Jonathan C.; Siewerdsen, Jeffrey H.
2010-02-01
Esthetic appearance is one of the most important factors in reconstructive surgery. Current practice in maxillary reconstruction uses radial forearm, fibula, or iliac crest osteocutaneous flaps to recreate the complex three-dimensional structures of the palate and maxilla. However, these bone flaps lack shape similarity to the palate and result in a less satisfactory esthetic outcome. Considering similarity factors and vasculature advantages, reconstructive surgeons have recently explored the use of scapular tip myo-osseous free flaps to restore the excised site. We have developed a new method that quantitatively evaluates the morphological similarity of the scapular tip bone and the palate based on diagnostic volumetric computed tomography (CT) images. This quantitative result is further interpreted as a color map rendered on the surface of a three-dimensional computer model. For surgical planning, this color interpretation could potentially assist the surgeon in orienting the bone flap for the best fit at the reconstruction site. With approval from the Research Ethics Board (REB) of the University Health Network, we conducted a retrospective analysis of CT images obtained from 10 patients. Each patient had CT scans including the maxilla and chest on the same day. Based on this image set, we simulated total, subtotal, and hemi-palate reconstruction. The simulation procedure included volume segmentation, conversion of the segmented volume to a stereolithography (STL) model, manual registration, and computation of minimum geometric distances and curvature between STL models. Across the 10 patients' data, we found an overall root-mean-square (RMS) conformance of 3.71 ± 0.16 mm.
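The RMS conformance metric reported above can be sketched as the root-mean-square of nearest-neighbor distances between two surfaces sampled as point clouds. This brute-force version is illustrative only; real STL models would use a mesh-distance library and spatial indexing.

```python
import math

def rms_conformance(points_a, points_b):
    """RMS of nearest-neighbor distances from each point of surface A
    to surface B (brute force, O(len(a) * len(b)))."""
    sq = []
    for ax, ay, az in points_a:
        d2 = min((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
                 for bx, by, bz in points_b)
        sq.append(d2)
    return math.sqrt(sum(sq) / len(sq))

# Toy point clouds: two parallel triangles 0.5 mm apart
palate  = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
scapula = [(0.0, 0.0, 0.5), (1.0, 0.0, 0.5), (0.0, 1.0, 0.5)]
print(rms_conformance(palate, scapula))  # → 0.5
```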
Using Molecular Biology to Maximize Concurrent Training
Baar, Keith
2014-01-01
Very few sports use only endurance or strength. Outside of running long distances on a flat surface and power-lifting, practically all sports require some combination of endurance and strength. Endurance and strength can be developed simultaneously to some degree. However, the development of a high level of endurance seems to prohibit the development or maintenance of muscle mass and strength. This interaction between endurance and strength is called the concurrent training effect. This revie...
Maximizing the phylogenetic diversity of seed banks.
Griffiths, Kate E; Balding, Sharon T; Dickie, John B; Lewis, Gwilym P; Pearce, Tim R; Grenyer, Richard
2015-04-01
Ex situ conservation efforts such as those of zoos, botanical gardens, and seed banks will form a vital complement to in situ conservation actions over the coming decades. It is therefore necessary to pay the same attention to the biological diversity represented in ex situ conservation facilities as is often paid to protected-area networks. Building the phylogenetic diversity of ex situ collections will strengthen our capacity to respond to biodiversity loss. Since 2000, the Millennium Seed Bank Partnership has banked seed from 14% of the world's plant species. We assessed the taxonomic, geographic, and phylogenetic diversity of the Millennium Seed Bank collection of legumes (Leguminosae). We compared the collection with all known legume genera, their known geographic range (at country and regional levels), and a genus-level phylogeny of the legume family constructed for this study. Over half the phylogenetic diversity of legumes at the genus level was represented in the Millennium Seed Bank. However, pragmatic prioritization of species of economic importance and endangerment has led to the banking of a less-than-optimal phylogenetic diversity and prioritization of range-restricted species risks an underdispersed collection. The current state of the phylogenetic diversity of legumes in the Millennium Seed Bank could be substantially improved through the strategic banking of relatively few additional taxa. Our method draws on tools that are widely applied to in situ conservation planning, and it can be used to evaluate and improve the phylogenetic diversity of ex situ collections. © 2014 Society for Conservation Biology.
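Strategic banking of the few taxa that most increase phylogenetic diversity is, at its core, a greedy selection on the tree: repeatedly add the taxon whose root-to-tip path adds the most new branch length. A minimal sketch on a hypothetical four-taxon tree (edge names and lengths invented):

```python
def phylogenetic_diversity(selected, paths, lengths):
    """PD of a selection = total length of the branches it spans."""
    edges = set().union(*[paths[t] for t in selected])
    return sum(lengths[e] for e in edges)

def greedy_bank(taxa, paths, lengths, k):
    """Greedily add the taxon giving the largest PD gain."""
    chosen = []
    for _ in range(k):
        best = max((t for t in taxa if t not in chosen),
                   key=lambda t: phylogenetic_diversity(
                       chosen + [t], paths, lengths))
        chosen.append(best)
    return chosen

# Hypothetical tree: paths map each taxon to its root-to-tip edge set
lengths = {"e1": 5.0, "e2": 5.0, "a": 1.0, "b": 1.0, "c": 1.0, "d": 4.0}
paths = {"A": {"e1", "a"}, "B": {"e1", "b"},
         "C": {"e2", "c"}, "D": {"e2", "d"}}
print(greedy_bank(list(paths), paths, lengths, 2))  # → ['D', 'A']
```

Because PD is submodular, this greedy strategy is the standard near-optimal heuristic used in conservation planning; the paper's method applies the same logic at the genus level.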
International Nuclear Information System (INIS)
Beretta, Gian Paolo
2006-01-01
We discuss a nonlinear model for relaxation by energy redistribution within an isolated, closed system composed of noninteracting identical particles with energy levels e_i, i = 1, 2, ..., N. The time-dependent occupation probabilities p_i(t) are assumed to obey the nonlinear rate equations τ dp_i/dt = -p_i ln p_i - α(t) p_i - β(t) e_i p_i, where α(t) and β(t) are functionals of the p_i(t) that maintain invariant the mean energy E = Σ_{i=1}^{N} e_i p_i(t) and the normalization condition 1 = Σ_{i=1}^{N} p_i(t). The entropy S(t) = -k_B Σ_{i=1}^{N} p_i(t) ln p_i(t) is a nondecreasing function of time until the initially nonzero occupation probabilities reach a Boltzmann-like canonical distribution over the occupied energy eigenstates. Initially zero occupation probabilities, instead, remain zero at all times. The solutions p_i(t) of the rate equations are unique and well defined for arbitrary initial conditions p_i(0) and for all times. The existence and uniqueness both forward and backward in time allows the reconstruction of the ancestral or primordial lowest-entropy state. By casting the rate equations in terms not of the p_i but of their positive square roots √(p_i), they unfold from the assumption that time evolution is at all times along the local direction of steepest entropy ascent or, equivalently, of maximal entropy generation. These rate equations have the same mathematical structure and basic features as the nonlinear dynamical equation proposed in a series of papers ending with G. P. Beretta, Found. Phys. 17, 365 (1987) and recently rediscovered by S. Gheorghiu-Svirschevski [Phys. Rev. A 63, 022105 (2001); 63, 054102 (2001)]. Numerical results illustrate the features of the dynamics and the differences from the rate equations recently considered for the same problem by M. Lemanska and Z. Jaeger [Physica D 170, 72 (2002)]. We also interpret the functionals k_B α(t) and k_B β(t) as nonequilibrium generalizations of the thermodynamic-equilibrium Massieu
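The rate equations above can be integrated numerically: requiring d/dt Σp_i = 0 and d/dt Σe_i p_i = 0 turns α and β into the solution of a 2×2 linear system at each instant. A minimal explicit-Euler sketch on an invented three-level spectrum (not from the paper):

```python
import math

def entropy(p):
    return -sum(pi * math.log(pi) for pi in p)

def step(p, e, tau=1.0, dt=0.01):
    """One Euler step of tau*dp_i/dt = -p_i ln p_i - a*p_i - b*e_i*p_i,
    with a, b solved from the normalization and mean-energy invariance:
        [1   m1] [a]   [-s ]
        [m1  m2] [b] = [-se]
    where s = sum p ln p, se = sum e p ln p, m1 = sum e p, m2 = sum e^2 p."""
    s = sum(pi * math.log(pi) for pi in p)
    se = sum(ei * pi * math.log(pi) for ei, pi in zip(e, p))
    m1 = sum(ei * pi for ei, pi in zip(e, p))
    m2 = sum(ei * ei * pi for ei, pi in zip(e, p))
    det = m2 - m1 * m1
    a = (-s * m2 + se * m1) / det
    b = (-se + s * m1) / det
    return [pi + dt / tau * (-pi * math.log(pi) - a * pi - b * ei * pi)
            for ei, pi in zip(e, p)]

e = [0.0, 1.0, 2.0]     # toy energy levels
p = [0.7, 0.2, 0.1]     # initial occupation probabilities
s0 = entropy(p)
for _ in range(100):
    p = step(p, e)
print(round(sum(p), 6), round(sum(ei * pi for ei, pi in zip(e, p)), 6))
# → 1.0 0.4  (both invariants preserved; entropy has increased)
```

Because the constraints are linear in dp_i/dt, the Euler step conserves normalization and mean energy exactly (up to rounding), while the entropy increases monotonically toward the canonical distribution.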
Evaluation of a 3D point cloud tetrahedral tomographic reconstruction method
Energy Technology Data Exchange (ETDEWEB)
Pereira, N F; Sitek, A, E-mail: nfp4@bwh.harvard.ed, E-mail: asitek@bwh.harvard.ed [Department of Radiology, Brigham and Women' s Hospital-Harvard Medical School Boston, MA (United States)
2010-09-21
Tomographic reconstruction on an irregular grid may be superior to reconstruction on a regular grid. This is achieved through an appropriate choice of the image space model, the selection of an optimal set of points and the use of any available prior information during the reconstruction process. Accordingly, a number of reconstruction-related parameters must be optimized for best performance. In this work, a 3D point cloud tetrahedral mesh reconstruction method is evaluated for quantitative tasks. A linear image model is employed to obtain the reconstruction system matrix and five point generation strategies are studied. The evaluation is performed using the recovery coefficient, as well as voxel- and template-based estimates of bias and variance measures, computed over specific regions in the reconstructed image. A similar analysis is performed for regular grid reconstructions that use voxel basis functions. The maximum likelihood expectation maximization reconstruction algorithm is used. For the tetrahedral reconstructions, of the five point generation methods that are evaluated, three use image priors. For evaluation purposes, an object consisting of overlapping spheres with varying activity is simulated. The exact parallel projection data of this object are obtained analytically using a parallel projector, and multiple Poisson noise realizations of these exact data are generated and reconstructed using the different point generation strategies. The unconstrained nature of point placement in some of the irregular mesh-based reconstruction strategies has superior activity recovery for small, low-contrast image regions. The results show that, with an appropriately generated set of mesh points, the irregular grid reconstruction methods can out-perform reconstructions on a regular grid for mathematical phantoms, in terms of the performance measures evaluated.
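The maximum-likelihood expectation maximization algorithm used in this evaluation has the standard multiplicative update. A minimal sketch on a toy 2×2 system (not the mesh-based system matrix of the paper):

```python
def mlem(y, A, n_iter=200):
    """MLEM update: x <- x * A^T(y / Ax) / (A^T 1), elementwise,
    starting from a uniform positive image."""
    m, n = len(A), len(A[0])
    x = [1.0] * n
    sens = [sum(A[i][j] for i in range(m)) for j in range(n)]  # A^T 1
    for _ in range(n_iter):
        proj = [sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
        ratio = [y[i] / proj[i] for i in range(m)]
        back = [sum(A[i][j] * ratio[i] for i in range(m)) for j in range(n)]
        x = [xj * bj / sj for xj, bj, sj in zip(x, back, sens)]
    return x

A = [[1.0, 0.0], [0.5, 1.0]]   # toy system matrix
y = [2.0, 4.0]                 # consistent with x = [2, 3]
x = mlem(y, A)
print([round(v, 3) for v in x])  # → [2.0, 3.0]
```

For noiseless, consistent data the iterates converge to the exact nonnegative solution; with Poisson noise the same update converges to the maximum-likelihood image.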
Resolution effects in reconstructing ancestral genomes.
Zheng, Chunfang; Jeong, Yuji; Turcotte, Madisyn Gabrielle; Sankoff, David
2018-05-09
The reconstruction of ancestral genomes must deal with the problem of resolution, necessarily involving a trade-off between trying to identify genomic details and being overwhelmed by noise at higher resolutions. We use the median reconstruction, at the synteny-block level, of the ancestral genome of the order Gentianales, based on coffee, Rhazya stricta and grape, to exemplify the effects of resolution (granularity) on comparative genomic analyses. We show how decreased resolution blurs the differences between evolving genomes with respect to rate, mutational process and other characteristics.
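The blurring effect of coarser granularity can be seen even with a toy unsigned breakpoint distance: grouping genes into blocks lowers the apparent distance between two gene orders. (Illustrative only; the paper's analysis uses synteny blocks and median genomes, not this simple measure.)

```python
def coarsen(order, block_size):
    """Lower the resolution of a gene order by grouping consecutive
    genes into fixed-size blocks."""
    return [tuple(order[i:i + block_size])
            for i in range(0, len(order), block_size)]

def breakpoints(a, b):
    """Number of adjacencies of `a` absent from `b` (unsigned)."""
    adj = {frozenset(p) for p in zip(b, b[1:])}
    return sum(1 for p in zip(a, a[1:]) if frozenset(p) not in adj)

g1 = [1, 2, 3, 4, 5, 6]
g2 = [1, 2, 5, 6, 3, 4]
print(breakpoints(g1, g2), breakpoints(coarsen(g1, 2), coarsen(g2, 2)))
# → 2 1
```

At block size 2, the rearrangement between blocks (5, 6) and (3, 4) counts as a single breakpoint instead of two: exactly the loss of detail the abstract describes.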
Reconstruction algorithm in compressed sensing based on maximum a posteriori estimation
International Nuclear Information System (INIS)
Takeda, Koujin; Kabashima, Yoshiyuki
2013-01-01
We propose a systematic method for constructing a sparse-data reconstruction algorithm in compressed sensing at relatively low computational cost for a general observation matrix. It is known that the cost of ℓ1-norm minimization using a standard linear programming algorithm is O(N³). We show that this cost can be reduced to O(N²) by applying the approach of posterior maximization. Furthermore, in principle, the algorithm from our approach is expected to achieve the widest successful reconstruction region, as evaluated by theoretical argument. We also discuss the relation between the belief propagation-based reconstruction algorithm introduced in preceding works and our approach.
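For context, the baseline problem is ℓ1-regularized reconstruction. A generic sketch using iterative shrinkage-thresholding (ISTA) — not the posterior-maximization algorithm of the paper — with an invented matrix and sparse signal:

```python
import math

def ista(y, A, lam=0.05, step=0.1, n_iter=500):
    """ISTA for min 0.5*||Ax - y||^2 + lam*||x||_1:
    gradient step on the quadratic term, then soft thresholding."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(n_iter):
        r = [sum(A[i][j] * x[j] for j in range(n)) - y[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        x = [math.copysign(max(abs(xi - step * gi) - step * lam, 0.0),
                           xi - step * gi)
             for xi, gi in zip(x, g)]
    return x

A = [[1.0, 1.0, 0.0], [0.0, 1.0, 1.0]]  # toy observation matrix
y = [1.0, 1.0]                          # generated by sparse x = [0, 1, 0]
x_hat = ista(y, A)
print([round(v, 2) for v in x_hat])
```

The recovered vector has the correct support [0, ~0.975, 0] (the nonzero entry is slightly shrunk by the ℓ1 penalty). Each ISTA iteration costs O(MN), which is the kind of per-iteration scaling the paper's O(N²) result targets.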
Charge reconstruction in large-area photomultipliers
Grassi, M.; Montuschi, M.; Baldoncini, M.; Mantovani, F.; Ricci, B.; Andronico, G.; Antonelli, V.; Bellato, M.; Bernieri, E.; Brigatti, A.; Brugnera, R.; Budano, A.; Buscemi, M.; Bussino, S.; Caruso, R.; Chiesa, D.; Corti, D.; Dal Corso, F.; Ding, X. F.; Dusini, S.; Fabbri, A.; Fiorentini, G.; Ford, R.; Formozov, A.; Galet, G.; Garfagnini, A.; Giammarchi, M.; Giaz, A.; Insolia, A.; Isocrate, R.; Lippi, I.; Longhitano, F.; Lo Presti, D.; Lombardi, P.; Marini, F.; Mari, S. M.; Martellini, C.; Meroni, E.; Mezzetto, M.; Miramonti, L.; Monforte, S.; Nastasi, M.; Ortica, F.; Paoloni, A.; Parmeggiano, S.; Pedretti, D.; Pelliccia, N.; Pompilio, R.; Previtali, E.; Ranucci, G.; Re, A. C.; Romani, A.; Saggese, P.; Salamanna, G.; Sawy, F. H.; Settanta, G.; Sisti, M.; Sirignano, C.; Spinetti, M.; Stanco, L.; Strati, V.; Verde, G.; Votano, L.
2018-02-01
Large-area photomultiplier tubes (PMTs) allow efficient instrumentation of liquid scintillator (LS) neutrino detectors, where large target masses are pivotal to compensate for the extremely elusive nature of neutrinos. Depending on the detector light yield, several scintillation photons stemming from the same neutrino interaction are likely to hit a single PMT within a few tens to hundreds of nanoseconds, causing several photoelectrons (PEs) to pile up at the PMT anode. In such a scenario, the signal generated by each PE is entangled with the others, and accurate PMT charge reconstruction becomes challenging. This manuscript describes an experimental method able to address PMT charge reconstruction in the case of large PE pile-up, providing an unbiased charge estimator at the permille level up to 15 detected PEs. The method is based on a signal filtering technique (Wiener filter) that suppresses the noise from both the PMT and the readout electronics, and on a Fourier-based deconvolution able to minimize the influence of signal distortions such as an overshoot. The analysis of simulated PMT waveforms shows that the slope of a linear regression modeling the relation between reconstructed and true charge values improves from 0.769 ± 0.001 (without deconvolution) to 0.989 ± 0.001 (with deconvolution), where unitary slope implies perfect reconstruction. A C++ implementation of the charge reconstruction algorithm is available online at [1].
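The filter-plus-deconvolution idea can be sketched in the frequency domain: build a Wiener-style filter G = H*/(|H|² + 1/SNR) from the single-PE pulse template H and apply it to the waveform spectrum. The pulse template, SNR, and waveform below are invented; a real implementation would use an FFT library rather than this O(n²) DFT.

```python
import cmath

def dft(x, inverse=False):
    """O(n^2) discrete Fourier transform (illustrative; use an FFT
    library in practice)."""
    n = len(x)
    s = 2j * cmath.pi * (1 if inverse else -1)
    out = [sum(x[k] * cmath.exp(s * j * k / n) for k in range(n))
           for j in range(n)]
    return [v / n for v in out] if inverse else out

def wiener_deconvolve(signal, template, snr=100.0):
    """Deconvolve a waveform made of 'template' pulses:
    G = conj(H) / (|H|^2 + 1/snr), applied in the frequency domain."""
    S, H = dft(signal), dft(template)
    G = [h.conjugate() / (abs(h) ** 2 + 1.0 / snr) for h in H]
    return [v.real for v in dft([si * gi for si, gi in zip(S, G)],
                                inverse=True)]

# Invented single-PE template and a waveform with PEs at samples 0 and 3
template = [1.0, 0.5, 0.25, 0.0, 0.0, 0.0, 0.0, 0.0]
signal = [1.0, 0.5, 0.25, 1.0, 0.5, 0.25, 0.0, 0.0]
rec = wiener_deconvolve(signal, template)
print([round(v, 2) for v in rec])
```

The output shows two near-unit peaks at the PE arrival times; the 1/SNR term keeps the filter from amplifying frequencies where the template response is weak, which is how the real method suppresses noise and overshoot distortions.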
Evaluation of 3D reconstruction algorithms for a small animal PET camera
International Nuclear Information System (INIS)
Johnson, C.A.; Gandler, W.R.; Seidel, J.
1996-01-01
The use of paired, opposing position-sensitive phototube scintillation cameras (SCs) operating in coincidence for small-animal imaging with positron emitters is currently under study. Because of the low sensitivity of the system even in 3D mode and the need to produce images with high resolution, it was postulated that a 3D expectation maximization (EM) reconstruction algorithm might be well suited for this application. We investigated four reconstruction algorithms for the 3D SC PET camera: 2D filtered back-projection (FBP), 2D ordered-subset EM (OSEM), 3D reprojection (3DRP), and 3D OSEM. Noise was assessed for all slices by the coefficient of variation in a simulated uniform cylinder. Resolution was assessed from a simulation of 15 point sources in the warm background of the uniform cylinder. At comparable noise levels, the resolution achieved with OSEM (0.9 mm to 1.2 mm) is significantly better than that obtained with FBP or 3DRP (1.5 mm to 2.0 mm). Images of a rat skull labeled with 18F-fluoride suggest that 3D OSEM can improve the image quality of a small-animal PET camera.
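The coefficient-of-variation noise metric used above is simply the ratio of standard deviation to mean over a uniform region. A minimal sketch with invented ROI values:

```python
import math

def coefficient_of_variation(values):
    """Noise metric for a uniform region: sd / mean (population sd)."""
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    return math.sqrt(var) / mean

roi = [98.0, 102.0, 100.0, 96.0, 104.0]  # hypothetical voxel values
print(round(coefficient_of_variation(roi), 4))  # → 0.0283
```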
Energy Technology Data Exchange (ETDEWEB)
Park, Juil [Seoul National University Children' s Hospital, Department of Radiology, Seoul (Korea, Republic of); Choi, Young Hun [Seoul National University Children' s Hospital, Department of Radiology, Seoul (Korea, Republic of); Seoul National University College of Medicine, Department of Radiology, Seoul (Korea, Republic of); Cheon, Jung-Eun; Kim, Woo Sun; Kim, In-One [Seoul National University Children' s Hospital, Department of Radiology, Seoul (Korea, Republic of); Seoul National University College of Medicine, Department of Radiology, Seoul (Korea, Republic of); Seoul National University Medical Research Center, Institute of Radiation Medicine, Seoul (Korea, Republic of); Pak, Seong Yong [Siemens Healthineers, Seoul (Korea, Republic of); Krauss, Bernhard [Siemens Healthineers, Forchheim (Germany)
2017-11-15
Advanced virtual monochromatic reconstruction from dual-energy brain CT has not been evaluated in children. To determine the most effective advanced virtual monochromatic imaging energy level for maximizing pediatric brain parenchymal image quality in dual-energy unenhanced brain CT, and to compare this technique with conventional monochromatic reconstruction and polychromatic scanning. Using both conventional (Mono) and advanced monochromatic reconstruction (Mono+) techniques, we retrospectively reconstructed 13 virtual monochromatic imaging energy levels from 40 keV to 100 keV in 5-keV increments from dual-source, dual-energy unenhanced brain CT scans obtained in 23 children. We analyzed gray and white matter noise ratios, signal-to-noise ratios and contrast-to-noise ratios, and posterior fossa artifact. We chose the optimal mono-energetic levels and compared them with conventional CT. For Mono+, optima were observed at 60 keV, with minimum posterior fossa artifact at 70 keV. For Mono, optima were at 65-70 keV, with minimum posterior fossa artifact at 75 keV. Mono+ was superior to Mono and to polychromatic CT on the image-quality measures. Subjective analysis rated Mono+ superior to the other image sets. Optimal virtual monochromatic imaging using the Mono+ algorithm demonstrated better image quality for gray-white matter differentiation and reduced artifact in the posterior fossa. (orig.)
An inquiry into the sources of some of the Bustan's maxims
Directory of Open Access Journals (Sweden)
sajjad rahmatian
2016-12-01
Sa`di is one of those poets who has given a special place to preaching and guiding people; among his works, the entire text of the Bustan is devoted to advice and maxims on various legal and ethical subjects. In composing this work and expressing its moral points, Sa`di was surely influenced, directly or indirectly, by earlier sources, possibly drawing on their content. The main purpose of this article is to review the bases and sources of the Bustan's maxims and to show which texts and works influenced Sa`di in expressing them. To this end, we searched the sources devoted, to a greater or lesser extent, to aphorisms, in order to discover and extract traces of Sa`di's borrowing from their moral and didactic content. Among the most important findings of this study are the indirect influence of some Pahlavi books of maxims (such as the maxims of Azarbad Marespandan and the book of maxims of Bozorgmehr), as well as Sa`di's direct debt to the moral and ethical works of poets and writers before him; of these, his debt to the maxims of Abu Shakur Balkhi, Ferdowsi and Keikavus is remarkable and noteworthy.
Can monkeys make investments based on maximized pay-off?
Directory of Open Access Journals (Sweden)
Sophie Steelandt
2011-03-01
Animals can maximize benefits, but it is not known whether they adjust their investment according to expected pay-offs. We investigated whether monkeys can use different investment strategies in an exchange task. We tested eight capuchin monkeys (Cebus apella) and thirteen macaques (Macaca fascicularis, Macaca tonkeana) in an experiment where they could adapt their investment to the food amounts proposed by two different experimenters. One, the doubling partner, returned a reward that was twice the amount given by the subject, whereas the other, the fixed partner, always returned a constant amount regardless of the amount given. To maximize pay-offs, subjects should invest a maximal amount with the first partner and a minimal amount with the second. When tested with the fixed partner only, one third of the monkeys learned to remove a maximal amount of food for immediate consumption before investing a minimal one. With both partners, most subjects failed to maximize pay-offs by using different decision rules for each partner's quality. A single Tonkean macaque succeeded in investing a maximal amount with one experimenter and a minimal amount with the other. The fact that only one of 21 subjects learned to maximize benefits by adapting investment to the experimenters' quality indicates that such a task is difficult for monkeys, albeit not impossible.
Maximizing biomarker discovery by minimizing gene signatures
Directory of Open Access Journals (Sweden)
Chang Chang
2011-12-01
Background: The use of gene signatures can potentially be of considerable value in the field of clinical diagnosis. However, gene signatures defined with different methods can vary considerably even when applied to the same disease and the same endpoint. Previous studies have shown that the correct selection of subsets of genes from microarray data is key for the accurate classification of disease phenotypes, and a number of methods have been proposed for this purpose. However, these methods refine the subsets by considering each feature singly, and they do not confirm the association between the genes identified in each gene signature and the phenotype of the disease. We propose an innovative method termed Minimize Feature's Size (MFS), based on multiple-level similarity analyses and the association between genes and disease, for breast cancer endpoints, comparing classifier models generated in the second phase of MicroArray Quality Control (MAQC-II) and aiming to develop effective meta-analysis strategies that transform the MAQC-II signatures into a robust and reliable set of biomarkers for clinical applications. Results: We analyzed the similarity of the multiple gene signatures within an endpoint and between the two breast cancer endpoints at the probe and gene levels. The results indicate that disease-related genes are preferentially selected as components of a gene signature, and that the gene signatures for the two endpoints could be interchangeable. Minimized signatures were built at the probe level using MFS for each endpoint. By applying this approach, we generated a much smaller gene signature with predictive power similar to that of the gene signatures from MAQC-II. Conclusions: Our results indicate that gene signatures of both large and small sizes can perform equally well in clinical applications. Moreover, consistency and biological significance can be detected among different gene signatures, reflecting the
How data-ink maximization can motivate learners – Persuasion in data visualization
Gottschalk, Judith
2017-01-01
This paper discusses both the macro- and the micro-level of persuasion in data visualizations in persuasive tools for language learning. The hypothesis of this paper is that persuasive data visualizations decrease reading time and increase reading accuracy of graph charts. Based on Tufte’s (1983) data-ink maximization principle the report introduces a framework for persuasive data visualizations on the persuasive micro level which employs Few’s (2013) conception of de-emphasizing non-data and...
Algebraic reconstruction techniques for spectral reconstruction in diffuse optical tomography
International Nuclear Information System (INIS)
Brendel, Bernhard; Ziegler, Ronny; Nielsen, Tim
2008-01-01
Reconstruction in diffuse optical tomography (DOT) necessitates solving the diffusion equation, which is nonlinear with respect to the parameters that have to be reconstructed. Currently applied solving methods are based on the linearization of the equation. For spectral three-dimensional reconstruction, the emerging equation system is too large for direct inversion, but the application of iterative methods is feasible. Computational effort and speed of convergence of these iterative methods are crucial since they determine the computation time of the reconstruction. In this paper, the iterative methods algebraic reconstruction technique (ART) and conjugate gradients (CG) as well as a new modified ART method are investigated for spectral DOT reconstruction. The aim of the modified ART scheme is to speed up the convergence by considering the specific conditions of spectral reconstruction. As a result, it converges much faster to favorable results than conventional ART and CG methods.
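The row-action update at the heart of ART can be sketched in a few lines. Below is a minimal Kaczmarz-style sweep over a toy linear system; the matrix, data, relaxation factor and sweep count are illustrative and not drawn from the paper's DOT setting:

```python
import numpy as np

def kaczmarz(A, b, n_sweeps=50, relax=1.0):
    """Basic ART (Kaczmarz) iteration: project the current estimate
    onto the hyperplane of one measurement row at a time."""
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            a = A[i]
            # Move x toward the hyperplane a.x = b[i]
            x += relax * (b[i] - a @ x) / (a @ a) * a
    return x

# Small consistent system: ART converges to its unique solution.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([3.0, 4.0])
x = kaczmarz(A, b)
```

For consistent systems the iterates converge to the exact solution; for inconsistent (noisy) data, underrelaxation (`relax` < 1) damps the cyclic behavior.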
ACTS: from ATLAS software towards a common track reconstruction software
AUTHOR|(INSPIRE)INSPIRE-00349786; The ATLAS collaboration; Salzburger, Andreas; Kiehn, Moritz; Hrdinka, Julia; Calace, Noemi
2017-01-01
Reconstruction of charged particles' trajectories is a crucial task for most particle physics experiments. The high instantaneous luminosity achieved at the LHC leads to a high number of proton-proton collisions per bunch crossing, which has put the track reconstruction software of the LHC experiments through a thorough test. Preserving track reconstruction performance under increasingly difficult experimental conditions, while keeping the usage of computational resources at a reasonable level, is an inherent problem for many HEP experiments. Exploiting concurrent algorithms and using multivariate techniques for track identification are the primary strategies to achieve that goal. Starting from current ATLAS software, the ACTS project aims to encapsulate track reconstruction software into a generic, framework- and experiment-independent software package. It provides a set of high-level algorithms and data structures for performing track reconstruction tasks as well as fast track simulation. The software is de...
Maximizing Strategy with an Effective Balanced Scorecard
Directory of Open Access Journals (Sweden)
Wendy Endrianto
2016-03-01
Full Text Available The research was conducted as a literature study of the topic. The findings are presented descriptively and systematically for each key point of the discussion, relating the correlated factors to one another in order to arrive at the most effective method for meeting the company's goals. The study concludes that synergizing the company's vision, mission, and strategy to improve performance requires communicating the balanced scorecard from top management down to the lower levels of management, so that all elements of the company know their respective roles in achieving the company's goals.
DEFF Research Database (Denmark)
Nordsborg, Nikolai; Goodmann, Craig; McKenna, Michael J.
2005-01-01
Dexamethasone, a widely clinically used glucocorticoid, increases human skeletal muscle Na+,K+ pump content, but the effects on maximal Na+,K+ pump activity and subunit-specific mRNA are unknown. Ten healthy male subjects ingested dexamethasone for 5 days and the effects on Na+,K+ pump content, maximal activity and subunit-specific mRNA levels (α1, α2, β1, β2, β3) in deltoid and vastus lateralis muscle were investigated. Before treatment, maximal Na+,K+ pump activity, as well as α1, α2, β1 and β2 mRNA levels were higher (P ... increased Na+,K+ pump maximal activity in vastus lateralis and deltoid by 14 ± 7% (P Na+,K+ pump content by 18 ± 9% (P
Herrera, Ramón
2018-03-01
The reconstruction of a warm inflationary universe model from the scalar spectral index n_S(N) and the tensor-to-scalar ratio r(N) as functions of the number of e-folds N is studied. Under a general formalism we find the effective potential and the dissipative coefficient in terms of the cosmological parameters n_S and r, considering the weak and strong dissipative stages under the slow-roll approximation. As a specific example, we study the attractors for the index n_S given by n_S - 1 ∝ N^{-1} and for the ratio r ∝ N^{-2}, in order to reconstruct the model of warm inflation. Here, expressions for the effective potential V(φ) and the dissipation coefficient Γ(φ) are obtained.
Gravitational collapse of charged dust shell and maximal slicing condition
International Nuclear Information System (INIS)
Maeda, Keiichi
1980-01-01
The maximal slicing condition is a good time coordinate condition qualitatively when pursuing the gravitational collapse by the numerical calculation. The analytic solution of the gravitational collapse under the maximal slicing condition is given in the case of a spherical charged dust shell and the behavior of time slices with this coordinate condition is investigated. It is concluded that under the maximal slicing condition we can pursue the gravitational collapse until the radius of the shell decreases to about 0.7 x (the radius of the event horizon). (author)
Optimal quantum error correcting codes from absolutely maximally entangled states
Raissi, Zahra; Gogolin, Christian; Riera, Arnau; Acín, Antonio
2018-02-01
Absolutely maximally entangled (AME) states are pure multi-partite generalizations of the bipartite maximally entangled states with the property that all reduced states of at most half the system size are in the maximally mixed state. AME states are of interest for multipartite teleportation and quantum secret sharing and have recently found new applications in the context of high-energy physics in toy models realizing the AdS/CFT correspondence. We work out in detail the connection between AME states of minimal support and classical maximum distance separable (MDS) error correcting codes and, in particular, provide explicit closed-form expressions for AME states of n parties with local dimension ...
Breakdown of maximality conjecture in continuous phase transitions
International Nuclear Information System (INIS)
Mukamel, D.; Jaric, M.V.
1983-04-01
A Landau-Ginzburg-Wilson model associated with a single irreducible representation which exhibits an ordered phase whose symmetry group is not a maximal isotropy subgroup of the symmetry group of the disordered phase is constructed. This example disproves the maximality conjecture suggested in numerous previous studies. Below the (continuous) transition, the order parameter points along a direction which varies with the temperature and with the other parameters which define the model. An extension of the maximality conjecture to reducible representations was postulated in the context of Higgs symmetry breaking mechanism. Our model can also be extended to provide a counter example in these cases. (author)
Yang, Li; Wang, Guobao; Qi, Jinyi
2016-04-01
Detecting cancerous lesions is a major clinical application of emission tomography. In a previous work, we studied penalized maximum-likelihood (PML) image reconstruction for lesion detection in static PET. Here we extend our theoretical analysis of static PET reconstruction to dynamic PET. We study both the conventional indirect reconstruction and direct reconstruction for Patlak parametric image estimation. In indirect reconstruction, Patlak parametric images are generated by first reconstructing a sequence of dynamic PET images, and then performing Patlak analysis on the time activity curves (TACs) pixel-by-pixel. In direct reconstruction, Patlak parametric images are estimated directly from raw sinogram data by incorporating the Patlak model into the image reconstruction procedure. PML reconstruction is used in both the indirect and direct reconstruction methods. We use a channelized Hotelling observer (CHO) to assess lesion detectability in Patlak parametric images. Simplified expressions for evaluating the lesion detectability have been derived and applied to the selection of the regularization parameter value to maximize detection performance. The proposed method is validated using computer-based Monte Carlo simulations. Good agreements between the theoretical predictions and the Monte Carlo results are observed. Both theoretical predictions and Monte Carlo simulation results show the benefit of the indirect and direct methods under optimized regularization parameters in dynamic PET reconstruction for lesion detection, when compared with the conventional static PET reconstruction.
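The indirect route described above reduces, pixel by pixel, to a linear fit in Patlak coordinates: after steady state, Ct(t)/Cp(t) = Ki · (∫Cp dτ)/Cp(t) + V, so the slope of the transformed data estimates the influx rate Ki. A minimal sketch with a synthetic, noiseless time-activity curve (the input function, rate constants and sampling grid are invented for illustration, not taken from the paper):

```python
import numpy as np

# Hypothetical sampled plasma input Cp(t) and a tissue TAC generated
# with known Patlak slope Ki and intercept V.
t = np.linspace(0.1, 60.0, 120)                 # minutes
Cp = np.exp(-0.1 * t) + 0.2                     # toy plasma input function
# Running integral of Cp by the trapezoid rule
int_Cp = np.concatenate(
    ([0.0], np.cumsum(0.5 * (Cp[1:] + Cp[:-1]) * np.diff(t))))
Ki_true, V_true = 0.05, 0.3
Ct = Ki_true * int_Cp + V_true * Cp             # TAC obeying the Patlak model

# Patlak transform: y = Ct/Cp vs x = int_Cp/Cp is linear;
# a first-degree polynomial fit recovers slope Ki and intercept V.
x = int_Cp / Cp
y = Ct / Cp
Ki_est, V_est = np.polyfit(x, y, 1)
```

With noisy data the same fit would be restricted to late-time frames, where the Patlak plot is linear.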
Listeners are maximally flexible in updating phonetic beliefs over time.
Saltzman, David; Myers, Emily
2018-04-01
Perceptual learning serves as a mechanism for listeners to adapt to novel phonetic information. Distributional tracking theories posit that this adaptation occurs as a result of listeners accumulating talker-specific distributional information about the phonetic category in question (Kleinschmidt & Jaeger, 2015, Psychological Review, 122). What is not known is how listeners build these talker-specific distributions; that is, whether they aggregate all information received over a certain time period, or whether they rely more heavily upon the most recent information received and down-weight older, consolidated information. In the present experiment, listeners were exposed to four interleaved blocks of a lexical decision task and a phonetic categorization task in which the lexical decision blocks were designed to bias perception in opposite directions along an "s"-"sh" continuum. Listeners returned several days later and completed the identical task again. Evidence was consistent with listeners using a relatively short temporal window of integration at the individual session level. Namely, in each individual session, listeners' perception of the "s"-"sh" contrast was biased by the information in the immediately preceding lexical decision block, and there was no evidence that listeners summed their experience with the talker over the entire session. Similarly, the magnitude of the bias effect did not change between sessions, consistent with the idea that talker-specific information remains flexible, even after consolidation. In general, results suggest that listeners are maximally flexible when considering how to categorize speech from a novel talker.
Moderate intra-group bias maximizes cooperation on interdependent populations.
Directory of Open Access Journals (Sweden)
Changbing Tang
Full Text Available Evolutionary game theory on spatial structures has received increasing attention during the past decades. However, the majority of these achievements focuses on single, static population structures, which is not fully consistent with the fact that real structures are composed of many interacting groups. These groups are interdependent and present dynamical features, in which individuals mimic the strategies of neighbors and continually switch their partnerships. It is, however, unclear how the dynamical and interdependent interactions among groups affect the evolution of collective behaviors. In this work, we employ the prisoner's dilemma game to investigate how the dynamics of structure influences cooperation on interdependent populations, where populations are represented by group structures. It is found that the more robust the links between cooperators (or the more fragile the links between cooperators and defectors), the more prevalent cooperation becomes. Furthermore, theoretical analysis shows that intra-group bias can favor cooperation, which is only possible when individuals are likely to attach to neighbors within the same group. Interestingly, however, cooperation can even be inhibited for large intra-group bias, so that a moderate intra-group bias maximizes the cooperation level.
Energy-driven scheduling algorithm for nanosatellite energy harvesting maximization
Slongo, L. K.; Martínez, S. V.; Eiterer, B. V. B.; Pereira, T. G.; Bezerra, E. A.; Paiva, K. V.
2018-06-01
The number of tasks that a satellite may execute in orbit is strongly related to the amount of energy its Electrical Power System (EPS) is able to harvest and to store. The manner in which the stored energy is distributed within the satellite also has a great impact on the CubeSat's overall efficiency. Most CubeSat EPS designs do not prioritize energy constraints in their formulation. In contrast, this work proposes an innovative energy-driven scheduling algorithm based on an energy-harvesting maximization policy. The energy harvesting circuit is mathematically modeled and the solar panel I-V curves are presented for different temperature and irradiance levels. Based on the models and simulations, the scheduling algorithm is designed to keep the solar panels working close to their maximum power point by triggering tasks at the appropriate times. Task execution affects the battery voltage, which is coupled to the solar panels through a protection circuit. A software-based Perturb and Observe strategy allows defining the tasks to be triggered. The scheduling algorithm is tested in FloripaSat, a 1U CubeSat. A test apparatus is proposed to emulate solar irradiance variation, considering the satellite's movement around the Earth. Tests show that the scheduling algorithm improves the CubeSat's energy harvesting capability by 4.48% in a three-orbit experiment and by up to 8.46% in a single orbit cycle, in comparison with the CubeSat operating without the scheduling algorithm.
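The Perturb and Observe idea mentioned above can be sketched independently of the paper's hardware: perturb the operating point, observe the power change, and keep or reverse the perturbation direction accordingly. The P-V curve, starting voltage and step size below are illustrative, not from the FloripaSat design:

```python
def perturb_and_observe(power_at, v0=10.0, step=0.1, iters=200):
    """Minimal Perturb-and-Observe sketch: nudge the operating voltage,
    keep the perturbation direction while power increases, reverse it
    when power drops."""
    v, direction = v0, 1.0
    p_prev = power_at(v)
    for _ in range(iters):
        v += direction * step
        p = power_at(v)
        if p < p_prev:          # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return v

# Toy parabolic P-V curve with its maximum power point at 17 V.
mpp = perturb_and_observe(lambda v: -(v - 17.0) ** 2 + 60.0)
```

Note the characteristic behavior: the operating point climbs to the maximum power point and then oscillates around it within one step size, which is why the step is a trade-off between tracking speed and steady-state ripple.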
Segmentation-Driven Tomographic Reconstruction
DEFF Research Database (Denmark)
Kongskov, Rasmus Dalgas
The tomographic reconstruction problem is concerned with creating a model of the interior of an object from some measured data, typically projections of the object. After reconstructing an object it is often desired to segment it, either automatically or manually. For computed tomography (CT) ... such that the segmentation subsequently can be carried out by use of a simple segmentation method, for instance just a thresholding method. We tested the advantages of going from a two-stage reconstruction method to a one-stage segmentation-driven reconstruction method for phase-contrast tomography reconstruction ...
Ray tracing reconstruction investigation for C-arm tomosynthesis
Malalla, Nuhad A. Y.; Chen, Ying
2016-04-01
C-arm tomosynthesis is a three-dimensional imaging technique. Both the x-ray source and the detector are mounted on a wheeled C-arm structure to provide a wide variety of movement around the object. In this paper, C-arm tomosynthesis is introduced to provide three-dimensional information over a limited view angle (less than 180°) to reduce radiation exposure and examination time. Reconstruction algorithms based on ray tracing, such as ray tracing back projection (BP), the simultaneous algebraic reconstruction technique (SART) and maximum likelihood expectation maximization (MLEM), were developed for C-arm tomosynthesis. C-arm tomosynthesis projection images of a simulated spherical object were generated with a virtual geometric configuration with a total view angle of 40 degrees. This study demonstrated the sharpness of in-plane reconstructed structure and the effectiveness of removing out-of-plane blur for each reconstruction algorithm. Results showed the ability of ray tracing based reconstruction algorithms to provide three-dimensional information with limited-angle C-arm tomosynthesis.
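Of the algorithms listed, MLEM has a particularly compact multiplicative update: x ← x · Aᵀ(y / Ax) / Aᵀ1, applied element-wise. A minimal sketch on a tiny noiseless system, where a 3×3 matrix stands in for the ray-tracing projector (all numbers are illustrative):

```python
import numpy as np

def mlem(A, y, n_iter=200):
    """MLEM update for emission tomography:
    x <- x * A^T(y / Ax) / A^T 1, element-wise."""
    x = np.ones(A.shape[1])                  # uniform non-negative start
    sens = A.sum(axis=0)                     # sensitivity image A^T 1
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, 1e-12) # guard against divide-by-zero
        x *= (A.T @ ratio) / sens
    return x

# Tiny invertible system: with noiseless data, MLEM converges toward
# the unique non-negative solution.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
x_true = np.array([2.0, 1.0, 3.0])
x_hat = mlem(A, A @ x_true)
```

The multiplicative form automatically preserves non-negativity of the estimate, one of MLEM's practical advantages over unconstrained algebraic methods.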
A new iterative algorithm to reconstruct the refractive index.
Liu, Y J; Zhu, P P; Chen, B; Wang, J Y; Yuan, Q X; Huang, W X; Shu, H; Li, E R; Liu, X S; Zhang, K; Ming, H; Wu, Z Y
2007-06-21
The latest developments in x-ray imaging are associated with techniques based on phase contrast. However, the image reconstruction procedures demand significant improvements of the traditional methods, and/or new algorithms have to be introduced to take advantage of the high contrast and sensitivity of the new experimental techniques. In this letter, an improved iterative reconstruction algorithm based on the maximum likelihood expectation maximization technique is presented and discussed in order to reconstruct the distribution of the refractive index from data collected by an analyzer-based imaging setup. The technique considered probes the partial derivative of the refractive index with respect to an axis lying in the meridional plane and perpendicular to the propagation direction. Computer simulations confirm the reliability of the proposed algorithm. In addition, the comparison between an analytical reconstruction algorithm and the iterative method is also discussed, together with the convergence characteristics of the latter algorithm. Finally, we show how the proposed algorithm may be applied to reconstruct the distribution of the refractive index of an epoxy cylinder containing small air bubbles of about 300 μm in diameter.
Parametric image reconstruction using spectral analysis of PET projection data
International Nuclear Information System (INIS)
Meikle, Steven R.; Matthews, Julian C.; Cunningham, Vincent J.; Bailey, Dale L.; Livieratos, Lefteris; Jones, Terry; Price, Pat
1998-01-01
Spectral analysis is a general modelling approach that enables calculation of parametric images from reconstructed tracer kinetic data independent of an assumed compartmental structure. We investigated the validity of applying spectral analysis directly to projection data, motivated by the advantages that: (i) the number of reconstructions is reduced by an order of magnitude and (ii) iterative reconstruction becomes practical, which may improve signal-to-noise ratio (SNR). A dynamic software phantom with typical 2-[11C]thymidine kinetics was used to compare projection-based and image-based methods and to assess bias-variance trade-offs using iterative expectation maximization (EM) reconstruction. We found that the two approaches are not exactly equivalent due to properties of the non-negative least-squares algorithm. However, the differences are small (K1 and, to a lesser extent, VD). The optimal number of EM iterations was 15-30, with up to a two-fold improvement in SNR over filtered back projection. We conclude that projection-based spectral analysis with EM reconstruction yields accurate parametric images with high SNR and has potential application to a wide range of positron emission tomography ligands. (author)
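The non-negative least-squares step at the core of spectral analysis can be sketched with SciPy: a time-activity curve is described as a non-negative combination of decaying-exponential basis functions. The basis grid, rate constants and synthetic TAC below are illustrative; the paper applies the fit to projection data, not to a single curve:

```python
import numpy as np
from scipy.optimize import nnls

# Candidate kinetic rate constants on a logarithmic grid (illustrative).
t = np.linspace(0.0, 60.0, 61)
betas = np.logspace(-3, 0, 20)
B = np.exp(-np.outer(t, betas))       # basis matrix, one column per rate

# Synthetic noiseless TAC built from two of the basis columns.
tac = 0.7 * B[:, 5] + 0.3 * B[:, 12]

# Non-negative least squares selects a sparse non-negative spectrum
# of components that reproduces the curve.
coeffs, residual = nnls(B, tac)
```

The non-negativity constraint is what makes the recovered spectrum physically interpretable (each component corresponds to a tissue response with a non-negative amplitude), and it is also the property that makes projection- and image-based fits not exactly equivalent, as noted in the abstract.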
A combined reconstruction-classification method for diffuse optical tomography
Energy Technology Data Exchange (ETDEWEB)
Hiltunen, P [Department of Biomedical Engineering and Computational Science, Helsinki University of Technology, PO Box 3310, FI-02015 TKK (Finland); Prince, S J D; Arridge, S [Department of Computer Science, University College London, Gower Street London, WC1E 6B (United Kingdom)], E-mail: petri.hiltunen@tkk.fi, E-mail: s.prince@cs.ucl.ac.uk, E-mail: s.arridge@cs.ucl.ac.uk
2009-11-07
We present a combined classification and reconstruction algorithm for diffuse optical tomography (DOT). DOT is a nonlinear ill-posed inverse problem. Therefore, some regularization is needed. We present a mixture of Gaussians prior, which regularizes the DOT reconstruction step. During each iteration, the parameters of a mixture model are estimated. These associate each reconstructed pixel with one of several classes based on the current estimate of the optical parameters. This classification is exploited to form a new prior distribution to regularize the reconstruction step and update the optical parameters. The algorithm can be described as an iteration between an optimization scheme with zeroth-order variable mean and variance Tikhonov regularization and an expectation-maximization scheme for estimation of the model parameters. We describe the algorithm in a general Bayesian framework. Results from simulated test cases and phantom measurements show that the algorithm enhances the contrast of the reconstructed images with good spatial accuracy. The probabilistic classifications of each image contain only a few misclassified pixels.
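The expectation-maximization classification step can be illustrated on scalar data: a two-component Gaussian mixture fitted to pixel values drawn from two classes, alternating responsibilities (E-step) and parameter updates (M-step). The class means, variances and counts below are invented, standing in for background and inclusion optical parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "pixel values" from two optical-parameter classes.
x = np.concatenate([rng.normal(0.01, 0.002, 300),   # background-like class
                    rng.normal(0.05, 0.005, 100)])  # inclusion-like class

mu = np.array([0.0, 0.1])          # crude initial means
sigma = np.array([0.01, 0.01])
pi = np.array([0.5, 0.5])
for _ in range(50):
    # E-step: responsibility of each class for each pixel value
    # (densities up to the common 1/sqrt(2*pi) factor, which cancels).
    d = np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma
    r = pi * d
    r /= r.sum(axis=1, keepdims=True)
    # M-step: update mixture weights, means and standard deviations
    n = r.sum(axis=0)
    pi = n / len(x)
    mu = (r * x[:, None]).sum(axis=0) / n
    sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n)
```

In the combined algorithm, the fitted mixture plays the role of the prior that regularizes the next reconstruction step; here only the classification half is sketched.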
Maximizing Learning Strategies to Promote Learner Autonomy
Directory of Open Access Journals (Sweden)
Junaidi Mistar
2001-01-01
Full Text Available Learning a new language is ultimately about being able to communicate with it. Encouraging a sense of responsibility on the part of the learners is crucial for training them to be proficient communicators. As such, understanding the strategies that they employ in acquiring language skills is important for arriving at ideas on how to promote learner autonomy. Research recently conducted with three different groups of learners of English at the tertiary education level in Malang indicated that they used metacognitive and social strategies with high frequency, while memory, cognitive, compensation, and affective strategies were exercised with medium frequency. This finding implies that the learners have acquired some degree of autonomy, because metacognitive strategies require them to independently make plans for their learning activities as well as evaluate their progress, and social strategies require them to independently enhance communicative interactions with other people. Further actions should then be taken to increase their learning autonomy, namely by intensifying the use of the other four strategy categories, which are not yet applied intensively.
Oxidative stress and nitrite dynamics under maximal load in elite athletes: relation to sport type.
Cubrilo, Dejan; Djordjevic, Dusica; Zivkovic, Vladimir; Djuric, Dragan; Blagojevic, Dusko; Spasic, Mihajlo; Jakovljevic, Vladimir
2011-09-01
Maximal workload in elite athletes induces increased generation of reactive oxygen/nitrogen species (RONS) and oxidative stress, but the dynamics of RONS production are not fully explored. The aim of our study was to examine the effects of long-term engagement in sports with different energy requirements (aerobic, anaerobic, and aerobic/anaerobic) on oxidative stress parameters during a progressive exercise test. Concentrations of lactate, nitric oxide (NO) measured through its stable end product nitrite (NO2-), superoxide anion radical (O2·-), and thiobarbituric acid reactive substances (TBARS) as an index of lipid peroxidation were determined at rest, after maximal workload, and at the 4th and 10th minute of recovery in blood plasma of top-level competitors in rowing, cycling, and taekwondo. Results showed that the sportsmen had similar resting concentrations of lactate and O2·-. Resting nitrite concentrations were lowest in the taekwondo fighters, while the rowers had the highest levels among the examined groups. The order of magnitude for resting TBARS levels was cycling > taekwondo > rowing. During exercise at maximal intensity, the concentration of lactate was significantly elevated to similar levels in all tested sportsmen and remained persistently elevated at 4 and 10 min of recovery. There were no significant changes in O2·-, nitrite, or TBARS levels either at maximal exercise intensity or during the recovery period compared with rest in the examined individuals. Our results showed that long-term different training strategies establish different basal nitrite and lipid peroxidation levels in sportsmen. However, progressive exercise influenced neither basal nitrite levels nor oxidative stress parameters, either at maximal load or during the first 10 min of recovery, in the sportsmen studied.
[Reconstructive methods after Fournier gangrene].
Wallner, C; Behr, B; Ring, A; Mikhail, B D; Lehnhardt, M; Daigeler, A
2016-04-01
Fournier's gangrene is a variant of necrotizing fasciitis restricted to the perineal and genital region. It presents as an acute life-threatening disease and demands rapid surgical debridement, resulting in large soft tissue defects. Various reconstructive methods have to be applied to reconstitute functionality and aesthetics. The objective of this work is to identify different reconstructive methods in the literature and compare them to our current concepts for reconstructing defects caused by Fournier gangrene. We analyzed the current literature and our own reconstructive methods for Fournier gangrene. Fournier gangrene is an emergency requiring rapid, calculated antibiotic treatment and radical surgical debridement. After the acute phase of the disease, appropriate reconstructive methods are indicated. The planning of defect reconstruction depends on many factors, especially functional and aesthetic demands. Scrotal reconstruction requires a higher aesthetic and functional reconstructive degree than perineal cutaneous wounds. In general, thorough wound hygiene, proper preoperative planning, and careful consideration of the patient's demands are essential for successful reconstruction. In the literature, various methods for reconstruction after Fournier gangrene are described. Reconstruction with a flap is required for a good functional result in complex regions such as the scrotum and penis, while cutaneous wounds can be managed with skin grafting. Patient compliance and tissue demand are crucial factors in the decision-making process.
Reference Values for Maximal Inspiratory Pressure: A Systematic Review
Directory of Open Access Journals (Sweden)
Isabela MB Sclauser Pessoa
2014-01-01
Full Text Available BACKGROUND: Maximal inspiratory pressure (MIP is the most commonly used measure to evaluate inspiratory muscle strength. Normative values for MIP vary significantly among studies, which may reflect differences in participant demographics and technique of MIP measurement.