Maximizing results in reconstruction of cheek defects.
Mureau, Marc A M; Hofer, Stefan O P
2009-07-01
The face is exceedingly important, as it is the medium through which individuals interact with the rest of society. Reconstruction of cheek defects after trauma or surgery is a continuing challenge for surgeons who wish to reliably restore facial function and appearance. Important in aesthetic facial reconstruction are the aesthetic unit principles, by which the face can be divided into central facial units (nose, lips, eyelids) and peripheral facial units (cheeks, forehead, chin). This article summarizes established options for reconstruction of cheek defects and provides an overview of several modifications, as well as tips and tricks, to avoid complications and maximize aesthetic results.
Arctic Sea Level Reconstruction
Svendsen, Peter Limkilde
Reconstruction of historical Arctic sea level is very difficult due to the limited coverage and quality of tide gauge and altimetry data in the area. This thesis addresses many of these issues and discusses strategies to help achieve a stable and plausible reconstruction of Arctic sea level from 1950 to today. The primary record of historical sea level, on the order of several decades to a few centuries, is tide gauges. Tide gauge records from around the world are collected in the Permanent Service for Mean Sea Level (PSMSL) database, which includes data along the Arctic coasts. A reasonable amount of data is available along the Norwegian and Russian coasts since 1950, and most published research on Arctic sea level extends cautiously from these areas. Very little tide gauge data is available elsewhere in the Arctic, and records of a length of several decades, as generally recommended for sea...
National Oceanic and Atmospheric Administration, Department of Commerce — Records of past lake levels, mostly related to changes in moisture balance (evaporation-precipitation). Parameter keywords describe what was measured in this data...
Determinants of maximally attained level of pulmonary function
Wang, XB; Mensinga, TT; Schouten, JP; Rijcken, B; Weiss, ST
2004-01-01
This study investigated the determinants of sex-specific maximally attained levels of FEV1, VC, and the ratio of FEV1 to VC. Subjects were between the ages of 15 and 35 years (1,818 males and 1,732 females), participating in the Vlagtwedde/Vlaardingen study in The Netherlands. The subjects were followed...
Takahashi, Yasuyuki; Murase, Kenya [Osaka Medical Coll., Takatsuki (Japan). Graduate School; Higashino, Hiroshi; Sogabe, Ichiro; Sakamoto, Kana
2001-12-01
The quality of images reconstructed by means of the maximum likelihood expectation maximization (ML-EM) and ordered-subsets expectation maximization (OS-EM) algorithms was examined with parameters such as the number of iterations and subsets, and compared with the quality of images reconstructed by the filtered back projection method. Phantoms with signals inside signals, which mimicked single-photon emission computed tomography (SPECT) images of cerebral blood flow and myocardial perfusion, and phantoms with signals around signals, which mimicked SPECT images of bone and tumor, were used for the experiments. To determine signal recognition, SPECT images in which signals of different sizes and densities could be appropriately recognized with combinations of fewer iterations and subsets were evaluated by receiver operating characteristic (ROC) analysis. The results of the ROC analysis were applied to myocardial phantom experiments and myocardial perfusion scintigraphy. Taking the image processing time into consideration, good SPECT images were obtained by OS-EM with 10 iterations and 5 subsets. This study should be helpful for selecting parameters such as the number of iterations and subsets when using the ML-EM or OS-EM algorithms. (author)
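The multiplicative update at the core of ML-EM, and its ordered-subsets acceleration, can be sketched as follows. This is an illustrative toy implementation on a generic nonnegative system matrix, not the authors' SPECT code; the interleaved subset choice is an assumption for the sketch.

```python
import numpy as np

def os_em(A, y, n_iter=10, n_subsets=5, eps=1e-12):
    """Ordered-subsets EM for y ~ A @ x with nonnegative x.

    Each subset update rescales x by the back-projected ratio of
    measured to estimated projections, restricted to that subset.
    With n_subsets=1 this reduces to plain ML-EM.
    """
    m, n = A.shape
    x = np.ones(n)
    subsets = [np.arange(s, m, n_subsets) for s in range(n_subsets)]
    for _ in range(n_iter):
        for idx in subsets:
            As = A[idx]
            sens = np.maximum(As.sum(axis=0), eps)    # subset sensitivity
            ratio = y[idx] / np.maximum(As @ x, eps)  # measured / estimated
            x *= (As.T @ ratio) / sens                # multiplicative update
    return x
```

OS-EM gains its speed by applying the EM correction once per subset rather than once per full pass, which is why the abstract weighs iteration and subset counts against processing time.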
Rodrigo, Miguel; Climent, Andreu; Liberos, Alejandro; Hernandez-Romero, Ismael; Arenal, Angel; Bermejo, Javier; Fernandez-Aviles, Francisco; Atienza, Felipe; Guillem, Maria
2017-05-23
Electrocardiographic imaging (ECGI) has become an increasingly used technique for non-invasive diagnosis of cardiac arrhythmias, although the need for medical imaging technology to determine the anatomy hinders its introduction into clinical practice. This work explores the ability of a new metric, based on inverse reconstruction quality, to determine the location and orientation of the atrial surface inside the torso. Body surface electrical signals from 31 realistic mathematical models and four AF patients were used to estimate the optimal position of the atria inside the torso. The curvature of the L-curve from the Tikhonov method, which was found to be related to inverse reconstruction quality, was measured after applying deviations to the atrial position and orientation. Independent deviations in the atrial position were resolved by finding the maximal L-curve curvature, with an error of 1.7±2.4 mm in mathematical models and 9.1±11.5 mm in patients. For independent angular deviations, the orientation error using the L-curve was 5.8±7.1º in mathematical models and 12.4º±13.2º in patients. The L-curve curvature was also tested under superimposed uncertainties in the 3 axes of translation and the 3 axes of rotation; the errors were 2.3±3.2 mm and 6.4º±7.1º in mathematical models, and 7.9±10.7 mm and 12.1º±15.5º in patients. The curvature of the L-curve is a useful marker of atrial position and would allow correcting inaccuracies in its location.
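For a Tikhonov-regularized problem, the L-curve and its curvature (the quality marker the abstract relies on) can be computed along these lines. This is a generic dense-matrix sketch, not the authors' ECGI pipeline:

```python
import numpy as np

def l_curve_curvature(A, b, lambdas):
    """Curvature of the Tikhonov L-curve (log residual norm vs.
    log solution norm), evaluated at each regularization parameter."""
    AtA, Atb = A.T @ A, A.T @ b
    rho, eta = [], []
    for lam in lambdas:
        x = np.linalg.solve(AtA + lam ** 2 * np.eye(A.shape[1]), Atb)
        rho.append(np.log(np.linalg.norm(A @ x - b)))  # log residual norm
        eta.append(np.log(np.linalg.norm(x)))          # log solution norm
    rho, eta = np.array(rho), np.array(eta)
    # curvature of the parametric curve (rho(lambda), eta(lambda))
    d1r, d1e = np.gradient(rho), np.gradient(eta)
    d2r, d2e = np.gradient(d1r), np.gradient(d1e)
    return (d1r * d2e - d2r * d1e) / (d1r ** 2 + d1e ** 2) ** 1.5
```

The position search described in the abstract then amounts to maximizing this curvature over candidate atrial locations and orientations.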
Sea level reconstruction from satellite altimetry and tide gauge data
Svendsen, Peter Limkilde; Andersen, Ole Baltazar; Nielsen, Allan Aasbjerg
2012-01-01
Ocean satellite altimetry has provided global sets of sea level data for the last two decades, allowing determination of spatial patterns in global sea level. For reconstructions going back further than this period, tide gauge data can be used as a proxy. We examine different methods of combining these, including transformations such as maximum autocorrelation factors (MAF), which better take into account the spatio-temporal structure of the variation. Rather than trying to maximize the amount of variance explained, the MAF transform considers noise to be uncorrelated with a spatially or temporally shifted version of itself, whereas the desired signal will exhibit autocorrelation. This will be applied to a global dataset, necessitating wrap-around consideration of spatial shifts. Our focus is a timescale going back approximately 50 years, allowing reasonable global availability of tide gauge data. This allows...
Endrizzi, M.; Delogu, P.; Oliva, P.
2014-12-01
An expectation maximization method is applied to the reconstruction of X-ray tube spectra from transmission measurements in the energy range 7-40 keV. A semiconductor single-photon counting detector, ionization chambers and a scintillator-based detector are used for the experimental measurement of the transmission. The number of iterations required to reach an approximate solution is estimated on the basis of the measurement error, according to the discrepancy principle. The effectiveness of the stopping rule is studied on simulated data and validated with experiments. The quality of the reconstruction depends on the information available on the source itself and the possibility to add this knowledge to the solution process is investigated. The method can produce good approximations provided that the amount of noise in the data can be estimated.
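A toy version of the EM iteration with a discrepancy-principle stopping rule might look like this. Illustrative only: the generic forward matrix A stands in for the transmission model, and the noise estimate is assumed known, as the abstract requires.

```python
import numpy as np

def em_discrepancy(A, y, noise_level, max_iter=5000, eps=1e-12):
    """Multiplicative EM updates for y ~ A @ x, stopped by the
    discrepancy principle: iterate until the residual norm falls
    to the estimated noise level."""
    x = np.full(A.shape[1], y.sum() / A.sum())  # flat nonnegative start
    sens = np.maximum(A.sum(axis=0), eps)
    for k in range(max_iter):
        x *= (A.T @ (y / np.maximum(A @ x, eps))) / sens
        if np.linalg.norm(A @ x - y) <= noise_level:  # discrepancy reached
            break
    return x, k + 1
```

Stopping early in this way regularizes the solution: running EM past the noise level would begin fitting measurement noise rather than the spectrum.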
Stock markets reconstruction via entropy maximization driven by fitness and density
Squartini, Tiziano; Cimini, Giulio
2016-01-01
The spreading of financial distress in capital markets and the resulting systemic risk strongly depend on the detailed structure of financial interconnections. Yet, while financial institutions have to disclose their aggregated balance sheet data, information on single positions is often unavailable due to privacy issues. The resulting challenge is that of using the aggregate information to statistically reconstruct financial networks and correctly predict their higher-order properties. However, standard approaches generate unrealistically dense networks, which severely underestimate systemic risk. Moreover, reconstruction techniques are generally cast for networks of bilateral exposures between financial institutions (such as the interbank market), whereas the network of their investment portfolios (i.e., the stock market) has received much less attention. Here we develop an improved reconstruction method, based on statistical mechanics concepts and tailored for bipartite market networks. Technically, o...
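The fitness-and-density recipe can be made concrete with a bipartite link-probability sketch: entropy maximization with fitness values x_i, y_j yields p_ij = z·x_i·y_j/(1 + z·x_i·y_j), with z calibrated so the expected number of links matches the observed density. This functional form follows the general fitness-model literature; the details of the authors' method may differ.

```python
import numpy as np
from scipy.optimize import brentq

def fitness_link_probabilities(x, y, n_links):
    """Bipartite fitness-model link probabilities
    p_ij = z*x_i*y_j / (1 + z*x_i*y_j), with z calibrated so that
    the expected number of links equals the target density n_links."""
    def density_gap(z):
        P = z * np.outer(x, y)
        return (P / (1.0 + P)).sum() - n_links
    z = brentq(density_gap, 1e-12, 1e12)  # root-find the calibration
    P = z * np.outer(x, y)
    return P / (1.0 + P)
```

Because the expected density is pinned to the observed one, the method avoids the unrealistically dense networks that the abstract criticizes in standard approaches.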
Papaconstadopoulos, P; Levesque, I R; Maglieri, R; Seuntjens, J
2016-02-07
Direct determination of the source intensity distribution of clinical linear accelerators is still a challenging problem for small-field beam modeling. Current techniques most often involve special equipment and are difficult to implement in the clinic. In this work we present a maximum-likelihood expectation-maximization (MLEM) approach to the source reconstruction problem utilizing small fields and a simple experimental set-up. The MLEM algorithm iteratively ray-traces photons from the source plane to the exit plane and extracts corrections based on photon fluence profile measurements. The photon fluence profiles were determined by dose profile film measurements in air, using a high-density thin foil as build-up material and an appropriate point spread function (PSF). The effect of other beam parameters and scatter sources was minimized by using the smallest field size ([Formula: see text] cm²). The source occlusion effect was reproduced by estimating the position of the collimating jaws during this process. The method was first benchmarked against simulations for a range of typical accelerator source sizes. The sources were reconstructed with an accuracy better than 0.12 mm in full width at half maximum (FWHM) relative to the respective electron sources incident on the target. The estimated jaw positions agreed within 0.2 mm with the expected values. The reconstruction technique was also tested against measurements on a Varian Novalis Tx linear accelerator and compared to a previously commissioned Monte Carlo model. The reconstructed FWHM of the source agreed within 0.03 mm and 0.11 mm with the commissioned electron source in the crossplane and inplane orientations, respectively. The impact of jaw-positioning, experimental, and PSF uncertainties on the reconstructed source distribution was evaluated, with jaw positioning having the dominant effect.
Ye, Hongwei; Vogelsang, Levon; Feiglin, David H.; Lipson, Edward D.; Krol, Andrzej
2008-03-01
In order to improve reconstructed image quality for cone-beam collimator (CBC) SPECT, we have developed and implemented a fully 3D reconstruction using an ordered-subsets expectation maximization (OSEM) algorithm, along with a volumetric system model (the cone-volume system model, CVSM), a modified attenuation compensation, and a 3D depth- and angle-dependent resolution and sensitivity correction. SPECT data were acquired in a 128×128 matrix, in 120 views with a single circular orbit. Two sets of numerical Defrise phantoms were used to simulate CBC SPECT scans, and low-noise, scatter-free projection datasets were obtained using the SimSET Monte Carlo package. The reconstructed images, obtained using OSEM with a line-length system model (LLSM) and a 3D Gaussian post-filter, and OSEM with CVSM and a 3D Gaussian post-filter, were quantitatively studied. Overall improvement in image quality was observed for OSEM-CVSM compared with OSEM-LLSM, including better transaxial resolution, higher contrast-to-noise ratio between hot and cold disks, and better accuracy and lower bias.
Hong, Hunsop; Schonfeld, Dan
2008-06-01
In this paper, we propose a maximum-entropy expectation-maximization (MEEM) algorithm for density estimation. The maximum-entropy constraint is imposed to ensure smoothness of the estimated density function. Deriving the MEEM algorithm requires determining the covariance matrix in the framework of the maximum-entropy likelihood function, which is difficult to solve analytically; we therefore derive the algorithm by optimizing a lower bound of the maximum-entropy likelihood function. The classical expectation-maximization (EM) algorithm has previously been employed for 2-D density estimation; we extend this approach to image recovery from randomly sampled data and to sensor field estimation from randomly scattered sensor networks. Computer simulation experiments demonstrate the superior performance of the proposed MEEM algorithm in comparison to existing methods.
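As a point of reference for the density-estimation setting, the classical EM algorithm the paper builds on can be sketched for a 1-D Gaussian mixture. This is the baseline only; the maximum-entropy smoothness constraint and lower-bound derivation that distinguish MEEM are not reproduced here.

```python
import numpy as np

def em_gmm_1d(data, k=2, n_iter=100):
    """Classical EM for a 1-D Gaussian mixture density estimate."""
    mu = np.linspace(data.min(), data.max(), k)  # spread initial means
    var = np.full(k, data.var())
    w = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component per point
        d2 = (data[:, None] - mu) ** 2
        p = w * np.exp(-0.5 * d2 / var) / np.sqrt(2 * np.pi * var)
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, variances
        nk = r.sum(axis=0)
        w = nk / len(data)
        mu = (r * data[:, None]).sum(axis=0) / nk
        var = (r * (data[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var
```

MEEM modifies the M-step objective with an entropy penalty so the fitted density stays smooth; the alternating E/M structure above is otherwise the same.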
Houngbonon, Georges Vivien; Jeanjean, Francois
2014-01-01
This paper empirically assesses the impact of the intensity of competition on investment in new technologies within the mobile telecommunications industry. Using firm-level panel data and an instrumental-variable estimation, it finds an inverted-U relationship between competition intensity and investment. The intermediate level of competition intensity that maximizes investment stands at 62 percent, where competition intensity is measured by one minus the Lerner index at the firm level. This means that ...
Sakawa, M; Kato, K.
2009-01-01
This paper considers stochastic two-level linear programming problems. Using the concepts of chance constraints and probability maximization, the original problems are transformed into deterministic ones. An interactive fuzzy programming method is presented for efficiently deriving a satisfactory solution with consideration of the overall satisfactory balance.
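The chance-constraint transformation can be made concrete for the simplest case of a normally distributed right-hand side: P(a·x ≤ b) ≥ α with b ~ N(μ, σ²) is equivalent to the deterministic constraint a·x ≤ μ + σ·Φ⁻¹(1 − α). A minimal sketch of this scalar case; the paper's two-level setting is more general.

```python
from scipy.stats import norm

def deterministic_rhs(mu, sigma, alpha):
    """Deterministic equivalent of the chance constraint
    P(a.x <= b) >= alpha when b ~ N(mu, sigma^2):
    a.x <= mu + sigma * Phi^{-1}(1 - alpha)."""
    return mu + sigma * norm.ppf(1.0 - alpha)
```

For any α above 0.5 the right-hand side tightens below its mean, reflecting the safety margin the probability requirement demands.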
Sumiaki Maeo
Antagonistic muscle pairs cannot be fully activated simultaneously, even with maximal effort, under conditions of voluntary co-contraction, and their muscular activity levels are always below those during agonist contraction with maximal voluntary effort (MVE). Whether the muscular activity level during this task is trainable remains unclear. The present study examined this issue by comparing the muscular activity level during maximal voluntary co-contraction in highly experienced bodybuilders, who frequently perform voluntary co-contraction in their training programs, with that in untrained individuals (nonathletes). The electromyograms (EMGs) of the biceps brachii and triceps brachii muscles during maximal voluntary co-contraction of the elbow flexors and extensors were recorded in 11 male bodybuilders and 10 nonathletes, and normalized to the values obtained during the MVE of agonist contraction for each of the corresponding muscles (%EMGMVE). The involuntary coactivation level of the antagonist muscle during the MVE of agonist contraction was also calculated. In both muscles, %EMGMVE values during the co-contraction task were significantly higher (P<0.01) in bodybuilders than in nonathletes (biceps brachii: 66±14% in bodybuilders vs. 46±13% in nonathletes; triceps brachii: 74±16% vs. 57±9%). There was a significant positive correlation between the length of bodybuilding experience and muscular activity level during the co-contraction task (r = 0.653, P = 0.03). The involuntary antagonist coactivation level during the MVE of agonist contraction did not differ between the two groups. The current results indicate that long-term participation in voluntary co-contraction training progressively enhances muscular activity during maximal voluntary co-contraction.
Karakatsanis, Nicolas A.; Casey, Michael E.; Lodge, Martin A.; Rahmim, Arman; Zaidi, Habib
2016-08-01
Whole-body (WB) dynamic PET has recently demonstrated its potential in translating the quantitative benefits of parametric imaging to the clinic. Post-reconstruction standard Patlak (sPatlak) WB graphical analysis utilizes multi-bed multi-pass PET acquisition to produce quantitative WB images of the tracer influx rate Ki as a complementary metric to the semi-quantitative standardized uptake value (SUV). The resulting Ki images may suffer from high noise due to the need for short acquisition frames. Meanwhile, a generalized Patlak (gPatlak) WB post-reconstruction method has been suggested to limit the Ki bias of sPatlak analysis in regions with non-negligible 18F-FDG uptake reversibility; however, gPatlak analysis is non-linear and thus can further amplify noise. In the present study, we implemented, within an open-source platform for tomographic image reconstruction, a clinically adoptable 4D WB reconstruction framework enabling efficient estimation of sPatlak and gPatlak images directly from dynamic multi-bed PET raw data with substantial noise reduction. Furthermore, we employed the optimization transfer methodology to accelerate 4D expectation-maximization (EM) convergence by nesting the fast image-based estimation of Patlak parameters within each iteration cycle of the slower projection-based estimation of dynamic PET images. The novel gPatlak 4D method was initialized from an optimized set of sPatlak ML-EM iterations to facilitate EM convergence. Initially, realistic simulations were conducted utilizing published 18F-FDG kinetic parameters coupled with the XCAT phantom. Quantitative analyses illustrated enhanced Ki target-to-background ratio (TBR) and especially contrast-to-noise ratio (CNR) performance for the 4D versus the indirect methods and static SUV. Furthermore, considerable convergence acceleration was observed for the nested algorithms involving 10-20 sub-iterations. Moreover, systematic reduction in Ki percent bias and improved TBR were...
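The post-reconstruction (indirect) standard Patlak analysis that the 4D method is compared against reduces to a linear fit per voxel. A hedged sketch on synthetic inputs, not the authors' 4D implementation:

```python
import numpy as np

def patlak_ki(t, cp, ct):
    """Standard (irreversible) Patlak fit: regress C_T/C_p on the
    'stretched time' integral(C_p)/C_p; the slope is the influx
    rate Ki and the intercept the distribution volume V."""
    integ = np.concatenate(
        ([0.0], np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(t)))
    )  # trapezoidal integral of the plasma input C_p from t[0]
    ki, v = np.polyfit(integ / cp, ct / cp, 1)
    return ki, v
```

The noise amplification the abstract targets arises because this fit is performed on short-frame images voxel by voxel; the 4D approach instead estimates Ki inside the reconstruction itself.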
Hurst, Laurence D; Ghanbarian, Avazeh T; Forrest, Alistair R R; Huminiecki, Lukasz
2015-12-01
X chromosomes are unusual in many regards, not least of which is their nonrandom gene content. The causes of this bias are commonly discussed in the context of sexual antagonism and the avoidance of activity in the male germline. Here, we examine the notion that, at least in some taxa, functionally biased gene content may more profoundly be shaped by limits imposed on gene expression owing to haploid expression of the X chromosome. Notably, if the X, as in primates, is transcribed at rates comparable to the ancestral rate (per promoter) prior to the X chromosome formation, then the X is not a tolerable environment for genes with very high maximal net levels of expression, owing to transcriptional traffic jams. We test this hypothesis using The Encyclopedia of DNA Elements (ENCODE) and data from the Functional Annotation of the Mammalian Genome (FANTOM5) project. As predicted, the maximal expression of human X-linked genes is much lower than that of genes on autosomes: on average, maximal expression is three times lower on the X chromosome than on autosomes. Similarly, autosome-to-X retroposition events are associated with lower maximal expression of retrogenes on the X than seen for X-to-autosome retrogenes on autosomes. Also as expected, X-linked genes show a lesser degree of increase in gene expression than autosomal ones (compared to the human/chimpanzee common ancestor) if highly expressed, but not if lowly expressed. The traffic jam model also explains the known lower breadth of expression for genes on the X (and the Z of birds), as genes with broad expression are, on average, those with high maximal expression. As then further predicted, highly expressed tissue-specific genes are also rare on the X, and broadly expressed genes on the X tend to be lowly expressed, both indicating that the trend is shaped by the maximal expression level, not the breadth of expression per se. Importantly, a limit to the maximal expression level explains biased tissue of expression...
Vera Vilchez, Jesús; Jimenez, Raimundo; Madinabeitia, Iker; Masiulis, Nerijus; Cárdenas, David
2017-08-04
Fitness level modulates the physiological responses to exercise for a variety of indices. While intense bouts of exercise have been demonstrated to increase tear osmolarity (Tosm), it is not known whether fitness level affects the Tosm response to acute exercise. This study aims to compare the effect of a maximal incremental test on Tosm between trained and untrained military helicopter pilots. Nineteen military helicopter pilots (ten trained and nine untrained) performed a maximal incremental test on a treadmill. A tear sample was collected before and after physical effort to determine the exercise-induced changes in Tosm. Bayesian statistical analysis demonstrated that Tosm significantly increased from 303.72 ± 6.76 to 310.56 ± 8.80 mmol/L after the maximal incremental test. However, while the untrained group showed an acute Tosm rise (an increment of 12.33 mmol/L), the trained group maintained a stable Tosm after physical effort (1.45 mmol/L). There was a significant positive linear association between fat indices and Tosm changes (correlation coefficients [r] ranging from 0.77 to 0.89), whereas the Tosm changes displayed a negative relationship with cardiorespiratory capacity (VO2 max; r = -0.75) and performance parameters (r = -0.75 for velocity, and r = -0.67 for time to exhaustion). The findings from this study provide evidence that fitness level is a major determinant of the Tosm response to maximal incremental physical effort, showing a fairly linear association with several indices related to fitness level. A high fitness level seems to be beneficial in avoiding Tosm changes as a consequence of intense exercise.
Arctic sea-level reconstruction analysis using recent satellite altimetry
Svendsen, Peter Limkilde; Andersen, Ole Baltazar; Nielsen, Allan Aasbjerg
2014-01-01
We present a sea-level reconstruction for the Arctic Ocean using recent satellite altimetry data. The model, forced by historical tide gauge data, is based on empirical orthogonal functions (EOFs) from a calibration period; for this purpose, newly retracked satellite altimetry from ERS-1 and -2 and Envisat has been used. Despite the limited coverage of these datasets, we have made a reconstruction up to 82 degrees north for the period 1950–2010. We place particular emphasis on determining appropriate preprocessing for the tide gauge data, and on validation of the model, including the ability to reconstruct known data. The relationship between the reconstruction and climatic variables, such as atmospheric pressure, and climate oscillations, including the Arctic Oscillation (AO), is examined.
Zou, Xubo; Mathis, W.
2004-09-01
We propose an experimental scheme for one-step implementation of maximally entangled states of many three-level atoms in microwave cavity QED. In the scheme, many three-level atoms initially prepared in the same superposition states are simultaneously sent through one superconducting cavity, and maximally entangled states can be generated without requiring the measurement and individual addressing of the atoms.
Brendle, Joerg
2016-01-01
We show that, consistently, there can be maximal subtrees of P(omega) and P(omega)/fin of arbitrary regular uncountable size below the size of the continuum. We also show that there are no maximal subtrees of P(omega)/fin with countable levels. Our results answer several questions of Campero, Cancino, Hrusak, and Miranda.
Online Muon Reconstruction in the ATLAS Level-2 trigger system
Di Mattia, A; Dos Anjos, A; Baines, J T M; Bee, C P; Biglietti, M; Bogaerts, J A C; Boisvert, V; Bosman, M; Caron, B; Casado, M P; Cataldi, G; Cavalli, D; Cervetto, M; Comune, G; Conde-Muíño, P; De Santo, A; Díaz-Gómez, M; Dosil, M; Ellis, Nick; Emeliyanov, D; Epp, B; Falciano, S; Farilla, A; George, S; Ghete, V M; González, S; Grothe, M; Kabana, S; Khomich, A; Kilvington, G; Konstantinidis, N P; Kootz, A; Lowe, A; Luci, C; Luminari, L; Maeno, T; Marzano, F; Masik, J; Meessen, C; Mello, A G; Merino, G; Moore, R; Morettini, P; Negri, A; Nikitin, N V; Nisati, A; Padilla, C; Panikashvili, N; Parodi, F; Pasqualucci, E; Pérez-Réale, V; Pinfold, J L; Pinto, P; Qian, Z; Resconi, S; Rosati, S; Sánchez, C; Santamarina-Rios, C; Scannicchio, D A; Schiavi, C; Segura, E; De Seixas, J M; Sivoklokov, S Yu; Soluk, R A; Stefanidis, E; Sushkov, S; Sutton, M; Tapprogge, Stefan; Thomas, E; Touchard, F; Venda-Pinto, B; Vercesi, V; Werner, P; Wheeler, S; Wickens, F J; Wiedenmann, W; Wielers, M; Zobernig, G; 2004 IEEE Nuclear Science Symposium And Medical Imaging Conference
2004-01-01
To cope with the 40 MHz event production rate of the LHC, the trigger of the ATLAS experiment selects events in three sequential steps of increasing complexity and accuracy, whose final results are close to the offline reconstruction. The Level-1, implemented with custom hardware, identifies physics objects within Regions of Interest and operates a first reduction of the event rate to 75 kHz. The higher trigger levels provide a software-based event selection which further reduces the event rate to about 100 Hz. This paper presents the algorithm (muFast) employed at Level-2 to confirm the muon candidates flagged by the Level-1. muFast identifies hits of muon tracks inside the Muon Spectrometer and provides a precise measurement of the muon momentum at the production vertex. The algorithm must process the Level-1 muon output rate (~20 kHz), so particular care was taken in its optimization. The result is a very fast track reconstruction algorithm with good physics performance which, in some cases, appr...
Tao, R.; Tang, H.
Chocolate is one of the most popular food types and flavors in the world. Unfortunately, at present, chocolate products contain too much fat, which contributes to obesity. For example, a typical molding chocolate contains up to 40% fat in total, and chocolate for covering ice cream contains 50–60% fat. Because children are the leading chocolate consumers, reducing the fat level in chocolate products to make them healthier is important and urgent. While this issue was called to attention and elaborated in articles and books decades ago, and led to some patent applications, unfortunately no actual solution was found. Why is reducing fat in chocolate so difficult? What is the underlying physical mechanism? We have found that this issue is deeply related to the basic science of soft matter, especially viscosity and the maximally random jammed (MRJ) density φ_x. All chocolate production handles liquid chocolate, a suspension of cocoa solid particles in melted fat, mainly cocoa butter. The fat level cannot be lower than 1 - φ_x if the liquid chocolate is to flow. Here we show that with application of an electric field to liquid chocolate, we can aggregate the suspended particles into prolate spheroids. This microstructure change reduces the liquid chocolate's viscosity along the flow direction and increases its MRJ density significantly. Hence the fat level in chocolate can be effectively reduced. We look forward to a new class of healthier and tasty chocolate coming to the market soon. Dept. of Physics, Temple Univ, Philadelphia, PA 19122.
The acute effect of maximal exercise on plasma beta-endorphin levels in fibromyalgia patients
Ghavidel-Parsa, Banafsheh; Rajabi, Sahar; Sanaei, Omid; Toutounchi, Mehrangiz
2016-01-01
Background: This study aimed to investigate the effect of strenuous exercise on β-endorphin (β-END) levels in fibromyalgia (FM) patients compared to healthy subjects. Methods: We enrolled 30 FM patients and 15 healthy individuals. All study participants underwent a treadmill exercise test using the modified Bruce protocol (M.Bruce). The goal of the test was achieving at least 70% of the predicted maximal heart rate (HRMax). The serum levels of β-END were measured before and after the exercise program. Measurements were done while heart rate was at least 70% of its predicted maximum. Results: The mean ± standard deviation (SD) of exercise duration in the FM and control groups were 24.26 ± 5.29 and 29.06 ± 3.26 minutes, respectively, indicating a shorter time to achieve the goal heart rate in FM patients (P < 0.003). Most FM patients attained 70% HRMax at lower stages (stages 2 and 3) of M.Bruce compared to the control group (70% versus 6.6%, respectively; P < 0.0001). Compared to healthy subjects, FM patients had lower serum β-END levels both at baseline and post-exercise (mean ± SD: 122.07 ± 28.56 µg/ml and 246.55 ± 29.57 µg/ml in the control group versus 90.12 ± 20.91 µg/ml and 179.80 ± 28.57 µg/ml in FM patients, respectively; P < 0.001). Conclusions: We found that FM patients had lower levels of β-END in both basal and post-exercise status. Exercise increased the serum β-END level in both groups, but the average increase in β-END in FM patients was significantly lower than in the control group. PMID:27738503
On the Maximal Dimension of a Completely Entangled Subspace for Finite Level Quantum Systems
K R Parthasarathy
2004-11-01
Let $\mathcal{H}_i$ be a finite dimensional complex Hilbert space of dimension $d_i$ associated with a finite level quantum system $A_i$ for $i=1, 2,\ldots,k$. A subspace $S\subset\mathcal{H} = \mathcal{H}_{A_1 A_2\ldots A_k} = \mathcal{H}_1 \otimes \mathcal{H}_2 \otimes\cdots\otimes \mathcal{H}_k$ is said to be completely entangled if it has no non-zero product vector of the form $u_1 \otimes u_2 \otimes\cdots\otimes u_k$ with $u_i$ in $\mathcal{H}_i$ for each $i$. Using the methods of elementary linear algebra and the intersection theorem for projective varieties in basic algebraic geometry we prove that $$\max\limits_{S\in\mathcal{E}}\dim S=d_1 d_2\ldots d_k-(d_1+\cdots +d_k)+k-1,$$ where $\mathcal{E}$ is the collection of all completely entangled subspaces. When $\mathcal{H}_1 = \mathcal{H}_2$ and $k = 2$, an explicit orthonormal basis of a maximal completely entangled subspace of $\mathcal{H}_1 \otimes \mathcal{H}_2$ is given. We also introduce a more delicate notion of a perfectly entangled subspace for a multipartite quantum system, construct an example using the theory of stabilizer quantum codes and pose a problem.
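The closed-form maximum above is easy to evaluate and sanity-check numerically. The sketch below (illustrative, not from the paper) computes the formula and verifies, via the Schmidt rank, that the two-qubit singlet spans a completely entangled subspace of the predicted maximal dimension 1:

```python
import numpy as np

def max_entangled_dim(dims):
    """Parthasarathy's bound: d_1*...*d_k - (d_1 + ... + d_k) + k - 1."""
    prod = 1
    for d in dims:
        prod *= d
    return prod - sum(dims) + len(dims) - 1

print(max_entangled_dim([2, 2]))     # two qubits  -> 1
print(max_entangled_dim([2, 2, 2]))  # three qubits -> 4
print(max_entangled_dim([3, 3]))     # two qutrits  -> 4

# For two qubits the maximal completely entangled subspace is spanned by a
# single entangled vector, e.g. the singlet; Schmidt rank 2 means neither it
# nor any nonzero multiple of it is a product vector.
singlet = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)
print(np.linalg.matrix_rank(singlet.reshape(2, 2)))  # -> 2
```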
Bradley, Paul S; Mohr, Magni; Bendiksen, Mads
2011-01-01
The aims of this study were to (1) determine the reproducibility of sub-maximal and maximal versions of the Yo-Yo intermittent endurance test level 2 (Yo-Yo IE2 test), (2) assess the relationship between the Yo-Yo IE2 test and match performance and (3) quantify the sensitivity of the Yo-Yo IE2 test to detect test-retest changes and discriminate between performance for different playing standards and positions in elite soccer. Elite (n = 148) and sub-elite male (n = 14) soccer players carried out the Yo-Yo IE2 test on several occasions over consecutive seasons. Test-retest coefficients of variation (CV) in Yo-Yo IE2 test performance and heart rate after 6 min were 3.9% (n = 37) and 1.4% (n = 32), respectively. Elite male senior and youth U19 players Yo-Yo IE2 performances were better (P ...
Hodis, Flaviu A.; Johnston, Michael; Meyer, Luanna H.; McClure, John; Hodis, Georgeta M.; Starkey, Louise
2015-01-01
Maximising educational attainment is important for both individuals and societies. However, understanding of why some students achieve better than others is far from complete. Motivation and achievement data from a sample of 782 secondary-school students in New Zealand reveal that two specific types of outcome goals, namely "maximal levels of…
Maximal doses of atorvastatin and rosuvastatin are highly effective in lowering low-density lipoprotein (LDL) cholesterol and triglyceride levels; however, rosuvastatin has been shown to be significantly more effective than atorvastatin in lowering LDL cholesterol and in increasing high-density lipo...
Datta, Pameli; Philipsen, Peter A; Olsen, Peter; Petersen, Bibi; Johansen, Peter; Morling, Niels; Wulf, Hans C
2016-04-01
Vitamin D influences skeletal health as well as other aspects of human health. Even when the most obvious sources of variation such as solar UVB exposure, latitude, season, clothing habits, skin pigmentation and ethnicity are controlled for, variation in the serum 25-hydroxyvitamin D (25(OH)D) response to UVB remains extensive and unexplained. Our study assessed the inter-personal variation in the 25(OH)D response to UVR and the maximal obtainable 25(OH)D level in 22 healthy participants (220 samples) with similar skin pigmentation during winter, with negligible ambient UVB. Participants received identical UVB doses on identical body areas until a maximal level of 25(OH)D was reached. Major inter-personal variation in both the maximal obtainable UVB-induced 25(OH)D level (range 85–216 nmol/l, mean 134 nmol/l) and the total increase in 25(OH)D (range 3–139 nmol/l, mean 48 nmol/l) was found. A linear model including measured 25(OH)D baselines as personal intercepts explained 54.9% of the variation. By further including personal slopes in the model, as much as 90.8% of the variation could be explained. The explained variation constituted by personal differences in slopes thus represented 35.9%. Age, vitamin D receptor gene polymorphisms, height and constitutive skin pigmentation (of a skin area not exposed to UVB) explained 15.1% of this variation. Despite elimination of most known external sources of variation, our study demonstrated inter-personal variation corresponding to an observed maximal difference of 136 nmol/l in the total increase of 25(OH)D and 131 nmol/l in the maximal level of 25(OH)D.
Confidence and sensitivity of sea-level reconstructions
Svendsen, Peter Limkilde
and modelling these outside the reconstruction. The implementation is currently based on data from compound satellite datasets (i.e., two decades of altimetry), and the Simple Ocean Data Assimilation (SODA) model, an existing reconstruction, where a calibration period can be easily extracted and our model...
Liran Carmel
2010-01-01
Evolutionary binary characters are features of species or genes, indicating the absence (value zero) or presence (value one) of some property. Examples include eukaryotic gene architecture (the presence or absence of an intron in a particular locus), gene content, and morphological characters. In many studies, the acquisition of such binary characters is assumed to represent a rare evolutionary event, and consequently, their evolution is analyzed using various flavors of parsimony. However, when gain and loss of the character are not rare enough, a probabilistic analysis becomes essential. Here, we present a comprehensive probabilistic model to describe the evolution of binary characters on a bifurcating phylogenetic tree. A fast software tool, EREM, is provided, using maximum likelihood to estimate the parameters of the model and to reconstruct ancestral states (presence and absence in internal nodes) and events (gain and loss events along branches).
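The likelihood machinery behind such a model can be illustrated with Felsenstein's pruning algorithm on a toy two-state (absence/presence) character. The sketch below is not EREM itself; the tree shape, branch lengths, and gain/loss rates are made-up values, and it only computes the posterior of the root state rather than fitting parameters:

```python
import numpy as np

def transition(t, gain=0.3, loss=0.5):
    """2-state transition matrix exp(Q t) for gain/loss rates (toy values)."""
    r = gain + loss
    decay = np.exp(-r * t)
    stationary = np.array([loss, gain]) / r
    return decay * np.eye(2) + (1 - decay) * np.tile(stationary, (2, 1))

# Tiny tree: root -> (leaf A, internal -> (leaf B, leaf C)); all branches 0.2
leaf_a, leaf_b, leaf_c = np.eye(2)[1], np.eye(2)[1], np.eye(2)[0]  # A=1, B=1, C=0
p = transition(0.2)

# Felsenstein pruning: conditional likelihoods propagate from leaves to root
like_internal = (p @ leaf_b) * (p @ leaf_c)
like_root = (p @ leaf_a) * (p @ like_internal)

# Posterior of the ancestral (root) state under a flat root prior
posterior = 0.5 * like_root
posterior /= posterior.sum()
print(np.round(posterior, 3))  # [P(root absent), P(root present)]
```

With two of three leaves showing presence and short branches, the posterior heavily favors presence at the root; ancestral reconstruction at internal nodes works the same way.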
Kolstein, M.; De Lorenzo, G.; Chmeissani, M.
2014-04-01
The Voxel Imaging PET (VIP) Pathfinder project intends to show the advantages of using pixelated solid-state technology for nuclear medicine applications. It proposes designs for Positron Emission Tomography (PET), Positron Emission Mammography (PEM) and Compton gamma camera detectors with a large number of signal channels (of the order of 10^6). For a Compton camera, especially with a large number of readout channels, image reconstruction presents a big challenge. In this work, results are presented for the List-Mode Ordered Subset Expectation Maximization (LM-OSEM) image reconstruction algorithm on simulated data with the VIP Compton camera design. For the simulation, all realistic contributions to the spatial resolution are taken into account, including the Doppler broadening effect. The results show that even with a straightforward implementation of LM-OSEM, good images can be obtained for the proposed Compton camera design. Results are shown for various phantoms, including extended sources and with a distance between the field of view and the first detector plane equal to 100 mm, which corresponds to a realistic nuclear medicine environment.
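The update rule at the heart of LM-OSEM can be demonstrated on a tiny toy system. The sketch below is purely illustrative: the system matrix, geometry, and counts are invented, and it uses a single subset, i.e. plain list-mode MLEM (OSEM would partition the event list into subsets and apply the same update per subset):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy system: 8 image voxels, 20 possible detector "lines" (invented geometry)
n_vox, n_lines = 8, 20
sys_mat = rng.random((n_lines, n_vox))   # sys_mat[i, j]: sensitivity of line i to voxel j
true_img = np.array([0, 0, 5, 9, 9, 5, 0, 0], dtype=float)

# Simulate list-mode data: each event records which line fired
expected = sys_mat @ true_img
events = rng.choice(n_lines, size=20000, p=expected / expected.sum())

# List-mode MLEM update:
#   img_j <- img_j / sens_j * sum_events sys_mat[e, j] / (forward projection of event e)
sens = sys_mat.sum(axis=0)
img = np.ones(n_vox)
rows = sys_mat[events]                   # (n_events, n_vox)
for _ in range(50):
    fwd = rows @ img
    img *= (rows / fwd[:, None]).sum(axis=0) / sens
img *= true_img.sum() / img.sum()        # match overall scale for comparison
print(np.round(img, 1))
```

The multiplicative update keeps the image non-negative by construction, which is one reason the EM family is popular for emission tomography.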
Pais, Thiago M; Foulquié-Moreno, María R; Hubmann, Georg; Duitama, Jorge; Swinnen, Steve; Goovaerts, Annelies; Yang, Yudi; Dumortier, Françoise; Thevelein, Johan M
2013-06-01
The yeast Saccharomyces cerevisiae is able to accumulate ≥17% ethanol (v/v) by fermentation in the absence of cell proliferation. The genetic basis of this unique capacity is unknown. Up to now, all research has focused on tolerance of yeast cell proliferation to high ethanol levels. Comparison of maximal ethanol accumulation capacity and ethanol tolerance of cell proliferation in 68 yeast strains showed a poor correlation, but higher ethanol tolerance of cell proliferation clearly increased the likelihood of superior maximal ethanol accumulation capacity. We have applied pooled-segregant whole-genome sequence analysis to identify the polygenic basis of these two complex traits using segregants from a cross of a haploid derivative of the sake strain CBS1585 and the lab strain BY. From a total of 301 segregants, 22 superior segregants accumulating ≥17% ethanol in small-scale fermentations and 32 superior segregants growing in the presence of 18% ethanol, were separately pooled and sequenced. Plotting SNP variant frequency against chromosomal position revealed eleven and eight Quantitative Trait Loci (QTLs) for the two traits, respectively, and showed that the genetic basis of the two traits is partially different. Fine-mapping and Reciprocal Hemizygosity Analysis identified ADE1, URA3, and KIN3, encoding a protein kinase involved in DNA damage repair, as specific causative genes for maximal ethanol accumulation capacity. These genes, as well as the previously identified MKT1 gene, were not linked in this genetic background to tolerance of cell proliferation to high ethanol levels. The superior KIN3 allele contained two SNPs, which are absent in all yeast strains sequenced up to now. This work provides the first insight in the genetic basis of maximal ethanol accumulation capacity in yeast and reveals for the first time the importance of DNA damage repair in yeast ethanol tolerance.
Luthar, Suniya S.; Brown, Pamela J.
2007-01-01
The study of resilience has two core characteristics: it is fundamentally applied in nature, seeking to use scientific knowledge to maximize well-being among those at risk, and it draws on expertise from diverse scientific disciplines. Recent advances in biological processes have confirmed the profound deleterious effects of harsh caregiving environments, thereby underscoring the importance of early interventions. What remains to be established at this time is the degree to which insights on particular biological processes (e.g., involving specific brain regions, genes, or hormones) will be applied in the near future to achieve substantial reductions in mental health disparities. Aside from biology, developmental resilience researchers would do well to draw upon relevant evidence from other behavioral sciences, notably anthropology as well as family, counseling, and social psychology. Scientists working with adults and with children must remain vigilant to the advances and missteps in each other's work, always ensuring caution in conveying messages about the "innateness" of resilience or its prevalence across different subgroups. Our future research agenda must prioritize reducing abuse and neglect in close relationships; deriving the "critical ingredients" in effective interventions and going to scale with these; working collaboratively to refine theory on the construct; and responsibly, proactively disseminating what we have learned about the nature, limits, and antecedents of resilient adaptation across diverse at-risk groups. PMID:17705909
Idea Sharing: How to Maximize Participation in a Mixed-Level English Class
Carlson, Gordon D.
2015-01-01
Teaching a class of mixed EFL/ESL levels can be problematic for both instructors and students. The disparate levels of ability often mean that some students are not challenged enough while others struggle to keep pace. Drawing on experience in the university classroom in Japan, this practice promotes good preparation, self-reliance, inclusiveness,…
System-Level Validation through Post-Flight Reconstruction and Anchoring
2008-12-30
System-Level Post-Flight Reconstruction and Anchoring. Definitions: System-Level Post-Flight Reconstruction (PFR): manually recreate and run a past... cause analysis of the system-level anomalies found in the PFR; generate, test and implement M&S improvements to address anomalies. System-Level Validation is built on individual Element Validation. Approved for Public Release, 08-MDA-4058 (30 DEC 08).
When given the opportunity, chimpanzees maximize personal gain rather than “level the playing field”
Lydia M. Hopper
2013-09-01
We provided chimpanzees (Pan troglodytes) with the ability to improve the quality of food rewards they received in a dyadic test of inequity. We were interested to see if this provision influenced their responses and, if so, whether it was mediated by a social partner's outcomes. We tested eight dyads using an exchange paradigm in which, depending on the condition, the chimpanzees were rewarded with either high-value (a grape) or low-value (a piece of celery) food rewards for each completed exchange. We included four conditions. In the first, "Different" condition, the subject received different, less-preferred rewards than their partner for each exchange made (a test of inequity). In the "Unavailable" condition, high-value rewards were shown, but not given, to both chimpanzees prior to each exchange and the chimpanzees were rewarded equally with low-value rewards (a test of individual contrast). The final two conditions created equity. In these High-value and Low-value "Same" conditions both chimpanzees received the same food rewards for each exchange. Within each condition, the chimpanzees first completed ten trials in the Baseline Phase, in which the experimenter determined the rewards they received, and then ten trials in the Test Phase. In the Test Phase, the chimpanzees could exchange tokens through the aperture of a small wooden picture frame hung on their cage mesh in order to receive the high-value reward. Thus, in the Test Phase, the chimpanzees were provided with an opportunity to improve the quality of the rewards they received, either absolutely or relative to what their partner received. The chimpanzees responded in a targeted manner; in the Test Phase they attempted to maximize their returns in all conditions in which they had received low-value rewards during the Baseline Phase. Thus, the chimpanzees were apparently motivated to increase their reward regardless of their partners', but they only used the mechanism
Shinogaya, T; Kimura, M; Matsumoto, M
1997-12-01
The aim of this study was to clarify the normal relationship between jaw elevator muscle activity and occlusal contact in lateral positions, in order to assess the appropriate anterior guidance of lateral jaw movements for occlusal reconstruction and treatment. The EMG activity of the right and left masseter, anterior temporal, and posterior temporal muscles of 9 healthy subjects with full, natural dentition was measured with bipolar surface electrodes during two different biting efforts, one involving bite registration with a silicone material containing carbonate powder (BRS) and the other maximal voluntary clenching (MVC), at the right and left canine edge-to-edge positions and the intercuspal position. The difference in muscle activity between MVC and BRS, which was regarded as the actual muscle activity necessary for MVC, was calculated as a representative value for each muscle. When working-side occlusal contact was restricted by the anterior teeth, including the canines, the total actual EMG activity of the 6 jaw muscles had a significantly strong correlation with the frontal angle of the lateral incisal path and the occlusal contact area at the lateral occlusion. This result suggests that canine guidance may control muscle activity during lateral tooth clenching.
Rahnama, Nader; Gaeini, Abbas Ali; Kazemi, Fahimeh
2010-05-01
Consumption of energy drinks has become widespread among athletes. The effectiveness of Red Bull and Hype energy drinks on selected indices of maximal cardiorespiratory fitness and blood lactate levels in male athletes was examined in this study. Ten male student athletes (age: 22.4 ± 2.1 years, height: 180.8 ± 7.7 cm, weight: 74.2 ± 8.5 kg) performed three randomized maximal oxygen consumption tests on a treadmill. Each test was separated by four days and participants were asked to ingest Red Bull, Hype or placebo drinks 40 minutes before the exercise bout. The VO2max, time to exhaustion, heart rate and lactate were measured to determine if the caffeine-based beverages influence performance. An ANOVA test was used for analyzing the data. Greater values were observed in VO2max and time to exhaustion for the Red Bull and Hype trials compared to the placebo trial (p < 0.05), with no significant difference between the two energy drinks (p > 0.05). For blood lactate levels no significant changes were observed before and two minutes after the test (p > 0.05). Ingestion of Red Bull and Hype prior to exercise testing is effective on some indices of cardiorespiratory fitness but not on blood lactate levels.
Optimal hurricane overwash thickness for maximizing marsh resilience to sea level rise.
Walters, David C; Kirwan, Matthew L
2016-05-01
The interplay between storms and sea level rise, and between ecology and sediment transport governs the behavior of rapidly evolving coastal ecosystems such as marshes and barrier islands. Sediment deposition during hurricanes is thought to increase the resilience of salt marshes to sea level rise by increasing soil elevation and vegetation productivity. We use mesocosms to simulate burial of Spartina alterniflora during hurricane-induced overwash events of various thickness (0-60 cm), and find that adventitious root growth within the overwash sediment layer increases total biomass by up to 120%. In contrast to most previous work illustrating a simple positive relationship between burial depth and vegetation productivity, our work reveals an optimum burial depth (5-10 cm) beyond which burial leads to plant mortality. The optimum burial depth increases with flooding frequency, indicating that storm deposition ameliorates flooding stress, and that its impact on productivity will become more important under accelerated sea level rise. Our results suggest that frequent, low magnitude storm events associated with naturally migrating islands may increase the resilience of marshes to sea level rise, and in turn, slow island migration rates. We find that burial deeper than the optimum results in reduced growth or mortality of marsh vegetation, which suggests that future increases in overwash thickness associated with more intense storms and artificial heightening of dunes could lead to less resilient marshes.
Hydrological forecast of maximal water level in Lepenica river basin and flood control measures
Milanović Ana
2006-01-01
The Lepenica river basin has become an axis of economic and urban development of the Šumadija district. However, considering the Lepenica River and its tributaries, with their disordered flow regime, there is insufficient water for water supply and irrigation, while on the other hand the area suffers major flood and torrent damage (especially the Kragujevac basin). The paper presents flood problems in the river basin, maximum water level forecasts, and the flood control measures carried out so far. Some potential solutions aimed at achieving effective flood control are suggested as well.
Effects of ACL Reconstructive Surgery on Temporal Variations of Cytokine Levels in Synovial Fluid
Marco Bigoni
2016-01-01
Anterior cruciate ligament (ACL) reconstruction restores knee stability but does not reduce the incidence of posttraumatic osteoarthritis induced by inflammatory cytokines. The aim of this research was to longitudinally measure IL-1β, IL-6, IL-8, IL-10, and TNF-α levels in patients subjected to ACL reconstruction using a bone-patellar tendon-bone graft. Synovial fluid was collected within 24–72 hours of ACL rupture (acute), 1 month after injury immediately prior to surgery (presurgery), and 1 month thereafter (postsurgery). For comparison, a "control" group consisted of individuals presenting chronic ACL tears. Our results indicate that levels of IL-6, IL-8, and IL-10 vary significantly over time in reconstruction patients. In the acute phase, the levels of these cytokines in reconstruction patients were significantly greater than those in controls. In the presurgery phase, cytokine levels in reconstruction patients were reduced and comparable with those in controls. Finally, cytokine levels increased again with respect to the control group in the postsurgery phase. The levels of IL-1β and TNF-α showed no temporal variation. Our data show that the history of an ACL injury, including trauma and reconstruction, has a significant impact on levels of IL-6, IL-8, and IL-10 in synovial fluid but does not affect levels of TNF-α and IL-1β.
Jackson, Eleisha L; Shahmoradi, Amir; Spielman, Stephanie J; Jack, Benjamin R; Wilke, Claus O
2016-07-01
Structural properties such as solvent accessibility and contact number predict site-specific sequence variability in many proteins. However, the strength and significance of these structure-sequence relationships vary widely among different proteins, with absolute correlation strengths ranging from 0 to 0.8. In particular, two recent works have made contradictory observations. Yeh et al. (Mol. Biol. Evol. 31:135-139, 2014) found that both relative solvent accessibility (RSA) and weighted contact number (WCN) are good predictors of sitewise evolutionary rate in enzymes, with WCN clearly outperforming RSA. Shahmoradi et al. (J. Mol. Evol. 79:130-142, 2014) considered these same predictors (as well as others) in viral proteins and found much weaker correlations and no clear advantage of WCN over RSA. Because these two studies had substantial methodological differences, a direct comparison of their results is not possible. Here, we reanalyze the datasets of the two studies with one uniform analysis pipeline, and we find that many apparent discrepancies between the two analyses can be attributed to the extent of sequence divergence in individual alignments. Specifically, the alignments of the enzyme dataset are much more diverged than those of the virus dataset, and proteins with higher divergence exhibit, on average, stronger structure-sequence correlations. However, the highest structure-sequence correlations are observed at intermediate divergence levels, where both highly conserved and highly variable sites are present in the same alignment.
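The structure-sequence correlations analyzed here are simple to compute once sitewise rates and structural predictors are in hand. A toy sketch with synthetic numbers (not the reanalyzed datasets; the effect sizes and the RSA/WCN relationship are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
n_sites = 300

# Synthetic structural predictors (stand-ins for real RSA / WCN values)
rsa = rng.uniform(0, 1, n_sites)
wcn = 1 - rsa + rng.normal(0, 0.3, n_sites)   # contact number anticorrelates with RSA

# Site-specific evolutionary rate driven mostly by burial (toy model)
rate = 0.2 + 0.8 * rsa + rng.normal(0, 0.25, n_sites)

def pearson(a, b):
    a, b = a - a.mean(), b - b.mean()
    return (a @ b) / np.sqrt((a @ a) * (b @ b))

print(f"r(rate, RSA) = {pearson(rate, rsa):+.2f}")
print(f"r(rate, WCN) = {pearson(rate, wcn):+.2f}")
```

In real analyses the same correlation is computed per protein, and the spread of these per-protein coefficients is what the 0 to 0.8 range in the abstract refers to.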
Shuang Mei
Both dietary fat and carbohydrates (carbs) may play important roles in the development of insulin resistance. The main goal of this study was to further define the roles of fat and dietary carbs in insulin resistance. C57BL/6 mice were fed normal chow diet (CD) or high-fat diet (HFD) containing 0.1-25.5% carbs for 5 weeks, followed by evaluations of calorie consumption, body weight and fat gains, insulin sensitivity, intratissue insulin signaling, ectopic fat, and oxidative stress in liver and skeletal muscle. The role of hepatic gluconeogenesis in the HFD-induced insulin resistance was determined in mice. The role of fat in insulin resistance was also examined in cultured cells. HFD with little carbs (0.1%) induced severe insulin resistance. Addition of 5% carbs to HFD dramatically elevated insulin resistance, and 10% carbs in HFD was sufficient to induce a maximal level of insulin resistance. HFD with little carbs induced ectopic fat accumulation and oxidative stress in liver and skeletal muscle, and addition of carbs to HFD dramatically enhanced ectopic fat and oxidative stress. HFD increased hepatic expression of key gluconeogenic genes, and the increase was most dramatic for HFD with little carbs; inhibition of hepatic gluconeogenesis prevented the HFD-induced insulin resistance. In cultured cells, development of insulin resistance induced by a pathological level of insulin was prevented in the absence of fat. Together, fat is essential for the development of insulin resistance; dietary carbs are not necessary for HFD-induced insulin resistance, due to the presence of hepatic gluconeogenesis, but a very small amount can promote HFD-induced insulin resistance to a maximal level.
Stefano Zurrida
2011-01-01
Breast cancer is the most common cancer in women. Primary treatment is surgery, with mastectomy as the main treatment for most of the twentieth century. However, over that time, the extent of the procedure varied, and less extensive mastectomies are employed today compared to those used in the past, as excessively mutilating procedures did not improve survival. Today, many women receive breast-conserving surgery, usually with radiotherapy to the residual breast, instead of mastectomy, as it has been shown to be as effective as mastectomy in early disease. The relatively new skin-sparing mastectomy, often with immediate breast reconstruction, improves aesthetic outcomes and is oncologically safe. Nipple-sparing mastectomy is newer and used increasingly, with better acceptance by patients, and again appears to be oncologically safe. Breast reconstruction is an important adjunct to mastectomy, as it has a positive psychological impact on the patient, contributing to improved quality of life.
Event reconstruction performance of the ALICE High Level Trigger for p + p collisions
Richter, M; Alt, T; Appelshauser, H; Arend, A; Becker, B; Bottger, S; Breitner, T; Busching, H; Cicalo, C; Chattopadhyay, S; Cleymans, J; Das, I; Djuvsland, O; Erdal, H; Fearick, R; Gorbunov, S; Haaland, O S; Hille, P T; Kalcher, S; Kanaki, K; Kebschull, U; Kisel, I; Kretz, M; Lara, C; Lindal, S; Lindenstruth, V; Masoodi, A A; Ovrebekk, G; Panse, R; Peschek, J; Ploskon, M; Pocheptsov, T; Rascanu, T; Ronchetti, F; Rohr, D; Rohrich, D; Skaali, B; Steinbeck, T; Szostak, A; Thader, J; Tveter, T S; Ullaland, K; Vilakazi, Z; Weis, R; Zelnicek, P
2011-01-01
The ALICE High Level Trigger comprises a large computing cluster, dedicated interfaces and software applications. It allows on-line event reconstruction of the full data stream of the ALICE experiment at up to 25 GByte/s. The commissioning campaign has passed an important phase since the startup of the Large Hadron Collider in November 2009. The system has been transferred into continuous operation with focus on event reconstruction and first simple trigger applications. The paper reports for the first time on the achieved event reconstruction performance in the ALICE central barrel region.
Hudebine D.
2011-06-01
Full Text Available In the petroleum industry, oil fractions are usually complex mixtures containing several hundred up to several million different chemical species. For this reason, even the most powerful analytical tools cannot separate and identify all the species present. Hence, petroleum fractions are currently characterized either by average macroscopic descriptors (density, elemental analyses, Nuclear Magnetic Resonance, etc.) or by separative techniques (distillation, gas or liquid chromatography, mass spectrometry, etc.), which however quantify only a limited number of families of molecules. Reconstruction methods for petroleum cuts are numerical tools that make it possible to move toward molecular-level detail; they are all based on the following principle: defining simplified but consistent mixtures of chemical compounds from partial analytical data and from expert knowledge of the process under study. The reconstruction method by entropy maximization proposed in this article is a recent and powerful technique that determines the molar fractions of a predefined set of chemical compounds by maximizing an entropic criterion while satisfying the analytical constraints given by the modeler. This approach reduces the number of degrees of freedom from several thousand (corresponding to the molar fractions of the compounds) to several tens (corresponding to the Lagrange parameters associated with the analytical constraints) and greatly decreases the CPU time required to perform the calculations. It has been successfully applied to reconstruct FCC gasolines, precisely predicting the molecular composition of this type of feedstock from a distillation curve and an overall PIONA analysis (Paraffins, Isoparaffins, Olefins, Naphthenes and Aromatics). The extension to other naphthas (Straight Run naphthas, Coker naphthas, hydrotreated naphthas, etc.) is straightforward.
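The dimensionality reduction described above can be sketched numerically. The following is a minimal illustration, not the authors' implementation: a hypothetical library of five compounds with two bulk properties, where the maximizer of entropy subject to the analytical constraints has the form x_j ∝ exp(Σ_i c_i H_ij) and the Lagrange parameters c are found by gradient descent on the convex dual. All compound properties and target values are invented for illustration.

```python
import numpy as np

def maxent_composition(H, t, lr=0.2, n_iter=50000):
    """Entropy-maximization reconstruction (illustrative sketch).

    H[i, j] : property i of candidate compound j (the compound library)
    t[i]    : measured bulk average of property i
    The entropy maximizer subject to H @ x = t and sum(x) = 1 has the
    form x_j proportional to exp(sum_i c_i * H[i, j]); the Lagrange
    parameters c are found by gradient descent on the dual problem.
    """
    c = np.zeros(H.shape[0])
    for _ in range(n_iter):
        logits = c @ H
        w = np.exp(logits - logits.max())   # stabilized softmax
        x = w / w.sum()
        c -= lr * (H @ x - t)               # dual gradient: model minus target averages
    # distribution at the final Lagrange parameters
    logits = c @ H
    w = np.exp(logits - logits.max())
    return w / w.sum(), c

# Invented library: 5 compounds described by carbon number and an
# aromatic indicator; bulk targets from hypothetical analyses.
H = np.array([[5.0, 6.0, 7.0, 8.0, 9.0],    # carbon number
              [0.0, 0.0, 1.0, 1.0, 0.0]])   # aromatic (0/1)
t = np.array([6.5, 0.3])                    # mean carbon 6.5, 30% aromatics
x, c = maxent_composition(H, t)
```

Note that only two Lagrange parameters are optimized instead of the five (in practice, thousands of) molar fractions, which is exactly the reduction in degrees of freedom the abstract describes.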
Webster, Kate E; Feller, Julian A; Wittwer, Joanne E
2012-06-01
Following anterior cruciate ligament (ACL) reconstruction, patients have altered movement patterns in the reconstructed knee during walking. There is limited information about these alterations over an extended period of time. This study was designed to present a longitudinal analysis of gait patterns following ACL reconstruction surgery. Assessments of level walking were undertaken in 16 participants at a mean of 10 months (initial assessment) and again at 3 years (follow-up assessment) after ACL reconstruction surgery. Kinematic and kinetic variables were analysed using a two-factor (time, limb) repeated-measures ANOVA. Kinematic data showed that patients were able to achieve greater extension about the reconstructed knee at follow-up than at initial assessment. The reconstructed knee was significantly less internally rotated than the contralateral knee at the initial assessment but not at follow-up. Kinetic data showed a significant increase in the external knee extension moment for the reconstructed limb over time. There were also significant increases in the external knee adduction moment for both limbs at the follow-up assessment. The external knee adduction moment was, however, smaller in the reconstructed knee than in the contralateral knee at both assessments. The results indicate that gait variables do change over time and that measurement at a single time point may not reflect the long-term outcome of ACL reconstruction surgery. The changes were small, however, and may not be clinically relevant. The consistently reduced external knee adduction moment seen about the reconstructed knee in this study may suggest that factors other than joint moments influence degenerative change over time. Copyright © 2012 Elsevier B.V. All rights reserved.
Statistical selection of tide gauges for Arctic sea-level reconstruction
Svendsen, Peter Limkilde; Andersen, Ole Baltazar; Nielsen, Allan Aasbjerg
2015-05-01
In this paper, we seek an appropriate selection of tide gauges for Arctic Ocean sea-level reconstruction based on a combination of empirical criteria and statistical properties (leverages). Tide gauges provide the only in situ observations of sea level prior to the altimetry era. However, tide gauges are sparse, of questionable quality, and occasionally contradictory in their sea-level estimates. Therefore, it is essential to select the gauges very carefully. In this study, we have established a reconstruction based on empirical orthogonal functions (EOFs) of sea-level variations for the period 1950-2010 for the Arctic Ocean, constrained by tide gauge records, using the basic approach of Church et al. (2004). A major challenge is the sparsity of both satellite and tide gauge data beyond what can be covered with interpolation, necessitating a time-variable selection of tide gauges and the use of an ocean circulation model to provide gridded time series of sea level. As a surrogate for satellite altimetry, we have used the Drakkar ocean model to yield the EOFs. We initially evaluate the tide gauges through empirical criteria to reject obvious outlier gauges. Subsequently, we evaluate the "influence" of each Arctic tide gauge on the EOF-based reconstruction through the use of statistical leverage and use this as an indication in selecting appropriate tide gauges, in order to procedurally identify poor-quality data while still including as much data as possible. To accommodate sparse or contradictory tide gauge data, careful preprocessing and regularization of the reconstruction model are found to make a substantial difference to the quality of the reconstruction and the ability to select appropriate tide gauges for a reliable reconstruction. This is an especially important consideration for the Arctic, given the limited amount of data available. Thus, such a tide gauge selection study can be considered a precondition for further studies of Arctic sea-level
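As a rough illustration of the leverage diagnostic mentioned above (a generic sketch, not the authors' code): in a least-squares fit of EOF patterns to gauge observations, the influence of each gauge can be read off the diagonal of the hat matrix. The gauge/pattern values below are invented.

```python
import numpy as np

def leverages(X):
    """Diagonal of the hat matrix H = X (X^T X)^{-1} X^T.

    Rows of X hold the EOF patterns sampled at each tide gauge; a large
    leverage means that gauge, on its own, strongly determines the
    fitted reconstruction coefficients.
    """
    H = X @ np.linalg.pinv(X.T @ X) @ X.T   # pinv for numerical stability
    return np.diag(H).copy()

# Invented toy example: 5 gauges, 2 EOF patterns. Gauge 4 (index 3) is
# the only one with a large value in the second pattern, so it alone
# pins down that pattern's coefficient and should get high leverage.
X = np.array([[1.0, 0.2],
              [0.9, 0.1],
              [1.1, 0.3],
              [0.2, 3.0],   # isolated, high-influence gauge
              [1.0, 0.2]])
lev = leverages(X)
```

The leverages sum to the number of fitted patterns and lie in [0, 1]; a gauge with leverage near 1 dominates the fit and is a candidate for the kind of quality screening the abstract describes.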
Coastal barrier stratigraphy for Holocene high-resolution sea-level reconstruction
Costas, Susana; Ferreira, Óscar; Plomaritis, Theocharis A.; Leorri, Eduardo
2016-12-01
The uncertainties surrounding present and future sea-level rise have revived the debate around sea-level changes through the deglaciation and mid- to late Holocene, from which arises a need for high-quality reconstructions of regional sea level. Here, we explore the stratigraphy of a sandy barrier to identify the best sea-level indicators and provide a new sea-level reconstruction for the central Portuguese coast over the past 6.5 ka. The selected indicators represent morphological features extracted from coastal barrier stratigraphy, beach berm and dune-beach contact. These features were mapped from high-resolution ground penetrating radar images of the subsurface and transformed into sea-level indicators through comparison with modern analogs and a chronology based on optically stimulated luminescence ages. Our reconstructions document a continuous but slow sea-level rise after 6.5 ka with an accumulated change in elevation of about 2 m. In the context of SW Europe, our results show good agreement with previous studies, including the Tagus isostatic model, with minor discrepancies that demand further improvement of regional models. This work reinforces the potential of barrier indicators to accurately reconstruct high-resolution mid- to late Holocene sea-level changes through simple approaches.
van der Meer, D.G.; van den Berg van Saparoea, A.P.H.; van Hinsbergen, D.J.J.; van de Weg, R.M.B.; Godderis, Y.; Le Hir, G.; Donnadieu, Y.
2017-01-01
The eustatic sea-level curves published in the seventies and eighties have supported scientific advances in the Earth Sciences and the emergence of sequence-stratigraphy as an important hydrocarbon exploration tool. However, validity of reconstructions of eustatic sea level based on sequence stratig
Stable reconstruction of Arctic sea level for the 1950-2010 period
Svendsen, Peter Limkilde; Andersen, Ole Baltazar; Nielsen, Allan Aasbjerg
2016-01-01
on the combination of tide gauge records and a new 20-year reprocessed satellite altimetry derived sea level pattern. Hence the study is limited to the area covered by satellite altimetry (68ºN and 82ºN). It is found that timestep cumulative reconstruction as suggested by Church and White (2000) may yield widely......, a datum-fit of each tide gauges is used and the method takes into account the entirety of each tide gauge record. This makes the Arctic sea level reconstruction much less prone to drifting.From our reconstruction, we found that the Arctic mean sea level trend is around 1.5 mm +/- 0.3 mm/y for the period...
M.G. Bara Filho
2008-01-01
Full Text Available Strength and flexibility are common components of a training program, and their maximal values are obtained through specific tests. However, little is known about the skeletal muscle damage caused by these procedures. Objective: To verify changes in serum CK 24 h after a submaximal stretching routine, a static flexibility test, and a maximal strength test. Methods: The sample was composed of 14 subjects (men and women, 28 ± 6 yr), all physical education students. The volunteers were divided into a control group (CG) and an experimental group (EG) that was submitted to a stretching routine (EG-ST), a maximal static flexibility test (EG-FLEX) and a 1-RM test (EG-1RM), with a one-week interval between tests. The anthropometric characteristics were obtained by digital scale with stadiometer (Filizola, São Paulo, Brazil, 2002). Serum CK was measured using the IFCC method, with reference values of 26-155 U/L. The De Lorme and Watkins technique was used to assess maximal strength through the bench press and leg press. The maximal flexibility test consisted of three 20-second sets to the point of maximal discomfort. The stretching was done within the normal range of motion for 6 seconds. Results: The basal and post-24 h CK values in the CG and EG (ST, FLEX and 1RM) were, respectively, 195.0 ± 129.5 vs. 202.1 ± 124.2; 213.3 ± 133.2 vs. 174.7 ± 115.8; 213.3 ± 133.2 vs. 226.6 ± 126.7; and 213.3 ± 133.2 vs. 275.9 ± 157.2. A significant difference (p = 0.02) between pre and post values was only observed in EG-1RM. Conclusion: Only the maximal strength dynamic exercise was capable of causing skeletal muscle damage.
Stable reconstruction of Arctic sea level for the 1950-2010 period
Limkilde Svendsen, Peter; Andersen, Ole B.; Aasbjerg Nielsen, Allan
2016-08-01
Reconstruction of historical Arctic sea level is generally difficult due to the limited coverage and quality of both tide gauge and altimetry data in the area. Here a strategy to achieve a stable and plausible reconstruction of Arctic sea level from 1950 to today is presented. This work is based on the combination of tide gauge records and a new 20-year reprocessed satellite altimetry-derived sea level pattern. Hence, the study is limited to the area covered by satellite altimetry (68°N to 82°N). It is found that time step cumulative reconstruction as suggested by Church and White (2011) may yield widely variable results and is difficult to stabilize due to the many gaps in both tide gauge and satellite data. A more robust sea level reconstruction approach is to use datum adjustment of the tide gauges in combination with satellite altimetry, as described by Ray and Douglas (2011). In this approach, a datum fit of each tide gauge is used, and the method takes into account the entirety of each tide gauge record. This makes the Arctic sea level reconstruction much less prone to drifting. From our reconstruction, we found that the Arctic mean sea level trend is around 1.5 ± 0.3 mm/yr for the period 1950-2010, between 68°N and 82°N. This value is in good agreement with the global mean trend of 1.8 ± 0.3 mm/yr over the same period as found by Church and White (2004).
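The datum-adjustment idea can be sketched with synthetic data (a toy stand-in, not the actual reconstruction code or data): EOF amplitudes and one constant offset per gauge are estimated jointly from the entire record in a single least-squares system, in the spirit of the Ray and Douglas approach cited above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in: 2 spatial patterns on a 40-point "grid" with
# time-varying amplitudes over 30 epochs (playing the role of
# altimetry-derived EOFs and the sea-level field to reconstruct).
grid = np.linspace(0.0, 2.0 * np.pi, 40)
eofs = np.stack([np.sin(grid), np.cos(grid)])            # (2, 40)
amps = np.stack([np.linspace(0.0, 5.0, 30),
                 np.sin(np.linspace(0.0, 3.0, 30))])     # (2, 30)
truth = eofs.T @ amps                                    # (40, 30)

# "Tide gauges": 6 grid points, each with an unknown datum offset.
gauges = np.array([2, 9, 15, 22, 30, 37])
datum = rng.normal(0.0, 10.0, size=gauges.size)
obs = truth[gauges] + datum[:, None]                     # (6, 30)

# Datum-adjusted fit: estimate all EOF amplitudes and the per-gauge
# offsets jointly from the entire record, rather than epoch by epoch.
n_g, n_t, n_k = gauges.size, obs.shape[1], eofs.shape[0]
A = np.zeros((n_g * n_t, n_k * n_t + n_g))
y = np.zeros(n_g * n_t)
for t in range(n_t):
    for g in range(n_g):
        r = t * n_g + g
        A[r, n_k * t:n_k * (t + 1)] = eofs[:, gauges[g]]
        A[r, n_k * n_t + g] = 1.0                        # datum column
        y[r] = obs[g, t]
x, *_ = np.linalg.lstsq(A, y, rcond=None)
amps_hat = x[:n_k * n_t].reshape(n_t, n_k).T
recon = eofs.T @ amps_hat                                # reconstructed field
```

Because a constant-in-time shift of the amplitudes can be traded against the datum offsets, the reconstruction is only determined up to a time-constant field; its anomalies (the time variability at every grid point) match the synthetic truth.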
Deshavath, Narendra Naik; Mohan, Mood; Veeranki, Venkata Dasu; Goud, Vaibhav V; Pinnamaneni, Srinivasa Rao; Benarjee, Tamal
2017-06-01
Conversion of lignocellulosic biomass into monomeric carbohydrates is economically beneficial and suitable for sustainable production of biofuels. Hydrolysis of lignocellulosic biomass at high acid concentrations results in decomposition of sugars into fermentative inhibitors. Thus, the main aim of this work was to investigate the optimum hydrolysis conditions for sorghum brown midrib IS11861 biomass to maximize the pentose sugar yield with minimal levels of fermentative inhibitors at low acid concentrations. Process parameters investigated include sulfuric acid concentration (0.2-1 M), reaction time (30-120 min) and temperature (80-121 °C). At the optimum condition (0.2 M sulfuric acid, 121 °C and 120 min), 97.6% of the hemicellulose was converted into xylobiose (18.02 mg/g), xylose (225.2 mg/g) and arabinose (20.2 mg/g), with a low concentration of furfural (4.6 mg/g). Furthermore, the process parameters were statistically optimized using response surface methodology based on a central composite design. Owing to the low concentration of fermentative inhibitors, 78.6 and 82.8% of the theoretical ethanol yield were attained during fermentation of the non-detoxified and detoxified hydrolyzates, respectively, using the wild-type strain Pichia stipitis 3498, in a techno-economical manner.
Xiao-Li Song; Lei Wang; Rui-Zhu Lin; Zhi-Hui Kang; Xin Li; Yun Jiang; Jin-Yue Gao
2007-01-01
...) in a L-type configuration, and verify the theoretical predictions. Applying this technique, we are able to prepare the atoms with maximal coherence to enhance coherent anti-Stokes Raman scattering (CARS) signal...
Cahill, Niamh; Kemp, Andrew C.; Horton, Benjamin P.; Parnell, Andrew C.
2016-02-01
We present a Bayesian hierarchical model for reconstructing the continuous and dynamic evolution of relative sea-level (RSL) change with quantified uncertainty. The reconstruction is produced from biological (foraminifera) and geochemical (δ13C) sea-level indicators preserved in dated cores of salt-marsh sediment. Our model comprises three modules: (1) a new Bayesian transfer function (B-TF) for the calibration of biological indicators into tidal elevation, which is flexible enough to formally accommodate additional proxies; (2) an existing chronology developed using the Bchron age-depth model; and (3) an existing errors-in-variables integrated Gaussian process (EIV-IGP) model for estimating rates of sea-level change. Our approach is illustrated using a case study of Common Era sea-level variability from New Jersey, USA. We develop a new B-TF using foraminifera, with and without the additional δ13C proxy, and compare our results to those from a widely used weighted-averaging transfer function (WA-TF). The formal incorporation of a second proxy into the B-TF model results in smaller vertical uncertainties and improved accuracy for reconstructed RSL. The vertical uncertainty from the multi-proxy B-TF is ˜28% smaller on average compared to the WA-TF. When evaluated against historic tide-gauge measurements, the multi-proxy B-TF most accurately reconstructs the RSL changes observed in the instrumental record (mean square error = 0.003 m²). The Bayesian hierarchical model provides a single, unifying framework for reconstructing and analyzing sea-level change through time. This approach is suitable for reconstructing other paleoenvironmental variables (e.g., temperature) using biological proxies.
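For orientation, the weighted-averaging transfer function (WA-TF) that the Bayesian model is benchmarked against can be sketched on a toy training set. Everything below (taxa, response curves, elevations) is invented; the real method is applied to counted foraminiferal assemblages.

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented modern training set: relative abundances of 3 foraminiferal
# taxa in 50 samples of known tidal elevation (metres above a datum).
n = 50
elev = rng.uniform(0.0, 2.0, n)
optima_true = np.array([0.3, 1.0, 1.7])   # each taxon's preferred elevation
abund = np.exp(-((elev[:, None] - optima_true) ** 2) / 0.18)
abund /= abund.sum(axis=1, keepdims=True)

# Weighted-averaging transfer function, two averaging steps:
# 1) taxon optimum = abundance-weighted mean of observed elevations;
# 2) predicted elevation = abundance-weighted mean of taxon optima,
#    followed by the usual linear deshrinking regression.
optima = (abund * elev[:, None]).sum(axis=0) / abund.sum(axis=0)
raw = abund @ optima                        # rows of `abund` sum to 1
slope, intercept = np.polyfit(raw, elev, 1)
pred = slope * raw + intercept
```

Repeated averaging shrinks the raw predictions toward the training mean, hence the deshrinking step; the Bayesian B-TF of the abstract replaces this plug-in calibration with a full probability model that also propagates uncertainty.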
A level set based algorithm to reconstruct the urinary bladder from multiple views.
Ma, Zhen; Jorge, Renato Natal; Mascarenhas, T; Tavares, João Manuel R S
2013-12-01
The urinary bladder can be visualized from different views by imaging facilities such as computerized tomography and magnetic resonance imaging. Multi-view imaging can present more details of this pelvic organ and contribute to a more reliable reconstruction. Based on the information from multi-view planes, a level set based algorithm is proposed to reconstruct the 3D shape of the bladder using the cross-sectional boundaries. The algorithm provides a flexible solution to handle the discrepancies from different view planes and can obtain an accurate bladder surface with more geometric details. Copyright © 2013 IPEM. Published by Elsevier Ltd. All rights reserved.
Two-Level Bregman Method for MRI Reconstruction with Graph Regularized Sparse Coding
Liu, Qiegen; Lu, Hongyang; Zhang, Minghui
2016-01-01
In this paper, a two-level Bregman method is presented with graph regularized sparse coding for highly undersampled magnetic resonance image reconstruction. The graph regularized sparse coding is incorporated with the two-level Bregman iterative procedure, which enforces the sampled data constraints in the outer level and updates the dictionary and sparse representation in the inner level. Graph regularized sparse coding and simple dictionary updating applied in the inner minimization make the proposed algorithm converge within a relatively small number of iterations. Experimental results demonstrate that the proposed algorithm can consistently reconstruct both simulated MR images and real MR data efficiently, and outperforms the current state-of-the-art approaches in terms of visual comparisons and quantitative measures.
Online Reconstruction and Calibration with Feedback Loop in the ALICE High Level Trigger
Rohr David
2016-01-01
at the Large Hadron Collider (LHC) at CERN. The High Level Trigger (HLT) is an online computing farm, which reconstructs events recorded by the ALICE detector in real-time. The most computing-intensive task is the reconstruction of the particle trajectories. The main tracking devices in ALICE are the Time Projection Chamber (TPC) and the Inner Tracking System (ITS). The HLT uses a fast GPU-accelerated algorithm for the TPC tracking based on the Cellular Automaton principle and the Kalman filter. ALICE employs gaseous subdetectors which are sensitive to environmental conditions such as ambient pressure and temperature and the TPC is one of these. A precise reconstruction of particle trajectories requires the calibration of these detectors. As our first topic, we present some recent optimizations to our GPU-based TPC tracking using the new GPU models we employ for the ongoing and upcoming data taking period at LHC. We also show our new approach to fast ITS standalone tracking. As our second topic, we present improvements to the HLT for facilitating online reconstruction including a new flat data model and a new data flow chain. The calibration output is fed back to the reconstruction components of the HLT via a feedback loop. We conclude with an analysis of a first online calibration test under real conditions during the Pb-Pb run in November 2015, which was based on these new features.
Online Reconstruction and Calibration with Feedback Loop in the ALICE High Level Trigger
Rohr, David; Shahoyan, Ruben; Zampolli, Chiara; Krzewicki, Mikolaj; Wiechula, Jens; Gorbunov, Sergey; Chauvin, Alex; Schweda, Kai; Lindenstruth, Volker
2016-11-01
ALICE (A Large Ion Collider Experiment) is one of the four large scale experiments at the Large Hadron Collider (LHC) at CERN. The High Level Trigger (HLT) is an online computing farm, which reconstructs events recorded by the ALICE detector in real-time. The most computing-intensive task is the reconstruction of the particle trajectories. The main tracking devices in ALICE are the Time Projection Chamber (TPC) and the Inner Tracking System (ITS). The HLT uses a fast GPU-accelerated algorithm for the TPC tracking based on the Cellular Automaton principle and the Kalman filter. ALICE employs gaseous subdetectors which are sensitive to environmental conditions such as ambient pressure and temperature and the TPC is one of these. A precise reconstruction of particle trajectories requires the calibration of these detectors. As our first topic, we present some recent optimizations to our GPU-based TPC tracking using the new GPU models we employ for the ongoing and upcoming data taking period at LHC. We also show our new approach to fast ITS standalone tracking. As our second topic, we present improvements to the HLT for facilitating online reconstruction including a new flat data model and a new data flow chain. The calibration output is fed back to the reconstruction components of the HLT via a feedback loop. We conclude with an analysis of a first online calibration test under real conditions during the Pb-Pb run in November 2015, which was based on these new features.
Wang Quan
2012-06-01
Full Text Available Leiomyosarcoma of the inferior vena cava (IVCL) is a rare retroperitoneal tumor. We report two cases of level II (middle level, renal veins to hepatic veins) IVCL in patients who underwent en bloc resection with reconstruction of bilateral or left renal venous return using prosthetic grafts. In both cases the inferior vena cava was documented to be occluded preoperatively; therefore, radical resection of the tumor and/or right kidney was performed, and the distal end of the inferior vena cava was resected without caval reconstruction. Neither patient developed edema or acute renal failure postoperatively. After surgical resection, adjuvant radiation therapy was administered. The patients have been free of recurrence for 2 years and 3 months and for 9 months after surgery, respectively, indicating that complete surgical resection and radiotherapy contribute to better survival. Reconstruction of the inferior vena cava was not considered mandatory in level II IVCL, as retroperitoneal venous collateral pathways had been established. In addition to the curative resection of the IVCL, the renal vascular reconstruction minimized the risk of procedure-related acute renal failure and was more physiologically preferable. This concept is reflected in the treatment of the two patients reported here.
Global reconstructed daily surge levels from the 20th Century Reanalysis (1871-2010)
Cid, Alba; Camus, Paula; Castanedo, Sonia; Méndez, Fernando J.; Medina, Raúl
2017-01-01
Studying the effect of global patterns of wind and pressure gradients on sea level variation (storm surge) is a key issue in understanding the effect of recent climate change on the dynamical state of the ocean. The analysis of the spatial and temporal variability of storm surges from observations is a difficult task, since observations are not homogeneous in time, are scarce in space, and moreover have limited temporal coverage. A recent global surge database developed by AVISO (DAC, Dynamic Atmospheric Correction) addressed the lack of data in terms of spatial coverage, but not time extent, since it only includes the last two decades (1992-2014). In this work, we use the 20th Century Reanalysis V2 (20CR), which spans the years 1871 to 2010, to statistically reconstruct daily maximum surge levels at a global scale. A multivariate linear regression model is fitted between daily mean ERA-Interim sea level pressure fields and daily maximum surge levels from DAC. Subsequently, the statistical model is used to reconstruct daily surges using mean sea level pressure fields from 20CR. The verification of the statistical model shows good agreement between DAC levels and the reconstructed surge levels from the 20CR. Validation of the reconstructed surge against tide gauges distributed throughout the domain shows good accuracy, both in terms of high correlations and small errors. A time series comparison at specific tide gauges for the beginning of the 20th century also shows high concordance. This work therefore provides the scientific community with a daily database of maximum surge levels, corresponding to an extension of the DAC database from 1871 to 2010. This database can be used to improve knowledge of historical storm surge conditions, allowing the study of their temporal and spatial variability.
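The statistical core of this approach can be sketched with synthetic data (not the DAC or 20CR products): a multivariate linear regression is fitted between gridded pressure predictors and a station's daily maximum surge over an overlap period, then applied to another period.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for the real inputs: daily sea-level-pressure
# anomalies on a 6x8 grid (predictor, like the ERA-Interim/20CR SLP
# fields) and daily maximum surge at one station (predictand, like a
# DAC series), generated here as a linear response plus noise.
n_days, ny, nx = 500, 6, 8
slp = rng.normal(0.0, 5.0, size=(n_days, ny * nx))
true_w = rng.normal(0.0, 1.0, size=ny * nx)
surge = slp @ true_w + rng.normal(0.0, 0.1, size=n_days)

# Fit the multivariate linear regression on the "overlap" period...
train = slice(0, 300)
w, *_ = np.linalg.lstsq(slp[train], surge[train], rcond=None)

# ...and reconstruct the surge for the remaining ("historical") days.
pred = slp[300:] @ w
corr = np.corrcoef(pred, surge[300:])[0, 1]
rmse = float(np.sqrt(np.mean((pred - surge[300:]) ** 2)))
```

In practice the pressure fields would typically be reduced (e.g., by principal components) before regression, and one model would be fitted per output grid point; the toy above only shows the fit-then-reconstruct logic.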
K B Athreya
2009-09-01
It is shown that (i) every probability density is the unique maximizer of relative entropy in an appropriate class and (ii) in the class of all pdf $f$ that satisfy $\int f h_i \, d\mu = \lambda_i$ for $i=1,2,\ldots,k$ the maximizer of entropy is an $f_0$ that is proportional to $\exp(\sum c_i h_i)$ for some choice of $c_i$. An extension of this to a continuum of constraints and many examples are presented.
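Result (ii) can be demonstrated numerically in the discrete case with a single moment constraint ($h_1(x) = x$): the maximizer is proportional to $\exp(c\,x)$, and the single parameter $c$ can be found by bisection, since the induced mean is monotonically increasing in $c$. A small sketch (the support and target mean are arbitrary choices for illustration):

```python
import numpy as np

def maxent_given_mean(values, target_mean, iters=200):
    """Max-entropy pmf on `values` subject to a prescribed mean.

    By result (ii), the maximizer has the form
    p_j proportional to exp(c * values[j]); we bisect on c because
    the mean under p(c) increases monotonically with c.
    """
    values = np.asarray(values, dtype=float)
    lo, hi = -50.0, 50.0
    for _ in range(iters):
        c = 0.5 * (lo + hi)
        logits = c * values
        p = np.exp(logits - logits.max())   # stabilized exponentials
        p /= p.sum()
        if p @ values < target_mean:
            lo = c
        else:
            hi = c
    return p

vals = np.arange(6)                  # support {0, ..., 5}
p = maxent_given_mean(vals, 1.5)     # constrain E[X] = 1.5
```

The returned pmf has the predicted geometric-like form (constant ratio between successive probabilities), and any other pmf with the same mean has strictly lower entropy.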
W. C. Liu
2017-07-01
Full Text Available Shape and Albedo from Shading (SAfS) techniques recover pixel-wise surface details based on the relationship between terrain slopes, illumination and imaging geometry, and the energy response (i.e., image intensity) captured by the sensing system. Multiple images with different illumination geometries (i.e., photometric stereo) can provide better SAfS surface reconstruction due to the increase in observations. Photometric stereo SAfS is suitable for detailed surface reconstruction of the Moon and other extra-terrestrial bodies due to the availability of photometric stereo and the less complex surface reflecting properties (i.e., albedo) of the target bodies as compared to the Earth. Considering only one photometric stereo pair (i.e., two images), pixel-variant albedo is still a major obstacle to satisfactory reconstruction and it needs to be regulated by the SAfS algorithm. The illumination directional difference between the two images also becomes an important factor affecting the reconstruction quality. This paper presents a photometric stereo SAfS algorithm for pixel-level resolution lunar surface reconstruction. The algorithm includes a hierarchical optimization architecture for handling pixel-variant albedo and improving performance. With the use of Lunar Reconnaissance Orbiter Camera - Narrow Angle Camera (LROC NAC) photometric stereo images, the reconstructed topography (i.e., the DEM) is compared with the DEM produced independently by photogrammetric methods. This paper also addresses the effect of illumination directional difference between one photometric stereo pair on the reconstruction quality of the proposed algorithm by both mathematical and experimental analysis. In this case, LROC NAC images under multiple illumination directions are utilized by the proposed algorithm for experimental comparison. The mathematical derivation suggests an illumination azimuthal difference of 90 degrees between two images is recommended to achieve
Hove, Jens Dahlgaard; Rasmussen, R.; Freiberg, J.
2008-01-01
BACKGROUND: The purpose of this study was to investigate the quantitative properties of ordered-subset expectation maximization (OSEM) on kinetic modeling with nitrogen 13 ammonia compared with filtered backprojection (FBP) in healthy subjects. METHODS AND RESULTS: Cardiac N-13 ammonia positron...
Hove, Jens D; Rasmussen, Rune; Freiberg, Jacob
2008-01-01
emission tomography (PET) studies from 20 normal volunteers at rest and during dipyridamole stimulation were analyzed. Image data were reconstructed with either FBP or OSEM. FBP- and OSEM-derived input functions and tissue curves were compared together with the myocardial blood flow and spillover values...... and OSEM flow values were observed with a flow underestimation of 45% (rest/dipyridamole) in the septum and of 5% (rest) and 15% (dipyridamole) in the lateral myocardial wall. CONCLUSIONS: OSEM reconstruction of myocardial perfusion images with N-13 ammonia and PET produces high-quality images for visual...
Lemmink, K.A.P.M.; Verheijen, R.; Visscher, C.
2004-01-01
AIM: The purpose of this study was to examine the discriminative power of the recently developed Interval Shuttle Run Test (ISRT) and the widely used Maximal Multistage 20 m Shuttle Run Test (MMSRT) for soccer players at different levels of competition. The main difference between the tests is that
Analysis of sea-level reconstruction techniques for the Arctic Ocean
Svendsen, Peter Limkilde; Andersen, Ole Baltazar; Nielsen, Allan Aasbjerg
Sea-level reconstructions spanning several decades have been examined in numerous studies for most of the world's ocean areas, where satellite missions such as TOPEX/Poseidon and Jason-1 and -2 have provided much-improved knowledge of variability and long-term changes in sea level. However, these...... and -2 and Envisat missions). In addition to EOFs, we also implement an alternative decomposition technique known as minimum/maximum autocorrelation factors (MAF), based on the spatial or temporal autocorrelation within the calibration period, rather than explained variance....... a reasonable amount of tide gauge data available, we focus on a reconstruction timespan of the last five decades, and the implementation of the model is validated by applying it to global sea-level data. We examine the influence of the individual tide gauges on the resulting solution and the ability...
Svendsen, Peter Limkilde; Andersen, Ole Baltazar; Nielsen, Allan Aasbjerg
Due to the sparsity and often poor quality of data, reconstructing Arctic sea level is highly challenging. We present a reconstruction of Arctic sea level covering 1950 to 2010, using the approaches from Church et al. (2004) and Ray and Douglas (2011). This involves decomposition of an altimetry ...... calibration record into EOFs, and fitting these patterns to a historical tide gauge record....
Razali, Azhani Mohd, E-mail: azhani@nuclearmalaysia.gov.my; Abdullah, Jaafar, E-mail: jaafar@nuclearmalaysia.gov.my [Plant Assessment Technology (PAT) Group, Industrial Technology Division, Malaysian Nuclear Agency, Bangi, 43000 Kajang (Malaysia)
2015-04-29
Single Photon Emission Computed Tomography (SPECT) is a well-known imaging technique used in medical applications, and it is one of the medical imaging modalities that make the diagnosis and treatment of disease possible. However, the SPECT technique is not limited to the medical sector. Much work has been carried out to adapt the same concept, using high-energy photon emission to diagnose process malfunctions in critical industrial systems such as chemical reaction engineering research laboratories, as well as the oil and gas, petrochemical and petroleum refining industries. Motivated by the vast applications of the SPECT technique, this work attempts to study the application of SPECT to a Pebble Bed Reactor (PBR) using a numerical phantom of pebbles inside the PBR core. From the cross-sectional images obtained from SPECT, the behavior of pebbles inside the core can be analyzed for further improvement of the PBR design. As the quality of the reconstructed image is largely dependent on the algorithm used, this work aims to compare two image reconstruction algorithms for SPECT, namely the Expectation Maximization Algorithm and the Exact Inversion Formula. The results obtained from the Exact Inversion Formula showed better image contrast and sharpness, and shorter computational time, compared to the Expectation Maximization Algorithm.
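The expectation-maximization reconstruction compared in this work can be illustrated with the standard MLEM multiplicative update on a toy system (a synthetic system matrix, not the authors' PBR phantom):

```python
import numpy as np

def mlem(A, y, n_iter=20000):
    """Maximum-likelihood expectation maximization for emission tomography.

    Multiplicative update: x <- x * A^T(y / Ax) / A^T 1.
    A : nonnegative system matrix (detector bins x image pixels)
    y : measured counts per detector bin
    Iterates stay nonnegative automatically.
    """
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])        # sensitivity image
    for _ in range(n_iter):
        proj = np.maximum(A @ x, 1e-12)     # guard against divide-by-zero
        x *= (A.T @ (y / proj)) / sens
    return x

# Tiny noise-free demo: a 4-pixel "image" observed through 6 random rays.
rng = np.random.default_rng(2)
A = rng.uniform(0.1, 1.0, size=(6, 4))
x_true = np.array([4.0, 0.5, 2.0, 1.0])
y = A @ x_true
x_hat = mlem(A, y)
```

With consistent, noise-free data the iterates converge toward the true activity; with real Poisson counts the likelihood still increases monotonically, but iterations are usually stopped early or regularized to limit noise amplification.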
A Demonstration of Spectral Level Reconstruction of Intrinsic $B$-mode Power
Pal, Barun
2016-01-01
We investigate the prospects and consequences of the spectral level reconstruction of primordial $B$-mode power by solving systems of linear equations, assuming that the lensing potential together with the lensed polarization spectra are already in hand. We find that this reconstruction technique may be very useful for estimating the amplitude of primordial gravity waves, or more specifically the value of the tensor-to-scalar ratio. We also see that one can have a cosmic variance limited reconstruction of the intrinsic $B$-mode power up to a few hundred multipoles ($\\ell\\sim500$), which is more than sufficient to estimate the tensor-to-scalar ratio, since the small-scale cosmic microwave background (CMB henceforth) anisotropies are not sourced by the primordial gravity waves generated during inflation. We also find that the impact of instrumental noise may be bypassed within this reconstruction algorithm. A simple demonstration for the nullification of the instrumental noise anticipating COrE like...
Using FTK tracks for particle flow reconstruction at the high-level trigger of ATLAS
Jaeger, Benjamin Paul
2016-01-01
The Fast Tracker (FTK) enables the ATLAS high-level trigger (HLT) to have early access to global tracking information. The project of my Summer Student internship at CERN was to investigate the potential of using particle flow reconstruction with FTK tracks at the ATLAS HLT. This report briefly summarizes my studies, ranging from a comparison of FTK tracks with offline tracks to more sophisticated analyses, such as assessing the jet resolution and trigger-related properties.
Ardern, Clare L; Taylor, Nicholas F; Feller, Julian A; Whitehead, Timothy S; Webster, Kate E
2013-07-01
Up to two-thirds of athletes may not return to their preinjury level of sport by 12 months after anterior cruciate ligament (ACL) reconstruction surgery, despite being physically recovered. This has led to questions about what other factors may influence return to sport. To determine whether psychological factors predicted return to preinjury level of sport by 12 months after ACL reconstruction surgery. Case-control study; Level of evidence, 3. Recreational and competitive-level athletes seen at a private orthopaedic clinic with an ACL injury were consecutively recruited. The primary outcome was return to the preinjury level of sports participation. The psychological factors evaluated were psychological readiness to return to sport, fear of reinjury, mood, emotions, sport locus of control, and recovery expectations. Participants were followed up preoperatively and at 4 and 12 months postoperatively. In total, 187 athletes participated. At 12 months, 56 athletes (31%) had returned to their preinjury level of sports participation. Significant independent contributions to returning to the preinjury level by 12 months after surgery were made by psychological readiness to return to sport, fear of reinjury, sport locus of control, and the athlete's estimate of the number of months it would take to return to sport, as measured preoperatively (χ2(2) = 18.3, P < ...) ... sport at 12 months, suggesting that attention to psychological recovery in addition to physical recovery after ACL injury and reconstruction surgery may be warranted. Clinical screening for maladaptive psychological responses in athletes before and soon after surgery may help clinicians identify athletes at risk of not returning to their preinjury level of sport by 12 months.
On the ability of global sea level reconstructions to determine trends and variability
Calafat, F. M.; Chambers, D. P.; Tsimplis, M. N.
2014-03-01
We investigate how well methods based on empirical orthogonal functions (EOFs) can reconstruct global mean sea level (GMSL). We first explore the analytical solution of the method and then perform a series of numerical experiments using modeled data. In addition, we present a new GMSL reconstruction for the period 1900-2011 computed both with and without a spatially uniform EOF (EOF0). The method without the EOF0 uses global information, which leads to a better reconstruction of the variability, though with some underestimation. The trend, however, is not captured, which motivates the use of the EOF0. When the EOF0 is used the method reduces to the generalized weighted mean with regularization of altimetry records at tide-gauge locations, and thus it uses no global information. This results in a poor reconstruction of the variability. Although the trend is better captured (biases smaller than ±25%) with the EOF0, using the covariance matrix of deseasonalized monthly time series as the basis for determining the contribution of each tide gauge to the trend is dubious because it assumes that the interannual variability and the trend are driven by the same mechanisms. A significant fraction of the interannual to decadal variability (˜4 mm peak-to-peak and ˜2 mm standard error) in the new GMSL reconstruction without the EOF0 is consistent with land hydrology changes associated with the El Niño-Southern Oscillation (ENSO). When the EOF0 is used, we find no correlation with either the ENSO or land hydrology changes, and decadal fluctuations are ˜5 times greater.
IMPACT OF LEVEL OF DETAILS IN THE 3D RECONSTRUCTION OF TREES FOR MICROCLIMATE MODELING
Bournez, E.; Landes, T.; Saudreau, M.; Kastendeuch, P.; Najjar, G.
2016-06-01
Full Text Available In the 21st century, urban areas undergo specific climatic conditions, like urban heat islands, whose frequency and intensity increase over the years. Towards the understanding and monitoring of these conditions, vegetation effects on urban climate are studied. It appears that a natural phenomenon, the evapotranspiration of trees, generates a cooling effect in the urban environment. In this work, a 3D microclimate model is used to quantify the evapotranspiration of trees in relation to their architecture, their physiology and the climate. These three characteristics are determined with field measurements and data processing. Based on point clouds acquired with a terrestrial laser scanner (TLS), the 3D reconstruction of the tree wood architecture is performed. Then the 3D reconstruction of leaves is carried out from the 3D skeleton of vegetative shoots and allometric statistics. With the aim of extending the simulation to several trees simultaneously, it is necessary to apply the 3D reconstruction process to each tree individually. However, for both the acquisition and the processing, the 3D reconstruction approach is time consuming. Mobile laser scanners could provide point clouds faster than a static TLS, but this implies a lower point density. The processing time could also be shortened, under the assumption that a coarser 3D model is sufficient for the simulation. In this context, the level of detail and accuracy of the reconstructed tree 3D model must be studied. In this paper, first tests to assess their impact on the determination of evapotranspiration are presented.
Quan, Haiyang; Wu, Fan; Hou, Xi
2015-10-01
A new method for reconstructing rotationally asymmetric surface deviation with pixel-level spatial resolution is proposed. It is based on a basic iterative scheme and accelerates the Gauss-Seidel method by introducing an acceleration parameter. This modified successive over-relaxation (SOR) method is effective for solving the rotationally asymmetric components with pixel-level spatial resolution, without the use of a fitting procedure. Compared to the Jacobi and Gauss-Seidel methods, the modified SOR method with an optimal relaxation factor converges much faster and saves computational cost and memory space without reducing accuracy. This has been demonstrated with real experimental results.
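The abstract does not give the iteration itself, but a generic SOR solver for a linear system A x = b shows the role of the relaxation factor: omega = 1 recovers Gauss-Seidel, while a well-chosen omega between 1 and 2 accelerates convergence. The sketch below uses an illustrative small system, not the paper's surface-deviation equations.

```python
import numpy as np

def sor_solve(A, b, omega=1.5, tol=1e-10, max_iter=10_000):
    """Successive over-relaxation for A x = b.

    omega = 1 reduces to Gauss-Seidel; 1 < omega < 2 can accelerate
    convergence for suitable matrices (e.g. SPD systems).
    """
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # Use already-updated entries x[:i] and old entries x_old[i+1:].
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (1 - omega) * x_old[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old, ord=np.inf) < tol:
            break
    return x

# Example: a small symmetric positive-definite (diagonally dominant) system.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
x = sor_solve(A, b, omega=1.2)
```

Because SOR only sweeps over matrix rows and never forms an explicit basis of fitting functions, it scales to pixel-level resolution with modest memory, which matches the memory-saving claim of the abstract.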
Krustrup, Peter; Ortenblad, Niels; Nielsen, Joachim
2011-01-01
The aim of this study was to examine maximal voluntary knee-extensor contraction force (MVC force), sarcoplasmic reticulum (SR) function and muscle glycogen levels in the days after a high-level soccer game when players ingested an optimised diet. Seven high-level male soccer players had a vastus lateralis muscle biopsy and a blood sample collected in a control situation and at 0, 24, 48 and 72 h after a competitive soccer game. MVC force, SR function, muscle glycogen, muscle soreness and plasma myoglobin were measured. MVC force sustained over 1 s was 11 and 10% lower (P ...
Li, Xiucan; Wang, Yiguo; Zhao, Yongfei; Liu, Jianheng; Xiao, Songhua; Mao, Keya
2017-05-11
A unique case report. A three-dimensional (3D) printing technology is proposed for reconstructing the multi-level cervical spine (C2-C4) after resection of metastatic papillary thyroid carcinoma in a middle-aged female patient. Papillary thyroid carcinoma is a malignant neoplasm with a relatively favorable prognosis. A metastatic lesion in the multi-level cervical spine (C2-C4) destroys neurological functions and causes local instability. Radical excision of the metastasis and reconstruction of the cervical vertebrae sequence conforms to therapeutic principles, while the special-shaped multi-level upper cervical spine requires personalized implants. 3D printing is an additive manufacturing technology that produces personalized products by accurately layering material under digital model control via a computer. Reporting of this recent technology for reconstructing the multi-level cervical spine (C2-C4) is rare in the literature. Anterior-posterior surgery was performed in one stage. Radical resection of the metastatic lesion (C2-C4) and thyroid gland, along with insertion of a personalized implant manufactured by 3D printing technology, was performed to rebuild the cervical spine sequence. The porous implant was printed in Ti6Al4V with perfect physicochemical properties and biological performance, such as biocompatibility and osteogenic activity. Finally, lateral mass screw fixation was performed via a posterior approach. The patient's neurological function gradually improved after the surgery. The patient scored 11/17 on the Japanese Orthopedic Association scale and ambulated with a personalized skull-neck-thorax orthosis on postoperative day 11. She received radioiodine therapy. Plain X-rays and computed tomography revealed no implant displacement or subsidence at the 12-month follow-up. The presented case substantiates the use of 3D printing technology, which enables the personalization of products to solve unconventional problems in spinal surgery. Level of evidence: 5.
Metabolism and evolution: A comparative study of reconstructed genome-level metabolic networks
Almaas, Eivind
2008-03-01
The availability of high-quality annotations of sequenced genomes has made it possible to generate organism-specific comprehensive maps of cellular metabolism. Currently, more than twenty such metabolic reconstructions are publicly available, with the majority focused on bacteria. A typical metabolic reconstruction for a bacterium results in a complex network containing hundreds of metabolites (nodes) and reactions (links), while some even contain more than a thousand. The constraint-based optimization approach of flux-balance analysis (FBA) is used to investigate the functional characteristics of such large-scale metabolic networks, making it possible to estimate an organism's growth behavior in a wide variety of nutrient environments, as well as its robustness to gene loss. We have recently completed the genome-level metabolic reconstruction of Yersinia pseudotuberculosis, as well as the three Yersinia pestis biovars Antiqua, Mediaevalis, and Orientalis. While Y. pseudotuberculosis typically only causes fever and abdominal pain that can mimic appendicitis, the evolutionarily closely related Y. pestis strains are the aetiological agents of the bubonic plague. In this presentation, I will discuss our results and conclusions from a comparative study on the evolution of metabolic function in the four Yersiniae networks using FBA and related techniques, with particular focus on the interplay between metabolic network topology and evolutionary flexibility.
A Bayesian Hierarchical Model for Reconstructing Sea Levels: From Raw Data to Rates of Change
Cahill, Niamh; Horton, Benjamin P; Parnell, Andrew C
2015-01-01
We present a holistic Bayesian hierarchical model for reconstructing the continuous and dynamic evolution of relative sea-level (RSL) change with fully quantified uncertainty. The reconstruction is produced from biological (foraminifera) and geochemical (δ13C) sea-level indicators preserved in dated cores of salt-marsh sediment. Our model comprises three modules: (1) a Bayesian transfer function for the calibration of foraminifera into tidal elevation, which is flexible enough to formally accommodate additional proxies (in this case bulk-sediment δ13C values); (2) a chronology developed from an existing Bchron age-depth model; and (3) an existing errors-in-variables integrated Gaussian process (EIV-IGP) model for estimating rates of sea-level change. We illustrate our approach using a case study of Common Era sea-level variability from New Jersey, U.S.A. We develop a new Bayesian transfer function (B-TF), with and without the δ13C proxy, and compare our results to those from a widely...
Niu, Sen; Ke, Jun
2016-10-01
In this paper, block-based compressive ultra-low-light-level imaging (BCU-imaging) is studied. Objects are divided into blocks. Features, i.e. linear combinations of block pixels, instead of the pixels themselves, are measured for each block to improve system measurement SNR and thus object reconstruction. Thermal noise and shot noise are discussed for object reconstruction; the former is modeled as Gaussian noise, the latter as Poisson noise. A linear Wiener operator and a linearized iterative Bregman algorithm are used to reconstruct objects from measurements corrupted by thermal noise. The SPIRAL algorithm is used to reconstruct objects from measurements with shot noise. The linear Wiener operator is also studied for measurements with shot noise, because Poisson noise is similar to Gaussian noise at large signal levels, and the feature values are large enough to make this assumption feasible. Root mean square error (RMSE) is used to quantify system reconstruction quality.
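The linear Wiener (LMMSE) operator mentioned above has a closed form once an object covariance and a noise variance are assumed. The sketch below uses a random feature matrix standing in for the block measurement system; the dimensions, the identity object prior, and the noise variance are all illustrative assumptions, not the paper's values.

```python
import numpy as np

rng = np.random.default_rng(2)
n_pix, n_feat = 16, 8                    # block pixels vs measured features

# Hypothetical feature (projection) matrix and a simple object prior.
H = rng.standard_normal((n_feat, n_pix))
R_x = np.eye(n_pix)                      # assumed object covariance
sigma2 = 0.01                            # thermal (Gaussian) noise variance

# Simulate one block: y = H x + n.
x = rng.standard_normal(n_pix)
y = H @ x + rng.normal(scale=np.sqrt(sigma2), size=n_feat)

# Linear Wiener (LMMSE) operator: W = R_x H^T (H R_x H^T + sigma2 I)^-1
W = R_x @ H.T @ np.linalg.inv(H @ R_x @ H.T + sigma2 * np.eye(n_feat))
x_hat = W @ y
```

Because W is precomputed once per block geometry, reconstruction is a single matrix-vector product, which is why the Wiener operator is attractive even when the shot-noise model is only approximately Gaussian.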
Zdrenghea Dumitru Tudor
2014-03-01
Full Text Available Background. In patients with heart failure, submaximal exercise tests (the 400-meter walk test and the 6-minute walk test) are an alternative to the classical bicycle-ergometer exercise test. The aim of the study was to compare the exercise-induced increase of the natriuretic peptide NT-proBNP after the 400-meter walk test and the 6-minute walk test with that after the classical bicycle-ergometer exercise test. Material and method. Twenty patients with heart failure (ejection fraction <40%), aged between 37 and 70 years, 16 men and 4 women, were studied. After regression of the congestive symptoms, all patients performed the three tests on three consecutive days: the classical bicycle-ergometer exercise test, the 400-meter walk test, and the 6-minute walk test. NT-proBNP values were determined by ELISA before and after each of the three exercise tests. Results. Mean NT-proBNP values were elevated at rest on all three days and then increased significantly, regardless of the type of exercise test performed: from 688±72 fmol/ml to 1869±91 fmol/ml (p<0.05) for the classical exercise test, from 843±90 fmol/ml to 977±93 fmol/ml (15%, p<0.05) for the 6-minute walk test, and from 676±63 fmol/ml to 927±95 fmol/ml (37%, p<0.05) for the 400-meter walk test. There were also significant correlations between the peak NT-proBNP values during the bicycle-ergometer test and the 6-minute walk test (r=0.71), the bicycle-ergometer test and the 400-meter walk test (r=0.71), and the 400-meter walk test and the 6-minute walk test (r=0.81, p<0.01). In conclusion, the NT-proBNP concentration increases significantly and similarly in patients with heart failure during both maximal and submaximal exercise. Both the 6-minute walk test and the 400-meter walk test are of sufficient intensity to trigger hormone release.
Kang, Jung In; Lee, Yoon Suk; Han, Ye Jin; Kong, Kyoung Ae
2017-01-01
Purpose Serum level of 25-hydroxyvitamin D (25-OHD) is considered as the most appropriate marker of vitamin D status. However, only a few studies have investigated the relationship between 25-OHD and parathyroid hormone (PTH) in children. To this end, this study was aimed at evaluating the lowest 25-OHD level that suppresses the production of parathyroid hormone in children. Methods A retrospective record review was performed for children aged 0.2 to 18 years (n=193; 106 boys and 87 girls) who underwent simultaneous measurements of serum 25-OHD and PTH levels between January 2010 and June 2014. Results The inflection point of serum 25-OHD level for maximal suppression of PTH was at 18.0 ng/mL (95% confidence interval, 14.3–21.7 ng/mL). The median PTH level of the children with 25-OHD levels of <18.0 ng/mL was higher than that of children with 25-OHD levels ≥ 18.0 ng/mL (P<0.0001). The median calcium level of children with 25-OHD levels<18.0 ng/mL was lower than that of children with 25-OHD levels≥18.0 ng/mL (P=0.0001). The frequency of hyperparathyroidism was higher in the children with 25-OHD levels<18.0 ng/mL than in the children with 25-OHD levels≥18.0 ng/mL (P<0.0001). Hypocalcemia was more prevalent in the children with 25-OHD levels<18.0 ng/mL than in the children with 25-OHD levels≥18.0 ng/mL (P<0.0001). Conclusion These data suggest that a vitamin D level of 18.0 ng/mL could be the criterion for 25-OHD deficiency in children at the inflection point of the maximal suppression of PTH. PMID:28289433
Reconstructing sea level from paleo and projected temperatures 200 to 2100 AD
Grinsted, Aslak; Moore, John; Jevrejeva, Svetlana
2010-01-01
We use a physically plausible four parameter linear response equation to relate 2,000 years of global temperatures and sea level. We estimate likelihood distributions of equation parameters using Monte Carlo inversion, which then allows visualization of past and future sea level scenarios. The model has good predictive power when calibrated on the pre-1990 period and validated against the high rates of sea level rise from the satellite altimetry. Future sea level is projected from intergovernmental panel on climate change (IPCC) temperature scenarios and past sea level from established multi-proxy reconstructions assuming that the established relationship between temperature and sea level holds from 200 to 2100 AD. Over the last 2,000 years minimum sea level (-19 to -26 cm) occurred around 1730 AD, maximum sea level (12–21 cm) around 1150 AD. Sea level 2090–2099 is projected to be 0.9 to 1.3 m for the A1B...
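The linear response idea above can be illustrated with a toy forward integration: sea level S relaxes toward a temperature-dependent equilibrium S_eq(T) = a*T + b with time constant tau. The parameter values below are purely illustrative, not the paper's Monte Carlo estimates.

```python
# Hedged sketch of a linear response model in the spirit of the abstract:
#     dS/dt = (S_eq(T) - S) / tau,   S_eq(T) = a*T + b
# Illustrative parameters (NOT fitted values from the paper):
a, b, tau = 1.0, 0.0, 200.0   # m/K, m, years
dt = 1.0                      # time step, years

def integrate_sea_level(temps, s0=0.0):
    """Forward-Euler integration of the response equation."""
    s, out = s0, []
    for t in temps:
        s_eq = a * t + b
        s += dt * (s_eq - s) / tau
        out.append(s)
    return out

# Step warming of 1 K: sea level relaxes toward the 1 m equilibrium
# on the multi-century timescale set by tau.
levels = integrate_sea_level([1.0] * 1000)
```

The long relaxation time is what lets the model connect slow multi-century proxy reconstructions with the rapid rise seen in the altimetry era.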
Statistical selection of tide gauges for Arctic sea-level reconstruction
Svendsen, Peter Limkilde; Andersen, Ole Baltazar; Nielsen, Allan Aasbjerg
2015-01-01
In this paper, we seek an appropriate selection of tide gauges for Arctic Ocean sea-level reconstruction based on a combination of empirical criteria and statistical properties (leverages). Tide gauges provide the only in situ observations of sea level prior to the altimetry era. However, tide ... for the period 1950-2010 for the Arctic Ocean, constrained by tide gauge records, using the basic approach of Church et al. (2004). A major challenge is the sparsity of both satellite and tide gauge data beyond what can be covered with interpolation, necessitating a time-variable selection of tide gauges and the use of an ocean circulation model to provide gridded time series of sea level. As a surrogate for satellite altimetry, we have used the Drakkar ocean model to yield the EOFs. We initially evaluate the tide gauges through empirical criteria to reject obvious outlier gauges. Subsequently, we evaluate...
Edge-Aware Level Set Diffusion and Bilateral Filtering Reconstruction for Image Magnification
Hua Huang; Yu Zang; Paul L. Rosin; Chun Qi
2009-01-01
In this paper we propose an image magnification reconstruction method. In recent years many interpolation algorithms have been proposed for image magnification, but all of them have defects to some degree, such as jaggies and blurring. To solve these problems, we propose applying a post-processing step that consists of edge-aware level set diffusion and bilateral filtering. After the initial interpolation, the contours of the image are identified. Next, edge-aware level set diffusion is applied to these significant contours to remove the jaggies, followed by bilateral filtering at the same locations to reduce the blurring created by the initial interpolation and the level set diffusion. These processes produce sharp contours without jaggies and preserve the details of the image. Results show that the overall RMS error of our method barely increases, while the contour smoothness and sharpness are substantially improved.
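The bilateral filtering stage combines a spatial Gaussian kernel with a range (intensity-difference) kernel, which is what lets it smooth noise while leaving sharp contours intact. Below is a brute-force sketch of the standard bilateral filter; the parameter values and the step-edge test image are illustrative, not those of the paper.

```python
import numpy as np

def bilateral_filter(img, sigma_s=2.0, sigma_r=0.2, radius=3):
    """Brute-force bilateral filter: spatial Gaussian weights modulated
    by a range (intensity-difference) Gaussian, so edges are preserved."""
    h, w = img.shape
    out = np.zeros_like(img)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
    padded = np.pad(img, radius, mode="edge")
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rng_w = np.exp(-((patch - img[i, j]) ** 2) / (2 * sigma_r**2))
            weights = spatial * rng_w
            out[i, j] = (weights * patch).sum() / weights.sum()
    return out

# A noisy step edge: filtering flattens the noise but keeps the edge,
# because pixels across the edge get near-zero range weights.
rng = np.random.default_rng(3)
step = np.hstack([np.zeros((8, 8)), np.ones((8, 8))])
noisy = step + rng.normal(scale=0.05, size=step.shape)
smoothed = bilateral_filter(noisy)
```

Shrinking sigma_r toward zero reduces the filter to the identity, while a large sigma_r recovers a plain Gaussian blur; the paper's use at detected contours sits between these extremes.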
Buescu, Cristian Tudor; Onutu, Adela Hilda; Lucaciu, Dan Osvald; Todor, Adrian
2017-03-01
The objective of this study was to compare the pain levels and analgesic consumption after single-bundle ACL reconstruction with free quadriceps tendon autograft versus hamstring tendon autograft. A total of 48 patients scheduled for anatomic single-bundle ACL reconstruction were randomized into two groups: the free quadriceps tendon autograft group (24 patients) and the hamstring tendon autograft group (24 patients). A basic multimodal analgesic postoperative program was used for all patients, and rescue analgesia was provided with tramadol at pain scores over 30 on the Visual Analog Scale. The time to the first rescue analgesic, the number of doses of tramadol and pain scores were recorded. The results within the same group were compared with the Wilcoxon signed test. Supplementary analgesic drug administration proved significantly higher in the group of subjects with hamstring grafts, with a median (interquartile range) of 1 (1 to 3) dose, compared to the group of subjects treated with a quadriceps graft, median = 0.5 (0 to 1.25) (p = 0.009). A significantly higher number of subjects with a quadriceps graft did not require any supplementary analgesic drug (50%) as compared with subjects with a hamstring graft (13%; Z statistic = 3.01, p = 0.002). The percentage of subjects who required a supplementary analgesic drug was 38% higher in the HT group compared with the FQT group. The use of the free quadriceps tendon autograft for ACL reconstruction leads to less pain and analgesic consumption in the immediate postoperative period compared with the use of hamstring autograft. Level I, therapeutic study. Copyright © 2017 Turkish Association of Orthopaedics and Traumatology. Production and hosting by Elsevier B.V. All rights reserved.
Koelling, M.
2009-12-01
SEALEX, a simple reef growth model (Koelling et al., 2009), has been used to test the feasibility of different sea-level reconstructions. The forward model is driven by a user-definable sea-level curve. Other adjustable model parameters include maximum coral growth rate, coral growth rate depth dependence and light attenuation, subaerial erosion and subsidence. A time lag for the establishment of significant reef accretion may also be set. During the model run, both the external shape and the internal chronologic structure of the growing reef, as well as the paleo-water-depths, are continuously displayed and recorded. The reef growth model was driven with different sea-level reconstructions, such as those of Lambeck & Chappell (Science, 2001), Waelbroeck et al. (QSR, 2002), Siddall et al. (Nature, 2003) and Bintanja et al. (Nature, 2005), for different tectonic settings: slowly subsiding islands like Tahiti (subsidence rate of 0.25 m ka-1), rapidly subsiding islands like Hawaii (subsidence rate of 2.5 m ka-1), and rapidly uplifting coastal settings like the Huon Peninsula (uplift rates of 0.5 to 4 m ka-1). The model runs show the sensitivity of the resulting overall morphology and internal age structure to the sea-level reconstruction used to drive the model. These results can then be compared to observed data, allowing different hypotheses concerning reef development to be tested. The model may also be used to assist in finding sampling locations in reef bodies that are likely to contain critical information for sea-level studies. Figure: model run of a Huon Peninsula-type reef driven by a spliced sea-level curve (200 ka to 120 ka: Waelbroeck et al., 2002; 120 ka to present: Lambeck and Chappell, 2001, as shown in the inset) and an uplift rate of 2.8 m ka-1. For comparison, the idealized Huon Peninsula profile is inset together with terrace ages compiled from (a) Esat et al. (1999) and Stein et al. (1993), (b) Potter et al. (2004), (c) Chappell et al. (1996) and (d)...
Bieberle, M; Hampel, U
2015-06-13
Tomographic image reconstruction is based on recovering an object distribution from its projections, which have been acquired from all angular views around the object. If the angular range is limited to less than 180° of parallel projections, typical reconstruction artefacts arise when using standard algorithms. To compensate for this, specialized algorithms using a priori information about the object need to be applied. The application behind this work is ultrafast limited-angle X-ray computed tomography of two-phase flows. Here, only a binary distribution of the two phases needs to be reconstructed, which reduces the complexity of the inverse problem. To solve it, a new reconstruction algorithm (LSR) based on the level-set method is proposed. It includes one force function term accounting for matching the projection data and one incorporating a curvature-dependent smoothing of the phase boundary. The algorithm has been validated using simulated as well as measured projections of known structures, and its performance has been compared to the algebraic reconstruction technique and a binary derivative of it. The validation as well as the application of the level-set reconstruction on a dynamic two-phase flow demonstrated its applicability and its advantages over other reconstruction algorithms. © 2015 The Author(s) Published by the Royal Society. All rights reserved.
Hammouda, Omar; Chtourou, Hamdi; Chaouachi, Anis; Chahed, Henda; Ferchichi, Salyma; Kallel, Choumous; Chamari, Karim; Souissi, Nizar
2012-01-01
Purpose Prolonged physical exercise results in transient elevations of biochemical markers of muscular damage. This study examined the effect of short-term maximal exercise on these markers, homocysteine levels (Hcy), and total antioxidant status (TAS) in trained subjects. Methods Eighteen male football players participated in this study. Blood samples were collected 5 min before and 3 min after a 30-s Wingate test. Results The results indicated that plasma biochemical markers of muscle injury increased significantly after the Wingate test (P < ...). ... bilirubin, and TAS increased significantly after exercise (P<0.05). However, Hcy levels were unaffected by the Wingate test (for the 3-min post-exercise measurement). Conclusions Short-term maximal exercise (e.g. a 30-s Wingate test) is of sufficient intensity and duration to increase markers of muscle damage and TAS, but not Hcy levels. Increases in the selected enzymes probably come primarily from muscle damage, rather than liver damage. Moreover, the increase of TAS confirms that the Wingate test induced oxidative stress. PMID:23342222
Particle Size Evidence of Intertidal Elevation: A Basis for Quantitative Sea-level Reconstruction
Plater, Andrew; Mills, Hayley; Zhang, Weiguo; Dong, Chenyin
2014-05-01
The relationship between particle size distributions and bed elevation within the tidal frame is controlled largely by hydroperiod and proximity to tidal ingress. Here, the upper part of the intertidal zone is characterised by poorly sorted, near symmetrical, platy- to mesokurtic, fine-grained particle size distributions due to particle settling from suspension as the tidal flow velocity decreases to high tide slack water. Indeed, an elevational or spatial gradient in particle size distribution can be observed whereby shorter hydroperiods (higher elevations) are accompanied by slower and more variable flow velocities. However, this gradient may become complicated by creek networks, whereby particle size can be observed to decrease away from creek margins, or extant vegetation that increases bed friction. Unvegetated, planar tidal flats in the Yangtze estuary offer an ideal test bed to explore evidence for a quantitative relationship between particle size distributions and bed elevation within the tidal frame. Such a relationship would then serve as an effective proxy for tidal level preserved within sediment cores, and thus a means for reconstructing past sea level. This principle is based largely on ecological transfer function-based reconstructions of Holocene sea level from foraminifera and diatoms. Surface sediment samples were collected along three transects extending eastwards from Chongming Island in South Branch channel of the Yangtze estuary. Sample positions relative to the high water mark were determined using RTK surveying, and particle size analysis was undertaken using laser granulometry. Unconstrained cluster analysis, based on unweighted Euclidean distance, was undertaken on the particle size classes at 0.25 phi intervals (up to 50 size bins) as well as Udden-Wentworth size classes (6-7 size bins). All three transects demonstrate a good clustering of particle size classes with distance and elevation, i.e. sites that are higher within the tidal frame
Higgins, M F; Tallis, J; Price, M J; James, R S
2013-05-01
This study examined the effects of elevated buffer capacity [~32 mM HCO₃(-)] through administration of sodium bicarbonate (NaHCO₃) on maximally stimulated isolated mouse soleus (SOL) and extensor digitorum longus (EDL) muscles undergoing cyclical length changes at 37 °C. The elevated buffering capacity was of an equivalent level to that achieved in humans with acute oral supplementation. We evaluated the acute effects of elevated [HCO₃(-)] on (1) maximal acute power output (PO) and (2) time to fatigue to 60 % of maximum control PO (TLIM60), the level of decline in muscle PO observed in humans undertaking similar exercise, using the work loop technique. Acute PO was on average 7.0 ± 4.8 % greater for NaHCO₃-treated EDL muscles (P muscles (P muscle performance was variable, suggesting that there might be inter-individual differences in response to NaHCO₃ supplementation. These results present the best indication to date that NaHCO₃ has direct peripheral effects on mammalian skeletal muscle resulting in increased acute power output.
Simulation of droplet impact on a solid surface using the level contour reconstruction method
Shin, Seung Won [Hongik University, Seoul (Korea, Republic of)]; Juric, Damir [Laboratoire d'Informatique pour la Mecanique et les Sciences de l'Ingenieur, Orsay (France)]
2009-09-15
We simulate the three-dimensional impact of a droplet onto a solid surface using the level contour reconstruction method (LCRM). A Navier-slip dynamic contact line model is implemented in this method and contact angle hysteresis is accounted for by fixing the contact angle limits to prescribed advancing or receding angles. Computation of a distance function directly from the tracked interface enables a straightforward implementation of the contact line dynamic model in the LCRM. More general and sophisticated contact line models are readily applicable in this front tracking approach with few modifications, since complete knowledge of the geometrical information of the interface in the vicinity of the wall contact region is available. Several validation tests are performed including 2D planar droplet, 2D axisymmetric droplet, and full three-dimensional droplet splashing problems. The results show good agreement compared with existing numerical and experimental solutions.
Phylogeographic reconstruction of a bacterial species with high levels of lateral gene transfer
Kaul Rajinder
2009-11-01
Background: Phylogeographic reconstruction of some bacterial populations is hindered by low diversity coupled with high levels of lateral gene transfer. A comparison of recombination levels and diversity at seven housekeeping genes for eleven bacterial species, most of which are commonly cited as having high levels of lateral gene transfer, shows that the relative contribution of homologous recombination versus mutation for Burkholderia pseudomallei is over two times higher than for Streptococcus pneumoniae and is thus the highest value yet reported in bacteria. Despite the potential for homologous recombination to increase diversity, B. pseudomallei exhibits a relative lack of diversity at these loci. In these situations, whole-genome genotyping of orthologous shared single nucleotide polymorphism loci, discovered using next-generation sequencing technologies, can provide very large data sets capable of estimating core phylogenetic relationships. We compared and searched 43 whole genome sequences of B. pseudomallei and its closest relatives for single nucleotide polymorphisms in orthologous shared regions to use in phylogenetic reconstruction. Results: Bayesian phylogenetic analyses of >14,000 single nucleotide polymorphisms yielded completely resolved trees for these 43 strains with high levels of statistical support. These results enable a better understanding of a separate analysis of population differentiation among >1,700 B. pseudomallei isolates as defined by sequence data from seven housekeeping genes. We analyzed this larger data set for population structure and allele sharing that can be attributed to lateral gene transfer. Our results suggest that despite an almost panmictic population, we can detect two distinct populations of B. pseudomallei that conform to biogeographic patterns found in many plant and animal species: that is, separation along Wallace's Line, a biogeographic boundary between Southeast Asia and Australia.
Reconstructing Late Holocene Relative Sea-level Changes on the Gulf Coast of Florida
Gerlach, M. J.; Engelhart, S. E.; Kemp, A.; Moyer, R. P.; Smoak, J. M.; Bernhardt, C. E.
2015-12-01
Little is known about late Holocene relative sea-level (RSL) along the Gulf Coast of Florida. A RSL reconstruction from this region is needed to fill a spatial gap in sea-level records, which can be used to support coastal management, contribute geologic data for Earth-Ice models estimating late Holocene land-level change and serve as the baseline upon which future projections of sea-level rise must be superimposed. Further, this dataset is crucial to understanding the presence/absence and non-synchronous timing of small sea-level oscillations (e.g. rise at ~ 1000 A.D.; fall at ~ 1400 A.D.) during the past 2000 years on the Atlantic and Gulf Coasts of the United States that may be linked to climate anomalies. We present the results of a high-resolution RSL reconstruction based on the sediment record of two salt marshes on the eastern margin of the Gulf of Mexico. Two ~1.3m cores primarily composed of Juncus roemerianus peat reveal RSL changes over the past ~2000 years in the southern end of Tampa Bay and in Charlotte Harbor, Florida. Two study sites were used to isolate localized factors affecting RSL at either location. Lithostratigraphic analysis at both sites identifies a transition from sandy-silt layers into salt-marsh peat at the bottom of each core. The two records show continuous accumulation of salt-marsh peat with Juncus roemerianus macrofossils and intermittent sand horizons likely reflecting inundation events. We used vertically zoned assemblages of modern foraminifera to assign the indicative meaning. The high marsh is dominated by Ammoastuta inepta, Haplophragmoides wilberti, and Arenoparrella mexicana, with low marsh and tidal flats identified by Ammobaculites spp. and Miliammina fusca. Chronologies for these study sites were established using AMS radiocarbon dating of in-situ plant macrofossils, 137Cs, 210Pb and pollen and pollution chronohorizons. Our regional RSL curve will provide additional data for constraining the mechanisms causing RSL change.
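The indicative-meaning step that converts a dated salt-marsh sample into an RSL estimate can be sketched as below; the sample altitude and biozone elevation range are invented for illustration:

```python
# Minimal sketch of the indicative-meaning calculation used in salt-marsh RSL
# reconstruction: each dated sample is assigned the elevation range in which
# its foraminiferal assemblage forms today (relative to a tidal datum), and
# RSL = sample altitude - midpoint of that range.
def reconstruct_rsl(sample_altitude_m, indicative_range_m):
    lo, hi = indicative_range_m
    rsl = sample_altitude_m - (lo + hi) / 2.0
    uncertainty = (hi - lo) / 2.0        # half the indicative range
    return rsl, uncertainty

# Hypothetical high-marsh peat sample recovered at -0.4 m relative to present
# mean tide level, with a modern biozone range of 1.2-1.5 m above that datum:
rsl, err = reconstruct_rsl(-0.4, (1.2, 1.5))
print(round(rsl, 2), round(err, 2))  # -1.75 0.15
```

High-marsh assemblages occupy the narrowest modern elevation range, which is why they yield the smallest vertical uncertainty in this calculation.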
Gavryusev, V.; Signoles, A.; Ferreira-Cao, M.; Zürn, G.; Hofmann, C. S.; Günter, G.; Schempp, H.; Robert-de-Saint-Vincent, M.; Whitlock, S.; Weidemüller, M.
2016-08-01
We present combined measurements of the spatially resolved optical spectrum and the total excited-atom number in an ultracold gas of three-level atoms under electromagnetically induced transparency conditions involving high-lying Rydberg states. The observed optical transmission of a weak probe laser at the center of the coupling region exhibits a double peaked spectrum as a function of detuning, while the Rydberg atom number shows a comparatively narrow single resonance. By imaging the transmitted light onto a charge-coupled-device camera, we record hundreds of spectra in parallel, which are used to map out the spatial profile of Rabi frequencies of the coupling laser. Using all the information available we can reconstruct the full one-body density matrix of the three-level system, which provides the optical susceptibility and the Rydberg density as a function of spatial position. These results help elucidate the connection between three-level interference phenomena, including the interplay of matter and light degrees of freedom, and will facilitate new studies of many-body effects in optically driven Rydberg gases.
Global reconstructed daily storm surge levels from the 20th century reanalysis (1871-2010)
Cid, Alba; Camus, Paula; Castanedo, Sonia; Mendez, Fernando; Medina, Raul
2015-04-01
The study of global patterns of wind and pressure gradients, and more specifically their effect on sea level variation (storm surge), is a key issue in the understanding of recent climate changes. The local effect of storm surges on coastal areas (zones particularly vulnerable to climate variability and changes in sea level) is also of great interest in, for instance, flooding risk assessment. Studying the spatial and temporal variability of storm surges from observations is a difficult task, since observations are not homogeneous in time, are scarce in space, and have limited temporal coverage. The development of a global storm surge database (DAC, Dynamic Atmospheric Correction by Aviso; Carrère and Lyard, 2003) fills the gap in spatial coverage, but not in temporal extent, since it covers only the last couple of decades (1992-2014). In this work, we propose the use of the 20CR ensemble (Compo et al., 2011), which spans from 1871 to 2010, to statistically reconstruct storm surge at a global scale and for a long period of time. The temporal and spatial variability of storm surges can thereby be studied fully, and with much less effort than performing a dynamical downscaling. The statistical method chosen to carry out the reconstruction is based on multiple linear regression between an atmospheric predictor and the storm surge level at daily scale (Camus et al., 2014). The linear regression model is calibrated and validated using daily mean sea level pressure fields (and gradients) from the ERA-Interim reanalysis and daily maximum surges from DAC. The resulting database of daily maximum surges has allowed us to estimate global trends at a centennial scale and analyse the effect of the changing climate on storm surges during the 20th century. Hence, this work improves the knowledge of historical storm-surge conditions and provides helpful information to the community concerned with marine climate evolution and
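A minimal sketch of such a daily-scale regression is given below, with synthetic predictors standing in for the SLP anomaly and its gradients; the coefficients and noise level are invented, and the actual method of Camus et al. (2014) works on PCA-reduced pressure fields rather than point values:

```python
import numpy as np

# Synthetic daily predictors standing in for a local SLP anomaly (hPa) and
# its two horizontal gradients; all coefficients below are illustrative.
rng = np.random.default_rng(1)
n_days = 2000
slp = rng.normal(0.0, 8.0, n_days)
dpdx = rng.normal(0.0, 2.0, n_days)
dpdy = rng.normal(0.0, 2.0, n_days)
# "True" surge: inverse-barometer-like response plus wind-setup terms + noise.
surge = -1.0 * slp + 3.0 * dpdx + 1.5 * dpdy + rng.normal(0.0, 1.0, n_days)

# Calibrate the multiple linear regression (intercept + three predictors)
# by ordinary least squares.
X = np.column_stack([np.ones(n_days), slp, dpdx, dpdy])
coef, *_ = np.linalg.lstsq(X, surge, rcond=None)
print(coef)  # close to [0, -1.0, 3.0, 1.5]
```

Once calibrated against the satellite-era target (DAC here), the same coefficients can be applied to the 1871-2010 pressure fields to hindcast the surge.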
Brain, Matthew J.; Kemp, Andrew C.; Hawkes, Andrea D.; Engelhart, Simon E.; Vane, Christopher H.; Cahill, Niamh; Hill, Troy D.; Donnelly, Jeffrey P.; Horton, Benjamin P.
2017-07-01
Salt-marsh sediments provide precise and near-continuous reconstructions of Common Era relative sea level (RSL). However, organic and low-density salt-marsh sediments are prone to compaction processes that cause post-depositional distortion of the stratigraphic column used to reconstruct RSL. We compared two RSL reconstructions from East River Marsh (Connecticut, USA) to assess the contribution of mechanical compression and biodegradation to compaction of salt-marsh sediments and their subsequent influence on RSL reconstructions. The first, existing reconstruction ('trench') was produced from a continuous sequence of basal salt-marsh sediment and is unaffected by compaction. The second, new reconstruction is from a compaction-susceptible core taken at the same location. We highlight that sediment compaction is the only feasible mechanism for explaining the observed differences in RSL reconstructed from the trench and core. Both reconstructions display long-term RSL rise of ∼1 mm/yr, followed by a ∼19th Century acceleration to ∼3 mm/yr. A statistically-significant difference between the records at ∼1100 to 1800 CE could not be explained by a compression-only geotechnical model. We suggest that the warmer and drier conditions of the Medieval Climate Anomaly (MCA) resulted in an increase in sediment compressibility during this time period. We adapted the geotechnical model by reducing the compressive strength of MCA sediments to simulate this softening of sediments. 'Decompaction' of the core reconstruction with this modified model accounted for the difference between the two RSL reconstructions. Our results demonstrate that compression-only geotechnical models may be inadequate for estimating compaction and post-depositional lowering of susceptible organic salt-marsh sediments in some settings. This has important implications for our understanding of the drivers of sea-level change. Further, our results suggest that future climate changes may make salt
Cichy, Krzysztof [DESY, Zeuthen (Germany). NIC; Adam Mickiewicz Univ., Poznan (Poland). Faculty of Physics; Jansen, Karl [DESY, Zeuthen (Germany). NIC; Korcyl, Piotr [DESY, Zeuthen (Germany). NIC; Jagiellonian Univ., Krakow (Poland). M. Smoluchowski Inst. of Physics
2012-07-15
We present results of a lattice QCD application of a coordinate space renormalization scheme for the extraction of renormalization constants for flavour non-singlet bilinear quark operators. The method consists in the analysis of the small-distance behaviour of correlation functions in Euclidean space and has several theoretical and practical advantages, in particular: it is gauge invariant, easy to implement and has relatively low computational cost. The values of renormalization constants in the X-space scheme can be converted to the MS-bar scheme via 4-loop continuum perturbative formulae. Our results for N_f = 2 maximally twisted mass fermions with tree-level Symanzik improved gauge action are compared to the ones from the RI-MOM scheme and show full agreement with this method.
Kemp, Andrew C.; Horton, Benjamin P.; Vann, David R.; Engelhart, Simon E.; Grand Pre, Candace A.; Vane, Christopher H.; Nikitina, Daria; Anisfeld, Shimon C.
2012-10-01
We present a quantitative technique to reconstruct sea level from assemblages of salt-marsh foraminifera using partitioning around medoids (PAM) and linear discriminant functions (LDF). The modern distribution of foraminifera was described from 62 surface samples at three salt marshes in southern New Jersey. PAM objectively estimated the number and composition of assemblages present at each site and showed that foraminifera adhered to the concept of elevation-dependent ecological zones, making them appropriate sea-level indicators. Application of PAM to a combined dataset identified five distinctive biozones occupying defined elevation ranges, which were similar to those identified elsewhere on the U.S. mid-Atlantic coast. Biozone A had high abundances of Jadammina macrescens and Trochammina inflata; biozone B was dominated by Miliammina fusca; biozone C was associated with Arenoparrella mexicana; biozone D was dominated by Tiphotrocha comprimata and biozone E was dominated by Haplophragmoides manilaensis. Foraminiferal assemblages from transitional and high salt-marsh environments occupied the narrowest elevational range and are the most precise sea-level indicators. Recognition of biozones in sequences of salt-marsh sediment using LDFs provides a probabilistic means to reconstruct sea level. We collected a core to investigate the practical application of this approach. LDFs indicated the faunal origin of 38 core samples and in cross-validation tests were accurate in 54 of 56 cases. We compared reconstructions from LDFs and a transfer function. The transfer function provides smaller error terms and can reconstruct smaller RSL changes, but LDFs are well suited to RSL reconstructions focused on larger changes and using varied assemblages. Agreement between these techniques suggests that the approach we describe can be used as an independent means to reconstruct sea level or, importantly, to check the ecological plausibility of results from other techniques.
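The PAM step can be sketched as a small k-medoids routine; the site-by-species abundance matrix below is synthetic (two artificial assemblage groups), not the New Jersey data, and the species structure is invented for illustration:

```python
import numpy as np

def pam(X, k, n_iter=100, seed=0):
    """Minimal partitioning-around-medoids on Euclidean distances."""
    rng = np.random.default_rng(seed)
    n = len(X)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    medoids = rng.choice(n, k, replace=False)
    for _ in range(n_iter):
        labels = np.argmin(D[:, medoids], axis=1)
        # For each cluster, pick the member minimising total within-cluster
        # distance as the new medoid.
        new = np.array([np.where(labels == j)[0][
            np.argmin(D[np.ix_(labels == j, labels == j)].sum(axis=1))]
            for j in range(k)])
        if set(new) == set(medoids):
            break
        medoids = new
    labels = np.argmin(D[:, medoids], axis=1)
    return medoids, labels

# Synthetic site-by-species matrix of relative abundances (%): ten "high
# marsh" samples dominated by species 1 and ten "low marsh" samples
# dominated by species 3.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal([70.0, 20.0, 10.0], 3.0, (10, 3)),
               rng.normal([10.0, 15.0, 75.0], 3.0, (10, 3))])
medoids, labels = pam(X, k=2)
print(labels)
```

Because each medoid is itself a sample, the recovered clusters can be read directly as elevation-dependent biozones, which is what makes the PAM output interpretable as sea-level indicators.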
Challenges of Holocene sea-level reconstructions in area of low uplift rate
Grudzinska, Ieva; Vassiljev, Jüri; Stivrins, Normunds
2017-04-01
Isolated coastal water bodies provide an excellent sedimentary archive of the evolutionary stages of coastal regions. In areas with high land uplift rates and hard bedrock, it is relatively easy to determine the lake isolation threshold, time and contact, where marine and brackish diatoms are replaced by halophilous and subsequently by freshwater diatoms. In areas where the land uplift rate is near zero and a sedimentary cover of sand, silt and/or clay exists, however, determining the lake isolation threshold and time is a rather complicated task. Such an area is the coast of the Gulf of Riga, where the apparent land uplift is about 1 mm yr-1 in the northern part and near zero in the southern part. The aim of the study is to improve the understanding of the nature and extent of Holocene sea level changes in the eastern Baltic Sea region, in an area with a low land uplift rate. This study marks the first attempt to reconstruct sea level changes for a wide variety of settings based on high-resolution bio-, litho- and chronostratigraphical evidence from sediment records of isolation basins in Latvia. In total, eight lakes were studied in order to revise the relative sea level (RSL) changes at the southern coast of the Gulf of Riga based on new litho- and biostratigraphical data and radiocarbon dates. The palaeogeographical reconstruction was challenging because the process of isolation was influenced by various factors, such as gradual eustatic sea level (ESL) rise, infilling of river deltas by sediments and long-shore sediment transport. The water level in the Baltic Sea basin until 8,500 cal BP was influenced primarily by deglaciation dynamics, whereas in the last 8,500 years the main factor was the complicated interplay between ESL rise and the land uplift rate. According to diatom composition and radiocarbon dates, the Litorina Sea transgression was a long-lasting event (ca. 2,200 years) in the southern part of
Musson-Genon, Luc; Dupont, Eric; Wendum, Denis
2007-08-01
We present a comparison between several methods used to reconstruct fluxes and vertical profiles of wind, temperature and humidity from measurements at two levels in the atmospheric surface layer for different practical applications. An analytical method and an iterative method are tested by evaluating the quality of estimations of surface fluxes from detailed field measurements obtained during a campaign on the site of Lannemezan in the south-west of France. The iterative method yields better results, but the analytical one can give results of the same level of accuracy provided that specific constants in its formulation are modified. Then these techniques are applied to wind and temperature reconstruction for an experiment dedicated to wind power estimates over flat terrain. If turbulent fluxes are not needed, a simple power law appears to be sufficient, as the method based on Monin-Obukhov theory does not improve the accuracy of the vertical profile reconstruction.
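An iterative flux reconstruction of this kind, under Monin-Obukhov similarity with Businger-Dyer stability functions, might look like the sketch below; the measurement heights and level differences are invented, and the paper's exact formulation (and constants) may differ:

```python
import math

K = 0.4    # von Karman constant
G = 9.81   # gravitational acceleration, m/s^2

def psi_m(zeta):
    """Businger-Dyer stability correction for momentum."""
    if zeta >= 0:                        # stable side
        return -5.0 * zeta
    x = (1.0 - 16.0 * zeta) ** 0.25      # unstable side
    return (2.0 * math.log((1.0 + x) / 2.0) + math.log((1.0 + x * x) / 2.0)
            - 2.0 * math.atan(x) + math.pi / 2.0)

def psi_h(zeta):
    """Businger-Dyer stability correction for heat."""
    if zeta >= 0:
        return -5.0 * zeta
    y = math.sqrt(1.0 - 16.0 * zeta)
    return 2.0 * math.log((1.0 + y) / 2.0)

def fluxes_from_two_levels(z1, z2, du, dtheta, theta_mean, n_iter=50):
    """Iteratively recover u*, theta* and the Obukhov length L from wind and
    potential-temperature differences between two measurement levels."""
    L = 1.0e6                            # start from near-neutral conditions
    for _ in range(n_iter):
        lm = math.log(z2 / z1) - psi_m(z2 / L) + psi_m(z1 / L)
        lh = math.log(z2 / z1) - psi_h(z2 / L) + psi_h(z1 / L)
        ustar = K * du / lm
        tstar = K * dtheta / lh
        L = ustar ** 2 * theta_mean / (K * G * tstar)
    return ustar, tstar, L

# Hypothetical unstable case: wind difference +3 m/s and potential-temperature
# difference -1 K between 2 m and 10 m, mean potential temperature 293 K.
ustar, tstar, obukhov_len = fluxes_from_two_levels(2.0, 10.0, 3.0, -1.0, 293.0)
print(round(ustar, 2), round(obukhov_len, 1))
```

Once u*, theta* and L are known, the same similarity relations can be inverted to extrapolate wind and temperature profiles to heights above the measurement levels, which is the step relevant for wind power estimates.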
Influence of reconstruction water-bearing levels on surface displacement of post-mining areas
Milczarek, Wojciech; Blachowski, Jan; Grzempowski, Piotr
2014-05-01
The phenomenon of secondary deformation characteristic of post-mining areas is not sufficiently understood. At the ground surface the phenomenon may be continuous or discontinuous. There is insufficient information describing the long-term behaviour of the rock mass after the end of exploitation; it is generally assumed that the phenomenon gradually disappears once exploitation ends. Reliable quantitative data come only from the analysis of direct measurements in selected areas: geodetic and satellite measurements. The current situation of operating mines suggests that in the coming years more centres will limit coal mining. This will create further post-mining areas in which permanent monitoring, and predictions of the impact of past exploitation on the rock mass surface, will be required; this will be particularly important for highly urbanized areas. This study used the finite element method (FEM) to describe the phenomenon of water-bearing level reconstruction and its impact on ground surface displacement. It was assumed that the significant factors influencing the occurrence and size of secondary deformations are: the reconstruction of water-bearing levels in the previously drained rock mass, the extent of past exploitation, the spatial distribution of coal seams, and the geological and tectonic structure. A transversally isotropic model with six elastic constants (E1 = E2, E3, ν = ν12, ν13, G12, G13) was assumed to describe the rock mass in the numerical calculations. Geometrical models used in the numerical calculations were developed using GIS tools. For the study, two-dimensional and three-dimensional models characterized by different geological conditions and different configurations of mining data were developed. The calculated ground surface displacements for the period of mining activity were verified against results based on the Knothe theory. The results of
Sarıkabak, Murat; Yaman, Çetin; Tok, Serdar; Binboga, Erdal
2016-11-02
We investigated the effect of positive and negative feedback on maximal voluntary contraction (MVC) of the biceps brachii muscle and explored the mediating effects of gender and conscientiousness. During elbow flexion, MVCs were measured in positive, negative, and no-feedback conditions. Participants were divided into high- and low-conscientiousness groups based on the median split of their scores on Tatar's five-factor personality inventory. Across all participants (46 college student athletes; 21 female, 28 male), positive feedback led to a greater MVC percentage change (-5.76%) than did negative feedback (2.2%). MVC percentage change in the positive feedback condition differed significantly by gender, but in the negative feedback condition it did not. Thus, positive feedback increased female athletes' MVC level by 3.49% but decreased male athletes' MVC level by 15.6%. For conscientiousness, MVC percentage change in the positive feedback condition did not differ between high- and low-conscientiousness groups. However, conscientiousness interacted with gender in the positive feedback condition, increasing MVC in high-conscientiousness female athletes and decreasing MVC in low-conscientiousness female athletes. Positive feedback decreased MVC in both high- and low-conscientiousness male athletes.
Bendle, James A. P.; Rosell-Melé, Antoni; Cox, Nicholas J.; Shennan, Ian
2009-12-01
Reconstruction of late Quaternary sea level history in areas of glacioisostatic uplift often relies on sediment archives from coastal isolation basins, natural coastal rock depressions previously isolated from or connected to the sea at different times. Proxy indicators for marine, brackish, or lacustrine conditions combined with precise dating can constrain the time when the sea crossed the sill threshold and isolated (or connected) the basin. The utility of isolation basins in investigations of sea level change is well known, but investigations have been mostly limited to microfossil proxies, the application of which can be limited by preservation and nonanalog problems. Here we investigate the potential of long-chain alkenones, alkenoates, and bulk organic parameters (TOC, Corg/N) for reconstructing past sea level changes in isolation basins in NW Scotland. We analyze organic biomarkers and bulk parameters from both modern basins (at different stages of isolation from the sea) and fossil basins (with sea level histories reconstructed from established proxies). Logit regression analysis was employed to find which of the biomarker metrics or bulk organic measurements could reliably characterize the sediment samples in terms of a marine/brackish or isolated/lacustrine origin. The results suggested a good efficiency for the alkenone index %C37:4 at predicting the depositional origin of the sediments. This study suggests that alkenones could be used as a novel proxy for sea level change in fossil isolation basins especially when microfossil preservation is poor.
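The logit-regression step can be sketched as follows, with invented %C37:4 values standing in for the measured alkenone data (the actual calibration values come from the Scottish basins); the fitted model classifies a sample's depositional origin from the index alone:

```python
import numpy as np

# Invented %C37:4 values: low in marine/brackish samples, high in isolated/
# lacustrine samples, reflecting the tendency of the tetra-unsaturated C37
# alkenone to dominate in low-salinity waters.
rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(5.0, 2.0, 40).clip(0),     # marine/brackish
                    rng.normal(25.0, 5.0, 40).clip(0)])   # isolated/lacustrine
y = np.concatenate([np.zeros(40), np.ones(40)])           # 1 = lacustrine

# Fit the logit model P(lacustrine) = sigmoid(w * %C37:4 + b) by plain
# gradient ascent on the log-likelihood.
w = b = 0.0
for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-(w * x + b)))
    w += 0.01 * np.mean((y - p) * x)
    b += 0.01 * np.mean(y - p)

def predict_lacustrine(pc374):
    return 1.0 / (1.0 + np.exp(-(w * pc374 + b))) > 0.5

print(predict_lacustrine(3.0), predict_lacustrine(30.0))  # False True
```

Applied downcore, such a classifier flags the stratigraphic level at which a basin crossed the isolation (or connection) threshold, independently of microfossil preservation.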
Profit maximization mitigates competition
Dierker, Egbert; Grodal, Birgit
1996-01-01
We consider oligopolistic markets in which the notion of shareholders' utility is well-defined and compare the Bertrand-Nash equilibria in case of utility maximization with those under the usual profit maximization hypothesis. Our main result states that profit maximization leads to less price competition than utility maximization. Since profit maximization tends to raise prices, it may be regarded as beneficial for the owners as a whole. Moreover, if profit maximization is a good proxy for utility maximization, then there is no need for a general equilibrium analysis that takes the distribution of profits among consumers fully into account, and partial equilibrium analysis suffices.
Ninković Srđan
2015-01-01
Introduction. The goal of this study was to examine the nature and extent of the influence of different levels of sports activity on the quality of life of patients one year after reconstruction of the anterior cruciate ligament. Material and Methods. The study included 185 patients operated on at the Department of Orthopedic Surgery and Traumatology of the Clinical Centre of Vojvodina, who were followed for twelve months. Data were collected using the modified Knee Injury and Osteoarthritis Outcome Score questionnaire, which included the Lysholm scale. Results. This study included 146 male and 39 female subjects. The reconstruction of the anterior cruciate ligament was equally successful in both gender groups. In relation to different types of sports activity, there were no differences in overall quality of life measured by the questionnaire and its subscales, regardless of the level (professional or recreational). However, regarding the level of sports activity, there were differences between subjects engaged in sports at the national level and those engaged in sports at the recreational level, and particularly in comparison with the physically inactive population. No significant correlation was found when examining the relationship between these types of sports activity. Conclusions. This study has shown that overall quality of life one year after reconstruction of the anterior cruciate ligament does not differ in relation to either the gender of the subjects or the type of sports activity, while the level of sports activity does have some influence on quality of life. Professional athletes were found to train significantly more intensively after this reconstruction than recreational athletes.
SHI Dong-liang; WANG Yu-bin; AI Zi-sheng
2010-01-01
Background: The anterior cruciate ligament (ACL) is one of the most commonly injured knee ligaments. Even following ACL reconstruction, significant articular cartilage degeneration can be observed and most patients suffer from premature osteoarthritis. Articular cartilage degeneration and osteoarthritis development after ACL injury are regarded as a progressive process affected by cyclic loading during frequently performed low-intensity daily activities. The purpose of this study was to perform a meta-analysis of studies assessing the effects of ACL reconstruction on the kinematics, kinetics and proprioception of the knee during level walking. Methods: This meta-analysis was conducted according to the methodological guidelines outlined by the Cochrane Collaboration. An electronic search of the literature was performed and all trials published between January 1966 and July 2010 comparing gait and proprioception of a reconstructed-ACL group with an intact-ACL group were pooled for this review. Thirteen studies were included in the final meta-analysis. Results: There was no significant difference in step length, walking speed, maximum knee flexion angle during loading response, joint position sense or threshold to detect passive motion between the reconstructed-ACL group and the intact-ACL group (P > 0.05). However, there was a significant difference in peak knee flexion angle, maximum angular knee flexion excursion during stance, peak knee flexion moment during walking and maximum external tibial rotation angle throughout the gait cycle between the two groups (P < 0.05). Conclusions: Step length, walking speed, maximum knee flexion angle during loading response, joint position sense and threshold to detect passive motion usually observed with ACL deficiency were restored after ACL reconstruction and rehabilitation, but no significant improvements were observed for peak knee flexion angle or maximum angular knee flexion excursion
Grain-size based sea-level reconstruction in the south Bohai Sea during the past 135 kyr
Yi, Liang; Chen, Yanping
2013-04-01
Future anthropogenic sea-level rise and its impact on coastal regions is an important issue facing human civilizations. Due to the short nature of the instrumental record of sea-level change, development of proxies for sea-level change prior to the advent of instrumental records is essential to reconstruct long-term background sea-level changes on local, regional and global scales. Two of the most widely used approaches for reconstructing past sea-level changes are: (1) exploitation of dated geomorphologic features such as coastal sands (e.g. Mauz and Hassler, 2000), salt marsh (e.g. Madsen et al., 2007), terraces (e.g. Chappell et al., 1996), and other coastal sediments (e.g. Zong et al., 2003); and (2) sea-level transfer functions based on faunal assemblages such as testate amoebae (e.g. Charman et al., 2002), foraminifera (e.g. Chappell and Shackleton, 1986; Horton, 1997), and diatoms (e.g. Horton et al., 2006). While a variety of methods has been developed to reconstruct palaeo-changes in sea level, many regions, including the Bohai Sea, China, still lack detailed relative sea-level curves extending back to the Pleistocene (Yi et al., 2012). For example, coral terraces are absent in the Bohai Sea, and the poor preservation of faunal assemblages makes development of a transfer function for a relative sea-level reconstruction unfeasible. In contrast, frequent alternations between transgression and regression have presumably imprinted sea-level change on the grain size distribution of Bohai Sea sediments, which varies from medium silt to coarse sand during the late Quaternary (IOCAS, 1985). Advantages of grain-size-based relative sea-level transfer function approaches are that they require smaller sample sizes, allowing for replication, faster measurement and higher spatial or temporal resolution at a fraction of the cost of detailed micro-palaeontological analysis (Yi et al., 2012). Here, we employ numerical methods to partition sediment grain size using a combined database of
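Grain-size partitioning of the kind invoked here can be sketched as decomposing a measured distribution into overlapping components; the two-mode synthetic example below (Gaussian components in phi units, i.e. log-normal in grain diameter) is purely illustrative and does not reproduce the paper's numerical method:

```python
import numpy as np
from scipy.optimize import curve_fit

# A measured grain-size distribution (volume % per 0.25-phi bin) is modelled
# as the sum of two components that are Gaussian in phi units.
phi = np.arange(0.0, 10.0, 0.25)

def two_modes(phi, a1, mu1, s1, a2, mu2, s2):
    g = lambda mu, s: np.exp(-0.5 * ((phi - mu) / s) ** 2)
    return a1 * g(mu1, s1) + a2 * g(mu2, s2)

# Synthetic sample: a coarse (sand) mode at 1.5 phi and a fine (silt) mode
# at 6 phi, with a little measurement noise.
obs = two_modes(phi, 6.0, 1.5, 0.6, 4.0, 6.0, 1.2)
obs += np.random.default_rng(4).normal(0.0, 0.05, phi.size)

popt, _ = curve_fit(two_modes, phi, obs, p0=[5.0, 2.0, 1.0, 5.0, 6.0, 1.0])
print(round(popt[1], 2), round(popt[4], 2))  # recovered modes near 1.5 and 6.0
```

Tracking how the relative weight of the fine versus coarse component changes downcore is what makes the partitioned grain-size record usable as a transgression/regression, and hence relative sea-level, proxy.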
Kaffarnik, Magnus F; Ahmadi, Navid; Lock, Johan F; Wuensch, Tilo; Pratschke, Johann; Stockmann, Martin; Malinowski, Maciej
2017-01-01
To investigate the relationship between the degree of liver dysfunction, quantified by maximal liver function capacity (LiMAx test), and endothelin-1, TNF-α and IL-6 in septic surgical patients. 28 septic patients (8 female, 20 male, age range 35-80y) were prospectively investigated on a surgical intensive care unit. Liver function, defined by the LiMAx test, and measurements of plasma levels of endothelin-1, TNF-α and IL-6 were carried out within the first 24 hours after onset of septic symptoms, followed by days 2, 5 and 10. Patients were divided into 2 groups (group A: LiMAx ≥100 μg/kg/h, moderate liver dysfunction; group B: LiMAx <100 μg/kg/h, severe liver dysfunction) for analysis and investigated regarding the correlation between endothelin-1 and the severity of liver failure, quantified by the LiMAx test. Group B showed significantly higher results for endothelin-1 than patients in group A (P = 0.01 on day 5; P = 0.02 on day 10). For TNF-α, group B revealed higher results than group A, with a significant difference on day 10 (P = 0.005). IL-6 showed a non-significant trend to higher results in group B. Spearman's rank correlation coefficient revealed a significant inverse correlation between LiMAx and endothelin-1 (-0.434; P <0.001), TNF-α (-0.515; P <0.001) and IL-6 (-0.590; P <0.001). Sepsis-related hepatic dysfunction is associated with elevated plasma levels of endothelin-1, TNF-α and IL-6. Low LiMAx results combined with increased endothelin-1 and TNF-α, together with the inverse correlation between LiMAx and cytokine values, support a crucial role of endothelin-1 and TNF-α in the development of septic liver failure.
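The statistic reported here, Spearman's rank correlation, reduces to a simple rank transform. A minimal stdlib sketch with invented paired observations (no ties, which real clinical data would require handling):

```python
# Minimal Spearman rank correlation: rho = 1 - 6*sum(d^2) / (n*(n^2-1)),
# valid when there are no tied ranks. Data below are hypothetical.

def ranks(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(x, y):
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical paired observations: liver capacity vs. a cytokine level.
limax = [320, 250, 180, 120, 90, 60]
il6 = [15, 22, 30, 55, 80, 140]
print(spearman(limax, il6))  # -1.0: perfectly inverse ranking
```

A negative rho, as in the study's LiMAx-cytokine correlations, means higher cytokine levels accompany lower liver function capacity.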
Lensing Reconstruction using redshifted 21cm Fluctuations
Zahn, O; Zahn, Oliver; Zaldarriaga, Matias
2005-01-01
We investigate the potential of second generation measurements of redshifted 21 cm radiation from the epoch of reionization (EOR) to reconstruct the matter density fluctuations along the line of sight. To do so we generalize the quadratic methods developed for the Cosmic Microwave Background (CMB) to 21cm fluctuations. The three-dimensional signal can be decomposed into a finite number of line-of-sight Fourier modes that contribute to the lensing reconstruction. In comparison with reconstruction using the CMB, 21cm fluctuations have the disadvantage of relative featurelessness, which can be compensated for by the fact that there are multiple uncorrelated backgrounds. The multiple-redshift information allows one to reconstruct relatively small scales even if one is limited by angular resolution. We estimate that a square kilometer of collecting area is needed with a maximal baseline of 3 km to achieve lensing reconstruction noise levels an order of magnitude below CMB quadratic estimator constraints at $l=1000$, and c...
CUDA based Level Set Method for 3D Reconstruction of Fishes from Large Acoustic Data
Sharma, Ojaswa; Anton, François
2009-01-01
Acoustic images present views of underwater dynamics, even in high depths. With multi-beam echo sounders (SONARs), it is possible to capture series of 2D high resolution acoustic images. 3D reconstruction of the water column and subsequent estimation of fish abundance and fish species...
Application of conifer needles in the reconstruction of Holocene CO2 levels
Kouwenberg, L.L.R.
1973-01-01
To clarify the nature of the link between CO2 and climate on relatively short time-scales, precise, high-resolution reconstructions of the pre-industrial evolution of atmospheric CO2 are required. Adjustment of stomatal frequency to changes in atmospheric CO2 allows plants of many species to retain
Improving the microbial community reconstruction at the genus level by multiple 16S rRNA regions.
Wang, Shengqin; Sun, Beili; Tu, Jing; Lu, Zuhong
2016-06-07
16S rRNA genes have been widely used for phylogenetic reconstruction and the quantification of microbial diversity through the application of next-generation sequencing technology. However, long-read sequencing is still costly, while short-read sequencing carries less information for complex microbial community profiling; therefore, the applications of high-throughput sequencing platforms still remain challenging in microbial community reconstruction analysis. Here, we developed a method to investigate the profile of aligned 16S rRNA gene sequences and to measure the proper region for microbial community reconstruction, as a step in creating a more efficient way to detect microorganisms at the genus level. Finally, we found that each genus has its own preferential genus-specific amplicons for a genus assignment, which are not always located in hypervariable regions (HVRs). It was also noted that the rare genera should contribute less than dominant ones to the common profile of the aligned 16S rRNA sequences and have lower affinity to the common universal primer. Therefore, using multiple 16S rRNA regions rather than one "universal" region can significantly improve the ability of microbial community reconstruction. In addition, we found that a short fragment is suitable for most genus identifications, and the proper conserved regions used for primer design are larger than before.
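The paper's central point, that different genera are best discriminated by different regions of the gene, can be illustrated with a toy region-selection routine. The sequences, genus names, and region coordinates below are invented for illustration only; real 16S analyses work on aligned multi-kilobase genes and abundance-weighted profiles.

```python
# Toy sketch: score each candidate amplicon region by whether its slice of
# the target genus's sequence differs from all other genera, preferring the
# shortest discriminating amplicon. All data are invented.

def best_region(seqs_by_genus, target, regions):
    """Return the (start, end) region whose slice uniquely identifies the
    target genus, or None if no candidate region discriminates it."""
    for start, end in sorted(regions, key=lambda r: r[1] - r[0]):
        target_slice = seqs_by_genus[target][start:end]
        others = [s[start:end] for g, s in seqs_by_genus.items() if g != target]
        if target_slice not in others:
            return (start, end)
    return None

seqs = {
    "GenusA": "ACGTACGTTTGCACGT",
    "GenusB": "ACGTACGATTGCACGT",
    "GenusC": "ACGTACGTTTGCACGA",
}
regions = [(0, 8), (6, 12), (10, 16)]
print(best_region(seqs, "GenusB", regions))  # (6, 12)
print(best_region(seqs, "GenusC", regions))  # (10, 16)
```

Note that the two genera are resolved by different regions, which is the motivation for combining multiple 16S regions rather than relying on one "universal" amplicon.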
K. Stattegger; Tjallingii, R.; Saito, Y; Michelli, M.; Thanh, N.T.; Wetzel, A.
2013-01-01
Beachrocks, beach ridge, washover and backshore deposits along the tectonically stable south-eastern Vietnamese coast document Holocene sea level changes. In combination with data from the final marine flooding phase of the incised Mekong River valley, the sea-level history of South Vietnam could be reconstructed for the last 8000 years. Connecting saltmarsh, mangrove and beachrock deposits the record covers the last phase of deglacial sea-level rise from -5 to +1.4 m between 8.1 to...
Kemp, Andrew C.; Horton, Benjamin P.; Vann, David R.; Engelhart, Simon E.; Grand Pre, Candace A.; Vane, Christopher H.; Nikitina, Daria; Anisfeld, Shimon C.
2012-01-01
We present a quantitative technique to reconstruct sea level from assemblages of salt-marsh foraminifera using partitioning around medoids (PAM) and linear discriminant functions (LDF). The modern distribution of foraminifera was described from 62 surface samples at three salt marshes in southern New Jersey. PAM objectively estimated the number and composition of assemblages present at each site and showed that foraminifera adhered to the concept of elevation-dependent ecological zones, makin...
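The PAM step this abstract describes can be shown in miniature. The greedy swap loop below is a simplified one-dimensional partitioning-around-medoids, not the full algorithm or any library implementation, and the sample elevations are invented:

```python
# Minimal 1-D partitioning around medoids (PAM): pick k samples as medoids,
# then greedily swap a medoid for a non-medoid whenever total distance drops.

def total_cost(points, medoids):
    return sum(min(abs(p - m) for m in medoids) for p in points)

def pam(points, k):
    medoids = list(points[:k])  # naive initialisation
    improved = True
    while improved:
        improved = False
        for i in range(k):
            for p in points:
                if p in medoids:
                    continue
                trial = medoids[:i] + [p] + medoids[i + 1:]
                if total_cost(points, trial) < total_cost(points, medoids):
                    medoids, improved = trial, True
    return sorted(medoids)

# Toy elevations (m) of surface samples with two obvious zones.
elevations = [0.1, 0.15, 0.2, 0.9, 1.0, 1.1]
print(pam(elevations, 2))  # [0.15, 1.0]
```

In the study the objects are multivariate foraminiferal assemblages rather than scalars, so the distance would be an assemblage dissimilarity instead of `abs(p - m)`, but the swap logic is the same.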
Wiedenmann, W; Baines, J T M; Bee, C P; Biglietti, M; Bogaerts, A; Boisvert, V; Bosman, M; Brandt, S; Caron, B; Casado, M P; Cataldi, G; Cavalli, D; Cervetto, M; Comune, G; Corso-Radu, A; Di Mattia, A; Díaz-Gómez, M; Dos Anjos, A; Drohan, J; Ellis, Nick; Elsing, M; Epp, B; Etienne, F; Falciano, S; Farilla, A; George, S; Ghete, V M; González, S; Grothe, M; Kaczmarska, A; Karr, K M; Khomich, A; Konstantinidis, N P; Krasny, W; Li, W; Lowe, A; Luminari, L; Meessen, C; Mello, A G; Merino, G; Morettini, P; Moyse, E; Nairz, A; Negri, A; Nikitin, N V; Nisati, A; Padilla, C; Parodi, F; Pérez-Réale, V; Pinfold, J L; Pinto, P; Polesello, G; Qian, Z; Resconi, S; Rosati, S; Scannicchio, D A; Schiavi, C; Schörner-Sadenius, T; Segura, E; De Seixas, J M; Shears, T G; Sivoklokov, S Yu; Smizanska, M; Soluk, R A; Stanescu, C; Tapprogge, Stefan; Touchard, F; Vercesi, V; Watson, A T; Wengler, T; Werner, P; Wheeler, S; Wickens, F J; Wielers, M; Zobernig, G; NSS-MIC 2003 - IEEE Nuclear Science Symposium and Medical Imaging Conference, Part 1
2004-01-01
The Atlas High Level Trigger's primary function of event selection will be accomplished with a Level-2 trigger farm and an Event Filter farm, both running software components developed in the Atlas offline reconstruction framework. While this approach provides a unified software framework for event selection, it poses strict requirements on offline components critical for the Level-2 trigger. A Level-2 decision in Atlas must typically be accomplished within 10 ms and with multiple event processing in concurrent threads. In order to address these constraints, prototypes have been developed that incorporate elements of the Atlas Data Flow, High Level Trigger, and offline framework software. To realize a homogeneous software environment for offline components in the High Level Trigger, the Level-2 Steering Controller was developed. With electron/gamma- and muon-selection slices it has been shown that the required performance can be reached, if the offline components used are carefully designed and optimized ...
Maximizing and customer loyalty: Are maximizers less loyal?
Linda Lai
2011-06-01
Despite their efforts to choose the best of all available solutions, maximizers seem to be more inclined than satisficers to regret their choices and to experience post-decisional dissonance. Maximizers may therefore be expected to change their decisions more frequently and hence exhibit lower customer loyalty to providers of products and services compared to satisficers. Findings from the study reported here (N = 1978) support this prediction. Maximizers reported significantly higher intentions to switch to another service provider (television provider) than satisficers. Maximizers' intentions to switch appear to be intensified and mediated by higher proneness to regret, increased desire to discuss relevant choices with others, higher levels of perceived knowledge of alternatives, and higher ego involvement in the end product, compared to satisficers. Opportunities for future research are suggested.
Reconstruction of epidemic curves for pandemic influenza A (H1N1) 2009 at city and sub-city levels
Wong Ngai Sze
2010-11-01
To better describe the epidemiology of influenza at the local level, the time course of pandemic influenza A (H1N1) 2009 in the city of Hong Kong was reconstructed from notification data after a decomposition procedure and time series analysis. GIS (geographic information system) methodology was incorporated for assessing spatial variation. Between May and September 2009, a total of 24415 cases were successfully geocoded, out of 25473 (95.8%) reports in the original dataset. The reconstructed epidemic curve was characterized by a small initial peak, a nadir followed by a rapid rise to the ultimate plateau. The full course of the epidemic lasted about 6 months. Despite the small geographic area of only 1000 km², distinctive spatial variation was observed in the configuration of the curves across 6 geographic regions. With the relatively uniform physical and climatic environment within Hong Kong, the temporo-spatial variability of influenza spread could only be explained by the heterogeneous population structure and mobility patterns. Our study illustrated how an epidemic curve could be reconstructed using regularly collected surveillance data, which would be useful in informing intervention at local levels.
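The decomposition step used to recover a smooth epidemic curve from noisy daily notifications can be sketched with a centred moving average, the simplest trend extractor. The daily counts below are invented, and real surveillance analysis would additionally model day-of-week effects:

```python
# Smooth noisy daily notification counts with a centred moving average to
# recover the underlying epidemic curve (toy counts, 7-day window).

def moving_average(counts, window=7):
    half = window // 2
    trend = []
    for i in range(len(counts)):
        lo, hi = max(0, i - half), min(len(counts), i + half + 1)
        trend.append(sum(counts[lo:hi]) / (hi - lo))
    return trend

daily = [2, 5, 3, 9, 40, 38, 45, 52, 48, 20, 12, 8, 5, 3]
smooth = moving_average(daily)
print([round(x, 1) for x in smooth])
```

The smoothed series keeps the overall shape (initial rise, peak, decline) while damping day-to-day reporting noise, which is what makes sub-city curve comparisons meaningful.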
Maximally incompatible quantum observables
Heinosaari, Teiko, E-mail: teiko.heinosaari@utu.fi [Turku Centre for Quantum Physics, Department of Physics and Astronomy, University of Turku, FI-20014 Turku (Finland); Schultz, Jussi, E-mail: jussi.schultz@gmail.com [Dipartimento di Matematica, Politecnico di Milano, Piazza Leonardo da Vinci 32, I-20133 Milano (Italy); Toigo, Alessandro, E-mail: alessandro.toigo@polimi.it [Dipartimento di Matematica, Politecnico di Milano, Piazza Leonardo da Vinci 32, I-20133 Milano (Italy); Istituto Nazionale di Fisica Nucleare, Sezione di Milano, Via Celoria 16, I-20133 Milano (Italy); Ziman, Mario, E-mail: ziman@savba.sk [RCQI, Institute of Physics, Slovak Academy of Sciences, Dúbravská cesta 9, 84511 Bratislava (Slovakia); Faculty of Informatics, Masaryk University, Botanická 68a, 60200 Brno (Czech Republic)
2014-05-01
The existence of maximally incompatible quantum observables in the sense of a minimal joint measurability region is investigated. Employing the universal quantum cloning device it is argued that only infinite dimensional quantum systems can accommodate maximal incompatibility. It is then shown that two of the most common pairs of complementary observables (position and momentum; number and phase) are maximally incompatible.
Performance of Tracking, b-tagging and Jet/MET reconstruction at the CMS High Level Trigger
Tosi, Mia
2015-12-01
The trigger systems of the LHC detectors play a crucial role in determining the physics capabilities of experiments. In 2015, the center-of-mass energy of proton-proton collisions will reach 13 TeV up to an unprecedented luminosity of 1 × 10³⁴ cm⁻²s⁻¹. A reduction of several orders of magnitude of the event rate is needed to reach values compatible with detector readout, offline storage and analysis capabilities. The CMS experiment has been designed with a two-level trigger system: the Level-1 Trigger (L1T), implemented on custom-designed electronics, and the High Level Trigger (HLT), a streamlined version of the offline reconstruction software running on a computer farm. A software trigger system requires a trade-off between the complexity of the algorithms, the sustainable output rate, and the selection efficiency. With the computing power available during the 2012 data taking the maximum reconstruction time at HLT was about 200 ms per event, at the nominal L1T rate of 100 kHz. Tracking algorithms are widely used in the HLT in the object reconstruction through particle-flow techniques as well as in the identification of b-jets and lepton isolation. Reconstructed tracks are also used to distinguish the primary vertex, which identifies the hard interaction process, from the pileup ones. This task is particularly important in the LHC environment given the large number of interactions per bunch crossing: on average 25 in 2012, and expected to be around 40 in Run II with a large contribution from out-of-time particles. In order to cope with tougher conditions the tracking and vertexing techniques used in 2012 have been largely improved in terms of timing and efficiency in order to keep the physics reach at the level of Run I conditions. We will present the performance of these newly developed algorithms, discussing their impact on the b-tagging performance as well as on the jet and missing transverse energy reconstruction.
Sander, Lasse; Hede, Mikkel Ulfeldt; Fruergaard, Mikkel
2016-01-01
Coastal lagoons and beach ridges are genetically independent, though non-continuous, sedimentary archives. We here combine the results from two recently published studies in order to produce an 8000-year-long record of Holocene relative sea-level changes on the island of Samsø, southern Kattegat, Denmark. The reconstruction of the initial mid-Holocene sea-level rise is based on the sedimentary infill from topography-confined coastal lagoons (Sander et al., Boreas, 2015b). Sea-level index points over the mid- to late Holocene period of sea-level stability and fall are retrieved from the internal ... The proximate occurrence of coastal lagoons and beach ridges allows us to produce seamless time series of relative sea-level changes from field sites in SW Scandinavia and in similar coastal environments.
Colette, W. Arden; Almas, Lal K.; Robinson, Clay
2008-01-01
The declining availability of irrigation water from the Ogallala aquifer combined with increasing energy costs makes irrigation strategies much more critical. Maximizing yield reduces profit by between $22 and $158 per acre depending on the combination of corn and natural gas prices.
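The abstract's point, that the yield-maximizing water application is not the profit-maximizing one once pumping costs enter, can be shown with a toy quadratic yield-response model. The response coefficients and prices below are invented and do not reproduce the paper's estimates:

```python
# Toy model: yield-maximizing vs. profit-maximizing irrigation levels.
# Quadratic yield response and all prices are invented for illustration.

def yield_bu(water):  # bushels/acre as a function of applied water (inches)
    return -0.5 * water ** 2 + 20 * water

def profit(water, corn_price=4.0, water_cost=12.0):  # $/acre
    return corn_price * yield_bu(water) - water_cost * water

waters = [w / 10 for w in range(0, 301)]       # 0.0 to 30.0 inches
w_yield = max(waters, key=yield_bu)            # yield-maximizing water use
w_profit = max(waters, key=profit)             # profit-maximizing water use
print(w_yield, w_profit)  # 20.0 17.0
```

Because the marginal bushel near the yield peak is nearly free of value but not free of pumping cost, the profit optimum always sits at or below the yield optimum, which is the paper's core economic argument.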
Yang, Lin; Liang, Changhong [Southern Medical University, Guangzhou (China); Guangdong Academy of Medical Sciences, Dept. of Radiology, Guangdong General Hospital, Guangzhou (China); Zhuang, Jian [Guangdong Academy of Medical Sciences, Dept. of Cardiac Surgery, Guangdong Cardiovascular Inst., Guangdong Provincial Key Lab. of South China Structural Heart Disease, Guangdong General Hospital, Guangzhou (China); Huang, Meiping [Guangdong Academy of Medical Sciences, Dept. of Radiology, Guangdong General Hospital, Guangzhou (China); Guangdong Academy of Medical Sciences, Dept. of Catheterization Lab, Guangdong Cardiovascular Inst., Guangdong Provincial Key Lab. of South China Structural Heart Disease, Guangdong General Hospital, Guangzhou (China); Liu, Hui [Guangdong Academy of Medical Sciences, Dept. of Radiology, Guangdong General Hospital, Guangzhou (China)
2017-01-15
Hybrid iterative reconstruction can reduce image noise and produce better image quality compared with filtered back-projection (FBP), but few reports describe optimization of the iteration level. We optimized the iteration level of iDose4 and evaluated image quality for pediatric cardiac CT angiography. Children (n = 160) with congenital heart disease were enrolled and divided into full-dose (n = 84) and half-dose (n = 76) groups. Four series were reconstructed using FBP, and iDose4 levels 2, 4 and 6; we evaluated subjective quality of the series using a 5-grade scale and compared the series using a Kruskal-Wallis H test. For FBP and iDose4-optimal images, we compared contrast-to-noise ratios (CNR) and size-specific dose estimates (SSDE) using a Student's t-test. We also compared diagnostic accuracy of each group using a Kruskal-Wallis H test. Mean scores for iDose4 level 4 were the best in both dose groups (all P < 0.05). CNR was improved in both groups with iDose4 level 4 as compared with FBP. Mean decrease in SSDE was 53% in the half-dose group. Diagnostic accuracy for the four datasets was in the range 92.6-96.2% (no statistical difference). iDose4 level 4 was optimal for both the full- and half-dose groups. Protocols with iDose4 level 4 allowed 53% reduction in SSDE without significantly affecting image quality and diagnostic accuracy.
Performance of Tracking, b-tagging and Jet/MET reconstruction at the CMS High Level Trigger
Tosi, Mia
2015-01-01
The trigger systems of the LHC detectors play a crucial role in determining the physics capabilities of experiments. In 2015, the center-of-mass energy of proton-proton collisions will reach 13 TeV up to an unprecedented luminosity of 10³⁴ cm⁻²s⁻¹. A reduction of several orders of magnitude of the event rate is needed to reach values compatible with detector readout, offline storage and analysis capabilities. The CMS experiment has been designed with a two-level trigger system: the Level-1 Trigger (L1T), implemented on custom-designed electronics, and the High Level Trigger (HLT), a streamlined version of the offline reconstruction software running on a computer farm. A software trigger system requires a trade-off between the complexity of the algorithms, the sustainable output rate, and the selection efficiency. With the computing power available during the 2012 data taking the maximum reconstruction time at HLT was about 200 ms per event, at the nominal L1T rate of 100 kHz. Tracking algorithms are widely us...
Parker, Andrew M.; Wandi Bruine de Bruin; Baruch Fischhoff
2007-01-01
Our previous research suggests that people reporting a stronger desire to maximize obtain worse life outcomes (Bruine de Bruin et al., 2007). Here, we examine whether this finding may be explained by the decision-making styles of self-reported maximizers. Expanding on Schwartz et al. (2002), we find that self-reported maximizers are more likely to show problematic decision-making styles, as evidenced by self-reports of less behavioral coping, greater dependence on others when making decisions...
Stattegger, Karl; Tjallingii, Rik; Saito, Yoshiki; Michelli, Maximiliano; Trung Thanh, Nguyen; Wetzel, Andreas
2013-11-01
Beachrocks, beach ridge, washover and backshore deposits along the tectonically stable south-eastern Vietnamese coast document Holocene sea level changes. In combination with data from the final marine flooding phase of the incised Mekong River valley, the sea-level history of South Vietnam could be reconstructed for the last 8000 years. Connecting saltmarsh, mangrove and beachrock deposits the record covers the last phase of deglacial sea-level rise from -5 to +1.4 m between 8.1 and 6.4 ka. The rates of sea-level rise decreased sharply after the rapid early Holocene rise and stabilized at a rate of 4.5 mm/year between 8.0 and 6.9 ka. Southeast Vietnam beachrocks reveal that the mid-Holocene sea-level highstand slightly above +1.4 m was reached between 6.7 and 5.0 ka, with a peak value close to +1.5 m around 6.0 ka. This highstand is further limited by a backshore and beachridge deposit that marks the maximum springtide sea-level just below the base of the overlying beach ridge. After 5.0 ka sea level dropped below +1.4 m and fell almost linearly at a rate of 0.24 mm/year until 0.63 ka and +0.2 m as evidenced by the youngest beachrocks.
Ming Yi WANG; Guo ZHAO
2005-01-01
A right R-module E over a ring R is said to be maximally injective in case for any maximal right ideal m of R, every R-homomorphism f : m → E can be extended to an R-homomorphism f' : R → E. In this paper, we first construct an example to show that maximal injectivity is a proper generalization of injectivity. Then we prove that any right R-module over a left perfect ring R is maximally injective if and only if it is injective. We also give a partial affirmative answer to Faith's conjecture by further investigating the property of maximally injective rings. Finally, we get an approximation to Faith's conjecture, which asserts that every injective right R-module over any left perfect right self-injective ring R is the injective hull of a projective submodule.
Andrew M. Parker
2007-12-01
Our previous research suggests that people reporting a stronger desire to maximize obtain worse life outcomes (Bruine de Bruin et al., 2007). Here, we examine whether this finding may be explained by the decision-making styles of self-reported maximizers. Expanding on Schwartz et al. (2002), we find that self-reported maximizers are more likely to show problematic decision-making styles, as evidenced by self-reports of less behavioral coping, greater dependence on others when making decisions, more avoidance of decision making, and greater tendency to experience regret. Contrary to predictions, self-reported maximizers were more likely to report spontaneous decision making. However, the relationship between self-reported maximizing and worse life outcomes is largely unaffected by controls for measures of other decision-making styles, decision-making competence, and demographic variables.
Brüstle, Thomas; Pérotin, Matthieu
2012-01-01
Maximal green sequences are particular sequences of quiver mutations which were introduced by Keller in the context of quantum dilogarithm identities and independently by Cecotti-Cordova-Vafa in the context of supersymmetric gauge theory. Our aim is to initiate a systematic study of these sequences from a combinatorial point of view. Interpreting maximal green sequences as paths in various natural posets arising in representation theory, we prove the finiteness of the number of maximal green sequences for cluster finite quivers, affine quivers and acyclic quivers with at most three vertices. We also give results concerning the possible numbers and lengths of these maximal green sequences. Finally we describe an algorithm for computing maximal green sequences for arbitrary valued quivers which we used to obtain numerous explicit examples that we present.
Ernst van der Maaten
In this study, we explore the potential to reconstruct lake-level (and groundwater) fluctuations from tree-ring chronologies of black alder (Alnus glutinosa L.) for three study lakes in the Mecklenburg Lake District, northeastern Germany. As gauging records for lakes in this region are generally short, long-term reconstructions of lake-level fluctuations could provide valuable information on past hydrological conditions, which, in turn, are useful to assess dynamics of climate and landscape evolution. We selected black alder as our study species as alder typically thrives as riparian vegetation along lakeshores. For the study lakes, we tested whether a regional signal in lake-level fluctuations and in the growth of alder exists that could be used for long-term regional hydrological reconstructions, but found that local (i.e., site-specific) signals in lake level and tree-ring chronologies prevailed. Hence, we built lake/groundwater-level reconstruction models for the three study lakes individually. Two sets of models were considered based on (1) local tree-ring series of black alder, and (2) site-specific Standardized Precipitation Evapotranspiration Indices (SPEI). Although the SPEI-based models performed statistically well, we critically reflect on the reliability of these reconstructions, as SPEI cannot account for human influence. Tree-ring based reconstruction models, on the other hand, performed poorly. Combined, our results suggest that, for our study area, long-term regional reconstructions of lake-level fluctuations that consider both recent and ancient (e.g., archaeological) wood of black alder seem extremely challenging, if not impossible.
Hong, Sun Suk; Lee, Jong-Woong; Seo, Jeong Beom; Jung, Jae-Eun; Choi, Jiwon; Kweon, Dae Cheol
2013-12-01
The purpose of this research is to determine the adaptive statistical iterative reconstruction (ASIR) level that enables optimal image quality and dose reduction in the chest computed tomography (CT) protocol with ASIR. A chest phantom with 0-50 % ASIR levels was scanned, and then the noise power spectrum (NPS), signal and noise, and the degree of distortion via peak signal-to-noise ratio (PSNR) and root-mean-square error (RMSE) were measured. In addition, the objectivity of the experiment was measured using the American College of Radiology (ACR) phantom. Moreover, on a qualitative basis, five lesions' resolution, latitude and distortion degree of the chest phantom were evaluated and their statistics compiled. The NPS value decreased as the frequency increased. The lowest noise and deviation were at the 20 % ASIR level, mean 126.15 ± 22.21. As a result of the degree of distortion, signal-to-noise ratio and PSNR at the 20 % ASIR level were at the highest values, 31.0 and 41.52, while maximum absolute error and RMSE showed the lowest deviation values, 11.2 and 16. In the ACR phantom study, all ASIR levels were within the acceptable allowance of guidelines. The 20 % ASIR level performed best in the qualitative evaluation at five lesions of the chest phantom, with resolution score 4.3, latitude 3.47 and degree of distortion 4.25. The 20 % ASIR level proved best in all experiments: noise, distortion evaluation using ImageJ, and the qualitative evaluation of five lesions of a chest phantom. Therefore, optimal images as well as a reduced radiation dose would be achieved when a 20 % ASIR level is applied in thoracic CT.
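The two distortion metrics used above, RMSE and PSNR, are simple to compute. A minimal sketch on invented 8-bit pixel values (real evaluations operate on full CT slices against a reference reconstruction):

```python
# Toy computation of RMSE and PSNR between a reference image and a
# reconstruction, flattened to 1-D pixel lists (invented 8-bit values).
import math

def rmse(ref, img):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(ref, img)) / len(ref))

def psnr(ref, img, peak=255.0):
    e = rmse(ref, img)
    return float("inf") if e == 0 else 20 * math.log10(peak / e)

reference = [100, 120, 130, 90, 110]
reconstructed = [102, 118, 133, 88, 111]
print(round(rmse(reference, reconstructed), 3), round(psnr(reference, reconstructed), 2))
```

Lower RMSE and higher PSNR both indicate a reconstruction closer to the reference, which is why the study reports the 20 % ASIR level's high PSNR and low RMSE together.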
Arctic Sea Level Change over the altimetry era and reconstructed over the last 60 years
Andersen, Ole Baltazar; Svendsen, Peter Limkilde; Nielsen, Allan Aasbjerg
The Arctic Ocean poses severe limitations on the use of altimetry and tide gauge data for sea level studies and prediction due to the presence of seasonal or permanent sea ice. In order to overcome this issue we reprocessed all altimetry data with editing tailored to Arctic conditions, hereby...
Rudiger Bubner
1998-12-01
Even though the maxims' theory is not at the center of Kant's ethics, it is the unavoidable basis of the categorical imperative's formulation. Kant leans on the transmitted representations of modern moral theory. During the last decades, the notion of maxims has deserved more attention, due to the philosophy of language's debates on rules, and due to action theory's interest in this notion. I hereby briefly expound my views in these discussions.
3D palaeogeographic reconstructions of the Phanerozoic versus sea-level and Sr-ratio variations
Christian Vérard; Cyril Hochard; Peter O. Baumgartner; Gérard M. Stamplfi
2015-01-01
A full global geodynamical model over 600 million years (Ma) has been developed at the University of Lausanne during the past 20 years. We show herein how the 2D maps were converted into 3D (i.e., full hypsometry and bathymetry), using a heuristic-based approach. Although the synthetic topography may be viewed as relatively crude, it has the advantage of being applicable anywhere on the globe and at any geological time. The model allows estimating the sea-level changes throughout the Phanerozoic, with the possibility, for the first time, of flooding continental areas accordingly. One of the most striking results is the good correlation with “measured” sea-level changes, implying that long-term variations are predominantly tectonically-driven. Volumes of mountain relief are also estimated through time and compared with the strontium isotopic ratio (Sr-ratio), commonly thought to reflect mountain belt erosion. The tectonic impact upon the general Sr-ratio trend is shown herein for the first time, although such influence had long been inferred.
FengNing Li
2015-01-01
To compare the clinical efficacy and radiological outcome of treating 4-level cervical spondylotic myelopathy (CSM) with either anterior cervical discectomy and fusion (ACDF) or “skip” corpectomy and fusion (SCF), 48 patients with 4-level CSM who had undergone ACDF or SCF at our hospital were analyzed retrospectively between January 2008 and June 2011. Twenty-seven patients received ACDF (Group A) and 21 patients received SCF (Group B). Japanese Orthopaedic Association (JOA) score, Neck Disability Index (NDI) score, and Cobb's angles of the fused segments and C2-7 segments were compared in the two groups. The minimum patient follow-up was 2 years. No significant differences between the groups were found in demographic and baseline disease characteristics, duration of surgery, or follow-up time. Our study demonstrates that there was no significant difference in the clinical efficacy of ACDF and SCF, but ACDF involves less intraoperative blood loss, better cervical spine alignment, and fewer postoperative complications than SCF.
Gori, Valentina
2014-01-01
The trigger systems of the LHC detectors play a crucial role in determining the physics capabilities of the experiments. A reduction of several orders of magnitude of the event rate is needed to reach values compatible with the detector readout, offline storage and analysis capabilities. The CMS experiment has been designed with a two-level trigger system: the Level 1 (L1) Trigger, implemented on custom-designed electronics, and the High Level Trigger (HLT), a streamlined version of the CMS reconstruction and analysis software running on a computer farm. The software-based HLT requires a trade-off between the complexity of the algorithms, the sustainable output rate, and the selection efficiency. This is going to be even more challenging during Run II, with a higher centre-of-mass energy, a higher instantaneous luminosity and pileup, and the impact of out-of-time pileup due to the 25 ns bunch spacing. The online algorithms need to be optimised for such complex environment in order to keep the output rate under...
Gori, Valentina
2014-01-01
The trigger systems of the LHC detectors play a crucial role in determining the physics capabilities of the experiments. A reduction of several orders of magnitude of the event rate is needed to reach values compatible with the detector readout, offline storage and analysis capabilities. The CMS experiment has been designed with a two-level trigger system, the Level 1 (L1) Trigger, implemented on custom-designed electronics, and the High Level Trigger (HLT), a streamlined version of the CMS reconstruction and analysis software running on a computer farm. The software-based HLT requires a trade-off between the complexity of the algorithms, the sustainable output rate, and the selection efficiency. This is going to be even more challenging during Run II, with a higher centre-of-mass energy, a higher instantaneous luminosity and pileup, and the impact of out-of-time pileup due to the 25 ns bunch spacing. The online algorithms need to be optimised for such a complex environment in order to keep the output rate un...
Hill, Malcolm S.; Hill, April L.; Lopez, Jose; Peterson, Kevin J.; Pomponi, Shirley; Diaz, Maria C.; Thacker, Robert W.; Adamska, Maja; Boury-Esnault, Nicole; Cárdenas, Paco; Chaves-Fonnegra, Andia; Danka, Elizabeth; De Laine, Bre-Onna; Formica, Dawn; Hajdu, Eduardo; Lobo-Hajdu, Gisele; Klontz, Sarah; Morrow, Christine C.; Patel, Jignasa; Picton, Bernard; Pisani, Davide; Pohlmann, Deborah; Redmond, Niamh E.; Reed, John; Richey, Stacy; Riesgo, Ana; Rubin, Ewelina; Russell, Zach; Rützler, Klaus; Sperling, Erik A.; di Stefano, Michael; Tarver, James E.; Collins, Allen G.
2013-01-01
Background Demosponges are challenging for phylogenetic systematics because of their plastic and relatively simple morphologies and many deep divergences between major clades. To improve understanding of the phylogenetic relationships within Demospongiae, we sequenced and analyzed seven nuclear housekeeping genes involved in a variety of cellular functions from a diverse group of sponges. Methodology/Principal Findings We generated data from each of the four sponge classes (i.e., Calcarea, Demospongiae, Hexactinellida, and Homoscleromorpha), but focused on family-level relationships within demosponges. With data for 21 newly sampled families, our Maximum Likelihood and Bayesian-based approaches recovered previously phylogenetically defined taxa: Keratosa(p), Myxospongiae(p), Spongillida(p), Haploscleromorpha(p) (the marine haplosclerids) and Democlavia(p). We found conflicting results concerning the relationships of Keratosa(p) and Myxospongiae(p) to the remaining demosponges, but our results strongly supported a clade of Haploscleromorpha(p)+Spongillida(p)+Democlavia(p). In contrast to hypotheses based on mitochondrial genome and ribosomal data, nuclear housekeeping gene data suggested that freshwater sponges (Spongillida(p)) are sister to Haploscleromorpha(p) rather than part of Democlavia(p). Within Keratosa(p), we found equivocal results as to the monophyly of Dictyoceratida. Within Myxospongiae(p), Chondrosida and Verongida were monophyletic. A well-supported clade within Democlavia(p), Tetractinellida(p), composed of all sampled members of Astrophorina and Spirophorina (including the only lithistid in our analysis), was consistently revealed as the sister group to all other members of Democlavia(p). Within Tetractinellida(p), we did not recover monophyletic Astrophorina or Spirophorina. Our results also reaffirmed the monophyly of order Poecilosclerida (excluding Desmacellidae and Raspailiidae), and polyphyly of Hadromerida and Halichondrida. Conclusions/Significance These results, using an
Hill, Malcolm S; Hill, April L; Lopez, Jose; Peterson, Kevin J; Pomponi, Shirley; Diaz, Maria C; Thacker, Robert W; Adamska, Maja; Boury-Esnault, Nicole; Cárdenas, Paco; Chaves-Fonnegra, Andia; Danka, Elizabeth; De Laine, Bre-Onna; Formica, Dawn; Hajdu, Eduardo; Lobo-Hajdu, Gisele; Klontz, Sarah; Morrow, Christine C; Patel, Jignasa; Picton, Bernard; Pisani, Davide; Pohlmann, Deborah; Redmond, Niamh E; Reed, John; Richey, Stacy; Riesgo, Ana; Rubin, Ewelina; Russell, Zach; Rützler, Klaus; Sperling, Erik A; di Stefano, Michael; Tarver, James E; Collins, Allen G
2013-01-01
Demosponges are challenging for phylogenetic systematics because of their plastic and relatively simple morphologies and many deep divergences between major clades. To improve understanding of the phylogenetic relationships within Demospongiae, we sequenced and analyzed seven nuclear housekeeping genes involved in a variety of cellular functions from a diverse group of sponges. We generated data from each of the four sponge classes (i.e., Calcarea, Demospongiae, Hexactinellida, and Homoscleromorpha), but focused on family-level relationships within demosponges. With data for 21 newly sampled families, our Maximum Likelihood and Bayesian-based approaches recovered previously phylogenetically defined taxa: Keratosa(p), Myxospongiae(p), Spongillida(p), Haploscleromorpha(p) (the marine haplosclerids) and Democlavia(p). We found conflicting results concerning the relationships of Keratosa(p) and Myxospongiae(p) to the remaining demosponges, but our results strongly supported a clade of Haploscleromorpha(p)+Spongillida(p)+Democlavia(p). In contrast to hypotheses based on mitochondrial genome and ribosomal data, nuclear housekeeping gene data suggested that freshwater sponges (Spongillida(p)) are sister to Haploscleromorpha(p) rather than part of Democlavia(p). Within Keratosa(p), we found equivocal results as to the monophyly of Dictyoceratida. Within Myxospongiae(p), Chondrosida and Verongida were monophyletic. A well-supported clade within Democlavia(p), Tetractinellida(p), composed of all sampled members of Astrophorina and Spirophorina (including the only lithistid in our analysis), was consistently revealed as the sister group to all other members of Democlavia(p). Within Tetractinellida(p), we did not recover monophyletic Astrophorina or Spirophorina. Our results also reaffirmed the monophyly of order Poecilosclerida (excluding Desmacellidae and Raspailiidae), and polyphyly of Hadromerida and Halichondrida. These results, using an independent nuclear gene set
Malcolm S Hill
Full Text Available BACKGROUND: Demosponges are challenging for phylogenetic systematics because of their plastic and relatively simple morphologies and many deep divergences between major clades. To improve understanding of the phylogenetic relationships within Demospongiae, we sequenced and analyzed seven nuclear housekeeping genes involved in a variety of cellular functions from a diverse group of sponges. METHODOLOGY/PRINCIPAL FINDINGS: We generated data from each of the four sponge classes (i.e., Calcarea, Demospongiae, Hexactinellida, and Homoscleromorpha), but focused on family-level relationships within demosponges. With data for 21 newly sampled families, our Maximum Likelihood and Bayesian-based approaches recovered previously phylogenetically defined taxa: Keratosa(p), Myxospongiae(p), Spongillida(p), Haploscleromorpha(p) (the marine haplosclerids) and Democlavia(p). We found conflicting results concerning the relationships of Keratosa(p) and Myxospongiae(p) to the remaining demosponges, but our results strongly supported a clade of Haploscleromorpha(p)+Spongillida(p)+Democlavia(p). In contrast to hypotheses based on mitochondrial genome and ribosomal data, nuclear housekeeping gene data suggested that freshwater sponges (Spongillida(p)) are sister to Haploscleromorpha(p) rather than part of Democlavia(p). Within Keratosa(p), we found equivocal results as to the monophyly of Dictyoceratida. Within Myxospongiae(p), Chondrosida and Verongida were monophyletic. A well-supported clade within Democlavia(p), Tetractinellida(p), composed of all sampled members of Astrophorina and Spirophorina (including the only lithistid in our analysis), was consistently revealed as the sister group to all other members of Democlavia(p). Within Tetractinellida(p), we did not recover monophyletic Astrophorina or Spirophorina. Our results also reaffirmed the monophyly of order Poecilosclerida (excluding Desmacellidae and Raspailiidae), and polyphyly of Hadromerida and Halichondrida. CONCLUSIONS
Germino, Mary; Gallezot, Jean-Dominque; Yan, Jianhua; Carson, Richard E
2017-07-07
Parametric images for dynamic positron emission tomography (PET) are typically generated by an indirect method, i.e. reconstructing a time series of emission images, then fitting a kinetic model to each voxel time activity curve. Alternatively, 'direct reconstruction', incorporates the kinetic model into the reconstruction algorithm itself, directly producing parametric images from projection data. Direct reconstruction has been shown to achieve parametric images with lower standard error than the indirect method. Here, we present direct reconstruction for brain PET using event-by-event motion correction of list-mode data, applied to two tracers. Event-by-event motion correction was implemented for direct reconstruction in the Parametric Motion-compensation OSEM List-mode Algorithm for Resolution-recovery reconstruction. The direct implementation was tested on simulated and human datasets with tracers [(11)C]AFM (serotonin transporter) and [(11)C]UCB-J (synaptic density), which follow the 1-tissue compartment model. Rigid head motion was tracked with the Vicra system. Parametric images of K 1 and distribution volume (V T = K 1/k 2) were compared to those generated by the indirect method by regional coefficient of variation (CoV). Performance across count levels was assessed using sub-sampled datasets. For simulated and real datasets at high counts, the two methods estimated K 1 and V T with comparable accuracy. At lower count levels, the direct method was substantially more robust to outliers than the indirect method. Compared to the indirect method, direct reconstruction reduced regional K 1 CoV by 35-48% (simulated dataset), 39-43% ([(11)C]AFM dataset) and 30-36% ([(11)C]UCB-J dataset) across count levels (averaged over regions at matched iteration); V T CoV was reduced by 51-58%, 54-60% and 30-46%, respectively. Motion correction played an important role in the dataset with larger motion: correction increased regional V T by 51% on average in the [(11
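The 1-tissue compartment model named in the abstract can be illustrated with a toy version of the indirect method; the plasma input curve, parameter values and grids below are invented for illustration and are not taken from the paper:

```python
import numpy as np

t = np.linspace(0.0, 60.0, 121)          # minutes
dt = t[1] - t[0]
Cp = t * np.exp(-t / 4.0)                # hypothetical plasma input function

def one_tissue(K1, k2):
    """1-tissue compartment TAC: C_T = (K1 * exp(-k2 t)) convolved with Cp."""
    irf = K1 * np.exp(-k2 * t)
    return np.convolve(Cp, irf)[: len(t)] * dt

true_K1, true_k2 = 0.3, 0.1
tac = one_tissue(true_K1, true_k2)       # noiseless "measured" curve

# Indirect-method fit, sketched as a brute-force least-squares grid search.
K1s = np.linspace(0.1, 0.5, 81)
k2s = np.linspace(0.05, 0.2, 76)
sse = [((one_tissue(K1, k2) - tac) ** 2).sum() for K1 in K1s for k2 in k2s]
i = int(np.argmin(sse))
K1_hat, k2_hat = K1s[i // len(k2s)], k2s[i % len(k2s)]
print(K1_hat, K1_hat / k2_hat)           # recovers K1 and V_T = K1 / k2
```

Direct reconstruction, by contrast, would fold this kinetic model into the tomographic reconstruction itself rather than fitting it to already-reconstructed voxel curves.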
Savitha, D; Sejil, T V; Rao, Shwetha; Roshan, C J; Roshan, C J
2013-01-01
The purpose of the study was to investigate the effect of vocal and instrumental music on various physiological parameters during submaximal exercise. Each subject underwent three sessions of exercise protocol without music, with vocal music, and instrumental versions of same piece of music. The protocol consisted of 10 min treadmill exercise at 70% HR(max) and 20 min of recovery. Minute to minute heart rate and breath by breath recording of respiratory parameters, rate of energy expenditure and perceived exertion levels were measured. Music, irrespective of the presence or absence of lyrics, enabled the subjects to exercise at a significantly lower heart rate and oxygen consumption, reduced the metabolic cost and perceived exertion levels of exercise (P Music having a relaxant effect could have probably increased the parasympathetic activation leading to these effects.
Balsalobre-Fernández, Carlos; Tejero-González, Carlos M; Del Campo-Vecino, Juan; Alonso-Curiel, Dionisio
2013-03-01
The aim of this study was to determine the effects of a power training cycle on maximum strength, maximum power, vertical jump height and acceleration in seven high-level 400-meter hurdlers subjected to a specific training program twice a week for 10 weeks. Each training session consisted of five sets of eight jump-squats with the load at which each athlete produced his maximum power. The repetition maximum in the half squat position (RM), maximum power in the jump-squat (W), a squat jump (SJ), countermovement jump (CSJ), and a 30-meter sprint from a standing position were measured before and after the training program using an accelerometer, an infra-red platform and photo-cells. The results indicated the following statistically significant improvements: a 7.9% increase in RM (Z=-2.03, p=0.021, δc=0.39), a 2.3% improvement in SJ (Z=-1.69, p=0.045, δc=0.29), a 1.43% decrease in the 30-meter sprint (Z=-1.70, p=0.044, δc=0.12), and, where maximum power was produced, a change in the RM percentage from 56 to 62% (Z=-1.75, p=0.039, δc=0.54). As such, it can be concluded that strength training with a maximum power load is an effective means of increasing strength and acceleration in high-level hurdlers.
Janusz Brzozowski
2014-05-01
Full Text Available The atoms of a regular language are non-empty intersections of complemented and uncomplemented quotients of the language. Tight upper bounds on the number of atoms of a language and on the quotient complexities of atoms are known. We introduce a new class of regular languages, called the maximally atomic languages, consisting of all languages meeting these bounds. We prove the following result: If L is a regular language of quotient complexity n and G is the subgroup of permutations in the transition semigroup T of the minimal DFA of L, then L is maximally atomic if and only if G is transitive on k-subsets of 1,...,n for 0 <= k <= n and T contains a transformation of rank n-1.
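The group-theoretic condition in the theorem (G transitive on k-subsets of {1, ..., n} for 0 <= k <= n) can be checked by brute force for tiny n; the sketch below, with function names of our own choosing, closes a generating set of permutations into a group and tests each orbit:

```python
from itertools import combinations

def generate_group(n, gens):
    """Close a set of permutations (tuples mapping i -> p[i]) under composition."""
    identity = tuple(range(n))
    group, frontier = {identity}, [identity]
    while frontier:
        g = frontier.pop()
        for h in gens:
            comp = tuple(h[g[i]] for i in range(n))
            if comp not in group:
                group.add(comp)
                frontier.append(comp)
    return group

def transitive_on_k_subsets(n, group, k):
    """True if the group has a single orbit on the k-element subsets."""
    subsets = [frozenset(c) for c in combinations(range(n), k)]
    orbit, frontier = {subsets[0]}, [subsets[0]]
    while frontier:
        s = frontier.pop()
        for g in group:
            image = frozenset(g[i] for i in s)
            if image not in orbit:
                orbit.add(image)
                frontier.append(image)
    return len(orbit) == len(subsets)

def set_transitive(n, gens):
    group = generate_group(n, gens)
    return all(transitive_on_k_subsets(n, group, k) for k in range(n + 1))

print(set_transitive(3, [(1, 2, 0)]))     # cyclic group on 3 points: True
print(set_transitive(4, [(1, 2, 3, 0)]))  # cyclic group on 4 points: False
```

The cyclic group on three points is transitive on every subset size, while on four points it already fails at k = 2 (the orbit of {0, 2} contains only two of the six 2-subsets).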
Andersen, Klaus Ejner
1985-01-01
Guinea pig maximization tests (GPMT) with chlorocresol were performed to ascertain whether the sensitization rate was affected by minor changes in the Freund's complete adjuvant (FCA) emulsion used. Three types of emulsion were evaluated: the oil phase was mixed with propylene glycol, saline with...... to the saline/oil emulsion. Placing of the challenge patches affected the response, as simultaneous chlorocresol challenge on the flank located 2 cm closer to the abdomen than the usual challenge site gave decreased reactions....
Zak, Michail
2008-01-01
A report discusses an algorithm for a new kind of dynamics based on a quantum-classical hybrid, the quantum-inspired maximizer. The model is represented by a modified Madelung equation in which the quantum potential is replaced by a different, specially chosen 'computational' potential. As a result, the dynamics attains both quantum and classical properties: it preserves superposition and entanglement of random solutions, while allowing one to measure its state variables using classical methods. Such an optimal combination of characteristics is a perfect match for quantum-inspired computing. As an application, an algorithm for the global maximum of an arbitrary integrable function is proposed. The idea of the proposed algorithm is very simple: based upon the Quantum-inspired Maximizer (QIM), introduce the positive function to be maximized as the probability density to which the solution is attracted. Then larger values of this function will have a higher probability to appear. Special attention is paid to simulation of integer programming and NP-complete problems. It is demonstrated that the global maximum of an integrable function can be found in polynomial time by using the proposed quantum-classical hybrid. The result is extended to a constrained maximum with applications to integer programming and TSP (Traveling Salesman Problem).
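The core idea, attracting solutions to a density proportional to the function being maximized, has a simple classical analogue. The sketch below is plain rejection sampling, not the Madelung-equation dynamics of the report; it only illustrates why larger function values appear with higher probability, and all names and the example function are ours:

```python
import random

def sample_proportional(f, lo, hi, fmax, n_samples=10000, seed=0):
    """Rejection-sample x in [lo, hi] with density proportional to f(x).

    fmax must upper-bound f on [lo, hi]; f must be positive there."""
    rng = random.Random(seed)
    samples = []
    while len(samples) < n_samples:
        x = rng.uniform(lo, hi)
        if rng.uniform(0, fmax) <= f(x):
            samples.append(x)
    return samples

def approx_global_max(f, lo, hi, fmax):
    # Samples concentrate where f is large, so the best sample
    # approximates the global maximizer.
    return max(sample_proportional(f, lo, hi, fmax), key=f)

f = lambda x: 1.0 + 0.5 * x * (2.0 - x)   # positive on [0, 2], peak at x = 1
print(approx_global_max(f, 0.0, 2.0, 1.5))
```

With enough samples the printed value lands close to the true maximizer x = 1, mirroring the QIM intuition that the maximum "attracts" the solution.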
Jiang, Xu; Deng, Yong; Luo, Zhaoyang; Luo, Qingming
2015-10-05
The excessive time required by fluorescence diffuse optical tomography (fDOT) image reconstruction based on path-history fluorescence Monte Carlo model is its primary limiting factor. Herein, we present a method that accelerates fDOT image reconstruction. We employ three-level parallel architecture including multiple nodes in cluster, multiple cores in central processing unit (CPU), and multiple streaming multiprocessors in graphics processing unit (GPU). Different GPU memories are selectively used, the data-writing time is effectively eliminated, and the data transport per iteration is minimized. Simulation experiments demonstrated that this method can utilize general-purpose computing platforms to efficiently implement and accelerate fDOT image reconstruction, thus providing a practical means of using path-history-based fluorescence Monte Carlo model for fDOT imaging.
Germino, Mary; Gallezot, Jean-Dominque; Yan, Jianhua; Carson, Richard E.
2017-07-01
Parametric images for dynamic positron emission tomography (PET) are typically generated by an indirect method, i.e. reconstructing a time series of emission images, then fitting a kinetic model to each voxel time activity curve. Alternatively, ‘direct reconstruction’, incorporates the kinetic model into the reconstruction algorithm itself, directly producing parametric images from projection data. Direct reconstruction has been shown to achieve parametric images with lower standard error than the indirect method. Here, we present direct reconstruction for brain PET using event-by-event motion correction of list-mode data, applied to two tracers. Event-by-event motion correction was implemented for direct reconstruction in the Parametric Motion-compensation OSEM List-mode Algorithm for Resolution-recovery reconstruction. The direct implementation was tested on simulated and human datasets with tracers [11C]AFM (serotonin transporter) and [11C]UCB-J (synaptic density), which follow the 1-tissue compartment model. Rigid head motion was tracked with the Vicra system. Parametric images of K 1 and distribution volume (V T = K 1/k 2) were compared to those generated by the indirect method by regional coefficient of variation (CoV). Performance across count levels was assessed using sub-sampled datasets. For simulated and real datasets at high counts, the two methods estimated K 1 and V T with comparable accuracy. At lower count levels, the direct method was substantially more robust to outliers than the indirect method. Compared to the indirect method, direct reconstruction reduced regional K 1 CoV by 35-48% (simulated dataset), 39-43% ([11C]AFM dataset) and 30-36% ([11C]UCB-J dataset) across count levels (averaged over regions at matched iteration); V T CoV was reduced by 51-58%, 54-60% and 30-46%, respectively. Motion correction played an important role in the dataset with larger motion: correction increased regional V T by 51% on average in the [11C
Papaceit, Montserrat; Orengo, Dorcas; Juan, Elvira
2004-01-01
The evolution of cis-regulatory elements is of particular interest for our understanding of the evolution of gene regulation. The Adh gene of Drosophilidae shows interspecific differences in tissue-specific expression and transcript levels during development. In Scaptodrosophila lebanonensis adults, the level of distal transcripts is maximal between the fourth and eighth day after eclosion and is around five times higher than that in D. melanogaster Adh(S). To examine whether these quantitative differences are regulated by sequences lying upstream of the distal promoter, we performed in vitro deletion mutagenesis of the Adh gene of S. lebanonensis, followed by P-element-mediated germ-line transformation. All constructs included, as a cotransgene, a modified Adh gene of D. melanogaster (dAdh) in a fixed position and orientation that acted as a chromosomal position control. Using this approach, we have identified a fragment of 1.5 kb in the 5' region, 830 bp upstream of the distal start site, which is required to achieve maximal levels of distal transcript in S. lebanonensis. The presence of this fragment produces a 3.5-fold higher level of distal mRNA (as determined by real time quantitative PCR) compared with the D. melanogaster dAdh cotransgene. This region contains the degenerated end of a minisatellite sequence expanding farther upstream and does not correspond to the Adh adult enhancer (AAE) of D. melanogaster. Indeed, the cis-regulatory elements of the AAE have been identified by phylogenetic footprinting within the region 830 bp upstream of the distal start site of S. lebanonensis. Furthermore, the deletions Delta-830 and Delta-2358 yield the same pattern of tissue-specific expression, indicating that all tissue-specific elements are contained within the region 830 bp upstream of the distal start site. PMID:15166155
Multivariate residues and maximal unitarity
Søgaard, Mads; Zhang, Yang
2013-12-01
We extend the maximal unitarity method to amplitude contributions whose cuts define multidimensional algebraic varieties. The technique is valid to all orders and is explicitly demonstrated at three loops in gauge theories with any number of fermions and scalars in the adjoint representation. Deca-cuts realized by replacement of real slice integration contours by higher-dimensional tori encircling the global poles are used to factorize the planar triple box onto a product of trees. We apply computational algebraic geometry and multivariate complex analysis to derive unique projectors for all master integral coefficients and obtain compact analytic formulae in terms of tree-level data.
Social group utility maximization
Gong, Xiaowen; Yang, Lei; Zhang, Junshan
2014-01-01
This SpringerBrief explains how to leverage mobile users' social relationships to improve the interactions of mobile devices in mobile networks. It develops a social group utility maximization (SGUM) framework that captures diverse social ties of mobile users and diverse physical coupling of mobile devices. Key topics include random access control, power control, spectrum access, and location privacy. This brief also investigates the SGUM-based power control game and random access control game, for which it establishes the socially-aware Nash equilibrium (SNE). It then examines the critical SGUM-b
Brandes, U; Gaertler, M; Goerke, R; Hoefer, M; Nikoloski, Z; Wagner, D
2006-01-01
Several algorithms have been proposed to compute partitions of networks into communities that score high on a graph clustering index called modularity. While publications on these algorithms typically contain experimental evaluations to emphasize the plausibility of results, none of these algorithms has been shown to actually compute optimal partitions. We here settle the unknown complexity status of modularity maximization by showing that the corresponding decision version is NP-complete in the strong sense. As a consequence, any efficient, i.e. polynomial-time, algorithm is only heuristic and yields suboptimal partitions on many instances.
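Modularity itself is straightforward to compute from its definition, even though maximizing it is NP-hard; the minimal implementation below (our own, for an undirected simple graph given as an edge list) shows why a partition that keeps dense groups together scores higher than lumping everything into one community:

```python
def modularity(edges, communities):
    """Newman's modularity: Q = sum over communities of
    (intra-community edges / m) - (community degree / 2m)^2."""
    m = len(edges)
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    q = 0.0
    for comm in communities:
        intra = sum(1 for u, v in edges if u in comm and v in comm)
        total_degree = sum(degree[n] for n in comm)
        q += intra / m - (total_degree / (2 * m)) ** 2
    return q

# Two triangles joined by a single bridge edge.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
print(round(modularity(edges, [{0, 1, 2}, {3, 4, 5}]), 3))  # -> 0.357
print(modularity(edges, [{0, 1, 2, 3, 4, 5}]))              # -> 0.0
```

The NP-completeness result above applies to finding the partition that maximizes this score, which is why practical community-detection algorithms are heuristics.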
Sander, Lasse; Raniolo, Luís Ariel; Alberdi, Ernesto; Pejrup, Morten
2014-05-01
Beach ridge plains are a common feature of prograding coastlines and they have in the past been widely used as geomorphological archives for the reconstruction of past coastal dynamics, event chronologies or late quaternary sea-level change. The most critical parameters for sea-level related research are the consistent definition and confidence of information on surface elevation of the beach ridge deposits. In most parts of the world, the availability of high-resolution geodata is very limited. The measurement of e.g. high-precision GPS (Global Positioning System) data is costly, time-consuming and essentially of limited spatial coverage. The SRTM (Shuttle Radar Topography Mission) dataset is a freely-available digital surface model covering landmasses between approximately 60° N and 56° S at a 90 m (3 arc seconds) resolution. The model elevations are indicated without decimals (integer) and are projected for the WGS84 ellipsoid. On a beach ridge plain at Caleta de los Loros, Río Negro, Argentina, we observed a good correlation of GPS-RTK (GPS-Real Time Kinematic) measurements (estimated vertical accuracy: migration during the approx. 13 years between the date of SRTM data acquisition and our GPS measurement. This interpretation is supported by a multi-decadal sequence of Landsat false-color composites. Vegetation cover and rounding errors are further possible factors in explaining vertical deviation. The consistency of data quality was confirmed by a comparison study using a LiDAR (Light detection and ranging)-based digital elevation model (vertical accuracy: data in near-coastal environments is probably owed to the correction of the original dataset for a fixed value of 0 m along the coastlines of the world (SRTM Water Body Data). Our findings indicate that, at certain scales, a spatial integration of linear GPS data can be attempted using the SRTM dataset. However, the process must be aided by adequate surface information (e.g. Landsat images from close to
Maximizing without difficulty: A modified maximizing scale and its correlates
Linda Lai
2010-01-01
This article presents several studies that replicate and extend previous research on maximizing. A modified scale for measuring individual maximizing tendency is introduced. The scale has adequate psychometric properties and reflects maximizers' aspirations for high standards and their preference for extensive alternative search, but not the decision difficulty aspect included in several previous studies. Based on this scale, maximizing is positively correlated with optimism, need for cogniti...
Post-mastectomy reconstruction: a risk-stratified comparative analysis of outcomes.
Saha, Dujata; Davila, Armando A; Ver Halen, Jon P; Jain, Umang K; Hansen, Nora; Bethke, Kevin; Khan, Seema A; Jeruss, Jacqueline; Fine, Neil; Kim, John Y S
2013-12-01
Although breast reconstruction following mastectomy plays a role in the psychological impact of breast cancer, only one in three women undergo reconstruction. Few multi-institutional studies have compared the complication profiles of reconstructive patients with those of non-reconstructive patients. Using the National Surgical Quality Improvement database, all patients undergoing mastectomy from 2006 to 2010, with or without reconstruction, were identified and risk-stratified using propensity-scored quintiles. The incidence of complications and comorbidities was compared. Of 37,723 mastectomies identified, 30% received immediate breast reconstruction. After quintile matching for comorbidities, complication rates between reconstructive and non-reconstructive patients were similar. This trend was echoed across all quintiles, except in the sub-group with the highest comorbidities. Here, the reconstructive patients had significantly more complications than the non-reconstructive patients (22.8% versus 7.0%, p < 0.001). Immediate breast reconstruction is a well-tolerated surgical procedure. However, in patients with high comorbidities, surgeons must carefully counterbalance surgical risks with psychosocial benefits to maximize patient outcomes. Level 3.
Boogaard, van den M.; Kuhry, B.
1979-01-01
Extensive frequency data are used for a reconstruction of Devonian conodont apparatuses. Correspondence analysis and a related clustering method are selected as statistical tools, and are used as informal methods for testing a priori hypotheses rather than as search mechanisms. In our view, the Palm
Wilson, Graham P.; Lamb, Angela L.; Leng, Melanie J.; Gonzalez, Silvia; Huddart, David
2005-09-01
Microfossil analysis (e.g. diatoms, foraminifera and pollen) represents the cornerstone of Holocene relative sea-level (RSL) reconstruction because their distribution in the contemporary inter-tidal zone is principally controlled by ground elevation within the tidal frame. A combination of poor microfossil preservation and a limited range in the sediment record may severely restrict the accuracy of resulting RSL reconstructions. Organic δ13C and C/N analysis of inter-tidal sediments have shown some potential as coastal palaeoenvironmental proxies. Here we assess their viability for reconstructing RSL change by examining patterns of organic δ13C and C/N values in a modern estuarine environment. δ13C and C/N analysis of bulk organic inter-tidal sediments and vegetation, as well as suspended and bedload organic sediments of the Mersey Estuary, U.K., demonstrate that the two main sources of organic carbon to surface saltmarsh sediments (terrestrial vegetation and tidal-derived particulate organic matter) have distinctive δ13C and C/N signatures. The resulting relationship between ground elevation within the tidal frame and surface sediment δ13C and C/N is unaffected by decompositional changes. The potential of this technique for RSL reconstruction is demonstrated by the analysis of part of an early Holocene sediment core from the Mersey Estuary. Organic δ13C and C/N analysis is less time consuming than microfossil analysis and is likely to provide continuous records of RSL change.
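The observation that terrestrial vegetation and tidal-derived particulate organic matter carry distinct δ13C signatures is commonly exploited through a two-end-member linear mixing model. The sketch below illustrates that calculation only; the end-member values are generic illustrative numbers, not those measured for the Mersey Estuary:

```python
def terrestrial_fraction(d13c_sample, d13c_terrestrial=-28.0, d13c_marine=-21.0):
    """Two-end-member mixing: fraction of sediment organic carbon derived
    from terrestrial vegetation (end-member values are hypothetical)."""
    return (d13c_sample - d13c_marine) / (d13c_terrestrial - d13c_marine)

# A sample midway between the two end-members is a 50:50 mixture.
print(terrestrial_fraction(-24.5))  # -> 0.5
```

A higher terrestrial fraction in a dated core horizon would then point to a higher position in the tidal frame, which is the link to relative sea level.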
HEMI: Hyperedge Majority Influence Maximization
Gangal, Varun; Narayanam, Ramasuri
2016-01-01
In this work, we consider the problem of influence maximization on a hypergraph. We first extend the Independent Cascade (IC) model to hypergraphs, and prove that the traditional influence maximization problem remains submodular. We then present a variant of the influence maximization problem (HEMI) where one seeks to maximize the number of hyperedges, a majority of whose nodes are influenced. We prove that HEMI is non-submodular under the diffusion model proposed.
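The Independent Cascade model and the standard greedy heuristic for influence maximization can be sketched in a few lines; the graph, activation probability and run counts below are toy choices of ours. Note that the greedy (1 - 1/e) guarantee rests on the submodularity that HEMI is shown to lack:

```python
import random

def independent_cascade(graph, seeds, p, rng):
    """One IC simulation: each newly activated node gets a single chance
    to activate each inactive neighbour with probability p."""
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph.get(u, []):
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return active

def expected_spread(graph, seeds, p, runs=500, seed=1):
    """Monte Carlo estimate of the expected number of activated nodes."""
    rng = random.Random(seed)
    return sum(len(independent_cascade(graph, seeds, p, rng))
               for _ in range(runs)) / runs

def greedy_seed_set(graph, k, p):
    # Standard greedy: repeatedly add the node with the best marginal gain.
    seeds = set()
    for _ in range(k):
        best = max((n for n in graph if n not in seeds),
                   key=lambda n: expected_spread(graph, seeds | {n}, p))
        seeds.add(best)
    return seeds

# Star graph: node 0 is connected to every leaf.
graph = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
print(greedy_seed_set(graph, 1, p=0.5))  # the hub maximizes expected spread
```

For HEMI one would instead score a seed set by the number of hyperedges whose nodes are majority-activated, a quantity the paper proves is not submodular under its diffusion model.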
Andersen, Klaus Ejner
1985-01-01
Guinea pig maximization tests (GPMT) with chlorocresol were performed to ascertain whether the sensitization rate was affected by minor changes in the Freund's complete adjuvant (FCA) emulsion used. Three types of emulsion were evaluated: the oil phase was mixed with propylene glycol, saline...... with 30% (v/v) ethanol or saline, respectively. Relative viscosity was used as one measure of physical properties of the emulsion. Higher degrees of sensitization (but not rates) were obtained at the 48 h challenge reading with the oil/propylene glycol and oil/saline + ethanol emulsions compared...... to the saline/oil emulsion. Placing of the challenge patches affected the response, as simultaneous chlorocresol challenge on the flank located 2 cm closer to the abdomen than the usual challenge site gave decreased reactions....
Russo James K
2011-12-01
Full Text Available Abstract Background To define the dosimetric coverage of level I/II axillary volumes and the lung volume irradiated in postmastectomy radiotherapy (PMRT) following tissue expander placement. Methods and Materials Twenty-three patients were identified who had undergone postmastectomy radiotherapy with tangent only fields. All patients had pre-radiation tissue expander placement and expansion. Thirteen patients had bilateral expander reconstruction. The level I/II axillary volumes were contoured using the RTOG contouring atlas. The patient-specific variables of expander volume, superior-to-inferior location of expander, distance between expanders, expander angle and axillary volume were analyzed to determine their relationship to the axillary volume and lung volume dose. Results The mean coverage of the level I/II axillary volume by the 95% isodose line (VD95%) was 23.9% (range 0.3 - 65.4%). The mean Ipsilateral Lung VD50% was 8.8% (2.2-20.9). Ipsilateral and contralateral expander volume correlated to Axillary VD95% in patients with bilateral reconstruction (p = 0.01 and 0.006, respectively) but not in those with ipsilateral only reconstruction (p = 0.60). Ipsilateral Lung VD50% correlated with angle of the expander from midline (p = 0.05). Conclusions In patients undergoing PMRT with tissue expanders, incidental doses delivered by tangents to the axilla, as defined by the RTOG contouring atlas, do not provide adequate coverage. The posterior-superior region of level I and II is the region most commonly underdosed. Axillary volume coverage increased with increasing expander volumes in patients with bilateral reconstruction. Lung dose increased with increasing expander angle from midline. This information should be considered both when placing expanders and when designing PMRT tangent only treatment plans by contouring and targeting the axilla volume when axillary treatment is indicated.
2011-01-01
Background To define the dosimetric coverage of level I/II axillary volumes and the lung volume irradiated in postmastectomy radiotherapy (PMRT) following tissue expander placement. Methods and Materials Twenty-three patients were identified who had undergone postmastectomy radiotherapy with tangent only fields. All patients had pre-radiation tissue expander placement and expansion. Thirteen patients had bilateral expander reconstruction. The level I/II axillary volumes were contoured using the RTOG contouring atlas. The patient-specific variables of expander volume, superior-to-inferior location of expander, distance between expanders, expander angle and axillary volume were analyzed to determine their relationship to the axillary volume and lung volume dose. Results The mean coverage of the level I/II axillary volume by the 95% isodose line (VD95%) was 23.9% (range 0.3 - 65.4%). The mean Ipsilateral Lung VD50% was 8.8% (2.2-20.9). Ipsilateral and contralateral expander volume correlated to Axillary VD95% in patients with bilateral reconstruction (p = 0.01 and 0.006, respectively) but not those with ipsilateral only reconstruction (p = 0.60). Ipsilateral Lung VD50% correlated with angle of the expander from midline (p = 0.05). Conclusions In patients undergoing PMRT with tissue expanders, incidental doses delivered by tangents to the axilla, as defined by the RTOG contouring atlas, do not provide adequate coverage. The posterior-superior region of level I and II is the region most commonly underdosed. Axillary volume coverage increased with increasing expander volumes in patients with bilateral reconstruction. Lung dose increased with increasing expander angle from midline. This information should be considered both when placing expanders and when designing PMRT tangent only treatment plans by contouring and targeting the axilla volume when axillary treatment is indicated. PMID:22204504
ACL reconstruction
ACL reconstruction is surgery to reconstruct the anterior cruciate ligament (ACL) in the knee.
Bendiksen, Mads; Ahler, Thomas; Clausen, Helle
2013-01-01
We evaluated a sub-maximal and a maximal version of the Yo-Yo IR1 children's test (YYIR1C) and the Andersen test for fitness and maximal HR assessments of children aged 6-10. Two repetitions of the YYIR1C and Andersen tests were carried out within one week by 6-7 and 8-9 year olds (grade 0...
MAXIMS VIOLATIONS IN LITERARY WORK
Widya Hanum Sari Pertiwi
2015-12-01
This qualitative study focuses on the flouting of Gricean maxims, and the functions of that flouting, in the tales included in the collection of children's literature entitled My Giant Treasury of Stories and Rhymes. The objective of the study is to identify violations of the maxims of quantity, quality, relevance, and manner in the data sources and to analyze the use of the flouting in the tales included in the book. A qualitative design using categorizing strategies, specifically a coding strategy, was applied. The researcher, as the instrument of this investigation, selected the tales, read them, and gathered every item reflecting a violation of the Gricean maxims under the conditions for flouting maxims. On the basis of the data analysis, it was found that some utterances in the tales, both narration and conversation, flout the four maxims of conversation, namely the maxims of quality, quantity, relevance, and manner. The researcher also found that the flouting of maxims has one basic function, namely to encourage the readers' imagination toward the tales. This basic function is developed by six further functions: (1) generating specific situations, (2) developing the plot, (3) enlivening the characters' utterances, (4) implicating messages, (5) indirectly characterizing characters, and (6) creating ambiguous settings. Keywords: children literature, tales, flouting maxims
Swanepoel, Konrad J
2011-01-01
A subset of a normed space X is called equilateral if the distance between any two points is the same. Let m(X) be the smallest possible size of an equilateral subset of X maximal with respect to inclusion. We first observe that Petty's construction of an X of any finite dimension d >= 4 with m(X)=4 can be generalised to show that m(X\oplus_1\R)=4 for any X of dimension at least 2 which has a smooth point on its unit sphere. By a construction involving Hadamard matrices we then show that both m(\ell_p) and m(\ell_p^d) are finite and bounded above by a function of p, for all 1 <= p < \infty. Moreover, there exists c > 1 such that m(X) <= d+1 for all d-dimensional X with Banach-Mazur distance less than c from \ell_p^d. Using Brouwer's fixed-point theorem we show that m(X) <= d+1 for all d-dimensional X with Banach-Mazur distance less than 3/2 from \ell_\infty^d. A graph-theoretical argument furthermore shows that m(\ell_\infty^d)=d+1. The above results lead us to conjecture that m(X) <= 1+\dim X.
Unified Maximally Natural Supersymmetry
Huang, Junwu
2016-01-01
Maximally Natural Supersymmetry, an unusual weak-scale supersymmetric extension of the Standard Model based upon the inherently higher-dimensional mechanism of Scherk-Schwarz supersymmetry breaking (SSSB), possesses remarkably good fine tuning given present LHC limits. Here we construct a version with precision $SU(2)_{\\rm L} \\times U(1)_{\\rm Y} $ unification: $\\sin^2 \\theta_W(M_Z) \\simeq 0.231$ is predicted to $\\pm 2\\%$ by unifying $SU(2)_{\\rm L} \\times U(1)_{\\rm Y} $ into a 5D $SU(3)_{\\rm EW}$ theory at a Kaluza-Klein scale of $1/R_5 \\sim 4.4\\,{\\rm TeV}$, where SSSB is simultaneously realised. Full unification with $SU(3)_{\\rm C}$ is accommodated by extending the 5D theory to a $N=4$ supersymmetric $SU(6)$ gauge theory on a 6D rectangular orbifold at $1/R_6 \\sim 40 \\,{\\rm TeV}$. TeV-scale states beyond the SM include exotic charged fermions implied by $SU(3)_{\\rm EW}$ with masses lighter than $\\sim 1.2\\,{\\rm TeV}$, and squarks in the mass range $1.4\\,{\\rm TeV} - 2.3\\,{\\rm TeV}$, providing distinct signature...
4D image reconstruction for emission tomography
Reader, Andrew J.; Verhaeghe, Jeroen
2014-11-01
An overview of the theory of 4D image reconstruction for emission tomography is given along with a review of the current state of the art, covering both positron emission tomography and single photon emission computed tomography (SPECT). By viewing 4D image reconstruction as a matter of either linear or non-linear parameter estimation for a set of spatiotemporal functions chosen to approximately represent the radiotracer distribution, the areas of so-called ‘fully 4D’ image reconstruction and ‘direct kinetic parameter estimation’ are unified within a common framework. Many choices of linear and non-linear parameterization of these functions are considered (including the important case where the parameters have direct biological meaning), along with a review of the algorithms which are able to estimate these often non-linear parameters from emission tomography data. The other crucial components to image reconstruction (the objective function, the system model and the raw data format) are also covered, but in less detail due to the relatively straightforward extension from their corresponding components in conventional 3D image reconstruction. The key unifying concept is that maximum likelihood or maximum a posteriori (MAP) estimation of either linear or non-linear model parameters can be achieved in image space after carrying out a conventional expectation maximization (EM) update of the dynamic image series, using a Kullback-Leibler distance metric (comparing the modeled image values with the EM image values), to optimize the desired parameters. For MAP, an image-space penalty for regularization purposes is required. The benefits of 4D and direct reconstruction reported in the literature are reviewed, and furthermore demonstrated with simple simulation examples. It is clear that the future of reconstructing dynamic or functional emission tomography images, which often exhibit high levels of spatially correlated noise, should ideally exploit these 4D
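The "conventional expectation maximization (EM) update" that the framework above builds on is the standard ML-EM iteration. A minimal sketch on a toy system (the 2-voxel, 3-bin system matrix and noiseless data here are illustrative assumptions, not the paper's setup):

```python
# ML-EM for emission tomography: multiplicative update
# x <- x / s * A^T (y / Ax), with sensitivity s_j = sum_i A_ij.

def mlem(A, y, n_iter=200):
    """A: system matrix (list of rows), y: measured counts; returns activity x."""
    n_bins, n_vox = len(A), len(A[0])
    s = [sum(A[i][j] for i in range(n_bins)) for j in range(n_vox)]  # sensitivity
    x = [1.0] * n_vox  # uniform initial image
    for _ in range(n_iter):
        # forward projection (Ax)_i
        proj = [sum(A[i][j] * x[j] for j in range(n_vox)) for i in range(n_bins)]
        # back-project the measured/estimated ratio, apply multiplicative update
        x = [x[j] * sum(A[i][j] * y[i] / proj[i] for i in range(n_bins)) / s[j]
             for j in range(n_vox)]
    return x

# toy 2-voxel, 3-bin system with noiseless, consistent data
A = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
x_true = [2.0, 4.0]
y = [sum(A[i][j] * x_true[j] for j in range(2)) for i in range(3)]
x = mlem(A, y)
```

With consistent noiseless data the iterates converge to the true activity; 4D and direct methods then re-fit parametric temporal models to such EM images in image space.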
Cross-Order Integral Relations from Maximal Cuts
Johansson, Henrik; Larsen, Kasper J.; Søgaard, Mads
2015-01-01
We study the ABDK relation using maximal cuts of one- and two-loop integrals with up to five external legs. We show how to find a special combination of integrals that allows the relation to exist, and how to reconstruct the terms with one-loop integrals squared. The reconstruction relies on the observation that integrals across different loop orders can have support on the same generalized unitarity cuts and can share global poles. We discuss the appearance of nonhomologous integration contours in multivariate residues. Their origin can be understood in simple terms, and their existence enables us to distinguish contributions from different integrals. Our analysis suggests that maximal and near-maximal cuts can be used to infer the existence of integral identities more generally.
Project Scheduling to Maximize Positive Impacts of Reconstruction Operations
2009-03-01
invaluable assistance in translating the problem formulation to LINGO. And I cannot give enough thanks to my wife, my family, my friends, and my Savior...stance necessitates the use of a matrix generator. LINGO was used to generate and solve this instance; however, there are other equally capable IP...with 960 MB of RAM using LINGO 11.0.0.20. The formulation includes 1203 variables and 197 constraints. While the model is large even for this small
Maximal subgroups of finite groups
S. Srinivasan
1990-01-01
In finite groups, maximal subgroups play a very important role. Results in the literature show that if a maximal subgroup has very small index in the whole group, then it influences the structure of the group itself. In this paper we study the case in which the indices of the maximal subgroups of a group have a special type of relation with the Fitting subgroup of the group.
Finding Maximal Quasiperiodicities in Strings
Brodal, Gerth Stølting; Pedersen, Christian N. S.
2000-01-01
Apostolico and Ehrenfeucht defined the notion of a maximal quasiperiodic substring and gave an algorithm that finds all maximal quasiperiodic substrings in a string of length n in time O(n log^2 n). In this paper we give an algorithm that finds all maximal quasiperiodic substrings in a string of length n in time O(n log n) and space O(n). Our algorithm uses the suffix tree as the fundamental data structure combined with efficient methods for merging and performing multiple searches in search trees. Besides finding all maximal quasiperiodic substrings, our algorithm also marks the nodes...
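For intuition about the underlying notion (not the O(n log n) suffix-tree algorithm itself): a string is quasiperiodic when some proper substring, its cover or quasiperiod, occurs so that every position lies inside an occurrence. A deliberately naive check, assuming the standard fact that any cover must be both a prefix and a suffix:

```python
def occurrences(w, s):
    """Start positions of all (possibly overlapping) occurrences of w in s."""
    return [i for i in range(len(s) - len(w) + 1) if s[i:i + len(w)] == w]

def is_cover(w, s):
    """True if the occurrences of w jointly cover every position of s."""
    occ = occurrences(w, s)
    if not occ or occ[0] != 0 or occ[-1] != len(s) - len(w):
        return False  # a cover must align with both ends of s
    # consecutive occurrences may not leave a gap wider than len(w)
    return all(b - a <= len(w) for a, b in zip(occ, occ[1:]))

def quasiperiods(s):
    """All proper prefixes of s that cover s (naive, roughly cubic time)."""
    return [s[:k] for k in range(1, len(s)) if is_cover(s[:k], s)]
```

For example, "abaabaaba" has the quasiperiods "aba" and "abaaba"; the cited algorithms achieve the same enumeration in near-linear time.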
Maximizing Entropy over Markov Processes
Biondi, Fabrizio; Legay, Axel; Nielsen, Bo Friis
2013-01-01
computation reduces to finding a model of a specification with highest entropy. Entropy maximization for probabilistic process specifications has not been studied before, even though it is well known in Bayesian inference for discrete distributions. We give a characterization of global entropy of a process...... as a reward function, a polynomial algorithm to verify the existence of a system maximizing entropy among those respecting a specification, a procedure for the maximization of reward functions over Interval Markov Chains and its application to synthesize an implementation maximizing entropy. We show how...
Maximizing entropy over Markov processes
Biondi, Fabrizio; Legay, Axel; Nielsen, Bo Friis
2014-01-01
computation reduces to finding a model of a specification with highest entropy. Entropy maximization for probabilistic process specifications has not been studied before, even though it is well known in Bayesian inference for discrete distributions. We give a characterization of global entropy of a process...... as a reward function, a polynomial algorithm to verify the existence of a system maximizing entropy among those respecting a specification, a procedure for the maximization of reward functions over Interval Markov Chains and its application to synthesize an implementation maximizing entropy. We show how...
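For context, the quantity being maximized builds on the classical entropy rate of a fixed Markov chain; the synthesis over Interval Markov Chains in the papers above is considerably more involved. A minimal sketch of the standard formula, with an assumed ergodic toy chain:

```python
from math import log

def stationary(P, iters=1000):
    """Stationary distribution by power iteration (assumes an ergodic chain)."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

def entropy_rate(P):
    """H = -sum_i pi_i sum_j P_ij log2 P_ij, in bits per step."""
    pi = stationary(P)
    return -sum(pi[i] * sum(p * log(p, 2) for p in P[i] if p > 0)
                for i in range(len(P)))

P = [[0.5, 0.5], [0.5, 0.5]]  # fair-coin chain: one bit of entropy per step
```

Maximizing this reward over all chains consistent with an interval specification is what the cited algorithm automates.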
H. E. Rieder
2008-01-01
The aim of this study is the reconstruction of past UV-radiation doses for two stations in Austria, Hoher Sonnblick and Vienna, using a physical radiation transfer model. The method uses modelled UV-radiation under clear-sky conditions, cloud modification factors and a correction factor as input variables. To identify the influence of the temporal resolution of input data and modification factors, an ensemble of four different modelling approaches was calculated, each with hourly or daily resolution. This is especially important because we found no other study describing the influence of the temporal resolution of input data on model performance. Following the results of the statistical analysis of the evaluation period, the model with the highest temporal resolution was chosen for the reconstruction of the UV-radiation doses. This model (HMC) uses modelled UV-radiation under clear-sky conditions, a cloud modification factor, both with hourly resolution, and a monthly correction factor. Good agreement between modelled and measured values of erythemally effective irradiance was found at both stations. Relative to the reference period 1976–1985, erythemal UV-irradiance in Vienna increased by 11 percent in the period 1986–1995 and by 17 percent in the period 1996–2005. At Hoher Sonnblick, yearly averages of erythemal UV increased by 2 percent in the period 1986–1995 and by 9 percent in the period 1996–2005 in comparison to the reference period. Across the seasons, the strongest increase in erythemal UV radiation was found in winter and spring at both stations.
Gonzalez-Sanchez, Jon
2010-01-01
Let $w = w(x_1,..., x_n)$ be a word, i.e. an element of the free group $F$ on $n$ generators $x_1,..., x_n$. The verbal subgroup $w(G)$ of a group $G$ is the subgroup generated by the set $\{w (g_1,...,g_n)^{\pm 1} | g_i \in G, 1\leq i\leq n \}$ of all $w$-values in $G$. We say that a (finite) group $G$ is $w$-maximal if $|G:w(G)|> |H:w(H)|$ for all proper subgroups $H$ of $G$ and that $G$ is hereditarily $w$-maximal if every subgroup of $G$ is $w$-maximal. In this text we study $w$-maximal and hereditarily $w$-maximal (finite) groups.
Anatomically-aided PET reconstruction using the kernel method
Hutchcroft, Will; Wang, Guobao; Chen, Kevin T.; Catana, Ciprian; Qi, Jinyi
2016-09-01
This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization algorithm.
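The kernelized EM update described above represents the image as x = K·α and applies the ML-EM iteration to the kernel coefficients α. A toy sketch of that structure; the anatomical feature values, Gaussian kernel width, and identity "detector" below are illustrative assumptions, not the paper's data:

```python
from math import exp

def kernel_em(A, K, y, n_iter=100):
    """Kernelized ML-EM: image x = K @ alpha, EM update applied to alpha."""
    n_bins, n_vox = len(A), len(A[0])
    # combined matrix AK = A @ K, so the update is plain ML-EM on alpha
    AK = [[sum(A[i][v] * K[v][j] for v in range(n_vox)) for j in range(n_vox)]
          for i in range(n_bins)]
    s = [sum(AK[i][j] for i in range(n_bins)) for j in range(n_vox)]
    a = [1.0] * n_vox
    for _ in range(n_iter):
        proj = [sum(AK[i][j] * a[j] for j in range(n_vox)) for i in range(n_bins)]
        a = [a[j] * sum(AK[i][j] * y[i] / proj[i] for i in range(n_bins)) / s[j]
             for j in range(n_vox)]
    return [sum(K[v][j] * a[j] for j in range(n_vox)) for v in range(n_vox)]

# hypothetical anatomical features: voxels 0 and 1 look alike, voxel 2 differs
f = [0.0, 0.1, 5.0]
K = [[exp(-(a - b) ** 2 / 2.0) for b in f] for a in f]
K = [[k / sum(row) for k in row] for row in K]  # row-normalised Gaussian kernel
A = [[1.0 if i == j else 0.0 for j in range(3)] for i in range(3)]  # toy detector
y = [4.0, 4.0, 1.0]
x = kernel_em(A, K, y)
```

Because the anatomical kernel couples similar voxels, smoothing is built into the forward model rather than added as a penalty, which is why the method stays within the plain ML formulation.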
... rebuild the shape of the breast. Instead of breast reconstruction, you could choose to wear a breast form ... one woman may not be right for another. Breast reconstruction may be done at the same time as ...
van Soelen, E. E.; Lammertsma, E. I.; Cremer, H.; Donders, T. H.; Sangiorgi, F.; Brooks, G. R.; Larson, R. A.; Sinninghe Damsté, J. S.; Wagner-Cremer, F.; Reichart, G. J.
2010-01-01
A suite of organic geochemical, micropaleontological and palynological proxies was applied to sediments from Southwest Florida, to study the Holocene environmental changes associated with sea-level rise. Sediments were recovered from Hillsborough Bay, part of Tampa Bay, and studied using biomarkers, pollen, organic-walled dinoflagellate cysts and diatoms. Analyses show that the site flooded around 7.5 ka as a consequence of Holocene transgression, progressively turning a fresh/brackish marl-marsh into a shallow, restricted marine environment. Immediately after the marine transgression started, limited water circulation and high amounts of runoff caused stratification of the water column. A shift in dinocysts and diatom assemblages to more marine species, increasing concentrations of marine biomarkers and a shift in the Diol Index indicate increasing salinity between 7.5 ka and the present, which is likely a consequence of progressing sea-level rise. Reconstructed sea surface temperatures for the past 4 kyr are between 25 and 26 °C, and indicate stable temperatures during the Late Holocene. A sharp increase in sedimentation rate in the top ~50 cm of the core is attributed to human impact. The results are in agreement with parallel studies from the area, but this study further refines the environmental reconstructions, having the advantage of simultaneously investigating changes in the terrestrial and marine environment.
Maximizing without difficulty: A modified maximizing scale and its correlates
Lai, Linda
2010-01-01
... included in several previous studies. Based on this scale, maximizing is positively correlated with optimism, need for cognition, desire for consistency, risk aversion, intrinsic motivation, self-efficacy and perceived workload, whereas...
Are maximizers really unhappy? The measurement of maximizing tendency,
Dalia L. Diab
2008-06-01
Recent research suggesting that people who maximize are less happy than those who satisfice has received considerable fanfare. The current study investigates whether this conclusion reflects the construct itself or rather how it is measured. We developed an alternative measure of maximizing tendency that is theory-based, has good psychometric properties, and predicts behavioral outcomes. In contrast to the existing maximization measure, our new measure did not correlate with life (dis)satisfaction, nor with most maladaptive personality and decision-making traits. We conclude that the interpretation of maximizers as unhappy may be due to poor measurement of the construct. We present a more reliable and valid measure for future researchers to use.
Principles of maximally classical and maximally realistic quantum mechanics
S M Roy
2002-08-01
Recently Auberson, Mahoux, Roy and Singh have proved a long standing conjecture of Roy and Singh: in $2N$-dimensional phase space, a maximally realistic quantum mechanics can have quantum probabilities of no more than $N+1$ complete commuting sets (CCS) of observables coexisting as marginals of one positive phase space density. Here I formulate a stationary principle which gives a nonperturbative definition of a maximally classical as well as maximally realistic phase space density. I show that the maximally classical trajectories are in fact exactly classical in the simple examples of coherent states and bound states of an oscillator and Gaussian free particle states. In contrast, it is known that the de Broglie–Bohm realistic theory gives highly nonclassical trajectories.
Characterizing maximally singular phase-space distributions
Sperling, J.
2016-07-01
Phase-space distributions are widely applied in quantum optics to access the nonclassical features of radiation fields. In particular, the inability to interpret the Glauber-Sudarshan distribution in terms of a classical probability density is the fundamental benchmark for quantum light. However, this phase-space distribution cannot be directly reconstructed for arbitrary states, because of its singular behavior. In this work, we perform a characterization of the Glauber-Sudarshan representation in terms of distribution theory. We address important features of such distributions: (i) the maximal degree of their singularities is studied, (ii) the ambiguity of representation is shown, and (iii) their dual space for nonclassicality tests is specified. In this view, we reconsider the methods for regularizing the Glauber-Sudarshan distribution for verifying its nonclassicality. This treatment is supported with comprehensive examples and counterexamples.
A task-based comparison of two reconstruction algorithms for digital breast tomosynthesis
Mahadevan, Ravi; Ikejimba, Lynda C.; Lin, Yuan; Samei, Ehsan; Lo, Joseph Y.
2014-03-01
Digital breast tomosynthesis (DBT) generates 3-D reconstructions of the breast by taking X-ray projections at various angles around the breast. DBT improves cancer detection as it minimizes the tissue overlap that is present in traditional 2-D mammography. In this work, two methods of reconstruction, filtered backprojection (FBP) and Newton-Raphson iterative reconstruction, were used to create 3-D reconstructions from phantom images acquired on a breast tomosynthesis system. A task-based image analysis method was used to compare the performance of each reconstruction technique. The task simulated a 10 mm lesion within the breast containing iodine concentrations between 0.0 mg/ml and 8.6 mg/ml. The task transfer function (TTF) was calculated using the reconstruction of an edge phantom, and the noise power spectrum (NPS) was measured with a structured breast phantom (CIRS 020) over different exposure levels. The detectability index d' was calculated to assess the image quality of the reconstructed phantom images. Image quality was assessed for both conventional single-energy and dual-energy subtracted reconstructions. Dose allocation between the high- and low-energy scans was also examined. Over the full range of dose allocations, the iterative reconstruction yielded a higher detectability index than FBP for single-energy reconstructions. For dual-energy subtraction, the detectability index was maximized when most of the dose was allocated to the high-energy image. With that dose allocation, the performance trend for the reconstruction algorithms reversed: FBP performed better than the corresponding iterative reconstruction. However, FBP performance varied erratically with changing dose allocation. Therefore, iterative reconstruction is preferred for both imaging modalities despite underperforming dual-energy FBP, as it provides stable results.
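A detectability index of this kind combines a task function with the TTF and NPS. One common sampled form is d'^2 = sum_f (W(f)·TTF(f))^2 / NPS(f) · Δf; this particular discretization is an assumption here, since the paper may use a 2-D frequency integral with an observer eye filter:

```python
from math import sqrt

def detectability(task, ttf, nps, df):
    """d' from sampled task function W(f), TTF(f) and NPS(f):
    d'^2 = sum_f (W(f) * TTF(f))**2 / NPS(f) * df  (ideal linear-observer form)."""
    return sqrt(sum((w * t) ** 2 / n * df for w, t, n in zip(task, ttf, nps)))

# toy samples at two frequencies: flat task, TTF rolling off, flat NPS
d_prime = detectability([1.0, 1.0], [1.0, 0.5], [0.1, 0.1], 0.5)
```

Degrading the TTF or inflating the NPS at any frequency lowers d', which is how the comparison between FBP and the iterative reconstruction is scored.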
Maximizing petrochemicals from refineries
Glover, B.; Foley, T.; Frey, S. [UOP, Des Plaines, IL (United States)
2007-07-01
New fuel quality requirements and high growth rates for petrochemicals are providing both challenges and opportunities for refineries. A key challenge in refineries today is to improve the value of the products from the FCC unit. In particular, light FCC naphtha and LCO are prime candidates for improved utilization. Processing options have been developed focusing on new opportunities for these traditional fuel components. The Total Petrochemicals/UOP Olefin Cracking Process cracks C4-C8 olefins to produce propylene and ethylene. This process can be integrated into FCC units running at all severity levels to produce valuable light olefins while reducing the olefin content of the light FCC naphtha. Integration of the Olefin Cracking Process with an FCC unit can be accomplished to allow a range of operating modes which can accommodate changing demand for propylene, cracked naphtha and alkylate. Other processes developed by UOP allow for upgrading LCO into a range of products including petrochemical grade xylenes, benzene, high cetane diesel and low sulfur high octane gasoline. Various processing options are available which allow the products from LCO conversion to be adjusted based on the needs and opportunities of an individual refinery, as well as the external petrochemical demand cycles. This presentation will examine recent refining and petrochemical trends and highlight new process technologies that can be used to generate additional revenue from petrochemical production while addressing evolving clean fuel demands. (orig.)
Maximizing ROI with yield management
Neil Snyder
2001-01-01
.... the technology is based on the concept of yield management, which aims to sell the right product to the right customer at the right price and the right time therefore maximizing revenue, or yield...
Stattegger, K.; Tjallingii, R.; Saito, Y.; Michelli, M.; Thanh, N.T.; Wetzel, A.
2013-01-01
Beachrocks, beach ridge, washover and backshore deposits along the tectonically stable south-eastern Vietnamese coast document Holocene sea level changes. In combination with data from the final marine flooding phase of the incised Mekong River valley, the sea-level history of South Vietnam
Are CEOs Expected Utility Maximizers?
John List; Charles Mason
2009-01-01
Are individuals expected utility maximizers? This question represents much more than academic curiosity. In a normative sense, at stake are the fundamental underpinnings of the bulk of the last half-century's models of choice under uncertainty. From a positive perspective, the ubiquitous use of benefit-cost analysis across government agencies renders the expected utility maximization paradigm literally the only game in town. In this study, we advance the literature by exploring CEOs' preferen...
Gaussian maximally multipartite entangled states
Facchi, Paolo; Lupo, Cosmo; Mancini, Stefano; Pascazio, Saverio
2009-01-01
We introduce the notion of maximally multipartite entangled states (MMES) in the context of Gaussian continuous variable quantum systems. These are bosonic multipartite states that are maximally entangled over all possible bipartitions of the system. By considering multimode Gaussian states with constrained energy, we show that perfect MMESs, which exhibit the maximum amount of bipartite entanglement for all bipartitions, only exist for systems containing n=2 or 3 modes. We further numerically investigate the structure of MMESs and their frustration for n <= 7.
All maximally entangling unitary operators
Cohen, Scott M. [Department of Physics, Duquesne University, Pittsburgh, Pennsylvania 15282 (United States); Department of Physics, Carnegie-Mellon University, Pittsburgh, Pennsylvania 15213 (United States)
2011-11-15
We characterize all maximally entangling bipartite unitary operators, acting on systems A and B of arbitrary finite dimensions d_A <= d_B, when ancillary systems are available to both parties. Several useful and interesting consequences of this characterization are discussed, including an understanding of why the entangling and disentangling capacities of a given (maximally entangling) unitary can differ, and a proof that these capacities must be equal when d_A = d_B.
Salvio, Alberto; Strumia, Alessandro; Urbano, Alfredo
2016-01-01
Motivated by the 750 GeV diphoton excess found at LHC, we compute the maximal width into $\\gamma\\gamma$ that a neutral scalar can acquire through a loop of charged fermions or scalars as function of the maximal scale at which the theory holds, taking into account vacuum (meta)stability bounds. We show how an extra gauge symmetry can qualitatively weaken such bounds, and explore collider probes and connections with Dark Matter.
Exercises in PET Image Reconstruction
Nix, Oliver
These exercises are complementary to the theoretical lectures about positron emission tomography (PET) image reconstruction. They aim at providing some hands-on experience in PET image reconstruction and focus on demonstrating the different data preprocessing steps and reconstruction algorithms needed to obtain high quality PET images. Normalisation, geometric, attenuation and scatter correction are introduced. To explain the necessity of these, some basics about PET scanner hardware, data acquisition and organisation are reviewed. During the course the students use a software application based on the STIR (software for tomographic image reconstruction) library [1,2], which allows them to dynamically select or deselect corrections and reconstruction methods as well as to modify their most important parameters. Following the guided tutorial, the students get an impression of the effect the individual data precorrections have on image quality and what happens if they are forgotten. Several data sets in sinogram format are provided, such as line source data, Jaszczak phantom data sets with high and low statistics and NEMA whole body phantom data. The two most frequently used reconstruction algorithms in PET image reconstruction, filtered back projection (FBP) and the iterative OSEM (ordered subset expectation maximization) approach, are used to reconstruct images. The exercise should help the students gain an understanding of what causes inferior image quality and artefacts, and of how to improve quality by a clever choice of reconstruction parameters.
LIU Yuan-feng; ZHAO Mei
2005-01-01
An algorithm based on the data-adaptive filtering characteristics of singular spectrum analysis (SSA) is proposed to denoise chaotic data. Firstly, the empirical orthogonal functions (EOFs) and principal components (PCs) of the signal were calculated; the signal was then reconstructed using the EOFs and PCs, with the optimal reconstruction order chosen on the basis of the singular spectrum, to obtain the denoised signal. Noise in the signal can degrade the precision of maximal Liapunov exponent calculations. The proposed denoising algorithm was applied to maximal Liapunov exponent calculations for two chaotic systems, the Henon map and the Logistic map. Numerical results show that this denoising algorithm improves the calculation precision of the maximal Liapunov exponent.
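The SSA reconstruction step (embed the series into a trajectory matrix, truncate its singular decomposition, then diagonal-average back to a series) can be sketched as follows. The power-iteration rank truncation and the toy series are illustrative choices, not the authors' implementation:

```python
def ssa_denoise(x, L, rank=1, iters=200):
    """SSA: embed series x with window L, keep `rank` leading singular
    components (power iteration with deflation), diagonal-average back."""
    N = len(x)
    K = N - L + 1
    H = [[x[i + j] for j in range(K)] for i in range(L)]  # trajectory matrix L x K
    R = [row[:] for row in H]                             # residual for deflation
    recon = [[0.0] * K for _ in range(L)]
    for _ in range(rank):
        u = [float(i + 1) for i in range(L)]  # generic start vector
        for _ in range(iters):                # power iteration on R R^T
            v = [sum(R[i][j] * u[i] for i in range(L)) for j in range(K)]
            u = [sum(R[i][j] * v[j] for j in range(K)) for i in range(L)]
            norm = sum(ui * ui for ui in u) ** 0.5
            if norm < 1e-12:
                break
            u = [ui / norm for ui in u]
        v = [sum(R[i][j] * u[i] for i in range(L)) for j in range(K)]
        for i in range(L):                    # add rank-1 component, deflate
            for j in range(K):
                comp = u[i] * v[j]
                recon[i][j] += comp
                R[i][j] -= comp
    out, cnt = [0.0] * N, [0] * N             # anti-diagonal averaging
    for i in range(L):
        for j in range(K):
            out[i + j] += recon[i][j]
            cnt[i + j] += 1
    return [o / c for o, c in zip(out, cnt)]

# constant level plus an alternating "noise" component
series = [5 + 0.5 * (-1) ** t for t in range(25)]
smooth = ssa_denoise(series, L=4, rank=1)
```

Keeping only the leading component recovers the constant level; increasing `rank` re-admits the smaller components, which is the "optimal reconstruction order" trade-off the abstract refers to.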
Rovere, A.; Raymo, M.E.; Vacchi, M.; Lorscheid, T; Stocchi, P.; Gómez-Pujolf, L.; Harris, D.L.; Casella, E.; O'Leary, M.J.; Hearty, P.J.
2016-01-01
The Last Interglacial (MIS 5e, 128–116 ka) is among the most studied past periods in Earth's history. The climate at that time was warmer than today, primarily due to different orbital conditions, with smaller ice sheets and higher sea-level. Field evidence for MIS 5e sea-level was reported from
Murotani, Kazuhiro; Kawai, Nobuyuki; Sato, Morio; Minamiguchi, Hiroki; Nakai, Motoki; Sonomura, Tetsuo; Hosokawa, Seiki; Nishioku, Tadayoshi
2013-07-01
We quantitatively investigated the optimum factors for CT image reconstruction of an enhanced hepatocellular carcinoma (HCC) model in a liver phantom imaged by multi-level dynamic computed tomography (M-LDCT) with 64 detector rows. After M-LDCT scanning of a water phantom and the enhanced HCC model, we compared the standard deviation (SD), noise power spectrum (NPS) values, contrast-to-noise ratios (CNR), and M-LDCT image quality among the reconstruction parameters, including the convolution kernel (FC11, FC13, and FC15), post-processing quantum filters (2D-Q00, 2D-Q01, and 2D-Q02) and slice thicknesses/slice intervals. The SD and NPS values were lowest with FC11 and 2D-Q02. The CNR values were highest with 2D-Q02. M-LDCT image quality was highest with FC11 and 2D-Q02, and with slice thicknesses/slice intervals of 0.5 mm/0.5 mm and 0.5 mm/0.25 mm. The optimum factors were the FC11 convolution kernel, the 2D-Q02 quantum filter, and a slice thickness/slice interval of 0.5 mm/0.5 mm or less.
Viral quasispecies assembly via maximal clique enumeration.
Töpfer, Armin; Marschall, Tobias; Bull, Rowena A; Luciani, Fabio; Schönhuth, Alexander; Beerenwinkel, Niko
2014-03-01
Virus populations can display high genetic diversity within individual hosts. The intra-host collection of viral haplotypes, called viral quasispecies, is an important determinant of virulence, pathogenesis, and treatment outcome. We present HaploClique, a computational approach to reconstruct the structure of a viral quasispecies from next-generation sequencing data as obtained from bulk sequencing of mixed virus samples. We develop a statistical model for paired-end reads accounting for mutations, insertions, and deletions. Using an iterative maximal clique enumeration approach, read pairs are assembled into haplotypes of increasing length, eventually enabling global haplotype assembly. The performance of our quasispecies assembly method is assessed on simulated data for varying population characteristics and sequencing technology parameters. Owing to its paired-end handling, HaploClique compares favorably to state-of-the-art haplotype inference methods. It can reconstruct error-free full-length haplotypes from low coverage samples and detect large insertions and deletions at low frequencies. We applied HaploClique to sequencing data derived from a clinical hepatitis C virus population of an infected patient and discovered a novel deletion of length 357±167 bp that was validated by two independent long-read sequencing experiments. HaploClique is available at https://github.com/armintoepfer/haploclique. A summary of this paper appears in the proceedings of the RECOMB 2014 conference, April 2-5.
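Maximal clique enumeration, the combinatorial core named in the title, is classically done with the Bron-Kerbosch algorithm. A minimal sketch (HaploClique's own iterative, paired-end-aware procedure on read-overlap graphs is considerably more elaborate):

```python
def maximal_cliques(adj):
    """Bron-Kerbosch enumeration of all maximal cliques.
    adj: dict mapping each vertex to the set of its neighbours."""
    cliques = []

    def bk(R, P, X):
        # R: current clique, P: candidates, X: already-explored vertices
        if not P and not X:
            cliques.append(sorted(R))  # R cannot be extended: maximal
            return
        for v in list(P):
            bk(R | {v}, P & adj[v], X & adj[v])
            P.remove(v)
            X.add(v)

    bk(set(), set(adj), set())
    return sorted(cliques)

# toy "read-compatibility" graph: triangle 0-1-2 plus the edge 2-3
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
```

On this graph the maximal cliques are {0,1,2} and {2,3}; in the quasispecies setting each clique groups mutually compatible read pairs that are merged into a longer haplotype candidate.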
Sums of magnetic eigenvalues are maximal on rotationally symmetric domains
Laugesen, Richard S; Roy, Arindam
2011-01-01
The sum of the first n energy levels of the planar Laplacian with constant magnetic field of given total flux is shown to be maximal among triangles for the equilateral triangle, under normalization of the ratio (moment of inertia)/(area)^3 on the domain. The result holds for both Dirichlet and Neumann boundary conditions, with an analogue for Robin (or de Gennes) boundary conditions too. The square similarly maximizes the eigenvalue sum among parallelograms, and the disk maximizes among ellipses. More generally, a domain with rotational symmetry will maximize the magnetic eigenvalue sum among all linear images of that domain. These results are new even for the ground state energy (n=1).
Wilson, Graham P.
2017-10-01
Bulk organic stable carbon isotope (δ13C) and element geochemistry (total organic carbon (TOC) and organic carbon to total nitrogen (C/N)) analysis is a developing technique in Holocene relative sea-level (RSL) research. The uptake of this technique in Northern Europe is limited compared to North America, where the common existence of coastal marshes with isotopically distinctive C3 and C4 vegetation associated with well-defined inundation tolerance permits the reconstruction of RSL in the sediment record. In Northern Europe, the reduced range in δ13C values between organic matter sources in C3 estuaries can make the identification of elevation-dependent environments in the Holocene sediment record challenging and this is compounded by the potential for post-depositional alteration in bulk δ13C values. The use of contemporary regional δ13C, C/N and TOC datasets representing the range of physiographic conditions commonly encountered in coastal wetland sediment sequences opens up the potential of using absolute values of sediment geochemistry to infer depositional environments and associated reference water levels. In this paper, the application of contemporary bulk organic δ13C, C/N and TOC to reconstruct Holocene RSL is further explored. An extended contemporary regional geochemical dataset of published δ13C, C/N and TOC observations (n = 142) from tidal-dominated C3 wetland deposits (representing tidal flat, saltmarsh, reedswamp and fen carr environments) in temperate NW Europe is compiled, and procedures implemented to correct for the 13C Suess effect on contemporary δ13C are detailed. Partitioning around medoids analysis identifies two distinctive geochemical groups in the NW European dataset, with tidal flat/saltmarsh and reedswamp/fen carr environments exhibiting characteristically different sediment δ13C, C/N and TOC values. A logistic regression model is developed from the NW European dataset in order to objectively identify in the sediment record
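The final classification step described above can be sketched as a plain logistic regression on bulk geochemistry; the two features and all values below are invented toy numbers (think standardized d13C and C/N), not the compiled n = 142 dataset or the paper's fitted model:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(xs, ys, lr=0.1, epochs=500):
    """Plain stochastic-gradient logistic regression; an illustrative
    stand-in for the paper's model, trained on invented toy values."""
    w = [0.0] * (len(xs[0]) + 1)          # [bias, w1, w2, ...]
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            z = w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))
            g = sigmoid(z) - y            # gradient of the log-loss
            w[0] -= lr * g
            for i, xi in enumerate(x):
                w[i + 1] -= lr * g * xi
    return w

def predict(w, x):
    """Probability that a sample belongs to group 1 (say, reedswamp/fen carr)."""
    return sigmoid(w[0] + sum(wi * xi for wi, xi in zip(w[1:], x)))

xs = [[-1.0, -1.0], [-1.2, -0.8], [1.0, 1.0], [0.9, 1.1]]   # toy features
ys = [0, 0, 1, 1]                                           # toy group labels
w = train_logistic(xs, ys)
```

Applied downcore, such a model would assign each sediment sample a probability of belonging to one of the two geochemical groups, which is what links the contemporary dataset to the Holocene record.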
João P. Rosado
2012-08-01
OBJECTIVES: The aim of the present study was to perform a stereological and biochemical analysis of the foreskin of smokers. MATERIALS AND METHODS: Foreskin samples were obtained from 20 young adults (mean age = 27.2 years) submitted to circumcision. Of the patients analyzed, one group (n = 10) had a previous history of chronic smoking (a half pack to 3 packs per day for 3 to 13 years; mean = 5.8 ± 3.2). The control group included 10 nonsmoking patients. Masson's trichrome stain was used to quantify the foreskin vascular density. Weigert's resorcin-fuchsin stain was used to assess the elastic system fibers, and Picrosirius red stain was applied to study the collagen. Stereological analysis was performed using the Image J software to determine the volumetric densities. For biochemical analysis, the total collagen was determined as µg of hydroxyproline per mg of dry tissue. Means were compared using the unpaired t-test (p < 0.05). RESULTS: Elastic system fibers of smokers were 42.5% higher than in the control group (p = 0.002). In contrast, smooth muscle fibers (p = 0.42) and vascular density (p = 0.16) did not show any significant variation. Qualitative analysis using Picrosirius red stain with polarized light evidenced the presence of type I and III collagen in the foreskin tissue, without significant difference between the groups. Total collagen concentration also did not differ significantly between smokers and non-smokers (73.1 µg/mg ± 8.0 vs. 69.2 µg/mg ± 5.9, respectively; p = 0.23). CONCLUSIONS: The foreskin tissue of smoking patients had a significant increase of elastic system fibers. Elastic fibers play an important role in this tissue's turnover, and this high concentration in smokers possibly causes high extensibility of the foreskin. The structural alterations in smokers' foreskins could possibly explain the poor results in smoking patients submitted to foreskin fasciocutaneous flaps in urethral reconstruction surgery.
SAR image target segmentation based on entropy maximization and morphology
柏正尧; 刘洲峰; 何佩琨
2004-01-01
Entropy maximization thresholding is a simple, effective image segmentation method. The relation between the histogram entropy and the gray level of an image is analyzed. An approach that speeds up the computation of the optimal threshold based on entropy maximization is proposed. The suggested method has been applied to synthetic aperture radar (SAR) image target segmentation. Mathematical morphology works well in reducing the residual noise.
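Maximum-entropy thresholding as described can be sketched directly; a minimal, unaccelerated version of Kapur's criterion (the proposed speed-up is not reproduced here, and the toy histogram is invented):

```python
import math

def entropy_threshold(hist):
    """Kapur's maximum-entropy threshold: choose t maximizing the sum of the
    Shannon entropies of the sub-histograms below and above t."""
    total = sum(hist)
    p = [h / total for h in hist]
    best_t, best_h = 0, float("-inf")
    for t in range(1, len(hist)):
        w0 = sum(p[:t])
        w1 = 1.0 - w0
        if w0 <= 0.0 or w1 <= 0.0:
            continue
        h0 = -sum(q / w0 * math.log(q / w0) for q in p[:t] if q > 0)
        h1 = -sum(q / w1 * math.log(q / w1) for q in p[t:] if q > 0)
        if h0 + h1 > best_h:
            best_h, best_t = h0 + h1, t
    return best_t  # gray levels >= best_t form one class

# bimodal 8-level toy histogram: background around level 1, target around 6
t = entropy_threshold([10, 30, 10, 0, 0, 5, 20, 5])
```

The speed-up the abstract proposes would avoid recomputing these sums from scratch at every candidate threshold; morphological opening/closing would then clean residual speckle in the binary SAR mask.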
Oncoplastic Breast Reduction: Maximizing Aesthetics and Surgical Margins
Michelle Milee Chang
2012-01-01
Oncoplastic breast reduction combines oncologically sound concepts of cancer removal with aesthetically maximized approaches for breast reduction. Numerous incision patterns and types of pedicles can be used for purposes of oncoplastic reduction, each tailored for size and location of tumor. A team approach between reconstructive and breast surgeons produces positive long-term oncologic results as well as satisfactory cosmetic and functional outcomes, rendering oncoplastic breast reduction a favorable treatment option for certain patients with breast cancer.
A. Garmroodi Asil
2017-09-01
To further reduce the sulfur dioxide emission of the entire refining process, two scenarios, acid gas preheat and air preheat, are investigated when either is used simultaneously with the third enrichment scheme. The maximum overall sulfur recovery efficiency and the highest combustion chamber temperature are slightly higher with acid gas preheat, but air preheat is more favorable because it is operationally more benign. To the best of our knowledge, optimization of the entire GTU + enrichment section and SRU processes has not been addressed previously.
A New Approach to Upper Eyelid Reconstruction.
Bulla, A; Vielà, C; Fiorot, L; Bolletta, A; Pancrazi, E; Campus, G V
2017-04-01
Reconstruction of large defects of the upper eyelid is challenging because of its complex anatomy and specialized function. The aim of this work is to develop a single-stage reconstruction procedure based on a new approach. The technique consists of the advancement of an orbicularis oculi myocutaneous flap designed within the blepharoplasty skin excision pattern. After the tumor excision is completed with clear margins, the borders of the flap are incised down to the submuscular plane inside the classical pattern of upper eyelid blepharoplasty. Two myocutaneous triangles are excised on both sides of the flap to allow its advancement to cover the defect. When it is necessary to repair the posterior lamella, we harvest a mucochondral graft. From 2012 to 2015, we performed upper eyelid reconstruction with this technique on six patients. The flap survived in all the patients, without total or partial necrosis. No patient required surgical revision. The results were aesthetically satisfying, and no tumor recurrence was noted. Our new approach to upper eyelid reconstruction maximizes the cosmetic outcome while respecting the principles of radicality. This flap is better suited for lesions involving the median or paramedian eyelid border from the marginal zone up to the palpebral crease. The approach we propose is safe and versatile, and it ensures both a functional and an aesthetically good reconstruction. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266.
Le Brocq, A.; Bentley, M.; Hubbard, A.; Fogwill, C.; Sugden, D.
2008-12-01
A numerical ice sheet model constrained by recent field evidence is employed to reconstruct the Last Glacial Maximum (LGM) ice sheet in the Weddell Sea Embayment (WSE). Previous modelling attempts have predicted an extensive grounding line advance (to the continental shelf break) in the WSE, leading to a large equivalent sea level contribution for the sector. The sector has therefore been considered as a potential source for a period of rapid sea level rise (MWP1a, 20 m rise in ~500 years). Recent field evidence suggests that the elevation change in the Ellsworth mountains at the LGM is lower than previously thought (~400 m). The numerical model applied in this paper suggests that a 400 m thicker ice sheet at the LGM does not support such an extensive grounding line advance. A range of ice sheet surfaces, resulting from different grounding line locations, lead to an equivalent sea level estimate of 1 - 3 m for this sector. It is therefore unlikely that the sector made a significant contribution to sea level rise since the LGM, and in particular to MWP1a. The reduced ice sheet size also has implications for the correction of GRACE data, from which Antarctic mass balance calculations have been derived.
Reformulating and Reconstructing Quantum Theory
Hardy, Lucien
2011-01-01
We provide a reformulation of finite dimensional quantum theory in the circuit framework in terms of mathematical axioms, and a reconstruction of quantum theory from operational postulates. The mathematical axioms for quantum theory are the following: [Axiom 1] Operations correspond to operators. [Axiom 2] Every complete set of positive operators corresponds to a complete set of operations. The following operational postulates are shown to be equivalent to these mathematical axioms: [P1] Definiteness. Associated with any given pure state is a unique maximal effect giving probability equal to one. This maximal effect does not give probability equal to one for any other pure state. [P2] Information locality. A maximal measurement on a composite system is effected if we perform maximal measurements on each of the components. [P3] Tomographic locality. The state of a composite system can be determined from the statistics collected by making measurements on the components. [P4] Compound permutatability. There exis...
Hagdorn, Magnus K M
2003-01-01
The aim of this project is to improve our understanding of the past European and British ice sheets as a basis for forecasting their future. The behaviour of these ice sheets is investigated by simulating them using a numerical model and comparing model results with geological data including relative sea–level change data. In order to achieve this aim, a coupled ice sheet/lithosphere model is developed. Ice sheets form an integral part of the Earth system. They affect the plane...
Algebraic curves of maximal cyclicity
Caubergh, Magdalena; Dumortier, Freddy
2006-01-01
The paper deals with analytic families of planar vector fields, studying methods to detect the cyclicity of a non-isolated closed orbit, i.e. the maximum number of limit cycles that can locally bifurcate from it. It is known that this multi-parameter problem can be reduced to a single-parameter one, in the sense that there exist analytic curves in parameter space along which the maximal cyclicity can be attained. In that case one speaks about a maximal cyclicity curve (mcc) in case only the number is considered and of a maximal multiplicity curve (mmc) in case the multiplicity is also taken into account. In view of obtaining efficient algorithms for detecting the cyclicity, we investigate whether such mcc or mmc can be algebraic or even linear depending on certain general properties of the families or of their associated Bautin ideal. In any case by well chosen examples we show that prudence is appropriate.
Autologous Costochondral Microtia Reconstruction.
Patel, Sapna A; Bhrany, Amit D; Murakami, Craig S; Sie, Kathleen C Y
2016-04-01
Reconstruction with autologous costochondral cartilage is one of the mainstays of surgical management of congenital microtia. We review the literature, present our current technique for microtia reconstruction with autologous costochondral graft, and discuss the evolution of our technique over the past 20 years. We aim to minimize donor site morbidity and create the most durable and natural appearing ear possible using a stacked framework to augment the antihelical fold and antitragal-tragal complex. Assessment of outcomes is challenging due to the paucity of available objective measures with which to evaluate aesthetic outcomes. Various instruments are used to assess outcomes, but none is universally accepted as the standard. The challenges we continue to face are humbling, but ongoing work on tissue engineering, application of 3D models, and use of validated questionnaires can help us get closer to achieving a maximal aesthetic outcome.
BOUNDEDNESS OF MAXIMAL SINGULAR INTEGRALS
CHEN JIECHENG; ZHU XIANGRONG
2005-01-01
The authors study singular integrals under the Hormander condition with a measure not satisfying the doubling condition. First, if the corresponding singular integral is bounded from L2 to itself, it is proved that the maximal singular integral is bounded from L∞ to RBMO unless it is infinite μ-a.e. on Rd. A sufficient condition and a necessary condition for the maximal singular integral to be bounded from L2 to itself are also obtained. There is a small gap between the two conditions.
Park, Hyeong-Gyu; Shin, Yeong-Gil; Lee, Ho
2015-12-01
A ray-driven backprojector is based on ray-tracing, which computes the length of the intersection between the ray paths and each voxel to be reconstructed. To reduce the computational burden caused by these exhaustive intersection tests, we propose a fully graphics processing unit (GPU)-based ray-driven backprojector in conjunction with a ray-culling scheme that enables straightforward parallelization without compromising the high computing performance of a GPU. The purpose of the ray-culling scheme is to reduce the number of ray-voxel intersection tests by excluding rays irrelevant to a specific voxel computation. This rejection step is based on an axis-aligned bounding box (AABB) enclosing a region of voxel projection, where eight vertices of each voxel are projected onto the detector plane. The range of the rectangular-shaped AABB is determined by min/max operations on the coordinates in the region. Using the indices of pixels inside the AABB, the rays passing through the voxel can be identified and the voxel is weighted as the length of intersection between the voxel and the ray. This procedure makes it possible to reflect voxel-level parallelization, allowing an independent calculation at each voxel, which is feasible for a GPU implementation. To eliminate redundant calculations during ray-culling, a shared-memory optimization is applied to exploit the GPU memory hierarchy. In experimental results using real measurement data with phantoms, the proposed GPU-based ray-culling scheme reconstructed a volume of resolution 280×280×176 in 77 seconds from 680 projections of resolution 1024×768, which is 26 times and 7.5 times faster than standard CPU-based and GPU-based ray-driven backprojectors, respectively. Qualitative and quantitative analyses showed that the ray-driven backprojector provides high-quality reconstruction images when compared with those generated by the Feldkamp-Davis-Kress algorithm using a pixel-driven backprojector, with an average of 2.5 times
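The AABB ray-culling step can be sketched in miniature: project a voxel's eight vertices onto the detector and take the min/max of the resulting pixel coordinates. The geometry below (source at the origin, flat detector in the plane z = sdd, square pixels of width pitch) and all numbers are illustrative assumptions, not the paper's GPU code:

```python
import math
from itertools import product

def voxel_aabb(corner, size, sdd, pitch):
    """Axis-aligned bounding box, in detector pixel units, of a cubic
    voxel's cone-beam projection (assumed pinhole-style geometry)."""
    verts = [tuple(c + size * d for c, d in zip(corner, delta))
             for delta in product((0, 1), repeat=3)]
    # perspective-project each of the 8 vertices onto the detector plane
    px = [(x * sdd / z / pitch, y * sdd / z / pitch) for x, y, z in verts]
    lo = (math.floor(min(u for u, _ in px)), math.floor(min(v for _, v in px)))
    hi = (math.ceil(max(u for u, _ in px)), math.ceil(max(v for _, v in px)))
    return lo, hi  # only rays through pixels in [lo, hi] need intersection tests

lo, hi = voxel_aabb((10.0, 10.0, 100.0), 1.0, 200.0, 1.0)
```

Every ray whose detector pixel falls outside this box provably misses the voxel, so the expensive ray-voxel intersection test can be skipped for it.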
Trend of maximal inspiratory pressure in mechanically ventilated patients: predictors
Pedro Caruso
2008-01-01
INTRODUCTION: It is known that mechanical ventilation and many of its features may affect the evolution of inspiratory muscle strength during ventilation. However, this evolution has not been described, nor have its predictors been studied. In addition, a probable parallel between inspiratory and limb muscle strength evolution has not been investigated. OBJECTIVE: To describe the variation over time of maximal inspiratory pressure during mechanical ventilation and its predictors. We also studied the possible relationship between the evolution of maximal inspiratory pressure and limb muscle strength. METHODS: A prospective observational study was performed in consecutive patients submitted to mechanical ventilation for > 72 hours. The maximal inspiratory pressure trend was evaluated by linear regression of the daily maximal inspiratory pressure, and a logistic regression analysis was used to look for independent predictors of the maximal inspiratory pressure trend. Limb muscle strength was evaluated using the Medical Research Council score. RESULTS: One hundred and sixteen patients were studied, forty-four of whom (37.9%) presented a decrease in maximal inspiratory pressure over time. The members of the group in which maximal inspiratory pressure decreased underwent deeper sedation, spent less time in pressure support ventilation and were extubated less frequently. The only independent predictor of the maximal inspiratory pressure trend was the level of sedation (OR = 1.55, 95% CI 1.003-2.408; p = 0.049). There was no relationship between the maximal inspiratory pressure trend and limb muscle strength. CONCLUSIONS: Around forty percent of the mechanically ventilated patients had a decreased maximal inspiratory pressure during mechanical ventilation, which was independently associated with deeper levels of sedation. There was no relationship between the evolution of maximal inspiratory pressure and limb muscle strength.
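The per-patient trend described above is just an ordinary least-squares slope over the daily measurements; a minimal sketch with invented values:

```python
def mip_trend(daily_mip):
    """Ordinary least-squares slope (cmH2O per day) through a patient's
    daily maximal inspiratory pressure values; a negative slope flags the
    'decreasing MIP' group described in the abstract."""
    n = len(daily_mip)
    mx = (n - 1) / 2.0                            # mean of day indices 0..n-1
    my = sum(daily_mip) / n
    num = sum((x - mx) * (y - my) for x, y in enumerate(daily_mip))
    den = sum((x - mx) ** 2 for x in range(n))
    return num / den

slope = mip_trend([50.0, 48.0, 46.0, 44.0])       # invented, steadily weakening patient
```

A patient with slope < 0 would fall in the decreasing-MIP group whose membership the study's logistic regression then tried to predict.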
Setuain, Igor; Izquierdo, Mikel; Idoate, Fernando
2017-01-01
Context- The restoration of muscular function after anterior cruciate ligament reconstruction (ACLR) using an autologous hamstring tendon graft, in terms of strength and cross-sectional area (CSA), as related to the type of physical rehabilitation followed, remains controversial. Objective- To analyze the CSA...... to persist in both rehabilitation groups. However, OCBR after ACLR led to substantial gains in maximal knee flexor strength and ensured more symmetrical anterior-posterior laxity levels at the knee joint.
Yasufumi Iryu
2007-07-01
The timing and course of the last deglaciation (19,000–6,000 years BP) are essential components for understanding the dynamics of large ice sheets (Lindstrom and MacAyeal, 1993) and their effects on Earth's isostasy (Nakada and Lambeck, 1989; Lambeck, 1993; Peltier, 1994), as well as the complex relationship between freshwater fluxes to the ocean, thermohaline circulation, and, hence, global climate during the Late Pleistocene and the Holocene. Moreover, the last deglaciation is generally seen as a possible analogue for the environmental changes and increased sea level that Earth may experience because of the greenhouse effect, related thermal expansion of the oceans, and the melting of polar ice sheets.
Giulio Garaffa; Salvatore Sansalone; David J Ralph
2013-01-01
During the most recent years, a variety of new techniques of penile reconstruction have been described in the literature. This paper focuses on the most recent advances in male genital reconstruction after trauma, excision of benign and malignant disease, in gender reassignment surgery and aphallia, with emphasis on surgical technique, cosmetic and functional outcome.
Sparsity-constrained PET image reconstruction with learned dictionaries
Tang, Jing; Yang, Bao; Wang, Yanhua; Ying, Leslie
2016-09-01
PET imaging plays an important role in scientific and clinical measurement of biochemical and physiological processes. Model-based PET image reconstruction such as the iterative expectation maximization algorithm seeking the maximum likelihood solution leads to increased noise. The maximum a posteriori (MAP) estimate removes divergence at higher iterations. However, a conventional smoothing prior or a total-variation (TV) prior in a MAP reconstruction algorithm causes over smoothing or blocky artifacts in the reconstructed images. We propose to use dictionary learning (DL) based sparse signal representation in the formation of the prior for MAP PET image reconstruction. The dictionary to sparsify the PET images in the reconstruction process is learned from various training images including the corresponding MR structural image and a self-created hollow sphere. Using simulated and patient brain PET data with corresponding MR images, we study the performance of the DL-MAP algorithm and compare it quantitatively with a conventional MAP algorithm, a TV-MAP algorithm, and a patch-based algorithm. The DL-MAP algorithm achieves improved bias and contrast (or regional mean values) at comparable noise to what the other MAP algorithms acquire. The dictionary learned from the hollow sphere leads to similar results as the dictionary learned from the corresponding MR image. Achieving robust performance in various noise-level simulation and patient studies, the DL-MAP algorithm with a general dictionary demonstrates its potential in quantitative PET imaging.
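The sparse-representation step that a dictionary-learning prior relies on can be sketched with greedy matching pursuit over a toy dictionary; this illustrates sparse coding in general, not the paper's DL-MAP algorithm, and the orthonormal dictionary below is invented:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def matching_pursuit(D, x, k):
    """Greedy matching pursuit: approximate x by k atoms from a dictionary D
    of unit-norm vectors, returning coefficients and the final residual."""
    r = list(x)                                   # residual
    coeffs = {}
    for _ in range(k):
        # atom most correlated with the current residual
        j = max(range(len(D)), key=lambda i: abs(dot(D[i], r)))
        c = dot(D[j], r)
        coeffs[j] = coeffs.get(j, 0.0) + c
        r = [ri - c * di for ri, di in zip(r, D[j])]
    return coeffs, r

# toy orthonormal dictionary; the signal is exactly 2-sparse in it
D = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
coeffs, resid = matching_pursuit(D, (2.0, 0.0, 0.5), 2)
```

In a DL-MAP setting the penalty on image patches is built from how well such sparse codes over the learned dictionary reproduce them; here the dictionary is fixed and trivial purely for illustration.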
Understanding maximal repetitions in strings
Crochemore, Maxime
2008-01-01
The cornerstone of any algorithm computing all repetitions in a string of length n in O(n) time is the fact that the number of runs (or maximal repetitions) is O(n). We give a simple proof of this result. As a consequence of our approach, the stronger result concerning the linearity of the sum of exponents of all runs follows easily.
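For intuition about what is being counted, runs can be enumerated with a quadratic brute force; a sketch returning (start, end, smallest period) triples with exclusive end (the O(n) algorithms the abstract refers to are far more sophisticated):

```python
def runs(s):
    """All maximal repetitions ("runs") of s: s[start:end] has smallest
    period `period`, spans at least two full periods, and the periodicity
    extends neither left nor right."""
    n = len(s)
    found = set()
    for i in range(n):
        for p in range(1, (n - i) // 2 + 1):
            j = i + p
            while j < n and s[j] == s[j - p]:       # extend the period-p match
                j += 1
            if j - i < 2 * p:                       # need two full periods
                continue
            if i > 0 and s[i - 1] == s[i - 1 + p]:  # not left-maximal
                continue
            if any((i, j, q) in found for q in range(1, p)):
                continue                            # smaller period already covers it
            found.add((i, j, p))
    return sorted(found)

r = runs("aabaab")  # the two "aa" squares plus the whole string, period 3
```

Counting the triples this sketch emits for strings of growing length is one way to observe empirically that the number of runs stays linear in n.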
Su, Kuan-Hao; Yen, Tzu-Chen; Fang, Yu-Hua Dean
2013-10-01
The aim of this study is to develop and evaluate a novel direct reconstruction method to improve the signal-to-noise ratio (SNR) of parametric images in dynamic positron-emission tomography (PET), especially for applications in myocardial perfusion studies. Simulation studies were used to test the performance in SNR and computational efficiency for different methods. The NCAT phantom was used to generate simulated dynamic data. Noise realization was performed in the sinogram domain and repeated for 30 times with four different noise levels by varying the injection dose (ID) from standard ID to 1/8 of it. The parametric images were calculated by (1) three direct methods that compute the kinetic parameters from the sinogram and (2) an indirect method, which computes the kinetic parameter with pixel-by-pixel curve fitting in image space using weighted least-squares. The first direct reconstruction maximizes the likelihood function using trust-region-reflective (TRR) algorithm. The second approach uses tabulated parameter sets to generate precomputed time-activity curves for maximizing the likelihood functions. The third approach, as a newly proposed method, assumes separable complete data to derive the M-step for maximizing the likelihood. The proposed method with the separable complete data performs similarly to the other two direct reconstruction methods in terms of the SNR, providing a 5%-10% improvement as compared to the indirect parametric reconstruction under the standard ID. The improvement of SNR becomes more obvious as the noise level increases, reaching more than 30% improvement under 1/8 ID. Advantage of the proposed method lies in the computation efficiency by shortening the time requirement to 25% of the indirect approach and 3%-6% of other direct reconstruction methods. With results provided from this simulation study, direct reconstruction of myocardial blood flow shows a high potential for improving the parametric image quality for clinical use.
Rissolo, D.; Reinhardt, E. G.; Collins, S.; Kovacs, S. E.; Beddows, P. A.; Chatters, J. C.; Nava Blank, A.; Luna Erreguerena, P.
2014-12-01
A massive pit deep within the now submerged cave system of Sac Actun, located along the central east coast of the Yucatan Peninsula, contains a diverse fossil assemblage of extinct megafauna as well as a nearly complete human skeleton. The inundated site of Hoyo Negro presents a unique and promising opportunity for interdisciplinary Paleoamerican and paleoenvironmental research in the region. Investigations have thus far revealed a range of associated features and deposits which make possible a multi-proxy approach to identifying and reconstructing the natural and cultural processes that have formed and transformed the site over millennia. Understanding water-level fluctuations (both related to, and independent from, eustatic sea level changes), with respect to cave morphology is central to understanding the movement of humans and animals into and through the cave system. Recent and ongoing studies involve absolute dating of human, faunal, macrobotanical, and geological samples; taphonomic analyses; and a characterization of site hydrogeology and sedimentological facies, including microfossil assemblages and calcite raft deposits.
Maximal respiratory pressures among adolescent swimmers.
Rocha Crispino Santos, M A; Pinto, M L; Couto Sant'Anna, C; Bernhoeft, M
2011-01-01
Maximal inspiratory pressures (MIP) and maximal expiratory pressures (MEP) are useful indices of respiratory muscle strength in athletes. The aims of this study were: to describe the respiratory muscle strength of an Olympic junior swim team, at baseline and after a standard physical training session; and to determine whether there is a differential inspiratory and expiratory pressure response to the physical training. A cross-sectional study evaluated 28 international-level swimmers with ages ranging from 15 to 17 years, 19 (61%) being males. At baseline, MIP was found to be lower in females (P = .001). The mean values reached by males and females were: MIP (cmH2O) = M: 100.4 (± 26.5)/F: 67.8 (± 23.2); MEP (cmH2O) = M: 87.4 (± 20.7)/F: 73.9 (± 17.3). After the physical training they reached: MIP (cmH2O) = M: 95.3 (± 30.3)/F: 71.8 (± 35.6); MEP (cmH2O) = M: 82.8 (± 26.2)/F: 70.4 (± 8.3). No differential pressure responses were observed in either males or females. These results suggest that swimmers can sustain the magnitude of their initial maximal pressures. Other studies should be developed to clarify whether MIP and MEP could be used as markers of an athlete's performance.
Glickel, Steven Z; Gupta, Salil
2006-05-01
Volar ligament reconstruction is an effective technique for treating symptomatic laxity of the CMC joint of the thumb. The laxity may be a manifestation of generalized ligament laxity, post-traumatic, or metabolic (Ehlers-Danlos). The reconstruction reduces the shear forces on the joint that contribute to the development and persistence of inflammation. Although there have been only a few reports of the results of volar ligament reconstruction, the use of the procedure to treat Stage I and Stage II disease consistently gives good to excellent results. More advanced stages of disease are best treated by trapeziectomy, with or without ligament reconstruction.
Note on maximal distance separable codes
YANG Jian-sheng; WANG De-xiu; JIN Qing-fang
2009-01-01
In this paper, the maximal length of maximal distance separable (MDS) codes is studied, and a new upper bound formula for the maximal length of MDS codes is obtained. In particular, the exact values of the maximal length of MDS codes for some parameters are given.
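The objects being bounded can be made concrete: a linear [n, k, d] code is MDS exactly when it meets the Singleton bound d = n - k + 1 with equality. A brute-force check over GF(2), using the classic single-parity-check code as a (trivial) MDS example:

```python
from itertools import product

def min_distance(G, q=2):
    """Minimum Hamming weight over the nonzero codewords of the linear code
    generated by the rows of G over GF(q) (q prime; q = 2 here)."""
    k, n = len(G), len(G[0])
    best = n
    for msg in product(range(q), repeat=k):
        if any(msg):
            cw = [sum(m * g for m, g in zip(msg, col)) % q for col in zip(*G)]
            best = min(best, sum(1 for c in cw if c))
    return best

def is_mds(G, q=2):
    """True when the code meets the Singleton bound d <= n - k + 1 with equality."""
    k, n = len(G), len(G[0])
    return min_distance(G, q) == n - k + 1

# [4, 3, 2] single-parity-check code over GF(2): every row ends with its parity bit
G = [[1, 0, 0, 1],
     [0, 1, 0, 1],
     [0, 0, 1, 1]]
```

Over GF(2) only trivial MDS codes exist; the length question the paper studies becomes interesting over larger fields, where Reed-Solomon codes reach length q + 1.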
The Negative Consequences of Maximizing in Friendship Selection.
Newman, David B; Schug, Joanna; Yuki, Masaki; Yamada, Junko; Nezlek, John B
2017-02-27
Previous studies have shown that the maximizing orientation, reflecting a motivation to select the best option among a given set of choices, is associated with various negative psychological outcomes. In the present studies, we examined whether these relationships extend to friendship selection and how the number of options for friends moderated these effects. Across 5 studies, maximizing in selecting friends was negatively related to life satisfaction, positive affect, and self-esteem, and was positively related to negative affect and regret. In Study 1, a maximizing in selecting friends scale was created, and regret mediated the relationships between maximizing and well-being. In a naturalistic setting in Studies 2a and 2b, the tendency to maximize among those who participated in the fraternity and sorority recruitment process was negatively related to satisfaction with their selection, and positively related to regret and negative affect. In Study 3, daily levels of maximizing were negatively related to daily well-being, and these relationships were mediated by daily regret. In Study 4, we extended the findings to samples from the U.S. and Japan. When participants who tended to maximize were faced with many choices, operationalized as the daily number of friends met (Study 3) and relational mobility (Study 4), the opportunities to regret a decision increased and further diminished well-being. These findings imply that, paradoxically, attempts to maximize when selecting potential friends is detrimental to one's well-being. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Maximization, learning, and economic behavior.
Erev, Ido; Roth, Alvin E
2014-07-22
The rationality assumption that underlies mainstream economic theory has proved to be a useful approximation, despite the fact that systematic violations to its predictions can be found. That is, the assumption of rational behavior is useful in understanding the ways in which many successful economic institutions function, although it is also true that actual human behavior falls systematically short of perfect rationality. We consider a possible explanation of this apparent inconsistency, suggesting that mechanisms that rest on the rationality assumption are likely to be successful when they create an environment in which the behavior they try to facilitate leads to the best payoff for all agents on average, and most of the time. Review of basic learning research suggests that, under these conditions, people quickly learn to maximize expected return. This review also shows that there are many situations in which experience does not increase maximization. In many cases, experience leads people to underweight rare events. In addition, the current paper suggests that it is convenient to distinguish between two behavioral approaches to improve economic analyses. The first, and more conventional approach among behavioral economists and psychologists interested in judgment and decision making, highlights violations of the rational model and proposes descriptive models that capture these violations. The second approach studies human learning to clarify the conditions under which people quickly learn to maximize expected return. The current review highlights one set of conditions of this type and shows how the understanding of these conditions can facilitate market design.
Thomas, Brian C.; Goracke, Byron D.; Dalton, Sean M.
2016-11-01
Chemical and morphological features of spores and pollens have been linked to changes in solar ultraviolet radiation (specifically UVB, 280-315 nm) at Earth's surface. Variation in UVB exposure as inferred from these features has been suggested as a proxy for paleoaltitude; such proxies are important in understanding the uplift history of high altitude plateaus, which in turn is important for testing models of the tectonic processes responsible for such uplift. While UVB irradiance does increase with altitude above sea level, a number of other factors affect the irradiance at any given place and time. In this modeling study we use the TUV atmospheric radiative transfer model to investigate dependence of surface-level UVB irradiance and relative biological impact on a number of constituents in Earth's atmosphere that are variable over long and short time periods. We consider changes in O3 column density, and SO2 and sulfate aerosols due to periods of volcanic activity, including that associated with the formation of the Siberian Traps. We find that UVB irradiance may be highly variable under volcanic conditions and variations in several of these atmospheric constituents can easily mimic or overwhelm changes in UVB irradiance due to changes in altitude. On the other hand, we find that relative change with altitude is not very sensitive to different sets of atmospheric conditions. Any paleoaltitude proxy based on UVB exposure requires confidence that the samples under comparison were located at roughly the same latitude, under very similar O3 and SO2 columns, with similar atmospheric aerosol conditions. In general, accurate estimates of the surface-level UVB exposure at any time and location require detailed radiative transfer modeling taking into account a number of atmospheric factors; this result is important for paleoaltitude proxies as well as attempts to reconstruct the UV environment through geologic time and to tie extinctions, such as the end-Permian mass
National Oceanic and Atmospheric Administration, Department of Commerce — The NOAA Paleoclimatology Program archives reconstructions of past climatic conditions derived from paleoclimate proxies, in addition to the Program's large holdings...
Surgical reconstruction strategy of high-level sacral tumors after tumor resection
吴强; 邵增务; 杨述华; 王佰川; 范磊
2013-01-01
Objective: Surgery of the high sacrum is a challenge in the field of bone tumor therapy because of its special anatomic structure, the large intraoperative blood loss, and the difficulty of reconstruction. This study investigates individualized reconstruction strategies after resection of high-level sacral tumors. Methods: Eleven patients with high-level sacral tumors treated surgically between September 2000 and December 2011 were analyzed retrospectively. Tumor types included giant cell tumor of bone, chordoma, chondrosarcoma, osteosarcoma, and neurogenic tumors. In all cases the sacral vertebra S1 was involved. The optimal surgical plan and individualized reconstruction were selected according to the extent of invasion of the sacrum and sacroiliac joint. Results: There were no intraoperative deaths among the 11 patients; mean intraoperative blood loss was 3200 ml. Ten patients were followed up for 8 months to 6 years (mean, 24 months). Early postoperative complications comprised one case of wound-edge necrosis with delayed healing, one case of dysuria, and one cerebrospinal fluid leak. Early functional recovery was good, with a 66.7% improvement rate in neurological deficits. Local recurrence occurred in two patients (one giant cell tumor and one chondrosarcoma); no distant metastasis was observed. No loosening or breakage of the screw-rod constructs, and no resorption of the ends of the implanted fibular allografts, was found during follow-up. Conclusion: Careful surgical planning and an individualized resection and reconstruction strategy help ensure surgical success; reducing intraoperative blood loss, appropriately preserving cauda equina function, and reconstructing the pelvic ring are the key considerations.
MR-guided dynamic PET reconstruction with the kernel method and spectral temporal basis functions.
Novosad, Philip; Reader, Andrew J
2016-06-21
Recent advances in dynamic positron emission tomography (PET) reconstruction have demonstrated that it is possible to achieve markedly improved end-point kinetic parameter maps by incorporating a temporal model of the radiotracer directly into the reconstruction algorithm. In this work we have developed a highly constrained, fully dynamic PET reconstruction algorithm incorporating both spectral analysis temporal basis functions and spatial basis functions derived from the kernel method applied to a co-registered T1-weighted magnetic resonance (MR) image. The dynamic PET image is modelled as a linear combination of spatial and temporal basis functions, and a maximum likelihood estimate for the coefficients can be found using the expectation-maximization (EM) algorithm. Following reconstruction, kinetic fitting using any temporal model of interest can be applied. Based on a BrainWeb T1-weighted MR phantom, we performed a realistic dynamic [(18)F]FDG simulation study with two noise levels, and investigated the quantitative performance of the proposed reconstruction algorithm, comparing it with reconstructions incorporating either spectral analysis temporal basis functions alone or kernel spatial basis functions alone, as well as with conventional frame-independent reconstruction. Compared to the other reconstruction algorithms, the proposed algorithm achieved superior performance, offering a decrease in spatially averaged pixel-level root-mean-square-error on post-reconstruction kinetic parametric maps in the grey/white matter, as well as in the tumours when they were present on the co-registered MR image. When the tumours were not visible in the MR image, reconstruction with the proposed algorithm performed similarly to reconstruction with spectral temporal basis functions and was superior to both conventional frame-independent reconstruction and frame-independent reconstruction with kernel spatial basis functions. Furthermore, we demonstrate that a joint spectral
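The model underlying this approach, a dynamic image written as a linear combination of spatial and temporal basis functions with coefficients found by EM, can be sketched in a few lines. The dimensions, the made-up bases, and the identity system matrix below are illustrative assumptions, not the authors' setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes: 100 pixels, 20 time frames, 3 spatial and 2 temporal bases.
n_pix, n_frames, n_sb, n_tb = 100, 20, 3, 2

B = rng.random((n_pix, n_sb))                            # spatial basis (e.g. kernel columns)
T = np.exp(-np.outer(np.arange(n_frames), [0.1, 0.5]))   # decaying "spectral" temporal bases

C_true = 5.0 * rng.random((n_sb, n_tb))
mean = B @ C_true @ T.T
data = rng.poisson(mean).astype(float)                   # Poisson "measured" dynamic data

# Multiplicative EM update for the coefficient matrix C under the Poisson
# likelihood, assuming an identity system matrix for simplicity.
C = np.ones((n_sb, n_tb))
for _ in range(500):
    est = B @ C @ T.T
    ratio = data / np.maximum(est, 1e-12)
    C *= (B.T @ ratio @ T) / (B.T @ np.ones_like(ratio) @ T)

recon = B @ C @ T.T                                      # fitted dynamic image
```

The multiplicative form keeps the coefficients nonnegative, which is the same positivity property that makes EM attractive for emission tomography.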
Roy, Martin; Veillette, Jean; Daubois, Virginie
2014-05-01
The reconstruction of the history of former glacial lakes is commonly based on the study of strandlines that generally consist of boulder ridges, sandy beaches and other near-shore deposits. This approach, however, is limited in some regions where the surficial geology consists of thick accumulations of fine-grained glaciolacustrine sediments that mask most deglacial landforms. This situation is particularly relevant to the study of Lake Ojibway, a large proglacial lake that developed in northern Ontario and Quebec following the retreat of the southern Laurentide ice sheet margin during the last deglaciation. The history of Ojibway lake levels remains poorly known, mainly due to the fact that this lake occupied a deep and featureless basin that favored the sedimentation of thick sequences of rhythmites and prevented the formation of well-developed strandlines. Nonetheless, detailed mapping revealed a complex sequence of discontinuous small-scale cliffs that are scattered over the flat-lying Ojibway clay plain. These terrace-like features range from 4 to 7 m in height and can be followed for tens to hundreds of meters. These small-scale geomorphic features are interpreted to represent raised shorelines that were cut into glaciolacustrine sediments by lakeshore erosional processes (i.e., wave action). These so-called wave-cut scarps (WCS) occur at elevations ranging from 3 to 30 m above the present level of Lake Abitibi (267 m), one of the lowest landmarks in the area. Here we evaluate the feasibility of using this type of relict shoreline to constrain the evolution of Ojibway lake levels. For this purpose, a series of WCS were measured along four transects of about 40 km in length in the Lake Abitibi region. The absolute elevation of 154 WCS was determined with a Digital Video Plotter software package using 1:15K air-photos, coupled with precise measurements of control points, which were measured with a high-precision Global Navigation Satellite System tied up to
Carroll, Linda J; Rothe, J Peter
2010-09-01
Like other areas of health research, there has been increasing use of qualitative methods to study public health problems such as injuries and injury prevention. Likewise, the integration of qualitative and quantitative research (mixed-methods) is beginning to assume a more prominent role in public health studies, and using mixed-methods has great potential for gaining a broad and comprehensive understanding of injuries and their prevention. However, qualitative and quantitative research methods are based on two inherently different paradigms, and their integration requires a conceptual framework that permits the unity of these two methods. We present a theory-driven framework for viewing qualitative and quantitative research, which enables us to integrate them in a conceptually sound and useful manner. This framework has its foundation within the philosophical concept of complementarity, as espoused in the physical and social sciences, and draws on Bergson's metaphysical work on the 'ways of knowing'. Through understanding how data are constructed and reconstructed, and the different levels of meaning that can be ascribed to qualitative and quantitative findings, we can use a mixed-methods approach to gain a conceptually sound, holistic knowledge about injury phenomena that will enhance our development of relevant and successful interventions.
Ammendola, R.; Biagioni, A.; Chiozzi, S.; Cretaro, P.; Cotta Ramusino, A.; Di Lorenzo, S.; Fantechi, R.; Fiorini, M.; Frezza, O.; Gianoli, A.; Lamanna, G.; Lo Cicero, F.; Lonardo, A.; Martinelli, M.; Neri, I.; Paolucci, P. S.; Pastorelli, E.; Piandani, R.; Piccini, M.; Pontisso, L.; Rossetti, D.; Simula, F.; Sozzi, M.; Vicini, P.
2017-03-01
This project aims to exploit the parallel computing power of a commercial Graphics Processing Unit (GPU) to implement fast pattern matching in the Ring Imaging Cherenkov (RICH) detector for the level 0 (L0) trigger of the NA62 experiment. In this approach, the ring-fitting algorithm is seedless, being fed with raw RICH data, with no previous information on the ring position from other detectors. Moreover, since the L0 trigger is provided with a more elaborated information than a simple multiplicity number, it results in a higher selection power. Two methods have been studied in order to reduce the data transfer latency from the readout boards of the detector to the GPU, i.e., the use of a dedicated NIC device driver with very low latency and a direct data transfer protocol from a custom FPGA-based NIC to the GPU. The performance of the system, developed through the FPGA approach, for multi-ring Cherenkov online reconstruction obtained during the NA62 physics runs is presented.
Computing Maximally Supersymmetric Scattering Amplitudes
Stankowicz, James Michael, Jr.
This dissertation reviews work in computing N = 4 super-Yang-Mills (sYM) and N = 8 maximally supersymmetric gravity (mSUGRA) scattering amplitudes in D = 4 spacetime dimensions in novel ways. After a brief introduction and overview in Ch. 1, the various techniques used to construct amplitudes in the remainder of the dissertation are discussed in Ch. 2. This includes several new concepts such as d log and pure integrand bases, as well as how to construct the amplitude using exactly one kinematic point where it vanishes. Also included in this chapter is an outline of the Mathematica package on shell diagrams and numerics.m (osdn) that was developed for the computations herein. The rest of the dissertation is devoted to explicit examples. In Ch. 3, the starting point is tree-level sYM amplitudes that have integral representations with residues that obey amplitude relations. These residues are shown to have corresponding residue numerators that allow a double copy prescription that results in mSUGRA residues. In Ch. 4, the two-loop four-point sYM amplitude is constructed in several ways, showcasing many of the techniques of Ch. 2; this includes an example of how to use osdn. The two-loop five-point amplitude is also presented in a pure integrand representation with comments on how it was constructed from one homogeneous cut of the amplitude. On-going work on the two-loop n-point amplitude is presented at the end of Ch. 4. In Ch. 5, the three-loop four-point amplitude is presented in the d log representation and in the pure integrand representation. In Ch. 6, there are several examples of four- through seven-loop planar diagrams that illustrate how considerations of the singularity structure of the amplitude underpin dual-conformal invariance. Taken with the previous examples, this is additional evidence that the structure known to exist in the planar sector extends to the full theory. At the end of this chapter is a proof that all mSUGRA amplitudes have a pole at
From entropy-maximization to equality-maximization: Gauss, Laplace, Pareto, and Subbotin
Eliazar, Iddo
2014-12-01
The entropy-maximization paradigm of statistical physics is well known to generate the omnipresent Gauss law. In this paper we establish an analogous socioeconomic model which maximizes social equality, rather than physical disorder, in the context of the distributions of income and wealth in human societies. We show that, on a logarithmic scale, the Laplace law is the socioeconomic equality-maximizing counterpart of the physical entropy-maximizing Gauss law, and that this law manifests an optimized balance between two opposing forces: (i) the rich and powerful, striving to amass ever more wealth, and thus to increase social inequality; and (ii) the masses, struggling to form more egalitarian societies, and thus to increase social equality. Our results lead from log-Gauss statistics to log-Laplace statistics, yield Paretian power-law tails of income and wealth distributions, and show how the emergence of a middle class depends on the underlying levels of socioeconomic inequality and variability. Also, in the context of asset prices with Laplace-distributed returns, our results imply that financial markets generate an optimized balance between risk and predictability.
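The Gauss/Laplace correspondence invoked here is the textbook maximum-entropy calculation (sketched below under standard assumptions, not quoted from the paper): fixing the variance yields the Gauss law, while fixing the mean absolute deviation yields the Laplace law.

```latex
% Entropy maximization with a fixed mean absolute deviation b:
\max_{f\ge 0}\; -\int_{-\infty}^{\infty} f(x)\ln f(x)\,dx
\quad\text{s.t.}\quad
\int f(x)\,dx = 1,\qquad \int |x-\mu|\,f(x)\,dx = b .
% The Lagrange conditions force f \propto e^{-\lambda|x-\mu|}, i.e. the Laplace law
f^{*}(x) = \frac{1}{2b}\,e^{-|x-\mu|/b},
% whereas fixing the variance \sigma^2 instead yields the Gauss law
f^{*}(x) = \frac{1}{\sqrt{2\pi\sigma^{2}}}\,e^{-(x-\mu)^{2}/(2\sigma^{2})}.
```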
Asymptotics of robust utility maximization
Knispel, Thomas
2012-01-01
For a stochastic factor model we maximize the long-term growth rate of robust expected power utility with parameter $\lambda\in(0,1)$. Using duality methods the problem is reformulated as an infinite time horizon, risk-sensitive control problem. Our results characterize the optimal growth rate, an optimal long-term trading strategy and an asymptotic worst-case model in terms of an ergodic Bellman equation. With these results we propose a duality approach to a "robust large deviations" criterion for optimal long-term investment.
Beeping a Maximal Independent Set
Afek, Yehuda; Alon, Noga; Bar-Joseph, Ziv; Cornejo, Alejandro; Haeupler, Bernhard; Kuhn, Fabian
2012-01-01
We consider the problem of computing a maximal independent set (MIS) in an extremely harsh broadcast model that relies only on carrier sensing. The model consists of an anonymous broadcast network in which nodes have no knowledge about the topology of the network or even an upper bound on its size. Furthermore, it is assumed that an adversary chooses at which time slot each node wakes up. At each time slot a node can either beep, that is, emit a signal, or be silent. At a particular time slot...
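The round structure of beep-based MIS computation can be illustrated with an idealized synchronous, Luby-style simulation. Unlike the paper's algorithm, this sketch assumes synchronized rounds, simultaneous wake-up, and that a beeping node can tell whether a neighbour beeped in the same slot; the graph and probabilities are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(2)

# Random graph on n nodes (symmetric adjacency matrix, no self-loops).
n = 30
A = rng.random((n, n)) < 0.2
A = np.triu(A, 1)
A = (A | A.T).astype(int)

# States: 0 = still competing, 1 = joined the MIS, 2 = eliminated.
state = np.zeros(n, dtype=int)
while (state == 0).any():
    competing = state == 0
    beep = competing & (rng.random(n) < 0.5)
    # A competing node that beeps while no neighbour beeps joins the MIS;
    # two adjacent beepers hear each other, so neither joins.
    hears = (A @ beep.astype(int)) > 0
    state[beep & ~hears] = 1
    # Competing neighbours of MIS nodes drop out.
    state[(state == 0) & ((A @ (state == 1).astype(int)) > 0)] = 2

mis = np.flatnonzero(state == 1)
```

Within a round, joiners are pairwise non-adjacent (adjacent beepers block each other), and eliminating all competing neighbours of MIS nodes preserves independence across rounds; the loop ends only when every node is decided, which gives maximality.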
Maximal Congruences on Some Semigroups
Jintana Sanwong; R.P. Sullivan
2007-01-01
In 1976 Howie proved that a finite congruence-free semigroup is a simple group if it has at least three elements but no zero element. Infinite congruence-free semigroups are far more complicated to describe, but some have been constructed using semigroups of transformations (for example, by Howie in 1981 and by Marques in 1983). Here, for certain semigroups S of numbers and of transformations, we determine all congruences p on S such that S/p is congruence-free, that is, we describe all maximal congruences on such semigroups S.
Trace: a high-throughput tomographic reconstruction engine for large-scale datasets.
Bicer, Tekin; Gürsoy, Doğa; Andrade, Vincent De; Kettimuthu, Rajkumar; Scullin, William; Carlo, Francesco De; Foster, Ian T
2017-01-01
Modern synchrotron light sources and detectors produce data at such scale and complexity that large-scale computation is required to unleash their full power. One of the widely used imaging techniques that generates data at tens of gigabytes per second is computed tomography (CT). Although CT experiments result in rapid data generation, the analysis and reconstruction of the collected data may require hours or even days of computation time with a medium-sized workstation, which hinders the scientific progress that relies on the results of analysis. We present Trace, a data-intensive computing engine that we have developed to enable high-performance implementation of iterative tomographic reconstruction algorithms for parallel computers. Trace provides fine-grained reconstruction of tomography datasets using both (thread-level) shared memory and (process-level) distributed memory parallelization. Trace utilizes a special data structure called the replicated reconstruction object to maximize application performance. We also present the optimizations that we apply to the replicated reconstruction objects and evaluate them using tomography datasets collected at the Advanced Photon Source. Our experimental evaluations show that our optimizations and parallelization techniques can provide 158× speedup using 32 compute nodes (384 cores) over a single-core configuration and can sharply decrease the end-to-end processing time of a large sinogram (with 4501 × 1 × 22,400 dimensions) from 12.5 h, scaling across compute nodes to minimize reconstruction times.
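The replicated-reconstruction-object idea, giving each worker a private copy of the output to accumulate into and reducing the copies at the end instead of synchronizing on shared memory, can be mimicked in a toy sketch. The chunking, the 16-bin output, and the accumulation rule below are hypothetical stand-ins, not Trace's implementation:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def process_chunk(chunk):
    """Accumulate one chunk of input into a private (replicated) output array."""
    replica = np.zeros(16)
    for v in chunk:
        replica[v % 16] += v          # stand-in for a backprojection update
    return replica

data = np.arange(1000)
chunks = np.array_split(data, 4)      # partition the work across 4 workers

with ThreadPoolExecutor(max_workers=4) as ex:
    replicas = list(ex.map(process_chunk, chunks))

result = np.sum(replicas, axis=0)     # reduce the replicas into the final object
```

Because additive accumulation commutes, the reduced result equals a sequential pass over all the data, while the workers never contend for the same memory.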
Knowledge discovery by accuracy maximization.
Cacciatore, Stefano; Luchinat, Claudio; Tenori, Leonardo
2014-04-01
Here we describe KODAMA (knowledge discovery by accuracy maximization), an unsupervised and semisupervised learning algorithm that performs feature extraction from noisy and high-dimensional data. Unlike other data mining methods, the peculiarity of KODAMA is that it is driven by an integrated procedure of cross-validation of the results: the discovery of a local manifold's topology is led by a classifier through a Monte Carlo procedure of maximization of cross-validated predictive accuracy. In this way, the method ensures the highest robustness of the obtained solution. This robustness is demonstrated on experimental datasets of gene expression and metabolomics, where KODAMA compares favorably with other existing feature extraction methods. KODAMA is then applied to an astronomical dataset, revealing unexpected features. Interesting and not easily predictable features are also found in the analysis of the State of the Union speeches by American presidents: KODAMA reveals an abrupt linguistic transition sharply separating all post-Reagan from all pre-Reagan speeches. The transition occurs during Reagan's presidency and not from its beginning.
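The driving idea, a Monte Carlo search that maximizes cross-validated predictive accuracy, can be caricatured with leave-one-out 1-nearest-neighbour accuracy on a toy two-cluster dataset. This is an illustrative sketch only; the classifier, the flip proposals, and the acceptance rule are assumptions, and KODAMA's actual procedure is far richer:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two well-separated toy clusters in the plane.
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])

def loo_1nn_accuracy(X, y):
    """Leave-one-out 1-nearest-neighbour accuracy of the labelling y."""
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    np.fill_diagonal(D, np.inf)
    nn = D.argmin(axis=1)
    return (y[nn] == y).mean()

# Monte Carlo maximization of cross-validated accuracy: propose single-point
# label flips and keep any flip that does not decrease the accuracy.
y = rng.integers(0, 2, len(X))
best = loo_1nn_accuracy(X, y)
for _ in range(2000):
    i = rng.integers(len(X))
    y_new = y.copy()
    y_new[i] = 1 - y_new[i]
    acc = loo_1nn_accuracy(X, y_new)
    if acc >= best:
        y, best = y_new, acc
```

Starting from a random labelling (accuracy near 0.5), the hill climb drives the labels toward a cluster-consistent assignment, which is the sense in which cross-validated accuracy acts as the objective being maximized.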
Inapproximability of maximal strip recovery
Jiang, Minghui
2009-01-01
In comparative genomics, the first step of sequence analysis is usually to decompose two or more genomes into syntenic blocks that are segments of homologous chromosomes. For the reliable recovery of syntenic blocks, noise and ambiguities in the genomic maps need to be removed first. Maximal Strip Recovery (MSR) is an optimization problem proposed by Zheng, Zhu, and Sankoff for reliably recovering syntenic blocks from genomic maps in the midst of noise and ambiguities. Given $d$ genomic maps as sequences of gene markers, the objective of MSR-$d$ is to find $d$ subsequences, one subsequence of each genomic map, such that the total length of syntenic blocks in these subsequences is maximized. For any constant $d \ge 2$, a polynomial-time 2d-approximation for MSR-$d$ was previously known. In this paper, we show that for any $d \ge 2$, MSR-$d$ is APX-hard, even for the most basic version of the problem in which all gene markers are distinct and appear in positive orientation in each genomic map. Moreover, we provi...
Maximal right smooth extension chains
Huang, Yun Bao
2010-01-01
If $w=u\alpha$ for $\alpha\in \Sigma=\{1,2\}$ and $u\in \Sigma^*$, then $w$ is said to be a \textit{simple right extension} of $u$, denoted by $u\prec w$. Let $k$ be a positive integer and $P^k(\epsilon)$ denote the set of all $C^\infty$-words of height $k$. Set $u_1, u_2, \ldots, u_m \in P^k(\epsilon)$; if $u_1\prec u_2\prec \ldots\prec u_m$ and there is no element $v$ of $P^k(\epsilon)$ such that $v\prec u_1$ or $u_m\prec v$, then $u_1\prec u_2\prec\ldots\prec u_m$ is said to be a \textit{maximal right smooth extension (MRSE) chain} of height $k$. In this paper, we show that MRSE chains of height $k$ constitute a partition of smooth words of height $k$ and give a formula for the number of MRSE chains of height $k$ for each positive integer $k$. Moreover, since there exist a minimal height $h_1$ and a maximal height $h_2$ of smooth words of length $n$ for each positive integer $n$, we find that MRSE chains of heights $h_1-1$ and $h_2+1$ are good candidates t...
Zhu, Hong-Ming; Pen, Ue-Li; Chen, Xuelei; Yu, Hao-Ran
2016-01-01
We present a direct approach to non-parametrically reconstruct the linear density field from an observed non-linear map. We solve for the unique displacement potential consistent with the non-linear density and positive definite coordinate transformation using a multigrid algorithm. We show that we recover the linear initial conditions up to $k\sim 1\ h/\mathrm{Mpc}$ with minimal computational cost. This reconstruction approach generalizes the linear displacement theory to fully non-linear fields, potentially substantially expanding the BAO and RSD information content of dense large scale structure surveys, including for example the SDSS main sample and 21cm intensity mapping.
Siddeq, M. M.; Rodrigues, M. A.
2015-09-01
Image compression techniques are widely used on 2D images, 2D video, 3D images, and 3D video. There are many types of compression techniques and among the most popular are JPEG and JPEG2000. In this research, we introduce a new compression method based on applying a two-level discrete cosine transform (DCT) and a two-level discrete wavelet transform (DWT) in connection with novel compression steps for high-resolution images. The proposed image compression algorithm consists of four steps: (1) transform an image by a two-level DWT followed by a DCT to produce two matrices, the DC-Matrix and the AC-Matrix, i.e. the low- and high-frequency matrices, respectively; (2) apply a second-level DCT on the DC-Matrix to generate two arrays, namely the nonzero-array and the zero-array; (3) apply the Minimize-Matrix-Size algorithm to the AC-Matrix and to the other high frequencies generated by the second-level DWT; (4) apply arithmetic coding to the output of the previous steps. A novel decompression algorithm, the Fast-Match-Search algorithm (FMS), is used to reconstruct all high-frequency matrices. The FMS algorithm computes all compressed data probabilities by using a table of data, and then uses a binary search algorithm to find decompressed data inside the table. Thereafter, all decoded DC-values are combined with the decoded AC-coefficients in one matrix, followed by an inverse two-level DCT with two-level DWT. The technique is tested by compression and reconstruction of 3D surface patches. Additionally, this technique is compared with the JPEG and JPEG2000 algorithms through 2D and 3D root-mean-square-error following reconstruction. The results demonstrate that the proposed compression method has better visual properties than JPEG and JPEG2000 and is able to more accurately reconstruct surface patches in 3D.
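The front end of such a transform cascade, a DWT applied twice followed by a DCT on the low-frequency matrix, can be sketched with a hand-rolled one-level Haar DWT and an explicit orthonormal DCT-II. This is a minimal illustration of the idea, not the paper's exact filters, matrix splitting, or coding stages:

```python
import numpy as np

def haar_level(img):
    """One level of a 2-D Haar DWT: returns (LL, (LH, HL, HH)) subbands."""
    a = (img[:, ::2] + img[:, 1::2]) / 2.0   # row averages
    d = (img[:, ::2] - img[:, 1::2]) / 2.0   # row details
    LL = (a[::2] + a[1::2]) / 2.0
    LH = (a[::2] - a[1::2]) / 2.0
    HL = (d[::2] + d[1::2]) / 2.0
    HH = (d[::2] - d[1::2]) / 2.0
    return LL, (LH, HL, HH)

def dct2(x):
    """Orthonormal 2-D DCT-II built from an explicit transform matrix."""
    n = x.shape[0]
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    C[0] /= np.sqrt(2.0)
    return C @ x @ C.T

img = np.arange(64, dtype=float).reshape(8, 8)

# Two DWT levels, then a DCT on the low-frequency (DC) matrix.
LL1, high1 = haar_level(img)
LL2, high2 = haar_level(LL1)
dc_matrix = dct2(LL2)
```

Each Haar level is exactly invertible (sums and differences of the averages and details recover the pixels), which is what lets a decoder rebuild the image from the DC matrix and the retained high-frequency subbands.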
Kelsey, E. P.; Wake, C. P.; Osterberg, E. C.
2012-12-01
A deeper understanding of the behavior of North Pacific extratropical cyclones and anticyclones prior to the instrumental era is needed to advance our understanding of North Pacific climate variability. To help achieve this objective, we develop and use a new nonlinear ice core calibration procedure with the Eclipse (3017 m a.s.l.) and Mt. Logan (5400 m a.s.l.) ice core records from Yukon, Canada to isolate the ranges of ice core values that are consistently associated with North Pacific wintertime sea-level pressure (SLP) anomalies. Over the calibration period (1872-2001), each ice core record is ranked and divided into 10 groups of 13 years. Then for each group, the frequency of positive and negative SLP anomalies at each grid point is contoured and the composite mean SLP anomaly values are shaded. These plots elucidate areas where statistically significant SLP anomalies occur frequently in association with groups of ice core values. This new calibration procedure shows that the lowest and the two highest groups of Mt. Logan annual [Na+] are sensitive to SLP anomalies in the central and eastern Pacific and the second lowest [Na+] group is sensitive to western Pacific SLP anomalies. The highest and lowest Eclipse cold-season accumulation groups are most sensitive to SLP anomalies more distant in the western and central Pacific. This result is surprising in light of stable isotope studies suggesting a more distant moisture source for Mt. Logan. A reconstruction using these calibrated records indicates the Aleutian Low was predominantly weaker than average between 1699-1871. Our results highlight that having these geographically close ice core records is important to developing a deeper understanding of North Pacific climate variability.
Expectation Maximization for Hard X-ray Count Modulation Profiles
Benvenuto, Federico; Piana, Michele; Massone, Anna Maria
2013-01-01
This paper is concerned with the image reconstruction problem when the measured data are solar hard X-ray modulation profiles obtained from the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI) instrument. Our goal is to demonstrate that a statistical iterative method classically applied to the image deconvolution problem is very effective when utilized for the analysis of count modulation profiles in solar hard X-ray imaging based on Rotating Modulation Collimators. The algorithm described in this paper solves the maximum likelihood problem iteratively, encoding a positivity constraint into the iterative optimization scheme. The result is therefore a classical Expectation Maximization method, this time applied not to an image deconvolution problem but to image reconstruction from count modulation profiles. The technical reason that makes our implementation particularly effective in this application is the use of a very reliable stopping rule which is able to regularize the solution providing, ...
The maximal D = 4 supergravities
Wit, Bernard de [Institute for Theoretical Physics and Spinoza Institute, Utrecht University, Postbus 80.195, NL-3508 TD Utrecht (Netherlands); Samtleben, Henning [Laboratoire de Physique, ENS Lyon, 46 allee d' Italie, F-69364 Lyon CEDEX 07 (France); Trigiante, Mario [Dept. of Physics, Politecnico di Torino, Corso Duca degli Abruzzi 24, I-10129 Turin (Italy)
2007-06-15
All maximal supergravities in four space-time dimensions are presented. The ungauged Lagrangians can be encoded in an E_{7(7)}-Sp(56; R)/GL(28) matrix associated with the freedom of performing electric/magnetic duality transformations. The gauging is defined in terms of an embedding tensor θ which encodes the subgroup of E_{7(7)} that is realized as a local invariance. This embedding tensor may imply the presence of magnetic charges which require corresponding dual gauge fields. The latter can be incorporated by using a recently proposed formulation that involves tensor gauge fields in the adjoint representation of E_{7(7)}. In this formulation the results take a universal form irrespective of the electric/magnetic duality basis. We present the general class of supersymmetric and gauge invariant Lagrangians and discuss a number of applications.
Maximizing profit using recommender systems
Das, Aparna; Ricketts, Daniel
2009-01-01
Traditional recommendation systems make recommendations based solely on the customer's past purchases, product ratings and demographic data, without considering the profitability of the items being recommended. In this work we study the question of how a vendor can directly incorporate the profitability of items into its recommender so as to maximize its expected profit while still providing accurate recommendations. Our approach takes the output of any traditional recommender system and adjusts it according to item profitabilities. Our approach is parameterized so the vendor can control how much the profit-incorporating recommendation can deviate from the traditional recommendation. We study our approach under two settings and show that it achieves approximately 22% more profit than traditional recommendations.
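The abstract does not give the paper's exact adjustment rule; a minimal sketch of the general idea, with a parameter alpha controlling how far the profit-aware ranking may deviate from the traditional one (the linear blend and all names are assumptions), might look like:

```python
def profit_adjusted_ranking(items, alpha=0.3):
    """Rank items by a blend of the traditional recommender score and the
    item's profitability. alpha = 0 reproduces the traditional ranking;
    alpha = 1 ranks purely by profit. Hypothetical sketch, not the paper's
    actual mechanism.

    items: list of (name, score, profit), with score and profit normalized
    to comparable scales."""
    def blended(item):
        _, score, profit = item
        return (1 - alpha) * score + alpha * profit
    return sorted(items, key=blended, reverse=True)
```

With alpha = 0 this degenerates to the traditional recommendation, matching the abstract's requirement that the vendor control the allowed deviation.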
The maximal D=5 supergravities
de Wit, Bernard; Samtleben, Henning; Trigiante, Mario
2007-01-01
The general Lagrangian for maximal supergravity in five spacetime dimensions is presented with vector potentials in the \bar{27} and tensor fields in the 27 representation of E_6. This novel tensor-vector system is subject to an intricate set of gauge transformations, describing 3(27-t) massless helicity degrees of freedom for the vector fields and 3t massive spin degrees of freedom for the tensor fields, where the (even) value of t depends on the gauging. The kinetic term of the tensor fields is accompanied by a unique Chern-Simons coupling which involves both vector and tensor fields. The Lagrangians are completely encoded in terms of the embedding tensor which defines the E_6 subgroup that is gauged by the vectors. The embedding tensor is subject to two constraints which ensure the consistency of the combined vector-tensor gauge transformations and the supersymmetry of the full Lagrangian. This new formulation encompasses all possible gaugings.
Constraint Propagation as Information Maximization
Abdallah, A Nait
2012-01-01
Dana Scott used the partial order among partial functions for his mathematical model of recursively defined functions. He interpreted the partial order as one of information content. In this paper we elaborate on Scott's suggestion of regarding computation as a process of information maximization by applying it to the solution of constraint satisfaction problems. Here the method of constraint propagation can be interpreted as decreasing uncertainty about the solution -- that is, as gain in information about the solution. As an illustrative example we choose numerical constraint satisfaction problems to be solved by interval constraints. To facilitate this approach to constraint solving we formulate constraint satisfaction problems as formulas in predicate logic. This necessitates extending the usual semantics for predicate logic so that meaning is assigned not only to sentences but also to formulas with free variables.
Daubois, V.; Roy, M.; Veillette, J. J.
2012-12-01
different phases in the Lake Abitibi region, at elevations of 290 m, 297 m, and 313 m. For comparison, the near-maximum phase of Lake Ojibway lies at 460 m, about 250 km to the NE of the study area. Overall, the elevation and position of these wave-cut terraces suggest they were formed during episodes of long stands associated with late-stage phases of glacial Lake Ojibway. An additional lake level is indicated by the lowest set of WCBs, which lie about 6 m above modern Lake Abitibi. These terraces are also restricted to the area surrounding this lake, likely reflecting the occurrence of a paleolake Abitibi. These preliminary results thus underline the strong potential of using these lakeshore features to reconstruct former lake-level history. However, the data gathered so far do not allow firm conclusions on the number, exact elevation, and regional extent of the lake-level phases documented. Additional data are required at the scale of the area submerged by Lake Ojibway. The continuation of this work should also provide constraints on the origin of these lake-level phases, thereby potentially reinforcing our understanding of the role of meltwater discharges in the climate fluctuations that marked the early Holocene.
Improvement of tomographic reconstruction in bone SPECT
Schuenemann, M.; Sahlmann, C.O.; Siefker, U.; Luig, H.; Meller, J. [Abteilungen fuer Nuklearmedizin, Georg-August-Univ. Goettingen (Germany); Heidrich, G. [Abteilungen fuer Diagnostische Radiologie, Georg-August-Univ., Goettingen (Germany); Werner, C.; Brunner, E. [Abteilungen fuer Medizinische Statistik, Georg-August-Univ., Goettingen (Germany)
2006-07-01
Aim: the comparison between iterative reconstruction and filtered backprojection in the reconstruction of bone SPECT in the diagnosis of skeletal metastases. Patients, methods: 47 consecutive patients (vertebral segments: n = 435), with suspected malignancy of the vertebral column, were examined by bone scintigraphy and MRI (maximal interval between the two procedures: ±5 weeks). The SPECT data were reconstructed with an iterative algorithm (ISA) and with filtered backprojection. We defined semiquantitative criteria in order to assess the quality of the tomograms. Conventional reconstruction was performed both with a Wiener filter and with a low-pass filter. Iterative reconstruction was performed with the ISA algorithm. The clinical evaluation of the different reconstruction algorithms was performed with MRI as the gold standard. Results: sensitivity (%): 87.3 (ISA), 86.4 (low-pass), 79.7 (Wiener); specificity (%): 95.3 (ISA), 95 (low-pass), 85.4 (Wiener). The sensitivity of iteratively reconstructed SPECT and low-pass reconstructed SPECT was significantly higher (p < 0.05) than the sensitivity of SPECT reconstructed with the Wiener filter. The specificity of iteratively reconstructed (ISA) and low-pass-filter reconstructed SPECT was significantly higher than that of the SPECT data reconstructed with the Wiener filter. ISA was significantly superior to the Wiener SPECT with respect to all criteria of quality. Iterative reconstruction was significantly superior to the low-pass SPECT with respect to 2 of 3 criteria. In addition, the Wiener SPECT was significantly inferior to the low-pass SPECT with respect to 2 of 3 criteria. Conclusion: in our series the iterative algorithm ISA was the method of choice in the reconstruction of bone SPECT data. In comparison with conventional algorithms, ISA offers significantly higher quality of the tomograms and yields high diagnostic accuracy. (orig.)
ACL reconstruction - discharge
Anterior cruciate ligament reconstruction - discharge; ACL reconstruction - discharge ... had surgery to reconstruct your anterior cruciate ligament (ACL). The surgeon drilled holes in the bones of ...
Dynamical networks reconstructed from time series
Levnajić, Zoran
2012-01-01
A novel method for reconstructing dynamical networks from empirically measured time series is proposed. By statistically examining the correlations between motions displayed by network nodes, we derive a simple equation that directly yields the adjacency matrix, assuming the intra-network interaction functions to be known. We illustrate the method's implementation on a simple example and discuss the dependence of the reconstruction precision on the properties of the time series. Our method is applicable to any network, allowing for the reconstruction precision to be maximized and errors to be estimated.
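The abstract's equation is not reproduced here; as a hedged analogue, if the interaction function f is known and node derivatives can be estimated from the time series, the adjacency matrix of dx_i/dt = Σ_j A[i,j] f(x_j) follows from ordinary least squares (function and variable names are illustrative, not the paper's):

```python
import numpy as np

def reconstruct_adjacency(X, dX, f=np.tanh):
    """Recover A in dx_i/dt = sum_j A[i, j] * f(x_j) from sampled states X
    (shape: time x nodes) and estimated derivatives dX (same shape).
    A generic least-squares analogue of correlation-based reconstruction;
    not the paper's exact equation."""
    F = f(X)                                   # known interaction function applied to states
    # Solve F @ A.T ~= dX column-wise (one regression per node's in-weights)
    AT, *_ = np.linalg.lstsq(F, dX, rcond=None)
    return AT.T
```

The estimate degrades when the trajectory does not explore enough of state space; the dependence of reconstruction precision on the properties of the time series discussed in the abstract is exactly this conditioning issue.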
Electronic noise modeling in statistical iterative reconstruction.
Xu, Jingyan; Tsui, Benjamin M W
2009-06-01
We consider electronic noise modeling in tomographic image reconstruction when the measured signal is the sum of a Gaussian distributed electronic noise component and another random variable whose log-likelihood function satisfies a certain linearity condition. Examples of such likelihood functions include the Poisson distribution and an exponential dispersion (ED) model that can approximate the signal statistics in integration mode X-ray detectors. We formulate the image reconstruction problem as a maximum-likelihood estimation problem. Using an expectation-maximization approach, we demonstrate that a reconstruction algorithm can be obtained following a simple substitution rule from the one previously derived without electronic noise considerations. To illustrate the applicability of the substitution rule, we present examples of a fully iterative reconstruction algorithm and a sinogram smoothing algorithm both in transmission CT reconstruction when the measured signal contains additive electronic noise. Our simulation studies show the potential usefulness of accurate electronic noise modeling in low-dose CT applications.
[Deferred breast reconstruction - soul surgery?].
Kydlíček, Tomáš; Třešková, Inka; Třeška, Vladislav; Holubec, Luboš
2013-01-01
The loss or mutilation of a breast as a result of surgical treatment of neoplastic disease always has a negative impact on a woman's psyche and negatively influences the quality of her remaining life. The goal of our work was to implement deferred breast reconstruction into routine practice and to objectify the influence of reconstruction on bodily integrity, quality of life, and the feeling of satisfaction in women. Women in remission from neoplastic disease after a radical mastectomy were indicated for breast reconstruction. Between January 2002 and December 2011, deferred breast reconstruction was carried out 174 times on 163 women, with an average age of 49.2 years and an age range of 29-67 years. The most frequently used reconstruction method was a simple gel augmentation of the breast or a Becker expander/implant, 51 (29.3%) and 37 (21.3%), or either in combination with a lateral thoracodorsal flap (31; 17.8% and 47; 27%); reconstruction using a free DIEP flap was carried out 7 times (4%). Complications occurred in 19 operations (10.9%), with a dominance of inflammation and pericapsular fibrosis. In a subjective analysis, satisfaction with the results prevailed, along with an increased quality of life after reconstruction. A growing number of deferred breast reconstructions, women's satisfaction with the results, and the positive influence of renewed bodily integrity on the feeling of life satisfaction and the quality of life have elevated breast reconstructions to a qualitatively higher level.
Breast Reconstruction Alternatives
Some women who have had a ... chest. What if I choose not to get breast reconstruction? Some women decide not to have any ...
Beeping a Maximal Independent Set
Afek, Yehuda; Bar-Joseph, Ziv; Cornejo, Alejandro; Haeupler, Bernhard; Kuhn, Fabian
2012-01-01
We consider the problem of computing a maximal independent set (MIS) in an extremely harsh broadcast model that relies only on carrier sensing. The model consists of an anonymous broadcast network in which nodes have no knowledge about the topology of the network or even an upper bound on its size. Furthermore, it is assumed that an adversary chooses at which time slot each node wakes up. At each time slot a node can either beep, that is, emit a signal, or be silent. At a particular time slot, beeping nodes receive no feedback, while silent nodes can only differentiate between none of its neighbors beeping, or at least one of its neighbors beeping. We start by proving a lower bound that shows that in this model, it is not possible to locally converge to an MIS in sub-polynomial time. We then study four different relaxations of the model which allow us to circumvent the lower bound and find an MIS in polylogarithmic time. First, we show that if a polynomial upper bound on the network size is known, it is possi...
Maximal switchability of centralized networks
Vakulenko, Sergei; Morozov, Ivan; Radulescu, Ovidiu
2016-08-01
We consider continuous-time Hopfield-like recurrent networks as dynamical models for gene regulation and neural networks. We are interested in networks that contain n high-degree nodes preferably connected to a large number N_s of weakly connected satellites, a property that we call n/N_s-centrality. If the hub dynamics is slow, we obtain that the large-time network dynamics is completely defined by the hub dynamics. Moreover, such networks are maximally flexible and switchable, in the sense that they can switch from a globally attractive rest state to any structurally stable dynamics when the response time of a special controller hub is changed. In particular, we show that a decrease of the controller hub response time can lead to a sharp variation in the network attractor structure: we can obtain a set of new local attractors, whose number can increase exponentially with N, the total number of nodes of the network. These new attractors can be periodic or even chaotic. We provide an algorithm which allows us to design networks with the desired switching properties, or to learn them from time series, by adjusting the interactions between hubs and satellites. Such switchable networks could be used as models for context-dependent adaptation in functional genetics or as models for cognitive functions in neuroscience.
A Maximally Supersymmetric Kondo Model
Harrison, Sarah; Kachru, Shamit; Torroba, Gonzalo; /Stanford U., Phys. Dept. /SLAC
2012-02-17
We study the maximally supersymmetric Kondo model obtained by adding a fermionic impurity to N = 4 supersymmetric Yang-Mills theory. While the original Kondo problem describes a defect interacting with a free Fermi liquid of itinerant electrons, here the ambient theory is an interacting CFT, and this introduces qualitatively new features into the system. The model arises in string theory by considering the intersection of a stack of M D5-branes with a stack of N D3-branes, at a point in the D3 worldvolume. We analyze the theory holographically, and propose a dictionary between the Kondo problem and antisymmetric Wilson loops in N = 4 SYM. We perform an explicit calculation of the D5 fluctuations in the D3 geometry and determine the spectrum of defect operators. This establishes the stability of the Kondo fixed point together with its basic thermodynamic properties. Known supergravity solutions for Wilson loops allow us to go beyond the probe approximation: the D5s disappear and are replaced by three-form flux piercing a new topologically non-trivial S3 in the corrected geometry. This describes the Kondo model in terms of a geometric transition. A dual matrix model reflects the basic properties of the corrected gravity solution in its eigenvalue distribution.
2009-01-01
Eighty percent of the reconstruction projects in Sichuan Province will be completed by the end of the year. Despite ruins still seen everywhere in the earthquake-hit areas in Sichuan Province, new buildings have been completed, and many people have moved into new houses. Through the cameras of the media, the faces, once painful and melancholy after last year's earthquake, now look confident and firm, gratifying people all over the
Augusto, O
2012-01-01
The Large Hadron Collider (LHC) is the most powerful particle accelerator in the world. It has been designed to collide proton beams at an energy of up to 14 TeV in the center of mass. In 2011, data taking was done with a center of mass energy of 7 TeV; the instantaneous luminosity reached values greater than $4 \times 10^{32} cm^{-2} s^{-1}$ and the integrated luminosity reached 1.02 $fb^{-1}$ at LHCb. Jet reconstruction is fundamental to observing events that can be used to test perturbative QCD (pQCD). It also provides a way to observe standard model channels and to search for new physics such as SUSY. The anti-kt algorithm is a jet reconstruction algorithm based on the distance of the particles in the $\eta \times \phi$ space and on the transverse momentum of the particles. To maximize the energy resolution, all information about the trackers and the calo...
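For orientation, the anti-kt distance measures are d_ij = min(1/pt_i^2, 1/pt_j^2) ΔR_ij^2 / R^2 between objects and d_iB = 1/pt_i^2 to the beam; a naive O(N^3) sketch follows (production analyses use FastJet, and the pt-weighted recombination here is a simplification of the usual E-scheme):

```python
import math

def anti_kt(particles, R=0.5):
    """Naive O(N^3) anti-kt clustering of (pt, eta, phi) tuples into jets.
    Illustrative sketch only; real analyses use FastJet."""
    def dphi(a, b):
        d = abs(a - b) % (2 * math.pi)
        return min(d, 2 * math.pi - d)
    objs = [list(p) for p in particles]
    jets = []
    while objs:
        best = (1.0 / objs[0][0] ** 2, 0, None)    # (distance, i, j); j=None -> beam
        for i, (pti, etai, phii) in enumerate(objs):
            diB = 1.0 / pti ** 2                   # beam distance d_iB
            if diB < best[0]:
                best = (diB, i, None)
            for j in range(i + 1, len(objs)):
                ptj, etaj, phij = objs[j]
                dr2 = (etai - etaj) ** 2 + dphi(phii, phij) ** 2
                dij = min(1 / pti ** 2, 1 / ptj ** 2) * dr2 / R ** 2
                if dij < best[0]:
                    best = (dij, i, j)
        _, i, j = best
        if j is None:                              # promote object i to a jet
            jets.append(tuple(objs.pop(i)))
        else:                                      # merge objects i and j
            pti, etai, phii = objs[i]
            ptj, etaj, phij = objs[j]
            pt = pti + ptj
            objs[i] = [pt, (pti * etai + ptj * etaj) / pt,
                       (pti * phii + ptj * phij) / pt]
            del objs[j]
    return jets
```

Because the min(1/pt^2, ...) factor pairs soft particles with the nearest hard one first, jets grow outward from the hardest particles, which is what gives anti-kt its regular, cone-like jet shapes.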
Brown James
2007-12-01
This article aims to discuss the various defects that occur with maxillectomy, with a full review of the literature and discussion of the advantages and disadvantages of the various techniques described. Reconstruction of the maxilla can be relatively simple for the standard low maxillectomy that does not involve the orbital floor (Class 2). In this situation the structure of the face is less damaged and there are multiple reconstructive options for the restoration of the maxilla and dental alveolus. If the maxillectomy includes the orbit (Class 4), then problems involving the eye (enophthalmos, orbital dystopia, ectropion and diplopia) are avoided, which simplifies the reconstruction. Most controversy is associated with the maxillectomy that involves the orbital floor and dental alveolus (Class 3). A case is made for the use of the iliac crest with internal oblique as an ideal option, but there are other methods which may provide a similar result. A multidisciplinary approach to these patients is emphasised, which should include a prosthodontist with special expertise in these defects.
O'Neill, Barry J
2011-07-20
Abstract Background Tensioning of anterior cruciate ligament (ACL) reconstruction grafts affects the clinical outcome of the procedure. As yet, no consensus has been reached regarding the optimum initial tension in an ACL graft. Most surgeons rely on the maximal sustained one-handed pull technique for graft tension. We aim to determine if this technique is reproducible from patient to patient. Findings We created a device to simulate ACL reconstruction surgery using Ilizarov components and porcine flexor tendons. Six experienced ACL reconstruction surgeons volunteered to tension porcine grafts using the device to see if they could produce a consistent tension. None of the surgeons involved were able to accurately reproduce graft tension over a series of repeat trials. Conclusions We conclude that the maximal sustained one-handed pull technique of ACL graft tensioning is not reproducible from trial to trial. We also conclude that the initial tension placed on an ACL graft varies from surgeon to surgeon.
Galavis, Paulina E.; Jallow, Ngoneh; Paliwal, Bhudatt; Jeraj, Robert (Dept. of Medical Physics, Univ. of Wisconsin, Madison, WI (United States)), E-mail: galavis@wisc.edu; Hollensen, Christian (Dept. of Informatics and Mathematical Models, Technical Univ. of Denmark, Copenhagen (Denmark))
2010-10-15
Background. Characterization of textural features (spatial distributions of image intensity levels) has been considered as a tool for automatic tumor segmentation. The purpose of this work is to study the variability of textural features in PET images due to different acquisition modes and reconstruction parameters. Material and methods. Twenty patients with solid tumors underwent PET/CT scans on a GE Discovery VCT scanner, 45-60 minutes post-injection of 10 mCi of [18F]FDG. Scans were acquired in both 2D and 3D modes. For each acquisition the raw PET data were reconstructed using five different reconstruction parameters. Lesions were segmented on a default image using a threshold of 40% of maximum SUV. Fifty different texture features were calculated inside the tumors. The range of variation of the features was calculated with respect to the average value. Results. The fifty textural features were classified based on the range of variation into three categories: small, intermediate and large variability. Features with small variability (range ≤ 5%) were entropy-first order, energy, maximal correlation coefficient (second-order feature) and low-gray level run emphasis (high-order feature). The features with intermediate variability (10% ≤ range ≤ 25%) were entropy-GLCM, sum entropy, high gray level run emphasis, gray level non-uniformity, small number emphasis, and entropy-NGL. The forty remaining features presented large variations (range > 30%). Conclusion. Textural features such as entropy-first order, energy, maximal correlation coefficient, and low-gray level run emphasis exhibited small variations due to different acquisition modes and reconstruction parameters. Features with low levels of variation are better candidates for reproducible tumor segmentation. Even though features such as contrast-NGTD, coarseness, homogeneity, and busyness have been previously used, our data indicated that these features presented large variations; therefore they could not be
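Of the low-variability features, first-order entropy is the simplest to state; a sketch of its standard histogram definition follows (the bin count is a free choice, not specified in the abstract):

```python
import numpy as np

def first_order_entropy(region, bins=32):
    """First-order (histogram) entropy of the voxel intensities in a
    segmented region, in bits. One of the features the study found to vary
    little across acquisition modes and reconstruction parameters.
    Standard textbook definition; the bin count is an assumption."""
    hist, _ = np.histogram(np.asarray(region).ravel(), bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                       # drop empty bins: 0 * log 0 = 0
    return float(-np.sum(p * np.log2(p)))
```

A perfectly uniform region has zero entropy; heterogeneous uptake inside the tumor drives the value toward log2(bins).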
Maximal inequalities for demimartingales and their applications
WANG XueJun; HU ShuHe
2009-01-01
In this paper, we establish some maximal inequalities for demimartingales which generalize and improve the results of Christofides. The maximal inequalities for demimartingales are used as key inequalities to establish other results, including Doob's type maximal inequality for demimartingales, strong laws of large numbers and growth rates for demimartingales and associated random variables. Finally, we give an equivalent condition of uniform integrability for demisubmartingales.
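For context, the classical result being generalized is Doob's maximal inequality, which for a nonnegative submartingale (S_n) reads (standard statement, not quoted from the paper):

```latex
\[
  \lambda\,\mathbb{P}\Big(\max_{1\le k\le n} S_k \ge \lambda\Big)
  \;\le\; \mathbb{E}\Big[S_n\,\mathbf{1}\big\{\max_{1\le k\le n} S_k \ge \lambda\big\}\Big]
  \;\le\; \mathbb{E}[S_n], \qquad \lambda > 0 .
\]
```

Demimartingales weaken the conditional-expectation requirement to a covariance condition, so inequalities of this shape must be re-derived rather than inherited.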
Fu Xiaoqiang
2006-01-01
The Karzai regime has made some progress over the past four and a half years in post-war reconstruction. However, the Taliban's destruction and the drug economy are still having serious impacts on the security and stability of Afghanistan. Hence the settlement of these two problems has become a crux affecting the country's future. Moreover, the Karzai regime has yet to handle a series of hot potatoes in the fields of the central government's authority, military and police building-up, and foreign relations as well.
Task-oriented maximally entangled states
Agrawal, Pankaj; Pradhan, B, E-mail: agrawal@iopb.res.i, E-mail: bpradhan@iopb.res.i [Institute of Physics, Sachivalaya Marg, Bhubaneswar, Orissa 751 005 (India)
2010-06-11
We introduce the notion of a task-oriented maximally entangled state (TMES). This notion depends on the task for which a quantum state is used as the resource. TMESs are the states that can be used to carry out the task maximally. This concept may be more useful than that of a general maximally entangled state in the case of a multipartite system. We illustrate this idea by giving an operational definition of maximally entangled states on the basis of communication tasks of teleportation and superdense coding. We also give examples and a procedure to obtain such TMESs for n-qubit systems.
Inflation in maximal gauged supergravities
Kodama, Hideo [Theory Center, KEK,Tsukuba 305-0801 (Japan); Department of Particles and Nuclear Physics,The Graduate University for Advanced Studies,Tsukuba 305-0801 (Japan); Nozawa, Masato [Dipartimento di Fisica, Università di Milano, and INFN, Sezione di Milano,Via Celoria 16, 20133 Milano (Italy)
2015-05-18
We discuss the dynamics of multiple scalar fields and the possibility of realistic inflation in the maximal gauged supergravity. In this paper, we address this problem in the framework of the recently discovered 1-parameter deformation of SO(4,4) and SO(5,3) dyonic gaugings, for which the base point of the scalar manifold corresponds to an unstable de Sitter critical point. In the gauge-field frame where the embedding tensor takes its value in the sum of the 36 and 36’ representations of SL(8), we present a scheme that allows us to derive an analytic expression for the scalar potential. With the help of this formalism, we derive the full potential and gauge coupling functions in analytic form for the SO(3)×SO(3)-invariant subsectors of the SO(4,4) and SO(5,3) gaugings, and argue that there exist no new critical points in addition to those discovered so far. For the SO(4,4) gauging, we also study the behavior of the 6-dimensional scalar fields in this sector near the Dall’Agata-Inverso de Sitter critical point, at which the negative eigenvalue of the scalar mass square with the largest modulus goes to zero as the deformation parameter s approaches a critical value s_c. We find that when the deformation parameter s is taken sufficiently close to the critical value, inflation lasts more than 60 e-folds even if the initial point of the inflaton allows an O(0.1) deviation in Planck units from the Dall’Agata-Inverso critical point. It turns out that the spectral index n_s of the curvature perturbation at the time of the 60 e-folding number is always about 0.96 and within the 1σ range n_s = 0.9639 ± 0.0047 obtained by Planck, irrespective of the value of the η parameter at the critical saddle point. The tensor-scalar ratio predicted by this model is around 10^{-3} and is close to the value in the Starobinsky model.
Nursing Students' Awareness and Intentional Maximization of Their Learning Styles
Mayfield, Linda Riggs
2012-01-01
This small, descriptive, pilot study addressed survey data from four levels of nursing students who had been taught to maximize their learning styles in a first-semester freshman success skills course. Bandura's Agency Theory supports the design. The hypothesis was that without reinforcing instruction, the students' recall and application of that…
M Ahsanul Islam
Organohalide respiration, mediated by Dehalococcoides mccartyi, is a useful bioremediation process that transforms ground water pollutants and known human carcinogens such as trichloroethene and vinyl chloride into benign ethenes. Successful application of this process depends on the fundamental understanding of the respiration and metabolism of D. mccartyi. Reductive dehalogenases, encoded by rdhA genes of these anaerobic bacteria, exclusively catalyze organohalide respiration and drive metabolism. To better elucidate D. mccartyi metabolism and physiology, we analyzed available transcriptomic data for a pure isolate (Dehalococcoides mccartyi strain 195) and a mixed microbial consortium (KB-1) using the previously developed pan-genome-scale reconstructed metabolic network of D. mccartyi. The transcriptomic data, together with available proteomic data, helped confirm transcription and expression of the majority of genes in D. mccartyi genomes. A composite genome of two highly similar D. mccartyi strains (KB-1 Dhc) from the KB-1 metagenome sequence was constructed, and operon prediction was conducted for this composite genome and other single genomes. This operon analysis, together with the quality threshold clustering analysis of transcriptomic data, helped generate experimentally testable hypotheses regarding the function of a number of hypothetical proteins and the poorly understood mechanism of energy conservation in D. mccartyi. We also identified functionally enriched important clusters (13 for strain 195 and 11 for KB-1 Dhc) of co-expressed metabolic genes using information from the reconstructed metabolic network. This analysis highlighted some metabolic genes and processes, including lipid metabolism, energy metabolism, and transport, that potentially play important roles in organohalide respiration. Overall, this study shows the importance of an organism's metabolic reconstruction in analyzing various "omics" data to obtain improved understanding
Maximizing Teaching through Brain Research
Pattridge, Gregory C.
2009-01-01
Teachers and parents who read about the brain on the Internet should do so critically to determine fact from opinion. Are the assertions real about certain methods/strategies that claim to be based on brain research? Will they make a difference in their teaching and in achievement levels? Turning theory into fact takes time and replication of solid…
Are all maximally entangled states pure?
Cavalcanti, D; Terra-Cunha, M O
2005-01-01
In this Letter we study whether all maximally entangled states are pure through several entanglement monotones. Our conclusions allow us to generalize the idea of monogamy of entanglement. Then we propose a polygamy of entanglement, which expresses that if a general multipartite state is maximally entangled it is necessarily factorized by any other system.
Sampling and Representation Complexity of Revenue Maximization
Dughmi, Shaddin; Han, Li; Nisan, Noam
2014-01-01
We consider (approximate) revenue maximization in auctions where the distribution on input valuations is given via "black box" access to samples from the distribution. We observe that the number of samples required -- the sample complexity -- is tightly related to the representation complexity of an approximately revenue-maximizing auction. Our main results are upper bounds and an exponential lower bound on these complexities.
Lisonek, Petr
1996-01-01
our classifications confirm the maximality of previously known sets, the results in E^7 and E^8 are new. Their counterpart in dimension larger than 10 is a set of unit vectors with only two values of inner products in the Lorentz space R^{d,1}. The maximality of this set again follows from a bound due...
An ethical justification of profit maximization
Koch, Carsten Allan
2010-01-01
In much of the literature on business ethics and corporate social responsibility, it is more or less taken for granted that attempts to maximize profits are inherently unethical. The purpose of this paper is to investigate whether an ethical argument can be given in support of profit maximizing b...
Alternative trailer configurations for maximizing payloads
Jason D. Thompson; Dana Mitchell; John Klepac
2017-01-01
In order for harvesting contractors to stay ahead of increasing costs, it is imperative that they employ all options to maximize productivity and efficiency. Transportation can account for half the cost to deliver wood to a mill. Contractors seek to maximize truck payload to increase productivity. The Forest Operations Research Unit, Southern Research Station, USDA...
Cohomology of Weakly Reducible Maximal Triangular Algebras
董浙; 鲁世杰
2000-01-01
In this paper, we introduce the concept of weakly reducible maximal triangular algebras φ, which form a large class of maximal triangular algebras. Let B be a weakly closed algebra containing φ; we prove that the cohomology spaces H^n(φ, B) (n ≥ 1) are trivial.
Inclusive fitness maximization: An axiomatic approach.
Okasha, Samir; Weymark, John A; Bossert, Walter
2014-06-07
Kin selection theorists argue that evolution in social contexts will lead organisms to behave as if maximizing their inclusive, as opposed to personal, fitness. The inclusive fitness concept allows biologists to treat organisms as akin to rational agents seeking to maximize a utility function. Here we develop this idea and place it on a firm footing by employing a standard decision-theoretic methodology. We show how the principle of inclusive fitness maximization and a related principle of quasi-inclusive fitness maximization can be derived from axioms on an individual's 'as if preferences' (binary choices) for the case in which phenotypic effects are additive. Our results help integrate evolutionary theory and rational choice theory, help draw out the behavioural implications of inclusive fitness maximization, and point to a possible way in which evolution could lead organisms to implement it. Copyright © 2014 Elsevier Ltd. All rights reserved.
Maximal Hypersurfaces in Spacetimes with Translational Symmetry
Bulawa, Andrew
2016-01-01
We consider four-dimensional vacuum spacetimes which admit a free isometric spacelike R-action. Taking a quotient with respect to the R-action produces a three-dimensional quotient spacetime. We establish several results regarding maximal hypersurfaces (spacelike hypersurfaces of zero mean curvature) in quotient spacetimes. First, we show that complete noncompact maximal hypersurfaces must either be flat cylinders S^1 x R or conformal to the Euclidean plane. Second, we establish a positive mass theorem for certain maximal hypersurfaces. Finally, while it is meaningful to use a bounded lapse when adopting the maximal hypersurface gauge condition in the four-dimensional (asymptotically flat) setting, it is shown here that nontrivial quotient spacetimes admit the maximal hypersurface gauge only with an unbounded lapse.
Teleportation of an arbitrary two-qudit state based on the non-maximally four-qudit cluster state
2008-01-01
Two different schemes are presented for quantum teleportation of an arbitrary two-qudit state using a non-maximally four-qudit cluster state as the quantum channel. The first scheme is based on the Bell-basis measurements and the receiver may probabilistically reconstruct the original state by performing proper transformation on her particles and an auxiliary two-level particle; the second scheme is based on the generalized Bell-basis measurements and the probability of successfully teleporting the unknown state depends on those measurements which are adjusted by Alice. A comparison of the two schemes shows that the latter has a smaller probability than that of the former and, contrary to the former, the channel information and auxiliary qubit are not necessary for the receiver in the latter.
Mikos, Patryk
2015-01-01
The Fortran version of the AcerDET package was published in [1] and has been used in multiple publications on predictions for physics at the LHC. Starting from the list of particles in an event, the package provides the list of reconstructed jets, isolated electrons, muons and photons, and the reconstructed missing transverse energy. AcerDET represents a simplified version of the ATLFAST package, used for several years within the ATLAS Collaboration. In the fast simulation implemented in AcerDET, some functionalities of ATLFAST are absent, but the most crucial detector effects are implemented and the parametrisations are largely simplified. It therefore does not represent the details of either the ATLAS or CMS detectors. This short paper documents a new C++ implementation of the same algorithms as used in [1]. We believe that the package can be well suited for feasibility studies of high p_T physics at the LHC and at the planned ppFCC. Further evolution of this code is planned. [1] E. Richter-Was, AcerDET: ...
Breast Reconstruction with Implants
Breast reconstruction with implants Overview By Mayo Clinic Staff Breast reconstruction is a surgical procedure that restores shape to ... treat or prevent breast cancer. One type of breast reconstruction uses breast implants — silicone devices filled with silicone ...
Maximizing Career-Oriented Academic Advising at the Departmental Level.
Munski, Douglas C.
1983-01-01
A course developed within an academic department to expose students to discipline-specific career information and decision making is described, including course assignments and student reactions. A side benefit has been closer career-oriented connections between faculty and practitioners. (MSE)
Galavis, P.E.; Hollensen, Christian; Jallow, N.
2010-01-01
reconstruction parameters. Lesions were segmented on a default image using a threshold of 40% of maximum SUV. Fifty different texture features were calculated inside the tumors. The ranges of variation of the features were calculated with respect to the average value. Results. Fifty textural features were...... classified based on the range of variation into three categories: small, intermediate and large variability. Features with small variability (range 30%). Conclusion. Textural features such as entropy-first order, energy, maximal correlation coefficient, and low-gray level run emphasis exhibited small...
Are all maximally entangled states pure?
Cavalcanti, D.; Brandão, F. G. S. L.; Terra Cunha, M. O.
2005-10-01
We study whether all maximally entangled states are pure through several entanglement monotones. In the bipartite case, we find that the same conditions which lead to the uniqueness of the entropy of entanglement as a measure of entanglement exclude the existence of maximally mixed entangled states. In the multipartite scenario, our conclusions allow us to generalize the idea of the monogamy of entanglement: we establish the polygamy of entanglement, expressing that if a general state is maximally entangled with respect to some kind of multipartite entanglement, then it is necessarily factorized from any other system.
An ethical justification of profit maximization
Koch, Carsten Allan
2010-01-01
In much of the literature on business ethics and corporate social responsibility, it is more or less taken for granted that attempts to maximize profits are inherently unethical. The purpose of this paper is to investigate whether an ethical argument can be given in support of profit maximizing...... behaviour. It is argued that some form of consequential ethics must be applied, and that both profit seeking and profit maximization can be defended from a rule-consequential point of view. It is noted, however, that the result does not apply unconditionally, but requires that certain form of profit (and...
Robust utility maximization in a discontinuous filtration
Jeanblanc, Monique; Ngoupeyou, Armand
2012-01-01
We study a problem of utility maximization under model uncertainty with information including jumps. We prove first that the value process of the robust stochastic control problem is described by the solution of a quadratic-exponential backward stochastic differential equation with jumps. Then, we establish a dynamic maximum principle for the optimal control of the maximization problem. The characterization of the optimal model and the optimal control (consumption-investment) is given via a forward-backward system which generalizes the results of Duffie and Skiadas (1994) and El Karoui, Peng and Quenez (2001) to the case of maximization of recursive utilities including models with jumps.
Fikret Fatih Önol
2014-11-01
Full Text Available In the treatment of urethral stricture, buccal mucosa graft (BMG) reconstruction is applied with different patch techniques. In the commonly preferred approach, the BMG is anastomosed as a "ventral onlay" in bulbar urethral strictures, whereas in the pendulous urethra, where the spongiosum body that supports and nourishes the graft is thinner, a "dorsal inlay" is used. Cordon et al. compared conventional BMG "onlay" urethroplasty with "pseudo-spongioplasty", which relies on periurethral vascular tissues being closed over the graft to nourish it. In repair of anterior urethras where spongiosal supportive tissue is insufficient, this method is defined as mobilizing the surrounding dartos and Buck's fascia and joining them over the BMG patch. Between 2007 and 2012, 56 patients treated with conventional "ventral onlay" BMG urethroplasty and 46 patients treated with "pseudo-spongioplasty" were reported to have similar success rates (80% vs 84% at 3.5-year follow-up on average). While 74% of the patients who underwent pseudo-spongioplasty had disease in the distal urethra (pendulous, bulbopendulous), 82% of the patients who underwent conventional onlay urethroplasty had strictures in the proximal (bulbar) urethra. Also, stricture length in the pseudo-spongioplasty group was significantly longer (5.8 cm vs 4.7 cm on average, p=0.028). This study by Cordon et al. shows that where conventional spongioplasty is not possible, periurethral vascular tissues are adequate to nourish the BMG. Although it is an important technique that brings a new perspective to today's practice, data on complications that may arise after pseudo-spongioplasty for long distal strictures (e.g. formation of urethral diverticulum) were not reported. Along with this we think that, providing an opportunity to patch directly
Mustafa Sertbaş
2014-01-01
Full Text Available Network-oriented analysis is essential to identify those parts of a cell affected by a given perturbation. The effect of neurodegenerative perturbations in the form of diseases of brain metabolism was investigated by using a newly reconstructed brain-specific metabolic network. The developed stoichiometric model correctly represents healthy brain metabolism, and includes 630 metabolic reactions in and between astrocytes and neurons, which are controlled by 570 genes. The integration of transcriptome data of six neurodegenerative diseases (Alzheimer's disease, Parkinson's disease, Huntington's disease, amyotrophic lateral sclerosis, multiple sclerosis, schizophrenia) with the model was performed to identify reporter features specific and common for these diseases, which revealed metabolites and pathways around which the most significant changes occur. The identified metabolites are potential biomarkers for the pathology of the related diseases. Our model indicated perturbations in oxidative stress, energy metabolism including the TCA cycle and lipid metabolism, as well as several amino acid related pathways, in agreement with the role of these pathways in the studied diseases. The computational prediction of transcription factors that commonly regulate the reporter metabolites was achieved through binding-site analysis. The identified transcription factors, such as USF1, SP1 and those from the FOX families, are known from the literature to have regulatory roles in the identified reporter metabolic pathways as well as in the neurodegenerative diseases. In essence, the reconstructed brain model enables the elucidation of effects of a perturbation on brain metabolism and the illumination of possible machineries in which a specific metabolite or pathway acts as a regulatory spot for cellular reorganization.
L. Conte
2010-09-01
Full Text Available
ABSTRACT
Strength and flexibility are important components of training, and their maximal values are obtained through specific tests. However, little is known about the damage effect on skeletal muscle. The aim was to verify serum CK changes 24 h after a lengthening routine, a static flexibilizing routine, and a maximal dynamic strength test. The sample consisted of 14 subjects (men and women, 28 ± 6 yr), a control group (CG, N = 7) and an experimental group (EG, N = 4) that was submitted to a lengthening routine (EG-LG), a static flexibilizing routine (EG-FLEX) and a 1-RM test (EG-1-RM). Anthropometrics were obtained with a digital scale with stadiometer. Blood samples were analysed using the IFCC method, with reference values 26-155 U/L. The De Lorme and Watkins technique was used to assess maximal dynamic strength through the bench press and leg press. The static flexibilizing routine consisted of three 20-second sets to the point of maximal discomfort. The lengthening was done within normal movement amplitude for 6 seconds. In the inter-group analysis there was a significant difference (p < .05) in the values of EG-1RM (D = 118.7 U/L, p = .02) compared to the CG. We conclude that only the maximal dynamic strength test was capable of raising serum CK levels 24 h after exercise
Key Words: Lengthening, Static Flexibilizing, 1-RM, Creatine Kinase
RESUMEN
Strength and flexibility, very important in training, reach their maximal values through specific tests. Little is known about their harmful effects on the musculotendinous apparatus. The objective was to verify serum CK changes 24 h after lengthening, static flexibility and
Cardiovascular consequences of bed rest: effect on maximal oxygen uptake
Convertino, V. A.
1997-01-01
Maximal oxygen uptake (VO2max) is reduced in healthy individuals confined to bed rest, suggesting it is independent of any disease state. The magnitude of reduction in VO2max is dependent on duration of bed rest and the initial level of aerobic fitness (VO2max), but it appears to be independent of age or gender. Bed rest induces an elevated maximal heart rate which, in turn, is associated with decreased cardiac vagal tone, increased sympathetic catecholamine secretion, and greater cardiac beta-receptor sensitivity. Despite the elevation in heart rate, VO2max is reduced primarily from decreased maximal stroke volume and cardiac output. An elevated ejection fraction during exercise following bed rest suggests that the lower stroke volume is not caused by ventricular dysfunction but is primarily the result of decreased venous return associated with lower circulating blood volume, reduced central venous pressure, and higher venous compliance in the lower extremities. VO2max, stroke volume, and cardiac output are further compromised by exercise in the upright posture. The contribution of hypovolemia to reduced cardiac output during exercise following bed rest is supported by the close relationship between the relative magnitude (% delta) and time course of change in blood volume and VO2max during bed rest, and also by the fact that retention of plasma volume is associated with maintenance of VO2max after bed rest. Arteriovenous oxygen difference during maximal exercise is not altered by bed rest, suggesting that peripheral mechanisms may not contribute significantly to the decreased VO2max. However reduction in baseline and maximal muscle blood flow, red blood cell volume, and capillarization in working muscles represent peripheral mechanisms that may contribute to limited oxygen delivery and, subsequently, lowered VO2max. Thus, alterations in cardiac and vascular functions induced by prolonged confinement to bed rest contribute to diminution of maximal oxygen uptake
Direct parametric reconstruction in dynamic PET myocardial perfusion imaging: in vivo studies
Petibon, Yoann; Rakvongthai, Yothin; El Fakhri, Georges; Ouyang, Jinsong
2017-05-01
side, which incorporates a quadratic penalty function. The parametric images were then calculated using voxel-wise weighted least-square fitting of the reconstructed myocardial PET TACs. For the direct method, parametric images were estimated directly from the dynamic PET sinograms using a maximum a posteriori (MAP) parametric reconstruction algorithm which optimizes an objective function comprised of the Poisson log-likelihood term, the kinetic model and a quadratic penalty function. Maximization of the objective function with respect to each set of parameters was achieved using a preconditioned conjugate gradient algorithm with a specifically developed pre-conditioner. The performance of the direct method was evaluated by comparing voxel- and segment-wise estimates of K1, the tracer transport rate (ml·min-1·ml-1), to those obtained using the indirect method applied to both OSEM and OSL-MAP dynamic reconstructions. The proposed direct reconstruction method produced K1 maps with visibly lower noise than the indirect method based on OSEM and OSL-MAP reconstructions. At normal count levels, the direct method was shown to outperform the indirect method based on OSL-MAP in the sense that at a matched level of bias, reduced regional noise levels were obtained. At lower count levels, the direct method produced K1 estimates with significantly lower standard deviation across noise realizations than the indirect method based on OSL-MAP at matched bias level. In all cases, the direct method yielded lower noise and standard deviation than the indirect method based on OSEM. Overall, the proposed direct reconstruction offered a better bias-variance tradeoff than the indirect method applied to either OSEM or OSL-MAP. Direct parametric reconstruction as applied to in vivo dynamic PET MPI data is therefore a promising method for producing MBF maps with lower variance.
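The "indirect" method in the record above, voxel-wise weighted least-squares fitting of reconstructed time-activity curves (TACs), can be sketched as follows. This is a minimal illustration only: the design matrix here encodes a generic linear-in-parameters model, not the actual compartment model or weights of the paper.

```python
import numpy as np

def fit_voxels(tacs, A, weights):
    """Weighted least-squares fit per voxel.
    tacs: (n_voxels, n_frames); A: (n_frames, n_params);
    weights: (n_frames,) frame weights (e.g. inverse variance)."""
    w = np.sqrt(weights)                      # apply sqrt-weights to both sides
    Aw = A * w[:, None]
    params = np.empty((tacs.shape[0], A.shape[1]))
    for v, tac in enumerate(tacs):
        params[v], *_ = np.linalg.lstsq(Aw, tac * w, rcond=None)
    return params

# toy example: two-parameter linear model, 6 time frames
t = np.linspace(1.0, 6.0, 6)
A = np.column_stack([t, np.ones_like(t)])     # slope + intercept
true = np.array([[0.5, 1.0], [0.2, 2.0]])     # per-voxel parameters
tacs = true @ A.T                             # noise-free synthetic TACs
est = fit_voxels(tacs, A, weights=np.ones_like(t))
print(np.allclose(est, true))                 # noise-free fit recovers parameters
```

The direct method of the paper instead folds this model into the reconstruction objective itself, which is why it achieves the better bias-variance tradeoff reported.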
王晓龙; 蔡琦; 陈玉清
2013-01-01
Pressurizer water level is an important monitoring parameter in a marine pressurized water reactor, which operators use to judge operating transients of the reactor. However, the pressurizer often exhibits false water level, over-range water level measurements, and loss of the measurement signal. A method based on support vector regression was therefore proposed to reconstruct the pressurizer water level from its coupling relationships with other thermal-hydraulic parameters, such as the average temperature between reactor core inlet and outlet, pressurizer pressure and temperature, coolant inventory of the main loop system, and charge and drainage flow. Simulation analysis shows that the method can quickly, accurately and efficiently reconstruct the pressurizer water level signal under normal operating conditions.
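The reconstruction idea in the record above, regressing the water level signal on coupled thermal-hydraulic parameters, can be sketched with a kernel ridge regressor as a simple stand-in for support vector regression. All features, data and hyperparameters below are illustrative, not the paper's.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gaussian (RBF) kernel matrix between two sample sets
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_kernel_ridge(X, y, lam=1e-6, gamma=2.0):
    # closed-form kernel ridge fit; returns a predictor function
    K = rbf_kernel(X, X, gamma)
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
    return lambda Xq: rbf_kernel(Xq, X, gamma) @ alpha

rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 3))                  # normalized plant parameters
y = 2.0 * X[:, 0] - X[:, 1] + 0.5 * X[:, 2]     # synthetic "water level"
predict = fit_kernel_ridge(X, y)
err = float(np.sqrt(np.mean((predict(X) - y) ** 2)))
print(err < 0.1)                                # reconstruction error is small
```

A real SVR uses an epsilon-insensitive loss and sparse support vectors, but the kernelized regression structure is the same.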
HEALTH INSURANCE: CONTRIBUTIONS AND REIMBURSEMENT MAXIMAL
HR Division
2000-01-01
Affected both by the salary adjustment index on 1.1.2000 and by the evolution of the population of staff members and fellows, the average reference salary, which is used as an index for fixed contributions and maximum reimbursements, has changed significantly. An adjustment of the amounts of the maximum reimbursements and the fixed contributions is therefore necessary, as from 1 January 2000.
Maximum reimbursements
The revised maximum reimbursements will appear on the leaflet summarising the benefits for the year 2000, which will soon be available from the divisional secretariats and from the AUSTRIA office at CERN.
Fixed contributions
The fixed contributions, applicable to some categories of voluntarily insured persons, are set as follows (amounts in CHF for monthly contributions): voluntarily insured member of the personnel, with complete coverage: 815.- (was 803.- in 1999); voluntarily insured member of the personnel, with reduced coverage: 407.- (was 402.- in 1999); voluntarily insured no longer dependent child: 326.- (was 321...
Maximizing throughput by evaluating critical utilization paths
Weeda, P.J.
1991-01-01
Recently the relationship between batch structure, bottleneck machine and maximum throughput has been explored for serial, convergent and divergent process configurations consisting of two machines and three processes. In three of the seven possible configurations a multiple batch structure maximizes
Relationship between maximal exercise parameters and individual ...
Relationship between maximal exercise parameters and individual time trial ... It is widely accepted that the ventilatory threshold (VT) is an important ... This study investigated whether the physiological responses during a 20km time trial (TT) ...
Simple technique for maximal thoracic muscle harvest.
Marshall, M Blair; Kaiser, Larry R; Kucharczuk, John C
2004-04-01
We present a modification of the technique for standard muscle flap harvest: the placement of cutaneous traction sutures. This technique allows maximal dissection of the thoracic muscles even through minimal incisions. Through improved exposure and traction, complete dissection of the muscle bed can be performed and the amount of tissue obtained maximized. Because more muscle bulk is obtained with this technique, the need for a second muscle may be avoided.
MAXIMAL POINTS OF A REGULAR TRUTH FUNCTION
Every canonical linearly separable truth function is a regular function, but not every regular truth function is linearly separable. The most promising method of determining which of the regular truth functions are linearly separable requires finding their maximal and minimal points. In this report a quick, systematic method is developed for finding the maximal points of any regular truth function in terms of its arithmetic invariants. (Author)
Maximal Subgroups of Skew Linear Groups
M. Mahdavi-Hezavehi
2002-01-01
Let D be an infinite division algebra of finite dimension over its centre Z(D) = F, and n a positive integer. The structure of maximal subgroups of skew linear groups is investigated. In particular, assume N is a normal subgroup of GLn(D) and M is a maximal subgroup of N containing Z(N). It is shown that if M/Z(N) is finite, then N is central.
Additive Approximation Algorithms for Modularity Maximization
Kawase, Yasushi; Matsui, Tomomi; Miyauchi, Atsushi
2016-01-01
The modularity is a quality function in community detection, which was introduced by Newman and Girvan (2004). Community detection in graphs is now often conducted through modularity maximization: given an undirected graph $G=(V,E)$, we are asked to find a partition $\\mathcal{C}$ of $V$ that maximizes the modularity. Although numerous algorithms have been developed to date, most of them have no theoretical approximation guarantee. Recently, to overcome this issue, the design of modularity max...
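The quality function being maximized in the record above is easy to state concretely. A minimal sketch of the Newman-Girvan modularity of a given partition, Q = Σ_c (e_c/m − (d_c/2m)²), where e_c counts intra-community edges, d_c is the total degree in community c, and m the number of edges (the toy graph below is illustrative):

```python
from collections import defaultdict

def modularity(edges, community):
    """Modularity Q of a partition (Newman & Girvan, 2004)."""
    m = len(edges)
    intra = defaultdict(int)    # edges inside each community
    degree = defaultdict(int)   # total degree per community
    for u, v in edges:
        degree[community[u]] += 1
        degree[community[v]] += 1
        if community[u] == community[v]:
            intra[community[u]] += 1
    return sum(intra[c] / m - (degree[c] / (2 * m)) ** 2 for c in degree)

# two triangles joined by one bridge edge; the natural split scores highly
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
part = {0: "a", 1: "a", 2: "a", 3: "b", 4: "b", 5: "b"}
print(round(modularity(edges, part), 4))  # → 0.3571
```

Modularity *maximization* then searches over partitions for the highest Q, which is the NP-hard step the paper's approximation algorithms address.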
Khan, N. S.; Vane, C.; Horton, B. P.; Scatena, F. N.
2012-12-01
Reliable, quantitative proxies of former sea-level and paleoenvironmental change are critical in integrating geologic and historical records into predictive models in order to better understand the response of coastal systems to marine inundation. The defining characteristic of a sea-level proxy (or sea-level indicator) is that is must possess a systematic and quantifiable relationship to elevation with respect to the tidal frame. Microfossils (e.g. foraminifera, diatoms) are used to reconstruct Holocene sea level because of their potential for providing high-resolution archives; however, these biological proxies are somewhat limited due to spatial restrictions and poor preservation in the sedimentary record of temperate and more notably tropical environments. In this study, we aim to overcome the confines of existent indicators by adapting the use of stable carbon isotopes and carbon to nitrogen ratios of bulk sedimentary organic material to a tropical coastal setting. We sampled dominant vegetation and surface sediment along 8 transects taken through tidal flat, low mangrove, high mangrove, and freshwater zones from 4 sites located in Puerto Rico. We find unique ranges in δ13C and C/N corresponding to these environmental zones that exhibit a relationship to elevation within the tidal frame. Additionally, a 2.5 m core obtained from one of the sites demonstrates changes in δ13C and C/N representative of a shift in depositional environment from marine to terrestrial conditions that is in agreement with the change in lithology from grey organic-rich mud to brown mangrove peat and the foraminiferal assemblages observed down core. We also provide an assessment of post-depositional change in δ13C and C/N due to subaerial root penetration and microbial degradation from field experiments and find that these processes are related to water table depth. We compare our findings to studies employing this reconstruction technique in temperate settings, where salt marsh
Maximal Frequent Itemset Generation Using Segmentation Apporach
M.Rajalakshmi
2011-09-01
Full Text Available Finding frequent itemsets in a data source is a fundamental operation behind Association Rule Mining. Generally, algorithms use either a bottom-up or a top-down approach to find these frequent itemsets. When the frequent itemsets to be found are long, traditional algorithms enumerate all frequent itemsets from length 1 to length n, which is an expensive process. This problem can be solved by mining only the Maximal Frequent Itemsets (MFS). Maximal frequent itemsets are frequent itemsets that have no proper frequent superset. Thus, generating only maximal frequent itemsets reduces the number of itemsets and also the time needed to generate all frequent itemsets, since each maximal itemset of length m implies the presence of 2^m − 2 frequent itemsets. Furthermore, mining only maximal frequent itemsets is sufficient in many data mining applications, such as minimal key discovery and theory extraction. In this paper, we suggest a novel method for finding the maximal frequent itemsets from huge data sources using the concept of segmentation of the data source and prioritization of segments. Empirical evaluation shows that this method outperforms various other known methods.
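The definition at the heart of the record above can be illustrated with a brute-force sketch on a toy transaction database. This shows only what an MFS *is*; the paper's segmentation and prioritization method is not reproduced here.

```python
from itertools import combinations

def frequent_itemsets(transactions, minsup):
    """All itemsets appearing in at least minsup transactions (brute force)."""
    items = sorted({i for t in transactions for i in t})
    freq = []
    for k in range(1, len(items) + 1):
        for cand in combinations(items, k):
            s = set(cand)
            if sum(s <= t for t in transactions) >= minsup:
                freq.append(frozenset(s))
    return freq

def maximal(freq):
    # keep only itemsets with no proper frequent superset
    return [f for f in freq if not any(f < g for g in freq)]

db = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}, {"a", "b", "c"}]
mfs = maximal(frequent_itemsets(db, minsup=3))
print(sorted(sorted(s) for s in mfs))  # → [['a', 'b'], ['a', 'c'], ['b', 'c']]
```

Here {a,b,c} has support 2 and is not frequent, so the three frequent pairs are maximal; each pair of length m = 2 implies its 2^2 − 2 = 2 proper non-empty frequent subsets.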
Natural selection and the maximization of fitness.
Birch, Jonathan
2016-08-01
The notion that natural selection is a process of fitness maximization gets a bad press in population genetics, yet in other areas of biology the view that organisms behave as if attempting to maximize their fitness remains widespread. Here I critically appraise the prospects for reconciliation. I first distinguish four varieties of fitness maximization. I then examine two recent developments that may appear to vindicate at least one of these varieties. The first is the 'new' interpretation of Fisher's fundamental theorem of natural selection, on which the theorem is exactly true for any evolving population that satisfies some minimal assumptions. The second is the Formal Darwinism project, which forges links between gene frequency change and optimal strategy choice. In both cases, I argue that the results fail to establish a biologically significant maximization principle. I conclude that it may be a mistake to look for universal maximization principles justified by theory alone. A more promising approach may be to find maximization principles that apply conditionally and to show that the conditions were satisfied in the evolution of particular traits.
Kames, S.; Tardif, J. C.; Bergeron, Y.
2016-03-01
Plants respond to environmental stimuli through changes in growth and development. Characteristics of wood cells such as the cross-sectional area of vessel elements (hereafter referred to as vessels) may store information about environmental factors present at the time of vessel differentiation. The analysis of vessel characteristics therefore offers a different time resolution than annual ring width because vessels in tree rings differentiate within days to a few weeks. Little research has been conducted on the sensitivity of earlywood vessels in ring-porous species in response to flooding. The general objectives of this study were to determine the plasticity of earlywood vessel to high flows and spring flooding in floodplain black ash (Fraxinus nigra Marsh.) trees and to assess the utility of developing continuous earlywood vessel chronologies in dendrohydrological reconstruction. In contrast, most dendrohydrological studies until now have mainly used vessel anomalies (flood rings) as discrete variables to identify exceptional flood events. The study area is located in the boreal region of northwestern Québec. Vessel and ring-width chronologies were generated from F. nigra trees growing on the floodplain of Lake Duparquet. Spring discharge had among all hydro-climatic variables the strongest impact on vessel formation and this signal was coherent spatially and in the frequency domain. The mean vessel area chronology was significantly and negatively correlated to discharge and both the linearity and the strength of this association were unique. In floodplain F. nigra trees, spring flooding promoted the formation of more abundant but smaller earlywood vessels. Earlywood vessels chronologies were also significantly associated with other hydrological indicators like Lake Duparquet's ice break-up date and both ice-scar frequency and height chronologies. These significant relationships stress the utility of developing continuous vessels chronologies for hydrological
Hu, Chunying; Huang, Qiuchen; Yu, Lili; Ye, Miao
2016-01-01
[Purpose] The purpose of this study was to examine the immediate effects of robot-assisted therapy on functional activity level after anterior cruciate ligament reconstruction. [Subjects and Methods] Participants included 10 patients (8 males and 2 females) following anterior cruciate ligament reconstruction. The subjects participated in robot-assisted therapy and treadmill exercise on different days. The Timed Up-and-Go test, Functional Reach Test, surface electromyography of the vastus lateralis and vastus medialis, and maximal extensor strength of isokinetic movement of the knee joint were evaluated in both groups before and after the experiment. [Results] The results for the Timed Up-and-Go Test and the 10-Meter Walk Test improved in the robot-assisted rehabilitation group. Surface electromyography of the vastus medialis muscle showed significant increases in maximum and average discharge after the intervention. [Conclusion] The results suggest that walking ability and muscle strength can be improved by robotic training. PMID:27512258
A Revenue Maximization Approach for Provisioning Services in Clouds
Li Pan
2015-01-01
Full Text Available With the increased reliability, security, and reduced cost of cloud services, more and more users are attracted to having their jobs and applications outsourced to IAAS data centers. For a cloud provider, deciding how to provision services to clients is far from trivial. The objective of this decision is to maximize the provider's revenue while fulfilling its IAAS resource constraints. This problem is defined as the IAAS cloud provider revenue maximization (ICPRM) problem in this paper. We formulate a service provision approach to help a cloud provider determine which combination of clients to admit, and at what Quality-of-Service (QoS) levels, to maximize the provider's revenue given its available resources. We show that the overall problem is nondeterministic polynomial-time (NP-) hard and develop metaheuristic solutions based on the genetic algorithm to achieve revenue maximization. Experimental simulations and numerical results show that the proposed approach is both effective and efficient in solving ICPRM problems.
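The genetic-algorithm metaheuristic mentioned in the record above can be sketched on a drastically simplified ICPRM-style instance: choose a subset of clients (a bit-string genome) to maximize revenue under one aggregate capacity constraint. The client data, fitness, and GA settings below are all illustrative assumptions, not the paper's formulation.

```python
import random

random.seed(7)
revenue = [9, 7, 6, 5, 4, 3]    # per-client revenue (illustrative)
demand = [5, 4, 3, 3, 2, 2]     # per-client resource demand
capacity = 10                   # provider's available resource

def fitness(bits):
    used = sum(d for d, b in zip(demand, bits) if b)
    if used > capacity:
        return 0                # infeasible genomes score zero
    return sum(r for r, b in zip(revenue, bits) if b)

def evolve(pop_size=30, gens=60, pmut=0.1):
    n = len(revenue)
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]            # keep the fitter half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, n)        # one-point crossover
            child = a[:cut] + b[cut:]
            child = [g ^ (random.random() < pmut) for g in child]  # mutation
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))
```

The real ICPRM genome would also encode per-client QoS levels and multiple resource dimensions, but the evolve/select/crossover/mutate loop is the same.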
A Data-Based Approach to Social Influence Maximization
Goyal, Amit; Lakshmanan, Laks V S
2011-01-01
Influence maximization is the problem of finding a set of users in a social network, such that by targeting this set, one maximizes the expected spread of influence in the network. Most of the literature on this topic has focused exclusively on the social graph, overlooking historical data, i.e., traces of past action propagations. In this paper, we study influence maximization from a novel data-based perspective. In particular, we introduce a new model, which we call credit distribution, that directly leverages available propagation traces to learn how influence flows in the network and uses this to estimate expected influence spread. Our approach also learns the different levels of influenceability of users, and it is time-aware in the sense that it takes the temporal nature of influence into account. We show that influence maximization under the credit distribution model is NP-hard and that the function that defines expected spread under our model is submodular. Based on these, we develop an approximation ...
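Because the spread function in the record above is monotone and submodular, the standard greedy routine carries the usual (1 − 1/e) approximation guarantee. A minimal sketch follows; the coverage-style spread used here is an illustrative stand-in, not the paper's credit-distribution model.

```python
def greedy_max(influence, candidates, k):
    """Greedy seed selection for a monotone submodular coverage objective.
    influence: node -> set of nodes it reaches (illustrative stand-in)."""
    chosen, covered = [], set()
    for _ in range(k):
        # pick the candidate with largest marginal gain
        best = max(candidates, key=lambda v: len(influence[v] - covered))
        if not influence[best] - covered:
            break               # no candidate adds anything new
        chosen.append(best)
        covered |= influence[best]
        candidates = [v for v in candidates if v != best]
    return chosen, covered

influence = {
    "u1": {"a", "b", "c"},
    "u2": {"c", "d"},
    "u3": {"d", "e", "f"},
}
seeds, reached = greedy_max(influence, list(influence), k=2)
print(seeds, len(reached))  # → ['u1', 'u3'] 6
```

Note the greedy step skips "u2" because its marginal gain after choosing "u1" is only {d}, while "u3" still contributes three new nodes.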
Non-maximizing output behavior for firms with a cost-constrained technology
Blank, J.L.T.
2008-01-01
In many public service industries, firms are constrained by a cost (budget) and characterized by non-maximizing output behavior, due to bureaucratic behavior, for instance. This paper proposes a model based on the assumption that firms with a cost constraint do not maximize service levels due to
Horacio Coral-Enriquez; John Cortés-Romero; Germán A. Ramos
2013-01-01
This paper proposes an alternative robust observer-based linear control technique to maximize energy capture in a 4.8 MW horizontal-axis variable-speed wind turbine. The proposed strategy uses a generalized proportional integral (GPI) observer to reconstruct the aerodynamic torque in order to obtain a generator speed optimal trajectory. Then, a robust GPI observer-based controller supported by an active disturbance rejection (ADR) approach allows asymptotic tracking of the generator speed opt...
A VARIATIONAL EXPECTATION-MAXIMIZATION METHOD FOR THE INVERSE BLACK BODY RADIATION PROBLEM
Jiantao Cheng; Tie Zhou
2008-01-01
The inverse black body radiation problem, which is to reconstruct the area temperature distribution from the measurement of the power spectrum distribution, is a well-known ill-posed problem. In this paper, a variational expectation-maximization (EM) method is developed and its convergence is studied. Numerical experiments demonstrate that the variational EM method is more efficient and accurate than the traditional methods, including the Tikhonov regularization method, the Landweber method and the conjugate gradient method.
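One of the baseline methods named above, Landweber iteration, can be sketched on a toy discretized smoothing kernel; the kernel, grid size and step count are illustrative assumptions, not the paper's setup:

```python
import numpy as np

# Landweber iteration x_{k+1} = x_k + tau * A^T (b - A x_k) on a toy
# discretized smoothing kernel; an illustrative baseline, not the paper's
# variational EM method.
n = 40
s = np.linspace(0.0, 1.0, n)
A = np.exp(-30.0 * (s[:, None] - s[None, :]) ** 2)   # smooth kernel -> ill-posed
x_true = np.exp(-((s - 0.5) ** 2) / 0.01)
b = A @ x_true

tau = 1.0 / np.linalg.norm(A, 2) ** 2   # step below 2/||A||^2 ensures convergence
x = np.zeros(n)
for _ in range(500):
    x = x + tau * (A.T @ (b - A @ x))

print(float(np.linalg.norm(A @ x - b)))  # residual shrinks from ||b||
```

Stopping the iteration early acts as the regularizer here, which is exactly why such methods are compared against Tikhonov regularization on ill-posed problems.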
Moss Phylogeny Reconstruction Using Nucleotide Pangenome of Complete Mitogenome Sequences.
Goryunov, D V; Nagaev, B E; Nikolaev, M Yu; Alexeevski, A V; Troitsky, A V
2015-11-01
Stability of composition and sequence of genes was shown earlier in 13 mitochondrial genomes of mosses (Rensing, S. A., et al. (2008) Science, 319, 64-69). It is of interest to study the evolution of mitochondrial genomes not only at the gene level, but also on the level of nucleotide sequences. To do this, we have constructed a "nucleotide pangenome" for mitochondrial genomes of 24 moss species. The nucleotide pangenome is a set of aligned nucleotide sequences of orthologous genome fragments covering the totality of all genomes. The nucleotide pangenome was constructed using specially developed new software, NPG-explorer (NPGe). The stable part of the mitochondrial genome (232 stable blocks) is shown to be, on average, 45% of its length. In the joint alignment of stable blocks, 82% of positions are conserved. The phylogenetic tree constructed with the NPGe program is in good correlation with other phylogenetic reconstructions. With the NPGe program, 30 blocks have been identified with repeats no shorter than 50 bp. The maximal length of a block with repeats is 140 bp. Duplications in the mitochondrial genomes of mosses are rare. On average, the genome contains about 500 bp in large duplications. The total length of insertions and deletions was determined in each genome. The losses and gains of DNA regions are rather active in mitochondrial genomes of mosses, and such rearrangements presumably can be used as additional markers in the reconstruction of phylogeny.
Welfare-maximizing and revenue-maximizing tariffs with a few domestic firms
Bruno Larue; Jean-Philippe Gervais
2002-01-01
In this paper we compare the orthodox optimal tariff formula with the appropriate welfare-maximizing tariff when there are a few producing or importing firms. The welfare-maximizing tariff can be very low, or even negative in some cases, while in others it can even exceed the maximum-revenue tariff. The relationship between the welfare-maximizing tariff and the number of firms need not be monotonically increasing, because the tariff is not strictly used to internalize the terms-of-trade externality...
Maximizing Complementary Quantities by Projective Measurements
M. Souza, Leonardo A.; Bernardes, Nadja K.; Rossi, Romeu
2017-04-01
In this work, we study the so-called quantitative complementarity quantities. We focus on the following physical situation: two qubits (q_A and q_B) are initially in a maximally entangled state. One of them (q_B) interacts with an N-qubit system (R). After the interaction, projective measurements are performed on each of the qubits of R, in a basis that is chosen after independent optimization procedures: maximization of the visibility, the concurrence, and the predictability. For a specific maximization procedure, we study in detail how each of the complementary quantities behaves, conditioned on the intensity of the coupling between q_B and the N qubits. We show that, if the coupling is sufficiently "strong," independent of the maximization procedure, the concurrence tends to decay quickly. Interestingly enough, the behavior of the concurrence in this model is similar to the entanglement dynamics of a two-qubit system subjected to a thermal reservoir, even though we consider finite N. However, the visibility shows a different behavior: its maximization is more efficient for stronger coupling constants. Moreover, we investigate how the distinguishability, or the information stored in different parts of the system, is distributed for different couplings.
Fisher, David J
2009-01-01
It is well-known, even at the most elementary level of scientific knowledge, that free surfaces have properties which make them differ from the underlying bulk material. In the case of liquids, it is common knowledge - even among laymen - that the liquid surface acts as though it were a distinct skin-like material. At a slightly more advanced level, it is known that the liquid surface will seek to minimize its total surface energy by minimizing its surface area; thereby affecting its local vapor-pressure and adsorption behavior. In the case of solids too, it has long been known that different
Merlin, Thibaut, E-mail: thibaut.merlin@telecom-bretagne.eu [Université Bordeaux INCIA, CNRS UMR 5287, Hôpital de Bordeaux , Bordeaux 33 33076 (France); Visvikis, Dimitris [INSERM, UMR1101, LaTIM, Université de Bretagne Occidentale, Brest 29 29609 (France); Fernandez, Philippe; Lamare, Frederic [Université Bordeaux INCIA, CNRS UMR 5287, Hôpital de Bordeaux, Bordeaux 33 33076 (France)
2015-02-15
Purpose: Partial volume effect (PVE) plays an important role in both qualitative and quantitative PET image accuracy, especially for small structures. A previously proposed voxelwise PVE correction method applied to reconstructed PET images involves the use of Lucy–Richardson deconvolution incorporating wavelet-based denoising to limit the associated propagation of noise. The aim of this study is to incorporate the deconvolution, coupled with the denoising step, directly inside the iterative reconstruction process to further improve PVE correction. Methods: The list-mode ordered subset expectation maximization (OSEM) algorithm has been modified accordingly with the application of the Lucy–Richardson deconvolution algorithm to the current estimation of the image, at each reconstruction iteration. Acquisitions of the NEMA NU2-2001 IQ phantom were performed on a GE DRX PET/CT system to study the impact of incorporating the deconvolution inside the reconstruction [with and without the point spread function (PSF) model] in comparison to its application postreconstruction and to standard iterative reconstruction incorporating the PSF model. The impact of the denoising step was also evaluated. Images were semiquantitatively assessed by studying the trade-off between the intensity recovery and the noise level in the background estimated as relative standard deviation. Qualitative assessments of the developed methods were additionally performed on clinical cases. Results: Incorporating the deconvolution without denoising within the reconstruction achieved superior intensity recovery in comparison to both standard OSEM reconstruction integrating a PSF model and application of the deconvolution algorithm in a postreconstruction process. The addition of the denoising step made it possible to limit the SNR degradation while preserving the intensity recovery. Conclusions: This study demonstrates the feasibility of incorporating the Lucy–Richardson deconvolution associated with a
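The Lucy–Richardson deconvolution step named above can be sketched in one dimension; the PSF, signal and iteration count below are illustrative assumptions, not the study's list-mode OSEM implementation:

```python
import numpy as np

# One-dimensional Richardson-Lucy deconvolution; the PSF, signal and
# iteration count are illustrative, not the study's PSF model.
def richardson_lucy(blurred, psf, n_iter=50):
    psf = psf / psf.sum()
    psf_flip = psf[::-1]
    estimate = np.full_like(blurred, blurred.mean())
    for _ in range(n_iter):
        conv = np.convolve(estimate, psf, mode="same")
        ratio = blurred / np.maximum(conv, 1e-12)
        estimate = estimate * np.convolve(ratio, psf_flip, mode="same")
    return estimate

psf = np.array([0.05, 0.25, 0.4, 0.25, 0.05])
signal = np.zeros(64)
signal[20], signal[40] = 1.0, 0.5
blurred = np.convolve(signal, psf, mode="same")
restored = richardson_lucy(blurred, psf)
print(float(blurred.max()), float(restored.max()))  # deconvolution re-sharpens peaks
```

The multiplicative update keeps the estimate nonnegative, which is why the same form slots naturally into each OSEM iteration as the study proposes.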
Polyploidy Induction of Pteroceltis tatarinowii Maxim
Lin ZHANG; Feng WANG; Zhongkui SUN; Cuicui ZHU; Rongwei CHEN
2015-01-01
[Objective] This study was conducted to obtain tetraploid Pteroceltis tatarinowii Maxim. with excellent ornamental traits. [Method] The stem apex growing points of Pteroceltis tatarinowii Maxim. were treated with different concentrations of colchicine solution for different durations to find a suitable method and obtain polyploids. [Result] The most effective induction was obtained by treatment with 0.6%-0.8% colchicine for 72 h, with a 34.2% mutation rate. Flow cytometry and chromosome observation of the stem apex growing point of P. tatarinowii Maxim. proved that tetraploid plants were successfully obtained, with chromosome number 2n=4x=36. [Conclusion] The result not only fills the gap in polyploid breeding of P. tatarinowii, but also provides an effective way to broaden the methods for cultivating fast-growing, high-quality, disease-resistant new varieties of Pteroceltis.
Quantum theory allows for absolute maximal contextuality
Amaral, Barbara; Cunha, Marcelo Terra; Cabello, Adán
2015-12-01
Contextuality is a fundamental feature of quantum theory and a necessary resource for quantum computation and communication. It is therefore important to investigate how large contextuality can be in quantum theory. Linear contextuality witnesses can be expressed as a sum S of n probabilities, and the independence number α and the Tsirelson-like number ϑ of the corresponding exclusivity graph are, respectively, the maximum of S for noncontextual theories and for the theory under consideration. A theory allows for absolute maximal contextuality if it has scenarios in which ϑ /α approaches n . Here we show that quantum theory allows for absolute maximal contextuality despite what is suggested by the examination of the quantum violations of Bell and noncontextuality inequalities considered in the past. Our proof is not constructive and does not single out explicit scenarios. Nevertheless, we identify scenarios in which quantum theory allows for almost-absolute-maximal contextuality.
The maximal process of nonlinear shot noise
Eliazar, Iddo; Klafter, Joseph
2009-05-01
In the nonlinear shot noise system model, shots' statistics are governed by general Poisson processes, and shots' decay dynamics are governed by general nonlinear differential equations. In this research we consider a nonlinear shot noise system and explore the process that tracks, over time, the system's maximal shot magnitude. This 'maximal process' is a stationary Markov process following a decay-surge evolution; it is highly robust, and it is capable of displaying both a wide spectrum of statistical behaviors and a rich variety of random decay-surge sample-path trajectories. A comprehensive analysis of the maximal process is conducted, including its Markovian structure, its decay-surge structure, and its correlation structure. All results are obtained analytically and in closed form.
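A minimal Monte Carlo sketch of such a maximal process, assuming Poisson arrivals, exponential shot magnitudes and plain exponential decay (the paper treats general nonlinear decay dynamics):

```python
import math
import random

# Monte Carlo sketch of the maximal shot-noise process M(t): shots arrive as
# a Poisson stream, magnitudes are exponential, decay is plain exponential.
# Rate, decay and magnitude law are illustrative choices.
random.seed(1)
rate, decay, horizon, dt = 2.0, 1.0, 20.0, 0.01
n_steps = round(horizon / dt)
shots = []                         # magnitudes of shots still alive
max_path = []                      # the maximal process sampled on the grid
for _ in range(n_steps):
    if random.random() < rate * dt:            # Poisson arrival in this step
        shots.append(random.expovariate(1.0))
    shots = [m * math.exp(-decay * dt) for m in shots if m > 1e-6]
    max_path.append(max(shots) if shots else 0.0)
print(len(max_path), max(max_path) > 0.0)
```

The sample path shows the decay-surge character directly: the maximum decays exponentially between arrivals and jumps whenever a new shot exceeds it.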
Energy Band Calculations for Maximally Even Superlattices
Krantz, Richard; Byrd, Jason
2007-03-01
Superlattices are multiple-well, semiconductor heterostructures that can be described by one-dimensional potential wells separated by potential barriers. We refer to a distribution of wells and barriers based on the theory of maximally even sets as a maximally even superlattice. The prototypical example of a maximally even set is the distribution of white and black keys on a piano keyboard. Black keys may represent wells and the white keys represent barriers. As the number of wells and barriers increase, efficient and stable methods of calculation are necessary to study these structures. We have implemented a finite-element method using the discrete variable representation (FE-DVR) to calculate E versus k for these superlattices. Use of the FE-DVR method greatly reduces the amount of calculation necessary for the eigenvalue problem.
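For contrast with the FE-DVR approach, the textbook delta-barrier Kronig–Penney model already shows how allowed bands emerge from a dispersion condition; the barrier strength P and energy grid below are arbitrary illustrative choices, not the superlattice structures of the abstract:

```python
import math

# Delta-barrier Kronig-Penney sketch: E (scaled units, q = sqrt(E)) lies in
# an allowed band when |cos(q) + P*sin(q)/q| <= 1, since that expression
# must equal cos(ka) for a real Bloch wavenumber k.
def in_band(E, P=3.0):
    q = math.sqrt(E)
    return abs(math.cos(q) + P * math.sin(q) / q) <= 1.0

energies = [0.01 * i for i in range(1, 4000)]
allowed = [E for E in energies if in_band(E)]

# contiguous runs of allowed energies are bands; jumps mark band gaps
bands, start, prev = [], None, None
for E in allowed:
    if prev is None or E - prev > 0.011:
        if start is not None:
            bands.append((start, prev))
        start = E
    prev = E
bands.append((start, prev))
print(len(bands))  # separated bands below E = 40
```

Sweeping E and testing |cos(ka)| <= 1 is the crudest possible band solver; the FE-DVR method of the abstract is the efficient, scalable replacement for it as the number of wells grows.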
Alloplastic adjuncts in breast reconstruction
Cabalag, Miguel S.; Rostek, Marie; Miller, George S.; Chae, Michael P.; Quinn, Tam; Rozen, Warren M.
2016-01-01
Background There has been an increasing role of acellular dermal matrices (ADMs) and synthetic meshes in both single- and two-stage implant/expander breast reconstruction. Numerous alloplastic adjuncts exist, and these vary in material type, processing, storage, surgical preparation, level of sterility, available sizes and cost. However, there are little published data on most, posing a significant challenge to the reconstructive surgeon trying to compare and select the most suitable product. The aims of this systematic review were to identify, summarize and evaluate the outcomes of studies describing the use of alloplastic adjuncts for post-mastectomy breast reconstruction. The secondary aims were to determine their cost-effectiveness and analyze outcomes in patients who also underwent radiotherapy. Methods Using the PRISMA 2009 statement, a systematic review was conducted to find articles reporting on the outcomes of the use of alloplastic adjuncts in post-mastectomy breast reconstruction. Multiple databases were searched independently by three authors (Cabalag MS, Miller GS and Chae MP), including: Ovid MEDLINE (1950 to present), Embase (1980 to 2015), PubMed and Cochrane Database of Systematic Reviews. Results Current published literature on available alloplastic adjuncts is predominantly centered on ADMs, both allogeneic and xenogeneic, with few outcome studies available for synthetic meshes. Outcomes from the 89 articles that met the inclusion criteria were summarized and analyzed. The reported outcomes on alloplastic adjunct-assisted breast reconstruction were varied, with most data available on the use of ADMs, particularly AlloDerm® (LifeCell, Branchburg, New Jersey, USA). The use of ADMs in single-stage direct-to-implant breast reconstruction resulted in lower complication rates (infection, seroma, implant loss and late revision), and was more cost effective when compared to non-ADM, two-stage reconstruction. The majority of studies demonstrated
Absence of parasympathetic reactivation after maximal exercise.
de Oliveira, Tiago Peçanha; de Alvarenga Mattos, Raphael; da Silva, Rhenan Bartels Ferreira; Rezende, Rafael Andrade; de Lima, Jorge Roberto Perrout
2013-03-01
The ability of the human organism to recover its autonomic balance soon after physical exercise cessation has an important impact on the individual's health status. Although the dynamics of heart rate recovery after maximal exercise has been studied, little is known about heart rate variability after this type of exercise. The aim of this study is to analyse the dynamics of heart rate and heart rate variability recovery after maximal exercise in healthy young men. Fifteen healthy male subjects (21·7 ± 3·4 years; 24·0 ± 2·1 kg m(-2) ) participated in the study. The experimental protocol consisted of an incremental maximal exercise test on a cycle ergometer, until maximal voluntary exhaustion. After the test, recovery R-R intervals were recorded for 5 min. From the absolute differences between peak heart rate values and the heart rate values at 1 and 5 min of the recovery, the heart rate recovery was calculated. Postexercise heart rate variability was analysed from calculations of the SDNN and RMSSD indexes, in 30-s windows (SDNN(30s) and RMSSD(30s) ) throughout recovery. One and 5 min after maximal exercise cessation, the heart rate recovered 34·7 (±6·6) and 75·5 (±6·1) bpm, respectively. With regard to HRV recovery, while the SDNN(30s) index had a slight increase, RMSSD(30s) index remained totally suppressed throughout the recovery, suggesting an absence of vagal modulation reactivation and, possibly, a discrete sympathetic withdrawal. Therefore, it is possible that the main mechanism associated with the fall of HR after maximal exercise is sympathetic withdrawal or a vagal tone restoration without vagal modulation recovery. © 2012 The Authors Clinical Physiology and Functional Imaging © 2012 Scandinavian Society of Clinical Physiology and Nuclear Medicine.
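The two indices used above, heart-rate recovery and windowed RMSSD, can be sketched as follows; the synthetic, smoothly lengthening R-R trace is illustrative only, not study data:

```python
import math

# Sketch of the study's two indices: heart-rate recovery (drop from peak HR)
# and RMSSD over 30-s windows of the R-R series.
def rmssd(rr_ms):
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

def windows_30s(rr_ms):
    out, cur, acc = [], [], 0.0
    for rr in rr_ms:
        cur.append(rr)
        acc += rr
        if acc >= 30_000:          # 30 s of cumulative R-R time
            out.append(cur)
            cur, acc = [], 0.0
    return out

# synthetic recovery: R-R lengthens from 330 ms (~182 bpm) toward 600 ms
rr = [330 + 270 * (1 - math.exp(-i / 120)) for i in range(600)]
peak_hr = 60_000 / rr[0]
cum, hr_after_1min = 0.0, None
for r in rr:
    cum += r
    if cum >= 60_000:
        hr_after_1min = 60_000 / r
        break
print(round(peak_hr - hr_after_1min, 1), [round(rmssd(w), 2) for w in windows_30s(rr)][:2])
```

A smooth exponential trace like this one has a large HRR but almost no beat-to-beat variability, which is exactly the dissociation the study reports: HR falls while RMSSD stays suppressed.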
Maximizing band gaps in plate structures
Halkjær, Søren; Sigmund, Ole; Jensen, Jakob Søndergaard
2006-01-01
Band gaps, i.e., frequency ranges in which waves cannot propagate, can be found in elastic structures for which there is a certain periodic modulation of the material properties or structure. In this paper, we maximize the band gap size for bending waves in a Mindlin plate. We analyze an infinite periodic plate using Bloch theory, which conveniently reduces the maximization problem to that of a single base cell. Secondly, we construct a finite periodic plate using a number of the optimized base cells in a postprocessed version. The dynamic properties of the finite plate are investigated...
Maximal and Minimal Congruences on Some Semigroups
Jintana SANWONG; Boorapa SINGHA; R.P.SULLIVAN
2009-01-01
In 2006, Sanwong and Sullivan described the maximal congruences on the semigroup N consisting of all non-negative integers under standard multiplication, and on the semigroup T(X) consisting of all total transformations of an infinite set X under composition. Here, we determine all maximal congruences on the semigroup Zn under multiplication modulo n. And, when Y ⊆ X, we do the same for the semigroup T(X,Y) consisting of all elements of T(X) whose range is contained in Y. We also characterise the minimal congruences on T(X,Y).
Maximizing oil yields may not optimize economics
1987-03-01
The Los Alamos National Laboratory has used the ASPEN computer code to calculate the economics of different hydroretorting conditions. When the oil yield was maximized and an oil shale plant was designed around this process, the costs turned out much higher than expected. However, calculations based on runs at less than maximum yield showed lower cost estimates. It is recommended that future efforts be concentrated on minimizing production costs rather than maximizing yields. An oil shale plant has been designed around minimum production cost, but it has not yet been tested experimentally.
Maximal Inequalities for Dependent Random Variables
Hoffmann-Jorgensen, Jorgen
2016-01-01
Maximal inequalities play a crucial role in many probabilistic limit theorems; for instance, the law of large numbers, the law of the iterated logarithm, the martingale limit theorem and the central limit theorem. Let X_1, X_2, ... be random variables with partial sums S_k = X_1 + ... + X_k. Then a...
Singularity Structure of Maximally Supersymmetric Scattering Amplitudes
Arkani-Hamed, Nima; Bourjaily, Jacob L.; Cachazo, Freddy
2014-01-01
We present evidence that loop amplitudes in maximally supersymmetric (N=4) Yang-Mills theory (SYM) beyond the planar limit share some of the remarkable structures of the planar theory. In particular, we show that through two loops, the four-particle amplitude in full N=4 SYM has only logarithmic singularities and is free of any poles at infinity, properties closely related to uniform transcendentality and the UV finiteness of the theory. We also briefly comment on implications for maximal (N=8) supergravity theory (SUGRA).
Neuromagnetic source reconstruction
Lewis, P.S.; Mosher, J.C. [Los Alamos National Lab., NM (United States); Leahy, R.M. [University of Southern California, Los Angeles, CA (United States)
1994-12-31
In neuromagnetic source reconstruction, a functional map of neural activity is constructed from noninvasive magnetoencephalographic (MEG) measurements. The overall reconstruction problem is under-determined, so some form of source modeling must be applied. We review the two main classes of reconstruction techniques: parametric current dipole models and nonparametric distributed source reconstructions. Current dipole reconstructions use a physically plausible source model, but are limited to cases in which the neural currents are expected to be highly sparse and localized. Distributed source reconstructions can be applied to a wider variety of cases, but must incorporate an implicit source model in order to arrive at a single reconstruction. We examine distributed source reconstruction in a Bayesian framework to highlight the implicit nonphysical Gaussian assumptions of minimum-norm-based reconstruction algorithms. We conclude with a brief discussion of alternative non-Gaussian approaches.
Saha, Krishnendu; Straus, Kenneth J; Chen, Yu; Glick, Stephen J
2014-08-28
To maximize sensitivity, it is desirable that ring Positron Emission Tomography (PET) systems dedicated for imaging the breast have a small bore. Unfortunately, due to parallax error this causes substantial degradation in spatial resolution for objects near the periphery of the breast. In this work, a framework for computing and incorporating an accurate system matrix into iterative reconstruction is presented in an effort to reduce spatial resolution degradation towards the periphery of the breast. The GATE Monte Carlo Simulation software was utilized to accurately model the system matrix for a breast PET system. A strategy for increasing the count statistics in the system matrix computation and for reducing the system element storage space was used by calculating only a subset of matrix elements and then estimating the rest of the elements by using the geometric symmetry of the cylindrical scanner. To implement this strategy, polar voxel basis functions were used to represent the object, resulting in a block-circulant system matrix. Simulation studies using a breast PET scanner model with ring geometry demonstrated improved contrast at 45% reduced noise level and 1.5 to 3 times resolution performance improvement when compared to MLEM reconstruction using a simple line-integral model. The GATE based system matrix reconstruction technique promises to improve resolution and noise performance and reduce image distortion at FOV periphery compared to line-integral based system matrix reconstruction.
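The MLEM update that such a system matrix plugs into can be sketched with a tiny random matrix standing in for the GATE-derived one (it carries none of the scanner physics; sizes and seed are arbitrary):

```python
import numpy as np

# Core MLEM update  x <- x * (A^T (y / A x)) / (A^T 1)  with an explicit
# system matrix A.  The tiny random A stands in for the Monte Carlo (GATE)
# system matrix of the study.
rng = np.random.default_rng(42)
n_bins, n_vox = 30, 12
A = rng.random((n_bins, n_vox)) * 0.1
x_true = rng.random(n_vox) * 5.0
y = rng.poisson(A @ x_true).astype(float)   # Poisson-noisy projection data

sens = A.sum(axis=0)                        # sensitivity image A^T 1
x = np.ones(n_vox)
for _ in range(100):
    proj = np.maximum(A @ x, 1e-12)
    x = x * (A.T @ (y / proj)) / sens
print(x.min() >= 0.0)                       # MLEM preserves nonnegativity
```

The quality of A is the whole game: replacing a line-integral A with a Monte Carlo one changes nothing in this loop, which is why the study's accuracy gains come entirely from the system matrix. Exploiting the block-circulant structure from polar voxels would additionally let A be stored and applied compactly.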
Palaeolimnological reconstruction of recent environmental change ...
environmental reconstructions including pollution and lake level monitoring. This study used a littoral ... ing mechanisms operating at the same time (Battarbee et al.,. 2005). ... Lake Malombe, during the breeding season and some of them migrate up the ...... ronmental reconstruction: the Tees estuary, northeastern England.
Reference values of maximal oxygen uptake for polish rowers.
Klusiewicz, Andrzej; Starczewski, Michał; Ładyga, Maria; Długołęcka, Barbara; Braksator, Wojciech; Mamcarz, Artur; Sitkowski, Dariusz
2014-12-09
The aim of this study was to characterize changes in maximal oxygen uptake over several years and to elaborate current reference values of this index based on determinations carried out in large and representative groups of top Polish rowers. For this study 81 female and 159 male rowers from the sub-junior to senior categories were recruited from the Polish National Team and its direct backup. All the subjects performed an incremental exercise test on a rowing ergometer. During the test maximal oxygen uptake was measured with the BxB method. The calculated reference values for elite Polish junior and U23 rowers made it possible to evaluate the athletes' fitness level against the respective reference group and may aid the coach in controlling the training process. Mean values of VO2max achieved by members of the top Polish rowing crews who over the last five years competed in the Olympic Games or World Championships were also presented. The results of the research on the "trainability" of maximal oxygen uptake lead to the conclusion that the growth rate of the index is larger in the case of high-level athletes and that the index (in absolute values) increases significantly between the ages of 19 and 22 years (U23 category).
Lambeck, Kurt; Purcell, Anthony; Flemming, Nicholas. C.; Vita-Finzi, Claudio; Alsharekh, Abdullah M.; Bailey, Geoffrey N.
2011-12-01
The history of sea level within the Red Sea basin impinges on several areas of research. For archaeology and prehistory, past sea levels of the southern sector define possible pathways of human dispersal out of Africa. For tectonics, the interglacial sea levels provide estimates of rates for vertical tectonics. For global sea level studies, the Red Sea sediments contain a significant record of changing water chemistry with implications on the mass exchange between oceans and ice sheets during glacial cycles. And, because of its geometry and location, the Red Sea provides a test laboratory for models of glacio-hydro-isostasy. The Red Sea margins contain incomplete records of sea level for the Late Holocene, for the Last Glacial Maximum, for the Last Interglacial and for earlier interglacials. These are usually interpreted in terms of tectonics and ocean volume changes but it is shown here that the glacio-hydro-isostatic process is an additional important component with characteristic spatial variability. Through an iterative analysis of the Holocene and interglacial evidence a separation of the tectonic, isostatic and eustatic contributions is possible and we present a predictive model for palaeo-shorelines and water depths for a time interval encompassing the period proposed for migrations of modern humans out of Africa. Principal conclusions include the following. (i) Late Holocene sea level signals evolve along the length of the Red Sea, with characteristic mid-Holocene highstands not developing in the central part. (ii) Last Interglacial sea level signals are also location dependent and, in the absence of tectonics, are not predicted to occur more than 1-2 m above present sea level. (iii) For both periods, Red Sea levels at 'expected far-field' elevations are not necessarily indicative of tectonic stability and the evidence points to a long-wavelength tectonic uplift component along both the African and Arabian northern and central sides of the Red Sea. (iv) The
Soelen, E.E. van; Lammertsma, E.I.; Cremer, H.; Donders, T.H.; Sangiorgi, F.; Brooks, G.R.; Larson, R.A.; Sinninghe Damsté, J.S.; Wagner-Cremer, F.; Reichart, G.J.
2010-01-01
A suite of organic geochemical, micropaleontological and palynological proxies was applied to sediments from Southwest Florida, to study the Holocene environmental changes associated with sea-level rise. Sediments were recovered from Hillsborough Bay, part of Tampa Bay, and studied using biomarkers,
Cycle-maximal triangle-free graphs
Durocher, Stephane; Gunderson, David S.; Li, Pak Ching;
2015-01-01
We conjecture that the balanced complete bipartite graph K_{⌊n/2⌋,⌈n/2⌉} contains more cycles than any other n-vertex triangle-free graph, and we make some progress toward proving this. We give equivalent conditions for cycle-maximal triangle-free graphs; show bounds...
Gradient dynamics and entropy production maximization
Janečka, Adam
2016-01-01
Gradient dynamics describes irreversible evolution by means of a dissipation potential, which leads to several advantageous features such as Maxwell–Onsager relations, the distinction between thermodynamic forces and fluxes, and a geometrical interpretation of the dynamics. Entropy production maximization is a powerful tool for predicting constitutive relations in engineering. In this paper, both approaches are compared and their shortcomings and advantages are discussed.
Robust Utility Maximization Under Convex Portfolio Constraints
Matoussi, Anis, E-mail: anis.matoussi@univ-lemans.fr [Université du Maine, Risk and Insurance institut of Le Mans Laboratoire Manceau de Mathématiques (France); Mezghani, Hanen, E-mail: hanen.mezghani@lamsin.rnu.tn; Mnif, Mohamed, E-mail: mohamed.mnif@enit.rnu.tn [University of Tunis El Manar, Laboratoire de Modélisation Mathématique et Numérique dans les Sciences de l’Ingénieur, ENIT (Tunisia)
2015-04-15
We study a robust maximization problem of terminal wealth and consumption under convex constraints on the portfolio. We state the existence and uniqueness of the consumption–investment strategy by studying the associated quadratic backward stochastic differential equation. We characterize the optimal control by using the duality method and deriving a dynamic maximum principle.
Maximizing the Motivated Mind for Emergent Giftedness.
Rea, Dan
2001-01-01
This article explains how the theory of the motivated mind conceptualizes the productive interaction of intelligence, creativity, and achievement motivation and how this theory can help educators to maximize students' emergent potential for giftedness. It discusses the integration of cold-order thinking and hot-chaotic thinking into fluid-adaptive…
The Winning Edge: Maximizing Success in College.
Schmitt, David E.
This book offers college students ideas on how to maximize their success in college by examining the personal management techniques a student needs to succeed. Chapters are as follows: "Getting and Staying Motivated"; "Setting Goals and Tapping Your Resources"; "Conquering Time"; "Think Yourself to College Success"; "Understanding and Remembering…
MAXIMAL ELEMENTS AND EQUILIBRIUM OF ABSTRACT ECONOMY
刘心歌; 蔡海涛
2001-01-01
An existence theorem for maximal elements of a new type of preference correspondences which are Qθ-majorized is given. Some existence theorems on equilibria of abstract economies and qualitative games in which the constraint or preference correspondences are Qθ-majorized are then obtained in locally convex topological vector spaces.
DNA solution of the maximal clique problem.
Ouyang, Q; Kaplan, P D; Liu, S; Libchaber, A
1997-10-17
The maximal clique problem has been solved by means of molecular biology techniques. A pool of DNA molecules corresponding to the total ensemble of six-vertex cliques was built, followed by a series of selection processes. The algorithm is highly parallel and has satisfactory fidelity. This work represents further evidence for the ability of DNA computing to solve NP-complete search problems.
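The combinatorial search that the DNA experiment performs in parallel can be illustrated with a conventional brute-force sketch; the six-vertex graph below is hypothetical, not the instance encoded in the paper, and enumerating all vertex subsets is feasible only at this tiny scale (the problem is NP-complete in general).

```python
from itertools import combinations

def maximal_cliques(n, edges):
    """Enumerate all maximal cliques of an n-vertex graph by brute force.

    Checks every vertex subset, so it is feasible only for tiny graphs
    (the DNA experiment used six vertices).
    """
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    def is_clique(vs):
        return all(b in adj[a] for a, b in combinations(vs, 2))

    cliques = [set(c) for k in range(1, n + 1)
               for c in combinations(range(n), k) if is_clique(c)]
    # A clique is maximal if no other clique strictly contains it.
    return [c for c in cliques if not any(c < other for other in cliques)]

# Hypothetical six-vertex graph (not the instance from the paper):
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (4, 5), (3, 5)]
print(maximal_cliques(6, edges))  # [{2, 3}, {0, 1, 2}, {3, 4, 5}]
```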
Maximal workload capacity on moving platforms
Heus, R.; Wertheim, A.H.
1996-01-01
Physical tasks on a moving platform required more energy than the same tasks on a non-moving platform. In this study the maximum aerobic performance (defined as V_O2max) of people working on a moving floor was established compared to the maximal aerobic performance on a non-moving floor. The main
Maximizing Resource Utilization in Video Streaming Systems
Alsmirat, Mohammad Abdullah
2013-01-01
Video streaming has recently grown dramatically in popularity over the Internet, Cable TV, and wire-less networks. Because of the resource demanding nature of video streaming applications, maximizing resource utilization in any video streaming system is a key factor to increase the scalability and decrease the cost of the system. Resources to…
Maximizing throughput in an automated test system
朱君
2007-01-01
This guide is a collection of whitepapers designed to help you develop test systems that lower your cost, increase your test throughput, and can scale with future requirements. This whitepaper provides strategies for maximizing system throughput. To download the complete developer's guide (120 pages), visit ni.com/automatedtest.
The gaugings of maximal D=6 supergravity
Bergshoeff, E.; Samtleben, H.; Sezgin, E.
2008-01-01
We construct the most general gaugings of the maximal D = 6 supergravity. The theory is (2, 2) supersymmetric, and possesses an on-shell SO(5, 5) duality symmetry which plays a key role in determining its couplings. The field content includes 16 vector fields that carry a chiral spinor representat
WEIGHTED BOUNDEDNESS OF A ROUGH MAXIMAL OPERATOR
[No author listed]
2000-01-01
In this note the authors give the weighted Lp-boundedness for a class of maximal singular integral operators with rough kernel. The result in this note is an improvement and extension of the result obtained by Chen and Lin in 1990.
Maximizing the Range of a Projectile.
Brown, Ronald A.
1992-01-01
Discusses solutions to the problem of maximizing the range of a projectile. Presents three references that solve the problem with and without the use of calculus. Offers a fourth solution suitable for introductory physics courses that relies more on trigonometry and the geometry of the problem. (MDH)
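The no-calculus result can be checked numerically. A minimal sketch, assuming the standard level-ground, no-drag model in which the range is R = v0^2 sin(2θ)/g:

```python
import math

def projectile_range(v0, theta_deg, g=9.81):
    """Level-ground, no-drag range: R = v0**2 * sin(2*theta) / g."""
    return v0 ** 2 * math.sin(math.radians(2 * theta_deg)) / g

# Scan integer launch angles; on level ground 45 degrees wins.
best = max(range(91), key=lambda a: projectile_range(20.0, a))
print(best)  # 45
```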
Ehrenfest's Lottery--Time and Entropy Maximization
Ashbaugh, Henry S.
2010-01-01
Successful teaching of the Second Law of Thermodynamics suffers from limited simple examples linking equilibrium to entropy maximization. I describe a thought experiment connecting entropy to a lottery that mixes marbles amongst a collection of urns. This mixing obeys diffusion-like dynamics. Equilibrium is achieved when the marble distribution is…
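A minimal sketch of the urn-mixing idea, with hypothetical marble counts and a plain random-move rule standing in for the lottery described in the article; the Shannon entropy of the occupancy distribution rises toward its maximum, the log of the number of urns, as the marbles spread out:

```python
import math
import random

def entropy(counts):
    """Shannon entropy (nats) of the marble distribution across urns."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c)

def mix(counts, steps, rng):
    """Repeatedly move one uniformly chosen marble to a random urn."""
    counts = list(counts)
    for _ in range(steps):
        src = rng.choices(range(len(counts)), weights=counts)[0]
        dst = rng.randrange(len(counts))
        counts[src] -= 1
        counts[dst] += 1
    return counts

rng = random.Random(0)          # fixed seed for reproducibility
start = [100, 0, 0, 0]          # every marble in one urn: zero entropy
end = mix(start, 5000, rng)
print(entropy(start), entropy(end))  # entropy rises toward log(4) ≈ 1.386
```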
Testing maximality in muon neutrino flavor mixing
Choubey, S; Choubey, Sandhya; Roy, Probir
2003-01-01
The small difference between the survival probabilities of muon neutrino and antineutrino beams, traveling through earth matter in a long baseline experiment such as MINOS, is shown to be an important measure of any possible deviation from maximality in the flavor mixing of those states.
Average utility maximization: A preference foundation
A.V. Kothiyal (Amit); V. Spinu (Vitalie); P.P. Wakker (Peter)
2014-01-01
This paper provides necessary and sufficient preference conditions for average utility maximization over sequences of variable length. We obtain full generality by using a new algebraic technique that exploits the richness structure naturally provided by the variable length of the sequen
On the Hardy-Littlewood maximal theorem
Shinji Yamashita
1982-01-01
The Hardy-Littlewood maximal theorem is extended to functions of class PL in the sense of E. F. Beckenbach and T. Radó, with a more precise expression of the absolute constant in the inequality. As applications we deduce some results on hyperbolic Hardy classes in terms of the non-Euclidean hyperbolic distance in the unit disk.
Maximal Cartel Pricing and Leniency Programs
Houba, H.E.D.; Motchenkova, E.; Wen, Q.
2008-01-01
For a general class of oligopoly models with price competition, we analyze the impact of ex-ante leniency programs in antitrust regulation on the endogenous maximal-sustainable cartel price. This impact depends upon industry characteristics including its cartel culture. Our analysis disentangles the
How to Generate Good Profit Maximization Problems
Davis, Lewis
2014-01-01
In this article, the author considers the merits of two classes of profit maximization problems: those involving perfectly competitive firms with quadratic and cubic cost functions. While relatively easy to develop and solve, problems based on quadratic cost functions are too simple to address a number of important issues, such as the use of…
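The quadratic-cost case the author describes as easy to develop and solve reduces to a one-line first-order condition; the cost and price figures below are hypothetical:

```python
def optimal_output(p, b, c):
    """Profit-maximizing quantity for a price-taking firm with cost
    C(q) = F + b*q + c*q**2 (c > 0): marginal cost b + 2*c*q = p."""
    return max((p - b) / (2 * c), 0.0)  # produce nothing if p < b

def profit(q, p, F, b, c):
    return p * q - (F + b * q + c * q ** 2)

# Hypothetical numbers: price 10, cost C(q) = 5 + 2q + 0.5q^2.
q_star = optimal_output(10, 2, 0.5)
print(q_star, profit(q_star, 10, 5, 2, 0.5))  # 8.0 27.0
```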
Maximally entangled mixed states made easy
Aiello, A; Voigt, D; Woerdman, J P
2006-01-01
We show that, contrary to a recent claim [M. Ziman and V. Bužek, Phys. Rev. A 72, 052325 (2005)], it is possible to achieve maximally entangled mixed states of two qubits from the singlet state via the action of local nonunital quantum channels. Moreover, we present a simple, feasible linear optical implementation of one such channel.
Maximizing scientific knowledge from randomized clinical trials
Gustafsson, Finn; Atar, Dan; Pitt, Bertram
2010-01-01
Trialists have an ethical and financial responsibility to plan and conduct clinical trials in a manner that will maximize the scientific knowledge gained from the trial. However, the amount of scientific information generated by randomized clinical trials in cardiovascular medicine is highly...
Maximal Heat Generation in Nanoscale Systems
ZHOU Li-Ling; LI Shu-Shen; ZENG Zhao-Yang
2009-01-01
We investigate the heat generation in a nanoscale system coupled to normal leads and find that it is maximal when the average occupation of the electrons in the nanoscale system is 0.5, no matter what mechanism induces the heat generation.
Understanding violations of Gricean maxims in preschoolers and adults.
Okanda, Mako; Asada, Kosuke; Moriguchi, Yusuke; Itakura, Shoji
2015-01-01
This study used a revised Conversational Violations Test to examine Gricean maxim violations in 4- to 6-year-old Japanese children and adults. Participants' understanding of the following maxims was assessed: be informative (first maxim of quantity), avoid redundancy (second maxim of quantity), be truthful (maxim of quality), be relevant (maxim of relation), avoid ambiguity (second maxim of manner), and be polite (maxim of politeness). Sensitivity to violations of Gricean maxims increased with age: 4-year-olds' understanding of maxims was near chance, 5-year-olds understood some maxims (first maxim of quantity and maxims of quality, relation, and manner), and 6-year-olds and adults understood all maxims. Preschoolers acquired the maxim of relation first and had the greatest difficulty understanding the second maxim of quantity. Children and adults differed in their comprehension of the maxim of politeness. The development of the pragmatic understanding of Gricean maxims and implications for the construction of developmental tasks from early childhood to adulthood are discussed.
Bioengineered human IAS reconstructs with functional and molecular properties similar to intact IAS.
Singh, Jagmohan; Rattan, Satish
2012-09-15
Because of its critical importance in rectoanal incontinence, we determined the feasibility to reconstruct internal anal sphincter (IAS) from human IAS smooth muscle cells (SMCs) with functional and molecular attributes similar to the intact sphincter. The reconstructs were developed using SMCs from the circular smooth muscle layer of the human IAS, grown in smooth muscle differentiation media under sterile conditions in Sylgard-coated tissue culture plates with central Sylgard posts. The basal tone in the reconstructs and its changes were recorded following 0 Ca(2+), KCl, bethanechol, isoproterenol, protein kinase C (PKC) activator phorbol 12,13-dibutyrate, and Rho kinase (ROCK) and PKC inhibitors Y-27632 and Gö-6850, respectively. Western blot (WB), immunofluorescence (IF), and immunocytochemical (IC) analyses were also performed. The reconstructs developed spontaneous tone (0.68 ± 0.26 mN). Bethanechol (a muscarinic agonist) and K(+) depolarization produced contraction, whereas isoproterenol (β-adrenoceptor agonist) and Y-27632 produced a concentration-dependent decrease in the tone. Maximal decrease in basal tone with Y-27632 and Gö-6850 (each 10(-5) M) was 80.45 ± 3.29 and 17.76 ± 3.50%, respectively. WB data with the IAS constructs' SMCs revealed higher levels of RhoA/ROCK, protein kinase C-potentiated inhibitor or inhibitory phosphoprotein for myosin phosphatase (CPI-17), phospho-CPI-17, MYPT1, and 20-kDa myosin light chain vs. rectal smooth muscle. WB, IF, and IC studies of original SMCs and redispersed from the reconstructs for the relative distribution of different signal transduction proteins confirmed the feasibility of reconstruction of IAS with functional properties similar to intact IAS and demonstrated the development of myogenic tone with critical dependence on RhoA/ROCK. We conclude that it is feasible to bioengineer IAS constructs using human IAS SMCs that behave like intact IAS.
Milker, Yvonne; Horton, Benjamin P.; Khan, Nicole S.; Nelson, Alan R.; Witter, Robert C.; Engelhart, Simon E.; Ewald, Michael; Brophy, Laura; Bridgeland, William T.
2016-04-01
Stratigraphic sequences beneath salt marshes along the U.S. Pacific Northwest coast preserve 7000 years of plate-boundary earthquakes at the Cascadia subduction zone. The sequences record rapid rises in relative sea level during regional coseismic subsidence caused by great earthquakes and gradual falls in relative sea level during interseismic uplift between earthquakes. These relative sea-level changes are commonly quantified using foraminiferal transfer functions with the assumption that foraminifera rapidly recolonize salt marshes and adjacent tidal flats following coseismic subsidence. The restoration of tidal inundation in the Ni-les'tun unit (NM unit) of the Bandon Marsh National Wildlife Refuge (Oregon), following extensive dike removal in August 2011, allowed us to directly observe changes in foraminiferal assemblages that occur during rapid "coseismic" (simulated by dike removal with sudden tidal flooding) and "interseismic" (stabilization of the marsh following flooding) relative sea-level changes analogous to those of past earthquake cycles. We analyzed surface sediment samples from 10 tidal stations at the restoration site (NM unit) from mudflat to high marsh, and 10 unflooded stations in the Bandon Marsh control site. Samples were collected shortly before and at 1- to 6-month intervals for 3 years after tidal restoration of the NM unit. Although tide gauge and grain-size data show rapid restoration of tides during approximately the first 3 months after dike removal, recolonization of the NM unit by foraminifera was delayed at least 10 months. Re-establishment of typical tidal foraminiferal assemblages, as observed at the control site, required 31 months after tidal restoration, with Miliammina fusca being the dominant pioneering species. If typical of past recolonizations, this delayed foraminiferal recolonization affects the accuracy of coseismic subsidence estimates during past earthquakes because significant postseismic uplift may shortly follow
Kirby, M. E.; Lund, S.; Poulsen, C.; Patterson, W.; Burnett, A.
2002-12-01
Understanding the future of Southern California's present freshwater crisis is dependent on past knowledge of the region's hydrological system. Unfortunately, with the exception of a few short dendroclimatological records and a low-resolution palynological record, there are no terrestrially-based long-term (i.e., Holocene), continuous, high-resolution (i.e., decadal-to-centennial) paleohydrological records for Southern California. Here, we present initial sedimentological and geochemical findings from sediments extracted from one of Southern California's only natural lakes, Lake Elsinore (located 75 km southeast of Los Angeles). These results link historically-based meteorological data with lake-level variations, and associated proxies, thus providing a template for geological interpretation of Southern California's paleohydrology. A comparison of lake-level variations at Lake Elsinore to regional winter season (Dec. through Feb.) precipitation amount and latitude of the 500-hPa geopotential height (5460 m; i.e., the polar front jet stream) indicates a strong linkage between atmospheric processes and the lake's hydrological budget. Higher lake levels are related to a migration of the polar front jet stream to lower latitudes, which increases the advection of moisture-rich air masses from the sub-tropical and equatorial Pacific regions, and vice versa. Both the environmental magnetic measurement CHI and the value of δ18O from calcite precipitated in the lake's water column show a strong correspondence with the historical records of lake level and winter season precipitation amount. As a result, these initial proxies provide a collaborative method for interpreting past hydrological conditions at Lake Elsinore (i.e., Southern CA). Interestingly, these proxies indicate that the past 150-200 years have been relatively wet when compared to the preceding 500 years (i.e., possibly the Little Ice Age). On a hemispheric scale, lake-level variations at Lake Elsinore are linked
Breast reconstruction after mastectomy
Daniel eSchmauss
2016-01-01
Breast cancer is the leading cause of cancer death in women worldwide. Its surgical approach has become less and less mutilating in recent decades. However, the overall number of breast reconstructions has increased significantly lately. Nowadays breast reconstruction should be individualized at its best, first of all taking into consideration oncological aspects of the tumor, neo-/adjuvant treatment, and genetic predisposition, but also its timing (immediate versus delayed breast reconstruction), as well as the patient's condition and wishes. This article gives an overview of the various possibilities of breast reconstruction, including implant- and expander-based reconstruction, flap-based reconstruction (vascularized autologous tissue), the combination of implant and flap, reconstruction using non-vascularized autologous fat, as well as refinement surgery after breast reconstruction.
Viral Quasispecies Assembly via Maximal Clique Enumeration
Toepfer, A.; Marschall, T.; Bull, R.A.; Luciani, F.; Schoenhuth, A.; Beerenwinkel, N.
2014-01-01
Virus populations can display high genetic diversity within individual hosts. The intra-host collection of viral haplotypes, called viral quasispecies, is an important determinant of virulence, pathogenesis, and treatment outcome. We present HaploClique, a computational approach to reconstruct the s
Reoperative midface reconstruction.
Acero, Julio; García, Eloy
2011-02-01
Reoperative reconstruction of the midface is a challenging issue because of the complexity of this region and the severity of the aesthetic and functional sequela related to the absence or failure of a primary reconstruction. The different situations that can lead to the indication of a reoperative reconstructive procedure after previous oncologic ablative procedures in the midface are reviewed. Surgical techniques, anatomic problems, and limitations affecting the reoperative reconstruction in this region of the head and neck are discussed.
Track reconstruction in CMS high luminosity environment
Sguazzoni, Giacomo
2014-01-01
The CMS tracker is the largest silicon detector ever built, covering 200 square meters and providing an average of 14 high-precision measurements per track. Tracking is essential for the reconstruction of objects such as jets, muons, electrons and tau leptons, starting from the raw data of the silicon pixel and strip detectors. Track reconstruction is also widely used at trigger level, as it improves object tagging and resolution. The CMS tracking code is organized in several levels, known as iterative steps, each optimized to reconstruct a class of particle trajectories, such as those of particles originating from the primary vertex or displaced tracks from particles resulting from secondary vertices. Each iterative step consists of seeding, pattern recognition and fitting by a Kalman filter, and a final filtering and cleaning. Each subsequent step works on hits not yet associated with a reconstructed particle trajectory. The CMS tracking code is continuously evolving to make the reconstruction computing load compat...
Surfaces, Digitisations and Reconstructions
2015-01-01
We present a new digital reconstruction of r-regular sets in three-dimensional Euclidean space. We introduce a vector field and analyse the relation between the topologies of the boundaries of the r-regular set and its reconstruction. This reconstruction can be carried out faster than prior models based on the same digitisation, making it attractive for computing.
Thomas, Brian C; Dalton, Sean M
2016-01-01
Chemical and morphological features of spores and pollens have been linked to changes in solar ultraviolet radiation (specifically UVB, 280-315 nm) at Earth's surface. Variation in UVB exposure as inferred from these features has been suggested as a proxy for paleoaltitude. While UVB irradiance does increase with altitude above sea level, a number of other factors affect the irradiance at any given place and time. In this modeling study we use the TUV atmospheric radiative transfer model to investigate dependence of surface-level UVB irradiance and relative biological impact on a number of constituents in Earth's atmosphere that are variable over long and short time periods. We consider changes in O3 column density, and SO2 and sulfate aerosols due to periods of volcanic activity, including that associated with the formation of the Siberian Traps. We find that UVB irradiance may be highly variable under volcanic conditions and variations in several of these atmospheric constituents can easily mimic or overwhe...
Wang, Shaopeng; Wang, Yinghui; Zhang, Ruijie; Wang, Weitao; Xu, Daoquan; Guo, Jing; Li, Pingyang; Yu, Kefu
2015-11-01
Historical levels of Pb, Zn, Cd, Cr, Cu, Ni, As, Fe, Al and Mn were found in C1 and C2 sediment cores from the Hejiang River, which is located in a typical mining region of Southern China; the records date back approximately 57 and 83 years, respectively. Metal levels in core C1, collected near the mining area, peaked in the 1960s and exhibited a decreasing trend thereafter, reflecting successful government management. Historical events such as the Pacific War and China's first 5-year economic plan were recorded in core C2, which was collected from the downstream portion of the Hejiang River. Enrichment factors (EF), geo-accumulation (Igeo), and excess flux indicate that severe contamination occurred during the period between 1956 and 1985 due to the release of high amounts of mining waste from human activities around the core C1 region. The highest EF value was displayed by As (67); this was followed by Pb (64), Cd (39), and Zn (35). In contrast, the core C2 sediments exhibited minor pollution because of dilution from tributaries (the Fu River and the Daning River) that do not flow through the mined area and because C2 was farther from the source of the metals. The results of the risk assessment codes (RAC) for both cores indicate that Cd posed a high risk to the local environment. Principal component analysis (PCA) and correlation analysis (CA) revealed that accumulation of heavy metals was mainly due to mining pollution.
Wetter, Oliver; Tuttenuj, Daniel
2016-04-01
systematically analysed the period from 1446-1542 and could prove a large number of pre-instrumental flood events of river Rhine, Birs, Birsig and Wiese in Basel. All in all the weekly led account books contained 54 Rhine flood events, whereas chroniclers and annalists only recorded seven floods during the same period. This is a ratio of almost eight to one. This large difference points to the significantly sharper "observation skills" of the account books towards smaller floods, which may be explained by the fact that bridges can be endangered by relatively small floods because of driftwood, whereas it is known that chroniclers or annalists were predominantly focussing on spectacular (extreme) flood events. We [Oliver Wetter and Daniel Tuttenuj] are now able to present first preliminary results of reconstructed peak water levels and peak discharges of pre instrumental river Aare-, Emme-, Limmat-, Reuss-, Rhine- and Saane floods. These first results clearly show the strengths as well as the limits of the data and method used, depending mainly on the river types. Of the above mentioned rivers only the floods of river Emme could not be reconstructed whereas the long-term development of peak water levels and peak discharges of the other rivers clearly correlate with major local and supra-regional Swiss flood corrections over time. PhD student Daniel Tuttenuj is going to present the results for river Emme and Saane (see Abstract Daniel Tuttenuj), whereas Dr Oliver Wetter is going to present the results for the other rivers and gives a first insight on long-term recurring periods of smaller river Birs-, Birsig-, Rhine- and Wiese flood events based on the analysis of the weekly led account books "Wochenausgabenbücher der Stadt Basel" (see also Abstract of Daniel Tuttenuj).
Tuttenuj, Daniel; Wetter, Oliver
2016-04-01
contained 54 Rhine flood events, whereas chroniclers and annalists only recorded seven floods during the same period. This is a ratio of almost eight to one. This large difference points to the significantly sharper "observation skills" of the account books towards smaller floods, which may be explained by the fact that bridges can be endangered by relatively small floods because of driftwood, whereas it is known that chroniclers or annalists were predominantly focussing on spectacular (extreme) flood events. We [Oliver Wetter and Daniel Tuttenuj] are now able to present first preliminary results of reconstructed peak water levels and peak discharges of pre instrumental river Aare-, Emme-, Limmat-, Reuss-, Rhine- and Saane floods. These first results clearly show the strengths as well as the limits of the data and method used, depending mainly on the river types. Of the above mentioned rivers only the floods of river Emme could not be reconstructed whereas the long-term development of peak water levels and peak discharges of the other rivers clearly correlate with major local and supra-regional Swiss flood corrections over time. PhD student Daniel Tuttenuj is going to present the results of river Emme and Saane, whereas Dr Oliver Wetter is going to present the results for the other rivers and gives a first insight on long-term recurring periods of smaller river Birs, Birsig, Rhine and Wiese flood events based on the analysis of the weekly led account books "Wochenausgabenbücher der Stadt Basel" (see Abstract Oliver Wetter).
Matheoud, Roberta; Della Monica, Patrizia; Loi, Gianfranco; Vigna, Luca; Krengli, Marco; Inglese, Eugenio; Brambilla, Marco
2011-01-30
The purpose of this study was to analyze the behavior of a contouring algorithm for PET images based on adaptive thresholding depending on lesions size and target-to-background (TB) ratio under different conditions of image reconstruction parameters. Based on this analysis, the image reconstruction scheme able to maximize the goodness of fit of the thresholding algorithm has been selected. A phantom study employing spherical targets was designed to determine slice-specific threshold (TS) levels which produce accurate cross-sectional areas. A wide range of TB ratio was investigated. Multiple regression methods were used to fit the data and to construct algorithms depending both on target cross-sectional area and TB ratio, using various reconstruction schemes employing a wide range of iteration number and amount of postfiltering Gaussian smoothing. Analysis of covariance was used to test the influence of iteration number and smoothing on threshold determination. The degree of convergence of ordered-subset expectation maximization (OSEM) algorithms does not influence TS determination. Among these approaches, the OSEM at two iterations and eight subsets with a 6-8 mm post-reconstruction Gaussian three-dimensional filter provided the best fit with a coefficient of determination R² = 0.90 for cross-sectional areas ≤ 133 mm² and R² = 0.95 for cross-sectional areas > 133 mm². The amount of post-reconstruction smoothing has been directly incorporated in the adaptive thresholding algorithms. The feasibility of the method was tested in two patients with lymph node FDG accumulation and in five patients using the bladder to mimic an anatomical structure of large size and uniform uptake, with satisfactory results. Slice-specific adaptive thresholding algorithms look promising as a reproducible method for delineating PET target volumes with good accuracy.
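The general shape of such a slice-specific rule can be sketched as follows; the functional form and the coefficients a, b and c are hypothetical placeholders for illustration, not the regression fitted in the study:

```python
def adaptive_threshold(tb_ratio, area_mm2, a=0.3, b=0.6, c=10.0):
    """Slice-specific threshold as a fraction of the lesion maximum.

    Hypothetical placeholder model (not the published fit): the threshold
    falls as the target-to-background ratio and the lesion area grow.
    """
    return a + b / tb_ratio + c / area_mm2

def segment_slice(img, tb_ratio, area_mm2):
    """Binary mask of pixels at or above the adaptive threshold."""
    ts = adaptive_threshold(tb_ratio, area_mm2)
    peak = max(max(row) for row in img)
    return [[1 if v >= ts * peak else 0 for v in row] for row in img]

# Toy 3x3 image slice with a hot center pixel:
img = [[0.1, 0.4, 0.2],
       [0.5, 1.0, 0.6],
       [0.2, 0.3, 0.1]]
mask = segment_slice(img, tb_ratio=4.0, area_mm2=200.0)
print(sum(sum(row) for row in mask))  # 3 pixels survive the 0.5 cut
```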
Lepley, Lindsey K; Wojtys, Edward M; Palmieri-Smith, Riann M
2015-06-01
Neuromuscular electrical stimulation (NMES) has been shown to reduce quadriceps activation failure (QAF), and eccentric exercise has been shown to lessen muscle atrophy post-ACL reconstruction. Given that these are two critical components of quadriceps strength, an intervention combining these therapies may be effective at reinstituting quadriceps function post-reconstruction. Thus, the aim of this study was to evaluate the effectiveness of a combined NMES and eccentric exercise intervention to improve the recovery of quadriceps activation and strength post-reconstruction. Thirty-six individuals post-injury were placed into four treatment groups (N&E, NMES and eccentrics; E-only, eccentrics only; N-only, NMES only; and STND, standard of care), and ten healthy controls participated. N&E and N-only received the NMES protocol 2× per week for the first 6 weeks post-reconstruction. N&E and E-only received the eccentric exercise protocol 2× per week beginning 6 weeks post-reconstruction. Quadriceps activation was assessed via the superimposed burst technique and quantified via the central activation ratio. Quadriceps strength was assessed via maximal voluntary isometric contractions (Nm/kg). Data were gathered on three occasions: pre-operatively, 12 weeks post-surgery, and at return to play. No differences in pre-operative measures existed (P > 0.05). E-only recovered quadriceps activation better than N-only or STND (P < 0.05). Eccentric exercise was capable of restoring levels of quadriceps activation and strength that were similar to those of healthy adults and better than NMES alone. Level 3, parallel longitudinal study.
Should I Have Breast Reconstruction?
LOAD THAT MAXIMIZES POWER OUTPUT IN COUNTERMOVEMENT JUMP
Pedro Jimenez-Reyes
2016-02-01
Introduction: One of the main problems faced by strength and conditioning coaches is how to objectively quantify and monitor the actual training load undertaken by athletes in order to maximize performance. It is well known that performance in explosive sports activities is largely determined by mechanical power. Objective: This study analysed the height at which maximal power output is generated, and the corresponding load with which it is achieved, in a group of trained male track and field athletes in the countermovement jump (CMJ) test with extra loads (CMJEL). Methods: Fifty national-level male sprinters and jumpers performed a CMJ test with increasing loads up to a jump height of 16 cm. The relative load that maximized mechanical power output (Pmax) was determined using a synchronized force platform and linear encoder, estimating power by peak power, average power and flight time in the CMJ. Results: The load at which power output no longer increased corresponded to a jump height of 19.9 ± 2.35 cm, representing 99.1 ± 1% of the maximum power output. In all cases, the load that maximized power output was the load with which an athlete jumps a height of approximately 20 cm. Conclusion: These results highlight the importance of considering the height achieved in the CMJ with extra load rather than power itself, because maximum power is always attained at roughly the same height. We advise preferential use of the height achieved in the CMJEL test, since it appears to be a valid indicator of an individual's actual neuromuscular potential, providing useful information for coaches and trainers when assessing athletes' performance status and when quantifying and monitoring training loads by measuring only the jump height in the CMJEL exercise.
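One link in the estimation chain above is plain ballistics: with identical take-off and landing posture, flight time t maps to jump height as h = g t^2 / 8. A minimal sketch:

```python
G = 9.81  # gravitational acceleration, m/s^2

def jump_height_from_flight_time(t_flight):
    """Jump height (m) from flight time (s): h = g * t**2 / 8,
    assuming identical take-off and landing posture."""
    return G * t_flight ** 2 / 8

# A 0.5 s flight corresponds to roughly a 31 cm jump.
print(round(jump_height_from_flight_time(0.5), 3))  # 0.307
```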
Measurable Maximal Energy and Minimal Time Interval
Dahab, Eiman Abou El
2014-01-01
The possibility of finding the measurable maximal energy and the minimal time interval is discussed in different quantum aspects. It is found that the linear generalized uncertainty principle (GUP) approach gives a non-physical result. Based on the large-scale Schwarzschild solution, the quadratic GUP approach is utilized. The calculations are performed at the shortest distance, at which general relativity is assumed to be a good approximation for quantum gravity, and at larger distances as well. It is found that both the maximal energy and the minimal time interval are of the order of the Planck scale. The uncertainties in both quantities are accordingly bounded. Some physical insights are addressed. Also, the implications for the physics of the early Universe and for quantized mass are outlined. The results are related to the existence of a finite cosmological constant and a minimum mass (mass quanta).
Maximal temperature in a simple thermodynamical system
Dai, De-Chang
2016-01-01
Temperature in a simple thermodynamical system is not limited from above. It is also widely believed that it does not make sense to talk about temperatures higher than the Planck temperature in the absence of a full theory of quantum gravity. Here, we demonstrate that there exists a maximal achievable temperature in a system where particles obey the laws of quantum mechanics and classical gravity, before we reach the realm of quantum gravity. Namely, if two particles with a given center-of-mass energy come closer than the Schwarzschild diameter apart, according to classical gravity they will form a black hole. One can calculate that a simple thermodynamical system will be dominated by black holes at a critical temperature which is about three times lower than the Planck temperature. That represents the maximal achievable temperature in a simple thermodynamical system.
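For scale, the Planck temperature referenced above can be computed directly from CODATA constants (a standard back-of-the-envelope check, not a calculation from the paper):

```python
# Compute the Planck temperature T_P = sqrt(hbar * c^5 / (G * k_B^2)).
# The abstract's maximal temperature is roughly T_P / 3.
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J s
C = 2.99792458e8         # speed of light, m/s
G = 6.67430e-11          # gravitational constant, m^3 kg^-1 s^-2
K_B = 1.380649e-23       # Boltzmann constant, J/K

def planck_temperature():
    return math.sqrt(HBAR * C ** 5 / (G * K_B ** 2))

T_planck = planck_temperature()   # ~1.42e32 K
T_max = T_planck / 3.0            # order of the claimed maximal temperature
```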
Hamiltonian formalism and path entropy maximization
Davis, Sergio; González, Diego
2015-10-01
Maximization of the path information entropy is a clear prescription for constructing models in non-equilibrium statistical mechanics. Here it is shown that, following this prescription under the assumption of arbitrary instantaneous constraints on position and velocity, a Lagrangian emerges which determines the most probable trajectory. Deviations from the probability maximum can be consistently described as slices in time by a Hamiltonian, according to a nonlinear Langevin equation and its associated Fokker-Planck equation. The connections unveiled between the maximization of path entropy and the Langevin/Fokker-Planck equations imply that missing information about the phase space coordinate never decreases in time, a purely information-theoretical version of the second law of thermodynamics. All of these results are independent of any physical assumptions, and thus valid for any generalized coordinate as a function of time, or any other parameter. This reinforces the view that the second law is a fundamental property of plausible inference.
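In maximum-caliber notation (illustrative symbols, not necessarily the authors' own), the prescription the abstract describes reads:

```latex
% Maximize the path entropy subject to instantaneous constraints on
% position and velocity:
S[P] = -\int \mathcal{D}x\, P[x]\ln P[x],
\qquad \langle f_i(x(t),\dot{x}(t))\rangle = F_i(t).
% The stationary distribution is exponential in a time integral,
P[x] \propto \exp\!\Big(-\int \mathrm{d}t\, \sum_i \lambda_i(t)\, f_i(x(t),\dot{x}(t))\Big),
% so the exponent plays the role of an action, with
% L = \sum_i \lambda_i(t) f_i acting as the Lagrangian that determines
% the most probable trajectory.
```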
Predicting Contextual Sequences via Submodular Function Maximization
Dey, Debadeepta; Hebert, Martial; Bagnell, J Andrew
2012-01-01
Sequence optimization, where the items in a list are ordered to maximize some reward, has many applications such as web advertisement placement, search, and control libraries in robotics. Previous work in sequence optimization produces a static ordering that does not take any features of the item or context of the problem into account. In this work, we propose a general approach to order the items within the sequence based on the context (e.g., perceptual information, environment description, and goals). We take a simple, efficient, reduction-based approach where the choice and order of the items is established by repeatedly learning simple classifiers or regressors for each "slot" in the sequence. Our approach leverages recent work on submodular function maximization to provide a formal regret reduction from submodular sequence optimization to simple cost-sensitive prediction. We apply our contextual sequence prediction algorithm to optimize control libraries and demonstrate results on two robotics problems: ...
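The (1 − 1/e) guarantee underpinning such submodular reductions comes from simple greedy selection; a minimal sketch on a max-coverage objective (a stand-in example, not the paper's control-library setting):

```python
# Greedy selection for monotone submodular maximization under a
# cardinality constraint: repeatedly pick the item with the largest
# marginal gain. Max-coverage is the textbook submodular objective.
def greedy_max_coverage(sets, k):
    """Pick k sets greedily to maximize the size of their union."""
    chosen, covered = [], set()
    for _ in range(k):
        best = max(range(len(sets)),
                   key=lambda i: len(sets[i] - covered))  # marginal gain
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered

sets = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {1, 6}]
chosen, covered = greedy_max_coverage(sets, 2)  # picks sets 0 and 2
```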
Nonlinear trading models through Sharpe Ratio maximization.
Choey, M; Weigend, A S
1997-08-01
While many trading strategies are based on price prediction, traders in financial markets are typically interested in optimizing risk-adjusted performance such as the Sharpe Ratio, rather than the price predictions themselves. This paper introduces an approach which generates a nonlinear strategy that explicitly maximizes the Sharpe Ratio. It is expressed as a neural network model whose output is the position size between a risky and a risk-free asset. The iterative parameter update rules are derived and compared to alternative approaches. The resulting trading strategy is evaluated and analyzed on both computer-generated data and real world data (DAX, the daily German equity index). Trading based on Sharpe Ratio maximization compares favorably to both profit optimization and probability matching (through cross-entropy optimization). The results show that the goal of optimizing out-of-sample risk-adjusted profit can indeed be achieved with this nonlinear approach.
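The objective being maximized can be stated compactly; a minimal sketch of the Sharpe ratio of strategy returns (the neural network producing the positions is omitted, and the numbers below are illustrative, not from the paper):

```python
# Sharpe ratio of strategy returns r_t = position_t * asset_return_t,
# the quantity the paper's training procedure maximizes directly.
import math

def sharpe_ratio(returns):
    n = len(returns)
    mean = sum(returns) / n
    var = sum((r - mean) ** 2 for r in returns) / n
    return mean / math.sqrt(var) if var > 0 else 0.0

def strategy_returns(positions, asset_returns):
    # position in [-1, 1]: fraction of wealth in the risky asset
    return [p * r for p, r in zip(positions, asset_returns)]

rets = strategy_returns([1.0, 0.5, -1.0], [0.02, 0.01, -0.03])
sr = sharpe_ratio(rets)
```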
Maximally Symmetric Spacetimes emerging from thermodynamic fluctuations
Bravetti, A; Quevedo, H
2015-01-01
In this work we prove that the maximally symmetric vacuum solutions of General Relativity emerge from the geometric structure of statistical mechanics and thermodynamic fluctuation theory. To present our argument, we begin by showing that the pseudo-Riemannian structure of the Thermodynamic Phase Space is a solution to the vacuum Einstein-Gauss-Bonnet theory of gravity with a cosmological constant. Then, we use the geometry of equilibrium thermodynamics to demonstrate that the maximally symmetric vacuum solutions of Einstein's Field Equations -- Minkowski, de-Sitter and Anti-de-Sitter spacetimes -- correspond to thermodynamic fluctuations. Moreover, we argue that these might be the only possible solutions that can be derived in this manner. Thus, the results presented here are the first concrete examples of spacetimes effectively emerging from the thermodynamic limit over an unspecified microscopic theory without any further assumptions.
Consistent 4-form fluxes for maximal supergravity
Godazgar, Hadi; Krueger, Olaf; Nicolai, Hermann
2015-01-01
We derive new ansaetze for the 4-form field strength of D=11 supergravity corresponding to uplifts of four-dimensional maximal gauged supergravity. In particular, the ansaetze directly yield the components of the 4-form field strength in terms of the scalars and vectors of the four-dimensional maximal gauged supergravity---in this way they provide an explicit uplift of all four-dimensional consistent truncations of D=11 supergravity. The new ansaetze provide a substantially simpler method for uplifting d=4 flows compared to the previously available method using the 3-form and 6-form potential ansaetze. The ansatz for the Freund-Rubin term allows us to conjecture a `master formula' for the latter in terms of the scalar potential of d=4 gauged supergravity and its first derivative. We also resolve a long-standing puzzle concerning the antisymmetry of the flux obtained from uplift ansaetze.
Modularity maximization using completely positive programming
Yazdanparast, Sakineh; Havens, Timothy C.
2017-04-01
Community detection is one of the most prominent problems of social network analysis. In this paper, a novel method for Modularity Maximization (MM) for community detection is presented which exploits the Alternating Direction Augmented Lagrangian (ADAL) method for maximizing a generalized form of Newman's modularity function. We first transform Newman's modularity function into a quadratic program and then use Completely Positive Programming (CPP) to map the quadratic program to a linear program, which provides the globally optimal maximum modularity partition. In order to solve the proposed CPP problem, a closed-form solution using the ADAL merged with a rank minimization approach is proposed. The performance of the proposed method is evaluated on several real-world data sets used as benchmarks for community detection. Simulation results show that the proposed technique provides outstanding results in terms of modularity value for crisp partitions.
Utility maximization in incomplete markets with default
Lim, Thomas
2008-01-01
We address the maximization problem of expected utility from terminal wealth. The special feature of this paper is that we consider a financial market where the price process of risky assets can have a default time. Using dynamic programming, we characterize the value function with a backward stochastic differential equation and the optimal portfolio policies. We separately treat the cases of exponential, power and logarithmic utility.
Operational Modal Analysis using Expectation Maximization Algorithm
Cara Cañas, Francisco Javier; Carpio Huertas, Jaime; Juan Ruiz, Jesús; Alarcón Álvarez, Enrique
2011-01-01
This paper presents a time-domain stochastic system identification method based on Maximum Likelihood Estimation and the Expectation Maximization algorithm. The effectiveness of this structural identification method is evaluated through numerical simulation in the context of the ASCE benchmark problem on structural health monitoring. Modal parameters (eigenfrequencies, damping ratios and mode shapes) of the benchmark structure have been estimated applying the proposed identification method...
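The E-step/M-step alternation at the heart of the method can be illustrated on a much simpler model; a sketch for a 1-D two-component Gaussian mixture (the paper applies EM to a stochastic state-space model, which is substantially more involved):

```python
# One EM iteration for a 1-D two-component Gaussian mixture.
import math

def em_step(data, means, stds, weights):
    # E-step: posterior responsibility of each component for each point.
    resp = []
    for x in data:
        p = [w * math.exp(-((x - m) ** 2) / (2 * s ** 2)) / s
             for m, s, w in zip(means, stds, weights)]
        tot = sum(p)
        resp.append([pi / tot for pi in p])
    # M-step: re-estimate parameters from the responsibilities.
    n = len(data)
    new_means, new_stds, new_weights = [], [], []
    for k in range(len(means)):
        nk = sum(r[k] for r in resp)
        mu = sum(r[k] * x for r, x in zip(resp, data)) / nk
        var = sum(r[k] * (x - mu) ** 2 for r, x in zip(resp, data)) / nk
        new_means.append(mu)
        new_stds.append(math.sqrt(var) if var > 0 else 1e-6)
        new_weights.append(nk / n)
    return new_means, new_stds, new_weights

data = [0.1, -0.2, 0.0, 4.9, 5.2, 5.1]
means, stds, weights = [0.0, 5.0], [1.0, 1.0], [0.5, 0.5]
for _ in range(10):
    means, stds, weights = em_step(data, means, stds, weights)
# means converge to roughly [-0.033, 5.067]
```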
Revenue Maximizing Head Starts in Contests
Franke, Jörg; Leininger, Wolfgang; Wasser, Cédric
2014-01-01
We characterize revenue maximizing head starts for all-pay auctions and lottery contests with many heterogeneous players. We show that under optimal head starts all-pay auctions revenue-dominate lottery contests for any degree of heterogeneity among players. Moreover, all-pay auctions with optimal head starts induce higher revenue than any multiplicatively biased all-pay auction or lottery contest. While head starts are more effective than multiplicative biases in all-pay auctions, they are l...
Approximate Revenue Maximization in Interdependent Value Settings
Chawla, Shuchi; Fu, Hu; Karlin, Anna
2014-01-01
We study revenue maximization in settings where agents' values are interdependent: each agent receives a signal drawn from a correlated distribution and agents' values are functions of all of the signals. We introduce a variant of the generalized VCG auction with reserve prices and random admission, and show that this auction gives a constant approximation to the optimal expected revenue in matroid environments. Our results do not require any assumptions on the signal distributions, however, ...
Maximal supersymmetry and B-mode targets
Kallosh, Renata; Linde, Andrei; Wrase, Timm; Yamada, Yusuke
2017-04-01
Extending the work of Ferrara and one of the authors [1], we present dynamical cosmological models of α-attractors with plateau potentials for 3α = 1, 2, 3, 4, 5, 6, 7. These models are motivated by geometric properties of maximally supersymmetric theories: M-theory, superstring theory, and maximal N = 8 supergravity. After a consistent truncation of maximal to minimal supersymmetry in a seven-disk geometry, we perform a two-step procedure: 1) we introduce a superpotential, which stabilizes the moduli of the seven-disk geometry in a supersymmetric minimum, 2) we add a cosmological sector with a nilpotent stabilizer, which breaks supersymmetry spontaneously and leads to a desirable class of cosmological attractor models. These models, with n_s consistent with observational data and with tensor-to-scalar ratio r ≈ 10^-2 - 10^-3, provide natural targets for future B-mode searches. We relate the issue of stability of inflationary trajectories in these models to tessellations of a hyperbolic geometry.
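For context, the standard large-N α-attractor predictions (assumed background knowledge here, not quoted from the paper) already reproduce the quoted range of r:

```latex
% Standard alpha-attractor predictions at large e-fold number N:
n_s \simeq 1 - \frac{2}{N}, \qquad r \simeq \frac{12\alpha}{N^2}.
% For N \approx 55 and 3\alpha = 1,\dots,7 this gives
% r \approx 1\times10^{-3} \text{ to } 9\times10^{-3},
% consistent with the quoted range r \approx 10^{-2} - 10^{-3}.
```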
Dang, H.; Wang, A. S.; Sussman, Marc S.; Siewerdsen, J. H.; Stayman, J. W.
2014-09-01
Sequential imaging studies are conducted in many clinical scenarios. Prior images from previous studies contain a great deal of patient-specific anatomical information and can be used in conjunction with subsequent imaging acquisitions to maintain image quality while enabling radiation dose reduction (e.g., through sparse angular sampling, reduction in fluence, etc). However, patient motion between images in such sequences results in misregistration between the prior image and current anatomy. Existing prior-image-based approaches often include only a simple rigid registration step that can be insufficient for capturing complex anatomical motion, introducing detrimental effects in subsequent image reconstruction. In this work, we propose a joint framework that estimates the 3D deformation between an unregistered prior image and the current anatomy (based on a subsequent data acquisition) and reconstructs the current anatomical image using a model-based reconstruction approach that includes regularization based on the deformed prior image. This framework is referred to as deformable prior image registration, penalized-likelihood estimation (dPIRPLE). Central to this framework is the inclusion of a 3D B-spline-based free-form-deformation model into the joint registration-reconstruction objective function. The proposed framework is solved using a maximization strategy whereby alternating updates to the registration parameters and image estimates are applied allowing for improvements in both the registration and reconstruction throughout the optimization process. Cadaver experiments were conducted on a cone-beam CT testbench emulating a lung nodule surveillance scenario. Superior reconstruction accuracy and image quality were demonstrated using the dPIRPLE algorithm as compared to more traditional reconstruction methods including filtered backprojection, penalized-likelihood estimation (PLE), prior image penalized-likelihood estimation (PIPLE) without registration, and
Calligaris, Luigi
2016-01-01
A new tracking system is under development for operation in the CMS experiment at the High Luminosity LHC. It includes an outer tracker which will construct ``stubs'' -- pairs of hits built by correlating clusters measured in two closely spaced silicon sensor layers -- and transmit them off-detector at 40 MHz. If tracker data is to contribute to keeping the Level-1 trigger rate at around 750 kHz under increased luminosity, a crucial component of the upgrade will be the ability to identify tracks with transverse momentum above 3 GeV/c by building tracks out of stubs. A concept for an FPGA-based track finder using a fully time-multiplexed architecture is presented, where track candidates are identified using a projective binning algorithm based on the Hough Transform. A hardware system based on the MP7 MicroTCA processing card has been assembled, demonstrating a realistic slice of the track finder in order to help gauge the performance and requirements for a full system. This document outlines the system archit...
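The projective-binning idea can be sketched with a toy straight-track Hough vote; the bin granularity, the small-angle track model phi = phi0 + k·r, and all numbers below are illustrative assumptions (the real system bins in curvature ~ q/pT and runs on FPGAs):

```python
# Toy Hough-transform vote: each stub (r, phi) votes for the parameter
# cells (k, phi0) consistent with phi = phi0 + k*r; stubs from a common
# track pile up in one cell.
def hough_vote(stubs, k_bins, phi0_bins):
    acc = {}
    for r, phi in stubs:
        for ki, k in enumerate(k_bins):
            phi0 = phi - k * r
            # vote for the nearest phi0 bin
            pi = min(range(len(phi0_bins)),
                     key=lambda i: abs(phi0_bins[i] - phi0))
            acc[(ki, pi)] = acc.get((ki, pi), 0) + 1
    return max(acc, key=acc.get), acc  # most-voted cell = track candidate

# Three stubs on a common "track" with k = 0.01, phi0 = 0.5.
stubs = [(30.0, 0.8), (60.0, 1.1), (90.0, 1.4)]
k_bins = [0.0, 0.005, 0.01, 0.015]
phi0_bins = [0.0, 0.25, 0.5, 0.75, 1.0]
best, acc = hough_vote(stubs, k_bins, phi0_bins)  # best == (2, 2)
```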
[Breast reconstruction after mastectomy].
Ho Quoc, C; Delay, E
2013-02-01
The mutilating surgery for breast cancer causes deep somatic and psychological sequelae. Breast reconstruction can mitigate these effects and help patients rebuild their lives. The purpose of this paper is to focus on breast reconstruction techniques and on factors involved in breast reconstruction. The methods of breast reconstruction are presented: objectives, indications, different techniques, operative risks, and long-term monitoring. Many different techniques now allow breast reconstruction in most patients. Clinical cases are also presented in order to illustrate the results to expect from a breast reconstruction. Breast reconstruction provides many benefits for patients in terms of rehabilitation, wellness, and quality of life. In our view, breast reconstruction should be considered more as an opportunity and a positive choice (which the patient can decide to make) than as an obligation (which the patient would suffer). The consultation with the surgeon who will perform the reconstruction is an important step for giving all the necessary information. It is important that the patient can speak with him or her again before undergoing reconstruction if she has any doubts. The quality of the information given by medical doctors is essential to the success of the psychological intervention. This article was written in a simple and understandable way to help gynecologists give the best information to their patients. It may also be possible to give them a copy of this article, which would provide written support and facilitate the future consultation with the surgeon who will perform the reconstruction.
SPECT reconstruction using DCT-induced tight framelet regularization
Zhang, Jiahan; Li, Si; Xu, Yuesheng; Schmidtlein, C. R.; Lipson, Edward D.; Feiglin, David H.; Krol, Andrzej
2015-03-01
Wavelet transforms have been successfully applied in many fields of image processing. Yet, to our knowledge, they have never been directly incorporated into the objective function in Emission Computed Tomography (ECT) image reconstruction. Our aim has been to investigate whether the ℓ1-norm of non-decimated discrete cosine transform (DCT) coefficients of the estimated radiotracer distribution could be effectively used as the regularization term for penalized-likelihood (PL) reconstruction, where a regularizer is used to enforce image smoothness in the reconstruction. In this study, the ℓ1-norm of the 2D DCT wavelet decomposition was used as the regularization term. The Preconditioned Alternating Projection Algorithm (PAPA), which we proposed in earlier work to solve PL reconstruction with non-differentiable regularizers, was used to solve this optimization problem. The DCT wavelet decompositions were performed on the transaxial reconstructed images. We reconstructed Monte Carlo simulated SPECT data obtained for a numerical phantom with Gaussian blobs as hot lesions and with a warm random lumpy background. Images reconstructed using the proposed method exhibited better noise suppression and improved lesion conspicuity, compared with images reconstructed using the expectation maximization (EM) algorithm with a Gaussian post filter (GPF). Also, the mean square error (MSE) was smaller, compared with EM-GPF. A critical and challenging aspect of this method was the selection of optimal parameters. In summary, our numerical experiments demonstrated that the ℓ1-norm of the DCT wavelet-frame transform shows promise as a regularizer for SPECT image reconstruction using the PAPA method.
The SRT reconstruction algorithm for semiquantification in PET imaging
Kastis, George A., E-mail: gkastis@academyofathens.gr [Research Center of Mathematics, Academy of Athens, Athens 11527 (Greece); Gaitanis, Anastasios [Biomedical Research Foundation of the Academy of Athens (BRFAA), Athens 11527 (Greece); Samartzis, Alexandros P. [Nuclear Medicine Department, Evangelismos General Hospital, Athens 10676 (Greece); Fokas, Athanasios S. [Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Cambridge CB30WA, United Kingdom and Research Center of Mathematics, Academy of Athens, Athens 11527 (Greece)
2015-10-15
Purpose: The spline reconstruction technique (SRT) is a new, fast algorithm based on a novel numerical implementation of an analytic representation of the inverse Radon transform. The mathematical details of this algorithm and comparisons with filtered backprojection were presented earlier in the literature. In this study, the authors present a comparison between SRT and the ordered-subsets expectation–maximization (OSEM) algorithm for determining contrast and semiquantitative indices of {sup 18}F-FDG uptake. Methods: The authors implemented SRT in the software for tomographic image reconstruction (STIR) open-source platform and evaluated this technique using simulated and real sinograms obtained from the GE Discovery ST positron emission tomography/computer tomography scanner. All simulations and reconstructions were performed in STIR. For OSEM, the authors used the clinical protocol of their scanner, namely, 21 subsets and two iterations. The authors also examined images at one, four, six, and ten iterations. For the simulation studies, the authors analyzed an image-quality phantom with cold and hot lesions. Two different versions of the phantom were employed at two different hot-sphere lesion-to-background ratios (LBRs), namely, 2:1 and 4:1. For each noiseless sinogram, 20 Poisson realizations were created at five different noise levels. In addition to making visual comparisons of the reconstructed images, the authors determined contrast and bias as a function of the background image roughness (IR). For the real-data studies, sinograms of an image-quality phantom simulating the human torso were employed. The authors determined contrast and LBR as a function of the background IR. Finally, the authors present plots of contrast as a function of IR after smoothing each reconstructed image with Gaussian filters of six different sizes. Statistical significance was determined by employing the Wilcoxon rank-sum test. Results: In both simulated and real studies, SRT
Sparse Image Reconstruction in Computed Tomography
Jørgensen, Jakob Sauer
In recent years, increased focus on the potentially harmful effects of x-ray computed tomography (CT) scans, such as radiation-induced cancer, has motivated research on new low-dose imaging techniques. Sparse image reconstruction methods, as studied for instance in the field of compressed sensing...... and limitations of sparse reconstruction methods in CT, in particular in a quantitative sense. For example, relations between image properties such as contrast, structure and sparsity, tolerable noise levels, suficient sampling levels, the choice of sparse reconstruction formulation and the achievable image...
Cardiorespiratory Coordination in Repeated Maximal Exercise.
Garcia-Retortillo, Sergi; Javierre, Casimiro; Hristovski, Robert; Ventura, Josep L; Balagué, Natàlia
2017-01-01
Increases in cardiorespiratory coordination (CRC) after training with no differences in performance and physiological variables have recently been reported using a principal component analysis approach. However, no research has yet evaluated the short-term effects of exercise on CRC. The aim of this study was to delineate the behavior of CRC under different physiological initial conditions produced by repeated maximal exercises. Fifteen participants performed 2 consecutive graded and maximal cycling tests. Test 1 was performed without any previous exercise, and Test 2 6 min after Test 1. Both tests started at 0 W and the workload was increased by 25 W/min in males and 20 W/min in females, until they were not able to maintain the prescribed cycling frequency of 70 rpm for more than 5 consecutive seconds. A principal component (PC) analysis of selected cardiovascular and cardiorespiratory variables (expired fraction of O2, expired fraction of CO2, ventilation, systolic blood pressure, diastolic blood pressure, and heart rate) was performed to evaluate the CRC defined by the number of PCs in both tests. In order to quantify the degree of coordination, the information entropy was calculated and the eigenvalues of the first PC (PC1) were compared between tests. Although no significant differences were found between the tests with respect to the performed maximal workload (Wmax), maximal oxygen consumption (VO2 max), or ventilatory threshold (VT), an increase in the number of PCs and/or a decrease of eigenvalues of PC1 (t = 2.95; p = 0.01; d = 1.08) was found in Test 2 compared to Test 1. Moreover, entropy was significantly higher (Z = 2.33; p = 0.02; d = 1.43) in the last test. In conclusion, despite the fact that no significant differences were observed in the conventionally explored maximal performance and physiological variables (Wmax, VO2 max, and VT) between tests, a reduction of CRC was observed in Test 2. These results emphasize the interest of CRC evaluation in
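The coordination measure can be illustrated in the two-variable case, where the correlation matrix [[1, r], [r, 1]] has eigenvalues 1 ± |r|; a pure-Python sketch (illustrative, not the authors' multi-variable pipeline):

```python
# Eigen-spectrum of a 2-variable correlation matrix and the Shannon
# entropy of its normalized eigenvalues. Fewer dominant PCs (a larger
# first eigenvalue, lower entropy) indicates tighter coordination.
import math

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((a - mx) ** 2 for a in x) / n)
    sy = math.sqrt(sum((b - my) ** 2 for b in y) / n)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n * sx * sy)

def spectrum_entropy(x, y):
    r = pearson(x, y)
    eigs = [1 + abs(r), 1 - abs(r)]       # eigenvalues of [[1, r], [r, 1]]
    probs = [e / 2.0 for e in eigs if e > 0]
    return eigs, -sum(p * math.log(p) for p in probs)

# Tightly coupled signals -> one dominant PC, (near-)zero entropy.
x = [1, 2, 3, 4, 5]
eigs, h = spectrum_entropy(x, [2 * v for v in x])
```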
Yuan Zhang
OBJECTIVE: To retrospectively compare the efficacy of the titanium mesh cage (TMC) and the nano-hydroxyapatite/polyamide66 (n-HA/PA66) cage for 1- or 2-level anterior cervical corpectomy and fusion (ACCF) to treat multilevel cervical spondylotic myelopathy (MCSM). METHODS: A total of 117 consecutive patients with MCSM who underwent 1- or 2-level ACCF using a TMC or an n-HA/PA66 cage were studied retrospectively at a mean follow-up of 45.28 ± 12.83 months. The patients were divided into four groups according to the level of corpectomy (1- or 2-level) and cage type used (TMC or n-HA/PA66 cage). Clinical and radiological parameters were used to evaluate outcomes. RESULTS: At the one-year follow-up, the fusion rate in the n-HA/PA66 group was higher, albeit non-significantly, than that in the TMC group for both 1- and 2-level ACCF, but the fusion rates of the procedures were almost equal at the final follow-up. The incidence of cage subsidence at the final follow-up was significantly higher in the TMC group than in the n-HA/PA66 group for the 1-level ACCF (24% vs. 4%, p = 0.01), and the difference was greater for the 2-level ACCF (38% vs. 5%, p = 0.01). Meanwhile, a much greater loss of fused height was observed in the TMC group compared with the n-HA/PA66 group for both the 1- and 2-level ACCF. All four groups demonstrated increases in C2-C7 Cobb angle and JOA scores and decreases in VAS at the final follow-up compared with preoperative values. CONCLUSION: The lower incidence of cage subsidence, better maintenance of the height of the fused segment and similar excellent bony fusion indicate that the n-HA/PA66 cage may be a superior alternative to the TMC for cervical reconstruction after cervical corpectomy, in particular for 2-level ACCF.
Reconstructability analysis of epistasis.
Zwick, Martin
2011-01-01
The literature on epistasis describes various methods to detect epistatic interactions and to classify different types of epistasis. Reconstructability analysis (RA) has recently been used to detect epistasis in genomic data. This paper shows that RA offers a classification of types of epistasis at three levels of resolution (variable-based models without loops, variable-based models with loops, state-based models). These types can be defined by the simplest RA structures that model the data without information loss; a more detailed classification can be defined by the information content of multiple candidate structures. The RA classification can be augmented with structures from related graphical modeling approaches. RA can analyze epistatic interactions involving an arbitrary number of genes or SNPs and constitutes a flexible and effective methodology for genomic analysis.
Determinants of maximal oxygen uptake in severe acute hypoxia
Calbet, J A L; Boushel, Robert Christopher; Rådegran, G
2003-01-01
To unravel the mechanisms by which maximal oxygen uptake (VO2 max) is reduced with severe acute hypoxia in humans, nine Danish lowlanders performed incremental cycle ergometer exercise to exhaustion, while breathing room air (normoxia) or 10.5% O2 in N2 (hypoxia, approximately 5,300 m above sea level). With hypoxia, exercise PaO2 dropped to 31-34 mmHg and arterial O2 content (CaO2) was reduced by 35% (P ..., as reflected by the higher alveolar-arterial O2 difference in hypoxia (P ... stroke volume (P
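The 35% CaO2 reduction follows from the standard oxygen-content formula; a sketch with typical values (the Hb concentration and SaO2 below are assumptions consistent with the abstract, not reported data):

```python
# Arterial O2 content from the standard formula
# CaO2 (mL O2/dL) = 1.34 * [Hb] (g/dL) * SaO2 + 0.003 * PaO2 (mmHg),
# showing how a drop in SaO2 under hypoxia cuts CaO2 by roughly a third.
def cao2(hb_g_dl, sao2_fraction, pao2_mmhg):
    return 1.34 * hb_g_dl * sao2_fraction + 0.003 * pao2_mmhg

normoxia = cao2(15.0, 0.97, 100.0)   # ~19.8 mL O2/dL
hypoxia = cao2(15.0, 0.62, 33.0)     # assumed SaO2 at PaO2 ~33 mmHg
reduction = 1 - hypoxia / normoxia   # ~0.37, close to the reported 35%
```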
Gribov ambiguities at the Landau -- maximal Abelian interpolating gauge
Pereira, A D
2014-01-01
In a previous work, we presented a new method to account for the Gribov ambiguities in non-Abelian gauge theories. The method consists of introducing an extra constraint which directly eliminates the infinitesimal Gribov copies without the usual geometric approach. This strategy makes it possible to treat gauges with a non-Hermitian Faddeev-Popov operator. In this work, we apply the method to a gauge which interpolates between the Landau and maximal Abelian gauges. The result is a local and power-counting renormalizable action, free of infinitesimal Gribov copies. Moreover, the interpolating tree-level gluon propagator is derived.
Transformation of bipartite non-maximally entangled states into a tripartite W state in cavity QED
Zang Xue-Ping; Yang Ming; Du Chao-Qun; Wang Min; Fang Shu-Dong; Cao Zhuo-Liang
2016-05-01
We present two schemes for transforming bipartite non-maximally entangled states into a W state in a cavity QED system, by using highly detuned interactions and resonant interactions between two-level atoms and a single-mode cavity field. A tri-atom W state can be generated by adjusting the interaction times between the atoms and the cavity mode. These schemes demonstrate that two bipartite non-maximally entangled states can be merged into a maximally entangled W state, so they can, in some sense, be regarded as an entanglement concentration process. The experimental feasibility of the schemes is also discussed.
Baeteman, C.; Waller, M.; Kiden, P.
2011-01-01
A number of disciplines are involved in the collection and interpretation of Holocene palaeoenvironmental data from coastal lowlands. For stratigraphic frameworks and the assessment of relative sea-level (RSL) change, many non-specialists rely on existing regional models. It is, however, important
Eleutherococcus senticosus (Rupr. & Maxim.) Maxim. (Araliaceae) as an adaptogen: a closer look.
Davydov, M; Krikorian, A D
2000-10-01
The adaptogen concept is examined from an historical, biological, chemical, pharmacological and medical perspective using a wide variety of primary and secondary literature. The definition of an adaptogen first proposed by Soviet scientists in the late 1950s, namely that an adaptogen is any substance that exerts effects on both sick and healthy individuals by 'correcting' any dysfunction(s) without producing unwanted side effects, was used as a point of departure. We attempted to identify critically what an adaptogen supposedly does and to determine whether the word embodies in and of itself any concept(s) acceptable to western conventional (allopathic) medicine. Special attention was paid to the reported pharmacological effects of the 'adaptogen-containing plant' Eleutherococcus senticosus (Rupr. & Maxim.) Maxim. (Araliaceae), referred to by some as 'Siberian ginseng', and to its secondary chemical composition. We conclude that, so far as specific pharmacological activities are concerned, there are a number of valid arguments for equating the action of so-called adaptogens with those of medicinal agents that have activities as anti-oxidants, and/or anti-carcinogenic, immunomodulatory and hypocholesterolemic as well as hypoglycemic and choleretic agents. However, 'adaptogens' and 'anti-oxidants' etc. also show significant dissimilarities, and these are discussed. Significantly, the classical definition of an adaptogen has much in common with views currently being invoked to describe and explain the 'placebo effect'. Nevertheless, the chemistry of the secondary compounds of Eleutherococcus isolated thus far and their pharmacological effects support our hypothesis that the reported beneficial effects of adaptogens derive from their capacity to exert protective and/or inhibitory action against free radicals. An inventory of the secondary substances contained in Eleutherococcus discloses a potential for a wide range of activities reported from work on cultured cell lines
Postactivation Potentiation Biases Maximal Isometric Strength Assessment
Leonardo Coelho Rabello Lima
2014-01-01
Postactivation potentiation (PAP) is known to enhance force production. Maximal isometric strength assessment protocols usually consist of two or more maximal voluntary isometric contractions (MVCs). The objective of this study was to determine whether PAP would influence isometric strength assessment. Healthy male volunteers (n = 23) performed two five-second MVCs separated by a 180-second interval. Changes in isometric peak torque (IPT), time to achieve it (tPTI), contractile impulse (CI), root mean square of the electromyographic signal during PTI (RMS), and rate of torque development (RTD), in different intervals, were measured. Significant increases in IPT (240.6 ± 55.7 N·m versus 248.9 ± 55.1 N·m), RTD (746 ± 152 N·m·s−1 versus 727 ± 158 N·m·s−1), and RMS (59.1 ± 12.2% RMSMAX versus 54.8 ± 9.4% RMSMAX) were found on the second MVC. tPTI decreased significantly on the second MVC (2373 ± 1200 ms versus 2784 ± 1226 ms). We conclude that a first MVC leads to PAP that elicits significant enhancements in strength-related variables of a second MVC performed 180 seconds later. If disregarded, this phenomenon might bias maximal isometric strength assessment, overestimating some of these variables.
Maximal suppression of renin-angiotensin system in nonproliferative glomerulonephritis.
Iodice, Carmela; Balletta, Mario M; Minutolo, Roberto; Giannattasio, Paolo; Tuccillo, Stefano; Bellizzi, Vincenzo; D'Amora, Maurizio; Rinaldi, Giorgio; Signoriello, Giuseppe; Conte, Giuseppe; De Nicola, Luca
2003-06-01
Elimination of residual proteinuria is the novel target in renoprotection; nevertheless, whether a greater suppression of renin-angiotensin system (RAS) effectively improves the antiproteinuric response in patients with moderate proteinuria remains ill-defined. We evaluated the effects of maximizing RAS suppression on quantitative and qualitative proteinuria in ten patients with stable nonnephrotic proteinuria (2.55 +/- 0.94 g/24 hours) due to primary nonproliferative glomerulonephritis (NPGN), and normal values of creatinine clearance (103 +/- 17 mL/min). The study was divided in three consecutive phases: (1) four subsequent 1-month periods of ramipril at the dose of 2.5, 5.0, 10, and 20 mg/day; (2) 2 months of ramipril 20 mg/day + irbesartan 300 mg/day; and (3) 2 months of irbesartan 300 mg/day alone. Maximizing RAS suppression was not coupled with any major effect on renal function and blood pressure; conversely, a significant decrement in hemoglobin levels, of 0.8 g/dL on average, was observed during up-titration of ramipril dose. The 2.5 mg dose of ramipril significantly decreased proteinuria by 29%. Similar changes were detected after irbesartan alone (-28%). The antiproteinuric effect was not improved either by the higher ramipril doses (-30% after the 20 mg dose) or after combined treatment (-33%). The reduction of proteinuria led to amelioration of the markers of tubular damage, as testified by the significant decrement of alpha 1 microglobulin (alpha 1m) excretion and of the tubular component of proteinuria at sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE). In nonnephrotic NPGN patients, standard doses of either ramipril or irbesartan lead to significant reduction of residual proteinuria and amelioration of the qualitative features suggestive of tubular damage. The enhancement of RAS suppression up to the maximal degree does not improve the antiproteinuric response and is coupled with a decrement of hemoglobin levels.
Maximizing versus satisficing: happiness is a matter of choice.
Schwartz, Barry; Ward, Andrew; Monterosso, John; Lyubomirsky, Sonja; White, Katherine; Lehman, Darrin R
2002-11-01
Can people feel worse off as the options they face increase? The present studies suggest that some people--maximizers--can. Study 1 reported a Maximization Scale, which measures individual differences in desire to maximize. Seven samples revealed negative correlations between maximization and happiness, optimism, self-esteem, and life satisfaction, and positive correlations between maximization and depression, perfectionism, and regret. Study 2 found maximizers less satisfied than nonmaximizers (satisficers) with consumer decisions, and more likely to engage in social comparison. Study 3 found maximizers more adversely affected by upward social comparison. Study 4 found maximizers more sensitive to regret and less satisfied in an ultimatum bargaining game. The interaction between maximizing and choice is discussed in terms of regret, adaptation, and self-blame.
Cycle-maximal triangle-free graphs
Durocher, Stephane; Gunderson, David S.; Li, Pak Ching
2015-01-01
We conjecture that the balanced complete bipartite graph K ⌊ n / 2 ⌋ , ⌈ n / 2 ⌉ contains more cycles than any other n-vertex triangle-free graph, and we make some progress toward proving this. We give equivalent conditions for cycle-maximal triangle-free graphs; show bounds...... on the numbers of cycles in graphs depending on numbers of vertices and edges, girth, and homomorphisms to small fixed graphs; and use the bounds to show that among regular graphs, the conjecture holds. We also consider graphs that are close to being regular, with the minimum and maximum degrees differing...
ON THE SPACES OF THE MAXIMAL POINTS
梁基华; 刘应明
2003-01-01
For a continuous domain D, a characterization that the convex powerdomain CD is a domain hull of Max(CD) is given in terms of compact subsets of D. In this case, it is proved that the set of maximal points Max(CD) of CD with the relative Scott topology is homeomorphic to the set of all Scott compact subsets of Max(D) with the topology induced by the Hausdorff metric derived from a metric on Max(D), when Max(D) is metrizable.
Understanding of English Contracts through Relation Maxims
XU Chi-ying; JIANG Li-hui
2013-01-01
A contract is the legal evidence of the business between the parties concerned, and this leads to its unique characteristics: technical terms, archaisms, borrowed words, juxtaposition, and abbreviation. The understanding of contracts is of vital importance for each party, because it concerns the share of interests. In order to avoid the ambiguity that some words or sentences in English contracts may lead to, and to achieve the “best relevance and least effort” of communication, this paper applies the relation maxim to analyze in depth how to understand English contracts through the selection of words, modification, and the complexity and simplicity of sentences.
Maximizing policy learning in international committees
Nedergaard, Peter
2007-01-01
, this article demonstrates that valuable lessons can be learned about policy learning, in practice and theoretically, by analysing the cooperation in the OMC committees. Using the Advocacy Coalition Framework as the starting point of analysis, 15 hypotheses on policy learning are tested. Among other things......, it is concluded that in order to maximize policy learning in international committees, empirical data should be made available to committees and provided by sources close to the participants (i.e. the Commission). In addition, the work in the committees should be made prestigious in order to attract well...
Zhu, Hong-Ming; Yu, Yu; Er, Xinzhong; Chen, Xuelei
2015-01-01
The gravitational coupling of a long wavelength tidal field with small scale density fluctuations leads to anisotropic distortions of the locally measured small scale matter correlation function. Since the local correlation function is statistically isotropic in the absence of such tidal interactions, the tidal distortions can be used to reconstruct the long wavelength tidal field and large scale density field in analogy with the cosmic microwave background lensing reconstruction. In this paper we present in detail a formalism for the cosmic tidal reconstruction and test the reconstruction in numerical simulations. We find that the density field on large scales can be reconstructed with good accuracy and the cross correlation coefficient between the reconstructed density field and the original density field is greater than 0.9 on large scales ($k\\lesssim0.1h/\\mathrm{Mpc}$). This is useful in the 21cm intensity mapping survey, where the long wavelength radial modes are lost due to foreground subtraction proces...
Eulerian BAO Reconstructions and N-Point Statistics
Schmittfull, Marcel; Beutler, Florian; Sherwin, Blake; Chu, Man Yat
2015-01-01
As galaxy surveys begin to measure the imprint of baryonic acoustic oscillations (BAO) on large-scale structure at the sub-percent level, reconstruction techniques that reduce the contamination from nonlinear clustering become increasingly important. Inverting the nonlinear continuity equation, we propose an Eulerian growth-shift reconstruction algorithm that does not require the displacement of any objects, which is needed for the standard Lagrangian BAO reconstruction algorithm. In our simulations, the algorithm yields 95% of the BAO signal-to-noise obtained from standard reconstruction. The reconstructed power spectrum is obtained by adding specific simple 3- and 4-point statistics to the pre-reconstruction power spectrum, making it very transparent how additional BAO information from higher-point statistics is included in the power spectrum through the reconstruction process. Analytical models of the reconstructed density for the two algorithms agree at second order. Based on similar modeling efforts, we ...
Ptychographic ultrafast pulse reconstruction
Spangenberg, D; Brügmann, M H; Feurer, T
2014-01-01
We demonstrate a new ultrafast pulse reconstruction modality which is somewhat reminiscent of frequency resolved optical gating but uses a modified setup and a conceptually different reconstruction algorithm that is derived from ptychography. Even though it is a second order correlation scheme it shows no time ambiguity. Moreover, the number of spectra to record is considerably smaller than in most other related schemes which, together with a robust algorithm, leads to extremely fast convergence of the reconstruction.
Maximal subbundles, quot schemes, and curve counting
Gillam, W D
2011-01-01
Let $E$ be a rank 2, degree $d$ vector bundle over a genus $g$ curve $C$. The loci of stable pairs on $E$ in class $2[C]$ fixed by the scaling action are expressed as products of $\\Quot$ schemes. Using virtual localization, the stable pairs invariants of $E$ are related to the virtual intersection theory of $\\Quot E$. The latter theory is extensively discussed for an $E$ of arbitrary rank; the tautological ring of $\\Quot E$ is defined and is computed on the locus parameterizing rank one subsheaves. In case $E$ has rank 2, $d$ and $g$ have opposite parity, and $E$ is sufficiently generic, it is known that $E$ has exactly $2^g$ line subbundles of maximal degree. Doubling the zero section along such a subbundle gives a curve in the total space of $E$ in class $2[C]$. We relate this count of maximal subbundles with stable pairs/Donaldson-Thomas theory on the total space of $E$. This endows the residue invariants of $E$ with enumerative significance: they actually \\emph{count} curves in $E$.
Maximal coherence in a generic basis
Yao, Yao; Dong, G. H.; Ge, Li; Li, Mo; Sun, C. P.
2016-12-01
Since quantum coherence is an undoubted characteristic trait of quantum physics, the quantification and application of quantum coherence has been one of the long-standing central topics in quantum information science. Within the framework of a resource theory of quantum coherence proposed recently, a fiducial basis should be preselected for characterizing the quantum coherence in specific circumstances, namely, the quantum coherence is a basis-dependent quantity. Therefore, a natural question is raised: what are the maximum and minimum coherences contained in a certain quantum state with respect to a generic basis? While the minimum case is trivial, it is not so intuitive to verify in which basis the quantum coherence is maximal. Based on the coherence measure of relative entropy, we indicate the particular basis in which the quantum coherence is maximal for a given state, where the Fourier matrix (or more generally, complex Hadamard matrices) plays a critical role in determining the basis. Intriguingly, though we can prove that the basis associated with the Fourier matrix is a stationary point for optimizing the l1 norm of coherence, numerical simulation shows that it is not a global optimal choice.
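The quantity discussed above is directly computable: the relative entropy of coherence is C(ρ) = S(Δ[ρ]) − S(ρ), where Δ[ρ] dephases ρ in the chosen basis. A minimal NumPy sketch (our own illustration, with our own function names, not the authors' code) that evaluates it in the computational and Fourier bases:

```python
import numpy as np

def von_neumann_entropy(rho):
    # S(rho) = -Tr(rho log2 rho), via the eigenvalues of a Hermitian matrix
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-(w * np.log2(w)).sum())

def rel_entropy_coherence(rho, basis):
    # C(rho) = S(dephased rho in `basis`) - S(rho); basis columns are the basis vectors
    r = basis.conj().T @ rho @ basis
    dephased = np.diag(np.diag(r))
    return von_neumann_entropy(dephased) - von_neumann_entropy(rho)

def fourier_matrix(d):
    # d-dimensional discrete Fourier (unitary) matrix
    j, k = np.meshgrid(np.arange(d), np.arange(d))
    return np.exp(2j * np.pi * j * k / d) / np.sqrt(d)

# |0><0| is incoherent in the computational basis but maximally coherent
# with respect to the Fourier basis
rho = np.array([[1.0, 0.0], [0.0, 0.0]], dtype=complex)
print(rel_entropy_coherence(rho, np.eye(2)))        # 0.0
print(rel_entropy_coherence(rho, fourier_matrix(2)))  # 1.0
```

This illustrates the basis dependence the abstract emphasizes: the same state has zero coherence in one basis and maximal coherence (1 bit, for a qubit) in the Fourier basis.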
Symmetry and approximability of submodular maximization problems
Vondrak, Jan
2011-01-01
A number of recent results on optimization problems involving submodular functions have made use of the multilinear relaxation of the problem. These results hold typically in the value oracle model, where the objective function is accessible via a black box returning f(S) for a given S. We present a general approach to deriving inapproximability results in the value oracle model, based on the notion of symmetry gap. Our main result is that for any fixed instance that exhibits a certain symmetry gap in its multilinear relaxation, there is a naturally related class of instances for which a better approximation factor than the symmetry gap would require exponentially many oracle queries. This unifies several known hardness results for submodular maximization, and implies several new ones. In particular, we prove that there is no constant-factor approximation for the problem of maximizing a non-negative submodular function over the bases of a matroid. We also provide a closely matching approximation algorithm for...
Acute Hematological Responses to a Maximal Incremental Treadmill Test
Filipe Dinato de Lima
2017-03-01
The present study aimed to analyze acute hematological responses in individuals undergoing a maximal incremental cardiopulmonary treadmill test without inclination. Twenty-three individuals were analyzed, 12 men and 11 women, with a mean age of 30.2 (± 8.4) years, mean weight of 68.1 (± 18.1) kg, mean height of 170.2 (± 9.8) cm, and mean BMI of 23.2 (± 3.7) kg/m², physically active, with a minimum practice of 3.5 hours per week of exercise for at least 6 months. The subjects were submitted to a maximal incremental treadmill test, with venous blood collected for analysis before and immediately after completion of the test. The Wilcoxon test was used for analysis of the pre- and post-test variables, with p < 0.05 adopted as the significance level. There was a significant increase in the counts of leukocytes (69.23%; p = 0.005), lymphocytes (17.56%; p = 0.043), monocytes (85.41%; p = 0.012), and granulocytes (28.21%; p = 0.011). A significant increase was also observed in erythrocytes (3.42%; p = 0.042), hematocrit (5.39%; p = 0.038), and hemoglobin (5.58%; p = 0.013). We conclude that performing a maximal treadmill running test can significantly raise blood levels of leukocytes and their subpopulations, as well as red blood cells and hemoglobin.
Anterior cruciate ligament reconstruction with allograft tendons.
Strickland, Sabrina M; MacGillivray, John D; Warren, Russell F
2003-01-01
Allograft tissue allows reconstruction of the ACL without the donor-site morbidity that can be caused by autograft harvesting. Patients who must kneel as a part of their occupation or chosen sport are particularly good candidates for allograft reconstruction. Patients over 45 years of age and those requiring revision ACL surgery can also benefit from the use and availability of allograft tendons. In some cases, patients or surgeons may opt for allograft tendons to maximize the result-to-morbidity ratio. Despite advances in cadaver screening and graft preparation, there remain risks of disease transmission and joint infection after allograft implantation. Detailed explanation and informed consent are vitally important in cases in which allograft tissue is used.
Jailton Gregório Pelarigo
2007-06-01
The aim was to verify the effect of aerobic performance level on the relationship between the technical indexes corresponding to critical speed (CS) and the maximal speed of 30 minutes (S30) in swimmers. Twenty-three male swimmers with similar anthropometric characteristics participated in this study, divided by aerobic performance level into groups G1 (n = 13) and G2 (n = 10). They had at least four years of experience in the modality and a weekly training volume between 30,000 and 45,000 m. The CS was determined as the angular coefficient of the linear regression line between the distances (200 and 400 m) and the respective times. The S30 was determined from the maximal distance covered in a 30-minute test. All variables were determined in front crawl. CS was higher than S30 in G1 (1.30 ± 0.04 vs. 1.23 ± 0.06 m·s-1) and G2 (1.17 ± 0.08 vs. 1.07 ± 0.06 m·s-1). These variables were higher in group G1. The stroke rates corresponding to CS (SRCS) and S30 (SRS30) were similar within group G1 (33.07 ± 4.34 vs. 31.38 ± 4.15 cycles·min-1) and within G2 (35.57 ± 6.52 vs. 33.54 ± 5.89 cycles·min-1). The SRCS was significantly lower in group G1 than in G2, while SRS30 was not different between groups. The stroke lengths corresponding to CS (SLCS) and S30 (SLS30) were significantly higher in group G1 (2.41 ± 0.33 vs. 2.38 ± 0.30 m·cycle-1) than in G2 (2.04 ± 0.43 vs. 1.97 ± 0.40 m·cycle-1), and had similar values within each group. The correlations (r) between CS and S30 and between the technical variables corresponding to them were significant in all comparisons (0.68 to 0.91). Thus, the relationship between the speed and technical variables corresponding to CS and S30 was not modified by the aerobic performance level.
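The CS determination described above (the slope of the distance-versus-time regression line) can be sketched as follows; this is our own illustration with hypothetical numbers, not the study's data or code:

```python
def critical_speed(distances, times):
    """Least-squares slope of distance vs. time.

    Returns (slope, intercept): the slope is the critical speed in m/s;
    the intercept estimates the anaerobic distance capacity in m.
    """
    n = len(distances)
    mt = sum(times) / n
    md = sum(distances) / n
    num = sum((t - mt) * (d - md) for t, d in zip(times, distances))
    den = sum((t - mt) ** 2 for t in times)
    slope = num / den
    intercept = md - slope * mt
    return slope, intercept

# Hypothetical swimmer: 200 m in 150 s and 400 m in 310 s
cs, adc = critical_speed([200.0, 400.0], [150.0, 310.0])
print(round(cs, 3), round(adc, 1))  # → 1.25 12.5
```

With only the two distances used in the study, the regression reduces to the slope between the two points, (400 − 200) / (t400 − t200).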
Dragonfly: an implementation of the expand-maximize-compress algorithm for single-particle imaging.
Ayyer, Kartik; Lan, Ti-Yen; Elser, Veit; Loh, N Duane
2016-08-01
Single-particle imaging (SPI) with X-ray free-electron lasers has the potential to change fundamentally how biomacromolecules are imaged. The structure would be derived from millions of diffraction patterns, each from a different copy of the macromolecule before it is torn apart by radiation damage. The challenges posed by the resultant data stream are staggering: millions of incomplete, noisy and un-oriented patterns have to be computationally assembled into a three-dimensional intensity map and then phase reconstructed. In this paper, the Dragonfly software package is described, based on a parallel implementation of the expand-maximize-compress reconstruction algorithm that is well suited for this task. Auxiliary modules to simulate SPI data streams are also included to assess the feasibility of proposed SPI experiments at the Linac Coherent Light Source, Stanford, California, USA.
An Applied Method for Designing Maximally Decimating Non-uniform Filter Banks
Anonymous
2003-01-01
Assembling individual linear-phase filters to form a multi-channel filter bank allows the synthesis filters to be similar to the corresponding analysis filters, and the design calculation can be simple. The appropriate relations between synthesis filters and analysis filters eliminate most of the aliasing resulting from decimation in non-uniform maximally decimating filter banks, and the LS algorithm and Remez algorithm are used to optimize the composite characteristics. This design method can achieve approximate perfect reconstruction. An example is given in which general parameter filters with approximately linear phase are used as units of a filter bank.
DD4hep Based Event Reconstruction
Sailer, Andre; Frank, Markus; Gaede, Frank-Dieter; Hynds, Daniel; Lu, Shaojun; Nikiforou, Nikiforos; Petric, Marko; Simoniello, Rosa; Voutsinas, Georgios Gerasimos
The DD4HEP detector description toolkit offers a flexible and easy-to-use solution for the consistent and complete description of particle physics detectors in a single system. The sub-component DDREC provides a dedicated interface to the detector geometry as needed for event reconstruction. With DDREC there is no need to define an additional, separate reconstruction geometry as is often done in HEP, but one can transparently extend the existing detailed simulation model to be also used for the reconstruction. Based on the extension mechanism of DD4HEP, DDREC allows one to attach user defined data structures to detector elements at all levels of the geometry hierarchy. These data structures define a high level view onto the detectors describing their physical properties, such as measurement layers, point resolutions, and cell sizes. For the purpose of charged particle track reconstruction, dedicated surface objects can be attached to every volume in the detector geometry. These surfaces provide the measuremen...
Dopaminergic balance between reward maximization and policy complexity
Naama eParush
2011-05-01
Previous reinforcement-learning models of the basal ganglia network have highlighted the role of dopamine in encoding the mismatch between prediction and reality. Far less attention has been paid to the computational goals and algorithms of the main axis (the actor). Here, we construct a top-down model of the basal ganglia with emphasis on the role of dopamine both as a reinforcement learning signal and as a pseudo-temperature signal controlling the general level of basal ganglia excitability and motor vigilance of the acting agent. We argue that the basal ganglia endow the thalamo-cortical networks with the optimal dynamic tradeoff between two constraints: minimizing the policy complexity (cost) and maximizing the expected future reward (gain). We show that this multi-dimensional optimization process results in an experience-modulated version of the softmax behavioral policy. Thus, as in classical softmax behavioral policies, actions are selected with probabilities according to their estimated values and the pseudo-temperature, but in addition they also vary according to the frequency of previous choices of these actions. We conclude that the computational goal of the basal ganglia is not to maximize cumulative (positive and negative) reward. Rather, the basal ganglia aim at optimization of independent gain and cost functions. Unlike previously suggested single-variable maximization processes, this multi-dimensional optimization process leads naturally to a softmax-like behavioral policy. We suggest that beyond its role in the modulation of the efficacy of the cortico-striatal synapses, dopamine directly affects striatal excitability and thus provides a pseudo-temperature signal that modulates the trade-off between gain and cost. The resulting experience- and dopamine-modulated softmax policy can then serve as a theoretical framework to account for the broad range of behaviors and clinical states governed by the basal ganglia and dopamine systems.
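The pseudo-temperature idea maps onto the standard softmax policy. A minimal sketch (our own illustration and naming, not the paper's model): lowering the temperature makes action selection greedier, raising it flattens the policy toward uniform exploration.

```python
import math

def softmax_policy(values, temperature):
    """Action-selection probabilities from estimated values and a pseudo-temperature."""
    if temperature <= 0:
        raise ValueError("temperature must be positive")
    m = max(values)  # subtract the max for numerical stability
    exps = [math.exp((v - m) / temperature) for v in values]
    z = sum(exps)
    return [e / z for e in exps]

values = [1.0, 2.0, 3.0]
low_t = softmax_policy(values, 0.1)    # near-greedy: almost all mass on the best action
high_t = softmax_policy(values, 100.0) # near-uniform: high temperature flattens the policy
```

The experience modulation the abstract describes would add a frequency-dependent term to each action's value before the softmax; the temperature parameter here plays the role attributed to tonic dopamine.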
A multiscale/multiframe approach to 3D PET data reconstruction
Mendes, Luis; Ferreira, Nuno [Coimbra Univ. (Portugal). Inst. de Biofisica/Biomatematica; ICNAS - Instituto de Ciencias Nucleares Aplicadas a Saude, Coimbra (Portugal); Comtat, Claude [CEA/DSV/12BM, Orsay (France). Service Hospitalier Frederic Joliot
2011-07-01
A multiscale/multiframe 3D reconstruction scheme for Positron Emission Tomography is presented. Usually the dimensions of the reconstructed volume and the projection-space binning do not change during the image reconstruction process. In this paper we introduce the concept of a time frame into the multiscale reconstruction proposed by Raheja et al. This approach can be used to generate images reconstructed in near real time at a suitable scale, taking full advantage of list-mode reconstruction techniques. When compared with the Maximum Likelihood Expectation Maximization algorithm (single-scale ML-EM), the multiscale/multiframe scheme proposed in this work improves convergence speed, in particular in cold regions, while performing a fast reconstruction. The generation of image sequences at different spatial scales and times may be useful to optimize clinical acquisition protocols on the fly. (orig.)
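For reference, the single-scale ML-EM baseline mentioned above uses the multiplicative update x ← x · Aᵀ(y / Ax) / Aᵀ1. A minimal NumPy sketch with a toy system matrix (our own illustration, not the paper's implementation):

```python
import numpy as np

def ml_em(A, y, n_iter=200):
    """Single-scale ML-EM for emission tomography.

    A: (n_bins, n_voxels) system matrix; y: measured counts per bin.
    """
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)  # sensitivity image, A^T 1
    for _ in range(n_iter):
        proj = A @ x                              # forward projection
        ratio = y / np.maximum(proj, 1e-12)       # guard against division by zero
        x *= (A.T @ ratio) / sens                 # multiplicative EM update
    return x

# Toy example: 3 detector bins, 2 voxels, noise-free data
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
x_true = np.array([2.0, 3.0])
x_est = ml_em(A, A @ x_true)
```

On noise-free, consistent data the iterates converge to the true activity; the multiscale/multiframe scheme of the abstract accelerates this convergence, notably in cold (low-count) regions.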
Metcalfe, Kelly A; Semple, John; Quan, May-Lynn; Vadaparampil, Susan T; Holloway, Claire; Brown, Mitch; Bower, Bethanne; Sun, Ping; Narod, Steven A
2012-01-01
In this study, we report on the changes in psychosocial functioning over 1 year following breast cancer surgery in 3 groups of women, including those with mastectomy alone, those with mastectomy and immediate reconstruction, and those with delayed reconstruction. Women with breast cancer at 2 teaching hospitals in Ontario who were undergoing mastectomy alone, mastectomy with immediate reconstruction, or delayed reconstruction were asked to complete a battery of psychosocial questionnaires at their preoperative appointment and 1 year following surgery. A total of 190 women consented to participate in the study and completed the presurgical questionnaires. There were no presurgical differences between the 3 groups in quality of life, anxiety, depression, or sexual functioning. However, women who were undergoing delayed breast reconstruction (i.e., already had a mastectomy) had higher levels of body stigma (P = 0.01), body concerns (P = 0.002), and transparency (P = 0.002) than women who were undergoing mastectomy alone or mastectomy with immediate reconstruction. Of these women, 158 (83.2%) completed the 1-year follow-up. There were no significant differences in any of the psychosocial functioning scores between the 3 groups. Contrary to the assumed psychological benefits of breast reconstruction, psychological distress was evident among women regardless of reconstruction or timing of reconstruction. Further, psychosocial functioning (including quality of life, sexual functioning, cancer-related distress, body image, depression, and anxiety) was not different at 1-year postsurgery between women with mastectomy alone, mastectomy with immediate reconstruction, and delayed reconstruction. These results suggest that women need psychosocial support after breast cancer diagnosis, even if they have breast reconstruction.
Maximal lattice free bodies, test sets and the Frobenius problem
Jensen, Anders Nedergaard; Lauritzen, Niels; Roune, Bjarke Hammersholt
Maximal lattice free bodies are maximal polytopes without interior integral points. Scarf initiated the study of maximal lattice free bodies relative to the facet normals in a fixed matrix. In this paper we give an efficient algorithm for computing the maximal lattice free bodies of an integral...... method is inspired by the novel algorithm by Einstein, Lichtblau, Strzebonski and Wagon and the Groebner basis approach by Roune....
Maximizing scientific knowledge from randomized clinical trials
Gustafsson, Finn; Atar, Dan; Pitt, Bertram;
2010-01-01
Trialists have an ethical and financial responsibility to plan and conduct clinical trials in a manner that will maximize the scientific knowledge gained from the trial. However, the amount of scientific information generated by randomized clinical trials in cardiovascular medicine is highly...... variable. Generation of trial databases and/or biobanks originating in large randomized clinical trials has successfully increased the knowledge obtained from those trials. At the 10th Cardiovascular Trialist Workshop, possibilities and pitfalls in designing and accessing clinical trial databases were......, in particular with respect to collaboration with the trial sponsor and to analytic pitfalls. The advantages of creating screening databases in conjunction with a given clinical trial are described; and finally, the potential for posttrial database studies to become a platform for training young scientists...
Maximization of eigenvalues using topology optimization
Pedersen, Niels Leergaard
2000-01-01
Topology optimization is used to optimize the eigenvalues of plates. The results are intended especially for MicroElectroMechanical Systems (MEMS) but can be seen as more general. The problem is not formulated as a case of reinforcement of an existing structure, so there is a problem related...... to localized modes in low density areas. The topology optimization problem is formulated using the SIMP method. Special attention is paid to a numerical method for removing localized eigenmodes in low density areas. The method is applied to numerical examples of maximizing the first eigenfrequency. One example...... is a practical MEMS application: a probe used in an Atomic Force Microscope (AFM). For the AFM probe the optimization is complicated by a constraint on the stiffness and constraints on higher order eigenvalues....
MAXIMIZING THE BENEFITS OF ERP SYSTEMS
Paulo André da Conceição Menezes
2010-04-01
ERP (Enterprise Resource Planning) systems have been consolidated in companies of different sizes and sectors, allowing their real benefits to be definitively evaluated. In this study, several interactions have been studied across different phases: the strategic priorities and strategic planning defined as ERP strategy; business process review and ERP selection in the pre-implementation phase; project management and ERP adaptation in the implementation phase; and ERP revision and integration efforts in the post-implementation phase. Through rigorous use of case study methodology, this research led to developing and testing a framework for maximizing the benefits of ERP systems, and seeks to contribute to the generation of ERP initiatives that optimize their performance.
Reflection Quasilattices and the Maximal Quasilattice
Boyle, Latham
2016-01-01
We introduce the concept of a reflection quasilattice, the quasiperiodic generalization of a Bravais lattice with irreducible reflection symmetry. Among their applications, reflection quasilattices are the reciprocal (i.e., Bragg diffraction) lattices for quasicrystals and quasicrystal tilings, such as Penrose tilings, with irreducible reflection symmetry and discrete scale invariance. In a follow-up paper, we will show that reflection quasilattices can be used to generate tilings in real space with properties analogous to those in Penrose tilings, but with different symmetries and in various dimensions. Here we prove that reflection quasilattices only exist in dimensions two, three and four, and we prove that there is a unique reflection quasilattice in dimension four: the "maximal reflection quasilattice" in terms of dimensionality and symmetry. We further show that, unlike crystallographic Bravais lattices, all reflection quasilattices are invariant under rescaling by certain discrete scale factors. W…
Distributed Maximality based CTL Model Checking
Djamel Eddine Saidouni
2010-05-01
In this paper we investigate an approach to distributed CTL model checking on a network of workstations using Kleene three-valued logic. The state space is partitioned among the network nodes, and we represent each incomplete state space as a Maximality-based Labeled Transition System (MLTS), which can express true concurrency. The same algorithm runs in parallel on each node, evaluating a given property on an incomplete MLTS: it computes the set of states that satisfy the property, the set that fail it, and the set assigned the third value, "unknown," which arises because the partial state space lacks the information needed for a definite answer about the complete state space. To resolve these unknowns, the nodes exchange the information needed to conclude the result for the complete state space. An experimental version of the algorithm is currently being implemented in the functional programming language Erlang.
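The record above relies on Kleene's strong three-valued logic to represent properties that a partial state space cannot yet decide. A minimal sketch of those connectives (not the authors' distributed algorithm) uses Python's `None` for the "unknown" value:

```python
# Kleene's strong three-valued logic: truth values True, False, and None ("unknown").
# In a partial state space, "unknown" persists until another node supplies the
# missing information, as in the distributed CTL checker described above.

def k_not(a):
    return None if a is None else not a

def k_and(a, b):
    # False dominates; "unknown" survives only when no operand settles the result.
    if a is False or b is False:
        return False
    if a is None or b is None:
        return None
    return True

def k_or(a, b):
    # True dominates.
    if a is True or b is True:
        return True
    if a is None or b is None:
        return None
    return False
```

Note that `k_and(False, unknown)` is `False`: a node can sometimes conclude a result locally without waiting for its peers, which is what makes the partial evaluation useful.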
Evolution of correlated multiplexity through stability maximization
Dwivedi, Sanjiv K
2016-01-01
Investigating the relation between various structural patterns found in real-world networks and the stability of the underlying systems is crucial to understand the importance and evolutionary origin of such patterns. We evolve multiplex networks, comprising antisymmetric couplings in one layer, depicting a predator-prey relation, and symmetric couplings in the other, depicting a mutualistic (or competitive) relation, based on stability maximization through the largest eigenvalue. We find that correlated multiplexity emerges as evolution progresses. The evolved values of the correlated multiplexity exhibit a dependence on the inter-layer coupling strength. Furthermore, the inter-layer coupling strength governs the evolution of the disassortativity property in the individual layers. We provide an analytical understanding of these findings by considering star-like networks in both layers. The model and tools used here are useful for understanding the principles governing the stability as well as the importance of such patterns in …
Witten spinors on maximal, conformally flat hypersurfaces
Frauendiener, Jörg; Szabados, László B
2011-01-01
The boundary conditions that exclude zeros of the solutions of the Witten equation (and hence guarantee the existence of a 3-frame satisfying the so-called special orthonormal frame gauge conditions) are investigated. We determine the general form of the conformally invariant boundary conditions for the Witten equation, and find the boundary conditions that characterize the constant and the conformally constant spinor fields among the solutions of the Witten equations on compact domains in extrinsically and intrinsically flat, and on maximal, intrinsically globally conformally flat spacelike hypersurfaces, respectively. We also provide a number of exact solutions of the Witten equation with various boundary conditions (both at infinity and on inner or outer boundaries) that single out nowhere vanishing spinor fields on the flat, non-extreme Reissner--Nordstr\\"om and Brill--Lindquist data sets. Our examples show that there is an interplay between the boundary conditions, the global topology of the hypersurface...
Greedy Maximal Scheduling in Wireless Networks
Li, Qiao
2010-01-01
In this paper we consider greedy scheduling algorithms in wireless networks, i.e., the schedules are computed by adding links greedily based on some priority vector. Two special cases are considered: 1) Longest Queue First (LQF) scheduling, where the priorities are computed using queue lengths, and 2) Static Priority (SP) scheduling, where the priorities are pre-assigned. We first propose a closed-form lower bound stability region for LQF scheduling, and discuss the tightness result in some scenarios. We then propose a lower bound stability region for SP scheduling with multiple priority vectors, as well as a heuristic priority assignment algorithm, which is related to the well-known Expectation-Maximization (EM) algorithm. The performance gain of the proposed heuristic algorithm is finally confirmed by simulations.
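The greedy construction the record describes is simple to state: visit links in priority order and add each one that does not conflict with a link already scheduled. A minimal sketch of LQF over an assumed conflict relation (the paper's stability analysis is not reproduced here):

```python
# Greedy maximal scheduling sketch: links are added in order of a priority
# vector (queue lengths for LQF), skipping any link that conflicts with one
# already scheduled. The conflict relation is an assumed input, given as a
# set of unordered link pairs.

def greedy_schedule(queues, conflicts):
    """queues: dict link -> queue length; conflicts: set of frozenset pairs."""
    schedule = []
    # LQF: highest queue length first (ties broken by link id for determinism).
    for link in sorted(queues, key=lambda l: (-queues[l], l)):
        if all(frozenset((link, s)) not in conflicts for s in schedule):
            schedule.append(link)
    return schedule
```

For example, with queues `{"a": 5, "b": 3, "c": 4}` and a single conflict between `a` and `c`, link `a` is taken first, `c` is skipped, and `b` completes the maximal schedule. SP scheduling is the same loop with a pre-assigned priority vector in place of the queue lengths.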
Dispatch Scheduling to Maximize Exoplanet Detection
Johnson, Samson; McCrady, Nate; MINERVA
2016-01-01
MINERVA is a dedicated exoplanet detection telescope array using radial velocity measurements of nearby stars to detect planets. MINERVA will be a completely robotic facility, with a goal of maximizing the number of exoplanets detected. MINERVA requires a unique application of queue scheduling due to its automated nature and the requirement of high cadence observations. A dispatch scheduling algorithm is employed to create a dynamic and flexible selector of targets to observe, in which stars are chosen by assigning values through a weighting function. I designed and have begun testing a simulation which implements the functions of a dispatch scheduler and records observations based on target selections through the same principles that will be used at the commissioned site. These results will be used in a larger simulation that incorporates weather, planet occurrence statistics, and stellar noise to test the planet detection capabilities of MINERVA. This will be used to heuristically determine an optimal observing strategy for the MINERVA project.
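The dispatch-scheduling idea in the record above — score every candidate with a weighting function at decision time and observe the top-scoring target — can be sketched as follows. The particular weights here (cadence pressure, an airmass penalty) are hypothetical illustrations, not MINERVA's actual weighting function:

```python
# Dispatch-scheduling sketch: at each decision time, every observable target
# is scored by a weighting function and the highest-scoring one is selected.
# The weight terms below are assumptions for illustration only.

def weight(target, now):
    overdue = now - target["last_obs"]          # hours since last observation
    cadence_term = overdue / target["cadence"]  # > 1 means the target is due
    airmass_penalty = 1.0 / target["airmass"]   # prefer targets high in the sky
    return cadence_term * airmass_penalty

def dispatch(targets, now):
    observable = [t for t in targets if t["observable"]]
    if not observable:
        return None
    return max(observable, key=lambda t: weight(t, now))
```

Because the selection is recomputed at every decision time, the scheduler adapts naturally to weather interruptions and changing target visibility, which is the flexibility the abstract emphasizes for an automated facility.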
A New Biflavone from Selaginella pulvinata Maxim
XU Kang-Ping; XU Zhi; DENG Yin-Hua; LI Fu-Shuang; ZHOU Ying-Jun; HU Gao-Yun; TAN Gui-Shan
2003-01-01
Selaginella pulvinata Maxim. is distributed all over China and is used for the treatment of haemorrhage. [1] We studied the chemical constituents of S. pulvinata in order to find the active compounds. Dried stems and leaves of S. pulvinata (6.5 kg) were extracted with 70% ethanol twice. The extract was evaporated under vacuum and then suspended in water and extracted with petroleum ether and EtOAc sequentially. The EtOAc extract was chromatographed on silica gel, eluted with CHCl3-MeOH. As a result, a novel biflavone, named pulvinatabiflavone, was obtained from fractions 75-78. Its structure was determined on the basis of spectroscopic analysis as 5,5″,4‴-trihydroxy-7,7″-dimethoxy-[4′-O-6″]-biflavone (compound 1).
Maximal energy extraction under discrete diffusive exchange
Hay, M. J., E-mail: hay@princeton.edu [Department of Astrophysical Sciences, Princeton University, Princeton, New Jersey 08544 (United States); Schiff, J. [Department of Mathematics, Bar-Ilan University, Ramat Gan 52900 (Israel); Fisch, N. J. [Department of Astrophysical Sciences, Princeton University, Princeton, New Jersey 08544 (United States); Princeton Plasma Physics Laboratory, Princeton, New Jersey 08543 (United States)
2015-10-15
Waves propagating through a bounded plasma can rearrange the densities of states in the six-dimensional velocity-configuration phase space. Depending on the rearrangement, the wave energy can either increase or decrease, with the difference taken up by the total plasma energy. In the case where the rearrangement is diffusive, only certain plasma states can be reached. It turns out that the set of reachable states through such diffusive rearrangements has been described in very different contexts. Building upon those descriptions, and making use of the fact that the plasma energy is a linear functional of the state densities, the maximal extractable energy under diffusive rearrangement can then be addressed through linear programming.
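Following the abstract's framing, the extractable energy can be posed as a linear program: the plasma energy is linear in the state densities, and the diffusive rearrangements can be modeled (as an illustrative simplification of the paper's setting, not its exact constraint set) as doubly stochastic mixings of the initial densities. A linear objective over the Birkhoff polytope attains its optimum at a vertex, i.e. a permutation matrix, so a toy instance can enumerate permutations instead of calling an LP solver:

```python
# Sketch: extractable energy under diffusive rearrangement, modeling the
# rearrangement as a doubly stochastic mixing D of initial densities p, so
# the final energy <e, D p> is a linear functional over the Birkhoff
# polytope. The LP optimum sits at a permutation matrix, so for a toy
# problem we enumerate permutations directly.

from itertools import permutations

def min_final_energy(e, p):
    """Minimal plasma energy reachable by permuting densities p over levels e."""
    return min(sum(ei * p[j] for ei, j in zip(e, perm))
               for perm in permutations(range(len(p))))

def max_extractable(e, p):
    """Energy released: initial energy minus the minimal reachable energy."""
    initial = sum(ei * pi for ei, pi in zip(e, p))
    return initial - min_final_energy(e, p)
```

By the rearrangement inequality, the minimum pairs the largest density with the lowest energy level; for realistic state-space sizes one would hand the same objective and constraints to an LP solver rather than enumerate.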
Delayed breast implant reconstruction
Hvilsom, Gitte B.; Hölmich, Lisbet R.; Steding-Jessen, Marianne;
2012-01-01
We evaluated the association between radiation therapy and severe capsular contracture or reoperation after 717 delayed breast implant reconstruction procedures (288 one-stage and 429 two-stage procedures) identified in the prospective database of the Danish Registry for Plastic Surgery of the Breast during … reconstruction approaches other than implants should be seriously considered among women who have received radiation therapy.
Breast reconstruction - slideshow
MedlinePlus slideshow: Breast reconstruction - series—Indication, part 1 (medlineplus.gov/ency/presentations/100156.htm). Related MedlinePlus health topic: Breast Reconstruction.
Maximally reliable Markov chains under energy constraints.
Escola, Sean; Eisele, Michael; Miller, Kenneth; Paninski, Liam
2009-07-01
Signal-to-noise ratios in physical systems can be significantly degraded if the outputs of the systems are highly variable. Biological processes for which highly stereotyped signal generation is a necessary feature appear to have reduced their signal variability by employing multiple processing steps. To better understand why this multistep cascade structure might be desirable, we prove that the reliability of a signal generated by a multistate system with no memory (i.e., a Markov chain) is maximal if and only if the system topology is such that the process steps irreversibly through each state, with transition rates chosen such that an equal fraction of the total signal is generated in each state. Furthermore, our result indicates that by increasing the number of states, it is possible to arbitrarily increase the reliability of the system. In a physical system, however, an energy cost is associated with maintaining irreversible transitions, and this cost increases with the number of such transitions (i.e., the number of states). Thus, an infinite-length chain, which would be perfectly reliable, is infeasible. To model the effects of energy demands on the maximally reliable solution, we numerically optimize the topology under two distinct energy functions that penalize either irreversible transitions or incommunicability between states, respectively. In both cases, the solutions are essentially irreversible linear chains, but with upper bounds on the number of states set by the amount of available energy. We therefore conclude that a physical system for which signal reliability is important should employ a linear architecture, with the number of states (and thus the reliability) determined by the intrinsic energy constraints of the system.
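The core quantitative claim above has a compact illustration: an irreversible linear chain of n states with equal transition rates produces a completion time that is Erlang(n) distributed (a sum of n i.i.d. exponentials), whose coefficient of variation is 1/sqrt(n). Adding states therefore makes the signal arbitrarily reliable, exactly as the abstract argues. A short sketch of that calculation:

```python
# Reliability of an irreversible linear chain: the total traversal time is a
# sum of n i.i.d. exponential dwell times, i.e. Erlang(n, rate). Its
# coefficient of variation (std/mean) is 1/sqrt(n), independent of the rate,
# so reliability grows without bound as states are added.

import math

def erlang_cv(n_states, rate=1.0):
    mean = n_states / rate
    variance = n_states / rate ** 2
    return math.sqrt(variance) / mean  # = 1 / sqrt(n_states)
```

The energy constraint in the paper then caps n, fixing the best achievable coefficient of variation for a given energy budget.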
Wyse, Adam E.; Babcock, Ben
2016-01-01
A common suggestion made in the psychometric literature for fixed-length classification tests is that one should design tests so that they have maximum information at the cut score. Designing tests in this way is believed to maximize the classification accuracy and consistency of the assessment. This article uses simulated examples to illustrate…
THE EFFECTS OF MAXIMAL AND SUBMAXIMAL AEROBIC EXERCISE ON BRONCHOSPASM INDICES IN NON-ATHLETES
Amir GANJİ
2012-08-01
Background: Exercise-induced bronchospasm (EIB) is a transient airway obstruction that occurs during and after exercise. It is observed in healthy individuals as well as in asthmatic and allergic rhinitis patients. Research question: The study compared the effects of one session of submaximal aerobic exercise and one session of maximal aerobic exercise on the prevalence of exercise-induced bronchospasm in non-athletic students. Type of study: An experimental study, using human subjects, was designed. Methods: 20 non-athletic male students participated in two sessions of aerobic exercise, and the prevalence of EIB was investigated among them. The criteria for assessing exercise-induced bronchospasm were a ≥10% fall in FEV1, a ≥15% fall in FEF25-75%, or a ≥25% fall in PEFR. Results: The maximal exercise did not affect FEF25-75% and PEF, but it led to a meaningful reduction in FEV1. By contrast, the submaximal exercise affected none of these indices; in both protocols the same result was obtained for PEF and FEF25-75%. Moreover, the prevalence of EIB was 15% after the submaximal exercise and 20% after the maximal one, a significant difference. Conclusion: This study demonstrated that, in contrast to the subjects who performed submaximal exercise, those who participated in the maximal protocol showed more changes in the pulmonary function indices, and the prevalence of EIB was greater among them.
Maximal elements of non necessarily acyclic binary relations
Josep Enric Peris Ferrando; Begoña Subiza Martínez
1992-01-01
The existence of maximal elements for binary preference relations is analyzed without imposing transitivity or convexity conditions. From each preference relation a new acyclic relation is defined in such a way that some maximal elements of this new relation characterize maximal elements of the original one. The result covers the case whereby the relation is acyclic.
Reliability and validity of the maximal anaerobic running test.
Nummela, A; Alberts, M; Rijntjes, R P; Luhtanen, P; Rusko, H
1996-07-01
Physically active men (n = 13) twice performed the Maximal Anaerobic Running Test (MART) on a treadmill and once the Wingate Anaerobic Test (WAnT) on a cycle ergometer. The MART consisted of n × 20-s runs with 100-s recovery between the runs. The speed of the first run was 14.6 km·h-1 and the inclination 4 degrees. Thereafter, the speed was increased by 1.37 km·h-1 every run until exhaustion. During all tests oxygen uptake was measured breath-by-breath, and blood samples were taken from the fingertip 40 s after each run to determine the lactate concentration (BLa). Power at submaximal BLa levels and maximal power (P5mM, P10mM and Pmax, respectively) were calculated, and power was expressed as the oxygen demand of running according to the American College of Sports Medicine equation. In the MART the Pmax was 108 ml·kg-1·min-1 and peak BLa was 15.6 mM. The reliability of the power indices in the MART was as follows: r = 0.92 (p …); … the MART and the cycle ergometer test measure slightly different qualities.
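The MART expresses power as the oxygen demand of running via the ACSM metabolic equation. A commonly stated form of it (speed in m/min, grade as a fraction) is sketched below; treat the exact coefficients as an assumption and check the ACSM guidelines for the form the authors used:

```python
# ACSM running equation sketch: estimated oxygen demand in ml.kg-1.min-1.
# Coefficients follow the commonly quoted form (horizontal 0.2, vertical 0.9,
# resting 3.5); this is an illustration, not the paper's exact computation.

def acsm_running_vo2(speed_m_per_min, grade_fraction):
    horizontal = 0.2 * speed_m_per_min
    vertical = 0.9 * speed_m_per_min * grade_fraction
    resting = 3.5
    return horizontal + vertical + resting
```

With this form, each 1.37 km·h-1 speed increment in the MART maps to a fixed increment in oxygen demand, which is how the test converts running stages into the power indices (P5mM, P10mM, Pmax) reported above.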
Consumer-driven profit maximization in broiler production and processing
Ecio de Farias Costa
2004-01-01
Increased emphasis on consumer markets in broiler profit-maximization modeling generates results that differ from those of traditional profit-maximization models. This approach reveals that the adoption of step pricing and the consideration of marketing options (examples of responsiveness to consumers) affect the optimal feed formulation levels and the types of broiler production that generate maximum profitability. The adoption of step pricing shows that higher profits can be obtained for targeted weights only if premium prices for broiler products are contracted.
Breburda, C. S.; Griffin, B. P.; Pu, M.; Rodriguez, L.; Cosgrove, D. M. 3rd; Thomas, J. D.
1998-01-01
OBJECTIVES: We sought to validate direct planimetry of mitral regurgitant orifice area from three-dimensional echocardiographic reconstructions. BACKGROUND: Regurgitant orifice area (ROA) is an important measure of the severity of mitral regurgitation (MR) that up to now has been calculated from hemodynamic data rather than measured directly. We hypothesized that the improved spatial resolution of the mitral valve (MV) with three-dimensional (3D) echo might allow accurate planimetry of the ROA. METHODS: We reconstructed the MV using 3D echo with 3° rotational acquisitions (TomTec) using a transesophageal (TEE) multiplane probe in 15 patients undergoing MV repair (age 59 +/- 11 years). One observer reconstructed the prolapsing mitral leaflet in a left atrial plane parallel to the ROA and planimetered the two-dimensional (2D) projection of the maximal ROA. A second observer, blinded to the results of the first, calculated the maximal ROA using the proximal convergence method, defined as the maximal flow rate (2πr²·va, where r is the radius of a color alias contour with velocity va) divided by the regurgitant peak velocity (obtained by continuous wave [CW] Doppler) and corrected as necessary for proximal flow constraint. RESULTS: Maximal ROA was 0.79 +/- 0.39 (mean +/- SD) cm2 by 3D and 0.86 +/- 0.42 cm2 by proximal convergence (p = NS). Maximal ROA by 3D echo (y) was highly correlated with the corresponding flow measurement (x) (y = 0.87x + 0.03, r = 0.95, p < 0.001) with close agreement seen (ΔROA (y − x) = 0.07 +/- 0.12 cm2). CONCLUSIONS: 3D echo imaging of the MV allows direct visualization and planimetry of the ROA in patients with severe MR, with good agreement with flow-based proximal convergence measurements.
Breast Reconstruction with Flap Surgery
Breast reconstruction with flap surgery: overview, by Mayo Clinic Staff. Breast reconstruction is a surgical procedure that restores shape to … breast tissue to treat or prevent breast cancer. Breast reconstruction with flap surgery is a type of breast …
Maximization Paradox: Result of Believing in an Objective Best.
Luan, Mo; Li, Hong
2017-05-01
The results from four studies provide reliable evidence of how beliefs in an objective best influence the decision process and subjective feelings. A belief in an objective best serves as the fundamental mechanism connecting the concept of maximizing and the maximization paradox (i.e., expending great effort but feeling bad when making decisions; Study 1), and randomly chosen decision makers operate similarly to maximizers once they are manipulated to believe that the best is objective (Studies 2A, 2B, and 3). In addition, the effect of a belief in an objective best on the maximization paradox is moderated by the presence of a dominant option (Study 3). The findings of this research contribute to the maximization literature by demonstrating that believing in an objective best leads to the maximization paradox. The maximization paradox is indeed the result of believing in an objective best.
Tomographic reconstruction of time-bin-entangled qudits
Nowierski, Samantha J.; Oza, Neal N.; Kumar, Prem; Kanter, Gregory S.
2016-10-01
We describe an experimental implementation to generate and measure high-dimensional time-bin-entangled qudits. Two-photon time-bin entanglement is generated via spontaneous four-wave mixing in single-mode fiber. Unbalanced Mach-Zehnder interferometers transform selected time bins to polarization entanglement, allowing standard polarization-projective measurements to be used for complete quantum state tomographic reconstruction. Here we generate maximally entangled qubits (d = 2), qutrits (d = 3), and ququarts (d = 4), as well as other phase-modulated nonmaximally entangled qubits and qutrits. We reconstruct and verify all generated states using maximum-likelihood estimation tomography.
2008-01-01
On May 22, 10 days after the Wenchuan earthquake in Sichuan Province, the State Council formed the Post-earthquake Reconstruction Planning Group, deciding to work out a general reconstruction plan within a period of three months. Sichuan was the worst-hit area of China, so reconstruction work there will have a direct influence on how plans proceed in other areas. On July 18, Beijing Review reporter Feng Jianhua interviewed Wang Guangsi, Vice Director of the Sichuan Development and Reform Commission, about Sichuan's reconstruction plan.
Rebinning and reconstruction techniques for 3D TOF-PET
Vandenberghe, Stefaan [Philips Research USA, Briarcliff NY (United States)]. E-mail: stefaan.vandenberghe@ugent.be; Karp, Joel [PET instrumentation group, University of Pennsylvania, Philadelphia, PA (United States)
2006-12-20
The measured time difference in 3D Time-of-Flight (TOF) positron emission tomography (PET) makes it possible to improve the signal-to-noise ratio of reconstructed images. The improvement in signal-to-noise ratio will probably be used to reduce imaging time. To keep up with workflow there will be a need for faster reconstruction methods. A variety of reconstruction and rebinning methods have been developed in the past for 2D and 3D TOF-PET data. The TOF information makes very simple reconstruction methods possible. These allow real-time reconstruction, but the obtained image quality is lower. Relatively fast reconstructions can be obtained using rebinning techniques. Fully 3D iterative listmode reconstruction makes no approximations but comes at the expense of long reconstruction times. Data from Monte Carlo simulations of 3D TOF-PET scanners are used to quantify differences in noise and contrast between the different methods. Real-time methods are useful for direct display after or even during acquisition, but do not generate useful data for reviewing. Rebinning methods can be used to reduce the reconstruction time with a small loss in image quality, and the loss is quite small if good timing resolution can be achieved. Fully 3D iterative listmode reconstruction maximizes the obtained image quality and should be used if not even a small loss in image quality is acceptable. When timing resolution is improved, the differences between the methods become clearly smaller, and in the limit where timing resolution equals spatial resolution, the methods are equivalent.
Breast Augmentation and Breast Reconstruction Demonstrate Equivalent Aesthetic Outcomes
Davis, Christopher R.; Nguyen, Dung H.
2016-01-01
Background: There is a perception that cosmetic breast surgery has more favorable aesthetic outcomes than reconstructive breast surgery. We tested this hypothesis by comparing aesthetic outcomes after breast augmentation and reconstruction. Methods: Postoperative images of 10 patients (cosmetic, n = 4; reconstructive, n = 6; mean follow-up, 27 months) were presented anonymously to participants who were blinded to clinical details. Participants were asked if they believed cosmetic or reconstructive surgery had been performed. Aesthetic outcome measures were quantified: (1) natural appearance, (2) size, (3) contour, (4) symmetry, (5) position of breasts, (6) position of nipples, (7) scars (1 = poor and 4 = excellent). Images were ranked from 1 (most aesthetic) to 10 (least aesthetic). Analyses included two-tailed t tests, Mann–Whitney U tests, and χ2 tests. Results: One thousand eighty-five images were quantified from 110 surveys (99% response rate). The accuracy of identifying cosmetic or reconstructive surgery was 55% and 59%, respectively (P = 0.18). Significantly more of the top 3 aesthetic cases were reconstructive (51% vs 49%; P = 0.03). Despite this, cases perceived to be reconstructive were ranked significantly lower (5.9 vs 5.0; P < 0.05), with the exception of breast position, which improved after reconstruction (2.9 vs 2.7; P = 0.009), and scars, which were more favorable after augmentation (2.9 vs 3.1; P < …). Aesthetic outcomes of cosmetic and reconstructive breast surgery are broadly equivalent, though preconceptions influence aesthetic opinion. Plastic surgeons' mutually inclusive reconstructive and aesthetic skill set maximizes aesthetic outcomes. PMID:27536490
EXPLANATORY VARIANCE IN MAXIMAL OXYGEN UPTAKE
Jacalyn J. Robert McComb
2006-06-01
The purpose of this study was to develop a prediction equation that could be used to estimate maximal oxygen uptake (VO2max) from a submaximal water running protocol. Thirty-two volunteers (n = 19 males, n = 13 females; ages 18-24 years) underwent the following testing procedures: (a) a 7-site skinfold assessment; (b) a land VO2max running treadmill test; and (c) a 6-min water running test. For the water running submaximal protocol, the participants were fitted with an Aqua Jogger Classic Uni-Sex Belt and a Polar Heart Rate Monitor; the participants' head, shoulders, hips and feet were vertically aligned, using a modified running/bicycle motion. A regression model was used to predict VO2max. The criterion variable, VO2max, was measured using open-circuit calorimetry utilizing the Bruce Treadmill Protocol. Predictor variables included in the model were percent body fat (%BF), height, weight, gender, and heart rate following the 6-min water running protocol. Percent body fat accounted for 76% (r = -0.87, SEE = 3.27) of the variance in VO2max. No other variables significantly contributed to the explained variance in VO2max. The equation for the estimation of VO2max is as follows: VO2max (ml·kg-1·min-1) = 56.14 - 0.92 (%BF).
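The reported model reduces to a one-variable equation, which can be sketched directly. The coefficients are taken from the abstract; treat this as an illustration of the reported regression, not a validated calculator:

```python
# Sketch of the reported prediction equation: in the final model, VO2max
# depends only on percent body fat. Coefficients as stated in the abstract.

def predict_vo2max(percent_body_fat):
    """Estimated VO2max in ml.kg-1.min-1 from %BF."""
    return 56.14 - 0.92 * percent_body_fat
```

For example, a participant at 20 %BF is predicted at 56.14 − 18.40 = 37.74 ml·kg-1·min-1, with the abstract's SEE of 3.27 indicating the expected error band around such estimates.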
Reflection quasilattices and the maximal quasilattice
Boyle, Latham; Steinhardt, Paul J.
2016-08-01
We introduce the concept of a reflection quasilattice, the quasiperiodic generalization of a Bravais lattice with irreducible reflection symmetry. Among their applications, reflection quasilattices are the reciprocal (i.e., Bragg diffraction) lattices for quasicrystals and quasicrystal tilings, such as Penrose tilings, with irreducible reflection symmetry and discrete scale invariance. In a follow-up paper, we will show that reflection quasilattices can be used to generate tilings in real space with properties analogous to those in Penrose tilings, but with different symmetries and in various dimensions. Here we explain that reflection quasilattices only exist in dimensions two, three, and four, and we prove that there is a unique reflection quasilattice in dimension four: the "maximal reflection quasilattice" in terms of dimensionality and symmetry. Unlike crystallographic Bravais lattices, all reflection quasilattices are invariant under rescaling by certain discrete scale factors. We tabulate the complete set of scale factors for all reflection quasilattices in dimension d > 2, and for all those with quadratic irrational scale factors in d = 2.
Network channel allocation and revenue maximization
Hamalainen, Timo; Joutsensalo, Jyrki
2002-09-01
This paper introduces a model that can be used to share link capacity among customers under different kinds of traffic conditions. The model is suitable for networks such as 4G (fast wireless access to a wired network) that support connections of given duration requiring a certain quality of service. We study different types of network traffic mixed on the same communication link. A single link is considered as a bottleneck, and the goal is to find customer traffic profiles that maximize the revenue of the link. The presented allocation system accepts every call and there is no absolute blocking, but the offered data rate per user depends on the network load. The data arrival rate depends on the current link utilization, the user's payment (selected CoS class), and the delay. The arrival rate is (i) increasing with respect to the offered data rate, (ii) decreasing with respect to the price, (iii) decreasing with respect to the network load, and (iv) decreasing with respect to the delay. As an example, an explicit formula obeying these conditions is given and analyzed.
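Conditions (i)-(iv) above pin down only the monotonicity of the demand model, not its form. A minimal sketch with an assumed functional form (the squared price elasticity and the grid search are illustrative choices, not the paper's explicit formula):

```python
# Sketch of a demand model satisfying conditions (i)-(iv): arrival rate
# increases with offered data rate and decreases with price, load, and delay.
# The (1 + price)**2 elasticity is an assumption chosen so that revenue has
# an interior maximum in price.

def arrival_rate(data_rate, price, load, delay):
    return data_rate / ((1.0 + price) ** 2 * (1.0 + load) * (1.0 + delay))

def best_price(data_rate, load, delay, candidates):
    # Revenue = price * arrival rate; pick the candidate price maximizing it.
    return max(candidates, key=lambda p: p * arrival_rate(data_rate, p, load, delay))
```

With this form, revenue is proportional to p/(1+p)², which peaks at p = 1: pricing too low forfeits revenue per call, pricing too high suppresses arrivals, which is precisely the trade-off the link-revenue maximization studies.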
Evolution of correlated multiplexity through stability maximization
Dwivedi, Sanjiv K.; Jalan, Sarika
2017-02-01
Investigating the relation between various structural patterns found in real-world networks and the stability of underlying systems is crucial to understand the importance and evolutionary origin of such patterns. We evolve multiplex networks, comprising antisymmetric couplings in one layer depicting predator-prey relationship and symmetric couplings in the other depicting mutualistic (or competitive) relationship, based on stability maximization through the largest eigenvalue of the corresponding adjacency matrices. We find that there is an emergence of the correlated multiplexity between the mirror nodes as the evolution progresses. Importantly, evolved values of the correlated multiplexity exhibit a dependence on the interlayer coupling strength. Additionally, the interlayer coupling strength governs the evolution of the disassortativity property in the individual layers. We provide analytical understanding to these findings by considering starlike networks representing both the layers. The framework discussed here is useful for understanding principles governing the stability as well as the importance of various patterns in the underlying networks of real-world systems ranging from the brain to ecology which consist of multiple types of interaction behavior.
Maximal respiratory pressure in healthy Japanese children
Tagami, Miki; Okuno, Yukako; Matsuda, Tadamitsu; Kawamura, Kenta; Shoji, Ryosuke; Tomita, Kazuhide
2017-01-01
[Purpose] Normal values for respiratory muscle pressures during development in Japanese children have not been reported. The purpose of this study was to investigate respiratory muscle pressures in Japanese children aged 3–12 years. [Subjects and Methods] We measured respiratory muscle pressure values using a manovacuometer without a nose clip, with subjects in a sitting position. Data were collected for ages 3–6 (Group I: 68 subjects), 7–9 (Group II: 86 subjects), and 10–12 (Group III: 64 subjects) years. [Results] Respiratory muscle pressures in children increased significantly with age in both sexes, and were higher in boys than in girls. Correlation coefficients between maximal respiratory pressure and age, height, and weight were significant for each sex, ranging from 0.279 to 0.471. [Conclusion] In this study, we established pediatric respiratory muscle pressure reference values for each age. The values for respiratory muscle pressures were lower than those reported in Brazilian studies, which suggests that respiratory muscle pressures vary with ethnicity. PMID:28356644
Maximizing exosome colloidal stability following electroporation.
Hood, Joshua L; Scott, Michael J; Wickline, Samuel A
2014-03-01
Development of exosome-based semisynthetic nanovesicles for diagnostic and therapeutic purposes requires novel approaches to load exosomes with cargo. Electroporation has previously been used to load exosomes with RNA. However, exosome colloidal stability following electroporation has not been investigated. Herein, we report the development of a unique trehalose pulse media (TPM) that minimizes exosome aggregation following electroporation. Dynamic light scattering (DLS) and RNA absorbance were employed to determine the extent of exosome aggregation and electroextraction post-electroporation in TPM compared to common PBS pulse media or sucrose pulse media (SPM). Use of TPM to disaggregate melanoma exosomes post-electroporation was dependent on both exosome concentration and electric field strength. TPM maximized exosome dispersal post-electroporation for both homogeneous B16 melanoma and heterogeneous human serum-derived populations of exosomes. Moreover, TPM enabled heavy cargo loading of melanoma exosomes with 5-nm superparamagnetic iron oxide nanoparticles (SPION5) while maintaining original exosome size and minimizing exosome aggregation, as evidenced by transmission electron microscopy. Loading exosomes with SPION5 increased exosome density on sucrose gradients. This provides a simple, label-free means of enriching exogenously modified exosomes and introduces the potential for MRI-driven theranostic exosome investigations in vivo.
Kavanagh, Justin J; Feldman, Matthew R; Simmonds, Michael J
2016-09-07
The aim of this study was to investigate how maximal intermittent contractions of a hand muscle influence cortical and reflex activity, as well as the ability to voluntarily activate, the homologous muscle in the opposite limb. Twelve healthy subjects (age: 24 ± 3 years, all right hand dominant) performed maximal contractions of the dominant-limb first dorsal interosseous (FDI), and activity of the contralateral FDI was examined in a series of experiments. Index finger abduction force, FDI EMG, motor evoked potentials and heteronymous reflexes were obtained from the contralateral limb during brief non-fatiguing contractions. The same measures, as well as the ability to voluntarily activate the contralateral FDI, were then assessed in an extended intermittent contraction protocol that elicited fatigue. Brief contractions under non-fatigued conditions increased index finger abduction force, FDI EMG, and motor evoked potential amplitude of the contralateral limb. However, when intermittent maximal contractions were continued until fatigue, there was an inability to produce maximal force with the contralateral limb (~30%), which was coupled with a decrease in the level of voluntary activation (~20%). These declines were present without changes in reflex activity, and regardless of whether cortical or motor point stimulation was used to assess voluntary activation. It is concluded that performing maximal intermittent contractions with a single limb causes an inability of the CNS to maximally drive the homologous muscle of the contralateral limb. This was, in part, mediated by mechanisms that involve the motor cortex ipsilateral to the contracting limb.
Optimal Energy Management for a Smart Grid using Resource-Aware Utility Maximization
Abegaz, Brook W.; Mahajan, Satish M.; Negeri, Ebisa O.
2016-06-01
Heterogeneous energy prosumers are aggregated to form a smart-grid-based energy community managed by a central controller that can maximize their collective energy resource utilization. Using the central controller and distributed energy management systems, various mechanisms that harness the power profile of the energy community are developed for optimal, multi-objective energy management. The proposed mechanisms include resource-aware, multi-variable energy utility maximization objectives, namely: (1) maximizing the net green energy utilization, (2) maximizing the prosumers' level of comfortable, high-quality power usage, and (3) maximizing the economic dispatch of energy storage units that minimizes the net energy cost of the energy community. Moreover, an optimal energy management solution that combines the three objectives has been implemented by developing novel techniques of optimally flexible (un)certainty projection and appliance-based pricing decomposition in IBM ILOG CPLEX Studio. Real-world, per-minute data from an energy community consisting of forty prosumers in Amsterdam, the Netherlands, are used. Results show that each of the proposed mechanisms yields significant increases in the aggregate energy resource utilization and welfare of prosumers as compared to traditional peak-power reduction methods. Furthermore, the multi-objective, resource-aware utility maximization approach leads to an optimal energy equilibrium and provides a sustainable energy management solution, as verified by the Lagrangian method. The proposed resource-aware mechanisms could directly benefit emerging energy communities around the world in attaining their energy resource utilization targets.
Image Segmentation by Discounted Cumulative Ranking on Maximal Cliques
Carreira, Joao; Sminchisescu, Cristian
2010-01-01
We propose a mid-level image segmentation framework that combines multiple figure-ground (FG) hypotheses, constrained at different locations and scales, into interpretations that tile the entire image. The problem is cast as optimization over sets of maximal cliques sampled from the graph connecting non-overlapping, putative figure-ground segment hypotheses. Potential functions over cliques combine unary Gestalt-based figure quality scores and pairwise compatibilities among spatially neighboring segments, constrained by T-junctions and the boundary interface statistics resulting from projections of real 3D scenes. Learning the model parameters is formulated as rank optimization, alternating between sampling image tilings and optimizing their potential function parameters. State-of-the-art results are reported on both the Berkeley and the VOC2009 segmentation datasets, where a 28% improvement was achieved.
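The maximal cliques over which the tilings are optimized can be enumerated with the standard Bron-Kerbosch algorithm; the sketch below runs it on a toy compatibility graph of five hypothetical segments (note the paper samples cliques rather than enumerating them exhaustively):

```python
def bron_kerbosch(R, P, X, adj, out):
    """Enumerate maximal cliques of an undirected graph given as adjacency sets.
    R: current clique, P: candidate vertices, X: already-processed vertices."""
    if not P and not X:
        out.append(frozenset(R))
        return
    for v in list(P):
        bron_kerbosch(R | {v}, P & adj[v], X & adj[v], adj, out)
        P.remove(v)
        X.add(v)

# Toy compatibility graph over 5 putative segments: an edge means the two
# segments are non-overlapping and spatially compatible (illustrative data).
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)]
adj = {v: set() for v in range(5)}
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

cliques = []
bron_kerbosch(set(), set(adj), set(), adj, cliques)
# Maximal cliques, i.e. candidate tilings: {0, 1, 2}, {2, 3}, {3, 4}
```

In the framework above, each such clique would then be scored by the unary figure-quality and pairwise compatibility potentials.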
Maximizing Information from Residential Measurements of Volatile Organic Compounds
Maddalena, Randy [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Li, Na [Berkeley Analytical Associates, Richmond, CA (United States); Hodgson, Alfred [Berkeley Analytical Associates, Richmond, CA (United States); Offermann, Francis [Indoor Environmental Engineering, San Francisco, CA (United States); Singer, Brett [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
2013-02-01
Continually changing materials used in home construction and finishing can introduce new chemicals or changes in the VOC profile of residential air, and the trend towards tighter homes can lead to higher exposure concentrations for many indoor sources. However, the complex mixture of VOCs in residential air makes it difficult to discover emerging contaminants and/or trends in pollutant profiles. The purpose of this study is to prepare a comprehensive library of chemicals found in homes, along with a semi-quantitative approach to maximize the information gained from VOC measurements. We carefully reviewed data from 108 new California homes and identified 238 individual compounds. The majority of the identified VOCs originated indoors. Only 31% were found to have relevant health-based exposure guidelines, and less than 10% had a chronic reference exposure level (CREL). This finding highlights the importance of extending IAQ studies to include a wider range of VOCs.
Prediction of Maximal Heart Rate in Children and Adolescents.
Gelbart, Miri; Ziv-Baran, Tomer; Williams, Craig A; Yarom, Yoni; Dubnov-Raz, Gal
2017-03-01
To identify a method to predict the maximal heart rate (MHR) in children and adolescents, as available prediction equations developed for adults have low accuracy in children. We hypothesized that MHR may be influenced by resting heart rate, anthropometric factors, or fitness level. Cross-sectional study. Sports medicine center in primary care. Data from 627 treadmill maximal exercise tests performed by 433 pediatric athletes (age 13.7 ± 2.1 years, 70% males) were analyzed. Age, sex, sport type, stature, body mass, BMI, body fat, fitness level, resting heart rate, and MHR were recorded. To develop a prediction equation for MHR in youth, stepwise multivariate linear regression and a linear mixed model were used, and correlations between existing prediction equations and pediatric MHR were determined. Observed MHR was 197 ± 8.6 b·min⁻¹. Regression analysis revealed that resting heart rate, fitness, body mass, and fat percent were predictors of MHR (R = 0.25); resting heart rate explained the largest share of the explained MHR variance, body mass added 5.7%, fat percent added 2.4%, and fitness added 1.2%. Existing adult equations had low correlations with observed MHR in children and adolescents (r = -0.03 to 0.34). A new equation to predict MHR in children and adolescents was developed, but was found to have low predictive ability, a finding similar to adult equations applied to children. Considering the narrow range of MHR in youth, we propose using 197 b·min⁻¹ as the mean MHR in children and adolescents, with 180 b·min⁻¹ as the minimal threshold value (-2 standard deviations).
Breast Reconstruction After Mastectomy
... It also does not involve cutting of the abdominal muscle and is a free flap. This type of ... NCI fact sheet Mammograms . What are some new developments in breast reconstruction after mastectomy? Oncoplastic surgery. In ...
Prairie Reconstruction Initiative
US Fish and Wildlife Service, Department of the Interior — The purpose of the Prairie Reconstruction Initiative Advisory Team (PRIAT) is to identify and take steps to resolve uncertainties in the process of prairie...
... work together. Head and neck surgeons also perform craniofacial reconstruction operations. The surgery is done while you are deep asleep and pain-free (under general anesthesia ). The surgery may take ...
Reconstructions of eyelid defects
Nirmala Subramanian
2011-01-01
Eyelids are the protective mechanism of the eyes. The upper and lower eyelids have been formed by Nature for their specific functions. Eyelid defects are encountered in congenital anomalies, in trauma, and after excision of neoplasms. Reconstruction should be based on both functional and cosmetic aspects, and knowledge of the basic anatomy of the lids is a must. There are different techniques for reconstructing the upper eyelid, the lower eyelid, and the medial and lateral canthal areas; many times, the defects involve more than one area. For reconstruction of the lid, the lining should be similar to the conjunctiva, with a cover of skin and a middle layer to give firmness and support. It is important to understand the availability of various tissues for reconstruction. One layer should have the vascularity to support the other layer, which can be a graft. Proper planning and execution are very important.
Dydak, F; Nefedov, Y; Wotschack, J; Zhemchugov, A
2004-01-01
For a bias-free momentum measurement of TPC tracks, the correct determination of cluster positions is mandatory. We argue in particular that (i) the reconstruction of the entire longitudinal signal shape in view of longitudinal diffusion, electronic pulse shaping, and track inclination is important both for the polar angle reconstruction and for optimum rφ resolution; and that (ii) self-crosstalk of pad signals calls for special measures for the reconstruction of the z coordinate. The problem of 'shadow clusters' is resolved. Algorithms are presented for accepting clusters as 'good' clusters, and for the reconstruction of the rφ and z cluster coordinates, including provisions for 'bad' pads and pads next to sector boundaries, respectively.
Prairie Reconstruction Initiative Project
US Fish and Wildlife Service, Department of the Interior — The purpose of the Prairie Reconstruction Initiative Advisory Team (PRIAT) is to identify and take steps to resolve uncertainties in the process of prairie...
Permutationally invariant state reconstruction
Moroder, Tobias; Toth, Geza; Schwemmer, Christian; Niggebaum, Alexander; Gaile, Stefanie; Gühne, Otfried; Weinfurter, Harald
2012-01-01
Feasible tomography schemes for large particle numbers must possess, besides an appropriate data acquisition protocol, also an efficient way to reconstruct the density operator from the observed finite data set. Since state reconstruction typically requires the solution of a non-linear large-scale optimization problem, this is a major challenge in the design of scalable tomography schemes. Here we present an efficient state reconstruction scheme for permutationally invariant quantum state tomography. It works for all common state-of-the-art reconstruction principles, including, in particular, maximum likelihood and least squares methods, which are the preferred choices in today's experiments. This high efficiency is achieved by greatly reducing the dimensionality of the problem employing a particular representation of permutationally invariant states known from spin coupling combined with convex optimization, which has clear advantages regarding speed, control and accuracy in comparison to commonly employed n...
The evolving breast reconstruction
Thomsen, Jørn Bo; Gunnarsson, Gudjon Leifur
2014-01-01
The aim of this editorial is to give an update on the use of the propeller thoracodorsal artery perforator flap (TAP/TDAP-flap) within the field of breast reconstruction. The TAP-flap can be dissected by a combined use of a monopolar cautery and a scalpel; microsurgical instruments are generally not needed. The propeller TAP-flap can be designed in different ways, three of which have been published: (I) an oblique upwards design; (II) a horizontal design; (III) an oblique downward design. The latissimus dorsi-flap is a good and reliable option for breast reconstruction, but has been criticized ... for oncoplastic and reconstructive breast surgery and will certainly become an invaluable addition to breast reconstructive methods....
[Untitled]
2010-01-01
The earthquake-hit Yushu shifts its focus from rescuing survivors to post-quake reconstruction The first phase of earthquake relief, in which rescuing lives was the priority, finished 12 days after a 7.1-magnitude earthquake struck the Tibetan Autonomous Prefecture of Yushu in northwest China’s Qinghai Province on April 14, and reconstruction of the area is now ready to begin.
Holan, Scott H.; Viator, John A.
2008-06-01
Photoacoustic image reconstruction may involve hundreds of point measurements, each of which contributes unique information about the subsurface absorbing structures under study. For backprojection imaging, two or more point measurements of photoacoustic waves induced by irradiating a biological sample with laser light are used to produce an image of the acoustic source. Each of these measurements must undergo some signal processing, such as denoising or system deconvolution. In order to process the numerous signals, we have developed an automated wavelet algorithm for denoising signals. We appeal to the discrete wavelet transform for denoising photoacoustic signals generated in a dilute melanoma cell suspension and in thermally coagulated blood. We used 5, 9, 45 and 270 melanoma cells in the laser beam path as test concentrations. For the burn phantom, we used coagulated blood in a 1.6-mm silicone tube submerged in Intralipid. Although these two targets were chosen as typical applications for photoacoustic detection and imaging, they are of independent interest. The denoising employs level-independent universal thresholding. In order to accommodate non-radix-2 signals, we considered a maximal overlap discrete wavelet transform (MODWT). For the lower melanoma cell concentrations, as the signal-to-noise ratio approached 1, denoising allowed better peak finding. For coagulated blood, the signals were denoised to yield a clean photoacoustic signal, resulting in an improvement of 22% in the reconstructed image. The entire signal processing technique was automated so that minimal user intervention was needed to reconstruct the images. Such an algorithm may be used for image reconstruction and signal extraction for applications such as burn depth imaging, depth profiling of vascular lesions in skin and the detection of single cancer cells in blood samples.
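As a simplified stand-in for the level-independent universal thresholding described above (using a one-level Haar transform rather than the paper's MODWT, and synthetic data rather than photoacoustic signals), a denoising step might look like this:

```python
import numpy as np

def haar_denoise(x):
    """One-level Haar wavelet denoising with the level-independent
    universal threshold; a simplified sketch, not the paper's MODWT."""
    x = np.asarray(x, float)                    # length must be even
    a = (x[0::2] + x[1::2]) / np.sqrt(2)        # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)        # detail coefficients
    sigma = np.median(np.abs(d)) / 0.6745       # robust noise estimate (MAD)
    thr = sigma * np.sqrt(2 * np.log(x.size))   # universal threshold
    d = np.sign(d) * np.maximum(np.abs(d) - thr, 0.0)  # soft thresholding
    y = np.empty_like(x)                        # inverse Haar transform
    y[0::2] = (a + d) / np.sqrt(2)
    y[1::2] = (a - d) / np.sqrt(2)
    return y

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 256)
clean = np.sin(2 * np.pi * 5 * t)               # synthetic "signal"
noisy = clean + 0.3 * rng.normal(size=t.size)
denoised = haar_denoise(noisy)
```

The MODWT used in the paper removes the radix-2 length restriction and is shift-invariant, but the thresholding logic is the same.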
Gravel, Paul; Verhaeghe, Jeroen; Reader, Andrew J.
2013-01-01
This work explores the feasibility and impact of including both the motion correction and the image registration transformation parameters from positron emission tomography (PET) image space to magnetic resonance (MR), or stereotaxic, image space within the system matrix of PET image reconstruction. This approach is motivated by the fields of neuroscience and psychiatry, where PET is used to investigate differences in activation patterns between different groups of participants, requiring all images to be registered to a common spatial atlas. Currently, image registration is performed after image reconstruction which introduces interpolation effects into the final image. Furthermore, motion correction (also requiring registration) introduces a further level of interpolation, and the overall result of these operations can lead to resolution degradation and possibly artifacts. It is important to note that performing such operations on a post-reconstruction basis means, strictly speaking, that the final images are not ones which maximize the desired objective function (e.g. maximum likelihood (ML), or maximum a posteriori reconstruction (MAP)). To correctly seek parameter estimates in the desired spatial atlas which are in accordance with the chosen reconstruction objective function, it is necessary to include the transformation parameters for both motion correction and registration within the system modeling stage of image reconstruction. Such an approach not only respects the statistically chosen objective function (e.g. ML or MAP), but furthermore should serve to reduce the interpolation effects. To evaluate the proposed method, this work investigates registration (including motion correction) using 2D and 3D simulations based on the high resolution research tomograph (HRRT) PET scanner geometry, with and without resolution modeling, using the ML expectation maximization (MLEM) reconstruction algorithm. The quality of reconstruction was assessed using bias
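The MLEM update underlying the proposed reconstruction can be sketched on a toy system; in the paper the system matrix would additionally fold in the motion-correction and registration transformations, which are omitted in this illustrative example:

```python
import numpy as np

# Toy 2-pixel, 3-measurement system; the system matrix P is illustrative.
P = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.5, 0.5]])        # system matrix: pixels -> measurements
x_true = np.array([4.0, 2.0])
y = P @ x_true                    # noiseless "measured" data

x = np.ones(2)                    # nonnegative initial image
sens = P.sum(axis=0)              # sensitivity image (column sums of P)
for _ in range(200):
    ratio = y / (P @ x)           # measured over forward-projected data
    x = x * (P.T @ ratio) / sens  # multiplicative MLEM update
```

With consistent noiseless data and a full-column-rank system matrix, the iteration converges towards the true image; folding registration into `P`, as proposed above, keeps the final atlas-space image consistent with this ML objective instead of interpolating after reconstruction.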
An Ecological Study of Anterior Cruciate Ligament Reconstruction, Part 2
McGrath, Timothy M.; Waddington, Gordon; Scarvell, Jennie M.; Ball, Nick; Creer, Rob; Woods, Kevin; Smith, Damian; Adams, Roger
2017-01-01
GRF during a step-down task. When the performance tests were pooled together, mean postoperative improvements of 24% were observed from preoperative levels to 24 weeks within the surgical cohort. For each performance test, the preoperative level of function strongly correlated with performance on the same test at 24 weeks. Discussion: The results of this study indicate that clinicians might seek to prioritize these tests, and the rehabilitation themes they imply, when seeking to maximize postoperative ACL activity outcomes. The observed association between preoperative performance tests and both postoperative performance and return-to-sport outcomes within this study highlights the potential value of preoperative conditioning before undergoing ACL reconstruction. Future research should examine absolute predictive criterion thresholds for functional performance-based tests and reinjury risk reduction after ACL reconstruction. PMID:28255567
Camila Coelho Greco
2010-04-01
evaluation of these athletes is the maximal lactate steady state (MLSS), which is usually determined by a continuous protocol. However, the interruptions during intermittent exercise may alter the metabolic conditions of the exercise. The objective of this study was to compare the intensity at MLSS determined by continuous (MLSSc) and intermittent (MLSSi) protocols in athletes with different aerobic performance levels. Twelve male swimmers (22 ± 8 years, 69.9 ± 7.6 kg and 1.76 ± 0.07 m) and eight male triathletes (22 ± 9 years, 69.5 ± 10.4 kg and 1.76 ± 0.13 m) performed the following tests on different days in a 25 m swimming pool: (1) a 400 m performance test (v400); (2) 2 to 4 repetitions of 30 min duration at different intensities to determine MLSSc; and (3) 2 to 4 repetitions of 12 x 150 s with an interval of 30 s (5:1) at different intensities to determine MLSSi. The swimmers showed higher v400 (1.38 ± 0.05 vs. 1.26 ± 0.06 m·s⁻¹), MLSSc (1.23 ± 0.05 vs. 1.08 ± 0.04 m·s⁻¹) and MLSSi (1.26 ± 0.05 vs. 1.11 ± 0.05 m·s⁻¹) values than the triathletes. However, the percentage difference between MLSSc and MLSSi was statistically similar between groups (3%). There was no difference between blood lactate concentration at MLSSc and MLSSi in either group. Based on these results, it can be concluded that the intermittent exercise used enables an increase in the exercise intensity at MLSS, without change in lactate concentration, regardless of aerobic performance level.
Kluiving, S.J.; Lascaris, M.A.; de Kraker, A.M.J.; Renes, H.; Borger, G.J.; Soetens, S.A.
2013-01-01
This paper demonstrates that methodologies from various disciplines can be utilised to explain the coastal development of the southern North Sea during the last 3000 years. The potential and uses of archaeological data are tested against the applicability of delivering data for reconstructing sea level...