Direct maximum parsimony phylogeny reconstruction from genotype data
Ravi, R.
2007-12-01
Background: Maximum parsimony phylogenetic tree reconstruction from genetic variation data is a fundamental problem in computational genetics with many practical applications in population genetics, whole genome analysis, and the search for genetic predictors of disease. Efficient methods are available for reconstructing maximum parsimony trees from haplotype data, but such data are difficult to determine directly for autosomal DNA. Data are more commonly available in the form of genotypes, which consist of conflated combinations of pairs of haplotypes from homologous chromosomes. Currently, there are no general algorithms for the direct reconstruction of maximum parsimony phylogenies from genotype data, so phylogenetic applications for autosomal data must rely on other methods to first computationally infer haplotypes from genotypes. Results: In this work, we develop the first practical method for computing maximum parsimony phylogenies directly from genotype data. We show that the standard practice of first inferring haplotypes from genotypes and then reconstructing a phylogeny on the haplotypes often substantially overestimates phylogeny size. As an immediate application, our method can be used to determine the minimum number of mutations required to explain a given set of observed genotypes. Conclusion: Phylogeny reconstruction directly from unphased data is computationally feasible for moderate-sized problem instances and can lead to substantially more accurate tree size inferences than the standard practice of treating phasing and phylogeny construction as two separate analysis stages. The difference between the approaches is particularly important for downstream applications that require a lower bound on the number of mutations that the genetic region has undergone.
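To make the data model concrete, here is a minimal Python sketch (ours, not the paper's) of how a genotype conflates two haplotypes; the 0/1/2 site encoding is the usual convention in this literature:

```python
# Illustration: two binary haplotypes conflate into an unphased genotype.
# Per site, the genotype records 0 (homozygous 0), 1 (homozygous 1), or
# 2 (heterozygous -- the phase information is lost).
def genotype(h1, h2):
    """Conflate two binary haplotypes into an unphased genotype."""
    return [a if a == b else 2 for a, b in zip(h1, h2)]

h1 = [0, 1, 1, 0]
h2 = [0, 0, 1, 1]
print(genotype(h1, h2))  # [0, 2, 1, 2] -- sites 2 and 4 admit two phasings
```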
Optimized ancestral state reconstruction using Sankoff parsimony
Valiente, Gabriel
2009-02-01
Background: Parsimony methods are widely used in molecular evolution to estimate the most plausible phylogeny for a set of characters. Sankoff parsimony determines the minimum number of changes required in a given phylogeny when a cost is associated with transitions between character states. Although optimizations exist to reduce the computation in the number of taxa, the original algorithm takes time O(n²) in the number of states, making it impractical for large values of n. Results: In this study we introduce an optimization of Sankoff parsimony for the reconstruction of ancestral states when ultrametric or additive cost matrices are used. We analyzed its performance for randomly generated matrices, the Jukes-Cantor and Kimura two-parameter models of DNA evolution, and the reconstruction of elongation factor-1α and ancestral metabolic states of a group of eukaryotes, showing that in all cases the execution time is significantly less than with the original implementation. Conclusion: The algorithms presented here provide a fast computation of Sankoff parsimony for a given phylogeny. Problems where the number of states is large, such as the reconstruction of ancestral metabolism, are particularly well suited to this optimization. Since we reduce the computation required to calculate the parsimony cost of a single tree, our method can be combined with optimizations in the number of taxa that aim at finding the most parsimonious tree.
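For readers new to the method, the following is a minimal Python sketch (ours) of the unoptimized Sankoff recursion that the paper accelerates; the tree encoding and cost matrix are illustrative:

```python
# Baseline Sankoff parsimony on a binary tree given as nested tuples.
# Leaves carry observed state indices; 'cost' is the transition cost matrix.
import math

def sankoff(node, cost, n_states):
    """Return S, where S[i] = min cost of this subtree if the node has state i."""
    if isinstance(node, int):                      # leaf with observed state
        return [0.0 if i == node else math.inf for i in range(n_states)]
    left, right = (sankoff(child, cost, n_states) for child in node)
    return [sum(min(c[j] + cost[i][j] for j in range(n_states))
                for c in (left, right))
            for i in range(n_states)]

# Example: 4 DNA states with unit cost for any substitution (Fitch-like).
cost = [[0 if i == j else 1 for j in range(4)] for i in range(4)]
tree = ((0, 1), (0, 2))                            # ((A,C),(A,G)) as indices
print(min(sankoff(tree, cost, 4)))                 # minimum changes: 2
```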
The effect of natural selection on the performance of maximum parsimony
Ofria, Charles
2007-06-01
Background: Maximum parsimony is one of the most commonly used and extensively studied phylogeny reconstruction methods. While current evaluation methodologies such as computer simulations provide insight into how well maximum parsimony reconstructs phylogenies, they tell us little about how well maximum parsimony performs on taxa drawn from populations of organisms that evolved subject to natural selection in addition to the random factors of drift and mutation. It is clear that natural selection has a significant impact on among-site rate variation (ASRV) and the rate of accepted substitutions; that is, accepted mutations do not occur with uniform probability along the genome, and some substitutions are more likely to occur than others. However, little is known about how ASRV and non-uniform character substitutions impact the performance of reconstruction methods such as maximum parsimony. To gain insight into these issues, we study how well maximum parsimony performs with data generated by Avida, a digital life platform where populations of digital organisms evolve subject to natural selective pressures. Results: We first identify conditions where natural selection does affect maximum parsimony's reconstruction accuracy. In general, as we increase the probability that a significant adaptation will occur in an intermediate ancestor, the performance of maximum parsimony improves. In fact, maximum parsimony can correctly reconstruct small four-taxon trees on data that have received surprisingly many mutations if the intermediate ancestor has received a significant adaptation. We demonstrate that this improved performance of maximum parsimony is attributable more to ASRV than to non-uniform character substitutions. Conclusion: Maximum parsimony, as well as most other phylogeny reconstruction methods, may perform significantly better on actual biological data than is currently suggested by computer simulation studies because of natural selection.
PTree: pattern-based, stochastic search for maximum parsimony phylogenies
Ivan Gregor
2013-06-01
Phylogenetic reconstruction is vital to analyzing the evolutionary relationships of genes within and across populations of different species. With next-generation sequencing technologies now producing sets comprising thousands of sequences, robust identification of the tree topology that is optimal according to standard criteria such as maximum parsimony, maximum likelihood, or posterior probability is a computationally very demanding task. Here, we describe a stochastic search method for a maximum parsimony tree, implemented in a software package we named PTree. Our method is based on a new pattern-based technique that enables us to infer intermediate sequences efficiently, where the incorporation of these sequences in the current tree topology yields a phylogenetic tree with a lower cost. Evaluation across multiple datasets showed that our method is comparable, in terms of topological accuracy and runtime, to the algorithms implemented in PAUP* and TNT, which are widely used by the bioinformatics community. We show that our method can process large-scale datasets of 1,000–8,000 sequences. We believe that our novel pattern-based method enriches the current set of tools and methods for phylogenetic tree inference. The software is available at: http://algbio.cs.uni-duesseldorf.de/webapps/wa-download/.
Mixed integer linear programming for maximum-parsimony phylogeny inference.
Sridhar, Srinath; Lam, Fumei; Blelloch, Guy E; Ravi, R; Schwartz, Russell
2008-01-01
Reconstruction of phylogenetic trees is a fundamental problem in computational biology. While excellent heuristic methods are available for many variants of this problem, new advances in phylogeny inference will be required if we are to continue making effective use of the rapidly growing stores of variation data now being gathered. In this paper, we present two integer linear programming (ILP) formulations to find the most parsimonious phylogenetic tree from a set of binary variation data. One method uses a flow-based formulation that can produce exponential numbers of variables and constraints in the worst case. The method has, however, proven extremely efficient in practice on datasets that are well beyond the reach of the available provably efficient methods, solving several large mtDNA and Y-chromosome instances within a few seconds and giving provably optimal results in times competitive with fast heuristics that cannot guarantee optimality. An alternative formulation establishes that the problem can be solved with a polynomial-sized ILP. We further present a web server developed based on the exponential-sized ILP that performs fast maximum parsimony inferences and serves as a front end to a database of precomputed phylogenies spanning the human genome.
Maximum Parsimony and the Skewness Test: A Simulation Study of the Limits of Applicability
Määttä, Jussi; Roos, Teemu
2016-01-01
The maximum parsimony (MP) method for inferring phylogenies is widely used, but little is known about its limitations in non-asymptotic situations. This study employs large-scale computations with simulated phylogenetic data to estimate the probability that MP succeeds in finding the true phylogeny for up to twelve taxa and 256 characters. The set of candidate phylogenies is taken to be the unrooted binary trees; for each simulated data set, the tree lengths of all (2n − 5)!! candidates are computed to evaluate quantities related to the performance of MP, such as the probability of finding the true phylogeny, the probability that the tree with the shortest length is unique, the probability that the true phylogeny has the shortest tree length, and the expected inverse of the number of trees sharing the shortest length. The tree length distributions are also used to evaluate and extend the skewness test of Hillis for distinguishing between random and phylogenetic data. The results indicate, for example, that the critical point after which MP achieves a success probability of at least 0.9 is roughly around 128 characters. The skewness test is found to perform well on simulated data, and the study extends its scope to up to twelve taxa. PMID:27035667
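The (2n − 5)!! count of candidate trees that makes this exhaustive evaluation expensive can be checked with a few lines of Python (our snippet):

```python
# Number of unrooted binary trees on n labeled taxa: (2n-5)!!,
# the quantity exhaustively enumerated in the study (up to n = 12).
def num_unrooted_trees(n):
    """(2n-5)!! = 1 * 3 * 5 * ... * (2n-5), for n >= 3."""
    count = 1
    for k in range(3, 2 * n - 4, 2):
        count *= k
    return count

print(num_unrooted_trees(4))   # 3
print(num_unrooted_trees(12))  # 654729075 candidates scored per data set
```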
Haseeb A. Khan
2008-01-01
This investigation aimed to compare the inference of antelope phylogenies resulting from the 16S rRNA, cytochrome-b (cyt-b), and d-loop segments of mitochondrial DNA using three different computational models: Bayesian (BA), maximum parsimony (MP), and the unweighted pair group method with arithmetic mean (UPGMA). The respective nucleotide sequences of three Oryx species (Oryx leucoryx, Oryx dammah, and Oryx gazella) and an outgroup (Addax nasomaculatus) were aligned and subjected to the BA, MP, and UPGMA models to compare the topologies of the resulting phylogenetic trees. The 16S rRNA region possessed the highest frequency of conserved sequences (97.65%), followed by cyt-b (94.22%) and d-loop (87.29%). Comparing the four taxa, there were few transitions (2.35%) and no transversions in 16S rRNA, as compared to cyt-b (5.61% transitions and 0.17% transversions) and d-loop (11.57% transitions and 1.14% transversions). All three mitochondrial segments clearly differentiated the genus Addax from Oryx using the BA or UPGMA models. The topologies of all the gamma-corrected Bayesian trees were identical irrespective of the marker type. The UPGMA trees resulting from 16S rRNA and d-loop sequences were also identical to the Bayesian trees (Oryx dammah grouped with Oryx leucoryx), except that the UPGMA tree based on cyt-b showed a slightly different phylogeny (Oryx dammah grouped with Oryx gazella) with low bootstrap support. However, the MP model failed to differentiate the genus Addax from Oryx. These findings demonstrate the efficiency and robustness of the BA and UPGMA methods for phylogenetic analysis of antelopes using mitochondrial markers.
State space parsimonious reconstruction of attractor produced by an electronic oscillator
Aguirre, Luis A.; Freitas, Ubiratan S.; Letellier, Christophe; Sceller, Lois Le; Maquet, Jean
2000-02-01
This work discusses the reconstruction, from a set of real data, of a chaotic attractor produced by a well-known electronic oscillator, Chua's circuit. The mathematical representation used is a nonlinear differential equation of the polynomial type. One of the contributions of the present study is that structure selection techniques have been applied to help determine the regressors in the model. Models of the chaotic attractor obtained with and without structure selection were compared. The main differences between structure-selected models and complete-structure models are: i) the former are more parsimonious than the latter; ii) fixed-point symmetry is guaranteed for the former; iii) for structure-selected models a trivial fixed point is also guaranteed; and iv) the former produce attractors that are topologically closer to the original attractor than those produced by the complete-structure models.
Holden, Clare Janaki
2002-04-22
Linguistic divergence occurs after speech communities divide, in a process similar to speciation among isolated biological populations. The resulting languages are hierarchically related, like genes or species. Phylogenetic methods developed in evolutionary biology can thus be used to infer language trees, with the caveat that 'borrowing' of linguistic elements between languages also occurs, to some degree. Maximum-parsimony trees for 75 Bantu and Bantoid African languages were constructed using 92 items of basic vocabulary. The level of character fit on the trees was high (consistency index was 0.65), indicating that a tree model fits Bantu language evolution well, at least for the basic vocabulary. The Bantu language tree reflects the spread of farming across this part of sub-Saharan Africa between ca. 3000 BC and AD 500. Modern Bantu subgroups, defined by clades on parsimony trees, mirror the earliest farming traditions both geographically and temporally. This suggests that the major subgroups of modern Bantu stem from the Neolithic and Early Iron Age, with little subsequent movement by speech communities.
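For reference, the consistency index cited above has the standard definition (our notation, not from the paper):

```latex
% Ensemble consistency index (CI): m_i is the minimum number of changes
% character i can require on any tree, s_i the number it requires on the
% tree in question.
\[
  \mathrm{CI} \;=\; \frac{\sum_i m_i}{\sum_i s_i}
\]
% CI = 1 means no homoplasy; the reported CI of 0.65 implies the vocabulary
% characters require roughly 1.5 times the theoretical minimum of changes.
```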
Smith, J F
2000-06-01
Generic relationships within Episcieae were assessed using ITS and ndhF sequences. Previous analyses of this tribe have focused only on ndhF data and have excluded two genera, Rhoogeton and Oerstedina, which are included in this analysis. Data were analyzed using both parsimony and maximum-likelihood methods. Results from partition homogeneity tests imply that the two data sets are significantly incongruent, but when Rhoogeton is removed from the analysis, the data sets are not significantly different. The combined data sets reveal stronger support for relationships within the tribe, with the exception of the position of Rhoogeton. Relationships that were poorly resolved or unresolved with ndhF data alone are more fully resolved with the addition of ITS data. These resolved clades include the monophyly of the genera Columnea and Paradrymonia and the sister-group relationship of Nematanthus and Codonanthe. A closer affinity between Neomortonia nummularia and N. rosea than has previously been seen is apparent from these data, although these two species are not monophyletic in any tree. Lastly, Capanea appears to be a member of Gloxinieae, although C. grandiflora remains within Episcieae. Evolution of fruit type, epiphytic habit, and presence of tubers is re-examined with the new data presented here.
Automatic maximum entropy spectral reconstruction in NMR.
Mobli, Mehdi; Maciejewski, Mark W; Gryk, Michael R; Hoch, Jeffrey C
2007-10-01
Developments in superconducting magnets, cryogenic probes, isotope labeling strategies, and sophisticated pulse sequences together have enabled the application, in principle, of high-resolution NMR spectroscopy to biomolecular systems approaching 1 megadalton. In practice, however, conventional approaches to NMR that utilize the fast Fourier transform, which require data collected at uniform time intervals, result in prohibitively lengthy data collection times in order to achieve the full resolution afforded by high field magnets. A variety of approaches that involve nonuniform sampling have been proposed, each utilizing a non-Fourier method of spectrum analysis. A very general non-Fourier method that is capable of utilizing data collected using any of the proposed nonuniform sampling strategies is maximum entropy reconstruction. A limiting factor in the adoption of maximum entropy reconstruction in NMR has been the need to specify non-intuitive parameters. Here we describe a fully automated system for maximum entropy reconstruction that requires no user-specified parameters. A web-accessible script generator provides the user interface to the system.
Peyronie's Reconstruction for Maximum Length and Girth Gain: Geometrical Principles
Paulo H. Egydio
2008-01-01
Peyronie's disease has been associated with penile shortening and some degree of erectile dysfunction. Surgical reconstruction should aim to restore a functional penis, that is, to straighten the penis with rigidity sufficient for sexual intercourse. The procedure should be discussed preoperatively in terms of length and girth reconstruction in order to improve patient satisfaction. Tunical reconstruction for maximum penile length and girth restoration should be based on the maximum possible length of the dissected neurovascular bundle and the application of geometrical principles to define the precise site and size of the tunical incision and grafting procedure. As both penile straightening and rigidity are required to achieve complete functional restoration, and 20 to 54% of patients experience associated erectile dysfunction, penile straightening alone may not be enough. Therefore, phosphodiesterase inhibitors, self-injection, or a penile prosthesis may need to be added in some cases.
Beyond maximum entropy: Fractal Pixon-based image reconstruction
Puetter, Richard C.; Pina, R. K.
1994-01-01
We have developed a new Bayesian image reconstruction method that has been shown to be superior to the best implementations of other competing methods, including Goodness-of-Fit methods such as Least-Squares fitting and Lucy-Richardson reconstruction, as well as Maximum Entropy (ME) methods such as those embodied in the MEMSYS algorithms. Our new method is based on the concept of the pixon, the fundamental, indivisible unit of picture information. Use of the pixon concept provides an improved image model, resulting in an image prior which is superior to that of standard ME. Our past work has shown how uniform information content pixons can be used to develop a 'Super-ME' method in which entropy is maximized exactly. Recently, however, we have developed a superior pixon basis for the image, the Fractal Pixon Basis (FPB). Unlike the Uniform Pixon Basis (UPB) of our 'Super-ME' method, the FPB basis is selected by employing fractal dimensional concepts to assess the inherent structure in the image. The Fractal Pixon Basis results in the best image reconstructions to date, superior to both UPB and the best ME reconstructions. In this paper, we review the theory of the UPB and FPB pixon and apply our methodology to the reconstruction of far-infrared imaging of the galaxy M51. The results of our reconstruction are compared to published reconstructions of the same data using the Lucy-Richardson algorithm, the Maximum Correlation Method developed at IPAC, and the MEMSYS ME algorithms. The results show that our reconstructed image has a spatial resolution a factor of two better than best previous methods (and a factor of 20 finer than the width of the point response function), and detects sources two orders of magnitude fainter than other methods.
Fast Forward Maximum entropy reconstruction of sparsely sampled data.
Balsgart, Nicholas M; Vosegaard, Thomas
2012-10-01
We present an analytical algorithm using fast Fourier transforms (FTs) for deriving the gradient needed as part of the iterative reconstruction of sparsely sampled datasets using the forward maximum entropy reconstruction (FM) procedure of Hyberts and Wagner [J. Am. Chem. Soc. 129 (2007) 5108]. The major drawback of the original algorithm is that it required one FT and one evaluation of the entropy per missing datapoint to establish the gradient. In the present study, we demonstrate that the entire gradient may be obtained using only two FTs and one evaluation of the entropy derivative, thus achieving impressive time savings compared to the original procedure. An example: a 2D dataset with sparse sampling of the indirect dimension, with only 75 out of 512 complex points sampled (15% sampling), would lack (512−75)×2 = 874 points per ν2 slice. The original FM algorithm would require 874 FTs and entropy function evaluations to set up the gradient, while the present algorithm, requiring only two FTs, is ∼450 times faster in this case. This allows the computational time to be reduced from several hours to less than a minute. Even more impressive time savings may be achieved with 2D reconstructions of 3D datasets, where reconstructions that required days of CPU time on high-performance computing clusters with the original algorithm now require only a few minutes of calculation on a regular laptop computer.
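The bookkeeping behind the two-FT gradient can be sketched as follows; this toy uses a simplified spectral entropy rather than the Hyberts-Wagner functional and glosses over conjugation conventions, so it illustrates the cost structure rather than the published algorithm:

```python
# Schematic two-FT gradient: one FT to the frequency domain, one pointwise
# entropy-derivative pass, one (adjoint) FT back -- instead of one FT per
# missing point. Entropy here is the toy S = -sum |F_k| log |F_k|.
import numpy as np

def entropy_gradient(x, missing):
    """Schematic gradient of S(FFT(x)) w.r.t. the missing time-domain points."""
    F = np.fft.fft(x)                            # FT no. 1
    mag = np.abs(F) + 1e-12                      # avoid log(0)
    dS_dF = -(np.log(mag) + 1.0) * (F / mag)     # single entropy-derivative pass
    grad = np.fft.ifft(dS_dF) * len(x)           # FT no. 2 (adjoint, schematic)
    return grad[missing]                         # only missing points are free

rng = np.random.default_rng(1)
x = rng.standard_normal(512) + 1j * rng.standard_normal(512)
missing = np.arange(75, 512)                     # 75 of 512 points were sampled
print(entropy_gradient(x, missing).shape)        # (437,): two FTs, not 874
```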
Superfast maximum-likelihood reconstruction for quantum tomography
Shang, Jiangwei; Zhang, Zhengyun; Ng, Hui Khoon
2017-06-01
Conventional methods for computing maximum-likelihood estimators (MLE) often converge slowly in practical situations, leading to a search for simplifying methods that rely on additional assumptions for their validity. In this work, we provide a fast and reliable algorithm for maximum-likelihood reconstruction that avoids this slow convergence. Our method utilizes a state-of-the-art convex optimization scheme, an accelerated projected-gradient method, which allows one to accommodate the quantum nature of the problem in a different way than in the standard methods. We demonstrate the power of our approach by comparing its performance with other algorithms for n-qubit state tomography. In particular, an eight-qubit situation that purportedly took weeks of computation time in 2005 can now be completed in under a minute for a single set of data, with far higher accuracy than previously possible. This refutes the common claim that MLE reconstruction is slow and reduces the need for alternative methods that often come with difficult-to-verify assumptions. In fact, recent methods assuming Gaussian statistics or relying on compressed sensing ideas are demonstrably inapplicable for the situation under consideration here. Our algorithm can be applied to general optimization problems over the quantum state space; the philosophy of projected gradients can further be utilized for optimization contexts with general constraints.
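The workhorse of any projected-gradient scheme on the state space is the projection onto the set of density matrices; a standard eigenvalue-simplex construction (our sketch, not the authors' code) looks like this:

```python
# Euclidean projection of a Hermitian matrix onto the quantum state space
# {rho >= 0, Tr rho = 1}: diagonalize, project eigenvalues onto the
# probability simplex (standard algorithm), and rebuild the matrix.
import numpy as np

def project_to_states(H):
    """Project a Hermitian matrix onto the set of density matrices."""
    w, V = np.linalg.eigh(H)
    u = np.sort(w)[::-1]                        # eigenvalues, descending
    css = np.cumsum(u) - 1.0
    j = np.arange(1, len(u) + 1)
    k = np.nonzero(u - css / j > 0)[0][-1]      # largest feasible index
    w_proj = np.maximum(w - css[k] / (k + 1), 0.0)
    return (V * w_proj) @ V.conj().T            # V diag(w_proj) V^dagger

# Each accelerated-gradient step moves along the likelihood gradient,
# then this projection keeps the iterate physical.
A = np.random.randn(4, 4) + 1j * np.random.randn(4, 4)
rho = project_to_states((A + A.conj().T) / 2)
print(round(np.trace(rho).real, 6), np.linalg.eigvalsh(rho).min() >= -1e-12)
```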
Pure parsimony xor haplotyping.
Bonizzoni, Paola; Della Vedova, Gianluca; Dondi, Riccardo; Pirola, Yuri; Rizzi, Romeo
2010-01-01
The haplotype resolution from xor-genotype data has recently been formulated as a new model for genetic studies. Xor-genotype data are a cheaply obtainable type of data that distinguish heterozygous from homozygous sites without identifying the homozygous alleles. In this paper, we propose a formulation based on a well-known model used in haplotype inference: pure parsimony. We exhibit exact solutions of the problem by providing polynomial-time algorithms for some restricted cases and a fixed-parameter algorithm for the general case. These results are based on some interesting combinatorial properties of a graph representation of the solutions. Furthermore, we show that the problem has a polynomial-time k-approximation, where k is the maximum number of xor-genotypes containing a given single nucleotide polymorphism (SNP). Finally, we propose a heuristic and produce an experimental analysis showing that it scales to real-world large instances taken from the HapMap project.
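A tiny example (ours) of the xor-genotype data model:

```python
# A xor-genotype records, per site, only whether the two haplotypes differ:
# it flags heterozygous sites without revealing the homozygous allele.
h1 = [0, 1, 1, 0]
h2 = [0, 0, 1, 1]
xor_genotype = [a ^ b for a, b in zip(h1, h2)]
print(xor_genotype)  # [0, 1, 0, 1]: sites 2 and 4 are heterozygous; the
                     # shared values at sites 1 and 3 are not recorded
```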
Maximum entropy reconstruction of spin densities involving non uniform prior
Schweizer, J.; Ressouche, E. [DRFMC/SPSMS/MDN CEA-Grenoble (France); Papoular, R.J. [CEA-Saclay, Gif sur Yvette (France). Lab. Leon Brillouin; Tasset, F. [Inst. Laue Langevin, Grenoble (France); Zheludev, A.I. [Brookhaven National Lab., Upton, NY (United States). Physics Dept.
1997-09-01
Diffraction experiments give microscopic information on structures in crystals. A method that uses the concept of maximum entropy (MaxEnt) appears to be a formidable improvement in the treatment of diffraction data. This method is based on a Bayesian approach: among all the maps compatible with the experimental data, it selects the one which has the highest prior (intrinsic) probability. Considering that all the points of the map are equally probable, this probability (flat prior) is expressed via the Boltzmann entropy of the distribution. This method has been used for the reconstruction of charge densities from X-ray data, for maps of nuclear densities from unpolarized neutron data, as well as for distributions of spin density. The density maps obtained by this method, as compared to those resulting from the usual inverse Fourier transformation, are tremendously improved. In particular, any substantial deviation from the background is really contained in the data, as it costs entropy compared to a map that would ignore such features. However, in most cases, some knowledge exists about the distribution under investigation before the measurements are performed. It can range from simple information about the type of scattering electrons to an elaborate theoretical model. In these cases, the uniform prior, which considers all the different pixels as equally likely, is too weak a requirement and has to be replaced. In a rigorous Bayesian analysis, Skilling has shown that prior knowledge can be encoded into the maximum entropy formalism through a model m(r), via a new definition for the entropy given in this paper. In the absence of any data, the maximum of the entropy functional is reached for ρ(r) = m(r). Any substantial departure from the model observed in the final map is really contained in the data as, with the new definition, it costs entropy. This paper presents illustrations of model testing.
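The entropy relative to a prior model described here is the standard Skilling form (our notation):

```latex
% Skilling's entropy relative to a prior model m(r):
\[
  S[\rho] \;=\; \int \left[\, \rho(\mathbf{r}) - m(\mathbf{r})
      - \rho(\mathbf{r}) \ln\frac{\rho(\mathbf{r})}{m(\mathbf{r})} \,\right] d\mathbf{r} ,
\]
% maximized (S = 0) by rho(r) = m(r) in the absence of data, so any departure
% from the model that survives in the final map must be demanded by the data.
```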
Maximum likelihood pedigree reconstruction using integer linear programming.
Cussens, James; Bartlett, Mark; Jones, Elinor M; Sheehan, Nuala A
2013-01-01
Large population biobanks of unrelated individuals have been highly successful in detecting common genetic variants affecting diseases of public health concern. However, they lack the statistical power to detect more modest gene-gene and gene-environment interaction effects or the effects of rare variants for which related individuals are ideally required. In reality, most large population studies will undoubtedly contain sets of undeclared relatives, or pedigrees. Although a crude measure of relatedness might sometimes suffice, having a good estimate of the true pedigree would be much more informative if this could be obtained efficiently. Relatives are more likely to share longer haplotypes around disease susceptibility loci and are hence biologically more informative for rare variants than unrelated cases and controls. Distant relatives are arguably more useful for detecting variants with small effects because they are less likely to share masking environmental effects. Moreover, the identification of relatives enables appropriate adjustments of statistical analyses that typically assume unrelatedness. We propose to exploit an integer linear programming optimisation approach to pedigree learning, which is adapted to find valid pedigrees by imposing appropriate constraints. Our method is not restricted to small pedigrees and is guaranteed to return a maximum likelihood pedigree. With additional constraints, we can also search for multiple high-probability pedigrees and thus account for the inherent uncertainty in any particular pedigree reconstruction. The true pedigree is found very quickly by comparison with other methods when all individuals are observed. Extensions to more complex problems seem feasible.
Nonuniform sampling and maximum entropy reconstruction in multidimensional NMR.
Hoch, Jeffrey C; Maciejewski, Mark W; Mobli, Mehdi; Schuyler, Adam D; Stern, Alan S
2014-02-18
NMR spectroscopy is one of the most powerful and versatile analytic tools available to chemists. The discrete Fourier transform (DFT) played a seminal role in the development of modern NMR, including the multidimensional methods that are essential for characterizing complex biomolecules. However, it suffers from well-known limitations: chiefly the difficulty in obtaining high-resolution spectral estimates from short data records. Because the time required to perform an experiment is proportional to the number of data samples, this problem imposes a sampling burden for multidimensional NMR experiments. At high magnetic field, where spectral dispersion is greatest, the problem becomes particularly acute. Consequently multidimensional NMR experiments that rely on the DFT must either sacrifice resolution in order to be completed in reasonable time or use inordinate amounts of time to achieve the potential resolution afforded by high-field magnets. Maximum entropy (MaxEnt) reconstruction is a non-Fourier method of spectrum analysis that can provide high-resolution spectral estimates from short data records. It can also be used with nonuniformly sampled data sets. Since resolution is substantially determined by the largest evolution time sampled, nonuniform sampling enables high resolution while avoiding the need to uniformly sample at large numbers of evolution times. The Nyquist sampling theorem does not apply to nonuniformly sampled data, and artifacts that occur with the use of nonuniform sampling can be viewed as frequency-aliased signals. Strategies for suppressing nonuniform sampling artifacts include the careful design of the sampling scheme and special methods for computing the spectrum. Researchers now routinely report that they can complete an N-dimensional NMR experiment 3^(N−1) times faster (a 3D experiment in one ninth of the time). As a result, high-resolution three- and four-dimensional experiments that were prohibitively time consuming are now practical.
Real-time maximum a-posteriori image reconstruction for fluorescence microscopy
Jabbar, Anwar A.; Dilipkumar, Shilpa; C K, Rasmi; Rajan, K.; Mondal, Partha P.
2015-08-01
Rapid reconstruction of multidimensional images is crucial for enabling real-time 3D fluorescence imaging, and becomes a key factor for imaging rapidly occurring events in the cellular environment. To facilitate real-time imaging, we have developed a graphics processing unit (GPU) based real-time maximum a-posteriori (MAP) image reconstruction system. The parallel processing capability of the GPU device, which consists of a large number of tiny processing cores, and the adaptability of the image reconstruction algorithm to parallel processing (employing multiple independent computing modules called threads) result in high temporal resolution. Moreover, the proposed quadratic-potential-based MAP algorithm effectively deconvolves the images as well as suppresses the noise. The multi-node multi-threaded GPU and the Compute Unified Device Architecture (CUDA) efficiently execute the iterative image reconstruction algorithm, which is ≈200-fold faster (for large datasets) when compared to existing CPU-based systems.
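As an illustration of what a quadratic-potential MAP iteration can look like, here is a one-step-late (Green-style) update sketched in NumPy; this is our toy stand-in, not the paper's CUDA implementation, and the flat PSF and constants are placeholders:

```python
# One MAP-EM one-step-late update for Poisson data y, blur H (via FFT),
# and a quadratic smoothness prior (gradient taken up to constants).
import numpy as np

def map_osl_step(x, y, psf_fft, beta):
    """One one-step-late MAP update: EM ratio step + quadratic-prior division."""
    Hx = np.real(np.fft.ifft2(np.fft.fft2(x) * psf_fft))            # forward blur
    ratio = y / np.maximum(Hx, 1e-12)
    back = np.real(np.fft.ifft2(np.fft.fft2(ratio) * psf_fft.conj()))
    grad_U = x - (np.roll(x, 1, 0) + np.roll(x, -1, 0)              # quadratic
                  + np.roll(x, 1, 1) + np.roll(x, -1, 1)) / 4.0     # smoothness
    return x * back / (1.0 + beta * grad_U)                         # OSL division

y = np.maximum(np.random.poisson(50, (64, 64)).astype(float), 1.0)
psf_fft = np.fft.fft2(np.full((64, 64), 1.0 / 4096.0))              # toy flat PSF
x = y.copy()
for _ in range(10):
    x = map_osl_step(x, y, psf_fft, beta=0.01)
print(x.mean())
```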
B-Spline potential function for maximum a-posteriori image reconstruction in fluorescence microscopy
Shilpa Dilipkumar
2015-03-01
An iterative image reconstruction technique employing a B-spline potential function in a Bayesian framework is proposed for fluorescence microscopy images. B-splines are piecewise polynomials with smooth transitions and compact support, and are the shortest polynomial splines. Incorporation of the B-spline potential function in the maximum-a-posteriori reconstruction technique resulted in improved contrast, enhanced resolution, and substantial background reduction. The proposed technique is validated on simulated data as well as on images acquired from fluorescence microscopes (widefield, confocal laser scanning fluorescence, and super-resolution 4Pi microscopy). A comparative study of the proposed technique with the state-of-the-art maximum likelihood (ML) and maximum-a-posteriori (MAP) techniques with quadratic potential function shows its superiority over the others. The B-spline MAP technique can find applications in several imaging modalities of fluorescence microscopy, such as selective plane illumination microscopy, localization microscopy, and STED.
Bian, Liheng; Chung, Jaebum; Ou, Xiaoze; Yang, Changhuei; Chen, Feng; Dai, Qionghai
2016-01-01
Fourier ptychographic microscopy (FPM) is a novel computational coherent imaging technique for high space-bandwidth product imaging. Mathematically, Fourier ptychographic (FP) reconstruction can be implemented as a phase retrieval optimization process, in which we obtain only low-resolution intensity images corresponding to sub-bands of the sample's high-resolution (HR) spatial spectrum and aim to retrieve the complex HR spectrum. In real setups, the measurements always suffer from various degenerations such as Gaussian noise, Poisson noise, speckle noise, and pupil location error, which can largely degrade the reconstruction. To efficiently address these degenerations, in this paper we propose a novel FP reconstruction method under a gradient descent optimization framework. The technique utilizes Poisson maximum likelihood for better signal modeling, and a truncated Wirtinger gradient for error removal. Results on both simulated data and real data captured using our laser FPM setup show that the proposed...
Maximum-entropy weak lens reconstruction improved methods and application to data
Marshall, P J; Gull, S F; Bridle, S L
2002-01-01
We develop the maximum-entropy weak shear mass reconstruction method presented in earlier papers by taking each background galaxy image shape as an independent estimator of the reduced shear field and incorporating an intrinsic smoothness into the reconstruction. The characteristic length scale of this smoothing is determined by Bayesian methods. Within this algorithm the uncertainties due to the intrinsic distribution of galaxy shapes are carried through to the final mass reconstruction, and the mass within arbitrarily shaped apertures can be calculated with corresponding uncertainties. We apply this method to two clusters taken from N-body simulations using mock observations corresponding to Keck LRIS and mosaiced HST WFPC2 fields. We demonstrate that the Bayesian choice of smoothing length is sensible and that masses within apertures (including one on a filamentary structure) are reliable. We apply the method to data taken on the cluster MS1054-03 using the Keck LRIS (Clowe et al. 2000) and HST (Hoekstra e...
Reconstruction of North American drainage basins and river discharge since the Last Glacial Maximum
Wickert, Andrew D.
2016-11-01
Over the last glacial cycle, ice sheets and the resultant glacial isostatic adjustment (GIA) rearranged river systems. As these riverine threads that tied the ice sheets to the sea were stretched, severed, and restructured, they also shrank and swelled with the pulse of meltwater inputs and time-varying drainage basin areas, and sometimes delivered enough meltwater to the oceans in the right places to influence global climate. Here I present a general method to compute past river flow paths, drainage basin geometries, and river discharges, by combining models of past ice sheets, glacial isostatic adjustment, and climate. The result is a time series of synthetic paleohydrographs and drainage basin maps from the Last Glacial Maximum to present for nine major drainage basins - the Mississippi, Rio Grande, Colorado, Columbia, Mackenzie, Hudson Bay, Saint Lawrence, Hudson, and Susquehanna/Chesapeake Bay. These are based on five published reconstructions of the North American ice sheets. I compare these maps with drainage reconstructions and discharge histories based on a review of observational evidence, including river deposits and terraces, isotopic records, mineral provenance markers, glacial moraine histories, and evidence of ice stream and tunnel valley flow directions. The sharp boundaries of the reconstructed past drainage basins complement the flexurally smoothed GIA signal that is more often used to validate ice-sheet reconstructions, and provide a complementary framework to reduce nonuniqueness in model reconstructions of the North American ice-sheet complex.
Yin, Lo I.; Bielefeld, Michael J.
1987-01-01
The maximum entropy method (MEM) and balanced correlation method were used to reconstruct the images of low-intensity X-ray objects obtained experimentally by means of a uniformly redundant array coded aperture system. The reconstructed images from MEM are clearly superior. However, the MEM algorithm is computationally more time-consuming because of its iterative nature. On the other hand, both the inherently two-dimensional character of images and the iterative computations of MEM suggest the use of parallel processing machines. Accordingly, computations were carried out on the massively parallel processor at Goddard Space Flight Center as well as on the serial processing machine VAX 8600, and the results are compared.
A new global reconstruction of temperature changes at the Last Glacial Maximum
J. D. Annan
2013-02-01
Some recent compilations of proxy data both on land and ocean (MARGO Project Members, 2009; Bartlein et al., 2011; Shakun et al., 2012) have provided a new opportunity for an improved assessment of the overall climatic state of the Last Glacial Maximum. In this paper, we combine these proxy data with the ensemble of structurally diverse state-of-the-art climate models which participated in the PMIP2 project (Braconnot et al., 2007) to generate a spatially complete reconstruction of surface air (and sea surface) temperatures. We test a variety of approaches, and show that multiple linear regression performs well for this application. Our reconstruction is significantly different from, and more accurate than, previous approaches, and we obtain an estimated global mean cooling of 4.0 ± 0.8 °C (95% CI).
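One plausible realization (ours; names, shapes, and data are illustrative) of the multiple-linear-regression approach: calibrate, across the model ensemble, a regression from temperatures at the proxy sites to the temperature at every grid cell, then apply it to the proxy anomalies:

```python
# Toy sketch: fit one regression per grid cell across the ensemble, then
# map observed proxy anomalies to a spatially complete field.
import numpy as np

rng = np.random.default_rng(0)
n_models, n_proxy, n_grid = 9, 5, 2048                 # toy ensemble sizes
ens_proxy = rng.standard_normal((n_models, n_proxy))   # model temps, proxy sites
ens_grid = rng.standard_normal((n_models, n_grid))     # model temps, all cells

X = np.hstack([ens_proxy, np.ones((n_models, 1))])     # regressors + intercept
coef, *_ = np.linalg.lstsq(X, ens_grid, rcond=None)    # one fit per grid cell

proxy_obs = rng.standard_normal(n_proxy)               # observed proxy anomalies
field = np.append(proxy_obs, 1.0) @ coef               # spatially complete field
print(field.shape)                                     # (2048,)
```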
A New Maximum-Likelihood Technique for Reconstructing Cosmic-Ray Anisotropy at All Angular Scales
Ahlers, Markus; Desiati, Paolo; Díaz-Vélez, Juan Carlos; Fiorino, Daniel W; Westerhoff, Stefan
2016-01-01
The arrival directions of TeV-PeV cosmic rays show weak but significant anisotropies with relative intensities at the level of one per mille. Due to the smallness of the anisotropies, quantitative studies require careful disentanglement of detector effects from the observation. We discuss an iterative maximum-likelihood reconstruction that simultaneously fits cosmic ray anisotropies and detector acceptance. The method does not rely on detector simulations and provides an optimal anisotropy reconstruction for ground-based cosmic ray observatories located in the middle latitudes. It is particularly well suited to the recovery of the dipole anisotropy, which is a crucial observable for the study of cosmic ray diffusion in our Galaxy. We also provide general analysis methods for recovering large- and small-scale anisotropies that take into account systematic effects of the observation by ground-based detectors.
N. Etien
2007-10-01
In this study, we have combined a Burgundy grape harvest date record with new δ¹⁸O measurements conducted on cellulose from timbers and living trees from Fontainebleau castle and forest. Our reconstruction is expected to provide a reference series for the variability of growing season (April to September) temperature in Western Europe from 1596 to 2000. We have estimated an uncertainty of 0.55°C on individual growing season maximum temperature reconstructions. We are able to assess this uncertainty, which is not the case for many documentary sources (diaries, etc.), and not even the case for early instrumental temperature data.
We compare our data with a number of independent temperature estimates for Europe and the Northern Hemisphere. The comparison between our reconstruction and Manley's mean growing season temperature data provides an independent control of the quality of the Central England Temperature (CET) data. We show that our reconstruction preserves more variance back in time, because it was not distorted or averaged by statistical homogenisation methods.
Further work will be conducted to compare the δ¹⁸O data from wood cellulose provided by transects of different tree species in Europe obtained within the EC ISONET project and the French ANR program ESCARSEL, to analyse the spatial and temporal coherency between δ¹⁸O records. The decadal variability will also be compared with other precipitation δ¹⁸O records, such as those obtained from benthic ostracods from deep peri-Alpine lakes or simulated by regional atmospheric models equipped with modelling of water stable isotopes.
Limber, Mark A.; Manteuffel, Thomas A.; Mccormick, Stephen F.; Sholl, David S.
1993-01-01
We consider the problem of image reconstruction from a finite number of projections over the space L¹(Ω), where Ω is a compact subset of ℝ². We prove that, given a discretization of the projection space, the function that generates the correct projection data and maximizes the Boltzmann-Shannon entropy is piecewise constant on a certain discretization of Ω, which we call the 'optimal grid'. It is on this grid that one obtains the maximum resolution given the problem setup. The size of this grid grows very quickly as the number of projections and number of cells per projection grow, indicating fast computational methods are essential to make its use feasible. We use a Fenchel duality formulation of the problem to keep the number of variables small while still using the optimal discretization, and propose a multilevel scheme to improve convergence of a simple cyclic maximization scheme applied to the dual problem.
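In our notation, the variational problem studied is:

```latex
% R is the discretized projection (Radon-type) operator, b the measured data.
\[
  \max_{\rho \,\ge\, 0} \; -\int_{\Omega} \rho(x) \ln \rho(x)\, dx
  \quad \text{subject to} \quad R\rho = b ,
\]
% whose maximizer, per the result above, is piecewise constant on the
% "optimal grid" induced by the discretization of the projection space.
```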
Barth, Aaron M.; Clark, Peter U.; Clark, Jorie; McCabe, A. Marshall; Caffee, Marc
2016-06-01
Reconstructions of the extent and height of the Irish Ice Sheet (IIS) during the Last Glacial Maximum (LGM, ∼19-26 ka) are widely debated, in large part due to limited age constraints on former ice margins and due to uncertainties in the origin of the trimlines. A key area is southwestern Ireland, where various LGM reconstructions range from complete coverage by a contiguous IIS that extends to the continental shelf edge to a separate, more restricted southern-sourced Kerry-Cork Ice Cap (KCIC). We present new ¹⁰Be surface exposure ages from two moraines in a cirque basin in the Macgillycuddy's Reeks that provide a unique and unequivocal constraint on ice thickness for this region. Nine ¹⁰Be ages from an outer moraine yield a mean age of 24.5 ± 1.4 ka while six ages from an inner moraine yield a mean age of 20.4 ± 1.2 ka. These ages show that the northern flanks of the Macgillycuddy's Reeks were not covered by the IIS or a KCIC since at least 24.5 ± 1.4 ka. If there was more extensive ice coverage over the Macgillycuddy's Reeks during the LGM, it occurred prior to our oldest ages.
Parsimonious Language Models for Information Retrieval
Hiemstra, Djoerd; Robertson, Stephen; Zaragoza, Hugo
2004-01-01
We systematically investigate a new approach to estimating the parameters of language models for information retrieval, called parsimonious language models. Parsimonious language models explicitly address the relation between levels of language models that are typically used for smoothing. As such,
Reconstruction of the glacial maximum recorded in the central Cantabrian Mountains (N Iberia)
Rodríguez-Rodríguez, Laura; Jiménez-Sánchez, Montserrat; José Domínguez-Cuesta, María
2014-05-01
The Cantabrian Mountains are a coastal range up to 2648 m in altitude trending parallel to the northern edge of the Iberian Peninsula, at a maximum distance of 100 km inland (~43°N 5°W). Glacial sediments and landforms are generally well preserved at altitudes higher than 1600 m, evidencing the occurrence of former glaciations. Previous research supports a regional glacial maximum prior to ca 38 cal ka BP and an advanced state of deglaciation by the time of the global Last Glacial Maximum (Jiménez-Sánchez et al., 2013). A geomorphological database has been produced in ArcGIS (1:25,000 scale) for an area of about 800 km² that partially covers the Redes Natural Reservation and Picos de Europa Regional Park. A reconstruction of the ice extent and flow pattern of the former glaciers is presented for this area, showing that an ice field developed in the study area during the local glacial maximum. The maximum length of the ice tongues that drained this ice field was remarkably asymmetric between the two slopes, ranging from 1 to 6 km on the northern slope and up to 19 km on the southern one. The altitude difference between the glacier fronts of the two mountain slopes was ca 100 m. This asymmetric character of the ice tongues is related to geologic and topo-climatic factors. Jiménez-Sánchez, M., Rodríguez-Rodríguez, L., García-Ruiz, J.M., Domínguez-Cuesta, M.J., Farias, P., Valero-Garcés, B., Moreno, A., Rico, M., Valcárcel, M., 2013. A review of glacial geomorphology and chronology in northern Spain: timing and regional variability during the last glacial cycle. Geomorphology 196, 50-64. Research funded by the CANDELA project (MINECO-CGL2012-31938). L. Rodríguez-Rodríguez is a PhD student with a grant from the Spanish national FPU Program (MECD).
Zhu, Liangjun; Zhang, Yuandong; Li, Zongshan; Guo, Binde; Wang, Xiaochun
2016-07-01
We present a reconstruction of July-August mean maximum temperature variability based on a chronology of tree-ring widths over the period AD 1646-2013 in the northern part of the northwestern Sichuan Plateau (NWSP), China. A regression model explains 37.1% of the variance of July-August mean maximum temperature during the calibration period from 1954 to 2012. Compared with nearby temperature reconstructions and gridded land surface temperature data, our temperature reconstruction has high spatial representativeness. Seven major cold periods were identified (1708-1711, 1765-1769, 1818-1821, 1824-1828, 1832-1836, 1839-1842, and 1869-1877), and three major warm periods occurred in 1655-1668, 1719-1730, and 1858-1859. The typical Little Ice Age climate is also well represented in our reconstruction, clearly ending with climatic amelioration at the end of the 19th century. The 17th and 19th centuries were cold with more extreme cold years, while the 18th and 20th centuries were warm with fewer extreme cold years. Moreover, the 20th-century rapid warming is not obvious in the NWSP mean maximum temperature reconstruction, which implies that mean maximum temperature might play an important and distinct role in global change as a unique temperature indicator. Multi-taper method (MTM) spectral analysis revealed significant periodicities of 170-, 49-114-, 25-32-, 5.7-, 4.6-4.7-, 3.0-3.1-, 2.5-, and 2.1-2.3-year quasi-cycles at the 95% confidence level in our reconstruction. Overall, mean maximum temperature variability in the NWSP may be associated with global land-sea atmospheric circulation (e.g., ENSO, PDO, or AMO) as well as solar and volcanic forcing.
Harff, Jan; Meyer, Michael
2009-01-01
The range of relative sea level rise in the northwestern South China Sea since the Last Glacial Maximum was over 100 m. As a result, lowland regions including the Northeast Vietnam coast, Beibu Gulf, and South China coast experienced an evolution from land to sea. Based on the principle of reconstructing paleogeography and using a recent digital elevation model, relative sea level curves, and sediment accumulation data, this paper presents a series of paleogeographic scenarios back to 20 cal. ka BP for the northwestern South China Sea. The scenarios demonstrate the entire process of coastline change for the area of interest. During the late glacial period from 20 to 15 cal. ka BP, the coastline retreated slowly, causing a land loss of only 1×10⁴ km², so the land-sea distribution remained nearly unchanged. Later, from 15 to 10 cal. ka BP, the coastline retreated rapidly and the area of land loss was up to 24×10⁴ km², causing the lowlands around Northeast Vietnam and South China to soon be underwater. Coastline retreat continued quite rapidly during the early Holocene: from 10 to 6 cal. ka BP the land area decreased by 9×10⁴ km², and during that process the Qiongzhou Strait completely opened up. Since the mid-Holocene, the main controls on coastline change have been vertical crustal movements and sedimentation. Transgression was surpassed by regression, resulting in a land accretion of about 10×10⁴ km².
Two dimensional IR-FID-CPMG acquisition and adaptation of a maximum entropy reconstruction
Rondeau-Mouro, C.; Kovrlija, R.; Van Steenberge, E.; Moussaoui, S.
2016-04-01
By acquiring the FID signal in two-dimensional TD-NMR spectroscopy, it is possible to characterize mixtures or complex samples composed of solid and liquid phases. We have developed a new sequence for this purpose, called IR-FID-CPMG, making it possible to correlate spin-lattice T1 and spin-spin T2 relaxation times, including both liquid and solid phases in samples. We demonstrate here the potential of a new algorithm for the 2D inverse Laplace transformation of IR-FID-CPMG data, based on an adapted reconstruction of the maximum entropy method that combines the standard decreasing exponential decay function with an additional term drawn from Abragam's FID function. The results show that the proposed IR-FID-CPMG sequence and its related inversion model allow accurate characterization and quantification of both solid and liquid phases in multiphasic and compartmentalized systems. Moreover, it makes it possible to distinguish between solid phases having different T1 relaxation times and to highlight cross-relaxation phenomena.
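Our sketch of the signal model behind a T1-T2 correlation experiment of this kind; the inversion-recovery and CPMG factors are the standard kernels, with the paper's adaptation being the extra Abragam FID term:

```latex
% Assumed IR-CPMG kernel: inversion recovery in t1, echo decay in t2,
% linked by the joint relaxation-time density F(T1, T2).
\[
  S(t_1, t_2) \;=\; \iint F(T_1, T_2)\,
     \bigl(1 - 2 e^{-t_1/T_1}\bigr)\, e^{-t_2/T_2}\, dT_1\, dT_2 ,
\]
% The 2D inverse Laplace transform recovers F; the paper's adapted MaxEnt
% reconstruction augments the exponential t2 kernel with an Abragam FID
% term so that rigid (solid) components are also fitted.
```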
Parsimonious Refraction Interferometry and Tomography
Hanafy, Sherif
2017-02-04
We present parsimonious refraction interferometry and tomography, where a densely populated refraction data set can be obtained from two reciprocal and several infill shot gathers. The assumptions are that the refraction arrivals are head waves, and that a pair of reciprocal shot gathers and several infill shot gathers are recorded over the line of interest. Refraction traveltimes from these shot gathers are picked and spawned into O(N²) virtual refraction traveltimes generated by N virtual sources, where N is the number of geophones in the 2D survey. The virtual traveltimes can be inverted to give the velocity tomogram. This enormous increase in the number of traveltime picks and associated rays, compared to the many fewer traveltimes from the reciprocal and infill shot gathers, allows for increased model resolution and a better condition number for the system of normal equations. A significant benefit is that the parsimonious survey and the associated traveltime picking are far less time consuming than those for a standard refraction survey with a dense distribution of sources.
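The head-wave identity that spawns the virtual picks is the standard refraction-interferometry relation; a minimal sketch (our variable names and toy picks) of the O(N²) bookkeeping:

```python
# From one reciprocal pair of shots A and B at the line ends, the head-wave
# traveltime between geophones x < y follows from three picks:
#   t(x -> y) = t_A[y] + t_B[x] - t_AB   (standard head-wave identity)
import numpy as np

def virtual_traveltimes(t_A, t_B, t_AB):
    """t_A[i]: pick from shot A at geophone i; t_B[i]: from shot B;
    t_AB: the reciprocal traveltime between A and B."""
    n = len(t_A)
    t_virt = np.full((n, n), np.nan)
    for x in range(n):
        for y in range(x + 1, n):          # virtual source x, receiver y
            t_virt[x, y] = t_A[y] + t_B[x] - t_AB
    return t_virt                          # O(N^2) picks from O(N) real ones

t_A = np.linspace(0.05, 0.45, 96)          # toy head-wave picks, shot at A
t_B = t_A[::-1]                            # reciprocal gather, shot at B
print(int(np.sum(~np.isnan(virtual_traveltimes(t_A, t_B, t_A[-1])))))  # 4560
```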
Nor, Igor; Charlat, Sylvain; Engelstadter, Jan; Reuter, Max; Duron, Olivier; Sagot, Marie-France
2010-01-01
We address in this paper a new computational biology problem that aims at understanding a mechanism that could potentially be used to genetically manipulate natural insect populations infected by inherited, intra-cellular parasitic bacteria. In this problem, which we denote Mod/Resc Parsimony Inference, we are given a Boolean matrix and the goal is to find two other Boolean matrices with a minimum number of columns such that an appropriately defined operation on these matrices gives back the input. We show that this is formally equivalent to the Bipartite Biclique Edge Cover problem and derive some complexity results for our problem using this equivalence. We provide a new fixed-parameter tractability approach for solving both problems that slightly improves upon a previously published algorithm for Bipartite Biclique Edge Cover. Finally, we present experimental results in which we applied some of our techniques to a real-life data set.
Danforth, B N; Sauquet, H; Packer, L
1999-12-01
We investigated higher-level phylogenetic relationships within the genus Halictus based on parsimony and maximum likelihood (ML) analysis of elongation factor-1α DNA sequence data. Our data set includes 41 OTUs representing 35 species of halictine bees from a diverse sample of outgroup genera and from the three widely recognized subgenera of Halictus (Halictus s.s., Seladonia, and Vestitohalictus). We analyzed 1513 total aligned nucleotide sites spanning three exons and two introns. Equal-weights parsimony analysis of the overall data set yielded 144 equally parsimonious trees. Major conclusions supported in this analysis (and in all subsequent analyses) included the following: (1) Thrincohalictus is the sister group to Halictus s.l.; (2) Halictus s.l. is monophyletic; (3) Vestitohalictus renders Seladonia paraphyletic, but Seladonia + Vestitohalictus together is monophyletic; (4) Michener's Groups 1 and 3 are monophyletic; and (5) Michener's Group 1 renders Group 2 paraphyletic. In order to resolve basal relationships within Halictus, we applied various weighting schemes under parsimony (successive approximations character weighting and implied weights) and employed ML under 17 models of sequence evolution. Weighted parsimony yielded conflicting results but, in general, supported the hypothesis that Seladonia + Vestitohalictus is sister to Michener's Group 3 and renders Halictus s.s. paraphyletic. ML analyses using the GTR model with site-specific rates supported an alternative hypothesis: Seladonia + Vestitohalictus is sister to Halictus s.s. We mapped social behavior onto trees obtained under ML and parsimony in order to reconstruct the likely historical pattern of social evolution. Our results are unambiguous: the ancestral state for the genus Halictus is eusociality. Reversal to solitary behavior has occurred at least four times among the species included in our analysis.
Statistical parsimony networks and species assemblages in Cephalotrichid nemerteans (nemertea).
Chen, Haixia; Strand, Malin; Norenburg, Jon L; Sun, Shichun; Kajihara, Hiroshi; Chernyshev, Alexey V; Maslakova, Svetlana A; Sundberg, Per
2010-09-21
It has been suggested that statistical parsimony network analysis can be used to get an indication of the species represented in a set of nucleotide data, and the approach has been used to discuss species boundaries in some taxa. Based on 635 base pairs of the mitochondrial protein-coding gene cytochrome c oxidase I (COI), we analyzed 152 nemertean specimens using statistical parsimony network analysis with the connection probability set to 95%. The analysis revealed 15 distinct networks together with seven singletons. Statistical parsimony yielded three networks supporting the species status of Cephalothrix rufifrons, C. major and C. spiralis as they currently have been delineated by morphological characters and geographical location. Many other networks contained haplotypes from nearby geographical locations. Cladistic structure obtained by maximum likelihood analysis overall supported the network analysis, but indicated a false positive result where subnetworks should have been connected into one network/species. This is probably caused by undersampling of the intraspecific haplotype diversity. Statistical parsimony network analysis provides a rapid and useful tool for detecting possible undescribed/cryptic species among cephalotrichid nemerteans based on the COI gene. It should be combined with phylogenetic analysis to get indications of false positive results, i.e., subnetworks that would have been connected with more extensive haplotype sampling.
Apel, W D; Bähren, L; Bekk, K; Bertaina, M; Biermann, P L; Blümer, J; Bozdog, H; Brancus, I M; Cantoni, E; Chiavassa, A; Daumiller, K; de Souza, V; Di Pierro, F; Doll, P; Engel, R; Falcke, H; Fuchs, B; Fuhrmann, D; Gemmeke, H; Grupen, C; Haungs, A; Heck, D; Hörandel, J R; Horneffer, A; Huber, D; Huege, T; Isar, P G; Kampert, K -H; Kang, D; Krömer, O; Kuijpers, J; Link, K; Łuczak, P; Ludwig, M; Mathes, H J; Melissas, M; Morello, C; Oehlschläger, J; Palmieri, N; Pierog, T; Rautenberg, J; Rebel, H; Roth, M; Rühle, C; Saftoiu, A; Schieler, H; Schmidt, A; Schröder, F G; Sima, O; Toma, G; Trinchero, G C; Weindl, A; Wochele, J; Zabierowski, J; Zensus, J A
2014-01-01
LOPES is a digital radio interferometer located at Karlsruhe Institute of Technology (KIT), Germany, which measures radio emission from extensive air showers at MHz frequencies in coincidence with KASCADE-Grande. In this article, we explore a method (slope method) which leverages the slope of the measured radio lateral distribution to reconstruct crucial attributes of primary cosmic rays. First, we present an investigation of the method on the basis of pure simulations. Second, we directly apply the slope method to LOPES measurements. Applying the slope method to simulations, we obtain uncertainties on the reconstruction of energy and depth of shower maximum Xmax of 13% and 50 g/cm^2, respectively. Applying it to LOPES measurements, we are able to reconstruct energy and Xmax of individual events with upper limits on the precision of 20-25% for the primary energy and 95 g/cm^2 for Xmax, despite strong human-made noise at the LOPES site.
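The idea of the slope method can be sketched as a two-step procedure: fit an exponential lateral distribution to the measured amplitudes, then map the fitted slope to Xmax (and the axis amplitude to energy) through a simulation-derived calibration. The sketch below uses hypothetical antenna data and placeholder calibration coefficients; the actual LOPES calibration comes from dedicated air shower simulations.

```python
# Sketch of the "slope method" idea with hypothetical calibration constants:
# fit an exponential lateral distribution to radio amplitudes, then map the
# fitted slope to Xmax via a linear calibration. Not the LOPES analysis itself.
import numpy as np

def fit_lateral_slope(distances_m, amplitudes):
    """Fit ln(A) = ln(A0) - eta*d by least squares; return (A0, eta)."""
    slope, intercept = np.polyfit(distances_m, np.log(amplitudes), 1)
    return np.exp(intercept), -slope

rng = np.random.default_rng(0)
d = np.array([50.0, 100.0, 150.0, 200.0])          # antenna axis distances
amp = 10.0 * np.exp(-0.004 * d) * (1 + 0.05 * rng.normal(size=4))

a0, eta = fit_lateral_slope(d, amp)
# Placeholder linear calibrations (the real ones are simulation-derived):
energy_eev = 0.1 * a0                 # hypothetical energy scale from A0
xmax_gcm2 = 500.0 + 5.0e4 * eta       # hypothetical Xmax scale from slope
print(f"eta = {eta:.4f} / m, E ~ {energy_eev:.2f} EeV, "
      f"Xmax ~ {xmax_gcm2:.0f} g/cm^2")
```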
Jang, Yun Hee; Cho, Bum Sang; Kang, Min Ho; Kang, Woo Young; Lee, Jisun; Kim, Yook; Lee, Soo Hyun; Lee, Soo Jung [Dept. of Radiology, Chungbuk National University Hospital, Cheongju (Korea, Republic of); Lee, Jin Yong [Public Health Medical Service, Seoul National University Boramae Medical Center, Seoul (Korea, Republic of)
2016-07-15
The purpose of this study was to determine a suitable position in which the renal length measured on ultrasound is closest to the true renal length obtained from a multiplanar reconstructed MR image. A total of 33 individuals (15 males, 18 females) without any underlying renal disease were included. Renal length was measured as the longest axis at the level of the renal hilum in three positions: supine, lateral decubitus, and prone. With a 3.0 T MR scanner, a 3D eTHRIVE sequence was acquired, and the maximum longitudinal length of both kidneys was then measured on multiplanar reconstructed MR images. A paired t-test was used to compare the renal lengths obtained by ultrasonographic measurement with those obtained from the multiplanar reconstructed MR images. Our study demonstrated a significant difference between sonographic renal length in all three positions and renal length on MRI (p < 0.001). However, the longest of the three ultrasound measurements of the right kidney was statistically similar to the renal length measured on the reconstructed MR image. Among the three positions, the lateral decubitus showed the strongest correlation with true renal length (right: 0.887; left: 0.849). We recommend measuring the maximum renal longitudinal length in all possible positions on ultrasonography; if that is not feasible, measurement in the lateral decubitus position is best, given its strongest correlation coefficient with true renal length.
2015-01-01
Since its commissioning in autumn 2012, Tunka-Rex, the radio extension of the air-Cherenkov detector Tunka-133, has performed three years of air shower measurements. Currently, the detector consists of 44 antennas, connected to air-Cherenkov and scintillator detectors, respectively, placed in the Tunka valley, Siberia. Triggered by these detectors, Tunka-Rex measures the radio signal of air showers up to the EeV scale. This configuration provides a unique possibility for cross-calibration between air-Cherenkov, radio and particle techniques. We present reconstruction methods for the energy and the shower maximum developed with CoREAS simulations, which allow for a precision competitive with the air-Cherenkov technique. We apply these methods to data acquired by Tunka-Rex in the first year, which we use for cross-calibration, and we compare the results with the reconstruction of the energy and the shower maximum by Tunka-133, which also provides the reconstruction of the shower core used for the radio reconstruction. Our met...
Burns, Brian; Wilson, Neil E; Furuyama, Jon K; Thomas, M Albert
2014-02-01
The four-dimensional (4D) echo-planar correlated spectroscopic imaging (EP-COSI) sequence allows for the simultaneous acquisition of two spatial (ky, kx) and two spectral (t2, t1) dimensions in vivo in a single recording. However, its scan time is directly proportional to the number of increments in the ky and t1 dimensions, and a single scan can take 20–40 min using typical parameters, which is too long to be used for a routine clinical protocol. The present work describes efforts to accelerate EP-COSI data acquisition by application of non-uniform under-sampling (NUS) to the ky–t1 plane of simulated and in vivo EP-COSI datasets then reconstructing missing samples using maximum entropy (MaxEnt) and compressed sensing (CS). Both reconstruction problems were solved using the Cambridge algorithm, which offers many workflow improvements over other l1-norm solvers. Reconstructions of retrospectively under-sampled simulated data demonstrate that the MaxEnt and CS reconstructions successfully restore data fidelity at signal-to-noise ratios (SNRs) from 4 to 20 and 5× to 1.25× NUS. Retrospectively and prospectively 4× under-sampled 4D EP-COSI in vivo datasets show that both reconstruction methods successfully remove NUS artifacts; however, MaxEnt provides reconstructions equal to or better than CS. Our results show that NUS combined with iterative reconstruction can reduce 4D EP-COSI scan times by 75% to a clinically viable 5 min in vivo, with MaxEnt being the preferred method. 2013 John Wiley & Sons, Ltd.
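As an illustration of the compressed-sensing half of this approach, the sketch below reconstructs a non-uniformly under-sampled one-dimensional signal by iterative soft-thresholding with data consistency, assuming sparsity in the Fourier (spectral) domain. It is a toy stand-in for the Cambridge algorithm, with hypothetical data and threshold.

```python
# Minimal compressed-sensing sketch for non-uniformly under-sampled data:
# iterative soft-thresholding with data consistency, assuming the signal
# is sparse in its Fourier (spectral) domain.
import numpy as np

def cs_reconstruct(zero_filled, mask, n_iter=300, lam=0.5):
    """zero_filled: acquired samples with zeros elsewhere; mask: sampled points."""
    x = zero_filled.copy()
    for _ in range(n_iter):
        spec = np.fft.fft(x)
        mag = np.abs(spec)
        # Soft-threshold the spectrum to promote spectral sparsity.
        spec = spec / np.maximum(mag, 1e-12) * np.maximum(mag - lam, 0.0)
        x = np.fft.ifft(spec)
        x[mask] = zero_filled[mask]  # enforce consistency with measured data
    return x

# Toy FID: two decaying complex sinusoids, 4x randomly under-sampled.
n = 256
t = np.arange(n)
fid = np.exp(2j * np.pi * 0.11 * t - t / 80) + \
      0.5 * np.exp(2j * np.pi * 0.23 * t - t / 60)
rng = np.random.default_rng(0)
mask = np.zeros(n, dtype=bool)
mask[rng.choice(n, n // 4, replace=False)] = True
recon = cs_reconstruct(np.where(mask, fid, 0), mask)
print("relative error:", np.linalg.norm(recon - fid) / np.linalg.norm(fid))
```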
Papaconstadopoulos, P; Levesque, I R; Maglieri, R; Seuntjens, J
2016-02-07
Direct determination of the source intensity distribution of clinical linear accelerators is still a challenging problem for small field beam modeling. Current techniques most often involve special equipment and are difficult to implement in the clinic. In this work we present a maximum-likelihood expectation-maximization (MLEM) approach to the source reconstruction problem utilizing small fields and a simple experimental set-up. The MLEM algorithm iteratively ray-traces photons from the source plane to the exit plane and extracts corrections based on photon fluence profile measurements. The photon fluence profiles were determined by dose profile film measurements in air using a high-density thin foil as build-up material and an appropriate point spread function (PSF). The effect of other beam parameters and scatter sources was minimized by using the smallest field size ([Formula: see text] cm²). The source occlusion effect was reproduced by estimating the position of the collimating jaws during this process. The method was first benchmarked against simulations for a range of typical accelerator source sizes. The sources were reconstructed with an accuracy better than 0.12 mm in full width at half maximum (FWHM) relative to the respective electron sources incident on the target. The estimated jaw positions agreed to within 0.2 mm with the expected values. The reconstruction technique was also tested against measurements on a Varian Novalis Tx linear accelerator and compared to a previously commissioned Monte Carlo model. The reconstructed FWHM of the source agreed to within 0.03 mm and 0.11 mm with the commissioned electron source in the crossplane and inplane orientations, respectively. The impact of jaw positioning, experimental and PSF uncertainties on the reconstructed source distribution was evaluated, with jaw positioning presenting the dominant effect.
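The MLEM update underlying this kind of source reconstruction can be written in a few lines. The sketch below applies the standard multiplicative MLEM iteration to a toy linear problem in which a Gaussian blur stands in for the ray-traced projection from the source plane to the exit plane; all sizes and kernels are hypothetical.

```python
# Minimal MLEM sketch: iteratively update a non-negative source estimate
# so that its forward projection matches the measured fluence profile.
import numpy as np

def mlem(A, y, n_iter=500):
    """MLEM for y ~ Poisson(A @ x): multiplicative non-negative updates."""
    x = np.ones(A.shape[1])                  # flat initial source estimate
    sens = np.maximum(A.sum(axis=0), 1e-12)  # sensitivity (column sums)
    for _ in range(n_iter):
        proj = A @ x                         # forward projection
        x *= (A.T @ (y / np.maximum(proj, 1e-12))) / sens
    return x

# Toy problem: Gaussian source blurred by a known kernel, standing in for
# the ray-traced projection from source plane to exit plane.
n = 64
grid = np.arange(n)
true = np.exp(-0.5 * ((grid - 32) / 2.0) ** 2)
kernel = np.exp(-0.5 * ((grid[:, None] - grid[None, :]) / 4.0) ** 2)
A = kernel / kernel.sum(axis=0, keepdims=True)
estimate = mlem(A, A @ true)
print("relative error:", np.linalg.norm(estimate - true) / np.linalg.norm(true))
```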
A framelet-based iterative maximum-likelihood reconstruction algorithm for spectral CT
Wang, Yingmei; Wang, Ge; Mao, Shuwei; Cong, Wenxiang; Ji, Zhilong; Cai, Jian-Feng; Ye, Yangbo
2016-11-01
Standard computed tomography (CT) cannot reproduce spectral information of an object. Hardware solutions include dual-energy CT, which scans the object twice at different x-ray energy levels, and energy-discriminative detectors, which can separate lower and higher energy levels from a single x-ray scan. In this paper, we propose a software solution and give an iterative algorithm that reconstructs an image with spectral information from just one scan with a standard energy-integrating detector. The spectral information obtained can be used to produce color CT images, spectral curves of the attenuation coefficient μ(r,E) at points inside the object, and photoelectric images, which are all valuable imaging tools in cancer diagnosis. Our software solution requires no change to the hardware of a CT machine. With the Shepp-Logan phantom, we found that although the photoelectric and Compton components were not perfectly reconstructed, their composite effect was very accurately reconstructed as compared to the ground truth and the dual-energy CT counterpart. This means that our proposed method has an intrinsic benefit in beam hardening correction and metal artifact reduction. The algorithm is based on a nonlinear polychromatic acquisition model for x-ray CT. The key technique is a sparse representation of iterations in a framelet system. Convergence of the algorithm is studied. This is believed to be the first application of framelet imaging tools to a nonlinear inverse problem.
Banks, William E.; d'Errico, Francesco; Peterson, A. Townsend; Kageyama, Masa; Colombeau, Guillaume
2008-12-01
A variety of approaches have been used to reconstruct glacial distributions of species, identify their environmental characteristics, and understand their influence on subsequent population expansions. Traditional methods, however, provide only rough estimates of past distributions, and are often unable to identify the ecological and geographic processes that shaped them. Recently, ecological niche modeling (ENM) methodologies have been applied to these questions in an effort to overcome such limitations. We apply ENM to the European faunal record of the Last Glacial Maximum (LGM) to reconstruct ecological niches and potential ranges for caribou ( Rangifer tarandus) and red deer ( Cervus elaphus), and evaluate whether their LGM distributions resulted from tracking the geographic footprint of their ecological niches (niche conservatism) or if ecological niche shifts between the LGM and present might be implicated. Results indicate that the LGM geographic ranges of both species represent distributions characterized by niche conservatism, expressed through geographic contraction of the geographic footprints of their respective ecological niches.
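As an illustration of the envelope end of the ENM family, the sketch below calibrates the range of climate values at known occurrences and flags LGM grid cells falling inside that envelope. All data are hypothetical, and real ENM implementations are considerably richer than this minimal climate-envelope stand-in.

```python
# Very simplified climate-envelope stand-in for ecological niche modeling:
# learn the range of climate values at known occurrences, then flag LGM
# grid cells falling inside that envelope as potential range. Hypothetical
# data; real ENM implementations are far richer.
import numpy as np

def fit_envelope(climate_at_occurrences):
    """climate_at_occurrences: (n_points, n_vars) array."""
    return (climate_at_occurrences.min(axis=0),
            climate_at_occurrences.max(axis=0))

def project(envelope, climate_grid):
    """climate_grid: (n_cells, n_vars); returns boolean presence mask."""
    lo, hi = envelope
    return np.all((climate_grid >= lo) & (climate_grid <= hi), axis=1)

rng = np.random.default_rng(1)
# Hypothetical occurrences: (mean annual temp, annual precip) for caribou.
occ = np.column_stack([rng.normal(-2, 2, 40), rng.normal(450, 80, 40)])
env = fit_envelope(occ)
# Hypothetical LGM climate grid: colder and drier than present.
lgm = np.column_stack([rng.normal(-8, 5, 1000), rng.normal(350, 120, 1000)])
print("fraction of LGM cells in envelope:", project(env, lgm).mean())
```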
Dianfeng Liu; Zimei Dong; Yanze Gu; Lingxia Tao
2008-01-01
We studied patterns of distribution and relationships among distributional areas of Tetrigidae insects in China using parsimony analysis of endemism (PAE). We constructed a matrix based on distribution data for Chinese Tetrigidae insects and an area cladogram using the northeastern China area as an outgroup. Exhaustive searches were conducted under the maximum parsimony criterion. Cluster analysis divided the eight biogeographic areas into four groups; group 1 was composed of northeast China, group 2 ...
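In PAE, areas play the role of taxa and species presence/absence values play the role of binary characters, so candidate area cladograms are scored exactly as in standard parsimony. The sketch below implements Fitch's scoring for one such character on a hypothetical area cladogram; a full search would repeat this over all characters and all candidate trees.

```python
# Minimal Fitch parsimony scoring, as used (per character) when searching
# for the most parsimonious area cladogram in PAE. The tree is given as
# nested tuples; leaves map to states via the presence/absence matrix.
def fitch_length(tree, states):
    """Return (state_set, changes) for a rooted binary tree given as
    nested tuples of leaf names; states maps leaf -> character state."""
    if isinstance(tree, str):                 # leaf
        return {states[tree]}, 0
    l_set, l_cost = fitch_length(tree[0], states)
    r_set, r_cost = fitch_length(tree[1], states)
    inter = l_set & r_set
    if inter:
        return inter, l_cost + r_cost
    return l_set | r_set, l_cost + r_cost + 1  # one extra change

# Hypothetical area cladogram and one presence/absence character
# (1 = a tetrigid species occurs in the area).
tree = ((("NE_China", "N_China"), "SW_China"), ("S_China", "Taiwan"))
char = {"NE_China": 0, "N_China": 0, "SW_China": 1, "S_China": 1, "Taiwan": 1}
print("steps:", fitch_length(tree, char)[1])
```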
Veberic, Darko
2011-01-01
We present a novel method for combining the analog and photon-counting measurements of lidar transient recorders into reconstructed photon returns. The method takes into account the statistical properties of the two measurement modes and estimates the most likely number of arriving photons and the most likely values of the acquisition parameters describing the two measurement modes. It extends and improves the standard combining ("gluing") methods and does not rely on any ad hoc definition of the overlap region nor on any background subtraction method.
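A crude flavor of such gluing can be given by an inverse-variance weighted combination of the dead-time-corrected photon counts and the analog trace converted to photon units, as in the sketch below with hypothetical numbers. The actual method replaces this with a full likelihood maximization that also estimates the acquisition parameters.

```python
# Crude sketch of combining analog and photon-counting lidar signals,
# assuming the analog trace has already been converted to photon units
# via a gain estimate: an inverse-variance weighted average, standing in
# for the full likelihood estimation of the paper. Numbers hypothetical.
import numpy as np

def combine(analog_photons, analog_var, counted, dead_time_frac):
    # Dead-time-corrected counts; Poisson variance ~ corrected rate.
    corrected = counted / (1.0 - dead_time_frac)
    var_count = np.maximum(corrected, 1.0)
    w_a, w_c = 1.0 / analog_var, 1.0 / var_count
    return (w_a * analog_photons + w_c * corrected) / (w_a + w_c)

rng = np.random.default_rng(0)
true_rate = np.linspace(500, 1, 100)                 # photons per bin
counted = rng.poisson(true_rate * 0.9)               # counting losses
analog = true_rate + rng.normal(scale=20, size=100)  # noisy analog mode
glued = combine(analog, analog_var=20.0 ** 2, counted=counted,
                dead_time_frac=0.1)
print("rms error:", np.sqrt(np.mean((glued - true_rate) ** 2)))
```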
Hong, Hunsop; Schonfeld, Dan
2008-06-01
In this paper, we propose a maximum-entropy expectation-maximization (MEEM) algorithm. We use the proposed algorithm for density estimation. The maximum-entropy constraint is imposed for smoothness of the estimated density function. The derivation of the MEEM algorithm requires determination of the covariance matrix in the framework of the maximum-entropy likelihood function, which is difficult to solve analytically. We, therefore, derive the MEEM algorithm by optimizing a lower-bound of the maximum-entropy likelihood function. We note that the classical expectation-maximization (EM) algorithm has been employed previously for 2-D density estimation. We propose to extend the use of the classical EM algorithm for image recovery from randomly sampled data and sensor field estimation from randomly scattered sensor networks. We further propose to use our approach in density estimation, image recovery and sensor field estimation. Computer simulation experiments are used to demonstrate the superior performance of the proposed MEEM algorithm in comparison to existing methods.
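For orientation, the sketch below shows the classical EM baseline for one-dimensional Gaussian mixture density estimation that MEEM builds on; the paper's maximum-entropy constraint would additionally regularize the covariance re-estimation in the M-step, which this toy omits.

```python
# Classical EM for a 1-D Gaussian mixture, the baseline that MEEM builds
# on; the maximum-entropy constraint of the paper would add a smoothness
# penalty when re-estimating the variances (not implemented here).
import numpy as np

def em_gmm(x, k=2, n_iter=100):
    rng = np.random.default_rng(0)
    mu = rng.choice(x, k)
    var = np.full(k, x.var())
    w = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibilities of each component for each point.
        dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) \
               / np.sqrt(2 * np.pi * var)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted ML updates (MEEM would regularize `var` here).
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var

x = np.concatenate([np.random.default_rng(1).normal(-2, 1.0, 500),
                    np.random.default_rng(2).normal(3, 0.5, 500)])
print(em_gmm(x))
```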
3D PET image reconstruction based on Maximum Likelihood Estimation Method (MLEM) algorithm
Słomski, Artur; Bednarski, Tomasz; Białas, Piotr; Czerwiński, Eryk; Kapłon, Łukasz; Kochanowski, Andrzej; Korcyl, Grzegorz; Kowal, Jakub; Kowalski, Paweł; Kozik, Tomasz; Krzemień, Wojciech; Molenda, Marcin; Moskal, Paweł; Niedźwiecki, Szymon; Pałka, Marek; Pawlik, Monika; Raczyński, Lech; Salabura, Piotr; Gupta-Sharma, Neha; Silarski, Michał; Smyrski, Jerzy; Strzelecki, Adam; Wiślicki, Wojciech; Zieliński, Marcin; Zoń, Natalia
2015-01-01
Positron emission tomographs (PET) do not measure an image directly. Instead, they measure at the boundary of the field-of-view (FOV) of the PET tomograph a sinogram that consists of measurements of the sums of all the counts along the lines connecting two detectors. As there is a multitude of detectors built into a typical PET tomograph, there are many possible detector pairs that pertain to the measurement. The problem is how to turn this measurement into an image (this is called imaging). A decisive improvement in PET image quality was reached with the introduction of iterative reconstruction techniques. This stage was reached twenty years ago (with the advent of new powerful computing processors). However, three-dimensional (3D) imaging still remains a challenge. The purpose of the image reconstruction algorithm is to process this imperfect count data for a large number (many millions) of lines-of-response (LOR) and millions of detected photons to produce an image showing the distribution of the l...
Le Brocq, A. M.; Bentley, M. J.; Hubbard, A.; Fogwill, C. J.; Sugden, D. E.; Whitehouse, P. L.
2011-09-01
The Weddell Sea Embayment (WSE) sector of the Antarctic ice sheet has been suggested as a potential source for a period of rapid sea-level rise: Meltwater Pulse 1a, a 20 m rise in ~500 years. Previous modelling attempts have predicted an extensive grounding line advance in the WSE, to the continental shelf break, leading to a large equivalent sea-level contribution for the sector. A range of recent field evidence suggests that the ice sheet elevation change in the WSE at the Last Glacial Maximum (LGM) is less than previously thought. This paper describes and discusses an ice flow modelling derived reconstruction of the LGM ice sheet in the WSE, constrained by the recent field evidence. The ice flow model reconstructions suggest that an ice sheet consistent with the field evidence does not support grounding line advance to the continental shelf break. A range of modelled ice sheet surfaces are instead produced, with different grounding line locations derived from a novel grounding line advance scheme. The ice sheet reconstructions which best fit the field constraints lead to a range of equivalent eustatic sea-level estimates between approximately 1.4 and 3 m for this sector. This paper describes the modelling procedure in detail, considers the assumptions and limitations associated with the modelling approach, and how the uncertainty may impact on the eustatic sea-level equivalent results for the WSE.
Maximum entropy reconstructions of dynamic signaling networks from quantitative proteomics data.
Locasale, Jason W; Wolf-Yadlin, Alejandro
2009-08-26
Advances in mass spectrometry among other technologies have allowed for quantitative, reproducible, proteome-wide measurements of levels of phosphorylation as signals propagate through complex networks in response to external stimuli under different conditions. However, computational approaches to infer elements of the signaling network strictly from the quantitative aspects of proteomics data are not well established. We considered a method using the principle of maximum entropy to infer a network of interacting phosphotyrosine sites from pairwise correlations in a mass spectrometry data set and derive a phosphorylation-dependent interaction network solely from quantitative proteomics data. We first investigated the applicability of this approach by using a simulation of a model biochemical signaling network whose dynamics are governed by a large set of coupled differential equations. We found that in a simulated signaling system, the method detects interactions with significant accuracy. We then analyzed a growth factor mediated signaling network in a human mammary epithelial cell line that we inferred from mass spectrometry data and observe a biologically interpretable, small-world structure of signaling nodes, as well as a catalog of predictions regarding the interactions among previously uncharacterized phosphotyrosine sites. For example, the calculation places a recently identified tumor suppressor pathway through ARHGEF7 and Scribble, in the context of growth factor signaling. Our findings suggest that maximum entropy derived network models are an important tool for interpreting quantitative proteomics data.
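For continuous data, the maximum-entropy distribution consistent with means and pairwise correlations is Gaussian, and the inferred direct interactions are the entries of the inverse covariance matrix. The sketch below, with simulated data standing in for phosphoproteomic measurements, recovers a chain of direct interactions while excluding the indirect correlation.

```python
# Sketch of maximum-entropy network inference from pairwise correlations:
# the MaxEnt model matching means and covariances is Gaussian, and its
# direct couplings are the entries of the inverse covariance (precision)
# matrix. Simulated data stand in for phosphosite measurements.
import numpy as np

def maxent_network(data, threshold=0.2):
    """data: (n_samples, n_sites). Returns partial-correlation adjacency."""
    cov = np.cov(data, rowvar=False)
    prec = np.linalg.inv(cov)                 # MaxEnt coupling matrix
    d = np.sqrt(np.diag(prec))
    partial = -prec / np.outer(d, d)          # partial correlations
    np.fill_diagonal(partial, 0.0)
    return np.abs(partial) > threshold, partial

rng = np.random.default_rng(3)
# Simulate a chain x0 -> x1 -> x2: x0 and x2 correlate only through x1.
x0 = rng.normal(size=2000)
x1 = 0.8 * x0 + rng.normal(scale=0.6, size=2000)
x2 = 0.8 * x1 + rng.normal(scale=0.6, size=2000)
adj, partial = maxent_network(np.column_stack([x0, x1, x2]))
print(adj)   # expect links 0-1 and 1-2, but not the indirect 0-2
```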
YAO YanTao; HARFF Jan; MEYER Michael; ZHAN WenHuan
2009-01-01
The range of relative sea level rise in the northwestern South China Sea since the Last Glacial Maximum was over 100 m. As a result, lowland regions including the Northeast Vietnam coast, Beibu Gulf, and South China coast experienced an evolution from land to sea. Based on the principles of reconstructing paleogeography and using a recent digital elevation model, relative sea level curves, and sediment accumulation data, this paper presents a series of paleogeographic scenarios back to 20 cal. ka BP for the northwestern South China Sea. The scenarios demonstrate the entire process of coastline change for the area of interest. During the late glacial period from 20 to 15 cal. ka BP, the coastline retreated slowly, causing a land loss of only 1×10⁴ km², and thus the land-sea distribution remained nearly unchanged. Later, in 15-10 cal. ka BP, the coastline retreated rapidly and the area of land loss was up to 24×10⁴ km², causing lowlands around Northeast Vietnam and South China soon to be underwater. Coastline retreat continued quite rapidly during the early Holocene. From 10 to 6 cal. ka BP land area decreased by 9×10⁴ km², and during that process the Qiongzhou Strait completely opened up. Since the mid-Holocene, the main controls on coastline change have been vertical crustal movements and sedimentation. Transgression was surpassed by regression, resulting in a land accretion of about 10×10⁴ km².
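The core operation of such a reconstruction is flooding a digital elevation model with the relative sea level of a chosen time slice; the study's full method additionally corrects elevations for crustal movement and sediment accumulation, which the toy sketch below (with hypothetical bathymetry and sea level curve) omits.

```python
# Minimal paleogeography sketch: flood a digital elevation model with the
# relative sea level of each time slice. Corrections for crustal movement
# and sediment accumulation are omitted. All numbers hypothetical.
import numpy as np

def coastline_mask(dem_m, relative_sea_level_m):
    """True where land (cell elevation above the sea level of the slice)."""
    return dem_m > relative_sea_level_m

rng = np.random.default_rng(4)
dem = rng.uniform(-130, 50, size=(200, 200))      # toy shelf bathymetry
# Hypothetical relative sea level curve (m below present) keyed by cal. ka BP.
rsl = {20: -120.0, 15: -100.0, 10: -40.0, 6: -5.0, 0: 0.0}
for ka, level in sorted(rsl.items(), reverse=True):
    land_fraction = coastline_mask(dem, level).mean()
    print(f"{ka:>2} cal. ka BP: {land_fraction:.0%} land")
```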
Parsimony score of phylogenetic networks: hardness results and a linear-time heuristic.
Jin, Guohua; Nakhleh, Luay; Snir, Sagi; Tuller, Tamir
2009-01-01
Phylogenies, the evolutionary histories of groups of organisms, play a major role in representing the interrelationships among biological entities. Many methods for reconstructing and studying such phylogenies have been proposed, almost all of which assume that the underlying history of a given set of species can be represented by a binary tree. Although many biological processes can be effectively modeled and summarized in this fashion, others cannot: recombination, hybrid speciation, and horizontal gene transfer result in networks of relationships rather than trees of relationships. In previous works, we formulated a maximum parsimony (MP) criterion for reconstructing and evaluating phylogenetic networks, and demonstrated its quality on biological as well as synthetic data sets. In this paper, we provide further theoretical results as well as a very fast heuristic algorithm for the MP criterion of phylogenetic networks. In particular, we provide a novel combinatorial definition of phylogenetic networks in terms of "forbidden cycles," and provide detailed hardness and hardness-of-approximation proofs for the "small" MP problem. We demonstrate the performance of our heuristic in terms of time and accuracy on both biological and synthetic data sets. Finally, we explain the difference between our model and a similar one formulated by Nguyen et al., and describe the implications of this difference on the hardness and approximation results.
Partin, Judson Wiley
The West Pacific Warm Pool (WPWP) plays an important role in the global heat budget and global hydrologic cycle, so knowledge about its past variability would improve our understanding of global climate. Variations in WPWP precipitation are most notable during El Niño-Southern Oscillation events, when climate changes in the tropical Pacific impact rainfall not only in the WPWP, but around the globe. The stalagmite records presented in this dissertation provide centennial-to-millennial-scale constraints on WPWP precipitation during three distinct climatic periods: the Last Glacial Maximum (LGM), the last deglaciation, and the Holocene. In Chapter 2, the methodologies associated with the generation of U/Th-based absolute ages for the stalagmites are presented. In the final age models for the stalagmites, dates younger than 11,000 years have absolute errors of ±400 years or less, and dates older than 11,000 years have a relative error of ±2%. Stalagmite-specific 230Th/232Th ratios, calculated using isochrons, are used to correct for the presence of unsupported 230Th in a stalagmite at the time of formation. Hiatuses in the record are identified using a combination of optical properties, high 232Th concentrations, and extrapolation from adjacent U/Th dates. In Chapter 3, stalagmite oxygen isotopic composition (δ18O) records from N. Borneo are presented which reveal millennial-scale rainfall changes that occurred in response to changes in global climate boundary conditions, radiative forcing, and abrupt climate changes. The stalagmite δ18O records detect little change in inferred precipitation between the LGM and the present, although significant uncertainties are associated with the impact of the Sunda Shelf on rainfall δ18O during the LGM. A millennial-scale drying in N. Borneo, inferred from an increase in stalagmite δ18O, peaks at ~16.5 ka, coeval with the timing of Heinrich event 1, possibly related to a southward movement of the Intertropical Convergence Zone.
A. J. Kettle
2010-07-01
Full Text Available Archaeozoological finds of the remains of marine and amphihaline fish from the Last Glacial Maximum (LGM), ca. 21 ka ago, show evidence of very different species ranges compared to the present. We show how an ecological niche model (ENM) based on palaeoclimatic reconstructions of sea surface temperature and bathymetry can be used to effectively predict the spatial range of marine fish during the LGM. The results indicate that the ranges of marine fish species that are now in Northwestern Europe were almost completely displaced southward from the modern distribution. Significantly, there is strong evidence that there was an invasion of fish of current economic importance into the Western Mediterranean through the Straits of Gibraltar, where they were exploited by Palaeolithic human populations. There has been much recent interest in marine glacial refugia to understand how the ranges of economically important fish species will be displaced with future climate warming. Recent ENM studies have suggested that species ranges may not have been displaced far southward during the coldest conditions of the LGM. However, archaeozoological evidence and LGM ocean temperature reconstructions indicate that there were large range changes, and certain marine species were able to invade the Western Mediterranean. These findings are important for ongoing studies of molecular ecology that aim to assess marine glacial refugia from the genetic structure of living populations, and they pose questions about the genetic identity of vanished marine populations during the LGM. The research presents a challenge for future archaeozoological work to verify palaeoclimatic reconstructions and delimit the glacial refugia.
Boroomand, A.; Shafiee, M. J.; Wong, A.; Bizheva, K.
2015-03-01
The lateral resolution of a Spectral Domain Optical Coherence Tomography (SD-OCT) image is limited by the focusing properties of the OCT imaging probe optics, the wavelength range at which the SD-OCT system operates, spherical and chromatic aberrations induced by the imaging optics, the optical properties of the imaged object and, in the special case of in-vivo retinal imaging, by the optics of the eye. This limitation often results in challenges with resolving fine details and structures of the imaged sample outside the Depth-Of-Focus (DOF) range. We propose a novel technique for generating Laterally Resolved OCT (LR-OCT) images using OCT measurements acquired with intentional imbrications. The proposed method is based on a Maximum A Posteriori (MAP) reconstruction framework which takes advantage of a Stochastic Fully Connected Conditional Random Field (SFCRF) model to compensate for artifacts and noise when reconstructing an LR-OCT image from imbricated OCT measurements. The proposed lateral resolution enhancement method was tested on synthetic OCT measurements as well as on a human cornea SD-OCT image to evaluate the usefulness of the proposed approach in lateral resolution enhancement. Experimental results show that applying this method to OCT images noticeably improves the sharpness of morphological features in the lateral direction, demonstrating better delineation of fine dot-shaped details in the synthetic OCT test image, as well as better delineation of the keratocyte cells in the human corneal OCT test image.
Makeev, Andrey; Ikejimba, Lynda; Lo, Joseph Y.; Glick, Stephen J.
2016-03-01
Although digital mammography has reduced breast cancer mortality by approximately 30%, sensitivity and specificity are still far from perfect. In particular, the performance of mammography is especially limited for women with dense breast tissue. Two out of every three biopsies performed in the U.S. are unnecessary, resulting in increased patient anxiety, pain, and possible complications. One promising tomographic breast imaging method that has recently been approved by the FDA is dedicated breast computed tomography (BCT). However, visualizing lesions with BCT can still be challenging for women with dense breast tissue due to the minimal contrast for lesions surrounded by fibroglandular tissue. In recent years there has been renewed interest in improving lesion conspicuity in x-ray breast imaging by administration of an iodinated contrast agent. Due to the fully 3D imaging nature of BCT, as well as sub-optimal contrast enhancement while the breast is under compression with mammography and breast tomosynthesis, dedicated BCT of the uncompressed breast is likely to offer the best solution for injected contrast-enhanced x-ray breast imaging. It is well known that use of statistically based iterative reconstruction in CT results in improved image quality at lower radiation dose. Here we investigate possible improvements in image reconstruction for BCT by optimizing the free regularization parameter of a maximum-likelihood reconstruction method and comparing its performance with the clinical cone-beam filtered backprojection (FBP) algorithm.
Morelli, Agnese; Bruno, Luigi; Cleveland, David M.; Drexler, Tina M.; Amorosi, Alessandro
2017-10-01
Paleosols are commonly used to reconstruct ancient landscapes and past environmental conditions. Through identification and subsurface mapping of two pedogenically modified surfaces, formed at the onset of the Last Glacial Maximum (LGM) and during the Younger Dryas (YD) cold event, respectively, and based on their lateral correlation with coeval channel-belt sand bodies, we assessed the geomorphic processes affecting the Po coastal plain during the Late Pleistocene (30-11.5 cal ky BP). The 3D reconstruction of the LGM and YD paleosurfaces provides insight into the paleolandscapes that developed in the Po alluvial plain at the transitions between warm and cold climate periods. The LGM paleosol records a stratigraphic hiatus of approximately 5 kyr (29-24 cal ky BP), whereas the development of the YD paleosol was associated with a climatic episode of significantly shorter duration. Both paleosols, dissected by Apennine rivers flowing from the south, dip towards the north-east, where they are replaced by fluvial channel belts fed by the Po River. The LGM channel-belt sand body reflects the protracted lateral migration of the Po River at the onset of the glacial maximum. It is wider (>24 km) and thicker (~15 m) than the fluvial sand body formed during the YD. The northern margin of the LGM Po channel-belt deposits was not encountered in the study area. In contrast, a spatially restricted paleosol, identified in the north at the same elevation as the southern plateau, may represent a local expression of the Alpine interfluve during the YD event. This study highlights how 3D mapping of regionally extensive, weakly developed paleosols can be used to assess the geomorphic response of an alluvial system to rapid climate change.
Parsimonious catchment and river flow modelling
Khatibi, R.H.; Moore, R.J.; Booij, Martijn J.; Cadman, D.; Boyce, G.; Rizzoli, A.E.; Jakeman, A.J.
2002-01-01
It is increasingly the case that models are being developed as “evolving” products rather than one-off application tools, such that auditable modelling versus ad hoc treatment of models becomes a pivotal issue. Auditable modelling is particularly vital to “parsimonious modelling” aimed at meeting
Izumi, Kenji; Bartlein, Patrick J.
2016-10-01
The inverse modeling through iterative forward modeling (IMIFM) approach was used to reconstruct Last Glacial Maximum (LGM) climates from North American fossil pollen data. The approach was validated using modern pollen data and observed climate data. While the large-scale LGM temperature IMIFM reconstructions are similar to those calculated using conventional statistical approaches, the reconstructions of moisture variables differ between the two approaches. We used two vegetation models, BIOME4 and BIOME5-beta, with the IMIFM approach to evaluate how differences in water-use efficiency, carbon-use efficiency, and atmospheric CO2 concentrations affect the LGM climate reconstruction. Although lower atmospheric CO2 concentrations influence pollen-based LGM moisture reconstructions, they do not significantly affect temperature reconstructions over most of North America. This study implies that the LGM climate over North America was very cold but not much drier than present, which is inconsistent with previous studies.
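The logic of inversion through iterative forward modeling can be sketched as a search over candidate climates, keeping the climate whose forward-modeled vegetation best matches the pollen-derived observation. In the toy below, a hypothetical toy_biome_model stands in for BIOME4/BIOME5-beta, and the search is a brute-force grid scan.

```python
# Sketch of inverse modeling through iterative forward modeling (IMIFM):
# search over candidate climates, run a forward vegetation model, and
# keep the climate whose simulated output best matches the observation.
# `toy_biome_model` is a hypothetical stand-in for BIOME4/BIOME5-beta.
import numpy as np

def toy_biome_model(temp_c, precip_mm, co2_ppm):
    """Hypothetical forward model: returns a 'vegetation score' vector."""
    return np.array([np.tanh(temp_c / 10.0),
                     np.tanh(precip_mm / 500.0),
                     np.log(co2_ppm / 280.0)])

def imifm(observed, co2_ppm, temps, precips):
    best, best_misfit = None, np.inf
    for t in temps:                      # brute-force the climate space
        for p in precips:
            misfit = np.sum((toy_biome_model(t, p, co2_ppm) - observed) ** 2)
            if misfit < best_misfit:
                best, best_misfit = (t, p), misfit
    return best

obs = toy_biome_model(-4.0, 350.0, 190.0)        # pretend LGM observation
print(imifm(obs, co2_ppm=190.0,                  # lowered LGM CO2 level
            temps=np.linspace(-15, 10, 51),
            precips=np.linspace(100, 900, 81)))
```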
Tremblay, Marissa; Spagnolo, Matteo; Ribolini, Adriano; Shuster, David
2016-04-01
The Gesso Valley, located in the southwestern-most, Maritime portion of the European Alps, contains an exceptionally well-preserved record of glacial advances during the late Pleistocene and Holocene. Detailed geomorphic mapping, geochronology of glacial deposits, and glacier reconstructions indicate that glaciers in this Mediterranean region responded to millennial scale climate variability differently than glaciers in the interior of the European Alps. This suggests that the Mediterranean Sea somehow modulated the climate of this region. However, since glaciers respond to changes in temperature and precipitation, both variables were potentially influenced by proximity to the Sea. To disentangle the competing effects of temperature and precipitation changes on glacier size, we are constraining past temperature variations in the Gesso Valley since the Last Glacial Maximum (LGM) using cosmogenic noble gas paleothermometry. The cosmogenic noble gases 3He and 21Ne experience diffusive loss from common minerals like quartz and feldspars at Earth surface temperatures. Cosmogenic noble gas paleothermometry utilizes this open-system behavior to quantitatively constrain thermal histories of rocks during exposure to cosmic ray particles at the Earth's surface. We will present measurements of cosmogenic 3He in quartz sampled from moraines in the Gesso Valley with LGM, Bühl stadial, and Younger Dryas ages. With these 3He measurements and experimental data quantifying the diffusion kinetics of 3He in quartz, we will provide a preliminary temperature reconstruction for the Gesso Valley since the LGM. Future work on samples from younger moraines in the valley system will be used to fill in details of the more recent temperature history.
Ritter, André; Durst, Jürgen; Gödel, Karl; Haas, Wilhelm; Michel, Thilo; Rieger, Jens; Weber, Thomas; Wucherer, Lukas; Anton, Gisela
2013-01-01
Phase-wrapping artifacts, statistical image noise and the need for a minimum amount of phase steps per projection limit the practicability of x-ray grating based phase-contrast tomography, when using filtered back projection reconstruction. For conventional x-ray computed tomography, the use of statistical iterative reconstruction algorithms has successfully reduced artifacts and statistical issues. In this work, an iterative reconstruction method for grating based phase-contrast tomography is presented. The method avoids the intermediate retrieval of absorption, differential phase and dark field projections. It directly reconstructs tomographic cross sections from phase stepping projections by the use of a forward projecting imaging model and an appropriate likelihood function. The likelihood function is then maximized with an iterative algorithm. The presented method is tested with tomographic data obtained through a wave field simulation of grating based phase-contrast tomography. The reconstruction result...
Reed, David L; Carpenter, Kent E; deGravelle, Martin J
2002-06-01
The Carangidae represent a diverse family of marine fishes that include both ecologically and economically important species. Currently, there are four recognized tribes within the family, but phylogenetic relationships among them based on morphology are not resolved. In addition, the tribe Carangini contains species with a variety of body forms and no study has tried to interpret the evolution of this diversity. We used DNA sequences from the mitochondrial cytochrome b gene to reconstruct the phylogenetic history of 50 species from each of the four tribes of Carangidae and four carangoid outgroup taxa. We found support for the monophyly of three tribes within the Carangidae (Carangini, Naucratini, and Trachinotini); however, monophyly of the fourth tribe (Scomberoidini) remains questionable. A sister group relationship between the Carangini and the Naucratini is well supported. This clade is apparently sister to the Trachinotini plus Scomberoidini but there is uncertain support for this relationship. Additionally, we examined the evolution of body form within the tribe Carangini and determined that each of the predominant clades has a distinct evolutionary trend in body form. We tested three methods of phylogenetic inference: parsimony, maximum likelihood, and Bayesian inference. Whereas the three analyses produced largely congruent hypotheses, they differed in several important relationships. Maximum-likelihood and Bayesian methods produced hypotheses with higher support values for deep branches. The Bayesian analysis was computationally much faster and yet produced phylogenetic hypotheses that were very similar to those of the maximum-likelihood analysis. (c) 2002 Elsevier Science (USA).
Predicting protein interactions via parsimonious network history inference.
Patro, Rob; Kingsford, Carl
2013-07-01
Reconstruction of the network-level evolutionary history of protein-protein interactions provides a principled way to relate interactions in several present-day networks. Here, we present a general framework for inferring such histories and demonstrate how it can be used to determine what interactions existed in the ancestral networks, which present-day interactions we might expect to exist based on evolutionary evidence and what information extant networks contain about the order of ancestral protein duplications. Our framework characterizes the space of likely parsimonious network histories. It results in a structure that can be used to find probabilities for a number of events associated with the histories. The framework is based on a directed hypergraph formulation of dynamic programming that we extend to enumerate many optimal and near-optimal solutions. The algorithm is applied to reconstructing ancestral interactions among bZIP transcription factors, imputing missing present-day interactions among the bZIPs and among proteins from five herpes viruses, and determining relative protein duplication order in the bZIP family. Our approach more accurately reconstructs ancestral interactions than existing approaches. In cross-validation tests, we find that our approach ranks the majority of the left-out present-day interactions among the top 2 and 17% of possible edges for the bZIP and herpes networks, respectively, making it a competitive approach for edge imputation. It also estimates relative bZIP protein duplication orders, using only interaction data and phylogenetic tree topology, which are significantly correlated with sequence-based estimates. The algorithm is implemented in C++, is open source and is available at http://www.cs.cmu.edu/ckingsf/software/parana2. Supplementary data are available at Bioinformatics online.
Parsimonious modeling with information filtering networks
Barfuss, Wolfram; Massara, Guido Previde; Di Matteo, T.; Aste, Tomaso
2016-12-01
We introduce a methodology to construct parsimonious probabilistic models. This method makes use of information filtering networks to produce a robust estimate of the global sparse inverse covariance from a simple sum of local inverse covariances computed on small subparts of the network. Being based on local and low-dimensional inversions, this method is computationally very efficient and statistically robust, even for the estimation of inverse covariance of high-dimensional, noisy, and short time series. Applied to financial data our method results are computationally more efficient than state-of-the-art methodologies such as Glasso producing, in a fraction of the computation time, models that can have equivalent or better performances but with a sparser inference structure. We also discuss performances with sparse factor models where we notice that relative performances decrease with the number of factors. The local nature of this approach allows us to perform computations in parallel and provides a tool for dynamical adaptation by partial updating when the properties of some variables change without the need of recomputing the whole model. This makes this approach particularly suitable to handle big data sets with large numbers of variables. Examples of practical application for forecasting, stress testing, and risk allocation in financial systems are also provided.
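The sum-of-local-inversions idea can be sketched directly: given a decomposable structure of cliques and separators (here simply assumed, whereas the paper obtains it from an information filtering network), the global sparse precision matrix is assembled from small local inversions. Toy data below follow a four-variable chain.

```python
# Sketch of the local-inversion idea: given a decomposable structure
# (cliques and separators, here assumed to come from an information
# filtering network), assemble a sparse inverse covariance from small
# local inversions. Toy 4-variable chain; hypothetical data.
import numpy as np

def local_inverse_covariance(cov, cliques, separators):
    n = cov.shape[0]
    J = np.zeros((n, n))
    for c in cliques:                       # add inverted clique blocks
        idx = np.ix_(c, c)
        J[idx] += np.linalg.inv(cov[idx])
    for s in separators:                    # subtract separator blocks
        idx = np.ix_(s, s)
        J[idx] -= np.linalg.inv(cov[idx])
    return J

rng = np.random.default_rng(5)
# AR(1)-style chain: true conditional structure is 0-1, 1-2, 2-3.
x = np.zeros((5000, 4))
x[:, 0] = rng.normal(size=5000)
for i in range(1, 4):
    x[:, i] = 0.7 * x[:, i - 1] + rng.normal(size=5000)
cov = np.cov(x, rowvar=False)
J = local_inverse_covariance(cov, cliques=[[0, 1], [1, 2], [2, 3]],
                             separators=[[1], [2]])
print(np.round(J, 2))  # entries off the chain are zero by construction
```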
Parsimonious Ways to Use Vision for Navigation
Paul Graham
2012-05-01
Full Text Available The use of visual information for navigation appears to be a universal strategy for sighted animals, amongst which one particular group of expert navigators are the ants. The broad interest in studies of ant navigation is in part due to their small brains; thus biomimetic engineers expect to be impressed by elegant control solutions, and psychologists might hope for a description of the minimal cognitive requirements for complex spatial behaviours. In this spirit, we have been taking an interdisciplinary approach to the visually guided navigation of ants in their natural habitat. Behavioural experiments and natural image statistics show that visual navigation need not depend on the remembering or recognition of objects. Further modelling work suggests how simple behavioural routines might enable navigation using familiarity detection rather than explicit recall, and we present a proof of concept that visual navigation using familiarity can be achieved without specifying when or what to learn, nor separating routes into sequences of waypoints. We suggest that our current model represents the only detailed and complete model of insect route guidance to date. What's more, we believe the suggested mechanisms represent useful parsimonious hypotheses for visually guided navigation in larger-brained animals.
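A minimal sketch of familiarity-based guidance, under the strong simplification that views are short brightness vectors and familiarity is the negative distance to the nearest stored training view: the agent simply heads in the direction whose predicted view is most familiar. Data are hypothetical; the actual models learn familiarity over real panoramic images.

```python
# Sketch of route guidance by visual familiarity, assuming "views" are
# 1-D brightness vectors and familiarity is the negative distance to the
# nearest stored training view. Hypothetical data only.
import numpy as np

def familiarity(view, memory):
    """Higher when the current view resembles any stored training view."""
    return -min(np.linalg.norm(view - m) for m in memory)

def steer(candidate_views, memory):
    """Head in the direction whose predicted view is most familiar."""
    scores = [familiarity(v, memory) for v in candidate_views]
    return int(np.argmax(scores))

rng = np.random.default_rng(0)
route_views = [np.sin(np.linspace(0, 6, 60) + 0.3 * i) for i in range(10)]
memory = [v + 0.05 * rng.normal(size=60) for v in route_views]  # training
# Candidate headings: one view on the route, two off-route distractors.
candidates = [route_views[4], rng.normal(size=60), rng.normal(size=60)]
print("chosen heading:", steer(candidates, memory))  # expect 0 (on-route)
```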
Nagarajan, Rajakumar; Iqbal, Zohaib; Burns, Brian; Wilson, Neil E; Sarma, Manoj K; Margolis, Daniel A; Reiter, Robert E; Raman, Steven S; Thomas, M Albert
2015-11-01
The overlap of metabolites is a major limitation in one-dimensional (1D) spectral-based single-voxel MRS and multivoxel-based MRSI. By combining echo planar spectroscopic imaging (EPSI) with a two-dimensional (2D) J-resolved spectroscopic (JPRESS) sequence, 2D spectra can be recorded in multiple locations in a single slice of prostate using four-dimensional (4D) echo planar J-resolved spectroscopic imaging (EP-JRESI). The goal of the present work was to validate two different non-linear reconstruction methods independently using compressed sensing-based 4D EP-JRESI in prostate cancer (PCa): maximum entropy (MaxEnt) and total variation (TV). Twenty-two patients with PCa with a mean age of 63.8 years (range, 46-79 years) were investigated in this study. A 4D non-uniformly undersampled (NUS) EP-JRESI sequence was implemented on a Siemens 3-T MRI scanner. The NUS data were reconstructed using two non-linear reconstruction methods, namely MaxEnt and TV. Using both TV and MaxEnt reconstruction methods, the following observations were made in cancerous compared with non-cancerous locations: (i) higher mean (choline + creatine)/citrate metabolite ratios; (ii) increased levels of (choline + creatine)/spermine and (choline + creatine)/myo-inositol; and (iii) decreased levels of (choline + creatine)/(glutamine + glutamate). We have shown that it is possible to accelerate the 4D EP-JRESI sequence by four times and that the data can be reliably reconstructed using the TV and MaxEnt methods. The total acquisition duration was less than 13 min and we were able to detect and quantify several metabolites.
Cheung, Y; Sawant, A [UT Southwestern Medical Center, Dallas, TX (United States); Hinkle, J; Joshi, S [University of Utah, Salt Lake City, UT (United States)
2014-06-01
Purpose: Thoracic motion changes from cycle to cycle and day to day. Conventional 4DCT does not capture these cycle-to-cycle variations. We present initial results of a novel 4DCT reconstruction technique based on maximum a posteriori (MAP) reconstruction. The technique uses the same acquisition process (and therefore dose) as a conventional 4DCT in order to create a high-spatiotemporal-resolution cine CT that captures several breathing cycles. Methods: Raw 4DCT data were acquired from a lung cancer patient. The continuous 4DCT was reconstructed using the MAP algorithm, which uses the raw, time-stamped CT data to reconstruct images while simultaneously estimating deformation in the subject's anatomy. This framework incorporates physical effects such as hysteresis and is robust to detector noise and irregular breathing patterns. The 4D image is described in terms of a 3D reference image defined at one end of the hysteresis loop and two deformation vector fields (DVFs) corresponding to inhale motion and exhale motion, respectively. The MAP method uses all of the CT projection data and maximizes the log posterior in order to iteratively estimate a time-variant deformation vector field that describes the entire moving and deforming volume. Results: The MAP 4DCT yielded CT-quality images for multiple cycles corresponding to the entire duration of CT acquisition, unlike the conventional 4DCT, which only yielded a single cycle. Variations such as amplitude and frequency changes and baseline shifts were clearly captured by the MAP 4DCT. Conclusion: We have developed a novel, binning-free, parameterized 4DCT reconstruction technique that can capture cycle-to-cycle variations of respiratory motion. This technique provides an invaluable tool for respiratory motion management research. This work was supported by funding from the National Institutes of Health and VisionRT Ltd. Amit Sawant receives research funding from Varian Medical Systems, Vision RT and Elekta.
Quality Quandaries- Time Series Model Selection and Parsimony
Bisgaard, Søren; Kulahci, Murat
2009-01-01
Some of the issues involved in selecting adequate models for time series data are discussed using an example concerning the number of users of an Internet server. The process of selecting an appropriate model is subjective and requires experience and judgment. The authors believe an important consideration in model selection should be parameter parsimony. They favor the use of parsimonious mixed ARMA models, noting that research has shown that a model building strategy that considers only autoregressive representations will lead to non-parsimonious models and to loss of forecasting accuracy.
Shigemitsu, Yoshiki; Ikeya, Teppei; Yamamoto, Akihiro; Tsuchie, Yuusuke; Mishima, Masaki; Smith, Brian O; Güntert, Peter; Ito, Yutaka
2015-02-06
Despite their advantages in analysis, 4D NMR experiments are still infrequently used as a routine tool in protein NMR projects due to the long duration of the measurement and limited digital resolution. Recently, new acquisition techniques for speeding up multidimensional NMR experiments, such as nonlinear sampling, in combination with non-Fourier transform data processing methods have been proposed to be beneficial for 4D NMR experiments. Maximum entropy (MaxEnt) methods have been utilised for reconstructing nonlinearly sampled multi-dimensional NMR data. However, the artefacts arising from MaxEnt processing, particularly in NOESY spectra, have not yet been clearly assessed in comparison with other methods, such as quantitative maximum entropy, multidimensional decomposition, and compressed sensing. We compared MaxEnt with other methods in reconstructing 3D NOESY data acquired with variously reduced sparse sampling schedules and found that MaxEnt is robust, quick and competitive with other methods. Next, nonlinear sampling and MaxEnt processing were applied to 4D NOESY experiments, and the effect of the artefacts of MaxEnt was evaluated by calculating 3D structures from the NOE-derived distance restraints. Our results demonstrated that sufficiently converged and accurate structures (RMSD of 0.91 Å to the mean and 1.36 Å to the reference structures) were obtained even with NOESY spectra reconstructed from 1.6% randomly selected sampling points for indirect dimensions. This suggests that 3D MaxEnt processing in combination with nonlinear sampling schedules is still a useful and advantageous option for rapid acquisition of high-resolution 4D NOESY spectra of proteins.
Stefano Zurrida
2011-01-01
Full Text Available Breast cancer is the most common cancer in women. Primary treatment is surgery, with mastectomy as the main treatment for most of the twentieth century. However, over that time, the extent of the procedure varied, and less extensive mastectomies are employed today compared to those used in the past, as excessively mutilating procedures did not improve survival. Today, many women receive breast-conserving surgery, usually with radiotherapy to the residual breast, instead of mastectomy, as it has been shown to be as effective as mastectomy in early disease. The relatively new skin-sparing mastectomy, often with immediate breast reconstruction, improves aesthetic outcomes and is oncologically safe. Nipple-sparing mastectomy is newer and used increasingly, with better acceptance by patients, and again appears to be oncologically safe. Breast reconstruction is an important adjunct to mastectomy, as it has a positive psychological impact on the patient, contributing to improved quality of life.
A. P. Tran
2013-07-01
Full Text Available The vertical profile of shallow unsaturated zone soil moisture plays a key role in many hydro-meteorological and agricultural applications. We propose a closed-loop data assimilation procedure based on the maximum likelihood ensemble filter algorithm to update the vertical soil moisture profile from time-lapse ground-penetrating radar (GPR) data. A hydrodynamic model is used to propagate the system state in time, and a radar electromagnetic model and petrophysical relationships to link the state variable with the observation data, which enables us to directly assimilate the GPR data. Instead of using the surface soil moisture only, the approach allows use of the information from the whole soil moisture profile for the assimilation. We validated our approach through a synthetic study. We constructed a synthetic soil column with a depth of 80 cm and analyzed the effects of soil type on the data assimilation by considering three soil types, namely loamy sand, silt, and clay. The assimilation of GPR data was performed to solve the problem of unknown initial conditions. The numerical soil moisture profiles generated by the Hydrus-1D model were used by the GPR model to produce the "observed" GPR data. The results show that the soil moisture profile obtained by assimilating the GPR data is much better than that of an open-loop forecast. Compared to the loamy sand and silt, the updated soil moisture profile of the clay soil converges to the true state much more slowly. Decreasing the update interval from 60 down to 10 h only slightly improves the effectiveness of the GPR data assimilation for the loamy sand, but improves it significantly for the clay soil. The proposed approach appears to be promising to improve real-time prediction of soil moisture profiles as well as to provide effective estimates of the unsaturated hydraulic properties at the field scale from time-lapse GPR measurements.
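The assimilation step can be illustrated with a basic stochastic ensemble Kalman update, a simpler relative of the maximum likelihood ensemble filter used here, assuming for the sketch a linear observation operator. The toy below updates hypothetical ten-layer moisture profiles from two near-surface observations.

```python
# Minimal stochastic ensemble Kalman update, a simpler relative of the
# maximum likelihood ensemble filter (MLEF), assuming a linear observation
# operator H. States are hypothetical soil moisture profiles.
import numpy as np

def enkf_update(ensemble, H, y, obs_var):
    """ensemble: (n_members, n_state); H: (n_obs, n_state); y: (n_obs,)."""
    n = ensemble.shape[0]
    xm = ensemble.mean(axis=0)
    A = ensemble - xm                             # ensemble anomalies
    P = A.T @ A / (n - 1)                         # ensemble covariance
    R = obs_var * np.eye(len(y))
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)  # Kalman gain
    rng = np.random.default_rng(0)
    updated = np.empty_like(ensemble)
    for i, x in enumerate(ensemble):
        y_pert = y + rng.normal(scale=np.sqrt(obs_var), size=len(y))
        updated[i] = x + K @ (y_pert - H @ x)     # assimilate observation
    return updated

# Toy: 10-layer moisture profile, observe only the top two layers.
rng = np.random.default_rng(1)
truth = np.linspace(0.35, 0.20, 10)               # wet surface, drier below
ens = truth + rng.normal(scale=0.05, size=(50, 10))
H = np.zeros((2, 10)); H[0, 0] = H[1, 1] = 1.0
y = truth[:2] + rng.normal(scale=0.01, size=2)
post = enkf_update(ens, H, y, obs_var=0.01 ** 2)
print("prior rmse:", np.sqrt(np.mean((ens.mean(0) - truth) ** 2)),
      "posterior rmse:", np.sqrt(np.mean((post.mean(0) - truth) ** 2)))
```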
Mitsunaga, B.; Mering, J. A.; Eagle, R.; Bricker, H. L.; Davila, N.; Trewman, S.; Burford, S.; Li, G.; Tripati, A. K.
2016-12-01
The climate of the Chinese Loess Plateau is affected by the East Asian Monsoon, an important water source for over a billion people. We are examining how temperature and hydrology on the Loess Plateau have changed since the Last Glacial Maximum (18,000-23,000 years before present) in response to insolation, deglaciation, and rising levels of greenhouse gases. Specifically, we are reconstructing temperature and meteoric δ18O through paired clumped and oxygen isotope analyses performed on carbonate minerals. Clumped isotope thermometry, which uses the frequency of 13C-18O bonds in carbonates, is a novel geochemical proxy that provides constraints on mineral formation temperatures and can be combined with carbonate δ18O to quantify meteoric δ18O. We have measured a suite of nodular loess concretions and gastropod shells from the modern as well as the Last Glacial Maximum from 15 sites across the Chinese Loess Plateau. These observations constrain spatial variations in temperature and precipitation, which in turn will provide key constraints on models that simulate changes in regional climates and monsoon intensity over the last 20,000 years.
A temporal extension to the parsimonious covering theory.
Wainer, J; Rezende, A de M
1997-07-01
In this paper, parsimonious covering theory is extended in such a way that temporal knowledge can be accommodated. In addition to causally associating possible manifestations with disorders, temporal relationships about duration and the time elapsed before a manifestation comes into existence can be represented by a graph. Precise definitions of the solution of a temporal diagnostic problem, as well as algorithms to compute the solutions, are provided. The medical suitability of the extended parsimonious covering theory is studied in the domain of food-borne disease.
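The non-temporal core of parsimonious covering is finding a small set of disorders whose causal associations cover all observed manifestations. Minimum cover is NP-hard, so the sketch below uses the standard greedy approximation on a hypothetical food-borne-disease example; the temporal extension would additionally check consistency of durations and onsets.

```python
# Core of (non-temporal) parsimonious covering: find a small set of
# disorders that explains all observed manifestations, via the standard
# greedy set-cover approximation. Hypothetical associations.
def greedy_cover(causes, observed):
    """causes: disorder -> set of manifestations it can explain."""
    uncovered = set(observed)
    explanation = []
    while uncovered:
        # Pick the disorder explaining the most still-uncovered findings.
        best = max(causes, key=lambda d: len(causes[d] & uncovered))
        if not causes[best] & uncovered:
            raise ValueError("some manifestations cannot be explained")
        explanation.append(best)
        uncovered -= causes[best]
    return explanation

causes = {
    "salmonellosis": {"fever", "diarrhea", "cramps"},
    "botulism": {"blurred_vision", "weakness"},
    "listeriosis": {"fever", "weakness"},
}
print(greedy_cover(causes, {"fever", "diarrhea", "blurred_vision"}))
```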
Ruiz Fernández, Jesús; Oliva, Marc; Fernández Menéndez, Susana del Carmen; García Hernández, Cristina; Menéndez Duarte, Rosa Ana; Pellitero Ondicol, Ramón; Pérez Alberti, Augusto; Schimmelpfennig, Irene
2017-04-01
CRONOANTAR brings together researchers from Spain, Portugal, France, and the United Kingdom with the objective of spatially and temporally reconstructing the deglaciation process at the two largest islands in the South Shetlands Archipelago (Maritime Antarctica) since the global Last Glacial Maximum. Glacier retreat in polar areas has major implications at local, regional, and even planetary scales. Global average sea level rise is the most obvious and socio-economically relevant, but there are others, such as the arrival of new fauna to deglaciated areas, plant colonisation, or permafrost formation and degradation. This project will study the ice-free areas of Byers and Hurd peninsulas (Livingston Island) and Fildes and Potter peninsulas (King George Island). The ice-cap glacier retreat chronology will be revealed by the use of cosmogenic isotopes (mainly 36Cl) on glacially originated sedimentary and erosive records. Cosmogenic dating will be complemented by other dating methods (14C and OSL), which will permit the validation of these methods in regions with cold-based glaciers. Given the geomorphological evidence and the obtained ages, a deglaciation calendar will be proposed, and we will use a GIS methodology to reconstruct the glacier extent and the ice thickness. The results emerging from this project will allow us to assess whether the high glacier retreat rates observed during the last decades were registered in the past, or whether they are, conversely, the consequence (and evidence) of Global Change in Antarctica. Acknowledgements: This work has been funded by the Spanish Ministry of Economy, Industry and Competitiveness (Reference: CTM2016-77878-P).
Le Brocq, A.; Bentley, M.; Hubbard, A.; Fogwill, C.; Sugden, D.
2008-12-01
A numerical ice sheet model constrained by recent field evidence is employed to reconstruct the Last Glacial Maximum (LGM) ice sheet in the Weddell Sea Embayment (WSE). Previous modelling attempts have predicted an extensive grounding line advance (to the continental shelf break) in the WSE, leading to a large equivalent sea level contribution for the sector. The sector has therefore been considered a potential source for a period of rapid sea level rise (MWP1a, a 20 m rise in ~500 years). Recent field evidence suggests that the elevation change in the Ellsworth Mountains at the LGM is lower than previously thought (~400 m). The numerical model applied in this paper suggests that a 400 m thicker ice sheet at the LGM does not support such an extensive grounding line advance. A range of ice sheet surfaces, resulting from different grounding line locations, leads to an equivalent sea level estimate of 1-3 m for this sector. It is therefore unlikely that the sector made a significant contribution to sea level rise since the LGM, and in particular to MWP1a. The reduced ice sheet size also has implications for the correction of GRACE data, from which Antarctic mass balance calculations have been derived.
Moossen, H. M.; Abell, R.; Quillmann, U.; Andrews, J. T.; Bendle, J. A.
2011-12-01
Holocene climate change is of significantly smaller amplitude than the Pleistocene glacial-interglacial cycles, but climatic variations have affected humans over at least the last 4000 years. Studying Holocene climate variations is important for disentangling climate change caused by anthropogenic influences from natural climate change. Sedimentary records from fjords afford the opportunity to study marine and terrestrial paleo-climatic changes and to link the two together. The typically high sediment accumulation rates of fjord environments facilitate resolution of rapid climate change (RCC) events. The fjords of Northwest Iceland are ideal for studying Holocene climate change as they receive warm water from the Irminger Current, but are also influenced by the East Greenland Current, which brings polar waters to the region (Jennings et al., 2011). In the Holocene, the Nordic Seas and the Arctic have been sensitive to climate change. The 8.2 ka event, a cool interval, highlights the sensitivity of that region. Recent climate variations such as the Little Ice Age have been detected in sedimentary records around Iceland (Sicre et al., 2008). We reconstruct Holocene marine and terrestrial climate change, producing high resolution (1 sample per 30 years) records from 10,700 cal a BP to 300 cal a BP using biomarkers. Alkenones, terrestrial leaf wax components, GDGTs and C/N ratios from a sediment core (MD99-2266) from the mouth of the Ísafjarðardjúp fjord were studied. For more information on the core and the evolution of the fjord during the Holocene, consult Quillmann et al. (2010). The average chain length (ACL) of terrestrial n-alkanes indicates changes in aridity, and the alkenone unsaturation index represents changes in sea surface temperature. These independent records exhibit similar trends over the studied time period. Our alkenone-derived SST record shows the Holocene Thermal Maximum, the Holocene Neoglaciation, as well as climate change associated with the Medieval Warm Period
Parsimonious extreme learning machine using recursive orthogonal least squares.
Wang, Ning; Er, Meng Joo; Han, Min
2014-10-01
Novel constructive and destructive parsimonious extreme learning machines (CP- and DP-ELM) are proposed in this paper. By virtue of the proposed ELMs, parsimonious structure and excellent generalization of multi-input multi-output single-hidden-layer feedforward networks (SLFNs) are obtained. The proposed ELMs are developed by an innovative decomposition of the recursive orthogonal least squares procedure into sequential partial orthogonalization (SPO). The salient features of the proposed approaches are as follows: 1) initial hidden nodes are randomly generated by the ELM methodology and recursively orthogonalized into an upper triangular matrix with dramatic reduction in matrix size; 2) the constructive SPO in the CP-ELM focuses on the partial matrix with the subcolumn of the selected regressor including nonzeros as the first column, while the destructive SPO in the DP-ELM operates on the partial matrix including elements determined by the removed regressor; 3) termination criteria for CP- and DP-ELM are simplified by the additional residual error reduction method; and 4) the output weights of the SLFN need not be solved in the model selection procedure, being derived from the final upper triangular equation by backward substitution. Both single- and multi-output real-world regression data sets are used to verify the effectiveness and superiority of the CP- and DP-ELM in terms of parsimonious architecture and generalization accuracy. Innovative applications to nonlinear time-series modeling demonstrate superior identification results.
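The core recipe described above (random ELM hidden layer, orthogonalized regressors, output weights recovered by backward substitution on an upper-triangular system) can be sketched compactly. Below is a minimal Python illustration of that idea, using a plain QR factorization in place of the paper's sequential partial orthogonalization; the function names and toy data are assumptions, and this is not the authors' CP-/DP-ELM implementation.

```python
# Minimal ELM-with-orthogonalization sketch (illustrative only).
import numpy as np
from scipy.linalg import solve_triangular

def elm_ols_fit(X, y, n_hidden=50, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))  # random input weights (ELM step)
    b = rng.normal(size=n_hidden)                # random biases
    H = np.tanh(X @ W + b)                       # hidden-layer output matrix
    Q, R = np.linalg.qr(H)                       # orthogonalize the regressors
    beta = solve_triangular(R, Q.T @ y)          # backward substitution on the
    return W, b, beta                            # upper-triangular system

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# toy usage on a 1D regression problem
X = np.linspace(-1, 1, 200).reshape(-1, 1)
y = np.sin(3 * X).ravel() + 0.05 * np.random.default_rng(1).normal(size=200)
W, b, beta = elm_ols_fit(X, y)
print(np.mean((elm_predict(X, W, b, beta) - y) ** 2))  # training MSE
```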
Barclay, R. S.; Wing, S. L.
2013-12-01
The Paleocene-Eocene Thermal Maximum (PETM) was a geologically brief interval of intense global warming 56 million years ago. It is arguably the best geological analog for a worst-case scenario of anthropogenic carbon emissions. The PETM is marked by a ~4-6‰ negative carbon isotope excursion (CIE) and extensive marine carbonate dissolution, which together are powerful evidence for a massive addition of carbon to the oceans and atmosphere. In spite of broad agreement that the PETM reflects a large carbon cycle perturbation, atmospheric concentrations of CO2 (pCO2) during the event are not well constrained. The goal of this study is to produce a high resolution reconstruction of pCO2 using stomatal frequency proxies (both stomatal index and stomatal density) before, during, and after the PETM. These proxies rely upon a genetically controlled mechanism whereby plants decrease the proportion of gas-exchange pores (stomata) in response to increased pCO2. Terrestrial sections in the Bighorn Basin, Wyoming, contain macrofossil plants with cuticle immediately bracketing the PETM, as well as dispersed plant cuticle from within the body of the CIE. These fossils allow for the first stomatal-based reconstruction of pCO2 near the Paleocene-Eocene boundary; we also use them to determine the relative timing of pCO2 change in relation to the CIE that defines the PETM. Preliminary results come from macrofossil specimens of Ginkgo adiantoides, collected from an ~200 ka interval prior to the onset of the CIE (~230-30 ka before), and just after the 'recovery interval' of the CIE. Stomatal index values decreased by 37% within an ~70 ka time interval at least 100 ka prior to the onset of the CIE. The decrease in stomatal index is interpreted as a significant increase in pCO2, and has a magnitude equivalent to the entire range of stomatal index adjustment observed in modern Ginkgo biloba during the anthropogenic CO2 rise of the last 150 years. The inferred CO2 increase prior to the
McCann, Jamie; Stuessy, Tod F.; Villaseñor, Jose L.; Weiss-Schneeweiss, Hanna
2016-01-01
Chromosome number change (polyploidy and dysploidy) plays an important role in plant diversification and speciation. Investigating chromosome number evolution commonly entails ancestral state reconstruction performed within a phylogenetic framework, which is, however, prone to uncertainty, whose effects on evolutionary inferences are insufficiently understood. Using the chromosomally diverse plant genus Melampodium (Asteraceae) as model group, we assess the impact of reconstruction method (maximum parsimony, maximum likelihood, Bayesian methods), branch length model (phylograms versus chronograms) and phylogenetic uncertainty (topological and branch length uncertainty) on the inference of chromosome number evolution. We also address the suitability of the maximum clade credibility (MCC) tree as single representative topology for chromosome number reconstruction. Each of the listed factors causes considerable incongruence among chromosome number reconstructions. Discrepancies between inferences on the MCC tree from those made by integrating over a set of trees are moderate for ancestral chromosome numbers, but severe for the difference of chromosome gains and losses, a measure of the directionality of dysploidy. Therefore, reliance on single trees, such as the MCC tree, is strongly discouraged and model averaging, taking both phylogenetic and model uncertainty into account, is recommended. For studying chromosome number evolution, dedicated models implemented in the program ChromEvol and ordered maximum parsimony may be most appropriate. Chromosome number evolution in Melampodium follows a pattern of bidirectional dysploidy (starting from x = 11 to x = 9 and x = 14, respectively) with no prevailing direction. PMID:27611687
Y. Wang
2013-06-01
Full Text Available Pollen records from large lakes have been used for quantitative palaeoclimate reconstruction, but the influences that lake size (as a result of species-specific variations in pollen dispersal patterns and taphonomy) has on these climatic signals have not previously been systematically investigated. We introduce the concept of pollen source-area to pollen-based climate calibration, using the climate history of the north-eastern Tibetan Plateau as our study area. We present a pollen data-set collected from large lakes in the arid to semi-arid region of Central Asia. The influences that lake size and the inferred pollen source-areas have on pollen compositions have been investigated through comparisons with pollen assemblages in neighbouring lakes of various sizes. Modern pollen samples collected from different parts of Lake Donggi Cona (in the north-eastern part of the Tibetan Plateau) reveal variations in pollen assemblages within this large lake, which are interpreted in terms of the species-specific dispersal and depositional patterns for different types of pollen, and in terms of fluvial input components. We have estimated the pollen source-area for each lake individually and used this information to infer modern climate data with which to then develop a modern calibration data-set, using both the Multivariate Regression Tree (MRT) and Weighted-Averaging Partial Least Squares (WA-PLS) approaches. Fossil pollen data from Lake Donggi Cona have been used to reconstruct the climate history of the north-eastern part of the Tibetan Plateau since the Last Glacial Maximum (LGM). The mean annual precipitation was quantitatively reconstructed using WA-PLS: extremely dry conditions are found to have dominated the LGM, with annual precipitation of around 100 mm, which is only 32% of present-day precipitation. A gradually increasing trend in moisture conditions during the Late Glacial is terminated by an abrupt reversion to a dry phase that lasts for about 1000 years
A Parsimonious and Universal Description of Turbulent Velocity Increments
Barndorff-Nielsen, O.E.; Blæsild, P.; Schmiegel, J.
This paper proposes a reformulation and extension of the concept of Extended Self-Similarity. In support of this new hypothesis, we discuss an analysis of the probability density function (pdf) of turbulent velocity increments based on the class of normal inverse Gaussian distributions. It allows for a parsimonious description of velocity increments that covers the whole range of amplitudes and all accessible scales, from the finest resolution up to the integral scale. The analysis is performed for three different data sets obtained from a wind tunnel experiment, a free-jet experiment and an atmospheric...
Parsimony analysis of endemicity of enchodontoid fishes from the Cenomanian
Da Silva, Hilda; Gallo, Valéria
2007-01-01
Parsimony analysis of endemicity was applied to analyze the distribution of enchodontoid fishes occurring strictly in the Cenomanian. The analysis was carried out using the computer program PAUP* 4.0b10, based on a data matrix built with 17 taxa and 12 areas. The tree was rooted on a hypothetical all-zero outgroup. Applying the exact branch-and-bound algorithm, 47 trees were obtained with 26 steps, a consistency index of 0.73, and a retention index of 0.50. ...
A Parsimonious Bootstrap Method to Model Natural Inflow Energy Series
Fernando Luiz Cyrino Oliveira
2014-01-01
Full Text Available The Brazilian energy generation and transmission system is quite peculiar in its dimension and characteristics. As such, it can be considered unique in the world. It is a high-dimension hydrothermal system with huge participation of hydro plants. Such strong dependency on hydrological regimes implies uncertainties related to energetic planning, requiring adequate modeling of the hydrological time series. This is carried out via stochastic simulations of monthly inflow series using the family of Periodic Autoregressive models, PAR(p), one for each period (month) of the year. In this paper we show the problems in fitting these models under the current system, particularly the identification of the autoregressive order p and the corresponding parameter estimation. We then propose a new approach to set both the model order and the parameter estimates of the PAR(p) models, using a nonparametric computational technique known as the bootstrap. This technique allows the estimation of reliable confidence intervals for the model parameters. The results obtained using the Parsimonious Bootstrap Method of Moments (PBMOM) produced not only more parsimonious model orders but also adherent stochastic scenarios and, in the long range, lead to a better use of water resources in energy operation planning.
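As a rough illustration of this kind of approach, the sketch below fits a single monthly AR(1) coefficient and bootstraps the residuals to obtain a confidence interval for it. It is a toy, not the PBMOM procedure; the function name, the fixed AR order and the gamma-distributed synthetic inflows are all assumptions.

```python
# Residual-bootstrap confidence interval for one monthly AR coefficient.
import numpy as np

def fit_ar1_for_month(series, month, n_boot=500, seed=0):
    # series: monthly inflows, shape (n_years, 12); AR(1) per calendar month
    y = series[1:, month]                           # inflow in `month`
    x = series[1:, month - 1] if month > 0 else series[:-1, 11]
    phi = np.polyfit(x, y, 1)[0]                    # least-squares AR coefficient
    resid = y - phi * x
    rng = np.random.default_rng(seed)
    boot = [np.polyfit(x, phi * x + rng.choice(resid, size=len(resid)), 1)[0]
            for _ in range(n_boot)]                 # refit on resampled residuals
    lo, hi = np.percentile(boot, [2.5, 97.5])       # 95% CI for phi
    return phi, (lo, hi)

flows = np.random.default_rng(1).gamma(3.0, 100.0, size=(40, 12))  # toy data
print(fit_ar1_for_month(flows, month=5))
```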
Exactly computing the parsimony scores on phylogenetic networks using dynamic programming.
Kannan, Lavanya; Wheeler, Ward C
2014-04-01
Scoring a given phylogenetic network is the first step required in searching for the best evolutionary framework for a given dataset. Using the principle of maximum parsimony, we can score phylogenetic networks based on the minimum number of state changes, across a subset of edges of the network, required for each character to realize the input states at the leaves of the network. Two such subsets of edges are interesting in light of studying evolutionary histories of datasets: (i) the set of all edges of the network, and (ii) the set of all edges of a spanning tree that minimizes the score. The problems of finding the parsimony scores under these two criteria define slightly different mathematical problems that are both NP-hard. In this article, we show that both problems, with scores generalized to adding substitution costs between states on the endpoints of the edges, can be solved exactly using dynamic programming. We show that our algorithms require O(m^p k) storage at each vertex (per character), where k is the number of states the character can take, p is the number of reticulate vertices in the network, m = k for the problem with edge set (i), and m = 2 for the problem with edge set (ii). This establishes an O(n m^p k^2) algorithm for both problems (n is the number of leaves in the network), which are extensions of Sankoff's algorithm for finding the parsimony scores for phylogenetic trees. We discuss improvements in the complexities and show that for phylogenetic networks whose underlying undirected graphs have disjoint cycles, the storage at each vertex can be reduced to O(mk), thus making the algorithm polynomial for this class of networks. We present some properties of the two approaches, guidance on choosing between the criteria, and ways to traverse the network space using either definition. We show that our methodology provides an effective means to
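For reference, the tree case that these network algorithms extend, Sankoff's dynamic program, is short enough to sketch. The following minimal Python version (with a hypothetical cost matrix and leaf states) computes the parsimony score of one character on a rooted binary tree.

```python
# Minimal Sankoff dynamic program on a rooted binary tree (tree case only).
import numpy as np

def sankoff(node, cost, k, leaf_state):
    """Return the k-vector of minimal costs for the subtree rooted at `node`.
    `node` is a leaf name or a (left, right) tuple; cost[i][j] is the price of
    changing state i -> j along an edge."""
    if not isinstance(node, tuple):            # leaf: cost 0 for observed state
        v = np.full(k, np.inf)
        v[leaf_state[node]] = 0.0
        return v
    left = sankoff(node[0], cost, k, leaf_state)
    right = sankoff(node[1], cost, k, leaf_state)
    # for each parent state i, pick the cheapest child state on each side
    return np.array([min(cost[i][j] + left[j] for j in range(k)) +
                     min(cost[i][j] + right[j] for j in range(k))
                     for i in range(k)])

cost = [[0, 1, 2], [1, 0, 1], [2, 1, 0]]       # e.g. an additive cost matrix
tree = (("a", "b"), ("c", "d"))
score = sankoff(tree, cost, 3, {"a": 0, "b": 1, "c": 2, "d": 1}).min()
print(score)                                   # parsimony score of this tree
```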
A Practical pedestrian approach to parsimonious regression with inaccurate inputs
Seppo Karrila
2014-04-01
Full Text Available A measurement result often dictates an interval containing the correct value. Interval data are also created by roundoff, truncation, and binning. We focus on such common interval uncertainty in data. Inaccuracy in model inputs is typically ignored in model fitting. We provide a practical approach for regression with inaccurate data: the mathematics is easy, and the linear programming formulations are simple to use, even in a spreadsheet. This self-contained elementary presentation introduces interval linear systems and requires only basic knowledge of algebra. Feature selection is automatic, but can be controlled to find only a few most relevant inputs, and joint feature selection is enabled for multiple modeled outputs. With more features than cases, a novel connection to compressed sensing emerges: robustness against interval errors-in-variables implies model parsimony, and the input inaccuracies determine the regularization term. A small numerical example highlights counterintuitive results and a dramatic difference from total least squares.
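A minimal sketch of this kind of formulation is below: it seeks a small-L1 weight vector whose fitted values fall inside every observation interval, solved with an off-the-shelf LP solver. The split-variable encoding and the toy data are assumptions for illustration, not the paper's exact formulation.

```python
# Interval-constrained regression as a linear program (illustrative sketch).
import numpy as np
from scipy.optimize import linprog

def interval_regression(X, lo, hi):
    n, d = X.shape
    # variables: w = wp - wn with wp, wn >= 0; objective sum(wp + wn) = ||w||_1
    c = np.ones(2 * d)
    A = np.vstack([np.hstack([X, -X]),        #  X w <= hi
                   np.hstack([-X, X])])       # -X w <= -lo
    b = np.concatenate([hi, -lo])
    res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * (2 * d))
    return res.x[:d] - res.x[d:] if res.success else None

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 5))
y = X @ np.array([1.0, 0, 0, -2.0, 0])
w = interval_regression(X, y - 0.3, y + 0.3)  # +/- 0.3 interval uncertainty
print(np.round(w, 2))                         # sparse; irrelevant inputs ~0
```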
High-Performance Phylogeny Reconstruction
Tiffani L. Williams
2004-11-10
Under the Alfred P. Sloan Fellowship in Computational Biology, I have been afforded the opportunity to study phylogenetics--one of the most important and exciting disciplines in computational biology. A phylogeny depicts an evolutionary relationship among a set of organisms (or taxa). Typically, a phylogeny is represented by a binary tree, where modern organisms are placed at the leaves and ancestral organisms occupy internal nodes, with the edges of the tree denoting evolutionary relationships. The task of phylogenetics is to infer this tree from observations upon present-day organisms. Reconstructing phylogenies is a major component of modern research programs in many areas of biology and medicine, but it is enormously expensive. The most commonly used techniques attempt to solve NP-hard problems such as maximum likelihood and maximum parsimony, typically by bounded searches through an exponentially-sized tree-space. For example, there are over 13 billion possible trees for 13 organisms. Phylogenetic heuristics that quickly and accurately analyze large amounts of data will revolutionize the biological field. This final report highlights my activities in phylogenetics during the two-year postdoctoral period at the University of New Mexico under Prof. Bernard Moret. Specifically, it describes my scientific, community and professional activities as an Alfred P. Sloan Postdoctoral Fellow in Computational Biology.
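The quoted figure is easy to verify: the number of distinct unrooted binary trees on n taxa is (2n-5)!! (a double factorial), which the snippet below evaluates for n = 13.

```python
# Size of unrooted binary tree-space: (2n-5)!! = 3 * 5 * ... * (2n-5).
def num_unrooted_trees(n):
    count = 1
    for k in range(3, 2 * n - 4, 2):
        count *= k
    return count

print(num_unrooted_trees(13))  # 13,749,310,575 -- "over 13 billion"
```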
Diodato, Nazzareno; Borrelli, Pasquale; Fiener, Peter; Bellocchi, Gianni; Romano, Nunzio
2017-01-01
An in-depth analysis of the interannual variability of storms is required to detect changes in the soil-erosive power of rainfall, which can also result in severe on-site and off-site damages. Evaluating long-term rainfall erosivity is a challenging task, mainly because of the paucity of high-resolution historical precipitation observations, which are generally reported at coarser temporal resolutions (e.g., monthly to annual totals). In this paper we suggest overcoming this limitation through an analysis of the long-term processes governing rainfall erosivity, with an application to datasets available for the central Ruhr region (Western Germany) for the period 1701-2011. Based on a parsimonious interpretation of seasonal rainfall-related processes (from spring to autumn), a model was derived using 5-min erosivity data from 10 stations covering the period 1937-2002, and then used to reconstruct a long series of annual rainfall erosivity values. Change-points in the evolution of rainfall erosivity are revealed around the 1760s and the 1920s, marking three sub-periods characterized by increasing mean values. The results indicate that the erosive hazard tends to increase as a consequence of an increased frequency of extreme precipitation events during the last decades, characterized by short rain events grouped into prolonged wet spells.
Fuad Julardžija
2014-04-01
Full Text Available Introduction: Magnetic resonance cholangiopancreatography (MRCP) is a method that allows noninvasive visualization of the pancreatobiliary tree and does not require contrast application. It is a modern method based on heavily T2-weighted imaging (hydrography), which uses bile and pancreatic secretions as a natural contrast medium. Certain weaknesses in the quality of demonstration of the pancreatobiliary tract can be observed in addition to its good characteristics. Our aim was to compare the 3D Maximum Intensity Projection (MIP) reconstruction and the 2D T2 Half-Fourier Acquisition Single-Shot Turbo Spin-Echo (HASTE) sequence in magnetic resonance cholangiopancreatography. Methods: During a period of one year, 51 patients underwent MRCP on a 3T "Trio" system. Patients of different sex and age structure were included, both outpatient and hospitalized. The 3D MIP reconstruction and the 2D T2 HASTE sequence were used according to standard scanning protocols. Results: There were 45.1% (n=23) male and 54.9% (n=28) female patients, with an age range from 17 to 81 years. The 2D T2 HASTE sequence was more susceptible to the presence of respiratory artifacts in 64% of patients, compared to the 3D MIP reconstruction, with standard error (0.09), result significance indication (p=0.129) and confidence interval (0.46 to 0.81). The 2D T2 HASTE sequence is more sensitive and superior for pancreatic duct demonstration compared to the 3D MIP reconstruction, with standard error (0.07), result significance indication (p=0.01) and confidence interval (0.59 to 0.87). Conclusion: In order to make a qualitative demonstration and analysis of the hepatobiliary and pancreatic system on MR, both the 2D T2 HASTE sequence in the transversal plane and the 3D MIP reconstruction are required.
Integrating Leadership Models Toward a More Comprehensive and Parsimonious Model
Miswanto Miswanti
2016-06-01
Full Text Available ABSTRACT Through the leadership model offered by Locke et al. (1991), we can say that whether the vision of leaders in an organization is good depends on whether the leaders' motives and traits, knowledge, skills, and abilities are good. In turn, how well leaders implement the vision depends on their motives and traits, knowledge, skills, abilities, and the vision itself. Strategic Leadership by Davies (1991) states that implementing the vision through strategic leadership means much more than what Locke et al. describe in the fourth stage of leadership. Thus, the implementation-of-the-vision aspect in Locke et al. (1991) is incomplete according to Davies (1991). With the above considerations, this article attempts to combine the leadership model of Locke et al. with the strategic leadership of Davies. With this modification, an improved leadership model that is more comprehensive and parsimonious is expected.
SEAPODYM-LTL: a parsimonious zooplankton dynamic biomass model
Conchon, Anna; Lehodey, Patrick; Gehlen, Marion; Titaud, Olivier; Senina, Inna; Séférian, Roland
2017-04-01
Mesozooplankton organisms are of critical importance for the understanding of early life history of most fish stocks, as well as the nutrient cycles in the ocean. Ongoing climate change and the need for improved approaches to the management of living marine resources has driven recent advances in zooplankton modelling. The classical modeling approach tends to describe the whole biogeochemical and plankton cycle with increasing complexity. We propose here a different and parsimonious zooplankton dynamic biomass model (SEAPODYM-LTL) that is cost efficient and can be advantageously coupled with primary production estimated either from satellite derived ocean color data or biogeochemical models. In addition, the adjoint code of the model is developed allowing a robust optimization approach for estimating the few parameters of the model. In this study, we run the first optimization experiments using a global database of climatological zooplankton biomass data and we make a comparative analysis to assess the importance of resolution and primary production inputs on model fit to observations. We also compare SEAPODYM-LTL outputs to those produced by a more complex biogeochemical model (PISCES) but sharing the same physical forcings.
R. Schneider
2013-11-01
δ13Catm level in the Penultimate (~140,000 yr BP) and Last Glacial Maximum (~22,000 yr BP), which can be explained by either (i) changes in the isotopic composition or (ii) intensity of the carbon input fluxes to the combined ocean/atmosphere carbon reservoir, or (iii) by long-term peat buildup. Our isotopic data suggest that the carbon cycle evolution along Termination II and the subsequent interglacial was controlled by essentially the same processes as during the last 24,000 yr, but with different phasing and magnitudes. Furthermore, a 5,000 yr lag in the CO2 decline relative to EDC temperatures is confirmed during the glacial inception at the end of MIS5.5 (120,000 yr BP). Based on our isotopic data, this lag can be explained by terrestrial carbon release and carbonate compensation.
Staines-Urías, Francisca; Seidenkrantz, Marit-Solveig; Fischel, Andrea; Kuijpers, Antoon
2017-04-01
The elemental composition of sediments from gravity core HOLOVAR11-03 provides a ca. 40 ka record of past climate variability in the Strait of Yucatan, between the Caribbean Sea and the Gulf of Mexico, a region where precipitation variability is determined by the seasonal position of the Intertropical Convergence Zone (ITCZ). Within this region, sea level pressure decreases and rainfall increases as the ITCZ moves north of the equator in response to increased solar insolation in the Northern Hemisphere during boreal summer. In contrast, as the ITCZ retracts southward towards the equator during boreal winter, rainfall diminishes and the regional sea level pressure gradient strengthens. On interannual, multidecadal and millennial timescales, fluctuations in the average latitudinal position of the ITCZ in response to insolation forcing modulate the intensity and duration of the seasonal regimes, determining average regional precipitation and, ultimately, the elemental composition of the marine sedimentary record. Regionally, higher titanium and iron content in marine sediments reflects greater terrigenous input from inland runoff, indicating greater precipitation, hence a more northerly position of the ITCZ. Correspondingly, Ti and Fe concentration data were used to reconstruct regional rainfall variability since the Last Glacial Maximum (LGM, ~24 cal ka BP). The HOLOVAR11-03 age model (based on 4 AMS 14C dates obtained from multi-specific samples of planktic foraminifera) shows stable sedimentation rates in the area throughout the cored period. Nonetheless, higher terrestrial mineral input is observed from the LGM and all through the last glacial termination (24 to 12 cal ka BP), indicating a period of increased precipitation. In contrast, lower Ti and Fe values are typical for the period between 12 and 8 cal ka BP, indicating reduced precipitation. A positive trend characterizes the following interval, showing a return to wetter conditions lasting until 5 cal ka BP
On simulated annealing phase transitions in phylogeny reconstruction.
Strobl, Maximilian A R; Barker, Daniel
2016-08-01
Phylogeny reconstruction with global criteria is NP-complete or NP-hard, hence in general requires a heuristic search. We investigate the powerful, physically inspired, general-purpose heuristic simulated annealing, applied to phylogeny reconstruction. Simulated annealing mimics the physical process of annealing, where a liquid is gently cooled to form a crystal. During the search, periods of elevated specific heat occur, analogous to physical phase transitions. These simulated annealing phase transitions play a crucial role in the outcome of the search. Nevertheless, they have received comparatively little attention, whether for phylogeny or for other optimisation problems. We analyse simulated annealing phase transitions during searches for the optimal phylogenetic tree for 34 real-world multiple alignments. In the same way in which melting temperatures differ between materials, we observe distinct specific heat profiles for each input file. We propose that this reflects differences in the search landscape and can serve as a measure of problem difficulty and of the suitability of the algorithm's parameters. We discuss applications in algorithmic optimisation and as a diagnostic to assess parameterisation before computationally costly, large phylogeny reconstructions are launched. Whilst the focus here lies on phylogeny reconstruction under maximum parsimony, it is plausible that our results are more widely applicable to optimisation procedures in science and industry.
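As a sketch of the quantity being monitored, the toy annealer below records an estimate of the specific heat, the variance of the sampled energies divided by the squared temperature, at each step of a geometric cooling schedule. The objective function and all parameters are hypothetical; this is not the phylogenetic search used in the paper.

```python
# Simulated annealing with a specific-heat profile, C(T) = Var(E) / T^2.
import random, math

def anneal_with_specific_heat(energy, neighbour, x0, T0=10.0, alpha=0.95,
                              steps_per_T=500, T_min=0.01):
    x, T, profile = x0, T0, []
    while T > T_min:
        energies = []
        for _ in range(steps_per_T):
            y = neighbour(x)
            dE = energy(y) - energy(x)
            if dE <= 0 or random.random() < math.exp(-dE / T):
                x = y                          # Metropolis acceptance rule
            energies.append(energy(x))
        mean = sum(energies) / len(energies)
        var = sum((e - mean) ** 2 for e in energies) / len(energies)
        profile.append((T, var / T ** 2))      # specific-heat estimate at T
        T *= alpha                             # geometric cooling schedule
    return x, profile                          # peaks ~ "phase transitions"

best, profile = anneal_with_specific_heat(
    energy=lambda x: (x - 3) ** 2 * (1 + math.sin(5 * x) ** 2),
    neighbour=lambda x: x + random.uniform(-0.5, 0.5), x0=0.0)
```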
Knight, Jasper
2016-10-01
Southwest Ireland is a critical location to examine the sensitivity of late Pleistocene glaciers to climate variability in the northeast Atlantic, because of its proximal location to Atlantic moisture sources and the presence of high mountains in the Macgillycuddy's Reeks range which acted as a focus for glacierization (Harrison et al., 2010). The extent of Last Glacial Maximum (LGM) glaciers in southwest Ireland and their link to the wider British-Irish Ice Sheet (BIIS), however, is under debate. Some models suggest that during the LGM the region was wholly inundated by ice from the larger BIIS (Warren, 1992; Sejrup et al., 2005), whereas others suggest north-flowing ice from the semi-independent Cork-Kerry Ice Cap (CKIC) was diverted around mountain peaks, resulting in exposed nunataks in the Macgillycuddy's Reeks (Anderson et al., 2001; Ballantyne et al., 2011). Cirque glaciers may also have been present on mountain slopes above this regional ice surface (Warren, 1979; Rea et al., 2004). More recently, investigations have focused on the extent and age of cirque glaciers in the Reeks, based on the mapped distribution of end moraines (Warren, 1979; Harrison et al., 2010), and on cosmogenic dates on boulders on these moraines (Harrison et al., 2010) and on associated scoured bedrock surfaces across the region (Ballantyne et al., 2011). The recent paper by Barth et al. (2016) contributes to this debate by providing nine cosmogenic 10Be ages on boulders from two moraines from one small (~1.7 km2) and low (373 m elevation of the cirque floor) cirque basin at Alohart (52°00′50″N, 9°40′30″W) within the Reeks range. These dates are welcomed because they add to the lengthening list of age constraints on geomorphic activity in the region that spans the time period from the LGM to early Holocene.
Gallach, Xavi; Ogier, Christophe; Ravanel, Ludovic; Deline, Philip; Carcaillet, Julien
2017-04-01
Rockfalls and rock avalanches are active processes in the Mont Blanc massif, putting infrastructure and alpinists at risk. Thanks to a network of observers (hut keepers, mountain guides, alpinists) set up in 2007, present-day rockfalls are well surveyed and documented. Rockfall frequency over the past 150 years has been studied by comparison of historical photographs, showing that it strongly increased during the three last decades, especially during hot periods like the summers of 2003 and 2015, due to permafrost degradation driven by climate change. In order to decipher the possible relationship between rockfall occurrence and the warmest periods of the Lateglacial and the Holocene, we have started to study the morphodynamics of selected high-elevation (>3000 m a.s.l.) rockwalls of the massif on a long timescale. Contrary to low-altitude, deglaciated sites, where the study of large rockfall deposits allows quantification of the frequency and magnitude of the process, rockfalls that detached from high-elevation rockwalls are no longer noticeable, as the debris were absorbed and evacuated by the glaciers. Therefore, our study focuses on the rockfall scars. Their 10Be dating gives us the rock surface exposure age, from the present to far beyond the Last Glacial Maximum, interpreted as the rockfall age. TCN dating of rockfalls was carried out at the Aiguille du Midi in 2007 (Boehlert et al., 2008) and at three other sites in the Mont Blanc massif in 2011 (Gallach et al., submitted). Here we present a new data set of rockfall dating carried out in 2015 that improves on the 2007 and 2011 data. Furthermore, a relationship between the colour of the Mont Blanc granite and its exposure age has been shown: a fresh rock surface is light grey (e.g. in recent rockfall scars), whereas a weathered rock surface ranges from grey to orange/red: the redder a rock surface, the older its age. Here, reflectance spectroscopy is used to quantify the granite surface colour. Böhlert, R., Gruber, S., Egli, M., Maisch, M
Barth, Aaron M.; Clark, Peter U.; Clark, Jorie; McCabe, A. Marshall; Caffee, Marc
2016-10-01
We concluded that our new 10Be chronology records the onset of retreat of a cirque glacier within the Alohart basin of southwestern Ireland at 24.5 ± 1.4 ka, placing limiting constraints on reconstructions of the Irish Ice Sheet (IIS) and Kerry-Cork Ice Cap (KCIC) during the Last Glacial Maximum (LGM) (Barth et al., 2016). Knight (2016) raises two main arguments against our interpretation: (1) the glacier in the Alohart basin was not a cirque glacier, but instead a southern-sourced ice tongue from the KCIC overtopping the MacGillycuddy's Reeks, and (2) the boulders we sampled for 10Be exposure dating were derived from supraglacial rockfall rather than transported subglacially, experienced nuclide inheritance, and are thus too old. In the following, we address both of these arguments.
Kinkhabwala, Ali
2013-01-01
The most fundamental problem in statistics is the inference of an unknown probability distribution from a finite number of samples. For a specific observed data set, answers to the following questions would be desirable: (1) Estimation: Which candidate distribution provides the best fit to the observed data? (2) Goodness-of-fit: How concordant is this distribution with the observed data? (3) Uncertainty: How concordant are other candidate distributions with the observed data? A simple unified approach for univariate data that addresses these traditionally distinct statistical notions is presented, called "maximum fidelity". Maximum fidelity is a strict frequentist approach that is fundamentally based on model concordance with the observed data. The fidelity statistic is a general information measure based on the coordinate-independent cumulative distribution and critical yet previously neglected symmetry considerations. An approximation for the null distribution of the fidelity allows its direct conversi...
Parsimonious Hydrologic and Nitrate Response Models For Silver Springs, Florida
Klammler, Harald; Yaquian-Luna, Jose Antonio; Jawitz, James W.; Annable, Michael D.; Hatfield, Kirk
2014-05-01
Silver Springs, with an approximate discharge of 25 m3/s, is one of Florida's first-magnitude springs and among the largest springs worldwide. Its 2500-km2 springshed overlies the mostly unconfined Upper Floridan Aquifer. The aquifer is approximately 100 m thick and predominantly consists of porous, fractured and cavernous limestone, which leads to excellent surface drainage properties (no major stream network other than the Silver Springs run) and complex groundwater flow patterns through both rock matrix and fast conduits. Over the past few decades, discharge from Silver Springs has been observed to slowly but continuously decline, while nitrate concentrations in the spring water have increased enormously, from a background level of 0.05 mg/l to over 1 mg/l. In combination with concurrent increases in algae growth and turbidity, for example, and despite an otherwise relatively stable water quality, this has given rise to concerns about the ecological equilibrium in and near the spring run as well as possible impacts on tourism. The purpose of the present work is to elaborate parsimonious lumped-parameter models that may be used by resource managers for evaluating the springshed's hydrologic and nitrate transport responses. Instead of attempting to explicitly consider the complex hydrogeologic features of the aquifer in a typically numerical and/or stochastic approach, we use a transfer function approach wherein input signals (i.e., time series of groundwater recharge and nitrate loading) are transformed into output signals (i.e., time series of spring discharge and spring nitrate concentrations) by some linear and time-invariant law. The dynamic response types and parameters are inferred from comparing input and output time series in the frequency domain (e.g., after Fourier transformation). Results are converted into impulse (or step) response functions, which describe at what time and to what magnitude a unitary change in input manifests at the output. For the
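A minimal sketch of the frequency-domain step described above: with synthetic input and output series, the transfer function is the ratio of their spectra, and an inverse FFT returns the impulse-response function. The exponential-reservoir response and all numbers below are assumptions for illustration, not the Silver Springs calibration.

```python
# Frequency-domain transfer-function estimation on synthetic series.
import numpy as np

def impulse_response(input_series, output_series, eps=1e-8):
    U = np.fft.rfft(input_series)               # e.g. recharge or nitrate loading
    Y = np.fft.rfft(output_series)              # e.g. discharge or concentration
    H = Y / (U + eps)                           # transfer function (freq. domain)
    return np.fft.irfft(H, n=len(input_series))

rng = np.random.default_rng(2)
recharge = rng.gamma(2.0, 1.0, 512)
true_h = np.exp(-np.arange(512) / 20.0) / 20.0  # assumed exponential reservoir
discharge = np.fft.irfft(np.fft.rfft(recharge) * np.fft.rfft(true_h), n=512)
h_est = impulse_response(recharge, discharge)
print(np.max(np.abs(h_est - true_h)) < 1e-4)    # impulse response recovered
```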
A large version of the small parsimony problem
Fredslund, Jakob; Hein, Jotun; Scharling, Tejs
2003-01-01
Given a multiple alignment over k sequences, an evolutionary tree relating the sequences, and a subadditive gap penalty function (e.g. an affine function), we reconstruct the internal nodes of the tree optimally: we find the optimal explanation in terms of indels of the observed gaps and find ... case time. E.g. for a tree with nine leaves and a random alignment of length 10,000 with 60% gaps, the running time is on average around 45 seconds. For a real alignment of length 9868 of nine HIV-1 sequences, the running time is less than one second.
Maximum information photoelectron metrology
Hockett, P; Wollenhaupt, M; Baumert, T
2015-01-01
Photoelectron interferograms, manifested in photoelectron angular distributions (PADs), are a high-information, coherent observable. In order to obtain the maximum information from angle-resolved photoionization experiments it is desirable to record the full, 3D, photoelectron momentum distribution. Here we apply tomographic reconstruction techniques to obtain such 3D distributions from multiphoton ionization of potassium atoms, and fully analyse the energy and angular content of the 3D data. The PADs obtained as a function of energy indicate good agreement with previous 2D data and detailed analysis [Hockett et al., Phys. Rev. Lett. 112, 223001 (2014)] over the main spectral features, but also indicate unexpected symmetry-breaking in certain regions of momentum space, thus revealing additional continuum interferences which cannot otherwise be observed. These observations reflect the presence of additional ionization pathways and, most generally, illustrate the power of maximum information measurements of th...
Reconstructing ancestral ranges in historical biogeography: properties and prospects
Kristin S. LAMM; Benjamin D. REDELINGS
2009-01-01
Recent years have witnessed a proliferation of quantitative methods for biogeographic inference. In particular, novel parametric approaches represent exciting new opportunities for the study of range evolution. Here, we review a selection of current methods for biogeographic analysis and discuss their respective properties. These methods include generalized parsimony approaches, weighted ancestral area analysis, dispersal-vicariance analysis, the dispersal-extinction-cladogenesis model and other maximum likelihood approaches, and Bayesian stochastic mapping of ancestral ranges, including a novel approach to inferring range evolution in the context of island biogeography. Some of these methods were developed specifically for problems of ancestral range reconstruction, whereas others were designed for more general problems of character state reconstruction and subsequently applied to the study of ancestral ranges. Methods for reconstructing ancestral history on a phylogenetic tree differ not only in the types of ancestral range states that are allowed, but also in the various historical events that may change the ancestral ranges. We explore how the form of allowed ancestral ranges and allowed transitions can both affect the outcome of ancestral range estimation. Finally, we mention some promising avenues for future work in the development of model-based approaches to biogeographic analysis.
Callot, Laurent; Kristensen, Johannes Tang
the monetary policy response to inflation and business cycle fluctuations in the US by estimating a parsimoniously time-varying parameter Taylor rule. We document substantial changes in the policy response of the Fed in the 1970s and 1980s, and since 2007, but also document the stability of this response...
Wei Wu; James Clark; James Vose
2010-01-01
Hierarchical Bayesian (HB) modeling allows for multiple sources of uncertainty by factoring complex relationships into conditional distributions that can be used to draw inference and make predictions. We applied an HB model to estimate the parameters and state variables of a parsimonious hydrological model, GR4J, by coherently assimilating the uncertainties from the...
Frederick H. Sheldon
2013-03-01
Full Text Available Insertion/deletion (indel) mutations, which are represented by gaps in multiple sequence alignments, have been used to examine phylogenetic hypotheses for some time. However, most analyses combine gap data with the nucleotide sequences in which they are embedded, probably because most phylogenetic datasets include few gap characters. Here, we report analyses of 12,030 gap characters from an alignment of avian nuclear genes using maximum parsimony (MP) and a simple maximum likelihood (ML) framework. Both trees were similar, and they exhibited almost all of the strongly supported relationships in the nucleotide tree, although neither gap tree supported many relationships that have proven difficult to recover in previous studies. Moreover, independent lines of evidence typically corroborated the nucleotide topology instead of the gap topology when they disagreed, although the number of conflicting nodes with high bootstrap support was limited. Filtering to remove short indels did not substantially reduce homoplasy or reduce conflict. Combined analyses of nucleotides and gaps resulted in the nucleotide topology, but with increased support, suggesting that gap data may prove most useful when analyzed in combination with nucleotide substitutions.
Hu, Kaifeng; Ellinger, James J; Chylla, Roger A; Markley, John L
2011-12-15
Time-zero 2D (13)C HSQC (HSQC(0)) spectroscopy offers advantages over traditional 2D NMR for quantitative analysis of solutions containing a mixture of compounds because the signal intensities are directly proportional to the concentrations of the constituents. The HSQC(0) spectrum is derived from a series of spectra collected with increasing repetition times within the basic HSQC block by extrapolating the repetition time to zero. Here we present an alternative approach to data collection, gradient-selective time-zero (1)H-(13)C HSQC(0) in combination with fast maximum likelihood reconstruction (FMLR) data analysis and the use of two concentration references for absolute concentration determination. Gradient-selective data acquisition results in cleaner spectra, and NMR data can be acquired in both constant-time and non-constant-time mode. Semiautomatic data analysis is supported by the FMLR approach, which is used to deconvolute the spectra and extract peak volumes. The peak volumes obtained from this analysis are converted to absolute concentrations by reference to the peak volumes of two internal reference compounds of known concentration: DSS (4,4-dimethyl-4-silapentane-1-sulfonic acid) at the low concentration limit (which also serves as chemical shift reference) and MES (2-(N-morpholino)ethanesulfonic acid) at the high concentration limit. The linear relationship between peak volumes and concentration is better defined with two references than with one, and the measured absolute concentrations of individual compounds in the mixture are more accurate. We compare results from semiautomated gsHSQC(0) with those obtained by the original manual phase-cycled HSQC(0) approach. The new approach is suitable for automatic metabolite profiling by simultaneous quantification of multiple metabolites in a complex mixture.
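The extrapolation step at the heart of HSQC(0) can be illustrated in a few lines: peak volumes from successive repetitions decay geometrically, so a log-linear fit extrapolated to a repetition index of zero recovers the time-zero volume, which is proportional to concentration. The volumes below are made-up numbers, not data from the paper.

```python
# Time-zero extrapolation of repetition-dependent peak volumes (sketch).
import numpy as np

def hsqc0_volume(volumes):
    n = np.arange(1, len(volumes) + 1)         # repetition index 1, 2, 3, ...
    slope, intercept = np.polyfit(n, np.log(volumes), 1)
    return np.exp(intercept)                   # extrapolated volume at n = 0

v_dss = hsqc0_volume([0.80, 0.64, 0.512])      # reference of known concentration
v_x = hsqc0_volume([1.60, 1.28, 1.024])        # analyte peak
print(v_x / v_dss)                             # 2.0 -> twice the DSS concentration
```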
张洪艳; 沈焕锋; 张良培; 李平湘; 袁强强
2011-01-01
In this paper, a new joint Maximum A Posteriori (MAP) formulation was proposed to integrate image registration into blind image Super-Resolution (SR) reconstruction to reduce image registration errors. The formulation was built upon the MAP framework, which judiciously combines image registration, blur identification and SR. A cyclic coordinate descent optimization procedure was developed to solve the MAP formulation, in which the registration parameters, blurring function and High Resolution (HR) image are each estimated in an alternating manner, given the other two. The experimental results indicate that the proposed algorithm is considerably effective in terms of both quantitative measurement and visual evaluation.
Ancestral state reconstruction for Dendroctonus bark beetles: evolution of a tree killer.
Reeve, John D; Anderson, Frank E; Kelley, Scott T
2012-06-01
While most bark beetles attack only dead or weakened trees, many species in the genus Dendroctonus have the ability to kill healthy conifers through mass attack of the host tree, and can exhibit devastating outbreaks. Other species in this group are able to successfully colonize trees in small numbers without killing the host. We reconstruct the evolution of these ecological and life history traits, first classifying the extant Dendroctonus species by attack type (mass or few), outbreaks (yes or no), host genus (Pinus and others), location of attacks on the tree (bole, base, etc.), whether the host is killed (yes or no), and if the larvae are gregarious or have individual galleries (yes or no). We then estimated a molecular phylogeny for a data set of cytochrome oxidase I sequences sampled from nearly all Dendroctonus species, and used this phylogeny to reconstruct the ancestral state at various nodes on the tree, employing maximum parsimony, maximum likelihood, and Bayesian methods. Our reconstructions suggest that extant Dendroctonus species likely evolved from an ancestor that killed host pines through mass attack of the bole, had individual larvae, and exhibited outbreaks. The ability to colonize a host tree in small numbers (as well as gregarious larvae and attacks at the tree base) apparently evolved later, possibly as two separate events in different clades. It is likely that tree mortality and outbreaks have been continuing features of the interaction between conifers and Dendroctonus bark beetles.
Maximum-likelihood method in quantum estimation
Paris, M G A; Sacchi, M F
2001-01-01
The maximum-likelihood method for quantum estimation is reviewed and applied to the reconstruction of the density matrix of spin and radiation, as well as to the determination of several parameters of interest in quantum optics.
Salas-Leiva, Dayana E; Meerow, Alan W; Calonje, Michael; Griffith, M Patrick; Francisco-Ortega, Javier; Nakamura, Kyoko; Stevenson, Dennis W; Lewis, Carl E; Namoff, Sandra
2013-11-01
Despite a recent new classification, a stable phylogeny for the cycads has been elusive, particularly regarding resolution of Bowenia, Stangeria and Dioon. In this study, five single-copy nuclear genes (SCNGs) are applied to the phylogeny of the order Cycadales. The specific aim is to evaluate several gene tree-species tree reconciliation approaches for developing an accurate phylogeny of the order, to contrast them with concatenated parsimony analysis and to resolve the erstwhile problematic phylogenetic position of these three genera. DNA sequences of five SCNGs were obtained for 20 cycad species representing all ten genera of Cycadales. These were analysed with parsimony, maximum likelihood (ML) and three Bayesian methods of gene tree-species tree reconciliation, using Cycas as the outgroup. A calibrated date estimation was developed with Bayesian methods, and biogeographic analysis was also conducted. Concatenated parsimony, ML and three species tree inference methods resolve exactly the same tree topology with high support at most nodes. Dioon and Bowenia are the first and second branches of Cycadales after Cycas, respectively, followed by an encephalartoid clade (Macrozamia-Lepidozamia-Encephalartos), which is sister to a zamioid clade, of which Ceratozamia is the first branch, and in which Stangeria is sister to Microcycas and Zamia. A single, well-supported phylogenetic hypothesis of the generic relationships of the Cycadales is presented. However, massive extinction events inferred from the fossil record that eliminated broader ancestral distributions within Zamiaceae compromise accurate optimization of ancestral biogeographical areas for that hypothesis. While major lineages of Cycadales are ancient, crown ages of all modern genera are no older than 12 million years, supporting a recent hypothesis of mostly Miocene radiations. This phylogeny can contribute to an accurate infrafamilial classification of Zamiaceae.
吴普; 王丽丽; 邵雪梅
2008-01-01
Having analyzed the tree-ring width and maximum latewood density of Pinus densata from west Sichuan, we obtained different climate information from the tree-ring width and maximum latewood density chronologies. The growth of tree-ring width responded principally to precipitation in the current May, which might be influenced by the activity of the southwest monsoon, whereas the maximum latewood density reflected summer temperature (June-September). According to the correlation relationship, a transfer function was used to reconstruct summer temperature for the study area. The explained variance of the reconstruction is 51% (F=52.099, p<0.0001). In the reconstruction series, the climate was relatively cold before the 1930s and relatively warm from 1930 to 1960; this trend is in accordance with the cold-warm periods of the last 100 years in west Sichuan. Compared with Chengdu, the warming break point in west Sichuan occurs 3 years earlier, indicating that the Tibetan Plateau is more sensitive to temperature change. There was an evident summer warming signal after 1983. Although the 100-year running average of summer temperature in the 1990s was the maximum, the running average of the early 1990s was below the average line with cold summers, while summer drought occurred in the late 1990s.
Equally parsimonious pathways through an RNA sequence space are not equally likely
Lee, Y. H.; DSouza, L. M.; Fox, G. E.
1997-01-01
An experimental system for determining the potential ability of sequences resembling 5S ribosomal RNA (rRNA) to perform as functional 5S rRNAs in vivo in the Escherichia coli cellular environment was devised previously. Presumably, the only 5S rRNA sequences that would have been fixed by ancestral populations are ones that were functionally valid, and hence the actual historical paths taken through RNA sequence space during 5S rRNA evolution would have most likely utilized valid sequences. Herein, we examine the potential validity of all sequence intermediates along alternative equally parsimonious trajectories through RNA sequence space which connect two pairs of sequences that had previously been shown to behave as valid 5S rRNAs in E. coli. The first trajectory requires a total of four changes. The 14 sequence intermediates provide 24 apparently equally parsimonious paths by which the transition could occur. The second trajectory involves three changes, six intermediate sequences, and six potentially equally parsimonious paths. In total, only eight of the 20 sequence intermediates were found to be clearly invalid. As a consequence of the position of these invalid intermediates in the sequence space, seven of the 30 possible paths consisted of exclusively valid sequences. In several cases, the apparent validity/invalidity of the intermediate sequences could not be anticipated on the basis of current knowledge of the 5S rRNA structure. This suggests that the interdependencies in RNA sequence space may be more complex than currently appreciated. If ancestral sequences predicted by parsimony are to be regarded as actual historical sequences, then the present results would suggest that they should also satisfy a validity requirement and that, in at least limited cases, this conjecture can be tested experimentally.
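The combinatorics above (four changes giving 24 orderings, three giving six) invite a small sketch: enumerate the orderings of the required substitutions and keep only those whose every intermediate passes a validity test. The toy sequences and the validity predicate below are hypothetical stand-ins for the in vivo assay.

```python
# Enumerate equally parsimonious paths and prune invalid intermediates.
from itertools import permutations

def valid_paths(start, mutations, is_valid):
    """mutations: dict position -> new base; is_valid: predicate on sequences."""
    paths = []
    for order in permutations(mutations):
        seq, ok = list(start), True
        for pos in order:
            seq[pos] = mutations[pos]
            if not is_valid("".join(seq)):   # invalid intermediate: prune path
                ok = False
                break
        if ok:
            paths.append(order)
    return paths

# toy example: sequences containing "TT" are deemed invalid
paths = valid_paths("ATAC", {1: "C", 2: "T"}, is_valid=lambda s: "TT" not in s)
print(len(paths), "of 2 orderings pass through only valid intermediates")  # 1
```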
Kuss, D.J.; Shorter, G. W.; Rooij, A.J. van; Griffiths, M.D.; Schoenmakers, T.M.
2014-01-01
Internet usage has grown exponentially over the last decade. Research indicates that excessive Internet use can lead to symptoms associated with addiction. To date, assessment of potential Internet addiction has varied regarding the populations studied and the instruments used, making reliable prevalence estimations difficult. To overcome these problems, a preliminary study was conducted testing a parsimonious Internet addiction components model based on Griffiths' addiction components (Journal ...
Casabianca, Jodi M.; Lewis, Charles
2015-01-01
Loglinear smoothing (LLS) estimates the latent trait distribution while making fewer assumptions about its form and maintaining parsimony, thus leading to more precise item response theory (IRT) item parameter estimates than standard marginal maximum likelihood (MML). This article provides the expectation-maximization algorithm for MML estimation…
Liran Carmel
2010-01-01
Full Text Available Evolutionary binary characters are features of species or genes, indicating the absence (value zero) or presence (value one) of some property. Examples include eukaryotic gene architecture (the presence or absence of an intron in a particular locus), gene content, and morphological characters. In many studies, the acquisition of such binary characters is assumed to represent a rare evolutionary event, and consequently, their evolution is analyzed using various flavors of parsimony. However, when gain and loss of the character are not rare enough, a probabilistic analysis becomes essential. Here, we present a comprehensive probabilistic model to describe the evolution of binary characters on a bifurcating phylogenetic tree. A fast software tool, EREM, is provided, using maximum likelihood to estimate the parameters of the model and to reconstruct ancestral states (presence and absence in internal nodes) and events (gain and loss events along branches).
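A minimal likelihood calculation for such a two-state gain/loss model can be sketched with Felsenstein-style pruning on a small tree. The rates, branch lengths and tree below are hypothetical, and this illustrates the model class rather than the EREM tool itself.

```python
# Pruning likelihood for a binary (absent/present) character on a tree.
import numpy as np
from scipy.linalg import expm

def partials(node, gain, loss, states):
    """Conditional likelihoods [P(tips | state=0), P(tips | state=1)] for the
    subtree under `node`; node is a leaf name or (left, right, t_left, t_right)."""
    if not isinstance(node, tuple):
        v = np.zeros(2)
        v[states[node]] = 1.0                      # observed leaf state
        return v
    left, right, tl, tr = node
    Q = np.array([[-gain, gain], [loss, -loss]])   # 0 = absent, 1 = present
    Pl, Pr = expm(Q * tl), expm(Q * tr)            # branch transition matrices
    return (Pl @ partials(left, gain, loss, states)) * \
           (Pr @ partials(right, gain, loss, states))

gain, loss = 0.5, 1.0                              # assumed gain/loss rates
tree = (("sp1", "sp2", 0.1, 0.2), "sp3", 0.3, 0.5) # hypothetical tree
states = {"sp1": 1, "sp2": 1, "sp3": 0}            # e.g. intron present in sp1, sp2
pi = np.array([loss, gain]) / (gain + loss)        # stationary root distribution
print(pi @ partials(tree, gain, loss, states))     # likelihood of the character
```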
Dallolio Laura
2006-08-01
Full Text Available Abstract Background Cesarean section rates are often used as an indicator of quality of care in maternity hospitals. The assumption is that, in developed countries, lower rates reflect more appropriate clinical practice and generally better performance. Hospitals are thus often ranked on the basis of their caesarean section rates. The aim of this study is to assess whether adjustment for clinical and sociodemographic variables of the mother and the fetus is necessary for inter-hospital comparisons of cesarean section (c-section) rates, and to assess whether a risk adjustment model based on a limited number of variables can be identified and used. Methods Discharge abstracts of labouring women without prior cesarean were linked with abstracts of newborns discharged from 29 hospitals of the Emilia-Romagna Region (Italy) from 2003 to 2004. Adjusted ORs of cesarean by hospital were estimated using two logistic regression models: 1) a full model including the potential confounders selected by a backward procedure; 2) a parsimonious model including only actual confounders identified by the "change-in-estimate" procedure. Hospital rankings based on the ORs were examined. Results 24 risk factors for c-section were included in the full model and 7 (marital status, maternal age, infant weight, fetopelvic disproportion, eclampsia or pre-eclampsia, placenta previa/abruptio placentae, malposition/malpresentation) in the parsimonious model. Hospital ranking using the adjusted ORs from both models was different from that obtained using the crude ORs. The correlation between the rankings of the two models was 0.92. The crude ORs were smaller than the ORs adjusted by both models, with the parsimonious ones producing more precise estimates. Conclusion Risk adjustment is necessary to compare hospital c-section rates; it shows differences in rankings and highlights inappropriateness of some hospitals. By adjusting for only actual confounders, valid and more precise estimates
Schwartz, Carolyn E; Patrick, Donald L
2014-07-01
When planning a comparative effectiveness study comparing disease-modifying treatments, competing demands influence choice of outcomes. Current practice emphasizes parsimony, although understanding multidimensional treatment impact can help to personalize medical decision-making. We discuss both sides of this 'tug of war'. We discuss the assumptions, advantages and drawbacks of composite scores and multidimensional outcomes. We describe possible solutions to the multiple comparison problem, including conceptual hierarchy distinctions, statistical approaches, 'real-world' benchmarks of effectiveness and subgroup analysis. We conclude that comparative effectiveness research should consider multiple outcome dimensions and compare different approaches that fit the individual context of study objectives.
Commitment to Sport and Exercise: Re-examining the Literature for a Practical and Parsimonious Model
Williams, Lavon
2013-01-01
A commitment to physical activity is necessary for personal health, and is a primary goal of physical activity practitioners. Effective practitioners rely on theory and research as a guide to best practices. Thus, sound theory, which is both practical and parsimonious, is a key to effective practice. The purpose of this paper is to review the literature in search of such a theory - one that applies to and explains commitment to physical activity in the form of sport and exercise for youths and adults. The Sport Commitment Model has been commonly used to study commitment to sport and has more recently been applied to the exercise context. In this paper, research using the Sport Commitment Model is reviewed relative to its utility in both the sport and exercise contexts. Through this process, the relevance of the Investment Model for study of physical activity commitment emerged, and a more parsimonious framework for studying of commitment to physical activity is suggested. Lastly, links between the models of commitment and individuals' participation motives in physical activity are suggested and practical implications forwarded.
Parsimonious wave-equation travel-time inversion for refraction waves
Fu, Lei
2017-02-14
We present a parsimonious wave-equation travel-time inversion technique for refraction waves. A dense virtual refraction dataset can be generated from just two reciprocal shot gathers for the sources at the endpoints of the survey line, with N geophones evenly deployed along the line. These two reciprocal shots contain approximately 2N refraction travel times, which can be spawned into O(N²) refraction travel times by an interferometric transformation. Then, these virtual refraction travel times are used with a source wavelet to create N virtual refraction shot gathers, which are the input data for wave-equation travel-time inversion. Numerical results show that the parsimonious wave-equation travel-time tomogram has about the same accuracy as the tomogram computed by standard wave-equation travel-time inversion. The most significant benefit is that a reciprocal survey is far less time consuming than the standard refraction survey where a source is excited at each geophone location.
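The interferometric bookkeeping can be made concrete. The sketch below assumes a flat two-layer model and uses the standard head-wave identity t(i->j) = t_A(x_j) + t_B(x_i) - t_AB to spawn O(N²) virtual receiver-to-receiver traveltimes from the two reciprocal endpoint shots; building the virtual shot gathers with a source wavelet, as the method does, is omitted.

```python
import numpy as np

# geometry: sources A and B at the line ends, N geophones in between (toy model)
N, L = 101, 1000.0
x = np.linspace(0.0, L, N)
v1, v2, z = 800.0, 2000.0, 30.0            # overburden velocity, refractor velocity, depth
tau = z * np.sqrt(1 / v1**2 - 1 / v2**2)   # delay time of a flat refractor

t_A = x / v2 + 2 * tau                     # refraction picks from the shot at A (x=0)
t_B = (L - x) / v2 + 2 * tau               # refraction picks from the reciprocal shot at B
t_AB = L / v2 + 2 * tau                    # reciprocal source-to-source traveltime

# interferometric transform: 2N picks -> O(N^2) virtual receiver-to-receiver traveltimes
i, j = np.meshgrid(np.arange(N), np.arange(N), indexing='ij')
t_virtual = t_A[j] + t_B[i] - t_AB         # head-wave identity, valid for x_i < x_j
t_virtual = np.where(j > i, t_virtual, np.nan)
print(t_virtual[0, N - 1], t_AB)           # end-to-end virtual time reproduces t_AB
```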
Chen, Shuo; Kang, Jian; Xing, Yishi; Wang, Guoqing
2015-12-01
Group-level functional connectivity analyses often aim to detect the altered connectivity patterns between subgroups with different clinical or psychological experimental conditions, for example, comparing cases and healthy controls. We present a new statistical method to detect differentially expressed connectivity networks with significantly improved power and lower false-positive rates. The goal of our method is to capture most differentially expressed connections within networks of constrained numbers of brain regions (by the rule of parsimony). By virtue of parsimony, the false-positive individual connectivity edges within a network are effectively reduced, whereas the informative (differentially expressed) edges are allowed to borrow strength from each other to increase the overall power of the network. We develop a test statistic for each network in light of combinatorial graph theory, and provide p-values for the networks (in the weak sense) by using a permutation test with multiple-testing adjustment. We validate and compare this new approach with existing methods, including false discovery rate and network-based statistic, via simulation studies and a resting-state functional magnetic resonance imaging case-control study. The results indicate that our method can identify differentially expressed connectivity networks, whereas existing methods are limited.
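A permutation test of the kind described is easy to sketch. The statistic below, a count of suprathreshold edges within a small candidate ROI set, is a placeholder rather than the authors' combinatorial statistic; group labels are shuffled to build the null.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sub, n_roi = 40, 10
conn = rng.normal(size=(n_sub, n_roi, n_roi))      # toy connectivity matrices
group = np.array([0] * 20 + [1] * 20)
conn[group == 1, 0, 1] += 1.0                      # edge (0,1) is stronger in cases

def network_stat(conn, group, rois, thr=2.0):
    """Count suprathreshold two-sample t edges among a small ROI set (placeholder)."""
    a, b = conn[group == 0], conn[group == 1]
    t = (b.mean(0) - a.mean(0)) / np.sqrt(b.var(0, ddof=1) / len(b)
                                          + a.var(0, ddof=1) / len(a))
    sub = np.abs(t[np.ix_(rois, rois)])
    return np.triu(sub > thr, 1).sum()

rois = [0, 1, 2]
obs = network_stat(conn, group, rois)
null = [network_stat(conn, rng.permutation(group), rois) for _ in range(999)]
p = (1 + sum(s >= obs for s in null)) / 1000.0
print('observed suprathreshold edges:', obs, 'permutation p:', p)
```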
Van Meter, Kimberly J; Basu, Nandita B
2015-01-01
Nutrient legacies in anthropogenic landscapes, accumulated over decades of fertilizer application, lead to time lags between implementation of conservation measures and improvements in water quality. Quantification of such time lags has remained difficult, however, due to an incomplete understanding of controls on nutrient depletion trajectories after changes in land-use or management practices. In this study, we have developed a parsimonious watershed model for quantifying catchment-scale time lags based on both soil nutrient accumulations (biogeochemical legacy) and groundwater travel time distributions (hydrologic legacy). The model accurately predicted the time lags observed in an Iowa watershed that had undergone a 41% conversion of area from row crop to native prairie. We explored the time scales of change for stream nutrient concentrations as a function of both natural and anthropogenic controls, from topography to spatial patterns of land-use change. Our results demonstrate that the existence of biogeochemical nutrient legacies increases time lags beyond those due to hydrologic legacy alone. In addition, we show that the maximum concentration reduction benefits vary according to the spatial pattern of intervention, with preferential conversion of land parcels having the shortest catchment-scale travel times providing proportionally greater concentration reductions as well as faster response times. In contrast, a random pattern of conversion results in a 1:1 relationship between percent land conversion and percent concentration reduction, irrespective of denitrification rates within the landscape. Our modeling framework allows for the quantification of tradeoffs between costs associated with implementation of conservation measures and the time needed to see the desired concentration reductions, making it of great value to decision makers regarding optimal implementation of watershed conservation measures.
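The abstract does not give the model equations, but the interplay of the two legacies can be caricatured: route a first-order soil nutrient store (biogeochemical legacy) through a gamma travel-time distribution (hydrologic legacy) and read off the lag. All rates and shapes below are illustrative assumptions.

```python
import numpy as np
from scipy.stats import gamma

years = np.arange(200)
surplus = np.where(years < 100, 1.0, 0.0)        # fertilizer surplus ends at year 100

k_soil = 0.05                                    # 1/yr depletion of the soil store
release = np.convolve(surplus, k_soil * np.exp(-k_soil * years))[:200]  # biogeochemical legacy

ttd = gamma(a=2.0, scale=5.0).pdf(years)         # groundwater travel-time distribution
ttd /= ttd.sum()                                 # hydrologic legacy (normalized)
conc = np.convolve(release, ttd)[:200]           # stream response = store routed through TTD

below_half = conc[100:] < 0.5 * conc[100]
print('years after intervention to halve concentration:', np.argmax(below_half))
```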
de Queiroz, K; Poe, S
2001-06-01
Advocates of cladistic parsimony methods have invoked the philosophy of Karl Popper in an attempt to argue for the superiority of those methods over phylogenetic methods based on Ronald Fisher's statistical principle of likelihood. We argue that the concept of likelihood in general, and its application to problems of phylogenetic inference in particular, are highly compatible with Popper's philosophy. Examination of Popper's writings reveals that his concept of corroboration is, in fact, based on likelihood. Moreover, because probabilistic assumptions are necessary for calculating the probabilities that define Popper's corroboration, likelihood methods of phylogenetic inference--with their explicit probabilistic basis--are easily reconciled with his concept. In contrast, cladistic parsimony methods, at least as described by certain advocates of those methods, are less easily reconciled with Popper's concept of corroboration. If those methods are interpreted as lacking probabilistic assumptions, then they are incompatible with corroboration. Conversely, if parsimony methods are to be considered compatible with corroboration, then they must be interpreted as carrying implicit probabilistic assumptions. Thus, the non-probabilistic interpretation of cladistic parsimony favored by some advocates of those methods is contradicted by an attempt by the same authors to justify parsimony methods in terms of Popper's concept of corroboration. In addition to being compatible with Popperian corroboration, the likelihood approach to phylogenetic inference permits researchers to test the assumptions of their analytical methods (models) in a way that is consistent with Popper's ideas about the provisional nature of background knowledge.
Singular Spectrum Analysis for astronomical time series: constructing a parsimonious hypothesis test
Greco, G; Kobayashi, S; Ghil, M; Branchesi, M; Guidorzi, C; Stratta, G; Ciszak, M; Marino, F; Ortolan, A
2015-01-01
We present a data-adaptive spectral method - Monte Carlo Singular Spectrum Analysis (MC-SSA) - and its modification to tackle astrophysical problems. Through numerical simulations we show the ability of MC-SSA to deal with $1/f^{\beta}$ power-law noise affected by photon counting statistics. Such a noise process is simulated by a first-order autoregressive AR(1) process corrupted by intrinsic Poisson noise. In doing so, we statistically estimate a basic stochastic variation of the source and the corresponding fluctuations due to the quantum nature of light. In addition, the MC-SSA test retains its effectiveness even when a significant percentage of the signal falls below the detection level, e.g., due to the instrument sensitivity. The parsimonious approach presented here may be broadly applied, from the search for extrasolar planets to the extraction of low-intensity coherent phenomena probably hidden in high energy transients.
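The noise model is easy to reproduce. The sketch below simulates an AR(1) log-rate corrupted by Poisson counting noise and tests periodogram peaks against an AR(1)-plus-Poisson surrogate ensemble; full MC-SSA would project onto SSA eigenmodes rather than Fourier bins, so this only conveys the Monte Carlo surrogate idea.

```python
import numpy as np

rng = np.random.default_rng(2)
n, phi, mean_rate = 1024, 0.9, 50.0

def ar1(n, phi):
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal()
    return x

rate = mean_rate * np.exp(0.1 * ar1(n, phi))        # intrinsic stochastic variability
counts = rng.poisson(rate).astype(float)            # photon-counting corruption
counts += 5 * np.sin(2 * np.pi * 0.05 * np.arange(n))  # weak coherent signal to detect

def periodogram(x):
    x = x - x.mean()
    return np.abs(np.fft.rfft(x))**2 / len(x)

obs = periodogram(counts)
surr = np.array([periodogram(rng.poisson(mean_rate * np.exp(0.1 * ar1(n, phi))))
                 for _ in range(200)])
thresh = np.percentile(surr, 97.5, axis=0)          # pointwise surrogate envelope
freqs = np.fft.rfftfreq(n)
print('significant frequencies:', freqs[obs > thresh][:5])
```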
Roth, Bradley J.
2017-01-01
The strength-interval curve plays a major role in understanding how cardiac tissue responds to an electrical stimulus. This complex behavior has been studied previously using the bidomain formulation incorporating the Beeler-Reuter and Luo-Rudy dynamic ionic current models. The complexity of these models renders the interpretation and extrapolation of simulation results problematic. Here we utilize a recently developed parsimonious ionic current model with only two currents: a sodium current (INa) that activates rapidly upon depolarization, and a time-independent, inwardly rectifying repolarization current (IK). The model reproduces many experimentally measured action potential waveforms. Bidomain tissue simulations with this ionic current model reproduce the distinctive dip in the anodal (but not cathodal) strength-interval curve. Studying model variants elucidates the necessary and sufficient physiological conditions to predict the polarity-dependent dip: a voltage- and time-dependent INa, a nonlinear rectifying repolarization current, and bidomain tissue with unequal anisotropy ratios. PMID:28222136
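As a stand-in for the parsimonious model used in the paper (whose exact equations are not reproduced here), the Mitchell-Schaeffer two-current model shares the same minimal ingredients, one fast inward current gated by a single variable and one time-independent outward current, and produces an action potential in a few lines.

```python
import numpy as np

# Mitchell-Schaeffer two-current model; not the specific parsimonious model of the
# paper, but the same minimal ingredients: one fast inward, one repolarizing current.
tau_in, tau_out, tau_open, tau_close, v_gate = 0.3, 6.0, 120.0, 150.0, 0.13

dt, T = 0.05, 500.0                                 # ms
n = int(T / dt)
v, h = 0.0, 1.0
trace = np.zeros(n)
for k in range(n):
    t = k * dt
    stim = 0.5 if t < 1.0 else 0.0                  # brief suprathreshold stimulus
    J_in = h * v * v * (1.0 - v) / tau_in           # fast depolarizing current
    J_out = -v / tau_out                            # time-independent repolarizing current
    dh = (1.0 - h) / tau_open if v < v_gate else -h / tau_close
    v += dt * (J_in + J_out + stim)
    h += dt * dh
    trace[k] = v
print('peak v:', round(trace.max(), 3), 'APD (ms, v>0.1):', (trace > 0.1).sum() * dt)
```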
Distribution patterns of Neotropical primates (Platyrrhini based on Parsimony Analysis of Endemicity
Goldani, A; Carvalho, G S; Bicca-Marques, J C
2006-02-01
The Parsimony Analysis of Endemicity (PAE) is a method of historical biogeography that is used for detecting and connecting areas of endemism. Based on data on the distribution of Neotropical primates, we constructed matrices using quadrats, interfluvial regions and predetermined areas of endemism described for avians as Operative Geographic Units (OGUs). We coded the absence of a species from an OGU as 0 (zero) and its presence as 1 (one). A hypothetical area with a complete absence of primate species was used as outgroup to root the trees. All three analyses resulted in similar groupings of areas of endemism, which match the distribution of biomes in the Neotropical region. One area includes Central America and the extreme northwest of South America, another the Amazon basin, and a third the Atlantic Forest, Caatinga, Cerrado and Chaco.
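The matrix-building step of PAE is mechanical and worth showing. The sketch below codes each OGU by species presence/absence and prepends the all-zero hypothetical outgroup used to root the trees; species and area names are made up.

```python
# toy occurrence data: species -> set of OGUs (quadrats / interfluves / endemism areas)
records = {
    'Alouatta_sp1':   {'Amazonia', 'AtlanticForest'},
    'Ateles_sp1':     {'Amazonia', 'CentralAmerica'},
    'Callithrix_sp1': {'AtlanticForest', 'Cerrado'},
}
ogus = sorted({a for areas in records.values() for a in areas})

# PAE codes each OGU by species presence (1) / absence (0); an all-zero
# hypothetical area roots the tree, as in the study
matrix = {'ROOT': [0] * len(records)}
for ogu in ogus:
    matrix[ogu] = [int(ogu in areas) for areas in records.values()]

for name, row in matrix.items():
    print(f'{name:<16}{"".join(map(str, row))}')   # paste-ready 0/1 block for TNT/PAUP*
```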
Satisfiability Parsimoniously Reduces to the Tantrix(TM) Rotation Puzzle Problem
Baumeister, Dorothea
2007-01-01
Holzer and Holzer (Discrete Applied Mathematics 144(3):345-358, 2004) proved that the Tantrix(TM) rotation puzzle problem is NP-complete. They also showed that for infinite rotation puzzles, this problem becomes undecidable. We study the counting version and the unique version of this problem. We prove that the satisfiability problem parsimoniously reduces to the Tantrix(TM) rotation puzzle problem. In particular, this reduction preserves the uniqueness of the solution, which implies that the unique Tantrix(TM) rotation puzzle problem is as hard as the unique satisfiability problem, and so is DP-complete under polynomial-time randomized reductions, where DP is the second level of the boolean hierarchy over NP.
Time-Lapse Monitoring of Subsurface Fluid Flow using Parsimonious Seismic Interferometry
Hanafy, Sherif
2017-04-21
A typical small-scale seismic survey (such as 240 shot gathers) takes at least 16 working hours to complete, which is a major obstacle for time-lapse monitoring experiments. This is especially true if the target being monitored is rapidly changing. In this work, we discuss how to decrease the recording time from 16 working hours to less than one hour, where the virtual data have the same accuracy as the conventional data. We validate the efficacy of parsimonious seismic interferometry for time-lapse monitoring with field examples, where we were able to record 30 different data sets within a 2-hour period. The recorded data are then processed to generate 30 snapshots that show the spread of water from the ground surface down to a few meters.
Seifert, Michael; Gohr, André; Strickert, Marc; Grosse, Ivo
2012-01-01
Array-based comparative genomic hybridization (Array-CGH) is an important technology in molecular biology for the detection of DNA copy number polymorphisms between closely related genomes. Hidden Markov Models (HMMs) are popular tools for the analysis of Array-CGH data, but current methods are only based on first-order HMMs having constrained abilities to model spatial dependencies between measurements of closely adjacent chromosomal regions. Here, we develop parsimonious higher-order HMMs enabling the interpolation between a mixture model ignoring spatial dependencies and a higher-order HMM exhaustively modeling spatial dependencies. We apply parsimonious higher-order HMMs to the analysis of Array-CGH data of the accessions C24 and Col-0 of the model plant Arabidopsis thaliana. We compare these models against first-order HMMs and other existing methods using a reference of known deletions and sequence deviations. We find that parsimonious higher-order HMMs clearly improve the identification of these polymorphisms. Moreover, we perform a functional analysis of identified polymorphisms revealing novel details of genomic differences between C24 and Col-0. Additional model evaluations are done on widely considered Array-CGH data of human cell lines indicating that parsimonious HMMs are also well-suited for the analysis of non-plant specific data. All these results indicate that parsimonious higher-order HMMs are useful for Array-CGH analyses. An implementation of parsimonious higher-order HMMs is available as part of the open source Java library Jstacs (www.jstacs.de/index.php/PHHMM).
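The abstract's "interpolation between a mixture model and a higher-order HMM" can be conveyed with the two end points of that spectrum. The sketch below scores a label sequence under a convex combination of an order-0 (mixture) term and an order-1 transition term; the actual parsimonious parameterization in Jstacs ties higher-order contexts together and works on hidden states, which this toy omits.

```python
import numpy as np

# states: 0 = unchanged, 1 = copy-number loss (toy Array-CGH-like labels)
pi = np.array([0.9, 0.1])                   # mixture weights (order-0 end point)
A = np.array([[0.95, 0.05],                 # first-order transitions (order-1 end point)
              [0.20, 0.80]])

def seq_loglik(states, lam):
    """Log-likelihood under the interpolated transition model
    P(s_t | s_{t-1}) = (1 - lam) * pi[s_t] + lam * A[s_{t-1}, s_t]."""
    ll = np.log(pi[states[0]])
    for prev, cur in zip(states[:-1], states[1:]):
        ll += np.log((1 - lam) * pi[cur] + lam * A[prev, cur])
    return ll

states = [0, 0, 1, 1, 1, 0, 0]              # a run of adjacent deletion probes
for lam in (0.0, 0.5, 1.0):                 # lam=0: spatial dependence ignored; lam=1: modeled
    print(f'lambda={lam}: loglik={seq_loglik(states, lam):.3f}')
```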
Parsimony and goodness-of-fit in multi-dimensional NMR inversion
Babak, Petro; Kryuchkov, Sergey; Kantzas, Apostolos
2017-01-01
Multi-dimensional nuclear magnetic resonance (NMR) experiments are often used for study of molecular structure and dynamics of matter in core analysis and reservoir evaluation. Industrial applications of multi-dimensional NMR involve a high-dimensional measurement dataset with complicated correlation structure and require rapid and stable inversion algorithms from the time domain to the relaxation rate and/or diffusion domains. In practice, applying existing inverse algorithms with a large number of parameter values leads to an infinite number of solutions with a reasonable fit to the NMR data. The interpretation of such variability of multiple solutions and selection of the most appropriate solution could be a very complex problem. In most cases the characteristics of materials have sparse signatures, and investigators would like to distinguish the most significant relaxation and diffusion values of the materials. To produce an easy to interpret and unique NMR distribution with the finite number of the principal parameter values, we introduce a new method for NMR inversion. The method is constructed based on the trade-off between the conventional goodness-of-fit approach to multivariate data and the principle of parsimony guaranteeing inversion with the least number of parameter values. We suggest performing the inversion of NMR data using the forward stepwise regression selection algorithm. To account for the trade-off between goodness-of-fit and parsimony, the objective function is selected based on Akaike Information Criterion (AIC). The performance of the developed multi-dimensional NMR inversion method and its comparison with conventional methods are illustrated using real data for samples with bitumen, water and clay.
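The trade-off the paper formalizes, goodness-of-fit versus number of relaxation components, maps directly onto forward stepwise selection with AIC. The sketch below fits a one-dimensional toy decay with non-negative least squares over a T2 dictionary, greedily adding components while AIC improves; the grid, noise level and AIC form are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(3)
t = np.linspace(0.001, 1.0, 200)                     # echo times, s
true = 0.7 * np.exp(-t / 0.05) + 0.3 * np.exp(-t / 0.4)
y = true + 0.01 * rng.normal(size=t.size)

T2_grid = np.logspace(-3, 0, 60)                     # candidate relaxation times
A = np.exp(-t[:, None] / T2_grid[None, :])           # dictionary of decay curves

def aic(rss, k, n):
    return n * np.log(rss / n) + 2 * k

# forward stepwise: greedily add the T2 component that most lowers AIC
chosen, best_aic = [], np.inf
while True:
    best = None
    for j in set(range(T2_grid.size)) - set(chosen):
        cols = chosen + [j]
        w, rnorm = nnls(A[:, cols], y)               # non-negative amplitudes
        a = aic(rnorm**2, len(cols), t.size)
        if a < best_aic:
            best, best_aic = j, a
    if best is None:                                 # no candidate improves AIC: stop
        break
    chosen.append(best)
print('selected T2 values (s):', np.round(T2_grid[chosen], 3))
```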
ACL reconstruction (MedlinePlus Medical Encyclopedia: medlineplus.gov/ency/article/007208.htm). ACL reconstruction is surgery to reconstruct the ligament in ...
Maximum Autocorrelation Factorial Kriging
Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.
2000-01-01
This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax rotated factors from...
Ancalla, Lourdes Pilar Zaragoza
2005-04-15
The reconstruction of the pin-by-pin power density distribution in a heterogeneous fuel element of a nuclear reactor core is a subject that has long been studied in the reactor physics area. Several methods exist for this reconstruction; one of them is the Maximum Entropy Method, which, besides being an optimization method that finds the best among all possible solutions, uses Lagrange multipliers to obtain the distribution of the fluxes on the faces of the fuel element. This face-flux distribution is then used as a boundary condition in the calculation of the detailed flux distribution inside the fuel element. In this work, the heterogeneous element was first homogenized. The multiplication factor and the mean values of the flux and net current were then computed with the program NEM2D. These nodal mean values were then used in the pin-by-pin reconstruction of the flux distribution inside the fuel element. The results obtained were acceptable when compared with those obtained using a fine mesh. (author)
张同文; 刘禹; 袁玉江; 魏文寿; 喻树龙; 陈峰
2011-01-01
for de-trending. After all the processes, we obtained three kinds of chronologies (STD, RES and ARS) of tree-ring width data and gray values. Based on the tree-ring data analysis, the mean maximum temperature from May to August in the Gongnaisi region from 1777 to 2008 A.D. has been reconstructed from the tree-ring average gray values. For the calibration period (1958~2008 A.D.), the predictor variable accounts for 39% of the variance of the mean maximum temperature data. The reconstruction shows 34 warm years and 38 cold years. The warm events (lasting for more than three years) were 1861~1864, 1873~1876 and 1917~1919 A.D.; the cold events were 1816~1818, 1948~1950 and 1957~1959 A.D. Furthermore, these years and events correspond well with historical documents. Applying an 11-year moving average to the reconstruction, the only period with above-average reconstructed mean maximum temperature over the full span (1777~2008 A.D.) comprises 1845~1925 A.D.; the two periods below average are 1788~1844 and 1926~2001 A.D. The reconstructed mean maximum temperature has increased since the 1990s and agrees well with instrumental measurements in northwestern China over the recent 50 years. Power spectrum analysis shows 154-, 77-, 2.7- and 2.3-year cycles in the reconstruction, which may be associated with solar activity and the quasi-biennial oscillation (QBO). A moving t-test indicates significant abrupt changes around 1842, 1880 and 1923 A.D. The significant correlations between our reconstruction and the gridded dataset of the Northern Hemisphere and three indices (SOI, APOI and AOI) may imply that the mean maximum temperature of the Gongnaisi region is influenced not only by local, but also by multiple large-scale climate changes to some extent.
Urban water quality modelling: a parsimonious holistic approach for a complex real case study.
Freni, Gabriele; Mannina, Giorgio; Viviani, Gaspare
2010-01-01
In the past three decades, scientific research has focused on the preservation of water resources, and in particular, on the polluting impact of urban areas on natural water bodies. One approach to this research has involved the development of tools to describe the phenomena that take place on the urban catchment during both wet and dry periods. Research has demonstrated the importance of the integrated analysis of all the transformation phases that characterise the delivery and treatment of urban water pollutants from source to outfall. With this aim, numerous integrated urban drainage models have been developed to analyse the fate of pollution from urban catchments to the final receiving waters, simulating several physical and chemical processes. Such modelling approaches require calibration, and for this reason, researchers have tried to address two opposing needs: the need for reliable representation of complex systems, and the need to employ parsimonious approaches to cope with the usually insufficient, especially for urban sources, water quality data. The present paper discusses the application of a bespoke model to a complex integrated catchment: the Nocella basin (Italy). This system is characterised by two main urban areas served by two wastewater treatment plants, and has a small river as the receiving water body. The paper describes the monitoring approach that was used for model calibration, presents some interesting considerations about the monitoring needs for integrated modelling applications, and provides initial results useful for identifying the most relevant polluting sources.
Komatitsch, Dimitri; Xie, Zhinan; Bozdaǧ, Ebru; Sales de Andrade, Elliott; Peter, Daniel; Liu, Qinya; Tromp, Jeroen
2016-09-01
We introduce a technique to compute exact anelastic sensitivity kernels in the time domain using parsimonious disk storage. The method is based on a reordering of the time loop of time-domain forward/adjoint wave propagation solvers combined with the use of a memory buffer. It avoids instabilities that occur when time-reversing dissipative wave propagation simulations. The total number of required time steps is unchanged compared to usual acoustic or elastic approaches. The cost is reduced by a factor of 4/3 compared to the case in which anelasticity is partially accounted for by accommodating the effects of physical dispersion. We validate our technique by performing a test in which we compare the Kα sensitivity kernel to the exact kernel obtained by saving the entire forward calculation. This benchmark confirms that our approach is also exact. We illustrate the importance of including full attenuation in the calculation of sensitivity kernels by showing significant differences with physical-dispersion-only kernels.
Weissling, B. P.; Xie, H.; Murray, K. E.
2007-01-01
Soil moisture condition plays a vital role in a watershed's hydrologic response to a precipitation event and is thus parameterized in most, if not all, rainfall-runoff models. Yet the soil moisture condition antecedent to an event has proven difficult to quantify both spatially and temporally. This study assesses the potential to parameterize a parsimonious streamflow prediction model solely utilizing precipitation records and multi-temporal remotely sensed biophysical variables (i.e., from the Moderate Resolution Imaging Spectroradiometer (MODIS)/Terra satellite). This study is conducted on a 1420 km² rural watershed in the Guadalupe River basin of southcentral Texas, a basin prone to catastrophic flooding from convective precipitation events. A multiple regression model, accounting for 78% of the variance of observed streamflow for calendar year 2004, was developed based on gauged precipitation, land surface temperature, and Enhanced Vegetation Index (EVI), on an 8-day interval. These results compared favorably with streamflow estimations utilizing the Natural Resources Conservation Service (NRCS) curve number method and the 5-day antecedent moisture model. This approach has great potential for developing near real-time predictive models for flood forecasting and can be used as a tool for flood management in any region for which similar remotely sensed data are available.
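A model of the kind described reduces to ordinary multiple regression on the 8-day predictor stack. The sketch below uses synthetic stand-ins for gauged precipitation, MODIS land surface temperature and EVI; coefficients and noise are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 46                                           # 8-day composites in a year
precip = rng.gamma(2.0, 10.0, n)                 # gauged precipitation per interval
lst = 300 + 10 * rng.normal(size=n)              # MODIS land surface temperature (K)
evi = np.clip(0.3 + 0.1 * rng.normal(size=n), 0, 1)   # MODIS enhanced vegetation index
flow = 2.0 * precip - 0.5 * (lst - 300) - 30 * evi + 20 + 3 * rng.normal(size=n)

X = np.column_stack([np.ones(n), precip, lst, evi])
beta, *_ = np.linalg.lstsq(X, flow, rcond=None)  # ordinary least squares fit
pred = X @ beta
r2 = 1 - ((flow - pred)**2).sum() / ((flow - flow.mean())**2).sum()
print('coefficients:', np.round(beta, 2), 'R^2:', round(r2, 2))
```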
Catalano, S A; Goloboff, P A
2012-05-01
All methods proposed to date for mapping landmark configurations on a phylogenetic tree start from an alignment generated by methods that make no use of phylogenetic information, usually by superimposing all configurations against a consensus configuration. In order to properly interpret differences between landmark configurations along the tree as changes in shape, the metric chosen to define the ancestral assignments should also form the basis to superimpose the configurations. Thus, we present here a method that merges both steps, map and align, into a single procedure that (for the given tree) produces a multiple alignment and ancestral assignments such that the sum of the Euclidean distances between the corresponding landmarks along tree nodes is minimized. This approach is an extension of the method proposed by Catalano et al. (2010, Phylogenetic morphometrics (I): the use of landmark data in a phylogenetic framework, Cladistics 26:539-549) for mapping landmark data with parsimony as optimality criterion. In the context of phylogenetics, this method allows maximizing the degree to which similarity in landmark positions can be accounted for by common ancestry. In the context of morphometrics, this approach guarantees (heuristics aside) that all the transformations inferred on the tree represent changes in shape. The performance of the method was evaluated on different data sets, indicating that the method produces marked improvements in tree score (up to 5% compared with generalized superimpositions, up to 11% compared with ordinary superimpositions). These empirical results stress the importance of incorporating the phylogenetic information into the alignment step.
Gopal, Judy; Muthu, Manikandan; Chun, Sechul
2016-07-28
Thin film coatings have been an important development in materials science for the modification of native material surface properties. Thin film coatings are enabled through the use of sophisticated instruments and technologies that demand expertise and large initial and running costs. Nano-thin films are a furtherance of thin films, requiring even more expertise and sophistication. In this work, for the first time, we present a one-pot, straightforward carbon thin film coating methodology for glass substrates. There is novelty in every aspect of the method: the carbon in the nanofilm is obtained from turmeric soot; the coating technique is a basic immersion (dip-dry) method; and the phytosoot-derived carbon has an inherent ability to self-assemble into a uniform, continuous and stable coating. The carbon nanofilm has been characterized using field emission scanning electron microscopy (FESEM), Energy Dispersive X-ray (EDAX) analysis, a goniometer and X-ray diffraction (XRD). This study opens a new school of thought of using such naturally available, free nanomaterials as eco-friendly green coatings. The amorphous porous carbon film can be coated on any hydrophilic substrate and is not substrate specific. Its added advantages of being transparent and antibacterial, as well as green and parsimonious, make it an ideal choice for solar panels, medical implants and other construction applications.
A data parsimonious model for capturing snapshots of groundwater pollution sources
Chaubey, Jyoti; Kashyap, Deepak
2017-02-01
Presented herein is a data parsimonious model for identification of regional and local groundwater pollution sources at a reference time employing corresponding fields of head, concentration and its time derivative. The regional source flux, assumed to be uniformly distributed, is viewed as the causative factor for the widely prevalent background concentration. The localized concentration-excesses are attributed to flux from local sources distributed around the respective centroids. The groundwater pollution is parameterized by flux from regional and local sources, and distribution parameters of the latter. These parameters are estimated by minimizing the sum of squares of differences between the observed and simulated concentration fields. The concentration field is simulated by a numerical solution of the transient solute transport equation. The equation is solved assuming the temporal derivative term to be known a priori and merging it with the sink term. This strategy circumvents the requirement of dynamic concentration data. The head field is generated using discrete point head data employing a specially devised interpolator that controls the numerical-differentiation errors and simultaneously ensures micro-level mass balance. This measure eliminates the requirement of flow modeling without compromising the sanctity of head field. The model after due verification has been illustrated employing available and simulated data from an area lying between two rivers Yamuna and Krishni in India.
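The estimation step can be sketched if the transport simulation is replaced by a fixed linear response matrix (a strong simplification: the paper solves the transient solute transport equation instead). Nonnegative least squares then recovers the regional and local source fluxes that minimize the sum of squared concentration residuals.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(5)
n_obs, n_local = 30, 3

# stand-in forward model: a response matrix G maps (regional flux, local fluxes)
# to concentrations at observation wells; the real model simulates transport
G = np.column_stack([np.ones(n_obs)] +                      # regional flux -> background
                    [np.exp(-rng.random(n_obs) * 5) for _ in range(n_local)])
true_flux = np.array([1.0, 4.0, 0.0, 2.5])                  # regional + three local sources
c_obs = G @ true_flux + 0.05 * rng.normal(size=n_obs)

flux_hat, _ = nnls(G, c_obs)                                # SSE-minimizing nonnegative fluxes
print('estimated fluxes:', np.round(flux_hat, 2))
```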
Cowden, Joshua R.; Watkins, David W., Jr.; Mihelcic, James R.
2008-10-01
Several parsimonious stochastic rainfall models are developed and compared for application to domestic rainwater harvesting (DRWH) assessment in West Africa. Worldwide, improved water access rates are lowest for Sub-Saharan Africa, including the West African region, and these low rates have important implications for the health and economy of the region. DRWH is proposed as a potential mechanism for water supply enhancement, especially for poor urban households in the region, and its assessment is essential for development planning and poverty alleviation initiatives. The stochastic rainfall models examined are Markov models and LARS-WG, selected due to availability and ease of use for water planners in the developing world. A first-order Markov occurrence model with a mixed exponential amount model is selected as the best option among unconditioned Markov models. However, there is no clear advantage in selecting Markov models over the LARS-WG model for DRWH in West Africa, with each model having distinct strengths and weaknesses. A multi-model approach is used in assessing DRWH in the region to illustrate the variability associated with the rainfall models. It is clear DRWH can be successfully used as a water enhancement mechanism in West Africa for certain times of the year. A 200 L drum storage capacity could potentially optimize these simple, small roof area systems for many locations in the region.
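A first-order Markov occurrence model with mixed exponential amounts, the variant selected in the study, fits in a dozen lines; the transition probabilities and mixture parameters below are illustrative, not fitted West African values.

```python
import numpy as np

rng = np.random.default_rng(6)
p_wd, p_ww = 0.25, 0.55          # P(wet | dry yesterday), P(wet | wet yesterday)
w, mu1, mu2 = 0.7, 3.0, 15.0     # mixed exponential: weight and component means (mm)

def simulate(days):
    wet, rain = False, np.zeros(days)
    for d in range(days):
        wet = rng.random() < (p_ww if wet else p_wd)   # first-order Markov occurrence
        if wet:
            mu = mu1 if rng.random() < w else mu2      # mixed exponential amounts
            rain[d] = rng.exponential(mu)
    return rain

rain = simulate(365 * 100)
print('wet-day fraction:', round((rain > 0).mean(), 3),
      'mean wet-day depth (mm):', round(rain[rain > 0].mean(), 1))
```

A DRWH assessment would then route such synthetic series through a roof-area and storage mass balance, e.g., the 200 L drum considered in the study.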
de Queiroz, Kevin; Poe, Steven
2003-06-01
Kluge's (2001, Syst. Biol. 50:322-330) continued arguments that phylogenetic methods based on the statistical principle of likelihood are incompatible with the philosophy of science described by Karl Popper are based on false premises related to Kluge's misrepresentations of Popper's philosophy. Contrary to Kluge's conjectures, likelihood methods are not inherently verificationist; they do not treat every instance of a hypothesis as confirmation of that hypothesis. The historical nature of phylogeny does not preclude phylogenetic hypotheses from being evaluated using the probability of evidence. The low absolute probabilities of hypotheses are irrelevant to the correct interpretation of Popper's concept termed degree of corroboration, which is defined entirely in terms of relative probabilities. Popper did not advocate minimizing background knowledge; in any case, the background knowledge of both parsimony and likelihood methods consists of the general assumption of descent with modification and additional assumptions that are deterministic, concerning which tree is considered most highly corroborated. Although parsimony methods do not assume (in the sense of entailing) that homoplasy is rare, they do assume (in the sense of requiring to obtain a correct phylogenetic inference) certain things about patterns of homoplasy. Both parsimony and likelihood methods assume (in the sense of implying by the manner in which they operate) various things about evolutionary processes, although violation of those assumptions does not always cause the methods to yield incorrect phylogenetic inferences. Test severity is increased by sampling additional relevant characters rather than by character reanalysis, although either interpretation is compatible with the use of phylogenetic likelihood methods. Neither parsimony nor likelihood methods assess test severity (critical evidence) when used to identify a most highly corroborated tree(s) based on a single method or model and a
Li, Pui-Sze; Thomas, Daniel C; Saunders, Richard M K
2015-01-01
Taxonomic delimitation of Disepalum (Annonaceae) is contentious, with some researchers favoring a narrow circumscription following segregation of the genus Enicosanthellum. We reconstruct the phylogeny of Disepalum and related taxa based on four chloroplast and two nuclear DNA regions as a framework for clarifying taxonomic delimitation and assessing evolutionary transitions in key morphological characters. Maximum parsimony, maximum likelihood and Bayesian methods resulted in a consistent, well-resolved and strongly supported topology. Disepalum s.l. is monophyletic and strongly supported, with Disepalum s.str. and Enicosanthellum retrieved as sister groups. Although this topology is consistent with both taxonomic delimitations, the distribution of morphological synapomorphies provides greater support for the inclusion of Enicosanthellum within Disepalum s.l. We propose a novel infrageneric classification with two subgenera. Subgen. Disepalum (= Disepalum s.str.) is supported by numerous synapomorphies, including the reduction of the calyx to two sepals and connation of petals. Subgen. Enicosanthellum lacks obvious morphological synapomorphies, but possesses several diagnostic characters (symplesiomorphies), including a trimerous calyx and free petals in two whorls. We evaluate changes in petal morphology in relation to hypotheses of the genetic control of floral development and suggest that the compression of two petal whorls into one and the associated fusion of contiguous petals may be associated with the loss of the pollination chamber, which in turn may be associated with a shift in primary pollinator. We also suggest that the formation of pollen octads may be selectively advantageous when pollinator visits are infrequent, although this would only be applicable if multiple ovules could be fertilized by each octad; since the flowers are apocarpous, this would require an extragynoecial compitum to enable intercarpellary growth of pollen tubes. We furthermore
Multiresolution Maximum Intensity Volume Rendering by Morphological Adjunction Pyramids
Roerdink, Jos B.T.M.
2001-01-01
We describe a multiresolution extension to maximum intensity projection (MIP) volume rendering, allowing progressive refinement and perfect reconstruction. The method makes use of morphological adjunction pyramids. The pyramidal analysis and synthesis operators are composed of morphological 3-D
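A full adjunction pyramid pairs erosion/dilation analysis and synthesis operators with detail signals for perfect reconstruction; the sketch below keeps only the progressive-refinement idea, a max-decimation pyramid whose coarse MIPs are consistent previews of the fine MIP.

```python
import numpy as np

rng = np.random.default_rng(7)
vol = rng.random((64, 64, 64))                 # toy volume

def max_decimate(v):
    """One analysis step: 2x2x2 block maxima (a dilation followed by downsampling)."""
    a, b, c = (s // 2 for s in v.shape)
    return v[:2*a, :2*b, :2*c].reshape(a, 2, b, 2, c, 2).max(axis=(1, 3, 5))

levels = [vol]
while min(levels[-1].shape) > 8:
    levels.append(max_decimate(levels[-1]))

# render coarse-to-fine: each level's MIP is the max-pooling of the finer MIP,
# so coarse renders are consistent previews and the global maximum is invariant
for lv in reversed(levels):
    mip = lv.max(axis=0)                       # maximum intensity projection along z
    print(mip.shape, float(mip.max()))
```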
Permutationally invariant state reconstruction
Moroder, Tobias; Toth, Geza; Schwemmer, Christian; Niggebaum, Alexander; Gaile, Stefanie; Gühne, Otfried; Weinfurter, Harald
2012-01-01
Feasible tomography schemes for large particle numbers must possess, besides an appropriate data acquisition protocol, also an efficient way to reconstruct the density operator from the observed finite data set. Since state reconstruction typically requires the solution of a non-linear large-scale optimization problem, this is a major challenge in the design of scalable tomography schemes. Here we present an efficient state reconstruction scheme for permutationally invariant quantum state tomography. It works for all common state-of-the-art reconstruction principles, including, in particular, maximum likelihood and least squares methods, which are the preferred choices in today's experiments. This high efficiency is achieved by greatly reducing the dimensionality of the problem employing a particular representation of permutationally invariant states known from spin coupling combined with convex optimization, which has clear advantages regarding speed, control and accuracy in comparison to commonly employed n...
Rota, Emilia; Martin, Patrick; Erséus, Christer
2001-01-01
To re-evaluate the various hypotheses on the systematic position of Parergodrilus heideri Reisinger, 1925 and Hrabeiella periglandulata Pizl & Chalupský, 1984, the sole truly terrestrial non-clitellate annelids known to date, their phylogenetic relationships were investigated using a data set of new
Massad, Tariq; Jarvet, Jüri; Tanner, Risto; Tomson, Katrin; Smirnova, Julia; Palumaa, Peep; Sugai, Mariko; Kohno, Toshiyuki; Vanatalu, Kalju; Damberg, Peter
2007-06-01
In this paper, we present a new method for structure determination of flexible "random-coil" peptides. A numerical method is described, where the experimentally measured 3J(HN-Halpha) and 3J(Halpha-N(i+1)) couplings, which depend on the phi and psi dihedral angles, are analyzed jointly with the information from a coil-library through a maximum entropy approach. The coil-library is the distribution of dihedral angles found outside the elements of the secondary structure in the high-resolution protein structures. The method results in residue-specific joint phi,psi-distribution functions, which are in agreement with the experimental J-couplings and minimally committal to the information in the coil-library. The 22-residue human peptide hormone motilin, uniformly 15N-labeled, was studied. The 3J(Halpha-N(i+1)) couplings were measured from the E.COSY pattern in the sequential NOESY cross-peaks. By employing homodecoupling and an in-phase/anti-phase filter, sharp Halpha resonances (about 5 Hz) were obtained, enabling accurate determination of the coupling with minimal spectral overlap. Clear trends in the resulting phi,psi-distribution functions along the sequence are observed, with a nascent helical structure in the central part of the peptide and more extended conformations of the receptor-binding N-terminus as the most prominent characteristics. From the phi,psi-distribution functions, the contribution from each residue to the thermodynamic entropy, i.e., the segmental entropies, are calculated and compared to segmental entropies estimated from 15N-relaxation data. Remarkable agreement between the relaxation- and J-coupling-based methods is found. Residues belonging to the nascent helix and the C-terminus show segmental entropies of approximately -20 J K(-1) mol(-1) and -12 J K(-1) mol(-1), respectively, in both series. The agreement between the two estimates of the segmental entropy, the agreement with the observed J-couplings, the agreement with the CD experiments
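The maximum entropy construction can be sketched for a single residue and the phi angle alone: tilt a coil-library sample by exp(lambda*J) and choose lambda so the reweighted mean matches the measured coupling. The von Mises "coil library" and the Karplus coefficients below are illustrative stand-ins, not the paper's values.

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(8)
phi = rng.vonmises(np.radians(-100), 2.0, 5000)      # stand-in coil-library phi samples

def karplus(phi):
    # 3J(HN-Halpha); coefficients are illustrative literature-style values
    th = phi - np.radians(60)
    return 6.51 * np.cos(th)**2 - 1.76 * np.cos(th) + 1.60

J = karplus(phi)
J_obs = 5.0                                          # measured coupling for this residue (Hz)

def mean_J(lam):
    w = np.exp(lam * (J - J.mean()))                 # maximum entropy tilt of the coil prior
    return np.average(J, weights=w)

lam = brentq(lambda l: mean_J(l) - J_obs, -5, 5)     # match the experimental coupling
w = np.exp(lam * (J - J.mean()))
w /= w.sum()
rel_entropy = -(w * np.log(w * len(w))).sum()        # entropy relative to the coil prior (<= 0)
print('lambda:', round(lam, 3), 'reweighted <J>:', round(mean_J(lam), 2),
      'relative entropy:', round(rel_entropy, 3))
```

Summing such relative entropies over residues (and over psi as well) is the flavor of the segmental-entropy calculation the abstract describes.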
Fossils impact as hard as living taxa in parsimony analyses of morphology.
Cobbett, Andrea; Wilkinson, Mark; Wills, Matthew A
2007-10-01
Systematists disagree whether data from fossils should be included in parsimony analyses. In a handful of well-documented cases, the addition of fossil data radically overturns a hypothesis of relationships based on extant taxa alone. Fossils can break up long branches and preserve character combinations closer in time to deep splitting events. However, fossils usually require more interpretation than extant taxa, introducing greater potential for spurious codings. Moreover, because fossils often have more "missing" codings, they are frequently accused of increasing numbers of MPTs, frustrating resolution and reducing support. Despite the controversy, remarkably little is known about the effects of fossils more generally. Here we provide the first systematic study, investigating empirically the behavior of fossil and extant taxa in 45 published morphological data sets. First-order jackknifing is used to determine the effects that each terminal has on inferred relationships, on the number of MPTs, and on CI' and RI as measures of homoplasy. Bootstrap leaf stabilities provide a proxy for the contribution of individual taxa to the branch support in the rest of the tree. There is no significant difference in the impact of fossil versus extant taxa on relationships, numbers of MPTs, and CI' or RI. However, adding individual fossil taxa is more likely to reduce the total branch support of the tree than adding extant taxa. This must be weighed against the superior taxon sampling afforded by including judiciously coded fossils, providing data from otherwise unsampled regions of the tree. We therefore recommend that investigators should include fossils, in the absence of compelling and case specific reasons for their exclusion.
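First-order jackknifing is simple to mechanize once a tree can be scored. The sketch below implements Fitch small parsimony (with '?' as missing) and re-scores a fixed hypothetical topology after deleting each terminal; the study instead re-searches for most-parsimonious trees at every deletion, which this omits.

```python
def fitch(tree, states):
    """Return (parsimony score, root state set) for one binary character."""
    if isinstance(tree, str):
        s = states[tree]
        return 0, ({'0', '1'} if s == '?' else {s})   # '?' = missing coding
    (cl, sl), (cr, sr) = (fitch(t, states) for t in tree)
    inter = sl & sr
    return (cl + cr, inter) if inter else (cl + cr + 1, sl | sr)

def prune(tree, leaf):
    """Remove a terminal; its parent collapses onto the sibling."""
    if isinstance(tree, str):
        return None if tree == leaf else tree
    kids = [prune(t, leaf) for t in tree]
    kids = [k for k in kids if k is not None]
    return kids[0] if len(kids) == 1 else tuple(kids)

tree = ((('A', 'B'), ('C', 'FossilX')), 'D')          # hypothetical taxa and topology
chars = [{'A': a, 'B': b, 'C': c, 'D': d, 'FossilX': f}
         for a, b, c, d, f in ['00111', '01010', '0110?']]

full = sum(fitch(tree, ch)[0] for ch in chars)
for taxon in ['A', 'B', 'C', 'D', 'FossilX']:
    sub = prune(tree, taxon)
    score = sum(fitch(sub, ch)[0] for ch in chars)
    print(f'drop {taxon:8s}: tree length {full} -> {score}')
```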
Using genes as characters and a parsimony analysis to explore the phylogenetic position of turtles.
Lu, Bin; Yang, Weizhao; Dai, Qiang; Fu, Jinzhong
2013-01-01
The phylogenetic position of turtles within the vertebrate tree of life remains controversial. Conflicting conclusions from different studies are likely a consequence of systematic error in the tree construction process, rather than random error from small amounts of data. Using genomic data, we evaluate the phylogenetic position of turtles with both conventional concatenated data analysis and a "genes as characters" approach. Two datasets were constructed, one with seven species (human, opossum, zebra finch, chicken, green anole, Chinese pond turtle, and western clawed frog) and 4584 orthologous genes, and the second with four additional species (soft-shelled turtle, Nile crocodile, royal python, and tuatara) but only 1638 genes. Our concatenated data analysis strongly supported turtle as the sister-group to archosaurs (the archosaur hypothesis), similar to several recent genomic data based studies using similar methods. When using genes as characters and gene trees as character-state trees with equal weighting for each gene, however, our parsimony analysis suggested that turtles are possibly sister-group to diapsids, archosaurs, or lepidosaurs. None of these resolutions were strongly supported by bootstraps. Furthermore, our incongruence analysis clearly demonstrated that there is a large amount of inconsistency among genes and most of the conflict relates to the placement of turtles. We conclude that the uncertain placement of turtles is a reflection of the true state of nature. Concatenated data analysis of large and heterogeneous datasets likely suffers from systematic error and over-estimates of confidence as a consequence of a large number of characters. Using genes as characters offers an alternative for phylogenomic analysis. It has potential to reduce systematic error, such as data heterogeneity and long-branch attraction, and it can also avoid problems associated with computation time and model selection. Finally, treating genes as characters provides a
Gaffin, Douglas D; Brayfield, Brad P
2016-01-01
The navigation of bees and ants from hive to food and back has captivated people for more than a century. Recently, the Navigation by Scene Familiarity Hypothesis (NSFH) has been proposed as a parsimonious approach that is congruent with the limited neural elements of these insects' brains. In the NSFH approach, an agent completes an initial training excursion, storing images along the way. To retrace the path, the agent scans the area and compares the current scenes to those previously experienced. By turning and moving to minimize the pixel-by-pixel differences between encountered and stored scenes, the agent is guided along the path without having memorized the sequence. An important premise of the NSFH is that the visual information of the environment is adequate to guide navigation without aliasing. Here we demonstrate that an image landscape of an indoor setting possesses ample navigational information. We produced a visual landscape of our laboratory and part of the adjoining corridor consisting of 2816 panoramic snapshots arranged in a grid at 12.7-cm centers. We show that pixel-by-pixel comparisons of these images yield robust translational and rotational visual information. We also produced a simple algorithm that tracks previously experienced routes within our lab based on an insect-inspired scene familiarity approach and demonstrate that adequate visual information exists for an agent to retrace complex training routes, including those where the path's end is not visible from its origin. We used this landscape to systematically test the interplay of sensor morphology, angles of inspection, and similarity threshold with the recapitulation performance of the agent. Finally, we compared the relative information content and chance of aliasing within our visually rich laboratory landscape to scenes acquired from indoor corridors with more repetitive scenery.
袁志辉; 邓云凯; 李飞; 王宇; 柳罡
2013-01-01
In the application of deriving the earth surface's Digital Elevation Model (DEM) through InSAR technology, multichannel (multi-frequency or multi-baseline) InSAR techniques can be employed to improve the mapping capability for complex areas with steep slopes or strong height discontinuities, and to solve the height ambiguity problem of the single-baseline case. This paper compares the performance of Maximum Likelihood (ML) estimation with Maximum A Posteriori (MAP) estimation, and adds two steps, bad-pixel judgment and weighted filtering, after the ML estimation. Bad-pixel judgment is carried out through cluster analysis and the relationship between adjacent pixels, and a weighted mean filter is then used to remove the bad pixels. In this way, the computational efficiency of the ML method is kept while the accuracy of the DEM is improved. Simulation results indicate that, under the same conditions, this method maintains good accuracy while greatly improving computational efficiency, which is advantageous for processing large-scale data sets.
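As a concrete illustration of the bad-pixel step, here is a minimal Python sketch; the paper's cluster-analysis test is replaced by a simple neighbourhood-deviation rule, and the function name, threshold, and uniform weights are assumptions of this sketch, not the paper's procedure:

```python
import numpy as np

def remove_bad_pixels(height, thresh=3.0):
    """Flag and replace outlier pixels in an ML-estimated DEM (sketch).

    Pixels deviating from their 3x3-neighbourhood median by more than
    `thresh` times the local spread are flagged as bad and replaced by
    the mean of their neighbours (uniform weights here; the paper uses
    a weighted mean filter).
    """
    h = height.astype(float)
    pad = np.pad(h, 1, mode="edge")
    # Collect the 8 neighbours of every pixel as shifted views.
    neigh = np.stack([pad[i:i + h.shape[0], j:j + h.shape[1]]
                      for i in range(3) for j in range(3)
                      if not (i == 1 and j == 1)])
    med = np.median(neigh, axis=0)
    spread = np.median(np.abs(neigh - med), axis=0) + 1e-9  # robust scale
    bad = np.abs(h - med) > thresh * spread
    h[bad] = neigh.mean(axis=0)[bad]
    return h, bad
```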
Maximum Autocorrelation Factorial Kriging
Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.; Steenfelt, Agnete
2000-01-01
This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax-rotated factors from an ordinary non-spatial factor analysis, and they are interpreted in a geological context. It is demonstrated that MAF analysis, contrary to ordinary non-spatial factor analysis, gives an objective discrimina...
Predicting in ungauged basins using a parsimonious rainfall-runoff model
Skaugen, Thomas; Olav Peerebom, Ivar; Nilsson, Anna
2015-04-01
Prediction in ungauged basins is a demanding, but necessary, test for hydrological model structures. Ideally, the relationship between model parameters and catchment characteristics (CCs) should be hydrologically justifiable. Many studies, however, report a failure to obtain significant correlations between model parameters and CCs. Under the hypothesis that the lack of correlations stems from non-identifiability of model parameters caused by overparameterization, the relatively new parameter-parsimonious DDD (Distance Distribution Dynamics) model was tested for predictions in ungauged basins in Norway. In DDD, the capacity of the subsurface water reservoir M is the only parameter to be calibrated, whereas the runoff dynamics is completely parameterised from observed characteristics derived from GIS and runoff recession analysis. Water is conveyed through the soils to the river network by waves with celerities determined by the level of saturation in the catchment. The distributions of distances between points in the catchment and the nearest river reach, together with that of the river network itself, give, combined with the celerities, distributions of travel times and, consequently, unit hydrographs. DDD has six fewer parameters to calibrate in the runoff module than, for example, the well-known Swedish HBV model. In this study, multiple regression equations relating CCs and model parameters were trained on 84 calibrated catchments located all over Norway, and all model parameters showed significant correlations with catchment characteristics. The significant correlation coefficients (with p-value < 0.05) ranged from 0.22 to 0.55. The suitability of DDD for predictions in ungauged basins was tested for 17 catchments not used to estimate the multiple regression equations. For 10 of the 17 catchments, deviations in Nash-Sutcliffe Efficiency (NSE) criteria between the calibrated and regionalised model were less than 0.1. The median NSE for the regionalised DDD for the 17 catchments, for two
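A minimal sketch of the regionalisation step described above, with entirely illustrative data: calibrated parameter values (here a stand-in for the subsurface capacity M) are regressed on catchment characteristics over the gauged catchments and then predicted for an ungauged one. The CC names and coefficients are invented for the example:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical CCs for 84 gauged catchments (e.g. mean slope, lake
# fraction, soil depth) and the parameter values calibrated on them.
rng = np.random.default_rng(0)
ccs = rng.uniform(size=(84, 3))
m_calibrated = 50 + ccs @ np.array([30.0, -10.0, 5.0]) + rng.normal(0, 2, 84)

# Train the multiple regression relating CCs to the parameter, then
# predict the parameter for an ungauged catchment from its CCs alone.
model = LinearRegression().fit(ccs, m_calibrated)
ccs_ungauged = np.array([[0.4, 0.1, 0.7]])
print("regionalised M:", model.predict(ccs_ungauged))
```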
Urban micro-scale flood risk estimation with parsimonious hydraulic modelling and census data
C. Arrighi
2013-05-01
Full Text Available The adoption of the 2007/60/EC Directive requires European countries to implement flood hazard and flood risk maps by the end of 2013. Flood risk is the product of flood hazard, vulnerability and exposure, all three to be estimated with a comparable level of accuracy. The route to flood risk assessment is consequently much more than hydraulic modelling of inundation, which is hazard mapping alone. While hazard maps have already been implemented in many countries, quantitative damage and risk maps are still at a preliminary level. A parsimonious quasi-2-D hydraulic model, with many advantages in terms of easy set-up, is adopted here and shown to be accurate for flood depth estimation in urban areas when a high-resolution and up-to-date Digital Surface Model (DSM) is available. The accuracy, estimated by comparison with marble-plate records of a historic flood in the city of Florence, is characterized in the downtown's most flooded area by a bias of a very few centimetres and a determination coefficient of 0.73. The average risk is found to be about 14 € m−2 yr−1, corresponding to about 8.3% of residents' income. The spatial distribution of estimated risk highlights a complex interaction between the flood pattern and the building characteristics. As a final example application, the estimated risk values have been used to compare different retrofitting measures. Proceeding through the risk estimation steps, a new micro-scale potential damage assessment method is proposed. This is based on the georeferenced census system as the optimal compromise between spatial detail and open availability of socio-economic data. The results of flood risk assessment at the census section scale resolve most of the risk spatial variability, and they can be easily aggregated to whatever upper scale is needed given that they are geographically defined as contiguous polygons. Damage is calculated through stage–damage curves, starting from census data on building type and
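For readers unfamiliar with how a single annualized risk figure (such as the 14 € m−2 yr−1 above) is typically obtained from stage-damage curves, the standard expected-annual-damage form is sketched below; the discretization over return periods is an assumption of this sketch, not a transcription of the paper's procedure:

```latex
% Expected annual damage: integrate the damage D over the annual
% exceedance probability p; in practice a discrete (trapezoidal) sum
% over the modelled return periods T_i = 1/p_i is used.
R = \int_0^1 D(p)\,\mathrm{d}p
  \;\approx\; \sum_i \tfrac{1}{2}\big(D(p_i) + D(p_{i+1})\big)\,(p_i - p_{i+1}).
```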
A search for model parsimony in a real time flood forecasting system
Grossi, G.; Balistrocchi, M.
2009-04-01
As regards the hydrological simulation of flood events, a physically based distributed approach is the most appealing one, especially in those areas where the spatial variability of the soil hydraulic properties as well as of the meteorological forcing cannot be left apart, such as in mountainous regions. On the other hand, dealing with real-time flood forecasting systems, less detailed models requiring a smaller number of parameters may be more convenient, reducing both the computational costs and the calibration uncertainty. In fact, in this case a precise quantification of the entire hydrograph pattern is not necessary: the expected output of a real-time flood forecasting system is just an estimate of the peak discharge, the time to peak and, in some cases, the flood volume. In this perspective a parsimonious model has to be found in order to increase the efficiency of the system. A suitable case study was identified in the northern Apennines: the Taro river is a right tributary to the Po river and drains about 2000 km2 of mountains, hills and floodplain, equally distributed. The hydrometeorological monitoring of this medium-sized watershed is managed by ARPA Emilia Romagna through a dense network of up-to-date gauges (about 30 rain gauges and 10 hydrometers). Detailed maps of the surface elevation, land use and soil texture characteristics are also available. Five flood events were recorded by the new monitoring network in the years 2003-2007: during these events the peak discharge was higher than 1000 m3/s, quite a high value when compared to the mean discharge rate of about 30 m3/s. The rainfall spatial patterns of such storms were analyzed in previous works by means of geostatistical tools, and a typical semivariogram was defined with the aim of establishing a typical storm structure leading to flood events in the Taro river. The available information was implemented into a distributed flood event model with a spatial resolution of 90 m.
Salas-Leiva, Dayana E; Meerow, Alan W; Calonje, Michael; Griffith, M Patrick; Francisco-Ortega, Javier; Nakamura, Kyoko; Stevenson, Dennis W; Lewis, Carl E; Namoff, Sandra
2013-01-01
.... The specific aim is to evaluate several gene tree-species tree reconciliation approaches for developing an accurate phylogeny of the order, to contrast them with concatenated parsimony analysis...
Matthew R. Borths
2016-11-01
the topologies recovered from each phylogenetic method, we reconstructed the biogeographic history of Hyaenodonta using parsimony optimization (PO), likelihood optimization (LO), and Bayesian Binary Markov chain Monte Carlo (MCMC) to examine support for the Afro-Arabian origin of Hyaenodonta. Across all analyses, we found that Hyaenodonta most likely originated in Europe, rather than Afro-Arabia. The clade is estimated by tip-dating analysis to have undergone a rapid radiation in the Late Cretaceous and Paleocene; a radiation currently not documented by fossil evidence. During the Paleocene, lineages are reconstructed as dispersing to Asia, Afro-Arabia, and North America. The place of origin of Hyainailouroidea is likely Afro-Arabia according to the Bayesian topologies, but it is ambiguous using parsimony. All topologies support the constituent clades (Hyainailourinae, Apterodontinae, and Teratodontinae) as Afro-Arabian, and tip-dating estimates that each clade was established in Afro-Arabia by the middle Eocene.
Maximum likelihood reconstruction for Ising models with asynchronous updates
Zeng, Hong-Li; Aurell, Erik; Hertz, John; Roudi, Yasser
2012-01-01
We describe how the couplings in a non-equilibrium Ising model can be inferred from observing the model history. Two cases of an asynchronous update scheme are considered: one in which we know both the spin history and the update times (times at which an attempt was made to flip a spin) and one in which we only know the spin history (i.e., the times at which spins were actually flipped). In both cases, maximizing the likelihood of the data leads to exact learning rules for the couplings in the model. For the first case, we show that one can average over all possible choices of update times to obtain a learning rule that depends only on spin correlations and not on the specific spin history. For the second case, the same rule can be derived within a further decoupling approximation. We study all methods numerically for fully asymmetric Sherrington-Kirkpatrick models, varying the data length, system size, temperature, and external field. Good convergence is observed in accordance with the theoretical expectatio...
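For the first case (spin history plus known update times), the likelihood of Glauber-type asynchronous dynamics takes a simple standard form; the block below is a sketch of that generic form, with h_i, theta_i, and the update set as assumed notation rather than a transcription of the paper's equations:

```latex
% Each attempted update of spin i at time t contributes a Glauber factor,
% with local field h_i(t) = theta_i + sum_j J_ij s_j(t) and s_i(t^+) the
% spin value after the attempt:
\mathcal{L}(J) = \sum_{\text{updates}\,(i,t)}
  \Big[ s_i(t^{+})\, h_i(t) - \ln 2\cosh h_i(t) \Big],
% so gradient ascent on the log-likelihood gives the learning rule
\frac{\partial \mathcal{L}}{\partial J_{ij}}
  = \sum_{\text{updates}\,(i,t)} \big[ s_i(t^{+}) - \tanh h_i(t) \big]\, s_j(t).
```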
Giulio Garaffa; Salvatore Sansalone; David J Ralph
2013-01-01
During the most recent years, a variety of new techniques of penile reconstruction have been described in the literature. This paper focuses on the most recent advances in male genital reconstruction after trauma, excision of benign and malignant disease, in gender reassignment surgery, and in aphallia, with emphasis on surgical technique and on cosmetic and functional outcome.
Rowland, Michael A; Perkins, Edward J; Mayo, Michael L
2017-03-11
Physiologically-based toxicokinetic (PBTK) models are often developed to facilitate in vitro to in vivo extrapolation (IVIVE) using a top-down, compartmental approach, favoring architectural simplicity over physiological fidelity despite the lack of general guidelines relating model design to dynamical predictions. Here we explore the impact of design choice (high vs. low fidelity) on chemical distribution throughout an animal's organ system. We contrast transient dynamics and steady states of three previously proposed PBTK models of varying complexity in response to chemical exposure. The steady states for each model were determined analytically to predict exposure conditions from tissue measurements. Steady state whole-body concentrations differ between models, despite identical environmental conditions, which originates from the varying levels of physiological fidelity captured by the models. These differences affect the relative predictive accuracy of the inverted models used in exposure reconstruction to link effects-based exposure data with whole-organism response thresholds obtained from in vitro assay measurements. Our results demonstrate how disregarding physiological fidelity in favor of simpler models affects the internal dynamics and steady state estimates for chemical accumulation within tissues, which, in turn, poses significant challenges for the exposure reconstruction efforts that underlie many IVIVE methods. Developing standardized systems-level models for ecological organisms would not only ensure predictive consistency among future modeling studies, but also ensure pragmatic extrapolation of in vivo effects from in vitro data or modeling of exposure-response relationships.
Maximum Likelihood Associative Memories
Gripon, Vincent; Rabbat, Michael
2013-01-01
Associative memories are structures that store data in such a way that it can later be retrieved given only a part of its content -- a sort-of error/erasure-resilience property. They are used in applications ranging from caches and memory management in CPUs to database engines. In this work we study associative memories built on the maximum likelihood principle. We derive minimum residual error rates when the data stored comes from a uniform binary source. Second, we determine the minimum amo...
Maximum likely scale estimation
Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo
2005-01-01
A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or having different derivative orders. Although the principle is applicable to a wide variety of image models, the main focus here is on the Brownian model and its use for scale selection in natural images. Furthermore, in the examples provided, the simplifying assumption is made that the behavior of the measurements is completely characterized by all moments up to second order.
Glickel, Steven Z; Gupta, Salil
2006-05-01
Volar ligament reconstruction is an effective technique for treating symptomatic laxity of the CMC joint of the thumb. The laxity may be a manifestation of generalized ligament laxity, post-traumatic, or metabolic (Ehlers-Danlos). The reconstruction reduces the shear forces on the joint that contribute to the development and persistence of inflammation. Although there have been only a few reports of the results of volar ligament reconstruction, the use of the procedure to treat Stage I and Stage II disease consistently gives good to excellent results. More advanced stages of disease are best treated by trapeziectomy, with or without ligament reconstruction.
Mirus, Benjamin B.; Nimmo, J.R.
2013-01-01
The impact of preferential flow on recharge and contaminant transport poses a considerable challenge to water-resources management. Typical hydrologic models require extensive site characterization, but can underestimate fluxes when preferential flow is significant. A recently developed source-responsive model incorporates film-flow theory with conservation of mass to estimate unsaturated-zone preferential fluxes with readily available data. The term source-responsive describes the sensitivity of preferential flow in response to water availability at the source of input. We present the first rigorous tests of a parsimonious formulation for simulating water table fluctuations using two case studies, both in arid regions with thick unsaturated zones of fractured volcanic rock. Diffuse flow theory cannot adequately capture the observed water table responses at both sites; the source-responsive model is a viable alternative. We treat the active area fraction of preferential flow paths as a scaled function of water inputs at the land surface then calibrate the macropore density to fit observed water table rises. Unlike previous applications, we allow the characteristic film-flow velocity to vary, reflecting the lag time between source and deep water table responses. Analysis of model performance and parameter sensitivity for the two case studies underscores the importance of identifying thresholds for initiation of film flow in unsaturated rocks, and suggests that this parsimonious approach is potentially of great practical value.
Large-scale parsimony analysis of metazoan indels in protein-coding genes.
Belinky, Frida; Cohen, Ofir; Huchon, Dorothée
2010-02-01
Insertions and deletions (indels) are considered to be rare evolutionary events, the analysis of which may resolve controversial phylogenetic relationships. Indeed, indel characters are often assumed to be less homoplastic than amino acid and nucleotide substitutions and, consequently, more reliable markers for phylogenetic reconstruction. In this study, we analyzed indels from over 1,000 metazoan orthologous genes. We studied the impact of different species sampling, ortholog data sets, lengths of included indels, and indel-coding methods on the resulting metazoan tree. Our results show that, similar to sequence substitutions, indels are homoplastic characters, and their analysis is sensitive to the long-branch attraction artifact. Furthermore, improving the taxon sampling and choosing a closely related outgroup greatly impact the phylogenetic inference. Our indel-based inferences support the Ecdysozoa hypothesis over the Coelomata hypothesis and suggest that sponges are a sister clade to other animals.
F. Topsøe
2001-09-01
Full Text Available Abstract: In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions, and it appears advantageous to bring information-theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow up on here. It leads to the consideration of a certain game, the Code Length Game, and, via standard game-theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of the fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model, which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature, with hundreds of applications pertaining to several different fields, and will also serve here as an important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over
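The Mean Energy Model mentioned above has a familiar closed-form solution, sketched here as a reminder (standard material, not specific to this paper): maximizing the entropy subject to a prescribed mean energy yields the Gibbs distribution.

```latex
% Maximize H(p) = -sum_i p_i log p_i subject to sum_i p_i E_i = E.
% The maximizer is the Gibbs/Boltzmann distribution, with beta chosen
% so that the mean-energy constraint holds:
p_i^{*} = \frac{e^{-\beta E_i}}{Z(\beta)},
\qquad
Z(\beta) = \sum_j e^{-\beta E_j}.
```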
National Oceanic and Atmospheric Administration, Department of Commerce — The NOAA Paleoclimatology Program archives reconstructions of past climatic conditions derived from paleoclimate proxies, in addition to the Program's large holdings...
Regularized maximum correntropy machine
Wang, Jim Jing-Yan
2015-02-12
In this paper we investigate the usage of the regularized correntropy framework for learning classifiers from noisy labels. The class label predictors learned by minimizing traditional loss functions are sensitive to the noisy and outlying labels of training samples, because the traditional loss functions are applied equally to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with traditional loss functions.
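A sketch of the kind of objective the abstract describes, in generic notation (the predictor f, kernel width sigma, and penalty weight lambda are assumptions of this sketch): the Gaussian-kernel correntropy between predictions and labels is maximized, so each sample's influence is bounded, while a norm penalty controls predictor complexity.

```latex
% Regularized MCC objective: samples with large residuals contribute
% little because the Gaussian kernel saturates, unlike squared loss,
% which is what makes the criterion robust to noisy labels.
\max_{w} \; \sum_{i=1}^{n}
  \exp\!\Big(-\frac{\big(y_i - f(x_i; w)\big)^2}{2\sigma^2}\Big)
  - \lambda \lVert w \rVert_2^2 .
```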
Casewell, Nicholas R; Wagstaff, Simon C; Harrison, Robert A; Wüster, Wolfgang
2011-03-01
The proliferation of gene data from multiple loci of large multigene families has been greatly facilitated by considerable recent advances in sequence generation. The evolution of such gene families, which often undergo complex histories and different rates of change, combined with increases in sequence data, pose complex problems for traditional phylogenetic analyses, and in particular, those that aim to successfully recover species relationships from gene trees. Here, we implement gene tree parsimony analyses on multicopy gene family data sets of snake venom proteins for two separate groups of taxa, incorporating Bayesian posterior distributions as a rigorous strategy to account for the uncertainty present in gene trees. Gene tree parsimony largely failed to infer species trees congruent with each other or with species phylogenies derived from mitochondrial and single-copy nuclear sequences. Analysis of four toxin gene families from a large expressed sequence tag data set from the viper genus Echis failed to produce a consistent topology, and reanalysis of a previously published gene tree parsimony data set, from the family Elapidae, suggested that species tree topologies were predominantly unsupported. We suggest that gene tree parsimony failure in the family Elapidae is likely the result of unequal and/or incomplete sampling of paralogous genes and demonstrate that multiple parallel gene losses are likely responsible for the significant species tree conflict observed in the genus Echis. These results highlight the potential for gene tree parsimony analyses to be undermined by rapidly evolving multilocus gene families under strong natural selection.
Equalized near maximum likelihood detector
2012-01-01
This paper presents a new detector used to mitigate intersymbol interference introduced by bandlimited channels. The detector, named the equalized near maximum likelihood detector, combines a nonlinear equalizer with a near maximum likelihood detector. Simulation results show that its performance is better than that of the nonlinear equalizer but worse than that of the near maximum likelihood detector.
Cheeseman, Peter; Stutz, John
2005-01-01
A long-standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E.T. Jaynes [1], is to ignore this uncertainty, and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes that the uncertain constraint values are represented by a probability density (e.g., a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaxEnt probabilities. We illustrate this approach by explicitly calculating the generalized MaxEnt density for a simple but common case, then show how this can be extended numerically to the general case. This paper expands the generalized MaxEnt concept introduced in a previous paper [3].
Carlos Alberto Gonçalves
2013-09-01
Full Text Available This work presents the configurations and degree of intensity of Organizational Performance based on significant antecedents in two sectors, manufacturing and services. It presents measurements of the effects of combinations of Managerial Factors, External Environment, Internal Organizational Efforts, and Strategy Process on Organizational Performance. The research collected data by interview and survey, and the statistical analysis was made by Structural Equation Modeling and Qualitative Comparative Analysis (QCA). The construct Strategy Process is the most important in explaining Organizational Performance relative to the others. It was also observed that the manufacturing and service sectors have different parsimonious configurations explaining Organizational Performance.
Xiao-Lei Huang
2010-05-01
Full Text Available Parsimony analysis of endemicity (PAE) was used to identify areas of endemism (AOEs) for Chinese birds at the subregional level. Four AOEs were identified based on a distribution database of 105 endemic species, using 18 avifaunal subregions as the operating geographical units (OGUs). The four AOEs are the Qinghai-Zangnan Subregion, the Southwest Mountainous Subregion, the Hainan Subregion and the Taiwan Subregion. Cladistic analysis of subregions generally supports the division of China's avifauna into Palaearctic and Oriental realms. Two PAE area trees were produced from two different distribution datasets (years 1976 and 2007). The 1976 topology has four distinct subregional branches, whereas the 2007 topology has three. Moreover, three Palaearctic subregions in the 1976 tree clustered together with the Oriental subregions in the 2007 tree. Such topological differences may reflect changes in the distribution of bird species over roughly three decades.
Demirhan Erdal
2015-01-01
Full Text Available This paper aims to investigate the effect of exchange-rate stability on real export volume in Turkey, using monthly data for the period February 2001 to January 2010. The Johansen multivariate cointegration method and the parsimonious error-correction model are applied to determine long-run and short-run relationships between real export volume and its determinants. In this study, the conditional variance of the GARCH(1,1) model is taken as a proxy for exchange-rate stability, and generalized impulse-response functions and variance-decomposition analyses are applied to analyze the dynamic effects of variables on real export volume. The empirical findings suggest that exchange-rate stability has a significant positive effect on real export volume, both in the short and the long run.
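For reference, the GARCH(1,1) conditional variance used as the stability proxy has the standard textbook form below (not specific to this paper):

```latex
% omega > 0, alpha, beta >= 0: alpha weights yesterday's squared shock
% eps_{t-1}^2, beta the persistence of past variance. A lower sigma_t^2
% corresponds to a more stable exchange rate.
\sigma_t^2 = \omega + \alpha\, \varepsilon_{t-1}^2 + \beta\, \sigma_{t-1}^2 .
```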
Tarasov, Sergei; Génier, François
2015-01-01
Scarabaeine dung beetles are the dominant dung feeding group of insects and are widely used as model organisms in conservation, ecology and developmental biology. Due to the conflicts among 13 recently published phylogenies dealing with the higher-level relationships of dung beetles, the phylogeny of this lineage remains largely unresolved. In this study, we conduct rigorous phylogenetic analyses of dung beetles, based on an unprecedented taxon sample (110 taxa) and detailed investigation of morphology (205 characters). We provide the description of morphology and thoroughly illustrate the used characters. Along with parsimony, traditionally used in the analysis of morphological data, we also apply the Bayesian method with a novel approach that uses anatomy ontology for matrix partitioning. This approach allows for heterogeneity in evolutionary rates among characters from different anatomical regions. Anatomy ontology generates a number of parameter-partition schemes which we compare using Bayes factor. We also test the effect of inclusion of autapomorphies in the morphological analysis, which hitherto has not been examined. Generally, schemes with more parameters were favored in the Bayesian comparison suggesting that characters located on different body regions evolve at different rates and that partitioning of the data matrix using anatomy ontology is reasonable; however, trees from the parsimony and all the Bayesian analyses were quite consistent. The hypothesized phylogeny reveals many novel clades and provides additional support for some clades recovered in previous analyses. Our results provide a solid basis for a new classification of dung beetles, in which the taxonomic limits of the tribes Dichotomiini, Deltochilini and Coprini are restricted and many new tribes must be described. Based on the consistency of the phylogeny with biogeography, we speculate that dung beetles may have originated in the Mesozoic contrary to the traditional view pointing to a
C. Hahn
2013-10-01
Full Text Available Eutrophication of surface waters due to diffuse phosphorus (P) losses continues to be a severe water quality problem worldwide, causing the loss of ecosystem functions of the respective water bodies. Phosphorus in runoff often originates from only a small fraction of a catchment. Targeting mitigation measures to these critical source areas (CSAs) is expected to be most efficient and cost-effective, but requires suitable tools. Here we investigated the capability of the parsimonious Rainfall-Runoff-Phosphorus (RRP) model to identify CSAs in grassland-dominated catchments based on readily available soil and topographic data. After simultaneous calibration on runoff data from four small hilly catchments on the Swiss Plateau, the model was validated on a different catchment in the same region without further calibration. The RRP model adequately simulated the discharge and dissolved reactive P (DRP) export from the validation catchment. Sensitivity analysis showed that the model predictions were robust with respect to the classification of soils into "poorly drained" and "well drained", based on the available soil map. Comparing spatial hydrological model predictions with field data from the validation catchment provided further evidence that the assumptions underlying the model are valid and that the model adequately accounts for the dominant P export processes in the target region. Thus, the parsimonious RRP model is a valuable tool that can be used to determine CSAs. Despite the considerable predictive uncertainty regarding the spatial extent of CSAs, the RRP model can provide guidance for the implementation of mitigation measures. The model helps to identify those parts of a catchment where high DRP losses are expected or can be excluded with high confidence. Legacy P was predicted to be the dominant source of DRP losses and thus, in combination with hydrologically active areas, a high risk for water quality.
Zhu, Hong-Ming; Pen, Ue-Li; Chen, Xuelei; Yu, Hao-Ran
2016-01-01
We present a direct approach to non-parametrically reconstruct the linear density field from an observed non-linear map. We solve for the unique displacement potential consistent with the non-linear density and positive definite coordinate transformation using a multigrid algorithm. We show that we recover the linear initial conditions up to $k\\sim 1\\ h/\\mathrm{Mpc}$ with minimal computational cost. This reconstruction approach generalizes the linear displacement theory to fully non-linear fields, potentially substantially expanding the BAO and RSD information content of dense large scale structure surveys, including for example SDSS main sample and 21cm intensity mapping.
Iterative reconstruction of a region of interest for transmission tomography.
Ziegler, Andy; Nielsen, Tim; Grass, Michael
2008-04-01
It was shown that images reconstructed for transmission tomography with iterative maximum likelihood (ML) algorithms exhibit a higher signal-to-noise ratio than images reconstructed with filtered back-projection type algorithms. However, a drawback of ML reconstruction in particular and iterative reconstruction in general is the requirement that the reconstructed field of view (FOV) has to cover the whole volume that contributes to the absorption. In the case of a high resolution reconstruction, this demands a huge number of voxels. This article shows how an iterative ML reconstruction can be limited to a region of interest (ROI) without losing the advantages of a ML reconstruction. Compared with a full FOV ML reconstruction, the reconstruction speed is mainly increased by reducing the number of voxels which are necessary for a ROI reconstruction. In addition, the speed of convergence is increased.
Maximum principle and convergence of central schemes based on slope limiters
Mehmetoglu, Orhan
2012-01-01
A maximum principle and convergence of second order central schemes is proven for scalar conservation laws in dimension one. It is well known that to establish a maximum principle a nonlinear piecewise linear reconstruction is needed and a typical choice is the minmod limiter. Unfortunately, this implies that the scheme uses a first order reconstruction at local extrema. The novelty here is that we allow local nonlinear reconstructions which do not reduce to first order at local extrema and still prove maximum principle and convergence. © 2011 American Mathematical Society.
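The minmod limiter referred to above has the standard definition sketched here (generic notation, not copied from the paper):

```latex
% Slopes are limited to the smaller one-sided difference when both agree
% in sign, and set to zero at local extrema; the latter is exactly the
% first-order reduction at extrema that the paper's weaker conditions relax.
\operatorname{minmod}(a,b) =
\begin{cases}
  \operatorname{sgn}(a)\,\min(|a|,|b|), & ab > 0,\\
  0, & \text{otherwise},
\end{cases}
\qquad
u_j' = \operatorname{minmod}\!\Big(\frac{u_{j+1}-u_j}{\Delta x},\,
  \frac{u_j-u_{j-1}}{\Delta x}\Big).
```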
Bayesian image reconstruction: Application to emission tomography
Nunez, J.; Llacer, J.
1989-02-01
In this paper we propose a Maximum a Posteriori (MAP) method of image reconstruction in the Bayesian framework for the Poisson noise case. We use entropy to define the prior probability and likelihood to define the conditional probability. The method uses sharpness parameters which can be theoretically computed or adjusted, allowing us to obtain MAP reconstructions without the problem of the "grey" reconstructions associated with the pre-Bayesian reconstructions. We have developed several ways to solve the reconstruction problem and propose a new iterative algorithm which is stable, maintains positivity and converges to feasible images faster than the Maximum Likelihood Estimate method. We have successfully applied the new method to the case of Emission Tomography, both with simulated and real data. 41 refs., 4 figs., 1 tab.
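The MAP estimate described above can be written generically as follows; this is a sketch in standard notation (A the system matrix, y the measured counts, alpha the sharpness parameter), not the paper's exact formulation:

```latex
% Poisson log-likelihood plus an entropy prior S(x) on the image x;
% the constraint x >= 0 reflects the positivity preserved by the iteration.
\hat{x} = \arg\max_{x \ge 0}\;
  \sum_i \Big( y_i \ln (A x)_i - (A x)_i \Big) + \alpha\, S(x).
```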
2009-01-01
Eighty percent of the reconstruction projects in Sichuan Province will be completed by the end of the year. Despite ruins still seen everywhere in the earthquake-hit areas in Sichuan Province, new buildings have been completed, and many people have moved into new houses. Through the cameras of the media, the faces, once painful and melancholy after last year's earthquake, now look confident and firm, gratifying people all over the
Brown James
2007-12-01
Full Text Available This article discusses the various defects that occur with maxillectomy, with a full review of the literature and discussion of the advantages and disadvantages of the various techniques described. Reconstruction of the maxilla can be relatively simple for the standard low maxillectomy that does not involve the orbital floor (Class 2). In this situation the structure of the face is less damaged and there are multiple reconstructive options for the restoration of the maxilla and dental alveolus. If the maxillectomy includes the orbit (Class 4), then problems involving the eye (enophthalmos, orbital dystopia, ectropion and diplopia) are avoided, which simplifies the reconstruction. Most controversy is associated with the maxillectomy that involves the orbital floor and dental alveolus (Class 3). A case is made for the use of the iliac crest with internal oblique as an ideal option, but there are other methods which may provide a similar result. A multidisciplinary approach to these patients is emphasised, which should include a prosthodontist with special expertise in these defects.
Gomez-Velez, J. D.; Harvey, J. W.
2014-12-01
Hyporheic exchange has been hypothesized to have basin-scale consequences; however, predictions throughout river networks are limited by available geomorphic and hydrogeologic data as well as models that can analyze and aggregate hyporheic exchange flows across large spatial scales. We developed a parsimonious but physically-based model of hyporheic flow for application in large river basins: Networks with EXchange and Subsurface Storage (NEXSS). At the core of NEXSS is a characterization of the channel geometry, geomorphic features, and related hydraulic drivers based on scaling equations from the literature and readily accessible information such as river discharge, bankfull width, median grain size, sinuosity, channel slope, and regional groundwater gradients. Multi-scale hyporheic flow is computed based on combining simple but powerful analytical and numerical expressions that have been previously published. We applied NEXSS across a broad range of geomorphic diversity in river reaches and synthetic river networks. NEXSS demonstrates that vertical exchange beneath submerged bedforms dominates hyporheic fluxes and turnover rates along the river corridor. Moreover, the hyporheic zone's potential for biogeochemical transformations is comparable across stream orders, but the abundance of lower-order channels results in a considerably higher cumulative effect for low-order streams. Thus, vertical exchange beneath submerged bedforms has more potential for biogeochemical transformations than lateral exchange beneath banks, although lateral exchange through meanders may be important in large rivers. These results have implications for predicting outcomes of river and basin management practices.
Matthews, Luke J; Rosenberger, Alfred L
2008-11-01
The classifications of primates, in general, and platyrrhine primates, in particular, have been greatly revised subsequent to the rationale for taxonomic decisions shifting from one rooted in the biological species concept to one rooted solely in phylogenetic affiliations. Given the phylogenetic justification provided for revised taxonomies, the scientific validity of taxonomic distinctions can be rightly judged by the robusticity of the phylogenetic results supporting them. In this study, we empirically investigated taxonomic-sampling effects on a cladogram previously inferred from craniodental data for the woolly monkeys (Lagothrix). We conducted the study primarily through much greater sampling of species-level taxa (OTUs) after improving some character codings and under a variety of outgroup choices. The results indicate that alternative selections of species subsets from within genera produce various tree topologies. These results stand even after adjusting the character set and considering the potential role of interobserver disagreement. We conclude that specific taxon combinations, in this case, generic or species pairings, of the primary study group has a biasing effect in parsimony analysis, and that the cladistic rationale for resurrecting the Oreonax generic distinction for the yellow-tailed woolly monkey (Lagothrix flavicauda) is based on an artifact of idiosyncratic sampling within the study group below the genus level. Some recommendations to minimize the problem, which is prevalent in all cladistic analyses, are proposed.
Dispersal routes reconstruction and the minimum cost arborescence problem.
Hordijk, Wim; Broennimann, Olivier
2012-09-01
We show that the dispersal routes reconstruction problem can be stated as an instance of a graph theoretical problem known as the minimum cost arborescence problem, for which there exist efficient algorithms. Furthermore, we derive some theoretical results, in a simplified setting, on the possible optimal values that can be obtained for this problem. With this, we place the dispersal routes reconstruction problem on solid theoretical grounds, establishing it as a tractable problem that also lends itself to formal mathematical and computational analysis. Finally, we present an insightful example of how this framework can be applied to real data. We propose that our computational method can be used to define the most parsimonious dispersal (or invasion) scenarios, which can then be tested using complementary methods such as genetic analysis.
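As an illustration of the reduction the abstract describes, the sketch below casts a toy dispersal problem as a minimum cost arborescence and solves it with networkx; the locations, costs, and root are invented for the example:

```python
import networkx as nx

# Nodes are sampled locations; directed edge weights are hypothetical
# dispersal costs. The minimum cost arborescence spanning all locations
# is then the most parsimonious dispersal scenario.
G = nx.DiGraph()
G.add_weighted_edges_from([
    ("origin", "A", 1.0), ("origin", "B", 4.0),
    ("A", "B", 2.0), ("A", "C", 5.0), ("B", "C", 1.5),
])

# Edmonds' algorithm finds the spanning arborescence of minimum total
# weight; here "origin" must be the root since no edge points into it.
T = nx.minimum_spanning_arborescence(G)
print(sorted(T.edges(data="weight")))
```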
Method for positron emission mammography image reconstruction
Smith, Mark Frederick
2004-10-12
An image reconstruction method comprising accepting coincidence data from either a data file or in real time from a pair of detector heads, culling event data that is outside a desired energy range, optionally saving the desired data for each detector position or for each pair of detector pixels on the two detector heads, and then reconstructing the image either by backprojection image reconstruction or by iterative image reconstruction. In the backprojection image reconstruction mode, rays are traced between centers of lines of response (LORs), counts are then either allocated by nearest-pixel interpolation or allocated by an overlap method, then corrected for geometric effects and attenuation, and the data file updated. If the iterative image reconstruction option is selected, one implementation is to compute a grid Siddon ray tracing, and to perform maximum likelihood expectation maximization (MLEM) computed by either: a) tracing parallel rays between subpixels on opposite detector heads; or b) tracing rays between randomized endpoint locations on opposite detector heads.
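A minimal sketch of the iterative option, assuming a precomputed system matrix A (e.g. from Siddon ray tracing) and measured counts y; the function name and the small damping constants are choices of this sketch:

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """Minimal MLEM sketch.

    A: (n_lors, n_pixels) system matrix, y: coincidence counts per line
    of response. The multiplicative update preserves positivity and
    increases the Poisson likelihood at every iteration.
    """
    x = np.ones(A.shape[1])             # flat initial image
    sens = A.sum(axis=0) + 1e-12        # sensitivity image, A^T 1
    for _ in range(n_iter):
        proj = A @ x + 1e-12            # forward projection
        x *= (A.T @ (y / proj)) / sens  # backproject ratio, normalise
    return x
```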
Fu Xiaoqiang
2006-01-01
The Karzai regime has made some progress over the past four and a half years in the post-war reconstruction. However, the Taliban's destruction and the drug economy are still having serious impacts on the security and stability of Afghanistan. Hence the settlement of these two problems has become a crux affecting the country's future. Moreover, the Karzai regime has yet to handle a series of hot potatoes in the fields of the central government's authority, military and police build-up, and foreign relations as well.
Schminkey, Donna L; von Oertzen, Timo; Bullock, Linda
2016-08-01
With increasing access to population-based data and electronic health records for secondary analysis, missing data are common. In the social and behavioral sciences, missing data frequently are handled with multiple imputation methods or full information maximum likelihood (FIML) techniques, but healthcare researchers have not embraced these methodologies to the same extent and more often use either traditional imputation techniques or complete case analysis, which can compromise power and introduce unintended bias. This article is a review of options for handling missing data, concluding with a case study demonstrating the utility of multilevel structural equation modeling using full information maximum likelihood (MSEM with FIML) to handle large amounts of missing data. MSEM with FIML is a parsimonious and hypothesis-driven strategy to cope with large amounts of missing data without compromising power or introducing bias. This technique is relevant for nurse researchers faced with ever-increasing amounts of electronic data and decreasing research budgets. © 2016 Wiley Periodicals, Inc.
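The FIML idea referred to above is that each case contributes the likelihood of exactly the variables observed on it; in standard notation (a sketch, not the article's own derivation):

```latex
% Casewise FIML log-likelihood: k_i is the number of variables observed
% on case i, and mu_i, Sigma_i are the corresponding subvector and
% submatrix of the model-implied mean and covariance, so no imputation
% or listwise deletion is needed.
\log L(\theta) = \sum_{i=1}^{N}
  \Big[ -\tfrac{k_i}{2}\ln(2\pi) - \tfrac{1}{2}\ln\lvert\Sigma_i(\theta)\rvert
  - \tfrac{1}{2}\,(y_i - \mu_i(\theta))^{\top}\,
    \Sigma_i(\theta)^{-1}\,(y_i - \mu_i(\theta)) \Big].
```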
Montgomery, Stephen H; Capellini, Isabella; Barton, Robert A; Mundy, Nicholas I
2010-01-27
Brain size is a key adaptive trait. It is often assumed that increasing brain size was a general evolutionary trend in primates, yet recent fossil discoveries have documented brain size decreases in some lineages, raising the question of how general a trend there was for brains to increase in mass over evolutionary time. We present the first systematic phylogenetic analysis designed to answer this question. We performed ancestral state reconstructions of three traits (absolute brain mass, absolute body mass, relative brain mass) using 37 extant and 23 extinct primate species and three approaches to ancestral state reconstruction: parsimony, maximum likelihood and Bayesian Markov-chain Monte Carlo. Both absolute and relative brain mass generally increased over evolutionary time, but body mass did not. Nevertheless both absolute and relative brain mass decreased along several branches. Applying these results to the contentious case of Homo floresiensis, we find a number of scenarios under which the proposed evolution of Homo floresiensis' small brain appears to be consistent with patterns observed along other lineages, dependent on body mass and phylogenetic position. Our results confirm that brain expansion began early in primate evolution and show that increases occurred in all major clades. Only in terms of an increase in absolute mass does the human lineage appear particularly striking, with both the rate of proportional change in mass and relative brain size having episodes of greater expansion elsewhere on the primate phylogeny. However, decreases in brain mass also occurred along branches in all major clades, and we conclude that, while selection has acted to enlarge primate brains, in some lineages this trend has been reversed. Further analyses of the phylogenetic position of Homo floresiensis and better body mass estimates are required to confirm the plausibility of the evolution of its small brain mass. We find that for our dataset the Bayesian analysis for
Dohrmann, Martin; Kelley, Christopher; Kelly, Michelle; Pisera, Andrzej; Hooper, John N A; Reiswig, Henry M
2017-01-01
Glass sponges (Class Hexactinellida) are important components of deep-sea ecosystems and are of interest from geological and materials science perspectives. The reconstruction of their phylogeny with molecular data has only recently begun and shows a better agreement with morphology-based systematics than is typical for other sponge groups, likely because of a greater number of informative morphological characters. However, inconsistencies remain that have far-reaching implications for hypotheses about the evolution of their major skeletal construction types (body plans). Furthermore, less than half of all described extant genera have been sampled for molecular systematics, and several taxa important for understanding skeletal evolution are still missing. Increased taxon sampling for molecular phylogenetics of this group is therefore urgently needed. However, due to their remote habitat and often poorly preserved museum material, sequencing all 126 currently recognized extant genera will be difficult to achieve. Utilizing morphological data to incorporate unsequenced taxa into an integrative systematics framework therefore holds great promise, but it is unclear which methodological approach best suits this task. Here, we increase the taxon sampling of four previously established molecular markers (18S, 28S, and 16S ribosomal DNA, as well as cytochrome oxidase subunit I) by 12 genera, for the first time including representatives of the order Aulocalycoida and the type genus of Dactylocalycidae, taxa that are key to understanding hexactinellid body plan evolution. Phylogenetic analyses suggest that Aulocalycoida is diphyletic and provide further support for the paraphyly of order Hexactinosida; hence these orders are abolished from the Linnean classification. We further assembled morphological character matrices to integrate so far unsequenced genera into phylogenetic analyses in maximum parsimony (MP), maximum likelihood (ML), Bayesian, and morphology-based binning approaches.
Yu Kaifeng; Lu Huayu; Frank Lehmkuhl; Veit Nottebaum
2013-01-01
Dune fields in the arid and semi-arid regions of northern China, including the Mu Us, Otindag, Horqin, Songnen and Hulunbuir sandy lands, parts of the Badain Jaran and Tengger deserts, and the Loess Plateau (32°-50°N, 100°-135°E), are situated at the margin of the area dominated by the East Asian monsoon circulation, and their environment is sensitive to palaeoclimate change. Many palaeoclimatic reconstructions have been undertaken in this region, typically from pollen, peat bogs, loess-palaeosol sequences, lake sediments, etc., and most of these records have good chronological control. In this paper, we collect, evaluate, and analyze palaeoclimate proxy data from more than 300 studies, datasets, and relevant atlases, and quantitatively reconstruct the spatial patterns of palaeotemperature and palaeoprecipitation for two characteristic intervals. Compared with modern annual values, during the Last Glacial Maximum (LGM, ca. 26-16 ka) temperature decreased by 5-11 °C (a variability of 60%-200%), with the strongest cooling at the southern margin of the Loess Plateau, and precipitation decreased by 180-350 mm (a variability of about 50%), with little change in northeast China. During the mid-Holocene optimum (ca. 9-5 ka), temperature increased by 1.0-3.5 °C (a variability of 20%-130%) and precipitation increased by 30-400 mm (a variability of 10%-120%, with some outliers that need to be excluded), the precipitation increase growing from the eastern coast towards the northwest inland. These results provide quantitative data for further exploring the mechanisms of palaeoclimate change in the arid and semi-arid regions of northern China and for testing numerical simulations of palaeoclimate in the East Asian monsoon region.
Meij, E.; Trieschnigg, D.; Rijke, M.de; Kraaij, W.
2008-01-01
In many collections, documents are annotated using concepts from a structured knowledge source such as an ontology or thesaurus. Examples include the news domain [7], where each news item is categorized according to the nature of the event that took place, and Wikipedia, with its per-article categorization.
Eder, Wolfgang; Ives Torres-Silva, Ana; Hohenegger, Johann
2017-04-01
Phylogenetic analyses and trees based on molecular data are broadly applied to infer genetic and biogeographic relationships among recent larger foraminifera. Molecular phylogenetics is used intensively within recent nummulitids, but for fossil representatives such trees are of only minor informational value; within palaeontological studies, a phylogenetic approach through morphometric analysis is therefore of much greater value. To tackle phylogenetic relationships within the nummulitid family, a much larger number of morphological characters must be measured than is common in biometric studies, where mostly parameters describing embryonic size (e.g., proloculus diameter, deuteroloculus diameter) and/or the marginal spiral (e.g., spiral diagrams, spiral indices) are studied. For this purpose, 11 growth-independent and/or growth-invariant characters were used to describe the morphological variability of equatorial thin sections of seven Caribbean nummulitid taxa (Nummulites striatoreticulatus, N. macgillavry, Palaeonummulites willcoxi, P. floridensis, P. soldadensis, P. trinitatensis and P. ocalanus) and one outgroup taxon (Ranikothalia bermudezi). Using these characters, phylogenetic trees were calculated with a restricted maximum likelihood algorithm (REML), and the results were cross-checked by ordination and cluster analysis. The squared-change parsimony method was run to reconstruct ancestral states and to simulate the evolution of the chosen characters along the calculated phylogenetic tree, and independent-contrasts analysis was used to estimate confidence intervals. Based on these simulations, phylogenetic tendencies proposed for certain nummulitid characters (e.g., Cope's rule or nepionic acceleration) can be tested as to whether they hold for the whole family or only for certain clades.
Iqbal, Abdullah; Valous, Nektarios A; Sun, Da-Wen; Allen, Paul
2011-02-01
Lacunarity is about quantifying the degree of spatial heterogeneity in the visual texture of imagery through the identification of the relationships between patterns and their spatial configurations in a two-dimensional setting. The computed lacunarity data can designate a mathematical index of spatial heterogeneity, therefore the corresponding feature vectors should possess the necessary inter-class statistical properties that would enable them to be used for pattern recognition purposes. The objective of this study was to construct a supervised parsimonious classification model of binary lacunarity data, computed by Valous et al. (2009) from pork ham slice surface images, with the aid of kernel principal component analysis (KPCA) and artificial neural networks (ANNs), using a portion of informative salient features. At first, the dimension of the initial space (510 features) was reduced by 90% in order to avoid any noise effects in the subsequent classification. Then, using KPCA, the first nineteen kernel principal components (99.04% of total variance) were extracted from the reduced feature space and used as input to the ANN. An adaptive feedforward multilayer perceptron (MLP) classifier was employed to obtain a suitable mapping from the input dataset. The correct classification percentages for the training, test and validation sets were 86.7%, 86.7%, and 85.0%, respectively. The results confirm that the classification performance was satisfactory. The binary lacunarity spatial metric captured relevant information that provided a good level of differentiation among pork ham slice images. Copyright © 2010 The American Meat Science Association. Published by Elsevier Ltd. All rights reserved.
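The pipeline shape described above (kernel PCA for feature extraction, then an MLP classifier) has a natural expression in scikit-learn. The sketch below is a hedged stand-in for the original implementation: the data are synthetic placeholders, and the RBF kernel and layer size are assumptions; only the 19 kernel components are taken from the abstract.

```python
# Sketch of a KPCA + MLP classification pipeline of the kind described above,
# using scikit-learn as a stand-in for the original implementation.
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
X = rng.random((180, 51))      # stand-in for reduced lacunarity feature vectors
y = rng.integers(0, 3, 180)    # stand-in for ham-quality class labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)
model = make_pipeline(
    KernelPCA(n_components=19, kernel="rbf"),   # 19 kernel PCs, as in the study
    MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=1),
)
model.fit(X_tr, y_tr)
print("test accuracy:", model.score(X_te, y_te))  # meaningless on random labels
```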
OECD Maximum Residue Limit Calculator
With the goal of harmonizing the calculation of maximum residue limits (MRLs) across the Organisation for Economic Co-operation and Development, the OECD has developed an MRL Calculator.
Baumeister, Dorothea
2007-01-01
Holzer and Holzer (Discrete Applied Mathematics 144(3):345-358, 2004) proved that the Tantrix(TM) rotation puzzle problem with four colors is NP-complete. Baumeister and Rothe (MCU 2007) modified their construction to achieve a parsimonious reduction from satisfiability to this problem. Since parsimonious reductions preserve the number of solutions, it follows that the unique version of the four-color Tantrix(TM) rotation puzzle problem is DP-complete under randomized reductions. In this paper, we study the three-color and the two-color Tantrix(TM) rotation puzzle problems. Restricting the number of allowed colors to three (respectively, to two) reduces the set of available Tantrix(TM) tiles from 56 to 14 (respectively, to 8). We prove that both the three-color and the two-color Tantrix(TM) rotation puzzle problems are NP-complete, which answers a question raised by Holzer and Holzer in the affirmative. Since both these reductions are parsimonious, it follows that both the unique three-color and the unique two-color Tantrix(TM) rotation puzzle problems are DP-complete under randomized reductions.
Breast Reconstruction with Implants
Breast reconstruction is a surgical procedure that restores shape to ... treat or prevent breast cancer. One type of breast reconstruction uses breast implants — silicone devices filled with silicone ...
Maximum likelihood for genome phylogeny on gene content.
Zhang, Hongmei; Gu, Xun
2004-01-01
With the rapid growth of entire-genome data, reconstructing the phylogenetic relationships among different genomes has become a hot topic in comparative genomics. The maximum likelihood approach is one of the various approaches and has been very successful in sequence-based phylogenetics, but no application to genome tree-making has been reported, mainly due to the lack of an analytical form of a probability model and/or the complicated calculation burden. In this paper we study the mathematical structure of a stochastic model of genome evolution, and then develop a simplified likelihood function for observing a specific phylogenetic pattern in the four-genome case using gene content information. We use the maximum likelihood approach to identify phylogenetic trees. Simulation results indicate that the proposed method works well and can identify trees with a high correction rate. Application to real data provides satisfactory results. The approach developed in this paper can serve as the basis for reconstructing phylogenies of more than four genomes.
Fikret Fatih Önol
2014-11-01
Full Text Available In the treatment of urethral stricture, buccal mucosa graft (BMG) reconstruction is applied with different patch techniques. In the frequently preferred approach, BMGs are placed as a "ventral onlay" in bulbar urethral strictures, where the spongiosum supports and nourishes the graft, and as a "dorsal inlay" in the pendulous urethra, where the spongiosal body is thinner. Cordon et al. compared conventional BMG "onlay" urethroplasty with "pseudo-spongioplasty", which relies on periurethral vascular tissues being closed over the graft to nourish it. In the repair of anterior urethras where spongiosal supportive tissue is insufficient, this method is defined as mobilizing the surrounding dartos and Buck's fascia and joining them over the BMG patch. Between 2007 and 2012, 56 patients treated with conventional "ventral onlay" BMG urethroplasty and 46 patients treated with "pseudo-spongioplasty" were reported to have similar success rates (80% vs. 84%) over a mean follow-up of 3.5 years. While 74% of the patients who underwent pseudo-spongioplasty had disease of the distal urethra (pendulous, bulbopendulous), 82% of the patients who underwent conventional onlay urethroplasty had proximal (bulbar) strictures; stricture length in the pseudo-spongioplasty group was also significantly greater (mean 5.8 cm vs. 4.7 cm, p=0.028). This study by Cordon et al. shows that, in conditions where conventional spongioplasty is not possible, periurethral vascular tissues are adequate to nourish the BMG. Although it is an important technique that brings a new perspective to today's practice, data on complications that may arise after pseudo-spongioplasty for long distal strictures (e.g., the appearance of urethral diverticula) were not reported.
Goloboff, Pablo A
2014-10-01
Three different types of data sets, for which the uniquely most parsimonious tree can be known exactly but is hard to find with heuristic tree search methods, are studied. Tree searches are complicated more by the shape of the tree landscape (i.e. the distribution of homoplasy on different trees) than by the sheer abundance of homoplasy or character conflict. Data sets of Type 1 are those constructed by Radel et al. (2013). Data sets of Type 2 present a very rugged landscape, with narrow peaks and valleys, but relatively low amounts of homoplasy. For such a tree landscape, subjecting the trees to TBR and saving suboptimal trees produces much better results when the sequence of clipping for the tree branches is randomized instead of fixed. An unexpected finding for data sets of Types 1 and 2 is that starting a search from a random tree instead of a random addition sequence Wagner tree may increase the probability that the search finds the most parsimonious tree; a small artificial example where these probabilities can be calculated exactly is presented. Data sets of Type 3, the most difficult data sets studied here, comprise only congruent characters, and a single island with only one most parsimonious tree. Even if there is a single island, missing entries create a very flat landscape which is difficult to traverse with tree search algorithms because the number of equally parsimonious trees that need to be saved and swapped to effectively move around the plateaus is too large. Minor modifications of the parameters of tree drifting, ratchet, and sectorial searches allow travelling around these plateaus much more efficiently than saving and swapping large numbers of equally parsimonious trees with TBR. For these data sets, two new related criteria for selecting taxon addition sequences in Wagner trees (the "selected" and "informative" addition sequences) produce much better results than the standard random or closest addition sequences.
Maximum-likelihood estimation prevents unphysical Mueller matrices
Aiello, A; Voigt, D; Woerdman, J P
2005-01-01
We show that the method of maximum-likelihood estimation, recently introduced in the context of quantum process tomography, can be applied to the determination of Mueller matrices characterizing the polarization properties of classical optical systems. Contrary to linear reconstruction algorithms, the proposed method yields physically acceptable Mueller matrices even in the presence of uncontrolled experimental errors. We illustrate the method on the case of an unphysical measured Mueller matrix taken from the literature.
Electronic noise modeling in statistical iterative reconstruction.
Xu, Jingyan; Tsui, Benjamin M W
2009-06-01
We consider electronic noise modeling in tomographic image reconstruction when the measured signal is the sum of a Gaussian distributed electronic noise component and another random variable whose log-likelihood function satisfies a certain linearity condition. Examples of such likelihood functions include the Poisson distribution and an exponential dispersion (ED) model that can approximate the signal statistics in integration mode X-ray detectors. We formulate the image reconstruction problem as a maximum-likelihood estimation problem. Using an expectation-maximization approach, we demonstrate that a reconstruction algorithm can be obtained following a simple substitution rule from the one previously derived without electronic noise considerations. To illustrate the applicability of the substitution rule, we present examples of a fully iterative reconstruction algorithm and a sinogram smoothing algorithm both in transmission CT reconstruction when the measured signal contains additive electronic noise. Our simulation studies show the potential usefulness of accurate electronic noise modeling in low-dose CT applications.
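For orientation, the classical Poisson MLEM update, derived without electronic noise considerations, is the baseline that the substitution rule described above modifies. A minimal sketch on a toy system follows; all sizes and data are synthetic assumptions.

```python
# Minimal sketch of the classical Poisson MLEM update that the substitution
# rule described above modifies; electronic noise is not modeled here.
import numpy as np

rng = np.random.default_rng(2)
n_pix, n_det = 32, 64
A = rng.random((n_det, n_pix))           # toy system matrix
x_true = rng.random(n_pix)
y = rng.poisson(A @ x_true)              # Poisson data, noiseless electronics

x = np.ones(n_pix)                       # strictly positive starting image
sens = A.sum(axis=0)                     # sensitivity (column sums) per pixel
for _ in range(100):
    ratio = y / np.maximum(A @ x, 1e-12) # measured over predicted counts
    x *= (A.T @ ratio) / sens            # multiplicative MLEM update
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```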
Maximum margin Bayesian network classifiers.
Pernkopf, Franz; Wohlmayr, Michael; Tschiatschek, Sebastian
2012-03-01
We present a maximum margin parameter learning algorithm for Bayesian network classifiers using a conjugate gradient (CG) method for optimization. In contrast to previous approaches, we maintain the normalization constraints on the parameters of the Bayesian network during optimization, i.e., the probabilistic interpretation of the model is not lost. This enables us to handle missing features in discriminatively optimized Bayesian networks. In experiments, we compare the classification performance of maximum margin parameter learning to conditional likelihood and maximum likelihood learning approaches. Discriminative parameter learning significantly outperforms generative maximum likelihood estimation for naive Bayes and tree augmented naive Bayes structures on all considered data sets. Furthermore, maximizing the margin dominates the conditional likelihood approach in terms of classification performance in most cases. We provide results for a recently proposed maximum margin optimization approach based on convex relaxation. While the classification results are highly similar, our CG-based optimization is computationally up to orders of magnitude faster. Margin-optimized Bayesian network classifiers achieve classification performance comparable to support vector machines (SVMs) using fewer parameters. Moreover, we show that unanticipated missing feature values during classification can be easily processed by discriminatively optimized Bayesian network classifiers, a case where discriminative classifiers usually require mechanisms to complete unknown feature values in the data first.
Maximum Entropy in Drug Discovery
Chih-Yuan Tseng
2014-07-01
Full Text Available Drug discovery applies multidisciplinary approaches either experimentally, computationally or both ways to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate in the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its root in thermodynamics, yet since Jaynes' pioneering work in the 1950s, the maximum entropy principle has not only been used as a physics law, but also as a reasoning tool that allows us to process information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.
Skaugen, Thomas; Haddeland, Ingjerd
2014-05-01
A new parameter-parsimonious rainfall-runoff model, DDD (Distance Distribution Dynamics), has been run operationally at the Norwegian Flood Forecasting Service for approximately a year. DDD has been calibrated for altogether 104 catchments throughout Norway and provides runoff forecasts 8 days ahead at a daily temporal resolution, driven by precipitation and temperature from the meteorological forecast models AROME (48 hrs) and EC (192 hrs). The current version of DDD differs from the standard model used for flood forecasting in Norway, the HBV model, in its description of the subsurface and runoff dynamics. In DDD, the capacity of the subsurface water reservoir, M, is the only parameter to be calibrated, whereas the runoff dynamics are completely parameterised from observed characteristics derived from GIS and runoff recession analysis. Water is conveyed through the soils to the river network by waves with celerities determined by the level of saturation in the catchment. The distributions of distances between points in the catchment and the nearest river reach, and of distances along the river network, give, together with the celerities, distributions of travel times and, consequently, unit hydrographs. DDD has 6 fewer parameters to calibrate in the runoff module than the HBV model. Experience using DDD shows that especially the timing of flood peaks has improved considerably, and in a comparison between DDD and HBV assessing time series of 64 years for 75 catchments, DDD had a higher hit rate and a lower false alarm rate than HBV. For flood peaks higher than the mean annual flood, the median hit rate is 0.45 and 0.41 for the DDD and HBV models, respectively; the corresponding false alarm rates are 0.62 and 0.75. For floods over the five-year return interval, the median hit rate is 0.29 and 0.28 for the DDD and HBV models, respectively, with false alarm rates of 0.67 and 0.80. During 2014 the Norwegian flood forecasting service will run DDD operationally at a 3 h temporal resolution.
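The routing idea, distance distributions divided by a celerity to yield travel-time distributions and hence unit hydrographs, can be illustrated with a toy sketch. All numbers below are invented; DDD itself uses saturation-dependent celerities and the calibrated reservoir capacity M.

```python
# Illustrative sketch of the routing idea in DDD: distances to the river
# network divided by a celerity give travel times, whose normalized histogram
# acts as a unit hydrograph convolved with the input water.
import numpy as np

rng = np.random.default_rng(3)
distances_m = rng.exponential(scale=800.0, size=5000)   # toy hillslope distances
celerity_ms = 0.1                                       # fixed here; saturation-dependent in DDD
travel_h = distances_m / celerity_ms / 3600.0

# Unit hydrograph: fraction of water arriving in each hourly bin
# (travel times beyond 48 h are ignored in this toy example).
uh, _ = np.histogram(travel_h, bins=np.arange(0, 49))
uh = uh / uh.sum()

rain_mm = np.zeros(72); rain_mm[5] = 10.0               # 10 mm pulse at hour 5
runoff = np.convolve(rain_mm, uh)[:72]                  # routed hydrograph
print("peak hour:", runoff.argmax(), "peak (mm/h):", round(runoff.max(), 3))
```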
M. Coustau
2012-04-01
Full Text Available Rainfall-runoff models are crucial tools for the statistical prediction of flash floods and real-time forecasting. This paper focuses on a karstic basin in the south of France and proposes a distributed parsimonious event-based rainfall-runoff model, coherent with the poor knowledge of both evaporative and underground fluxes. The model combines an SCS runoff model and a Lag and Route routing model for each cell of a regular grid mesh. The efficiency of the model is discussed not only in terms of satisfactorily simulating floods but also of obtaining powerful relationships between the initial condition of the model and various predictors of the initial wetness state of the basin, such as the base flow, the Hu2 index from the Meteo-France SIM model, and the piezometric levels of the aquifer. The advantage of using meteorological radar rainfall in flood modelling is also assessed. Model calibration proved to be satisfactory using an hourly time step, with Nash criterion values ranging between 0.66 and 0.94 for eighteen of the twenty-one selected events. The radar rainfall inputs significantly improved the simulations or the assessment of the initial condition of the model for 5 events at the beginning of autumn, mostly in September-October (mean improvement of Nash is 0.09; corrections in the initial condition range from -205 to 124 mm), but were less efficient for the events at the end of autumn. In this period, the weak vertical extension of the precipitation system and the low altitude of the 0 °C isotherm could affect the efficiency of radar measurements due to the distance between the basin and the radar (~60 km). The model initial condition S is correlated with the three tested predictors (R^2 > 0.6). The interpretation of the model suggests that groundwater does not affect the first peaks of the flood, but can strongly impact subsequent peaks in the case of a multi-storm event.
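The production function named above is the standard SCS curve-number relation. A minimal per-cell sketch follows, assuming the usual Ia = 0.2 S initial-abstraction convention; note that the event-based model calibrates the initial condition rather than using tabulated curve numbers.

```python
# Standard SCS curve-number runoff relation, per cell, with the usual
# Ia = 0.2 * S initial-abstraction convention.
def scs_runoff(P_mm: float, S_mm: float) -> float:
    """Direct runoff depth (mm) for rainfall depth P and maximum retention S."""
    Ia = 0.2 * S_mm                              # initial abstraction
    if P_mm <= Ia:
        return 0.0                               # all rainfall abstracted
    return (P_mm - Ia) ** 2 / (P_mm - Ia + S_mm)

print(round(scs_runoff(60.0, 100.0), 2))         # 60 mm storm: ~11.43 mm runoff
```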
36 CFR 212.10 - Maximum economy National Forest System roads.
2010-07-01
The Chief may acquire, construct, reconstruct, improve, and maintain ... Forest Service in locations and according to specifications which will permit maximum economy in ...
Modern methods of image reconstruction.
Puetter, R. C.
The author reviews the image restoration or reconstruction problem in its general setting. He first discusses linear methods for solving the problem of image deconvolution, i.e. the case in which the data are a convolution of a point-spread function and an underlying unblurred image. Next, non-linear methods are introduced in the context of Bayesian estimation, including maximum likelihood and maximum entropy methods. Then, the author discusses the role of language and information theory concepts for data compression and solving the inverse problem. The concept of algorithmic information content (AIC) is introduced and is shown to be crucial to achieving optimal data compression and optimized Bayesian priors for image reconstruction. The dependence of the AIC on the selection of language then suggests how efficient coordinate systems for the inverse problem may be selected. The author also introduces pixon-based image restoration and reconstruction methods. The relation between image AIC and the Bayesian incarnation of Occam's Razor is discussed, as well as the relation of multiresolution pixon languages and image fractal dimension. Also discussed is the relation of pixons to the role played by the Heisenberg uncertainty principle in statistical physics and how pixon-based image reconstruction provides a natural extension to the Akaike information criterion for maximum likelihood. The author presents practical applications of pixon-based Bayesian estimation to the restoration of astronomical images. He discusses the effects of noise, effects of finite sampling on resolution, and special problems associated with spatially correlated noise introduced by mosaicing. Comparisons to other methods demonstrate the significant improvements afforded by pixon-based methods and illustrate the science that such performance improvements allow.
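As a concrete instance of the maximum likelihood restoration methods reviewed above, the classic Richardson-Lucy deconvolution iteration (not the pixon method itself) can be written in a few lines. The 1-D toy signal and kernel below are illustrative assumptions.

```python
# Classic Richardson-Lucy maximum likelihood deconvolution for Poisson data,
# the textbook example of the ML image-restoration methods reviewed above.
import numpy as np

def richardson_lucy(data, psf, n_iter=50):
    psf_flip = psf[::-1]                          # mirrored PSF for the update
    est = np.full_like(data, data.mean(), dtype=float)
    for _ in range(n_iter):
        conv = np.convolve(est, psf, mode="same")
        ratio = data / np.maximum(conv, 1e-12)    # observed over predicted
        est *= np.convolve(ratio, psf_flip, mode="same")
    return est

rng = np.random.default_rng(4)
x = np.zeros(100); x[30] = 50; x[60] = 80         # two point sources
psf = np.array([0.05, 0.25, 0.4, 0.25, 0.05])     # normalized blur kernel
blurred = np.convolve(x, psf, mode="same")
noisy = rng.poisson(blurred).astype(float)
restored = richardson_lucy(noisy, psf)
print("recovered peak positions:", sorted(np.argsort(restored)[-2:]))
```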
Wahl, E. R.; Cook, E.; Diaz, H. F.; Meko, D. M.
2012-12-01
Well-verified tree ring-based reconstructions of the surface temperature field over the past 500 years in western North America have recently been completed using the principal component spatial regression (PCSR) method. In conjunction with the North American Drought Atlas (NADA) reconstructions of drought index values, constructed using the point-by-point regression (PPR) method, the new spatial temperature reconstructions make it possible to estimate direct moisture fields over western North America for a significant portion of the past millennium. To achieve this goal, experiments will be conducted in which reconstructed temperature, or its equivalent in the form of potential evapotranspiration, will be regressed out of the NADA reconstructions to 'back out' in residual form the contribution of precipitation in the NADA with its regional seasonalities intact. To ensure non-overlap of the temperature and PDSI tree chronology data used, an implementation of the NADA will be done that excludes the proxy data used in the temperature reconstructions. To facilitate examination of maximum comparability of the drought and temperature data, the annual temperature reconstructions also will be calibrated to summer (JJA) temperatures, the NADA seasonality. Bootstrapping methods recently implemented for paleoclimate field reconstruction, the maximum entropy bootstrap for PPR and a modification of bootstrapping from residuals for PCSR, will be evaluated for generation of uncertainty ensemble distributions associated with the derived precipitation reconstructions. Generation of a reconstruction ensemble allows, for example, estimation of the distribution of extreme values or the uncertainty in a temporally smoothed time series, results that cannot readily be obtained from traditional confidence intervals associated with expected value estimates.
Greenslade, Thomas B., Jr.
1985-01-01
Discusses a series of experiments performed by Thomas Hope in 1805 which show the temperature at which water has its maximum density. Early data, cast into a modern form, as well as guidelines and recent data collected by the author, provide background for duplicating Hope's experiments in the classroom.
Abolishing the maximum tension principle
Dabrowski, Mariusz P
2015-01-01
We find a series of example theories for which the relativistic limit of maximum tension $F_{max} = c^4/4G$ represented by the entropic force can be abolished. Among them are the varying-constants theories, some generalized entropy models applied to both cosmological and black-hole horizons, as well as some generalized uncertainty principle models.
Maximum Genus of Strong Embeddings
Er-ling Wei; Yan-pei Liu; Han Ren
2003-01-01
The strong embedding conjecture states that any 2-connected graph has a strong embedding on some surface. It implies the circuit double cover conjecture: any 2-connected graph has a circuit double cover. The converse is not true. But for a 3-regular graph, the two conjectures are equivalent. In this paper, a characterization of graphs having a strong embedding with exactly 3 faces, which is the strong embedding of maximum genus, is given. In addition, some graphs with the property are provided. More generally, an upper bound on the maximum genus of strong embeddings of a graph is presented. Lastly, it is shown that the interpolation theorem holds for planar Halin graphs.
Remizov, Ivan D
2009-01-01
In this note, we represent the subdifferential of the maximum functional defined on the space of all real-valued continuous functions on a given metric compact set. For a given argument $f$, it coincides with the set of all probability measures on the set of points maximizing $f$ on the initial compact set. This complete characterization lies at the heart of several important identities in microeconomics, such as Roy's identity and Shephard's lemma, as well as duality theory in production and linear programming.
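In symbols (notation mine), the characterization described above reads:

```latex
% Subdifferential of the maximum functional on C(K), K a metric compact set:
\[
F(f) \;=\; \max_{x \in K} f(x), \qquad
\partial F(f) \;=\; \Bigl\{\, \mu \in \mathcal{P}(K) \;:\;
  \operatorname{supp}\mu \subseteq \operatorname*{arg\,max}_{x \in K} f(x) \,\Bigr\},
\]
% i.e. the subdifferential at f is the set of all probability measures
% concentrated on the points of K at which f attains its maximum.
```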
The Testability of Maximum Magnitude
Clements, R.; Schorlemmer, D.; Gonzalez, A.; Zoeller, G.; Schneider, M.
2012-12-01
Recent disasters caused by earthquakes of unexpectedly large magnitude (such as Tohoku) illustrate the need for reliable assessments of the seismic hazard. Estimates of the maximum possible magnitude M at a given fault or in a particular zone are essential parameters in probabilistic seismic hazard assessment (PSHA), but their accuracy remains untested. In this study, we discuss the testability of long-term and short-term M estimates and the limitations that arise from testing such rare events. Of considerable importance is whether or not those limitations imply a lack of testability of a useful maximum magnitude estimate, and whether this should have any influence on current PSHA methodology. We use a simple extreme value theory approach to derive a probability distribution for the expected maximum magnitude in a future time interval, and we perform a sensitivity analysis on this distribution to determine if there is a reasonable avenue available for testing M estimates as they are commonly reported today: devoid of an appropriate probability distribution of their own and estimated only for infinite time (or relatively large untestable periods). Our results imply that any attempt at testing such estimates is futile, and that the distribution is highly sensitive to M estimates only under certain optimal conditions that are rarely observed in practice. In the future we suggest that PSHA modelers be brutally honest about the uncertainty of M estimates, or find a way to decrease its influence on the estimated hazard.
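The generic extreme-value recipe alluded to above can be sketched as follows: fit an extreme-value distribution to annual maxima and raise its CDF to the power of the window length. This is a hedged illustration with synthetic data and a Gumbel model, not the authors' exact derivation.

```python
# Hedged illustration of the extreme-value approach: fit a Gumbel distribution
# to yearly maximum magnitudes and read off the distribution of the maximum
# expected over a future interval. All data are synthetic.
import numpy as np
from scipy.stats import gumbel_r

rng = np.random.default_rng(5)
annual_max = gumbel_r.rvs(loc=5.5, scale=0.4, size=60, random_state=rng)

loc, scale = gumbel_r.fit(annual_max)
T = 50                                     # future window, years
# CDF of the maximum over T independent years: F_T(m) = F(m)**T
m = np.linspace(5, 9, 400)
cdf_T = gumbel_r.cdf(m, loc, scale) ** T
median_max = m[np.searchsorted(cdf_T, 0.5)]
print(f"median largest magnitude in {T} yr: {median_max:.2f}")
```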
Alternative Multiview Maximum Entropy Discrimination.
Chao, Guoqing; Sun, Shiliang
2016-07-01
Maximum entropy discrimination (MED) is a general framework for discriminative estimation based on the maximum entropy and maximum margin principles, and can produce hard-margin support vector machines under some assumptions. Recently, the multiview version of MED, multiview MED (MVMED), was proposed. In this paper, we try to explore a more natural MVMED framework by assuming two separate distributions p1(Θ1) over the first-view classifier parameter Θ1 and p2(Θ2) over the second-view classifier parameter Θ2. We name the new MVMED framework alternative MVMED (AMVMED), which enforces the posteriors of the two view margins to be equal. The proposed AMVMED is more flexible than the existing MVMED because, compared with MVMED, which optimizes one relative entropy, AMVMED assigns one relative entropy term to each of the two views, thus incorporating a tradeoff between them. We give the detailed solving procedure, which can be divided into two steps. The first step solves our optimization problem without considering the equal margin posteriors from the two views; the second step then enforces the equal posteriors. Experimental results on multiple real-world data sets verify the effectiveness of AMVMED, and comparisons with MVMED are also reported.
Maximum likelihood estimates of pairwise rearrangement distances.
Serdoz, Stuart; Egri-Nagy, Attila; Sumner, Jeremy; Holland, Barbara R; Jarvis, Peter D; Tanaka, Mark M; Francis, Andrew R
2017-06-21
Accurate estimation of evolutionary distances between taxa is important for many phylogenetic reconstruction methods. Distances can be estimated using a range of different evolutionary models, from single nucleotide polymorphisms to large-scale genome rearrangements. Corresponding corrections for genome rearrangement distances fall into three categories: empirical computational studies, Bayesian/MCMC approaches, and combinatorial approaches. Here, we introduce a maximum likelihood estimator for the inversion distance between a pair of genomes, using a group-theoretic approach to modelling inversions introduced recently. This MLE functions as a corrected distance: in particular, we show that, because of the way sequences of inversions interact with each other, it is quite possible for the minimal distance and the MLE distance to order the distances of two genomes from a third differently. The second aspect tackles the problem of accounting for the symmetries of circular arrangements. While, generally, a frame of reference is locked and all computation made accordingly, this work incorporates the action of the dihedral group so that distance estimates are free from any a priori frame of reference. The philosophy of accounting for symmetries can be applied to any existing correction method, for which examples are offered. Copyright © 2017 Elsevier Ltd. All rights reserved.
Soon, Villu; Saarma, Urmas
2011-07-01
The ignita species group within the genus Chrysis includes over 100 cuckoo wasp species, which all lead a parasitic lifestyle and exhibit very similar morphology. The lack of robust, diagnostic morphological characters has hindered phylogenetic reconstructions and contributed to frequent misidentification and inconsistent interpretations of species in this group. Therefore, molecular phylogenetic analysis is the most suitable approach for resolving the phylogeny and taxonomy of this group. We present a well-resolved phylogeny of the Chrysis ignita species group based on mitochondrial sequence data from 41 ingroup and six outgroup taxa. Although our emphasis was on European taxa, we included samples from most of the distribution range of the C. ignita species group to test for monophyly. We used a continuous mitochondrial DNA sequence consisting of 16S rRNA, tRNA(Val), 12S rRNA and ND4. The location of the ND4 gene at the 3' end of this continuous sequence, following 12S rRNA, represents a novel mitochondrial gene arrangement for insects. Due to difficulties in aligning rRNA genes, two different Bayesian approaches were employed to reconstruct phylogeny: (1) using a reduced data matrix including only those positions that could be aligned with confidence; or (2) using the full sequence dataset while estimating alignment and phylogeny simultaneously. In addition, maximum-parsimony and maximum-likelihood analyses were performed to test the robustness of the Bayesian approaches. Although all approaches yielded trees with similar topology, considerably more nodes were resolved with analyses using the full data matrix. Phylogenetic analysis supported the monophyly of the C. ignita species group and divided its species into well-supported clades. The resultant phylogeny was only partly in accordance with published subgroupings based on morphology. Our results suggest that several taxa currently treated as subspecies or names treated as synonyms may in fact constitute distinct species.
Neuromagnetic source reconstruction
Lewis, P.S.; Mosher, J.C. [Los Alamos National Lab., NM (United States); Leahy, R.M. [University of Southern California, Los Angeles, CA (United States)
1994-12-31
In neuromagnetic source reconstruction, a functional map of neural activity is constructed from noninvasive magnetoencephalographic (MEG) measurements. The overall reconstruction problem is under-determined, so some form of source modeling must be applied. We review the two main classes of reconstruction techniques: parametric current dipole models and nonparametric distributed source reconstructions. Current dipole reconstructions use a physically plausible source model, but are limited to cases in which the neural currents are expected to be highly sparse and localized. Distributed source reconstructions can be applied to a wider variety of cases, but must incorporate an implicit source model in order to arrive at a single reconstruction. We examine distributed source reconstruction in a Bayesian framework to highlight the implicit nonphysical Gaussian assumptions of minimum norm based reconstruction algorithms. We conclude with a brief discussion of alternative non-Gaussian approaches.
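A minimal sketch of the minimum-norm distributed-source estimate discussed above: with lead field L and measurements b, the Tikhonov-regularized minimum-norm solution is s = L^T (L L^T + lambda*I)^(-1) b. All sizes, the lead field, and the regularization value below are illustrative assumptions.

```python
# Toy minimum-norm estimate (MNE) for an under-determined source problem.
import numpy as np

rng = np.random.default_rng(6)
n_sensors, n_sources = 64, 500
L = rng.standard_normal((n_sensors, n_sources))   # toy lead-field matrix
s_true = np.zeros(n_sources); s_true[123] = 1.0   # one focal source
b = L @ s_true + 0.01 * rng.standard_normal(n_sensors)

lam = 1e-2                                        # Tikhonov regularization
s_mne = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_sensors), b)
print("strongest estimated source:", int(np.abs(s_mne).argmax()))
```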
Iterative Reconstruction for Differential Phase Contrast Imaging
Koehler, T.; Brendel, B.; Roessl, E.
2011-01-01
Purpose: The purpose of this work is to combine two areas of active research in tomographic x-ray imaging. The first is the use of iterative reconstruction techniques. The second is differential phase contrast imaging (DPCI). Method: We derive an SPS-type maximum likelihood (ML) reconstruction algorithm.
Noise and physical limits to maximum resolution of PET images
Herraiz, J.L.; Espana, S. [Dpto. Fisica Atomica, Molecular y Nuclear, Facultad de Ciencias Fisicas, Universidad Complutense de Madrid, Avda. Complutense s/n, E-28040 Madrid (Spain); Vicente, E.; Vaquero, J.J.; Desco, M. [Unidad de Medicina y Cirugia Experimental, Hospital GU 'Gregorio Maranon', E-28007 Madrid (Spain); Udias, J.M. [Dpto. Fisica Atomica, Molecular y Nuclear, Facultad de Ciencias Fisicas, Universidad Complutense de Madrid, Avda. Complutense s/n, E-28040 Madrid (Spain)], E-mail: jose@nuc2.fis.ucm.es
2007-10-01
In this work we show that there is a limit to the maximum resolution achievable with a high-resolution PET scanner, as well as to the best signal-to-noise ratio; these limits are ultimately related to the physical effects involved in the emission and detection of the radiation, and thus they cannot be overcome by any particular reconstruction method. These effects prevent the spatial high-frequency components of the imaged structures from being recorded by the scanner, so the information encoded in these high frequencies cannot be recovered by any reconstruction technique. Within this framework, we have determined the maximum resolution achievable for a given acquisition as a function of data statistics and scanner parameters, such as the size of the crystals or the inter-crystal scatter. In particular, the noise level in the data is outlined as a limiting factor for obtaining high-resolution images in tomographs with small crystal sizes. These results have implications for deciding the optimal number of voxels of the reconstructed image and for designing better PET scanners.
Cacti with maximum Kirchhoff index
Wang, Wen-Rui; Pan, Xiang-Feng
2015-01-01
The concept of resistance distance was first proposed by Klein and Randić. The Kirchhoff index $Kf(G)$ of a graph $G$ is the sum of resistance distances between all pairs of vertices in $G$. A connected graph $G$ is called a cactus if each block of $G$ is either an edge or a cycle. Let $Cat(n;t)$ be the set of connected cacti possessing $n$ vertices and $t$ cycles, where $0 \leq t \leq \lfloor (n-1)/2 \rfloor$. In this paper, the cacti in $Cat(n;t)$ with maximum Kirchhoff index are characterized.
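For checking small cases by hand, the Kirchhoff index can be computed from the Laplacian spectrum via the standard identity Kf(G) = n Σ 1/μ over the nonzero Laplacian eigenvalues μ. A short sketch, assuming networkx and numpy are available:

```python
# Kirchhoff index from the graph Laplacian spectrum:
# Kf(G) = n * sum(1/mu) over the nonzero Laplacian eigenvalues mu,
# which equals the sum of resistance distances over all vertex pairs.
import numpy as np
import networkx as nx

def kirchhoff_index(G: nx.Graph) -> float:
    n = G.number_of_nodes()
    mu = np.linalg.eigvalsh(nx.laplacian_matrix(G).toarray().astype(float))
    return n * sum(1.0 / m for m in mu if m > 1e-9)   # skip the zero eigenvalue

print(kirchhoff_index(nx.cycle_graph(4)))             # C4 (smallest cycle block): Kf = 5.0
```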
Generic maximum likely scale selection
Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo
2007-01-01
The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus is on second-order moments of multiple measurement outputs at a fixed location. These measurements, which reflect local image structure, consist in the cases considered here of Gaussian derivatives taken at several scales and/or having different derivative orders.
Economics and Maximum Entropy Production
Lorenz, R. D.
2003-04-01
Price differentials, sales volume and profit can be seen as analogues of temperature difference, heat flow and work or entropy production in the climate system. One aspect in which economic systems exhibit more clarity than the climate is that the empirical and/or statistical mechanical tendency for systems to seek a maximum in production is very evident in economics, in that the profit motive is very clear. Noting the common link of 1/f noise, power laws and Self-Organized Criticality with Maximum Entropy Production, the power-law fluctuations in security and commodity prices are not inconsistent with the analogy. There is an additional thermodynamic analogy, in that scarcity is valued. A commodity concentrated among a few traders is valued highly by the many who do not have it. The market therefore encourages via prices the spreading of those goods among a wider group, just as heat tends to diffuse, increasing entropy. I explore some empirical price-volume relationships of metals and meteorites in this context.
Breast reconstruction after mastectomy
Daniel Schmauss
2016-01-01
Full Text Available Breast cancer is the leading cause of cancer death in women worldwide. Its surgical approach has become less and less mutilating in recent decades; however, the overall number of breast reconstructions has increased significantly. Nowadays breast reconstruction should be highly individualized, taking into consideration first of all the oncological aspects of the tumor, neo-/adjuvant treatment, and genetic predisposition, but also its timing (immediate versus delayed breast reconstruction), as well as the patient's condition and wishes. This article gives an overview of the various possibilities of breast reconstruction, including implant- and expander-based reconstruction, flap-based reconstruction (vascularized autologous tissue), the combination of implant and flap, reconstruction using non-vascularized autologous fat, as well as refinement surgery after breast reconstruction.
Reoperative midface reconstruction.
Acero, Julio; García, Eloy
2011-02-01
Reoperative reconstruction of the midface is a challenging issue because of the complexity of this region and the severity of the aesthetic and functional sequelae related to the absence or failure of a primary reconstruction. The different situations that can lead to the indication of a reoperative reconstructive procedure after previous oncologic ablative procedures in the midface are reviewed. Surgical techniques, anatomic problems, and limitations affecting reoperative reconstruction in this region of the head and neck are discussed.
Objects of maximum electromagnetic chirality
Fernandez-Corbaton, Ivan
2015-01-01
We introduce a definition of the electromagnetic chirality of an object and show that it has an upper bound. The upper bound is attained if and only if the object is transparent for fields of one handedness (helicity). Additionally, electromagnetic duality symmetry, i.e. helicity preservation upon scattering, turns out to be a necessary condition for reciprocal scatterers to attain the upper bound. We use these results to provide requirements for the design of such extremal scatterers. The requirements can be formulated as constraints on the polarizability tensors for dipolar scatterers or as material constitutive relations. We also outline two applications for objects of maximum electromagnetic chirality: A twofold resonantly enhanced and background free circular dichroism measurement setup, and angle independent helicity filtering glasses.
Maximum mutual information regularized classification
Wang, Jim Jing-Yan
2014-09-07
In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
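The quantity at the core of the regularizer, the mutual information between classification responses and true labels, can be estimated with a simple plug-in histogram estimator. The sketch below (synthetic data, quartile binning as an arbitrary discretization) only illustrates the quantity being maximized, not the authors' entropy-estimation model or gradient optimization.

```python
# Plug-in estimate of the mutual information between (discretized)
# classification responses and true labels; a learner would add
# -lambda * MI to its loss to apply the regularization idea.
import numpy as np
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(7)
y = rng.integers(0, 2, 500)                       # true labels
scores = y + 0.8 * rng.standard_normal(500)       # toy classifier responses
bins = np.digitize(scores, np.quantile(scores, [0.25, 0.5, 0.75]))
print("MI(response, label) in nats:", mutual_info_score(y, bins))
```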
The strong maximum principle revisited
Pucci, Patrizia; Serrin, James
In this paper we first present the classical maximum principle due to E. Hopf, together with an extended commentary and discussion of Hopf's paper. We emphasize the comparison technique invented by Hopf to prove this principle, which has since become a main mathematical tool for the study of second order elliptic partial differential equations and has generated an enormous number of important applications. While Hopf's principle is generally understood to apply to linear equations, it is in fact also crucial in nonlinear theories, such as those under consideration here. In particular, we shall treat and discuss recent generalizations of the strong maximum principle, and also the compact support principle, for the case of singular quasilinear elliptic differential inequalities, under generally weak assumptions on the quasilinear operators and the nonlinearities involved. Our principal interest is in necessary and sufficient conditions for the validity of both principles; in exposing and simplifying earlier proofs of corresponding results; and in extending the conclusions to wider classes of singular operators than previously considered. The results have unexpected ramifications for other problems, as will develop from the exposition, e.g. two point boundary value problems for singular quasilinear ordinary differential equations (Sections 3 and 4); the exterior Dirichlet boundary value problem (Section 5); the existence of dead cores and compact support solutions, i.e. dead cores at infinity (Section 7); Euler-Lagrange inequalities on a Riemannian manifold (Section 9); comparison and uniqueness theorems for solutions of singular quasilinear differential inequalities (Section 10). The case of p-regular elliptic inequalities is briefly considered in Section 11.
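For orientation, the classical statement in its simplest (Laplacian) case, which the paper extends to singular quasilinear inequalities, is:

```latex
% Hopf's strong maximum principle, simplest form (subharmonic functions):
\[
\Delta u \ge 0 \ \text{in a connected open set } \Omega \subseteq \mathbb{R}^n,
\quad u(x_0) = \sup_{\Omega} u \ \text{for some } x_0 \in \Omega
\;\Longrightarrow\; u \equiv u(x_0) \ \text{in } \Omega .
\]
```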
de Beer, Alex G F; Samson, Jean-Sébastien; Hua, Wei; Huang, Zishuai; Chen, Xiangke; Allen, Heather C; Roke, Sylvie
2011-12-14
We present a direct comparison of phase sensitive sum-frequency generation experiments with phase reconstruction obtained by the maximum entropy method. We show that both methods lead to the same complex spectrum. Furthermore, we discuss the strengths and weaknesses of each of these methods, analyzing possible sources of experimental and analytical errors. A simulation program for maximum entropy phase reconstruction is available at: http://lbp.epfl.ch/.
Surfaces, Digitisations and Reconstructions
2015-01-01
We present a new digital reconstruction of r-regular sets in three-dimensional Euclidean space. We introduce a vector field and analyse the relation between the topologies of the boundaries of the r-regular set and its reconstruction. This reconstruction can be carried out faster than prior models based on the same digitisation, making it attractive for computing.
Should I Have Breast Reconstruction?
Should I Get Breast Reconstruction Surgery? Women who have surgery ... It usually responds well to treatment. What if I choose not to have breast reconstruction? Many women ...
Improving predictability of time series using maximum entropy methods
Chliamovitch, G.; Dupuis, A.; Golub, A.; Chopard, B.
2015-04-01
We discuss how maximum entropy methods may be applied to the reconstruction of Markov processes underlying empirical time series and compare this approach to usual frequency sampling. It is shown that, in low dimension, there exists a subset of the space of stochastic matrices for which the MaxEnt method is more efficient than sampling, in the sense that shorter historical samples have to be considered to reach the same accuracy. Considering short samples is of particular interest when modelling smoothly non-stationary processes, which provides, under some conditions, a powerful forecasting tool. The method is illustrated for a discretized empirical series of exchange rates.
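A generic illustration of the MaxEnt machinery invoked above: among all distributions satisfying a given moment constraint, choose the one of maximum entropy. The sketch below does this numerically for one row of a transition matrix constrained to reproduce an observed mean next-state; the paper's actual constraints differ, so this conveys only the flavor of the method.

```python
# Maximum entropy distribution under a moment constraint, found numerically:
# among all 5-state distributions with a given mean next-state, pick the one
# of maximum entropy (the solution has an exponential, Gibbs-like profile).
import numpy as np
from scipy.optimize import minimize

states = np.arange(5)
target_mean = 2.6                 # e.g. empirical mean next-state from a short sample

def neg_entropy(p):
    p = np.clip(p, 1e-12, 1.0)    # guard the logarithm
    return np.sum(p * np.log(p))

cons = [
    {"type": "eq", "fun": lambda p: p.sum() - 1.0},          # normalization
    {"type": "eq", "fun": lambda p: p @ states - target_mean},  # moment match
]
res = minimize(neg_entropy, x0=np.full(5, 0.2), bounds=[(0, 1)] * 5,
               constraints=cons, method="SLSQP")
print("MaxEnt row:", np.round(res.x, 3))
```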
Voss, C. I.; Soliman, S. M.; Aggarwal, P. K.
2013-12-01
Important information for management of large aquifer systems can be obtained via a parsimonious approach to groundwater modeling, in part employing isotope-interpreted groundwater ages. 'Parsimonious' modeling implies active avoidance of overly complex representations when constructing models. This approach is essential for evaluation of aquifer systems that lack informative hydrogeologic databases. Even in the most remote aquifers, despite the lack of typical data, groundwater ages can be interpreted from isotope samples at only a few downstream locations. These samples incorporate hydrogeologic information from the entire upstream groundwater flowpath; thus, interpreted ages are among the most effective information sources for groundwater model development. This approach is applied to the world's largest non-renewable aquifer, the transboundary Nubian Aquifer System (NAS) of Chad, Egypt, Libya and Sudan. In the NAS countries, water availability is a critical problem, and NAS can reliably serve as a water supply for an extended future period. However, there are national concerns about the transboundary impacts of water use by neighbors. These concerns include excessive depletion of shared groundwater by individual countries and the spread of water-table drawdown across borders, where the near-border shallow wells and oases of neighboring countries may dry. Development of a parsimonious groundwater flow model, based on limited available NAS hydrogeologic data and on 81Kr groundwater ages below oases in Egypt, is a key step in providing a technical basis for international discussion concerning management of this non-renewable water resource. Simply structured model analyses, undertaken as part of an IAEA/UNDP/GEF project, show that although the main transboundary issue is indeed drawdown crossing national boundaries, given the large scale of NAS and its plausible ranges of aquifer parameter values, the magnitude of transboundary drawdown will likely be small.
Ice-sheet configuration in the CMIP5/PMIP3 Last Glacial Maximum experiments
A. Abe-Ouchi
2015-06-01
Full Text Available We describe the creation of boundary conditions related to the presence of ice sheets, including ice sheet extent and height, ice shelf extent, and the distribution and altitude of ice-free land, at the Last Glacial Maximum (LGM), for use in LGM experiments conducted as part of the fifth phase of the Coupled Model Intercomparison Project (CMIP5) and the third phase of the Palaeoclimate Modelling Intercomparison Project (PMIP3). The CMIP5/PMIP3 data sets were created from reconstructions made by three different groups, which were all obtained using a model-inversion approach but differ in the assumptions used in the modelling and in the type of data used as constraints. The ice sheet extent, and thus the albedo mask, for the Northern Hemisphere (NH) does not vary substantially between the three individual data sources. The difference in the topography of the NH ice sheets is also moderate, and smaller than the differences between these reconstructions (and the resultant composite reconstruction) and ice-sheet reconstructions used in previous generations of PMIP. Only two of the individual reconstructions provide information for Antarctica. The discrepancy between these two reconstructions is larger than the difference for the NH ice sheets, although still less than the difference between the composite reconstruction and previous PMIP ice-sheet reconstructions. Differences in the climate response to the individual LGM reconstructions, and between these reconstructions and the CMIP5/PMIP3 composite, are largely confined to the ice-covered regions, but also extend over the North Atlantic Ocean and Northern Hemisphere continents through atmospheric stationary waves. There are much larger differences in the climate response to the latest reconstructions (or the derived composite) and ice-sheet reconstructions used in previous phases of PMIP.
[Breast reconstruction after mastectomy].
Ho Quoc, C; Delay, E
2013-02-01
The mutilating surgery for breast cancer causes deep somatic and psychological sequelae. Breast reconstruction can mitigate these effects and help patients rebuild their lives. The purpose of this paper is to focus on breast reconstruction techniques and on the factors involved in breast reconstruction. The methods of breast reconstruction are presented: objectives, indications, different techniques, operative risks, and long-term monitoring. Many different techniques can now allow breast reconstruction in most patients. Clinical cases are also presented in order to illustrate the results that can be expected from a breast reconstruction. Breast reconstruction provides many benefits for patients in terms of rehabilitation, wellness, and quality of life. In our view, breast reconstruction should be considered more as an opportunity and a positive choice (one the patient can decide to make) than as an obligation (one the patient must endure). The consultation with the surgeon who will perform the reconstruction is an important step for providing all the necessary information. It is important that the patient can speak with the surgeon again before undergoing reconstruction if she has any doubts. The quality of information given by physicians is essential to the success of psychological preparation. This article was written in a simple and understandable way to help gynecologists give the best information to their patients. A copy of this article may also be given to patients, providing them with written support and facilitating the future consultation with the surgeon who will perform the reconstruction.
Maximum entropy production in daisyworld
Maunu, Haley A.; Knuth, Kevin H.
2012-05-01
Daisyworld was first introduced in 1983 by Watson and Lovelock as a model that illustrates how life can influence a planet's climate. These models typically involve modeling a planetary surface on which black and white daisies can grow thus influencing the local surface albedo and therefore also the temperature distribution. Since then, variations of daisyworld have been applied to study problems ranging from ecological systems to global climate. Much of the interest in daisyworld models is due to the fact that they enable one to study self-regulating systems. These models are nonlinear, and as such they exhibit sensitive dependence on initial conditions, and depending on the specifics of the model they can also exhibit feedback loops, oscillations, and chaotic behavior. Many daisyworld models are thermodynamic in nature in that they rely on heat flux and temperature gradients. However, what is not well-known is whether, or even why, a daisyworld model might settle into a maximum entropy production (MEP) state. With the aim to better understand these systems, this paper will discuss what is known about the role of MEP in daisyworld models.
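For readers unfamiliar with the model, a minimal sketch of the classic Watson-Lovelock feedback loop follows; the parameter values are the commonly quoted 1983 ones and should be treated as assumptions rather than the exact configuration studied in the paper.

```python
import numpy as np

S, sigma, q = 917.0, 5.67e-8, 20.0        # insolation (W/m^2), Stefan-Boltzmann, heat transfer (K)
A_w, A_b, A_g = 0.75, 0.25, 0.5           # albedos: white daisies, black daisies, bare ground
gamma, T_opt = 0.3, 295.5                 # daisy death rate, optimal growth temperature (K)

def step(aw, ab, L, dt=0.01):
    """One Euler step of the daisy coverage ODEs at solar luminosity L."""
    x = 1.0 - aw - ab                     # bare ground fraction
    A = aw * A_w + ab * A_b + x * A_g     # planetary albedo
    Te = (L * S * (1.0 - A) / sigma) ** 0.25          # emission temperature
    Tw = q * (A - A_w) + Te               # linearized local temperatures
    Tb = q * (A - A_b) + Te
    beta = lambda T: max(0.0, 1.0 - 0.003265 * (T - T_opt) ** 2)  # growth rate
    aw += dt * aw * (x * beta(Tw) - gamma)
    ab += dt * ab * (x * beta(Tb) - gamma)
    return max(aw, 0.01), max(ab, 0.01), Te           # keep a seed population

for L in np.linspace(0.6, 1.6, 11):       # ramp the luminosity
    aw, ab = 0.2, 0.2
    for _ in range(5000):                 # relax toward steady state
        aw, ab, Te = step(aw, ab, L)
    print(f"L={L:.2f}  white={aw:.2f}  black={ab:.2f}  T={Te:.1f} K")
```

Running the ramp shows the self-regulation the entry describes: the steady-state temperature stays near the growth optimum over a wide range of luminosities because the daisy populations shift the albedo.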
Maximum stellar iron core mass
F W Giacobbe
2003-03-01
An analytical method of estimating the mass of a stellar iron core, just prior to core collapse, is described in this paper. The method employed depends, in part, upon an estimate of the true relativistic mass increase experienced by electrons within a highly compressed iron core, just prior to core collapse, and is significantly different from a more typical Chandrasekhar mass limit approach. This technique produced a maximum stellar iron core mass value of 2.69 × 10^30 kg (1.35 solar masses). This mass value is very near to the typical mass values found for neutron stars in a recent survey of actual neutron star masses. Although slightly lower and higher neutron star masses may also be found, lower mass neutron stars are believed to be formed as a result of enhanced iron core compression due to the weight of non-ferrous matter overlying the iron cores within large stars. Higher mass neutron stars are likely to be formed as a result of fallback or accretion of additional matter after an initial collapse event involving an iron core having a mass no greater than 2.69 × 10^30 kg.
Maximum Matchings via Glauber Dynamics
Jindal, Anant; Pal, Manjish
2011-01-01
In this paper we study the classic problem of computing a maximum cardinality matching in general graphs $G = (V, E)$. The best known algorithm for this problem to date runs in $O(m \\sqrt{n})$ time due to Micali and Vazirani \\cite{MV80}. Even for general bipartite graphs this is the best known running time (the algorithm of Hopcroft and Karp \\cite{HK73} also achieves this bound). For regular bipartite graphs one can achieve an $O(m)$ time algorithm which, following a series of papers, has been recently improved to $O(n \\log n)$ by Goel, Kapralov and Khanna (STOC 2010) \\cite{GKK10}. In this paper we present a randomized algorithm based on the Markov Chain Monte Carlo paradigm which runs in $O(m \\log^2 n)$ time, thereby obtaining a significant improvement over \\cite{MV80}. We use a Markov chain similar to the \\emph{hard-core model} for Glauber Dynamics with \\emph{fugacity} parameter $\\lambda$, which is used to sample independent sets in a graph from the Gibbs Distribution \\cite{V99}, to design a faster algorithm …
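The sampling idea behind such MCMC approaches can be illustrated with a toy heat-bath Glauber dynamics on matchings (the monomer-dimer model with fugacity lam); the sketch below is only that illustration, not the paper's O(m log^2 n) algorithm.

```python
import random

def glauber_matching(n, edges, lam=4.0, steps=20000, seed=1):
    """Heat-bath Glauber dynamics on the set of matchings of a graph."""
    random.seed(seed)
    matched = [None] * n                  # matched[v] = edge covering vertex v
    in_m, best = set(), 0
    for _ in range(steps):
        e = random.choice(edges)          # pick a random edge to update
        u, v = e
        if e in in_m:                     # currently matched: drop w.p. 1/(1+lam)
            if random.random() < 1.0 / (1.0 + lam):
                in_m.discard(e)
                matched[u] = matched[v] = None
        elif matched[u] is None and matched[v] is None:
            if random.random() < lam / (1.0 + lam):   # add w.p. lam/(1+lam)
                in_m.add(e)
                matched[u] = matched[v] = e
        best = max(best, len(in_m))       # track the largest matching seen
    return best

# Toy graph: a path on 6 vertices, whose maximum matching has size 3.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]
print(glauber_matching(6, edges))         # large fugacity favors large matchings
```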
2011-01-10
...: Establishing Maximum Allowable Operating Pressure or Maximum Operating Pressure Using Record Evidence, and... facilities of their responsibilities, under Federal integrity management (IM) regulations, to perform... system, especially when calculating Maximum Allowable Operating Pressure (MAOP) or Maximum Operating...
4D image reconstruction for emission tomography
Reader, Andrew J.; Verhaeghe, Jeroen
2014-11-01
An overview of the theory of 4D image reconstruction for emission tomography is given along with a review of the current state of the art, covering both positron emission tomography and single photon emission computed tomography (SPECT). By viewing 4D image reconstruction as a matter of either linear or non-linear parameter estimation for a set of spatiotemporal functions chosen to approximately represent the radiotracer distribution, the areas of so-called ‘fully 4D’ image reconstruction and ‘direct kinetic parameter estimation’ are unified within a common framework. Many choices of linear and non-linear parameterization of these functions are considered (including the important case where the parameters have direct biological meaning), along with a review of the algorithms which are able to estimate these often non-linear parameters from emission tomography data. The other crucial components to image reconstruction (the objective function, the system model and the raw data format) are also covered, but in less detail due to the relatively straightforward extension from their corresponding components in conventional 3D image reconstruction. The key unifying concept is that maximum likelihood or maximum a posteriori (MAP) estimation of either linear or non-linear model parameters can be achieved in image space after carrying out a conventional expectation maximization (EM) update of the dynamic image series, using a Kullback-Leibler distance metric (comparing the modeled image values with the EM image values), to optimize the desired parameters. For MAP, an image-space penalty for regularization purposes is required. The benefits of 4D and direct reconstruction reported in the literature are reviewed, and furthermore demonstrated with simple simulation examples. It is clear that the future of reconstructing dynamic or functional emission tomography images, which often exhibit high levels of spatially correlated noise, should ideally exploit these 4D …
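A minimal sketch of the conventional EM (MLEM) update that such 4D methods build on, lam <- lam / (A^T 1) * A^T(y / (A lam)), with an invented toy system matrix standing in for a real PET/SPECT system model:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((50, 20))                  # toy system matrix (bins x voxels)
lam_true = 10.0 * rng.random(20)          # toy activity image
y = rng.poisson(A @ lam_true)             # simulated emission counts

lam = np.ones(20)                         # uniform initialization
sens = A.T @ np.ones(len(y))              # sensitivity image A^T 1
for _ in range(200):                      # MLEM iterations
    proj = A @ lam                        # forward projection A lam
    lam *= (A.T @ (y / np.maximum(proj, 1e-12))) / sens

rel_err = np.linalg.norm(lam - lam_true) / np.linalg.norm(lam_true)
print(f"relative error after MLEM: {rel_err:.2f}")
```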
A. Townsend Peterson
2008-12-01
Full Text Available Parsimony analysis of endemism (PAE) has become a popular analytical approach in efforts to map the biogeography of Mexican biotas. Although attractive, the technique has serious drawbacks that make correct inferences of biogeographic history unlikely, which has been noted amply in the broader literature.
The Sherpa Maximum Likelihood Estimator
Nguyen, D.; Doe, S.; Evans, I.; Hain, R.; Primini, F.
2011-07-01
A primary goal for the second release of the Chandra Source Catalog (CSC) is to include X-ray sources with as few as 5 photon counts detected in stacked observations of the same field, while maintaining acceptable detection efficiency and false source rates. Aggressive source detection methods will result in detection of many false positive source candidates. Candidate detections will then be sent to a new tool, the Maximum Likelihood Estimator (MLE), to evaluate the likelihood that a detection is a real source. MLE uses the Sherpa modeling and fitting engine to fit a model of a background and source to multiple overlapping candidate source regions. A background model is calculated by simultaneously fitting the observed photon flux in multiple background regions. This model is used to determine the quality of the fit statistic for a background-only hypothesis in the potential source region. The statistic for a background-plus-source hypothesis is calculated by adding a Gaussian source model convolved with the appropriate Chandra point spread function (PSF) and simultaneously fitting the observed photon flux in each observation in the stack. Since a candidate source may be located anywhere in the field of view of each stacked observation, a different PSF must be used for each observation because of the strong spatial dependence of the Chandra PSF. The likelihood of a valid source being detected is a function of the two statistics (for background alone, and for background-plus-source). The MLE tool is an extensible Python module with potential for use by the general Chandra user.
Vestige: Maximum likelihood phylogenetic footprinting
Maxwell Peter
2005-05-01
Full Text Available Abstract Background Phylogenetic footprinting is the identification of functional regions of DNA by their evolutionary conservation. This is achieved by comparing orthologous regions from multiple species and identifying the DNA regions that have diverged less than neutral DNA. Vestige is a phylogenetic footprinting package built on the PyEvolve toolkit that uses probabilistic molecular evolutionary modelling to represent aspects of sequence evolution, including the conventional divergence measure employed by other footprinting approaches. In addition to measuring the divergence, Vestige allows the expansion of the definition of a phylogenetic footprint to include variation in the distribution of any molecular evolutionary processes. This is achieved by displaying the distribution of model parameters that represent partitions of molecular evolutionary substitutions. Examination of the spatial incidence of these effects across regions of the genome can identify DNA segments that differ in the nature of the evolutionary process. Results Vestige was applied to a reference dataset of the SCL locus from four species and provided clear identification of the known conserved regions in this dataset. To demonstrate the flexibility to use diverse models of molecular evolution and dissect the nature of the evolutionary process Vestige was used to footprint the Ka/Ks ratio in primate BRCA1 with a codon model of evolution. Two regions of putative adaptive evolution were identified illustrating the ability of Vestige to represent the spatial distribution of distinct molecular evolutionary processes. Conclusion Vestige provides a flexible, open platform for phylogenetic footprinting. Underpinned by the PyEvolve toolkit, Vestige provides a framework for visualising the signatures of evolutionary processes across the genome of numerous organisms simultaneously. By exploiting the maximum-likelihood statistical framework, the complex interplay between mutational …
Bayesian History Reconstruction of Complex Human Gene Clusters on a Phylogeny
Vinař, Tomáš; Song, Giltae; Siepel, Adam
2009-01-01
Clusters of genes that have evolved by repeated segmental duplication present difficult challenges throughout genomic analysis, from sequence assembly to functional analysis. Improved understanding of these clusters is of utmost importance, since they have been shown to be the source of evolutionary innovation, and have been linked to multiple diseases, including HIV and a variety of cancers. Previously, Zhang et al. (2008) developed an algorithm for reconstructing parsimonious evolutionary histories of such gene clusters, using only human genomic sequence data. In this paper, we propose a probabilistic model for the evolution of gene clusters on a phylogeny, and an MCMC algorithm for reconstruction of duplication histories from genomic sequences in multiple species. Several projects are underway to obtain high quality BAC-based assemblies of duplicated clusters in multiple species, and we anticipate that our method will be useful in analyzing these valuable new data sets.
Reconstruction of the portal vein with 64-slice spiral CT of bile duct obstruction
Xia, Yunbao; Pan, Gongmao; Xue, Feng; Geng, Chengjun
2013-01-01
The aim of this study was to evaluate 64-slice spiral CT image reconstruction of the portal vein in biliary obstruction. A total of 34 clinical patients with biliary obstruction were confirmed by 64-slice spiral CT scanning with portal venous phase multi-planar reconstruction (MPR) of the biliary tract, curved planar reconstruction (CPR), thin-slab minimum-intensity projection (TS-MinIP) and maximum intensity projection (MIP). The reconstructed images were reviewed to further assess the posit...
Alstrup, Jan; Jørgensen, Mikkel; Medford, Andrew James
2010-01-01
… itself takes less than 4−8 min and requires 15−30 mg each of donor and acceptor material. The optimum donor−acceptor composition of P3HT and PCBM was found to be a broad maximum centered on a 1:1 ratio. We demonstrate how the optimal thickness of the active layer can be found by the same method … and materials usage by variation of the layer thickness in small steps of 1.5−4 nm. Contrary to expectation we did not find oscillatory variation of the device performance with device thickness because of optical interference. We ascribe this to the nature of the solar cell type explored in this example … that employs nonreflective or semitransparent printed electrodes. We further found that very thick active layers on the order of 1 μm can be prepared without loss in performance and estimate the active layer thickness could easily approach 4−5 μm while maintaining photovoltaic properties …
Zhu, Hong-Ming; Yu, Yu; Er, Xinzhong; Chen, Xuelei
2015-01-01
The gravitational coupling of a long wavelength tidal field with small scale density fluctuations leads to anisotropic distortions of the locally measured small scale matter correlation function. Since the local correlation function is statistically isotropic in the absence of such tidal interactions, the tidal distortions can be used to reconstruct the long wavelength tidal field and large scale density field in analogy with the cosmic microwave background lensing reconstruction. In this paper we present in detail a formalism for the cosmic tidal reconstruction and test the reconstruction in numerical simulations. We find that the density field on large scales can be reconstructed with good accuracy and the cross correlation coefficient between the reconstructed density field and the original density field is greater than 0.9 on large scales ($k\\lesssim0.1h/\\mathrm{Mpc}$). This is useful in the 21cm intensity mapping survey, where the long wavelength radial modes are lost due to the foreground subtraction process …
Ptychographic ultrafast pulse reconstruction
Spangenberg, D; Brügmann, M H; Feurer, T
2014-01-01
We demonstrate a new ultrafast pulse reconstruction modality which is somewhat reminiscent of frequency resolved optical gating but uses a modified setup and a conceptually different reconstruction algorithm that is derived from ptychography. Even though it is a second order correlation scheme it shows no time ambiguity. Moreover, the number of spectra to record is considerably smaller than in most other related schemes which, together with a robust algorithm, leads to extremely fast convergence of the reconstruction.
Modelling and Simulation of Seasonal Rainfall Using the Principle of Maximum Entropy
Jonathan Borwein
2014-02-01
Full Text Available We use the principle of maximum entropy to propose a parsimonious model for the generation of simulated rainfall during the wettest three-month season at a typical location on the east coast of Australia. The model uses a checkerboard copula of maximum entropy to model the joint probability distribution for total seasonal rainfall and a set of two-parameter gamma distributions to model each of the marginal monthly rainfall totals. The model allows us to match the grade correlation coefficients for the checkerboard copula to the observed Spearman rank correlation coefficients for the monthly rainfalls and, hence, provides a model that correctly describes the mean and variance for each of the monthly totals and also for the overall seasonal total. Thus, we avoid the need for a posteriori adjustment of simulated monthly totals in order to correctly simulate the observed seasonal statistics. Detailed results are presented for the modelling and simulation of seasonal rainfall in the town of Kempsey on the mid-north coast of New South Wales. Empirical evidence from extensive simulations is used to validate this application of the model. A similar analysis for Sydney is also described.
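The general recipe (dependent uniform variates pushed through gamma marginals) can be sketched as follows; for brevity a Gaussian copula stands in for the paper's checkerboard copula of maximum entropy, and all parameter values are invented.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
R = np.array([[1.0, 0.4, 0.2],            # assumed monthly correlation structure
              [0.4, 1.0, 0.4],
              [0.2, 0.4, 1.0]])
shapes = [2.0, 2.5, 1.8]                  # per-month gamma shape parameters
scales = [40.0, 55.0, 35.0]               # per-month gamma scale parameters (mm)

z = rng.multivariate_normal(np.zeros(3), R, size=10000)
u = stats.norm.cdf(z)                     # dependent uniforms (the copula step)
months = np.column_stack([stats.gamma.ppf(u[:, j], shapes[j], scale=scales[j])
                          for j in range(3)])   # gamma-marginal monthly totals
season = months.sum(axis=1)               # simulated seasonal totals
print(months.mean(axis=0), season.mean(), season.std())
```

Because the marginals are imposed exactly, the simulated monthly means and variances match the fitted gamma distributions without any a posteriori adjustment, which is the property the abstract emphasizes.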
Singh, Harpreet; Arvind; Dorai, Kavita, E-mail: kavita@iisermohali.ac.in
2016-09-07
Estimation of quantum states is an important step in any quantum information processing experiment. A naive reconstruction of the density matrix from experimental measurements can often give density matrices which are not positive, and hence not physically acceptable. How do we ensure that at all stages of reconstruction, we keep the density matrix positive? Recently a method has been suggested based on maximum likelihood estimation, wherein the density matrix is guaranteed to be positive definite. We experimentally implement this protocol on an NMR quantum information processor. We discuss several examples and compare with the standard method of state estimation. - Highlights: • State estimation using maximum likelihood method was performed on an NMR quantum information processor. • Physically valid density matrices were obtained every time in contrast to standard quantum state tomography. • Density matrices of several different entangled and separable states were reconstructed for two and three qubits.
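The positivity-preserving trick can be sketched with a single-qubit toy problem: parameterize rho = T†T / tr(T†T) with T lower triangular, so the estimate is positive semidefinite by construction. The measurement model and counts below are invented stand-ins, not the NMR experiment.

```python
import numpy as np
from scipy.optimize import minimize

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]], complex)
sz = np.array([[1, 0], [0, -1]], complex)
projs = [(I2 + s) / 2 for s in (sx, sy, sz)] + [(I2 - s) / 2 for s in (sx, sy, sz)]
counts = np.array([480, 250, 900, 520, 750, 100])  # fake counts: X+,Y+,Z+,X-,Y-,Z-

def rho_of(t):
    """rho = T^dag T / tr(T^dag T), positive semidefinite for any real t."""
    T = np.array([[t[0], 0.0], [t[2] + 1j * t[3], t[1]]])
    R = T.conj().T @ T
    return R / np.trace(R).real

def neg_log_like(t):
    r = rho_of(t)
    p = np.array([np.trace(r @ P).real for P in projs])  # Born-rule probabilities
    return -np.sum(counts * np.log(np.clip(p, 1e-12, None)))

res = minimize(neg_log_like, x0=[1.0, 1.0, 0.0, 0.0], method='Nelder-Mead')
print(np.round(rho_of(res.x), 3))          # a physical density matrix by construction
```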
Receiver function estimated by maximum entropy deconvolution
吴庆举; 田小波; 张乃铃; 李卫平; 曾融生
2003-01-01
Maximum entropy deconvolution is presented to estimate the receiver function, with maximum entropy as the criterion for determining the auto-correlation and cross-correlation functions. The Toeplitz equation and Levinson algorithm are used to calculate the iterative formula of the error-predicting filter, and the receiver function is then estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps the maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method for measuring receiver functions in the time domain.
Tsuihiji, Takanobu
2010-08-01
The insertions of the cervical axial musculature on the occiput in marginocephalian and tyrannosaurid dinosaurs have been reconstructed in several studies with a view to their functional implications. Most of the past reconstructions on marginocephalians, however, relied on the anatomy of just one clade of reptiles, Lepidosauria, and lack phylogenetic justification. In this study, these past reconstructions were evaluated using the Extant Phylogenetic Bracket approach based on the anatomy of various extant diapsids. Many muscle insertions reconstructed in this study were substantially different from those in the past studies, demonstrating the importance of phylogenetically justified inferences based on the conditions of Aves and Crocodylia for reconstructing the anatomy of non-avian dinosaurs. The present reconstructions show that axial muscle insertions were generally enlarged in derived marginocephalians, apparently correlated with expansion of their parietosquamosal shelf/frill. Several muscle insertions on the occiput in tyrannosaurids reconstructed in this study using the Extant Phylogenetic Bracket approach were also rather different from recent reconstructions based on the same, phylogenetic and parsimony-based method. Such differences are mainly due to differences in initial identifications of muscle insertion areas or different hypotheses on muscle homologies in extant diapsids. This result emphasizes the importance of accurate and detailed observations on the anatomy of extant animals as the basis for paleobiological inferences such as anatomical reconstructions and functional analyses.
Delayed breast implant reconstruction
Hvilsom, Gitte B.; Hölmich, Lisbet R.; Steding-Jessen, Marianne;
2012-01-01
We evaluated the association between radiation therapy and severe capsular contracture or reoperation after 717 delayed breast implant reconstruction procedures (288 1- and 429 2-stage procedures) identified in the prospective database of the Danish Registry for Plastic Surgery of the Breast during … reconstruction approaches other than implants should be seriously considered among women who have received radiation therapy …
Breast reconstruction - slideshow
MedlinePlus patient education slideshow (//medlineplus.gov/ency/presentations/100156.htm): Breast reconstruction - series—Indication, part 1. Related MedlinePlus health topic: Breast Reconstruction.
Maximum Power from a Solar Panel
Michael Miller
2010-01-01
Full Text Available Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One of the techniques used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and the current of maximum power. These quantities are determined by finding the maximum value for the equation for power using differentiation. After the maximum values are found for each time of day, each individual quantity, voltage of maximum power, current of maximum power, and maximum power is plotted as a function of the time of day.
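A minimal sketch of that calculation for a single-diode panel model, locating the maximum of P = IV by finding the zero of dP/dV; all device parameters are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import brentq

I_L, I_0, n, Vt = 5.0, 1e-9, 1.3, 0.02585  # photocurrent, saturation current, ideality, kT/q

def current(v):
    """Single-diode model of the panel's I-V curve."""
    return I_L - I_0 * (np.exp(v / (n * Vt)) - 1.0)

def dP_dV(v, h=1e-6):                      # numerical derivative of P(v) = v * I(v)
    return ((v + h) * current(v + h) - (v - h) * current(v - h)) / (2.0 * h)

v_oc = n * Vt * np.log(I_L / I_0 + 1.0)    # open-circuit voltage (I = 0)
v_mp = brentq(dP_dV, 1e-3, v_oc - 1e-3)    # maximum power point: dP/dV = 0
print(f"V_mp={v_mp:.3f} V  I_mp={current(v_mp):.3f} A  P_max={v_mp*current(v_mp):.3f} W")
```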
Tomographic reconstructions using map algorithms - application to the SPIDR mission
Ghosh Roy, D.N.; Wilton, K.; Cook, T.A.; Chakrabarti, S.; Qi, J.; Gullberg, G.T.
2004-01-21
The spectral image of an astronomical scene is reconstructed from noisy tomographic projections using maximum a posteriori (MAP) and filtered backprojection (FBP) algorithms. Both maximum entropy (ME) and Gibbs priors are used in the MAP reconstructions. The scene, which is a uniform background with a localized emissive source superimposed on it, is reconstructed for a broad range of source counts. The algorithms are compared regarding their ability to detect the source in the background. Detectability is defined in terms of a contrast-to-noise ratio (CNR) which is a Monte Carlo ensemble average of spatially averaged CNRs for the individual reconstructions. Overall, MAP was found to yield improved CNR relative to FBP. Moreover, as a function of the total source counts, the CNR varies in distinctly different ways for source and background regions. This may be important in separating a weak source from the background.
Breast Reconstruction with Flap Surgery
Mayo Clinic staff overview of breast reconstruction with flap surgery: breast reconstruction is a surgical procedure that restores shape to the breast after removal of breast tissue to treat or prevent breast cancer; breast reconstruction with flap surgery is a type of breast reconstruction …
Experimental reconstruction of photon statistics without photon counting.
Zambra, Guido; Andreoni, Alessandra; Bondani, Maria; Gramegna, Marco; Genovese, Marco; Brida, Giorgio; Rossi, Andrea; Paris, Matteo G A
2005-08-05
Experimental reconstructions of photon number distributions of both continuous-wave and pulsed light beams are reported. Our scheme is based on on/off avalanche photo-detection assisted by maximum-likelihood estimation and does not involve photon counting. Reconstructions of the distribution for both semiclassical and quantum states of light are reported for single-mode as well as for multi-mode beams.
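The inversion can be sketched with an EM-type multiplicative update: the off probability at efficiency eta is p_off(eta) = sum_n (1 - eta)^n rho_n, and rho is iteratively rescaled to match the observed on/off frequencies. The efficiencies, true state and photon-number cutoff below are invented for illustration.

```python
import numpy as np
from scipy.stats import poisson

etas = np.linspace(0.1, 0.9, 9)            # assumed detector efficiencies
n = np.arange(21)                          # photon-number cutoff (assumption)
rho_true = poisson.pmf(n, 3.0)             # 'unknown' state: Poisson, <n> = 3
A_off = (1.0 - etas)[:, None] ** n         # A_off[v, n] = (1 - eta_v)^n
f_on = 1.0 - A_off @ rho_true              # ideal observed 'on' frequencies

rho = np.full(n.size, 1.0 / n.size)        # flat starting distribution
for _ in range(5000):                      # EM-type multiplicative updates
    p_off = A_off @ rho
    rho *= np.mean(f_on[:, None] * (1.0 - A_off) / (1.0 - p_off)[:, None]
                   + (1.0 - f_on)[:, None] * A_off / p_off[:, None], axis=0)

print(np.round(rho[:6], 3))                # reconstructed rho_0..rho_5
print(np.round(rho_true[:6], 3))           # ground truth for comparison
```

Note that the update preserves normalization of rho at every step, since the on and off response functions sum to one for each efficiency.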
Gardner Benjamin
2012-08-01
Full Text Available Abstract Background The twelve-item Self-Report Habit Index (SRHI) is the most popular measure of energy-balance related habits. This measure characterises habit by automatic activation, behavioural frequency, and relevance to self-identity. Previous empirical research suggests that the SRHI may be abbreviated with no losses in reliability or predictive utility. Drawing on recent theorising suggesting that automaticity is the ‘active ingredient’ of habit-behaviour relationships, we tested whether an automaticity-specific SRHI subscale could capture habit-based behaviour patterns in self-report data. Methods A content validity task was undertaken to identify a subset of automaticity indicators within the SRHI. The reliability, convergent validity and predictive validity of the automaticity item subset was subsequently tested in secondary analyses of all previous SRHI applications, identified via systematic review, and in primary analyses of four raw datasets relating to energy‐balance relevant behaviours (inactive travel, active travel, snacking, and alcohol consumption). Results A four-item automaticity subscale (the ‘Self-Report Behavioural Automaticity Index’; ‘SRBAI’) was found to be reliable and sensitive to two hypothesised effects of habit on behaviour: a habit-behaviour correlation, and a moderating effect of habit on the intention-behaviour relationship. Conclusion The SRBAI offers a parsimonious measure that adequately captures habitual behaviour patterns. The SRBAI may be of particular utility in predicting future behaviour and in studies tracking habit formation or disruption.
Voss, Clifford I.; Soliman, Safaa M.
2014-01-01
Parsimonious groundwater modeling provides insight into hydrogeologic functioning of the Nubian Aquifer System (NAS), the world’s largest non-renewable groundwater system (belonging to Chad, Egypt, Libya, and Sudan). Classical groundwater-resource issues exist (magnitude and lateral extent of drawdown near pumping centers) with joint international management questions regarding transboundary drawdown. Much of NAS is thick, containing a large volume of high-quality groundwater, but receives insignificant recharge, so water-resource availability is time-limited. Informative aquifer data are lacking regarding large-scale response, providing only local-scale information near pumps. Proxy data provide primary underpinning for understanding regional response: Holocene water-table decline from the previous pluvial period, after thousands of years, results in current oasis/sabkha locations where the water table still intersects the ground. Depletion is found to be controlled by two regional parameters, hydraulic diffusivity and vertical anisotropy of permeability. Secondary data that provide insight are drawdowns near pumps and isotope-groundwater ages (million-year-old groundwaters in Egypt). The resultant strong simply structured three-dimensional model representation captures the essence of NAS regional groundwater-flow behavior. Model forecasts inform resource management that transboundary drawdown will likely be minimal—a nonissue—whereas drawdown within pumping centers may become excessive, requiring alternative extraction schemes; correspondingly, significant water-table drawdown may occur in pumping centers co-located with oases, causing oasis loss and environmental impacts.
Nicola Magnavita
2012-01-01
Full Text Available Purpose. To perform a parsimonious measurement of workplace psychosocial stress in routine occupational health surveillance, this study tests the psychometric properties of a short version of the original Italian effort-reward imbalance (ERI) questionnaire. Methods. 1,803 employees (63 percent women) from 19 service companies in the Italian region of Latium participated in a cross-sectional survey containing the short version of the ERI questionnaire (16 items) and questions related to self-reported health, musculoskeletal complaints and job satisfaction. Exploratory factor analysis, internal consistency of scales and criterion validity were utilized. Results. The internal consistency of scales was satisfactory. Principal component analysis enabled identification of the model’s main factors. Significant associations with health and job satisfaction in the majority of cases support the notion of criterion validity. A high score on the effort-reward ratio was associated with an elevated odds ratio (OR = 2.71; 95% CI 1.86–3.95) of musculoskeletal complaints in the upper arm. Conclusions. The short form of the Italian ERI questionnaire provides a psychometrically useful tool for routine occupational health surveillance, although further validation is recommended.
Toda, M.; Yokozawa, M.; Richardson, A. D.; Kohyama, T.
2011-12-01
The effects of wind disturbance on interannual variability in ecosystem CO2 exchange have been assessed in two forests in northern Japan, i.e., a young, even-aged, monocultured deciduous forest and an uneven-aged mixed forest of evergreen and deciduous trees, including some over 200 years old, using eddy covariance (EC) measurements during 2004-2008. The EC measurements indicated that the photosynthetic recovery of trees after a huge typhoon in early September 2004 enhanced the annual carbon uptake of both forests, owing to changes in the physiological response of tree leaves during their growth stages. However, little has been resolved about which biotic and abiotic factors regulate interannual variability in heat, water and carbon exchange between the atmosphere and forests. In recent years, inverse modeling has been utilized as a powerful tool to estimate the biotic and abiotic parameters of a parsimonious, physiologically based model that affect heat, water and CO2 exchange between the atmosphere and forests. We conducted a Bayesian inverse analysis of such a model using the EC measurements. Preliminary results showed that model-derived NEE values, with parameters optimized by Bayesian inversion, were consistent with hourly observations. In the presentation, we examine interannual variability in the biotic and abiotic parameters related to heat, water and carbon exchange between the atmosphere and forests after disturbance by the typhoon.
The inverse maximum dynamic flow problem
Bagherian, Mehri
2010-01-01
We consider the inverse maximum dynamic flow (IMDF) problem. The IMDF problem can be described as follows: how to change the capacity vector of a dynamic network as little as possible so that a given feasible dynamic flow becomes a maximum dynamic flow. After discussing some characteristics of this problem, it is converted to a constrained minimum dynamic cut problem. Then an efficient algorithm which uses two maximum dynamic flow algorithms is proposed to solve the problem.
2008-01-01
On May 22, 10 days after the Wenchuan earthquake in Sichuan Province, the State Council formed the Post-earthquake Reconstruction Planning Group, deciding to work out a general reconstruction plan within a period of three months. Sichuan was the worst-hit area of China, so reconstruction work there will have a direct influence on how plans proceed in other areas. On July 18, Beijing Review reporter Feng Jianhua interviewed Wang Guangsi, Vice Director of the Sichuan Development and Reform Commission, about Sichuan’s reconstruction plan.
Maximum permissible voltage of YBCO coated conductors
Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z. [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Hong, Z., E-mail: zhiyong.hong@sjtu.edu.cn [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Wang, D.; Zhou, H.; Shen, X.; Shen, C. [Qingpu Power Supply Company, State Grid Shanghai Municipal Electric Power Company, Shanghai (China)
2014-06-15
Highlights: • We examine three kinds of tapes’ maximum permissible voltage. • We examine the relationship between quenching duration and maximum permissible voltage. • Continuous I{sub c} degradation occurs under repetitive quenching when tapes reach the maximum permissible voltage. • The relationship between maximum permissible voltage and resistance, temperature. - Abstract: A superconducting fault current limiter (SFCL) can reduce short-circuit currents in an electrical power system. One of the most important things in developing an SFCL is to find out the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer from critical current (I{sub c}) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until I{sub c} degradation or burnout happens. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, 12 mm AMSC CC and 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm respectively. Based on the results for these samples, the whole length of the CCs used in the design of an SFCL can be determined.
Resolution extension and exit wave reconstruction in complex HREM.
Hsieh, Wen-Kuo; Chen, Fu-Rong; Kai, Ji-Jung; Kirkland, A I
2004-01-01
Direct methods in real and reciprocal space are developed for structural reversion. The direct method in real space involves the use of a novel method to retrieve the phase in the image plane using transport of intensity equation/maximum entropy method (TIE/MEM) and exit wave reconstruction by self-consistent propagation. Since the exit wave is restored from the complex signal in the image planes, no image model between the exit wave and image is assumed. The structural information in the reconstructed exit wave is then further extended by a "complex" maximum entropy method as a direct method in reciprocal space to extrapolate the phase to higher frequencies.
Lora, Juan M.; Mitchell, Jonathan L.; Risi, Camille; Tripati, Aradhna E.
2017-01-01
Southwestern North America was wetter than present during the Last Glacial Maximum. The causes of increased water availability have been recently debated, and quantitative precipitation reconstructions have been underutilized in model-data comparisons. We investigate the climatological response of North Pacific atmospheric rivers to the glacial climate using model simulations and paleoclimate reconstructions. Atmospheric moisture transport due to these features shifted toward the southeast relative to modern. Enhanced southwesterly moisture delivery between Hawaii and California increased precipitation in the southwest while decreasing it in the Pacific Northwest, in agreement with reconstructions. Coupled climate models that are best able to reproduce reconstructed precipitation changes simulate decreases in sea level pressure across the eastern North Pacific and show the strongest southeastward shifts of moisture transport relative to a modern climate. Precipitation increases of ~1 mm d^-1, due largely to atmospheric rivers, are of the right magnitude to account for reconstructed pluvial conditions in parts of southwestern North America during the Last Glacial Maximum.
Test images for the maximum entropy image restoration method
Mackey, James E.
1990-01-01
One of the major activities of any experimentalist is data analysis and reduction. In solar physics, remote observations are made of the sun in a variety of wavelengths and circumstances. In no case is the data collected free from the influence of the design and operation of the data gathering instrument as well as the ever present problem of noise. The presence of significant noise invalidates the simple inversion procedure regardless of the range of known correlation functions. The Maximum Entropy Method (MEM) attempts to perform this inversion by making minimal assumptions about the data. To provide a means of testing the MEM and characterizing its sensitivity to noise, choice of point spread function, type of data, etc., one would like to have test images of known characteristics that can represent the type of data being analyzed. A means of reconstructing these images is presented.
The tropical lapse rate steepened during the Last Glacial Maximum.
Loomis, Shannon E; Russell, James M; Verschuren, Dirk; Morrill, Carrie; De Cort, Gijs; Sinninghe Damsté, Jaap S; Olago, Daniel; Eggermont, Hilde; Street-Perrott, F Alayne; Kelly, Meredith A
2017-01-01
The gradient of air temperature with elevation (the temperature lapse rate) in the tropics is predicted to become less steep during the coming century as surface temperature rises, enhancing the threat of warming in high-mountain environments. However, the sensitivity of the lapse rate to climate change is uncertain because of poor constraints on high-elevation temperature during past climate states. We present a 25,000-year temperature reconstruction from Mount Kenya, East Africa, which demonstrates that cooling during the Last Glacial Maximum was amplified with elevation and hence that the lapse rate was significantly steeper than today. Comparison of our data with paleoclimate simulations indicates that state-of-the-art models underestimate this lapse-rate change. Consequently, future high-elevation tropical warming may be even greater than predicted.
Breast Reconstruction After Mastectomy
Excerpt from a patient fact sheet: this flap type does not involve cutting of the abdominal muscle and is a free flap … New developments in breast reconstruction after mastectomy include oncoplastic surgery … See also the NCI fact sheet Mammograms.
Prairie Reconstruction Initiative
US Fish and Wildlife Service, Department of the Interior — The purpose of the Prairie Reconstruction Initiative Advisory Team (PRIAT) is to identify and take steps to resolve uncertainties in the process of prairie...
… work together. Head and neck surgeons also perform craniofacial reconstruction operations. The surgery is done while you are deep asleep and pain-free (under general anesthesia). The surgery may take …
Reconstructions of eyelid defects
Nirmala Subramanian
2011-01-01
Full Text Available Eyelids are the protective mechanism of the eyes. The upper and lower eyelids have been formed for their specific functions by Nature. Eyelid defects are encountered in congenital anomalies, trauma, and after excision of neoplasms. Reconstruction should be based on both functional and cosmetic aspects, and knowledge of the basic anatomy of the lids is a must. There are different techniques for reconstructing the upper eyelid, lower eyelid, and medial and lateral canthal areas. Often, the defects involve more than one area. For reconstruction of the lid, the lining should be similar to the conjunctiva, the cover should be skin, and the middle layer should give firmness and support. It is important to understand the availability of various tissues for reconstruction. One layer should have the vascularity to support the other layer, which can be a graft. A proper plan and its execution are very important.
Dydak, F; Nefedov, Y; Wotschack, J; Zhemchugov, A
2004-01-01
For a bias-free momentum measurement of TPC tracks, the correct determination of cluster positions is mandatory. We argue in particular that (i) the reconstruction of the entire longitudinal signal shape in view of longitudinal diffusion, electronic pulse shaping, and track inclination is important both for the polar angle reconstruction and for optimum rφ resolution; and that (ii) self-crosstalk of pad signals calls for special measures for the reconstruction of the z coordinate. The problem of 'shadow clusters' is resolved. Algorithms are presented for accepting clusters as 'good' clusters, and for the reconstruction of the rφ and z cluster coordinates, including provisions for 'bad' pads and pads next to sector boundaries, respectively.
Permutationally invariant state reconstruction
Moroder, Tobias; Hyllus, Philipp; Tóth, Géza;
2012-01-01
Feasible tomography schemes for large particle numbers must possess, besides an appropriate data acquisition protocol, an efficient way to reconstruct the density operator from the observed finite data set. Since state reconstruction typically requires the solution of a nonlinear large-scale optimization problem … likelihood and least squares methods, which are the preferred choices in today's experiments. This high efficiency is achieved by greatly reducing the dimensionality of the problem, employing a particular representation of permutationally invariant states known from spin coupling combined with convex …
The evolving breast reconstruction
Thomsen, Jørn Bo; Gunnarsson, Gudjon Leifur
2014-01-01
The aim of this editorial is to give an update on the use of the propeller thoracodorsal artery perforator flap (TAP/TDAP-flap) within the field of breast reconstruction. The TAP-flap can be dissected by a combined use of a monopolar cautery and a scalpel. Microsurgical instruments are generally … not needed. The propeller TAP-flap can be designed in different ways, three of these have been published: (I) an oblique upwards design; (II) a horizontal design; (III) an oblique downward design. The latissimus dorsi-flap is a good and reliable option for breast reconstruction, but has been criticized … for oncoplastic and reconstructive breast surgery and will certainly become an invaluable addition to breast reconstructive methods …
Anonymous
2010-01-01
The earthquake-hit Yushu shifts its focus from rescuing survivors to post-quake reconstruction. The first phase of earthquake relief, in which rescuing lives was the priority, finished 12 days after a 7.1-magnitude earthquake struck the Tibetan Autonomous Prefecture of Yushu in northwest China’s Qinghai Province on April 14, and reconstruction of the area is now ready to begin.
Modeling Mediterranean ocean climate of the Last Glacial Maximum
U. Mikolajewicz
2010-10-01
Full Text Available A regional ocean general circulation model of the Mediterranean is used to study the climate of the last glacial maximum. The atmospheric forcing for these simulations has been derived from simulations with an atmospheric general circulation model, which in turn was forced with surface conditions from a coarse resolution earth system model. The model is successful in reproducing the general patterns of reconstructed sea surface temperature anomalies with the strongest cooling in summer in the northwestern Mediterranean and weak cooling in the Levantine, although the model underestimates the extent of the summer cooling in the western Mediterranean. However, there is a strong vertical gradient associated with this pattern of summer cooling, which makes the comparison with reconstructions nontrivial. The exchange with the Atlantic is decreased to roughly one half of its present value, which can be explained by the shallower Strait of Gibraltar as a consequence of lower global sea level. This reduced exchange causes a strong increase of the salinity in the Mediterranean in spite of reduced net evaporation.
Generalised maximum entropy and heterogeneous technologies
Oude Lansink, A.G.J.M.
1999-01-01
Generalised maximum entropy methods are used to estimate a dual model of production on panel data of Dutch cash crop farms over the period 1970-1992. The generalised maximum entropy approach allows a coherent system of input demand and output supply equations to be estimated for each farm in the sample.
20 CFR 229.48 - Family maximum.
2010-04-01
... month on one person's earnings record is limited. This limited amount is called the family maximum. The family maximum used to adjust the social security overall minimum rate is based on the employee's Overall..., when any of the persons entitled to benefits on the insured individual's compensation would, except...
The maximum rotation of a galactic disc
Bottema, R
1997-01-01
The observed stellar velocity dispersions of galactic discs show that the maximum rotation of a disc is on average 63% of the observed maximum rotation. This criterion can, however, not be applied to small or low surface brightness (LSB) galaxies because such systems show, in general, a continuously …
Three-dimensional sheaf of ultrasound planes reconstruction (SOUPR) of ablated volumes.
Ingle, Atul; Varghese, Tomy
2014-08-01
This paper presents an algorithm for 3-D reconstruction of tumor ablations using ultrasound shear wave imaging with electrode vibration elastography. Radio-frequency ultrasound data frames are acquired over imaging planes that form a subset of a sheaf of planes sharing a common axis of intersection. Shear wave velocity is estimated separately on each imaging plane using a piecewise linear function fitting technique with a fast optimization routine. An interpolation algorithm then computes velocity maps on a fine grid over a set of C-planes that are perpendicular to the axis of the sheaf. A full 3-D rendering of the ablation can then be created from this stack of C-planes; hence the name "Sheaf Of Ultrasound Planes Reconstruction" or SOUPR. The algorithm is evaluated through numerical simulations and also using data acquired from a tissue mimicking phantom. Reconstruction quality is gauged using contrast and contrast-to-noise ratio measurements and changes in quality from using increasing number of planes in the sheaf are quantified. The highest contrast of 5 dB is seen between the stiffest and softest regions of the phantom. Under certain idealizing assumptions on the true shape of the ablation, good reconstruction quality while maintaining fast processing rate can be obtained with as few as six imaging planes suggesting that the method is suited for parsimonious data acquisitions with very few sparsely chosen imaging planes.
Duality of Maximum Entropy and Minimum Divergence
Shinto Eguchi
2014-06-01
Full Text Available We discuss a special class of generalized divergence measures by the use of generator functions. Any divergence measure in the class is separated into the difference between cross and diagonal entropy. The diagonal entropy measure in the class associates with a model of maximum entropy distributions; the divergence measure leads to statistical estimation via minimization, for arbitrarily giving a statistical model. The dualistic relationship between the maximum entropy model and the minimum divergence estimation is explored in the framework of information geometry. The model of maximum entropy distributions is characterized to be totally geodesic with respect to the linear connection associated with the divergence. A natural extension for the classical theory for the maximum likelihood method under the maximum entropy model in terms of the Boltzmann-Gibbs-Shannon entropy is given. We discuss the duality in detail for Tsallis entropy as a typical example.
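In generic notation (the symbols here are illustrative, not necessarily the paper's exact ones), the decomposition of a divergence into cross entropy minus diagonal entropy, and the resulting duality, can be written as a short sketch:

```latex
% Generic decomposition for a U-divergence (illustrative notation):
% divergence = cross entropy minus diagonal entropy,
\[
  D_U(p, q) \;=\; C_U(p, q) - H_U(p), \qquad H_U(p) := C_U(p, p) .
\]
% Minimizing D_U(p, q) over a statistical model for q is thus cross-entropy
% minimization, while the associated maximum-entropy model maximizes H_U
% under moment constraints; the choice U(t) = exp(t) recovers the
% Kullback-Leibler divergence and the Boltzmann-Gibbs-Shannon entropy.
```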
Lukeš, Tomáš; Křížek, Pavel; Švindrych, Zdeněk; Benda, Jakub; Ovesný, Martin; Fliegel, Karel; Klíma, Miloš; Hagen, Guy M
2014-12-01
We introduce and demonstrate a new high performance image reconstruction method for super-resolution structured illumination microscopy based on maximum a posteriori probability estimation (MAP-SIM). Imaging performance is demonstrated on a variety of fluorescent samples of different thickness, labeling density and noise levels. The method provides good suppression of out of focus light, improves spatial resolution, and allows reconstruction of both 2D and 3D images of cells even in the case of weak signals. The method can be used to process both optical sectioning and super-resolution structured illumination microscopy data to create high quality super-resolution images.
Crimi, Alessandro; Lillholm, Martin; Nielsen, Mads
2011-01-01
… and may lead to unreliable results. In this paper, we discuss regularization by prior knowledge using maximum a posteriori (MAP) estimates. We compare ML to MAP using a number of priors and to Tikhonov regularization. We evaluate the covariance estimates on both synthetic and real data, and we analyze the estimates' influence on a missing-data reconstruction task, where high-resolution vertebra and cartilage models are reconstructed from incomplete and lower-dimensional representations. Our results demonstrate that our methods outperform the traditional ML method and Tikhonov regularization …
Tang, Yin; Hooshyar, Milad; Zhu, Tingju; Ringler, Claudia; Sun, Alexander Y.; Long, Di; Wang, Dingbao
2017-08-01
A two-parameter annual water balance model was developed for reconstructing annual terrestrial water storage change (ΔTWS) and groundwater storage change (ΔGWS). The model was integrated with the Gravity Recovery and Climate Experiment (GRACE) data and applied to the Punjab province in Pakistan for reconstructing ΔTWS and ΔGWS during 1980-2015 based on multiple input data sources. Model parameters were estimated through minimizing the root-mean-square error between the Budyko-modeled and GRACE-derived ΔTWS during 2003-2015. The correlation of ensemble means between Budyko-modeled and GRACE-derived ΔTWS is 0.68 with p-value … irrigation regions with parsimonious models.
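A hedged sketch of a Budyko-type two-parameter annual water balance of this flavor: evaporation follows Fu's curve, a crude runoff closure stands in for the second process, and the parameters are calibrated by RMSE against (here, fake) GRACE-derived storage changes. It is a generic stand-in under those assumptions, not necessarily the study's exact model.

```python
import numpy as np
from scipy.optimize import minimize

def fu_evaporation(P, PET, w):
    """Fu's one-parameter form of the Budyko curve for annual evaporation."""
    return P * (1.0 + PET / P - (1.0 + (PET / P) ** w) ** (1.0 / w))

def model_dtws(params, P, PET):
    w, c = params
    E = fu_evaporation(P, PET, w)
    Q = c * P                              # crude runoff closure (assumption)
    return P - E - Q                       # annual storage change dTWS

rng = np.random.default_rng(0)
P = rng.uniform(300.0, 700.0, 13)          # annual precipitation, mm (invented)
PET = rng.uniform(1200.0, 1800.0, 13)      # annual potential ET, mm (invented)
grace = model_dtws((2.2, 0.08), P, PET) + rng.normal(0.0, 5.0, 13)  # fake 'GRACE'

rmse = lambda th: np.sqrt(np.mean((model_dtws(th, P, PET) - grace) ** 2))
fit = minimize(rmse, x0=[2.0, 0.1], method='L-BFGS-B',
               bounds=[(1.01, 5.0), (0.0, 0.5)])
print(fit.x, rmse(fit.x))                  # calibrated (w, c) and residual RMSE
```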
Ohri, Nisha [Department of Radiation Oncology, Memorial Sloan-Kettering Cancer Center, New York, New York (United States); Cordeiro, Peter G. [Department of Plastic Surgery, Memorial Sloan-Kettering Cancer Center, New York, New York (United States); Keam, Jennifer [Department of Radiation Oncology, Memorial Sloan-Kettering Cancer Center, New York, New York (United States); Ballangrud, Ase [Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, New York (United States); Shi Weiji; Zhang Zhigang [Department of Biostatistics and Epidemiology, Memorial Sloan-Kettering Cancer Center, New York, New York (United States); Nerbun, Claire T.; Woch, Katherine M.; Stein, Nicholas F.; Zhou Ying [Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, New York (United States); McCormick, Beryl; Powell, Simon N. [Department of Radiation Oncology, Memorial Sloan-Kettering Cancer Center, New York, New York (United States); Ho, Alice Y., E-mail: HoA1234@mskcc.org [Department of Radiation Oncology, Memorial Sloan-Kettering Cancer Center, New York, New York (United States)
2012-10-01
Purpose: To assess the impact of immediate breast reconstruction on postmastectomy radiation (PMRT) using dose-volume histogram (DVH) data. Methods and Materials: Two hundred forty-seven women underwent PMRT at our center, 196 with implant reconstruction and 51 without reconstruction. Patients with reconstruction were treated with tangential photons, and patients without reconstruction were treated with en-face electron fields and customized bolus. Twenty percent of patients received internal mammary node (IMN) treatment. The DVH data were compared between groups. Ipsilateral lung parameters included V20 (% volume receiving 20 Gy), V40 (% volume receiving 40 Gy), mean dose, and maximum dose. Heart parameters included V25 (% volume receiving 25 Gy), mean dose, and maximum dose. IMN coverage was assessed when applicable. Chest wall coverage was assessed in patients with reconstruction. Propensity-matched analysis adjusted for potential confounders of laterality and IMN treatment. Results: Reconstruction was associated with lower lung V20, mean dose, and maximum dose compared with no reconstruction (all P<.0001). These associations persisted on propensity-matched analysis (all P<.0001). Heart doses were similar between groups (P=NS). Ninety percent of patients with reconstruction had excellent chest wall coverage (D95 >98%). IMN coverage was superior in patients with reconstruction (D95 >92.0 vs 75.7%, P<.001). IMN treatment significantly increased lung and heart parameters in patients with reconstruction (all P<.05) but minimally affected those without reconstruction (all P>.05). Among IMN-treated patients, only lower lung V20 in those without reconstruction persisted (P=.022), and mean and maximum heart doses were higher than in patients without reconstruction (P=.006, P=.015, respectively). Conclusions: Implant reconstruction does not compromise the technical quality of PMRT when the IMNs are untreated. Treatment technique, not reconstruction, is the primary
Primordial density and BAO reconstruction
Zhu, Hong-Ming; Chen, Xuelei
2016-01-01
We present a new method to reconstruct the primordial (linear) density field using the estimated nonlinear displacement field. The divergence of the displacement field gives the reconstructed density field. We solve the nonlinear displacement field in the 1D cosmology and show the reconstruction results. The new reconstruction algorithm recovers many linear modes and reduces the nonlinear damping scale significantly. The successful 1D reconstruction results imply that the new algorithm should also be a promising technique in the 3D case.
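As a toy illustration of the final step, the divergence of a 1D displacement field reduces to a single derivative, and the linear density estimate is its negative (Zel'dovich convention x = q + ψ(q)). The displacement field below is synthetic and purely illustrative:

```python
import numpy as np

# In 1D the "divergence" of the displacement psi(x) is d(psi)/dx, and the
# reconstructed linear density is delta_lin = -d(psi)/dx.
x = np.linspace(0.0, 100.0, 1024, endpoint=False)  # illustrative 1D grid
psi = 0.5 * np.sin(2 * np.pi * x / 100.0)          # hypothetical displacement field
delta_recon = -np.gradient(psi, x)                 # reconstructed linear density
```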
Kirkpatrick Mark
2005-01-01
Full Text Available Abstract Principal component analysis is a widely used 'dimension reduction' technique, albeit generally at a phenotypic level. It is shown that we can estimate genetic principal components directly through a simple reparameterisation of the usual linear, mixed model. This is applicable to any analysis fitting multiple, correlated genetic effects, whether effects for individual traits or sets of random regression coefficients to model trajectories. Depending on the magnitude of genetic correlation, a subset of the principal components generally suffices to capture the bulk of genetic variation. Corresponding estimates of genetic covariance matrices are more parsimonious, have reduced rank and are smoothed, with the number of parameters required to model the dispersion structure reduced from k(k + 1)/2 to m(2k - m + 1)/2 for k effects and m principal components. Estimation of these parameters, the largest eigenvalues and pertaining eigenvectors of the genetic covariance matrix, via restricted maximum likelihood using derivatives of the likelihood, is described. It is shown that reduced rank estimation can reduce computational requirements of multivariate analyses substantially. An application to the analysis of eight traits recorded via live ultrasound scanning of beef cattle is given.
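The parameter counts quoted above are easy to tabulate. A small sketch for the k = 8 traits of the beef cattle application; the two formulas come from the abstract, the loop is ours:

```python
def n_full(k):       # parameters in an unstructured k x k genetic covariance matrix
    return k * (k + 1) // 2

def n_reduced(k, m): # parameters when only m leading principal components are fitted
    return m * (2 * k - m + 1) // 2

k = 8  # eight ultrasound-scan traits, as in the cattle application
for m in range(1, k + 1):
    print(m, n_reduced(k, m), "vs full", n_full(k))
```

For example, m = 3 components need 21 parameters instead of the full 36.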
Interferometric phase reconstruction using simplified coherence network
Zhang, Kui; Song, Ruiqing; Wang, Hui; Wu, Di; Wang, Hua
2016-09-01
Interferometric time-series analysis techniques, which extend traditional differential radar interferometry, have demonstrated a strong capability for monitoring ground surface displacement. Such techniques are able to obtain the temporal evolution of ground deformation to within millimeter accuracy by using a stack of synthetic aperture radar (SAR) images. In order to minimize decorrelation between stacked SAR images, the phase reconstruction technique has been developed recently. The main idea of this technique is to reform phase observations along a SAR stack by taking advantage of a maximum likelihood estimator defined on the coherence matrix estimated from each target. However, the phase value of a coherence matrix element might be considerably biased when its corresponding coherence is low. In this case, it becomes an outlying sample that degrades the corresponding phase reconstruction process. In order to avoid this problem, a new approach is developed in this paper. This approach considers a coherence matrix element to be an arc in a network. A so-called simplified coherence network (SCN) is constructed to decrease the negative impact of outlying samples. Moreover, a dedicated iterative strategy is designed to solve the transformed phase reconstruction problem defined on an SCN. For validation purposes, the proposed method is applied to 29 real SAR images. The results demonstrate that the proposed method has excellent computational efficiency and obtains more reliable phase reconstruction solutions than the traditional method based on the phase triangulation algorithm.
On the Threshold of Maximum-Distance Separable Codes
Kindarji, Bruno; Chabanne, Hervé
2010-01-01
Starting from a practical use of Reed-Solomon codes in a cryptographic scheme published in Indocrypt'09, this paper deals with the threshold of linear $q$-ary error-correcting codes. The security of this scheme is based on the intractability of polynomial reconstruction when there is too much noise in the vector. Our approach switches from this paradigm to an information-theoretical point of view: is there a class of elements that are so far away from the code that the list size is always superpolynomial? Or, dually speaking, is maximum-likelihood decoding almost surely impossible? We relate this issue to the decoding threshold of a code, and show that when the minimal distance of the code is high enough, the threshold effect is very sharp. In a second part, we give explicit lower bounds on the threshold of maximum-distance separable codes such as Reed-Solomon codes, and compute the threshold for the toy example that motivates this study.
Augmented Likelihood Image Reconstruction.
Stille, Maik; Kleine, Matthias; Hägele, Julian; Barkhausen, Jörg; Buzug, Thorsten M
2016-01-01
The presence of high-density objects remains an open problem in medical CT imaging. Data of projections passing through objects of high density, such as metal implants, are dominated by noise and are highly affected by beam hardening and scatter. Reconstructed images become less diagnostically conclusive because of pronounced artifacts that manifest as dark and bright streaks. A new reconstruction algorithm is proposed with the aim of reducing these artifacts by incorporating information about the shape and known attenuation coefficients of a metal implant. Image reconstruction is considered as a variational optimization problem. The aforementioned prior knowledge is introduced in terms of equality constraints. An augmented Lagrangian approach is adapted in order to minimize the associated log-likelihood function for transmission CT. During the iterations, transient artifacts are reduced with a bilateral filter, and new projection values are calculated, which are used later on for the reconstruction. A detailed evaluation in cooperation with radiologists is performed on software and hardware phantoms, as well as on clinically relevant patient data of subjects with various metal implants. Results show that the proposed reconstruction algorithm is able to outperform contemporary metal artifact reduction methods such as normalized metal artifact reduction.
A dual method for maximum entropy restoration
Smith, C. B.
1979-01-01
A simple iterative dual algorithm for maximum entropy image restoration is presented. The dual algorithm involves fewer parameters than conventional minimization in the image space. Minicomputer test results for Fourier synthesis with inadequate phantom data are given.
Maximum Throughput in Multiple-Antenna Systems
Zamani, Mahdi
2012-01-01
The point-to-point multiple-antenna channel is investigated in an uncorrelated block fading environment with Rayleigh distribution. The maximum throughput and maximum expected-rate of this channel are derived under the assumption that the transmitter is oblivious to the channel state information (CSI), while the receiver has perfect CSI. First, we prove that in multiple-input single-output (MISO) channels, the optimum transmission strategy maximizing the throughput is to use all available antennas and perform equal power allocation with uncorrelated signals. Furthermore, to increase the expected-rate, multi-layer coding is applied. Analogously, we establish that sending uncorrelated signals and performing equal power allocation across all available antennas at each layer is optimum. A closed-form expression for the maximum continuous-layer expected-rate of MISO channels is also obtained. Moreover, we investigate multiple-input multiple-output (MIMO) channels, and formulate the maximum throughput in the asympt...
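The MISO equal-power result can be probed numerically. Below is a Monte Carlo sketch comparing the ergodic rate of equal power over all antennas against putting all power on one antenna, under i.i.d. Rayleigh fading; the antenna count, SNR, and trial count are illustrative, and this simplified ergodic-rate comparison stands in for the paper's outage-based throughput definition:

```python
import numpy as np

rng = np.random.default_rng(0)
M, snr, trials = 4, 10.0, 100_000  # antennas, total SNR (linear), draws

# Rayleigh block fading: i.i.d. CN(0,1) channel coefficients per draw
h = (rng.standard_normal((trials, M)) + 1j * rng.standard_normal((trials, M))) / np.sqrt(2)

# Equal power allocation with uncorrelated signals across all M antennas:
# instantaneous rate log2(1 + (snr/M) * ||h||^2)
rate_equal = np.log2(1 + (snr / M) * np.sum(np.abs(h) ** 2, axis=1))

# Naive alternative without CSI at the transmitter: all power on antenna 0
rate_single = np.log2(1 + snr * np.abs(h[:, 0]) ** 2)

print(rate_equal.mean(), rate_single.mean())  # equal power wins on average
```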
Photoemission spectromicroscopy with MAXIMUM at Wisconsin
Ng, W.; Ray-Chaudhuri, A.K.; Cole, R.K.; Wallace, J.; Crossley, S.; Crossley, D.; Chen, G.; Green, M.; Guo, J.; Hansen, R.W.C.; Cerrina, F.; Margaritondo, G. (Dept. of Electrical Engineering, Dept. of Physics and Synchrotron Radiation Center, Univ. of Wisconsin, Madison (USA)); Underwood, J.H.; Korthright, J.; Perera, R.C.C. (Center for X-ray Optics, Accelerator and Fusion Research Div., Lawrence Berkeley Lab., CA (USA))
1990-06-01
We describe the development of the scanning photoemission spectromicroscope MAXIMUM at the Wisconsin Synchrotron Radiation Center, which uses radiation from a 30-period undulator. The article includes a discussion of the first tests after the initial commissioning. (orig.).
Analysis on the reconstruction accuracy of the Fitch method for inferring ancestral states
Grünewald Stefan
2011-01-01
Full Text Available Abstract Background As one of the most widely used parsimony methods for ancestral reconstruction, the Fitch method minimizes the total number of hypothetical substitutions along all branches of a tree to explain the evolution of a character. Due to the extensive usage of this method, it has become a scientific endeavor in recent years to study the reconstruction accuracy of the Fitch method. However, most studies are restricted to 2-state evolutionary models, and a study for higher-state models is needed, since DNA sequences have four states and protein sequences even have twenty. Results In this paper, the ambiguous and unambiguous reconstruction accuracies of the Fitch method are studied for N-state evolutionary models. Given an arbitrary phylogenetic tree, a recurrence system is first presented to calculate the two accuracies iteratively. As the complete binary tree and the comb-shaped tree are the two extremal evolutionary tree topologies according to balance, we focus on the reconstruction accuracies on these two topologies and analyze their asymptotic properties. Then, 1000 Yule trees with 1024 leaves are generated and analyzed to simulate real evolutionary scenarios. It is known that more taxa do not necessarily increase the reconstruction accuracies under 2-state models. The result under N-state models is also tested. Conclusions In a large tree with many leaves, the reconstruction accuracies of using all taxa are sometimes less than those of using a leaf subset under N-state models. For complete binary trees, there always exists an equilibrium interval [a, b] of conservation probability, in which the limiting ambiguous reconstruction accuracy equals the probability of randomly picking a state. The value b decreases as the number of states increases, and it seems to converge. When the conservation probability is greater than b, the reconstruction accuracies of the Fitch method increase rapidly. The reconstruction
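For reference, the bottom-up pass of the Fitch method analyzed here fits in a few lines: intersect the children's state sets when possible, otherwise take the union and count one substitution. A minimal sketch; the tree encoding and the example states are ours:

```python
def fitch(tree, leaf_state, root="root"):
    """Bottom-up pass of the Fitch method on a rooted binary tree.

    tree       : dict mapping each internal node to its (left, right) children
    leaf_state : dict mapping each leaf to its observed character state
    Returns (state set at the root, minimum number of substitutions).
    """
    changes = 0

    def walk(node):
        nonlocal changes
        if node in leaf_state:              # leaf: singleton state set
            return {leaf_state[node]}
        left, right = tree[node]
        a, b = walk(left), walk(right)
        if a & b:                           # non-empty intersection: no substitution
            return a & b
        changes += 1                        # union rule costs one substitution
        return a | b

    return walk(root), changes

# Tiny 4-taxon example with hypothetical nucleotide states:
tree = {"root": ("n1", "n2"), "n1": ("t1", "t2"), "n2": ("t3", "t4")}
states = {"t1": "A", "t2": "C", "t3": "A", "t4": "A"}
print(fitch(tree, states))                  # ({'A'}, 1)
```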
The maximum entropy technique. System's statistical description
Belashev, B Z
2002-01-01
The maximum entropy technique (MENT) is applied to find the distribution functions of physical quantities. MENT naturally takes into account the demand of maximum entropy, the characteristics of the system, and the connection conditions. This allows MENT to be applied to the statistical description of closed and open systems. Examples are considered in which MENT has been used to describe equilibrium and nonequilibrium states, as well as states far from thermodynamic equilibrium.
19 CFR 114.23 - Maximum period.
2010-04-01
19 CFR 114.23 (Customs Duties, Carnets, Processing of Carnets) - Maximum period. (a) A.T.A. carnet. No A.T.A. carnet with a period of validity exceeding 1 year from date of issue shall be accepted. This period of validity cannot be...
Maximum-Likelihood Detection Of Noncoherent CPM
Divsalar, Dariush; Simon, Marvin K.
1993-01-01
Simplified detectors are proposed for use in maximum-likelihood sequence detection of symbols, in an alphabet of size M, transmitted by uncoded, full-response continuous phase modulation over a radio channel with additive white Gaussian noise. The structures of the receivers are derived from a particular interpretation of the maximum-likelihood metrics. The receivers include front ends whose structure depends only on M, analogous to those in receivers of coherent CPM. The parts of the receivers following the front ends have structures whose complexity depends on N.
Climatic reconstruction in Europe for 18,000 yr B.P. from pollen data
Peyron, O; Guiot, J; Cheddadi, R; Tarasov, P; Reille, M; de Beaulieu, JL; Bottema, S; Andrieu, [No Value
1998-01-01
An improved concept of the best analogs method is used to reconstruct the climate of the last glacial maximum from pollen data in Europe. In order to deal with the lack of perfect analogs of fossil assemblages and therefore to obtain a more accurate climate reconstruction, we used a combination of p
Pollen-based biome reconstruction for southern Europe and Africa 18,000 yr BP
Elenga, H; Peyron, O; Bonnefille, R; Jolly, D; Cheddadi, R; Guiot, J; Andrieu, [No Value; Bottema, S; Buchet, G; de Beaulieu, JL; Hamilton, AC; Maley, J; Marchant, R; Perez-Obiol, R; Reille, M; Riollet, G; Scott, L; Straka, H; Taylor, D; Van Campo, E; Vincens, A; Laarif, F; Jonson, H
2000-01-01
Pollen data from 18,000 C-14 yr BP were compiled in order to reconstruct biome distributions at the last glacial maximum in southern Europe and Africa. Biome reconstructions were made using the objective biomization method applied to pollen counts using a complete list of dryland taxa wherever possible...
SEXUAL DIMORPHISM OF MAXIMUM FEMORAL LENGTH
Pandya A M
2011-04-01
Full Text Available Sexual identification from skeletal parts has medicolegal and anthropological importance. The present study aims to obtain values of maximum femoral length and to evaluate its possible usefulness in determining correct sexual identification. The study sample consisted of 184 dry, normal, adult, human femora (136 male & 48 female) from skeletal collections of the Anatomy department, M. P. Shah Medical College, Jamnagar, Gujarat. Maximum length of the femur was taken as the maximum vertical distance between the upper end of the head of the femur and the lowest point on the femoral condyle, measured with the osteometric board. Mean values obtained (in mm) were 451.81 and 417.48 for right male and female, and 453.35 and 420.44 for left male and female, respectively. The higher value in males was statistically highly significant (P < 0.001) on both sides. Demarking point (D.P.) analysis of the data showed that right femora with maximum length more than 476.70 were definitely male and less than 379.99 were definitely female; while for left bones, femora with maximum length more than 484.49 were definitely male and less than 385.73 were definitely female. Maximum length identified 13.43% of right male femora, 4.35% of right female femora, 7.25% of left male femora and 8% of left female femora. [National J of Med Res 2011; 1(2): 67-70]
Reconstruction of images from radiofrequency electron paramagnetic resonance spectra.
Smith, C M; Stevens, A D
1994-12-01
This paper discusses methods for obtaining image reconstructions from electron paramagnetic resonance (EPR) spectra which constitute object projections. An automatic baselining technique is described which treats each spectrum consistently, rotating the non-horizontal baselines caused by stray magnetic effects onto the horizontal axis. The convolved backprojection method is described for both two- and three-dimensional reconstruction, and the effect of cut-off frequency on the reconstruction is illustrated. A slower, indirect, iterative method, which does a non-linear fit to the projection data, is shown to give a far smoother reconstructed image when the method of maximum entropy is used to determine the value of the final residual sum of squares. Although this requires more computing time than the convolved backprojection method, it is more flexible and overcomes the problem of numerical instability encountered in deconvolution. Images from phantom samples in vitro are discussed. The spectral data for these have been accumulated quickly and have a low signal-to-noise ratio. The results show that as few as 16 spectra can still be processed to give an image. Artifacts in the image due to a small number of projections in the convolved backprojection reconstruction method can be removed by applying a threshold, i.e. only plotting contours higher than a given value. These artifacts are not present in an image which has been reconstructed by the maximum entropy technique. At present these techniques are being applied directly to in vivo studies.
Iterative Reconstruction of Coded Source Neutron Radiographs
Santos-Villalobos, Hector J [ORNL; Bingham, Philip R [ORNL; Gregor, Jens [University of Tennessee, Knoxville (UTK)
2013-01-01
Use of a coded source facilitates high-resolution neutron imaging through magnifications but requires that the radiographic data be deconvolved. A comparison of direct deconvolution with two different iterative algorithms has been performed. One iterative algorithm is based on a maximum likelihood estimation (MLE)-like framework and the second is based on a geometric model of the neutron beam within a least squares formulation of the inverse imaging problem. Simulated data for both uniform and Gaussian shaped source distributions were used for testing to understand the impact of non-uniformities present in neutron beam distributions on the reconstructed images. Results indicate that the model based reconstruction method will match resolution and improve on contrast over convolution methods in the presence of non-uniform sources. Additionally, the model based iterative algorithm provides direct calculation of quantitative transmission values, while the convolution based methods must be normalized based on known values.
Primary Vertex Reconstruction for Upgrade at LHCb
Wanczyk, Joanna
2016-01-01
The aim of the LHCb experiment is the study of beauty and charm hadron decays, with the main focus on CP-violating phenomena and searches for physics beyond the Standard Model through rare decays. At present, the second data-taking period, called Run II, is ongoing. After 2018, during the long shutdown, the replacement of significant parts of the LHCb detector is planned. One of the main changes is the upgrade of the present software and hardware trigger to a more rapid full-software trigger. The primary vertex (PV) is a basis for the further tracking, and it is sensitive to the LHC running conditions, which are going to change for the Upgrade. In particular, the center-of-mass collision energy should reach the maximum value of 14 TeV. As a result, the quality of the reconstruction has to be studied and the reconstruction algorithms have to be optimized.
A Bayesian nonparametric approach to reconstruction and prediction of random dynamical systems
Merkatas, Christos; Kaloudis, Konstantinos; Hatjispyros, Spyridon J.
2017-06-01
We propose a Bayesian nonparametric mixture model for the reconstruction and prediction, from observed time series data, of discretized stochastic dynamical systems, based on Markov chain Monte Carlo methods. Our results can be used by researchers in physical modeling interested in a fast and accurate estimation of low-dimensional stochastic models when the size of the observed time series is small and the noise process (perhaps) is non-Gaussian. The inference procedure is demonstrated specifically in the case of polynomial maps of arbitrary degree and when a Geometric Stick Breaking mixture process prior over the space of densities is applied to the additive errors. Our method is parsimonious compared with Bayesian nonparametric techniques based on Dirichlet process mixtures, while remaining flexible and general. Simulations based on synthetic time series are presented.
Iterative initial condition reconstruction
Schmittfull, Marcel; Baldauf, Tobias; Zaldarriaga, Matias
2017-07-01
Motivated by recent developments in perturbative calculations of the nonlinear evolution of large-scale structure, we present an iterative algorithm to reconstruct the initial conditions in a given volume starting from the dark matter distribution in real space. In our algorithm, objects are first moved back iteratively along estimated potential gradients, with a progressively reduced smoothing scale, until a nearly uniform catalog is obtained. The linear initial density is then estimated as the divergence of the cumulative displacement, with an optional second-order correction. This algorithm should undo nonlinear effects up to one-loop order, including the higher-order infrared resummation piece. We test the method using dark matter simulations in real space. At redshift z =0 , we find that after eight iterations the reconstructed density is more than 95% correlated with the initial density at k ≤0.35 h Mpc-1 . The reconstruction also reduces the power in the difference between reconstructed and initial fields by more than 2 orders of magnitude at k ≤0.2 h Mpc-1 , and it extends the range of scales where the full broadband shape of the power spectrum matches linear theory by a factor of 2-3. As a specific application, we consider measurements of the baryonic acoustic oscillation (BAO) scale that can be improved by reducing the degradation effects of large-scale flows. In our idealized dark matter simulations, the method improves the BAO signal-to-noise ratio by a factor of 2.7 at z =0 and by a factor of 2.5 at z =0.6 , improving standard BAO reconstruction by 70% at z =0 and 30% at z =0.6 , and matching the optimal BAO signal and signal-to-noise ratio of the linear density in the same volume. For BAO, the iterative nature of the reconstruction is the most important aspect.
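One iteration of the algorithm, reduced to a 1D toy, can be sketched directly from the description above: estimate a Zel'dovich displacement from the current catalog and move objects back along it, with a smoothing scale that is reduced from iteration to iteration. The grid size, smoothing kernel, and nearest-grid-point density assignment below are illustrative simplifications of the paper's 3D procedure:

```python
import numpy as np

def one_iteration(pos, box, ngrid, smooth):
    """One step of the iterative reconstruction, as a 1D toy: estimate the
    Zel'dovich displacement from the current positions, move particles back."""
    # nearest-grid-point density contrast
    counts, _ = np.histogram(pos % box, bins=ngrid, range=(0.0, box))
    delta = counts / counts.mean() - 1.0
    # Fourier space: delta = -d(psi)/dx  =>  psi_k = i * delta_k / k
    k = 2.0 * np.pi * np.fft.fftfreq(ngrid, d=box / ngrid)
    dk = np.fft.fft(delta) * np.exp(-0.5 * (k * smooth) ** 2)  # Gaussian smoothing
    psi_k = np.zeros_like(dk)
    nz = k != 0
    psi_k[nz] = 1j * dk[nz] / k[nz]
    psi = np.real(np.fft.ifft(psi_k))
    # move each particle back by the displacement at its nearest grid point
    idx = (pos % box / (box / ngrid)).astype(int) % ngrid
    return (pos - psi[idx]) % box
```

Iterating with a progressively smaller `smooth` and accumulating the displacements gives the cumulative displacement whose divergence estimates the linear initial density.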
Maximum a posteriori estimation of crystallographic phases in X-ray diffraction tomography
Gürsoy, Doǧa; Bicer, Tekin; Almer, Jonathan D.; Kettimuthu, Rajkumar; Stock, Stuart; De Carlo, Francesco
2015-06-13
A maximum a posteriori approach is proposed for X-ray diffraction tomography for reconstructing three-dimensional spatial distribution of crystallographic phases and orientations of polycrystalline materials. The approach maximizes the a posteriori density which includes a Poisson log-likelihood and an a priori term that reinforces expected solution properties such as smoothness or local continuity. The reconstruction method is validated with experimental data acquired from a section of the spinous process of a porcine vertebra collected at the 1-ID-C beamline of the Advanced Photon Source, at Argonne National Laboratory. The reconstruction results show significant improvement in the reduction of aliasing and streaking artefacts, and improved robustness to noise and undersampling compared to conventional analytical inversion approaches. The approach has the potential to reduce data acquisition times, and significantly improve beamtime efficiency.
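Schematically, the density being maximized has the generic Poisson MAP form below; A, β, and R are our placeholder notation for the projection operator, prior weight, and regularizer, not necessarily the paper's symbols:

```latex
\[
  \hat{x} \;=\; \arg\max_{x \,\ge\, 0}\;
  \underbrace{\sum_{i}\Bigl( y_i \log [A x]_i - [A x]_i \Bigr)}_{\text{Poisson log-likelihood}}
  \;-\; \beta\, R(x),
\]
```

where R(x) encodes the smoothness or local-continuity expectations mentioned in the abstract.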
Espinoza, Gabriela Mabel; Prost, Angela Michelle
2016-05-01
Reconstruction of the upper eyelid is complicated because the eyelid must retain mobility, flexibility, function, and a suitable mucosal surface over the delicate cornea. Defects of the upper eyelid may be due to congenital defects or traumatic injury or follow oncologic resection. This article focuses on reconstruction due to loss of tissue. Multiple surgeries may be needed to reach the desired results, addressing loss of tissue and then loss of function. Each defect is unique and the laxity and availability of surrounding tissue vary. Knowing the most common techniques for repair assists surgeons in the multifaceted planning that takes place.
Chabanat, E; D'Hondt, J; Vanlaer, P; Prokofiev, K; Speer, T; Frühwirth, R; Waltenberger, W
2005-01-01
Because of the high track multiplicity in the final states expected in proton collisions at the LHC experiments, novel vertex reconstruction algorithms are required. The vertex reconstruction problem can be decomposed into a pattern recognition problem ("vertex finding") and an estimation problem ("vertex fitting"). Starting from least-squares methods, ways to render the classical algorithms more robust are discussed and the statistical properties of the novel methods are shown. A whole set of different approaches for the vertex finding problem is presented and compared in relevant physics channels.
Reconstruction of inflation models
Myrzakulov, Ratbay; Sebastiani, Lorenzo [Eurasian National University, Department of General and Theoretical Physics and Eurasian Center for Theoretical Physics, Astana (Kazakhstan); Zerbini, Sergio [Universita di Trento, Dipartimento di Fisica, Trento (Italy); TIFPA, Istituto Nazionale di Fisica Nucleare, Trento (Italy)
2015-05-15
In this paper, we reconstruct viable inflationary models by starting from the spectral index and tensor-to-scalar ratio from Planck observations. We analyze three different kinds of models: scalar field theories, fluid cosmology, and f(R)-modified gravity. We recover the well-known R² inflation in Jordan-frame and Einstein-frame representation, the massive scalar inflaton models and two models of inhomogeneous fluid. A model of R² correction to Einstein's gravity plus a 'cosmological constant' with an exact solution for early-time acceleration is reconstructed. (orig.)
RECONSTRUCTIVE SURGERY IN ORAL CANCER
Brahman Pally Balkishan
2016-10-01
Full Text Available BACKGROUND To study the technical details involving commonly performed flap surgeries - Narayanan flap (forehead and scalp flap), pectoralis major myocutaneous (PMMC) flap and deltopectoral flap. MATERIALS AND METHODS This is a prospective study with a sample size of 30 over a period of 2 years. All the patients who presented to OGH with symptoms and signs of carcinoma cheek were diagnosed and confirmed by edge biopsy, FNAC of nodes in the neck, x-ray mandible, orthopantomogram and liver function tests. RESULTS Out of the 30 patients, the malignant lesion arose from the alveolar ridge in 2 (6.7%), buccal mucosa in 16 (53.3%), floor of the mouth in 2 (6.7%), hard palate in 2 (6.7%), retromolar trigone in 1 (3.3%), lip in 3 (10%) and tongue in 4 (13.3%) patients. The highest numbers of patients were in the age groups of 61-70 (40%) and 51-60 (26.7%). Median age was 64.3 years. Of the 30 cases, 18 (60%) were male and 12 (40%) were female. About 66.7% of the patients indulged in tobacco chewing, 13.3% in tobacco chewing and smoking, and about 20% in tobacco chewing along with smoking and alcohol consumption. Of the 30 cases, 20 (66.7%) presented with ulcer, 8 (26.7%) with ulcer and swelling, while 2 (6.6%) presented with trismus. Stage IV cases (40%) were the most numerous. Histopathology confirmed well-differentiated SCC in 80%, and no specimen in this sample was poorly differentiated. All patients underwent wide local excision. Of the 30 cases, 6 patients were stage I and did not undergo neck dissection. The maximum number of patients underwent hemimandibulectomy (53.3%); 20% of the patients did not undergo mandibulectomy. The maximum number of reconstructions was done with pectoralis major myocutaneous flap + deltopectoral flap (PMMC+DP) in 18 (60%) cases. Of the 24 cases in our study who underwent reconstruction with the flaps described above, marginal necrosis occurred in 2 (8.3%) patients with PMMC+DP flaps while
Moore, S K; Hunter, W C J; Furenlid, L.R.; Barrett, H. H.
2007-01-01
We present a simple 3D event position-estimation method using raw list-mode acquisition and maximum-likelihood estimation in a modular gamma camera with a thick (25 mm) monolithic scintillation crystal. This method involves measuring 2D calibration scans with a well-collimated 511 keV source and fitting each point to a simple depth-dependent light distribution model. Preliminary results show that angled collimated beams appear properly reconstructed.
A multiscale/multiframe approach to 3D PET data reconstruction
Mendes, Luis; Ferreira, Nuno [Coimbra Univ. (Portugal). Inst. de Biofisica/Biomatematica; ICNAS - Instituto de Ciencias Nucleares Aplicadas a Saude, Coimbra (Portugal); Comtat, Claude [CEA/DSV/12BM, Orsay (France). Service Hospitalier Frederic Joliot
2011-07-01
A multiscale/multiframe 3D reconstruction scheme for positron emission tomography is presented. Usually the dimensions of the reconstructed volume or the projection space binning do not change during the image reconstruction process. In this paper we introduce the concept of a time frame into the multiscale reconstruction proposed by Raheja et al. This approach can be used for the generation of images reconstructed in near real time using a suitable scale, taking full advantage of list-mode reconstruction techniques. When compared with the maximum likelihood - expectation maximization algorithm (single-scale ML-EM), the multiscale/multiframe approach proposed in this work improves the convergence speed, in particular in cold regions, while also performing a fast reconstruction. The generation of different image sequences at different spatial scales and times may be useful to optimize acquisition clinical protocols on the fly. (orig.)
Muon track reconstruction and data selection techniques in AMANDA
Ahrens, J.; Bai, X.; Bay, R.; Barwick, S.W.; Becka, T.; Becker, J.K.; Becker, K.-H.; Bernardini, E.; Bertrand, D.; Biron, A.; Boersma, D.J.; Boeser, S.; Botner, O.; Bouchta, A.; Bouhali, O.; Burgess, T.; Carius, S.; Castermans, T.; Chirkin, D.; Collin, B.; Conrad, J.; Cooley, J.; Cowen, D.F.; Davour, A.; De Clercq, C.; DeYoung, T.; Desiati, P.; Dewulf, J.-P.; Ekstroem, P.; Feser, T.; Gaug, M.; Gaisser, T.K.; Ganugapati, R.; Geenen, H.; Gerhardt, L.; Gross, A.; Goldschmidt, A.; Hallgren, A.; Halzen, F.; Hanson, K.; Hardtke, R.; Harenberg, T.; Hauschildt, T.; Helbing, K.; Hellwig, M.; Herquet, P.; Hill, G.C.; Hubert, D.; Hughey, B.; Hulth, P.O.; Hultqvist, K.; Hundertmark, S.; Jacobsen, J.; Karle, A.; Kestel, M.; Koepke, L.; Kowalski, M.; Kuehn, K.; Lamoureux, J.I.; Leich, H.; Leuthold, M.; Lindahl, P.; Liubarsky, I.; Madsen, J.; Marciniewski, P.; Matis, H.S.; McParland, C.P.; Messarius, T.; Minaeva, Y.; Miocinovic, P.; Mock, P.C.; Morse, R.; Muenich, K.S.; Nam, J.; Nahnhauer, R.; Neunhoeffer, T.; Niessen, P.; Nygren, D.R.; Oegelman, H.; Olbrechts, Ph.; Perez de los Heros, C.; Pohl, A.C.; Porrata, R.; Price, P.B.; Przybylski, G.T.; Rawlins, K.; Resconi, E.; Rhode, W.; Ribordy, M.; Richter, S.; Rodriguez Martino, J.; Ross, D.; Sander, H.-G.; Schinarakis, K.; Schlenstedt, S.; Schmidt, T.; Schneider, D.; Schwarz, R.; Silvestri, A.; Solarz, M.; Spiczak, G.M.; Spiering, C.; Stamatikos, M.; Steele, D.; Steffen, P.; Stokstad, R.G.; Sulanke, K.-H.; Streicher, O.; Taboada, I.; Thollander, L.; Tilav, S.; Wagner, W.; Walck, C.; Wang, Y.-R.; Wiebusch, C.H. E-mail: wiebusch@physik.uni-wuppertal.de; Wiedemann, C.; Wischnewski, R.; Wissing, H.; Woschnagg, K.; Yodh, G
2004-05-21
The Antarctic Muon And Neutrino Detector Array (AMANDA) is a high-energy neutrino telescope operating at the geographic South Pole. It is a lattice of photo-multiplier tubes buried deep in the polar ice between 1500 and 2000 m. The primary goal of this detector is to discover astrophysical sources of high-energy neutrinos. A high-energy muon neutrino coming through the earth from the Northern Hemisphere can be identified by the secondary muon moving upward through the detector. The muon tracks are reconstructed with a maximum likelihood method. It models the arrival times and amplitudes of Cherenkov photons registered by the photo-multipliers. This paper describes the different methods of reconstruction, which have been successfully implemented within AMANDA. Strategies for optimizing the reconstruction performance and rejecting background are presented. For a typical analysis procedure the directions of tracks are reconstructed with about 2 deg. accuracy.
Image Reconstruction Using a Genetic Algorithm for Electrical Capacitance Tomography
MOU Changhua; PENG Lihui; YAO Danya; XIAO Deyun
2005-01-01
Electrical capacitance tomography (ECT) has been used for more than a decade for imaging dielectric processes. However, because of its ill-posedness and non-linearity, ECT image reconstruction has always been a challenge. A new genetic algorithm (GA) developed for ECT image reconstruction uses initial results from linear back-projection, which is widely used for ECT image reconstruction, to optimize the threshold and the maximum and minimum gray values for the image. The procedure avoids optimizing the gray values pixel by pixel and significantly reduces the search space dimension. Both simulations and static experimental results show that the method is efficient and capable of reconstructing high-quality images. Evaluation criteria show that the GA-based method has smaller image error and greater correlation coefficients. In addition, the GA-based method converges quickly with a small number of iterations.
The maximum rotation of a galactic disc
Bottema, R
1997-01-01
The observed stellar velocity dispersions of galactic discs show that the maximum rotation of a disc is on average 63% of the observed maximum rotation. This criterion cannot, however, be applied to small or low surface brightness (LSB) galaxies because such systems show, in general, a continuously rising rotation curve until the outermost measured radial position. That is why a general relation has been derived, giving the maximum rotation for a disc depending on the luminosity, surface brightness, and colour of the disc. The physical basis of this relation is an adopted fixed mass-to-light ratio as a function of colour. That functionality is consistent with results from population synthesis models, and its absolute value is determined from the observed stellar velocity dispersions. The derived maximum disc rotation is compared with a number of observed maximum rotations, clearly demonstrating the need for appreciable amounts of dark matter in the disc region, and even more so for LSB galaxies. Matters h...
Maximum permissible voltage of YBCO coated conductors
Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z.; Hong, Z.; Wang, D.; Zhou, H.; Shen, X.; Shen, C.
2014-06-01
Superconducting fault current limiters (SFCL) can reduce short-circuit currents in an electrical power system. One of the most important things in developing an SFCL is to find the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer from critical current (Ic) degradation or burnout. In this research, the duration of the quenching process is varied and the voltage is raised until Ic degradation or burnout happens. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, the 12 mm AMSC CC and the 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm, respectively. Based on the results for these samples, the whole length of the CCs used in the design of an SFCL can be determined.
Computing Rooted and Unrooted Maximum Consistent Supertrees
van Iersel, Leo
2009-01-01
A chief problem in phylogenetics and database theory is the computation of a maximum consistent tree from a set of rooted or unrooted trees. A standard input is triplets, rooted binary trees on three leaves, or quartets, unrooted binary trees on four leaves. We give exact algorithms constructing rooted and unrooted maximum consistent supertrees in time O(2^n n^5 m^2 log(m)) for a set of m triplets (quartets), each one distinctly leaf-labeled by some subset of n labels. The algorithms extend to weighted triplets (quartets). We further present fast exact algorithms for constructing rooted and unrooted maximum consistent trees in polynomial space. Finally, for a set T of m rooted or unrooted trees with maximum degree D and distinctly leaf-labeled by some subset of a set L of n labels, we compute, in O(2^{mD} n^m m^5 n^6 log(m)) time, a tree distinctly leaf-labeled by a maximum-size subset X of L such that all trees in T, when restricted to X, are consistent with it.
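The consistency notion at work here is concrete for rooted triplets: a tree displays the triplet ab|c exactly when the lowest common ancestor of a and b lies strictly below that of a and c. A small sketch, with a tree encoding of our own choosing:

```python
def lca(parent, u, v):
    """Lowest common ancestor in a rooted tree given parent pointers."""
    anc = set()
    while u is not None:
        anc.add(u)
        u = parent.get(u)
    while v not in anc:
        v = parent[v]
    return v

def consistent_with_triplet(parent, depth, a, b, c):
    """True iff the rooted tree displays the triplet ab|c,
    i.e. lca(a, b) lies strictly below lca(a, c) (= lca(b, c))."""
    return depth[lca(parent, a, b)] > depth[lca(parent, a, c)]

# Example: the caterpillar ((a,b),c) displays ab|c
parent = {"a": "x", "b": "x", "x": "r", "c": "r", "r": None}
depth = {"r": 0, "x": 1, "c": 1, "a": 2, "b": 2}
print(consistent_with_triplet(parent, depth, "a", "b", "c"))  # True
```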
Maximum magnitude earthquakes induced by fluid injection
McGarr, Arthur F.
2014-01-01
Analysis of numerous case histories of earthquake sequences induced by fluid injection at depth reveals that the maximum magnitude appears to be limited according to the total volume of fluid injected. Similarly, the maximum seismic moment seems to have an upper bound proportional to the total volume of injected fluid. Activities involving fluid injection include (1) hydraulic fracturing of shale formations or coal seams to extract gas and oil, (2) disposal of wastewater from these gas and oil activities by injection into deep aquifers, and (3) the development of enhanced geothermal systems by injecting water into hot, low-permeability rock. Of these three operations, wastewater disposal is observed to be associated with the largest earthquakes, with maximum magnitudes sometimes exceeding 5. To estimate the maximum earthquake that could be induced by a given fluid injection project, the rock mass is assumed to be fully saturated, brittle, to respond to injection with a sequence of earthquakes localized to the region weakened by the pore pressure increase of the injection operation and to have a Gutenberg-Richter magnitude distribution with a b value of 1. If these assumptions correctly describe the circumstances of the largest earthquake, then the maximum seismic moment is limited to the volume of injected liquid times the modulus of rigidity. Observations from the available case histories of earthquakes induced by fluid injection are consistent with this bound on seismic moment. In view of the uncertainties in this analysis, however, this should not be regarded as an absolute physical limit.
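The stated bound, seismic moment at most the injected volume times the modulus of rigidity, converts directly into a maximum moment magnitude. A sketch using the conventional Hanks-Kanamori relation; the rigidity value and example volume are illustrative, and the conversion constants are the standard ones rather than necessarily those of the paper:

```python
import math

def max_induced_magnitude(injected_volume_m3, rigidity_pa=3.0e10):
    """Upper bound sketched in the abstract: M0 <= rigidity * injected volume.
    Conversion to moment magnitude via the standard Hanks-Kanamori relation
    Mw = (2/3) * (log10(M0 [N*m]) - 9.1)."""
    m0_max = rigidity_pa * injected_volume_m3  # seismic moment bound, N*m
    return (2.0 / 3.0) * (math.log10(m0_max) - 9.1)

# e.g. 1e7 m^3 of injected wastewater (illustrative volume):
print(round(max_induced_magnitude(1.0e7), 2))
```

With these numbers the bound is about magnitude 5.6, consistent with the observation that wastewater disposal has induced events exceeding magnitude 5.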
Maximum magnitude earthquakes induced by fluid injection
McGarr, A.
2014-02-01
Analysis of numerous case histories of earthquake sequences induced by fluid injection at depth reveals that the maximum magnitude appears to be limited according to the total volume of fluid injected. Similarly, the maximum seismic moment seems to have an upper bound proportional to the total volume of injected fluid. Activities involving fluid injection include (1) hydraulic fracturing of shale formations or coal seams to extract gas and oil, (2) disposal of wastewater from these gas and oil activities by injection into deep aquifers, and (3) the development of enhanced geothermal systems by injecting water into hot, low-permeability rock. Of these three operations, wastewater disposal is observed to be associated with the largest earthquakes, with maximum magnitudes sometimes exceeding 5. To estimate the maximum earthquake that could be induced by a given fluid injection project, the rock mass is assumed to be fully saturated, brittle, to respond to injection with a sequence of earthquakes localized to the region weakened by the pore pressure increase of the injection operation and to have a Gutenberg-Richter magnitude distribution with a b value of 1. If these assumptions correctly describe the circumstances of the largest earthquake, then the maximum seismic moment is limited to the volume of injected liquid times the modulus of rigidity. Observations from the available case histories of earthquakes induced by fluid injection are consistent with this bound on seismic moment. In view of the uncertainties in this analysis, however, this should not be regarded as an absolute physical limit.
Urogenital Reconstructive Surgery
Jakobsen, Lotte Kaasgaard
Lotte Kaasgaard Jakobsen (Dept. of Urology, Aarhus University Hospital), Henning Olsen (Dept. of Urology, Aarhus University Hospital), Gitte Hvistendahl (Dept. of Urology, Aarhus University Hospital), Karl-Erik Andersson (Dept. of Gynecology and Obstetrics, Aarhus University Hospital). Background: Congenital obstruction...
YIN PUMIN
2010-01-01
The first phase of earthquake relief, in which rescuing lives was the priority, finished 12 days after a 7.1-magnitude earthquake struck the Tibetan Autonomous Prefecture of Yushu in northwest China's Qinghai Province on April 14, and reconstruction of the area is now ready to begin.
Pangloss: Reconstructing lensing mass
Collett, Thomas E.; Marshall, Philip J.; Mason, Charlotte
2015-11-01
Pangloss reconstructs all the mass within a light cone through the Universe. Understanding complex mass distributions like this is important for accurate time delay lens cosmography, and also for accurate lens magnification estimation. It aspires to use all available data in an attempt to make the best of all mass maps.
Anđelkov Katarina
2011-01-01
Full Text Available Introduction. Improved psychophysical condition after breast reconstruction in women has been well documented. Objective. To determine the optimal technique with minimal morbidity, the authors examined their results and complications based on reconstruction timing (immediate and delayed reconstruction) and three reconstruction methods: TRAM flap, latissimus dorsi flap and reconstruction with tissue expanders and implants. Methods. Reconstruction was performed in 60 women of mean age 51.1 years. We analyzed risk factors: age, body mass index (BMI), smoking history and radiation therapy in correlation with timing and method of reconstruction. Complications of all three methods of reconstruction were recorded over a 1.5-2-year follow-up after the reconstruction. All data were statistically analyzed. Results. Only radiation had a significant influence on the occurrence of complications both before and after the reconstruction, while age, smoking and BMI had no considerable influence on the development of complications. There was no statistically significant correlation between the incidence of complications and the timing and method of reconstruction. Conclusion. Any of the aforementioned breast reconstruction techniques can yield good results and a low rate of re-operations. To choose the best method, the patient needs to be as well informed as possible about the options, including the risks and benefits of each method.
Maximum Multiflow in Wireless Network Coding
Zhou, Jin-Yi; Jiang, Yong; Zheng, Hai-Tao
2012-01-01
In a multihop wireless network, wireless interference is crucial to the maximum multiflow (MMF) problem, which studies the maximum throughput between multiple pairs of sources and sinks. In this paper, we observe that network coding could help to decrease the impacts of wireless interference, and propose a framework to study the MMF problem for multihop wireless networks with network coding. Firstly, a network model is set up to describe the new conflict relations modified by network coding. Then, we formulate a linear programming problem to compute the maximum throughput and show its superiority over one in networks without coding. Finally, the MMF problem in wireless network coding is shown to be NP-hard and a polynomial approximation algorithm is proposed.
Maximum-likelihood estimation of haplotype frequencies in nuclear families.
Becker, Tim; Knapp, Michael
2004-07-01
The importance of haplotype analysis in the context of association fine mapping of disease genes has grown steadily over the last years. Since experimental methods to determine haplotypes on a large scale are not available, phase has to be inferred statistically. For individual genotype data, several reconstruction techniques and many implementations of the expectation-maximization (EM) algorithm for haplotype frequency estimation exist. Recent research work has shown that incorporating available genotype information of related individuals largely increases the precision of haplotype frequency estimates. We, therefore, implemented a highly flexible program written in C, called FAMHAP, which calculates maximum likelihood estimates (MLEs) of haplotype frequencies from general nuclear families with an arbitrary number of children via the EM-algorithm for up to 20 SNPs. For more loci, we have implemented a locus-iterative mode of the EM-algorithm, which gives reliable approximations of the MLEs for up to 63 SNP loci, or less when multi-allelic markers are incorporated into the analysis. Missing genotypes can be handled as well. The program is able to distinguish cases (haplotypes transmitted to the first affected child of a family) from pseudo-controls (non-transmitted haplotypes with respect to the child). We tested the performance of FAMHAP and the accuracy of the obtained haplotype frequencies on a variety of simulated data sets. The implementation proved to work well when many markers were considered and no significant differences between the estimates obtained with the usual EM-algorithm and those obtained in its locus-iterative mode were observed. We conclude from the simulations that the accuracy of haplotype frequency estimation and reconstruction in nuclear families is very reliable in general and robust against missing genotypes.
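The core EM iteration for haplotype frequencies is compact once family information is set aside. A minimal sketch for unrelated individuals and biallelic SNPs; FAMHAP's family-based likelihood, case/pseudo-control bookkeeping, and locus-iterative mode are deliberately omitted:

```python
from collections import defaultdict
from itertools import product

def em_haplotype_freqs(genotypes, n_iter=100):
    """EM estimate of haplotype frequencies from unphased genotypes.
    genotypes: list of tuples, one per individual; each locus is coded
    0/1/2 = number of copies of the minor allele (biallelic SNPs)."""
    def expansions(g):
        # all ordered haplotype pairs compatible with genotype g
        opts = [[(0, 0)] if a == 0 else [(1, 1)] if a == 2 else [(0, 1), (1, 0)]
                for a in g]
        return [tuple(zip(*combo)) for combo in product(*opts)]

    expanded = [expansions(g) for g in genotypes]

    # start from counts of every haplotype appearing in some expansion
    freqs = defaultdict(float)
    for exp in expanded:
        for h1, h2 in exp:
            freqs[h1] += 1.0
            freqs[h2] += 1.0
    total = sum(freqs.values())
    freqs = {h: c / total for h, c in freqs.items()}

    for _ in range(n_iter):
        counts = defaultdict(float)
        for exp in expanded:                                   # E-step
            weights = [freqs[h1] * freqs[h2] for h1, h2 in exp]
            z = sum(weights) or 1.0
            for (h1, h2), w in zip(exp, weights):
                counts[h1] += w / z
                counts[h2] += w / z
        total = sum(counts.values())                           # M-step
        freqs = {h: c / total for h, c in counts.items()}
    return freqs

# two SNPs, three individuals (1 = heterozygous, 2 = homozygous minor)
print(em_haplotype_freqs([(1, 1), (2, 0), (1, 0)]))
```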
The Wiener maximum quadratic assignment problem
Cela, Eranda; Woeginger, Gerhard J
2011-01-01
We investigate a special case of the maximum quadratic assignment problem where one matrix is a product matrix and the other matrix is the distance matrix of a one-dimensional point set. We show that this special case, which we call the Wiener maximum quadratic assignment problem, is NP-hard in the ordinary sense and solvable in pseudo-polynomial time. Our approach also yields a polynomial time solution for the following problem from chemical graph theory: Find a tree that maximizes the Wiener index among all trees with a prescribed degree sequence. This settles an open problem from the literature.
Maximum confidence measurements via probabilistic quantum cloning
Zhang Wen-Hai; Yu Long-Bao; Cao Zhuo-Liang; Ye Liu
2013-01-01
Probabilistic quantum cloning (PQC) cannot copy a set of linearly dependent quantum states. In this paper, we show that if incorrect copies are allowed to be produced, linearly dependent quantum states may also be cloned by the PQC. By exploiting this kind of PQC to clone a special set of three linearly dependent quantum states, we derive the upper bound of the maximum confidence measure of a set. An explicit transformation of the maximum confidence measure is presented.
Maximum floodflows in the conterminous United States
Crippen, John R.; Bue, Conrad D.
1977-01-01
Peak floodflows from thousands of observation sites within the conterminous United States were studied to provide a guide for estimating potential maximum floodflows. Data were selected from 883 sites with drainage areas of less than 10,000 square miles (25,900 square kilometers) and were grouped into regional sets. Outstanding floods for each region were plotted on graphs, and envelope curves were computed that offer reasonable limits for estimates of maximum floods. The curves indicate that floods may occur that are two to three times greater than those known for most streams.
Revealing the Maximum Strength in Nanotwinned Copper
Lu, L.; Chen, X.; Huang, Xiaoxu
2009-01-01
The strength of polycrystalline materials increases with decreasing grain size. Below a critical size, smaller grains might lead to softening, as suggested by atomistic simulations. The strongest size should arise at a transition in deformation mechanism from lattice dislocation activities to grain boundary–related processes. We investigated the maximum strength of nanotwinned copper samples with different twin thicknesses. We found that the strength increases with decreasing twin thickness, reaching a maximum at 15 nanometers, followed by a softening at smaller values that is accompanied by enhanced...
The Maximum Resource Bin Packing Problem
Boyar, J.; Epstein, L.; Favrholdt, L.M.
2006-01-01
Usually, for bin packing problems, we try to minimize the number of bins used, or, in the case of the dual bin packing problem, to maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used ... algorithms, First-Fit-Increasing and First-Fit-Decreasing, for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find...
Maximum entropy analysis of EGRET data
Pohl, M.; Strong, A.W.
1997-01-01
EGRET data are usually analysed on the basis of the maximum-likelihood method \cite{ma96} in a search for point sources in excess of a model for the background radiation (e.g. \cite{hu97}). This method depends strongly on the quality of the background model, and thus may have high systematic uncertainties in regions of strong and uncertain background like the Galactic Center region. Here we show images of such regions obtained by the quantified maximum-entropy method. We also discuss a possible further use of MEM in the analysis of problematic regions of the sky.
Maximum phytoplankton concentrations in the sea
Jackson, G.A.; Kiørboe, Thomas
2008-01-01
A simplification of plankton dynamics using coagulation theory provides predictions of the maximum algal concentration sustainable in aquatic systems. These predictions have previously been tested successfully against results from iron fertilization experiments. We extend the test to data collected in the North Atlantic as part of the Bermuda Atlantic Time Series program as well as data collected off Southern California as part of the Southern California Bight Study program. The observed maximum particulate organic carbon and volumetric particle concentrations are consistent with the predictions...
Preparing for Breast Reconstruction Surgery
Jacoby; GORDON
2008-01-01
Ecological systems in the headwaters of the Yellow River, characterized by harsh natural environmental conditions, are very vulnerable to climatic change. In recent decades, this area has greatly attracted the public's attention for its steadily deteriorating environmental conditions. Based on tree-ring samples from the Xiqing Mountain and A'nyêmagên Mountains at the headwaters of the Yellow River in the Northeastern Tibetan Plateau, we reconstructed the minimum temperatures in the winter half-year over the last 425 years and the maximum temperatures in the summer half-year over the past 700 years in this region. The minimum temperature in the winter half-year followed a relatively stable trend during 1578-1940, followed by abrupt warming since 1941. However, there is no significant warming trend in the maximum temperature in the summer half-year over the 20th century. Asymmetric variation patterns between the minimum and maximum temperatures were thus observed over the past 425 years: the two series show similar variation patterns, but the minimum temperatures vary about 25 years earlier than the maximum temperatures. If this relationship between the minimum and maximum temperatures continues over the next 30 years, the maximum temperature in this region will increase significantly.
Marchant, R.; Behling, H.; Berrío, J.C.; Cleef, A.M.; Duivenvoorden, J.; Hooghiemstra, H.; Kuhry, P.; Melief, B.; Schreve-Brinkman, E.; Geel, van B.; Hammen, van der T.; Reenen, van G.
2002-01-01
Colombian biomes are reconstructed at 45 sites from the modern period extending to the Last Glacial Maximum (LGM). The basis for our reconstruction is pollen data assigned to plant functional types and biomes at six 3000-yr intervals. A reconstruction of modern biomes is used to check the treatment
Analysis of Photovoltaic Maximum Power Point Trackers
Veerachary, Mummadi
The photovoltaic generator exhibits a non-linear i-v characteristic and its maximum power point (MPP) varies with solar insolation. An intermediate switch-mode dc-dc converter is required to extract maximum power from the photovoltaic array. In this paper buck, boost and buck-boost topologies are considered and a detailed mathematical analysis, both for continuous and discontinuous inductor current operation, is given for MPP operation. The conditions on the connected load values and duty ratio are derived for achieving satisfactory maximum power point operation. Further, it is shown that certain load values, falling outside the optimal range, will drive the operating point away from the true maximum power point. A detailed comparison of the various topologies for MPPT is given, and selection of the converter topology for a given loading is discussed. A detailed discussion of circuit-oriented model development is given, and the MPPT effectiveness of the various converter systems is verified through simulations. The proposed theory and analysis are validated through experimental investigations.
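To make the MPP concrete, here is a minimal numerical sketch (my own illustration with hypothetical panel parameters, not the paper's converter analysis): sweep a single-diode i-v curve and locate the power maximum that the converter's duty ratio must track.

    import numpy as np

    I_PH = 5.0    # light-generated current [A] (hypothetical)
    I_0  = 5e-9   # diode saturation current [A] (hypothetical)
    A_VT = 1.2    # ideality factor x thermal voltage x series cells [V] (hypothetical)

    v = np.linspace(0.0, 25.0, 5000)                            # terminal voltage sweep
    i = np.maximum(I_PH - I_0 * (np.exp(v / A_VT) - 1.0), 0.0)  # single-diode current
    p = v * i
    k = int(np.argmax(p))
    print(f"MPP: V = {v[k]:.2f} V, I = {i[k]:.2f} A, P = {p[k]:.1f} W")

A buck, boost or buck-boost stage then sets its duty ratio so that the load resistance reflected to the panel equals v[k]/i[k]; loads outside a topology's reachable range cannot hold this operating point, which is the failure mode the abstract describes.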
On maximum cycle packings in polyhedral graphs
Peter Recht
2014-04-01
Full Text Available This paper addresses upper and lower bounds for the cardinality of a maximum vertex-/edge-disjoint cycle packing in a polyhedral graph G. Bounds on the cardinality of such packings are provided that depend on the size, the order, or the number of faces of G. Polyhedral graphs that attain these bounds are constructed.
Hard graphs for the maximum clique problem
Hoede, Cornelis
1988-01-01
The maximum clique problem is one of the NP-complete problems. There are graphs for which a reduction technique exists that transforms the problem for these graphs into one for graphs with specific properties in polynomial time. The resulting graphs do not grow exponentially in order and number...
Maximum Likelihood Estimation of Search Costs
J.L. Moraga-Gonzalez (José Luis); M.R. Wildenbeest (Matthijs)
2006-01-01
In a recent paper Hong and Shum (forthcoming) present a structural methodology to estimate search cost distributions. We extend their approach to the case of oligopoly and present a maximum likelihood estimate of the search cost distribution. We apply our method to a data set of online p...
Weak Scale From the Maximum Entropy Principle
Hamada, Yuta; Kawana, Kiyoharu
2015-01-01
The theory of multiverse and wormholes suggests that the parameters of the Standard Model are fixed in such a way that the radiation of the $S^{3}$ universe at the final stage $S_{rad}$ becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the Standard Model, we can check whether $S_{rad}$ actually becomes maximum at the observed values. In this paper, we regard $S_{rad}$ at the final stage as a function of the weak scale (the Higgs expectation value) $v_{h}$, and show that it becomes maximum around $v_{h}={\cal{O}}(300\text{ GeV})$ when the dimensionless couplings in the Standard Model, that is, the Higgs self-coupling, the gauge couplings, and the Yukawa couplings are fixed. Roughly speaking, we find that the weak scale is given by $v_{h}\sim T_{BBN}^{2}/(M_{pl}y_{e}^{5})$, where $y_{e}$ is the Yukawa coupling of the electron, $T_{BBN}$ is the temperature at which Big Bang nucleosynthesis starts, and $M_{pl}$ is the Planck mass.
Weak scale from the maximum entropy principle
Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu
2015-03-01
The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation of the $S^{3}$ universe at the final stage $S_{rad}$ becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the SM, we can check whether $S_{rad}$ actually becomes maximum at the observed values. In this paper, we regard $S_{rad}$ at the final stage as a function of the weak scale (the Higgs expectation value) $v_{h}$, and show that it becomes maximum around $v_{h}=\mathcal{O}(300\text{ GeV})$ when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings are fixed. Roughly speaking, we find that the weak scale is given by $v_{h}\sim T_{BBN}^{2}/(M_{pl}y_{e}^{5})$, where $y_{e}$ is the Yukawa coupling of the electron, $T_{BBN}$ is the temperature at which Big Bang nucleosynthesis starts, and $M_{pl}$ is the Planck mass.
Global characterization of the Holocene Thermal Maximum
Renssen, H.; Seppä, H.; Crosta, X.; Goosse, H.; Roche, D.M.V.A.P.
2012-01-01
We analyze the global variations in the timing and magnitude of the Holocene Thermal Maximum (HTM) and their dependence on various forcings in transient simulations covering the last 9000 years (9 ka), performed with a global atmosphere-ocean-vegetation model. In these experiments, we consider the i
Instance Optimality of the Adaptive Maximum Strategy
L. Diening; C. Kreuzer; R. Stevenson
2016-01-01
In this paper, we prove that the standard adaptive finite element method with a (modified) maximum marking strategy is instance optimal for the total error, being the square root of the squared energy error plus the squared oscillation. This result will be derived in the model setting of Poisson’s e
Maximum phonation time: variability and reliability.
Speyer, Renée; Bogaardt, Hans C A; Passos, Valéria Lima; Roodenburg, Nel P H D; Zumach, Anne; Heijnen, Mariëlle A M; Baijens, Laura W J; Fleskens, Stijn J H M; Brunings, Jan W
2010-05-01
The objective of the study was to determine maximum phonation time reliability as a function of the number of trials, days, and raters in dysphonic and control subjects. Two groups of adult subjects participated in this reliability study: a group of outpatients with functional or organic dysphonia versus a group of healthy control subjects matched by age and gender. Over a period of at most 6 weeks, three video recordings were made of each subject's five maximum phonation time trials. A panel of five experts was responsible for all measurements, including a repeated measurement of the subjects' first recordings. Patients showed significantly shorter maximum phonation times compared with healthy controls (on average, 6.6 seconds shorter). The averaged intraclass correlation coefficient (ICC) over all raters per trial for the first day was 0.998. The averaged reliability coefficient per rater and per trial for repeated measurements of the first day's data was 0.997, indicating high intrarater reliability. The mean reliability coefficient per day for one trial was 0.939. When using five trials, the reliability increased to 0.987. The reliability over five trials for a single day was 0.836; for 2 days, 0.911; and for 3 days, 0.935. To conclude, the maximum phonation time has proven to be a highly reliable measure in voice assessment. A single rater is sufficient to provide highly reliable measurements.
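The reported gains from pooling trials and days are consistent with the Spearman-Brown prophecy formula for averaging parallel measurements; the quick check below is my own (the abstract does not state that this formula underlies the reported coefficients):

    # Spearman-Brown prophecy: reliability of the mean of k parallel measurements
    def spearman_brown(r1, k):
        return k * r1 / (1 + (k - 1) * r1)

    print(round(spearman_brown(0.939, 5), 3))  # 0.987, matches the five-trial value
    print(round(spearman_brown(0.836, 2), 3))  # 0.911, matches the two-day value
    print(round(spearman_brown(0.836, 3), 3))  # 0.939, close to the reported 0.935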
Maximum likelihood estimation of fractionally cointegrated systems
Lasak, Katarzyna
In this paper we consider a fractionally cointegrated error correction model and investigate asymptotic properties of the maximum likelihood (ML) estimators of the matrix of the cointegration relations, the degree of fractional cointegration, the matrix of the speed of adjustment...
Maximum likelihood estimation for integrated diffusion processes
Baltazar-Larios, Fernando; Sørensen, Michael
EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works...
Maximum gain of Yagi-Uda arrays
Bojsen, J.H.; Schjær-Jacobsen, Hans; Nilsson, E.
1971-01-01
Numerical optimisation techniques have been used to find the maximum gain of some specific parasitic arrays. The gain of an array of infinitely thin, equispaced dipoles loaded with arbitrary reactances has been optimised. The results show that standard travelling-wave design methods are not optimum. Yagi–Uda arrays with equal and unequal spacing have also been optimised with experimental verification.
Flantua, S.G.A.; Hooghiemstra, H.; van Boxel, J.H.; Cabrera, M.; González-Carranza, Z.; González-Arango, C.; Stevens, W.D.; Montiel, O.M.; Raven, P.H.
2014-01-01
We provide an innovative pollen-driven connectivity framework of the dynamic altitudinal distribution of North Andean biomes since the Last Glacial Maximum (LGM). Altitudinally changing biome distributions reconstructed from a pollen record from Lake La Cocha (2780 m) are assessed in terms of their
Arctic Sea Level Reconstruction
Svendsen, Peter Limkilde
Reconstruction of historical Arctic sea level is very difficult due to the limited coverage and quality of tide gauge and altimetry data in the area. This thesis addresses many of these issues, and discusses strategies to help achieve a stable and plausible reconstruction of Arctic sea level from 1950 to today. The primary record of historical sea level, on the order of several decades to a few centuries, is tide gauges. Tide gauge records from around the world are collected in the Permanent Service for Mean Sea Level (PSMSL) database, which includes data along the Arctic coasts. A reasonable amount of data is available along the Norwegian and Russian coasts since 1950, and most published research on Arctic sea level extends cautiously from these areas. Very little tide gauge data is available elsewhere in the Arctic, and records of a length of several decades, as generally recommended for sea...
Autologous Costochondral Microtia Reconstruction.
Patel, Sapna A; Bhrany, Amit D; Murakami, Craig S; Sie, Kathleen C Y
2016-04-01
Reconstruction with autologous costochondral cartilage is one of the mainstays of surgical management of congenital microtia. We review the literature, present our current technique for microtia reconstruction with autologous costochondral graft, and discuss the evolution of our technique over the past 20 years. We aim to minimize donor site morbidity and create the most durable and natural appearing ear possible using a stacked framework to augment the antihelical fold and antitragal-tragal complex. Assessment of outcomes is challenging due to the paucity of available objective measures with which to evaluate aesthetic outcomes. Various instruments are used to assess outcomes, but none is universally accepted as the standard. The challenges we continue to face are humbling, but ongoing work on tissue engineering, application of 3D models, and use of validated questionnaires can help us get closer to achieving a maximal aesthetic outcome.
Stochastic reconstruction of sandstones
Manwart; Torquato; Hilfer
2000-07-01
A simulated annealing algorithm is employed to generate a stochastic model for a Berea sandstone and a Fontainebleau sandstone, each with a prescribed two-point probability function, lineal-path function, and "pore size" distribution function. We find that the temperature decrease of the annealing has to be rather quick to yield isotropic and percolating configurations. A comparison of simple morphological quantities indicates good agreement between the reconstructions and the original sandstones. Also, the mean survival time of a random walker in the pore space is reproduced with good accuracy. However, a more detailed investigation by means of local porosity theory shows that there may be significant differences of the geometrical connectivity between the reconstructed and the experimental samples.
Reconstructing the Tengger calendar
Ian Proudfoot
2008-12-01
Full Text Available The survival of an Indic calendar among the Tengger people of the Brama highlands in east Java opens a window on Java’s calendar history. Its hybrid form reflects accommodations between this non-Muslim Javanese group and the increasingly dominant Muslim Javanese culture. Reconstruction is challenging because of this hybridity, because of inconsistencies in practice, and because the historical evidence is sketchy and often difficult to interpret.
Sparsity-constrained PET image reconstruction with learned dictionaries
Tang, Jing; Yang, Bao; Wang, Yanhua; Ying, Leslie
2016-09-01
PET imaging plays an important role in scientific and clinical measurement of biochemical and physiological processes. Model-based PET image reconstruction, such as the iterative expectation-maximization algorithm seeking the maximum likelihood solution, leads to increased noise. The maximum a posteriori (MAP) estimate removes divergence at higher iterations. However, a conventional smoothing prior or a total-variation (TV) prior in a MAP reconstruction algorithm causes over-smoothing or blocky artifacts in the reconstructed images. We propose to use dictionary learning (DL) based sparse signal representation in the formation of the prior for MAP PET image reconstruction. The dictionary to sparsify the PET images in the reconstruction process is learned from various training images, including the corresponding MR structural image and a self-created hollow sphere. Using simulated and patient brain PET data with corresponding MR images, we study the performance of the DL-MAP algorithm and compare it quantitatively with a conventional MAP algorithm, a TV-MAP algorithm, and a patch-based algorithm. The DL-MAP algorithm achieves improved bias and contrast (or regional mean values) at noise comparable to what the other MAP algorithms acquire. The dictionary learned from the hollow sphere leads to similar results as the dictionary learned from the corresponding MR image. Achieving robust performance in various noise-level simulation and patient studies, the DL-MAP algorithm with a general dictionary demonstrates its potential in quantitative PET imaging.
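For context, the classical MLEM iteration that these MAP variants build on fits in a few lines; the sketch below uses a random toy system matrix and is my own illustration, not the paper's DL-MAP code:

    import numpy as np

    rng = np.random.default_rng(0)
    n_pix, n_det = 64, 256
    A = rng.uniform(0.0, 1.0, (n_det, n_pix))   # toy system matrix (hypothetical)
    lam_true = rng.uniform(0.5, 2.0, n_pix)     # toy tracer distribution
    y = rng.poisson(A @ lam_true)               # Poisson-distributed counts

    lam = np.ones(n_pix)                        # uniform initial image
    sens = A.sum(axis=0)                        # per-pixel sensitivity
    for _ in range(50):
        ratio = y / np.maximum(A @ lam, 1e-12)  # measured / expected counts
        lam *= (A.T @ ratio) / sens             # multiplicative MLEM update

A MAP algorithm tempers this update with a prior term; the paper's contribution is to supply that prior from a learned dictionary instead of a smoothing or TV penalty.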
Diffusion archeology for diffusion progression history reconstruction.
Sefer, Emre; Kingsford, Carl
2016-11-01
Diffusion through graphs can be used to model many real-world processes, such as the spread of diseases, social network memes, computer viruses, or water contaminants. Often, a real-world diffusion cannot be directly observed while it is occurring - perhaps it is not noticed until some time has passed, continuous monitoring is too costly, or privacy concerns limit data access. This leads to the need to reconstruct how the present state of the diffusion came to be from partial diffusion data. Here, we tackle the problem of reconstructing a diffusion history from one or more snapshots of the diffusion state. This ability can be invaluable to learn when certain computer nodes are infected or which people are the initial disease spreaders to control future diffusions. We formulate this problem over discrete-time SEIRS-type diffusion models in terms of maximum likelihood. We design methods that are based on submodularity and a novel prize-collecting dominating-set vertex cover (PCDSVC) relaxation that can identify likely diffusion steps with some provable performance guarantees. Our methods are the first to be able to reconstruct complete diffusion histories accurately in real and simulated situations. As a special case, they can also identify the initial spreaders better than the existing methods for that problem. Our results for both meme and contaminant diffusion show that the partial diffusion data problem can be overcome with proper modeling and methods, and that hidden temporal characteristics of diffusion can be predicted from limited data.
Posteromedial Corner Reconstruction
Ferrer, Gonzalo; Leon, Agustín; Wirth, Hans; Mena, Adolfo; Tuca, María José; Espinoza, Gonzalo
2017-01-01
Objective: To report the experience, after 1-year follow-up, of 30 patients who underwent anatomical knee reconstruction of posteromedial corner (PMC) injuries using LaPrade's technique. Methods: Retrospective cohort study of 30 consecutive patients with PMC injuries operated on between November 2010 and May 2014 by the same surgical team. Inclusion criteria: patients with clinical presentation and imaging (stress radiographs and MRI) compatible with PMC injury, who maintained grade III chronic instability in spite of at least 3 months of orthopedic treatment, who were reconstructed using LaPrade's anatomical technique, and who completed at least 12 months of follow-up. Exclusion criteria: discordance between clinical and imaging studies, grade I or II medial instability, and surgery performed through a different technique. Data were collected by reviewing the electronic files and images. Functional scores (IKDC and Lysholm) were applied and registered in the preoperative evaluation, and then 6 and 12 months after surgery. Results: Thirty patients (28 men and 2 women) met the inclusion criteria. Mean age was 43 years (24-69). The vast majority (28 patients) had a high-energy mechanism of injury. Twenty patients were diagnosed in the acute setting, while 10 had a delayed diagnosis after poor results of concomitant ligament reconstructions. With the exception of 2 patients, who presented with isolated PMC injury, the majority had associated injuries as follows: 11 cases of PMC + anterior cruciate ligament (ACL) injury, 3 of PMC + posterior cruciate ligament (PCL) injury, 3 of PMC + meniscal tears, 9 of PMC + ACL + PCL injuries, and 2 of PMC + ACL + PCL + lateral collateral ligament injuries. Mean time to PMC reconstruction surgery was 5 months (range 2-32). Lysholm and IKDC scores were 18.2 (2-69) and 24.3 (9.2-52.9), respectively, in the preoperative setting, improving to 76.7 (44-94) and 70.7 (36.8-95.4) after 1-year follow-up.
Canal Wall Reconstruction Mastoidectomy
无
2007-01-01
Objective: To investigate the advantages of canal wall reconstruction (CWR) mastoidectomy, a single-stage technique for cholesteatoma removal and posterior external canal wall reconstruction, over the open and closed procedures in terms of cholesteatoma recurrence. Methods: Between June 2002 and December 2005, 38 patients (40 ears) with cholesteatoma were admitted to Sun Yat-Sen Memorial Hospital and received surgical treatment. Of these patients, 25 were male, with ages ranging between 11 and 60 years (mean = 31.6 years), and 13 were female, with ages ranging between 20 and 65 years (mean = 38.8 years). Canal wall reconstruction (CWR) mastoidectomy was performed in 31 ears and canal wall down (CWD) mastoidectomy in 9 ears. Concha cartilage was used for ear canal wall reconstruction in 22 of the 31 CWR procedures and cortical mastoid bone was used in the remaining 9 cases. Results: At 0.5 to 4 years of follow-up, all but one patient remained free of signs of cholesteatoma recurrence, i.e., no retraction pocket or cholesteatoma matrix. One patient, a smoker, needed revision surgery due to cholesteatoma recurrence 1.5 years after the initial operation. The recurrence rate was therefore 3.2% (1/31). Cholesteatoma recurrence was monitored using postoperative CT scans whenever possible. In the case that needed a revision procedure, a retraction pocket was identified by otoendoscopy in the pars flaccida area that eventually evolved into a cholesteatoma. A pocket extending to the epitympanum filled with cholesteatoma matrix was confirmed during the revision operation. A decision to perform a modified mastoidectomy was made as the patient refused to quit smoking. The mean air-bone gap in pure tone threshold was 45 dB before surgery and 25 dB after (p < 0.05). There was no difference between concha cartilage and cortical mastoid bone reconstructions regarding air-bone gap improvement, CT findings and otoendoscopic results. Conclusion: CWR mastoidectomy can be used for...
Model Selection Through Sparse Maximum Likelihood Estimation
Banerjee, Onureena; D'Aspremont, Alexandre
2007-01-01
We consider the problem of estimating the parameters of a Gaussian or binary distribution in such a way that the resulting undirected graphical model is sparse. Our approach is to solve a maximum likelihood problem with an added l_1-norm penalty term. The problem as formulated is convex but the memory requirements and complexity of existing interior point methods are prohibitive for problems with more than tens of nodes. We present two new algorithms for solving problems with at least a thousand nodes in the Gaussian case. Our first algorithm uses block coordinate descent, and can be interpreted as recursive l_1-norm penalized regression. Our second algorithm, based on Nesterov's first order method, yields a complexity estimate with a better dependence on problem size than existing interior point methods. Using a log determinant relaxation of the log partition function (Wainwright & Jordan (2006)), we show that these same algorithms can be used to solve an approximate sparse maximum likelihood problem for...
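For readers who want to experiment with the same l1-penalized Gaussian maximum likelihood problem, an off-the-shelf solver in the same coordinate-descent family is available in scikit-learn; this snippet is my own illustration with synthetic data, not the authors' code:

    import numpy as np
    from sklearn.covariance import GraphicalLasso

    rng = np.random.default_rng(0)
    p = 20                                                       # number of variables
    prec = np.eye(p) + 0.4 * (np.eye(p, k=1) + np.eye(p, k=-1))  # sparse true precision
    X = rng.multivariate_normal(np.zeros(p), np.linalg.inv(prec), size=500)

    model = GraphicalLasso(alpha=0.05).fit(X)                    # alpha = l1 penalty weight
    nnz = (np.abs(model.precision_) > 1e-4).sum()
    print(nnz, "nonzero precision entries recovered")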
Maximum-entropy description of animal movement.
Fleming, Chris H; Subaşı, Yiğit; Calabrese, Justin M
2015-03-01
We introduce a class of maximum-entropy states that naturally includes within it all of the major continuous-time stochastic processes that have been applied to animal movement, including Brownian motion, Ornstein-Uhlenbeck motion, integrated Ornstein-Uhlenbeck motion, a recently discovered hybrid of the previous models, and a new model that describes central-place foraging. We are also able to predict a further hierarchy of new models that will emerge as data quality improves to better resolve the underlying continuity of animal movement. Finally, we also show that Langevin equations must obey a fluctuation-dissipation theorem to generate processes that fall from this class of maximum-entropy distributions when the constraints are purely kinematic.
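As a concrete member of this maximum-entropy family, the Ornstein-Uhlenbeck position process can be simulated from its exact discrete-time transition; the snippet below is my own illustration with arbitrary parameter values:

    import numpy as np

    tau, sigma, dt, n = 10.0, 1.0, 1.0, 1000  # relaxation time, scale, step, length (assumed)
    rng = np.random.default_rng(1)
    a = np.exp(-dt / tau)                     # one-step autocorrelation
    sd = sigma * np.sqrt(1.0 - a**2)          # innovation scale (stationary variance sigma^2)
    x = np.empty(n)
    x[0] = 0.0
    for t in range(1, n):
        x[t] = a * x[t - 1] + sd * rng.standard_normal()

Brownian motion is recovered as tau grows large (with sigma rescaled accordingly), and integrated Ornstein-Uhlenbeck motion adds a correlated velocity state on top of this position process.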
Pareto versus lognormal: a maximum entropy test.
Bee, Marco; Riccaboni, Massimo; Schiavo, Stefano
2011-08-01
It is commonly found that distributions that seem to be lognormal over a broad range change to a power-law (Pareto) distribution for the last few percentiles. The distributions of many physical, natural, and social events (earthquake size, species abundance, income and wealth, as well as file, city, and firm sizes) display this structure. We present a test for the occurrence of power-law tails in statistical distributions based on maximum entropy. This methodology allows one to identify the true data-generating processes even in the case when it is neither lognormal nor Pareto. The maximum entropy approach is then compared with other widely used methods and applied to different levels of aggregation of complex systems. Our results provide support for the theory that distributions with lognormal body and Pareto tail can be generated as mixtures of lognormally distributed units.
Maximum Variance Hashing via Column Generation
Lei Luo
2013-01-01
item search. Recently, a number of data-dependent methods have been developed, reflecting the great potential of learning for hashing. Inspired by the classic nonlinear dimensionality reduction algorithm—maximum variance unfolding, we propose a novel unsupervised hashing method, named maximum variance hashing, in this work. The idea is to maximize the total variance of the hash codes while preserving the local structure of the training data. To solve the derived optimization problem, we propose a column generation algorithm, which directly learns the binary-valued hash functions. We then extend it using anchor graphs to reduce the computational cost. Experiments on large-scale image datasets demonstrate that the proposed method outperforms state-of-the-art hashing methods in many cases.
The Maximum Resource Bin Packing Problem
Boyar, J.; Epstein, L.; Favrholdt, L.M.
2006-01-01
algorithms, First-Fit-Increasing and First-Fit-Decreasing for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find......Usually, for bin packing problems, we try to minimize the number of bins used or in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used...... the competitive ratio of various natural algorithms. We study the general versions of the problems as well as the parameterized versions where there is an upper bound of on the item sizes, for some integer k....
Nonparametric Maximum Entropy Estimation on Information Diagrams
Martin, Elliot A; Meinke, Alexander; Děchtěrenko, Filip; Davidsen, Jörn
2016-01-01
Maximum entropy estimation is of broad interest for inferring properties of systems across many different disciplines. In this work, we significantly extend a technique we previously introduced for estimating the maximum entropy of a set of random discrete variables when conditioning on bivariate mutual informations and univariate entropies. Specifically, we show how to apply the concept to continuous random variables and vastly expand the types of information-theoretic quantities one can condition on. This allows us to establish a number of significant advantages of our approach over existing ones. Not only does our method perform favorably in the undersampled regime, where existing methods fail, but it also can be dramatically less computationally expensive as the cardinality of the variables increases. In addition, we propose a nonparametric formulation of connected informations and give an illustrative example showing how this agrees with the existing parametric formulation in cases of interest. We furthe...
Zipf's law, power laws and maximum entropy
Visser, Matt
2013-04-01
Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines—from astronomy to demographics to software structure to economics to linguistics to zoology, and even warfare. A recent model of random group formation (RGF) attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present paper I argue that the specific cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified.
Zipf's law, power laws, and maximum entropy
Visser, Matt
2012-01-01
Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines - from astronomy to demographics to economics to linguistics to zoology, and even warfare. A recent model of random group formation [RGF] attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present article I argue that the cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified.
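The calculation the abstract alludes to is short enough to sketch here (the standard derivation, written in my own notation rather than quoted from the paper): maximizing the Shannon entropy subject only to normalization and a fixed mean logarithm forces a pure power law.

    \text{maximise } S=-\sum_{k\ge 1} p_k\ln p_k
    \quad\text{subject to}\quad \sum_k p_k=1,\qquad \sum_k p_k\ln k=\chi,

    \frac{\partial}{\partial p_k}\Big[S-\mu\sum_j p_j-a\sum_j p_j\ln j\Big]=0
    \quad\Longrightarrow\quad p_k\propto e^{-a\ln k}=k^{-a},
    \qquad p_k=\frac{k^{-a}}{\zeta(a)},

with the exponent a fixed implicitly by the constraint value \chi and \zeta the Riemann zeta function.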
Regions of constrained maximum likelihood parameter identifiability
Lee, C.-H.; Herget, C. J.
1975-01-01
This paper considers the parameter identification problem of general discrete-time, nonlinear, multiple-input/multiple-output dynamic systems with Gaussian-white distributed measurement errors. Knowledge of the system parameterization is assumed to be known. Regions of constrained maximum likelihood (CML) parameter identifiability are established. A computation procedure employing interval arithmetic is proposed for finding explicit regions of parameter identifiability for the case of linear systems. It is shown that if the vector of true parameters is locally CML identifiable, then with probability one, the vector of true parameters is a unique maximal point of the maximum likelihood function in the region of parameter identifiability and the CML estimation sequence will converge to the true parameters.
A Maximum Radius for Habitable Planets.
Alibert, Yann
2015-09-01
We compute the maximum radius a planet can have in order to fulfill two constraints that are likely necessary conditions for habitability: (1) surface temperature and pressure compatible with the existence of liquid water, and (2) no ice layer at the bottom of a putative global ocean, which would prevent the geologic carbon cycle from operating. We demonstrate that, above a given radius, these two constraints cannot be met: in the Super-Earth mass range (1-12 Mearth), the overall maximum radius that a planet can have varies between 1.8 and 2.3 Rearth. This maximum radius is reduced when considering planets with higher Fe/Si ratios, and when taking into account irradiation effects on the structure of the gas envelope.
Maximum Profit Configurations of Commercial Engines
Yiran Chen
2011-01-01
An investigation of commercial engines with finite capacity low- and high-price economic subsystems and a generalized commodity transfer law [n ∝ Δ (P m)] in commodity flow processes, in which effects of the price elasticities of supply and demand are introduced, is presented in this paper. Optimal cycle configurations of commercial engines for maximum profit are obtained by applying optimal control theory. In some special cases, the eventual state—market equilibrium—is solely determined by t...
A stochastic maximum principle via Malliavin calculus
Øksendal, Bernt; Zhou, Xun Yu; Meyer-Brandis, Thilo
2008-01-01
This paper considers a controlled Itô-Lévy process where the information available to the controller is possibly less than the overall information. All the system coefficients and the objective performance functional are allowed to be random, possibly non-Markovian. Malliavin calculus is employed to derive a maximum principle for the optimal control of such a system where the adjoint process is explicitly expressed.
Tissue radiation response with maximum Tsallis entropy.
Sotolongo-Grau, O; Rodríguez-Pérez, D; Antoranz, J C; Sotolongo-Costa, Oscar
2010-10-08
The expression of survival factors for radiation-damaged cells is currently based on probabilistic assumptions and experimentally fitted for each tumor, radiation type, and set of conditions. Here, we show how the simplest of these radiobiological models can be derived from the maximum entropy principle applied to the classical Boltzmann-Gibbs expression. We extend this derivation using the Tsallis entropy and a cutoff hypothesis, motivated by clinical observations. The obtained expression shows a remarkable agreement with the experimental data found in the literature.
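For orientation, the two functional forms at stake can be written schematically as follows; this is a sketch from the general maximum-entropy formalism under a mean-dose constraint, and the paper's own derivation and cutoff hypothesis should be consulted for the exact conditions:

    \text{Boltzmann--Gibbs:}\qquad f_s(D)=e^{-\alpha D},\qquad D\ge 0,

    \text{Tsallis with cutoff:}\qquad f_s(D)=\Big(1-\frac{D}{D_0}\Big)^{\gamma},\qquad 0\le D\le D_0,

so the survival fraction vanishes identically beyond the cutoff dose D_0 instead of keeping an exponential tail; the Boltzmann-Gibbs form is recovered in the limit D_0 \to \infty with \gamma/D_0 \to \alpha.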
Maximum Estrada Index of Bicyclic Graphs
Wang, Long; Wang, Yi
2012-01-01
Let $G$ be a simple graph of order $n$, and let $\lambda_1(G),\lambda_2(G),...,\lambda_n(G)$ be the eigenvalues of its adjacency matrix. The Estrada index of $G$ is defined as $EE(G)=\sum_{i=1}^{n}e^{\lambda_i(G)}$. In this paper we determine the unique graph with maximum Estrada index among bicyclic graphs with fixed order.
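The definition translates directly into code; the following computes EE(G) for a small bicyclic example of my own choosing (not from the paper):

    import numpy as np

    # Adjacency matrix of a bicyclic graph on 5 vertices (6 edges, 2 independent cycles)
    A = np.array([[0, 1, 1, 0, 1],
                  [1, 0, 1, 0, 0],
                  [1, 1, 0, 1, 0],
                  [0, 0, 1, 0, 1],
                  [1, 0, 0, 1, 0]], dtype=float)
    eigs = np.linalg.eigvalsh(A)            # spectrum of the symmetric adjacency matrix
    print("EE(G) =", np.exp(eigs).sum())    # Estrada index from the definition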
Maximum privacy without coherence, zero-error
Leung, Debbie; Yu, Nengkun
2016-09-01
We study the possible difference between the quantum and the private capacities of a quantum channel in the zero-error setting. For a family of channels introduced by Leung et al. [Phys. Rev. Lett. 113, 030512 (2014)], we demonstrate an extreme difference: the zero-error quantum capacity is zero, whereas the zero-error private capacity is maximum given the quantum output dimension.
Maximum entropy analysis of cosmic ray composition
Nosek, Dalibor; Vícha, Jakub; Trávníček, Petr; Nosková, Jana
2016-01-01
We focus on the primary composition of cosmic rays with the highest energies that cause extensive air showers in the Earth's atmosphere. A way of examining the two lowest order moments of the sample distribution of the depth of shower maximum is presented. The aim is to show that useful information about the composition of the primary beam can be inferred with limited knowledge we have about processes underlying these observations. In order to describe how the moments of the depth of shower maximum depend on the type of primary particles and their energies, we utilize a superposition model. Using the principle of maximum entropy, we are able to determine what trends in the primary composition are consistent with the input data, while relying on a limited amount of information from shower physics. Some capabilities and limitations of the proposed method are discussed. In order to achieve a realistic description of the primary mass composition, we pay special attention to the choice of the parameters of the sup...
A Maximum Resonant Set of Polyomino Graphs
Zhang Heping
2016-05-01
Full Text Available A polyomino graph P is a connected finite subgraph of the infinite plane grid such that each finite face is surrounded by a regular square of side length one and each edge belongs to at least one square. A dimer covering of P corresponds to a perfect matching. Different dimer coverings can interact via an alternating cycle (or square with respect to them. A set of disjoint squares of P is a resonant set if P has a perfect matching M so that each one of those squares is M-alternating. In this paper, we show that if K is a maximum resonant set of P, then P − K has a unique perfect matching. We further prove that the maximum forcing number of a polyomino graph is equal to the cardinality of a maximum resonant set. This confirms a conjecture of Xu et al. [26]. We also show that if K is a maximal alternating set of P, then P − K has a unique perfect matching.
The maximum rate of mammal evolution
Evans, Alistair R.; Jones, David; Boyer, Alison G.; Brown, James H.; Costa, Daniel P.; Ernest, S. K. Morgan; Fitzgerald, Erich M. G.; Fortelius, Mikael; Gittleman, John L.; Hamilton, Marcus J.; Harding, Larisa E.; Lintulaakso, Kari; Lyons, S. Kathleen; Okie, Jordan G.; Saarinen, Juha J.; Sibly, Richard M.; Smith, Felisa A.; Stephens, Patrick R.; Theodor, Jessica M.; Uhen, Mark D.
2012-03-01
How fast can a mammal evolve from the size of a mouse to the size of an elephant? Achieving such a large transformation calls for major biological reorganization. Thus, the speed at which this occurs has important implications for extensive faunal changes, including adaptive radiations and recovery from mass extinctions. To quantify the pace of large-scale evolution we developed a metric, clade maximum rate, which represents the maximum evolutionary rate of a trait within a clade. We applied this metric to body mass evolution in mammals over the last 70 million years, during which multiple large evolutionary transitions occurred in oceans and on continents and islands. Our computations suggest that it took a minimum of 1.6, 5.1, and 10 million generations for terrestrial mammal mass to increase 100-, 1,000-, and 5,000-fold, respectively. Values for whales were down to half these lengths (i.e., 1.1, 3, and 5 million generations), perhaps due to the reduced mechanical constraints of living in an aquatic environment. When differences in generation time are considered, we find an exponential increase in maximum mammal body mass during the 35 million years following the Cretaceous-Paleogene (K-Pg) extinction event. Our results also indicate a basic asymmetry in macroevolution: very large decreases (such as extreme insular dwarfism) can happen at more than 10 times the rate of increases. Our findings allow more rigorous comparisons of microevolutionary and macroevolutionary patterns and processes.
Minimal Length, Friedmann Equations and Maximum Density
Awad, Adel
2014-01-01
Inspired by Jacobson's thermodynamic approach [gr-qc/9504004], Cai et al. [hep-th/0501055, hep-th/0609128] have shown the emergence of Friedmann equations from the first law of thermodynamics. We extend the Akbar-Cai derivation [hep-th/0609128] of the Friedmann equations to accommodate a general entropy-area law. Studying the resulting Friedmann equations using a specific entropy-area law, motivated by the generalized uncertainty principle (GUP), reveals the existence of a maximum energy density close to the Planck density. Allowing for a general continuous pressure $p(\rho,a)$ leads to bounded curvature invariants and a general nonsingular evolution. In this case, the maximum energy density is reached in a finite time and there is no cosmological evolution beyond this point, which leaves the big bang singularity inaccessible from a spacetime perspective. The existence of a maximum energy density and a general nonsingular evolution is independent of the equation of state and the spatial curvature $k$. As an example w...
Maximum saliency bias in binocular fusion
Lu, Yuhao; Stafford, Tom; Fox, Charles
2016-07-01
Subjective experience at any instant consists of a single ("unitary"), coherent interpretation of sense data rather than a "Bayesian blur" of alternatives. However, computation of Bayes-optimal actions has no role for unitary perception, instead being required to integrate over every possible action-percept pair to maximise expected utility. So what is the role of unitary coherent percepts, and how are they computed? Recent work provided objective evidence for non-Bayes-optimal, unitary coherent, perception and action in humans; and further suggested that the percept selected is not the maximum a posteriori percept but is instead affected by utility. The present study uses a binocular fusion task first to reproduce the same effect in a new domain, and second, to test multiple hypotheses about exactly how utility may affect the percept. After accounting for high experimental noise, it finds that both Bayes optimality (maximise expected utility) and the previously proposed maximum-utility hypothesis are outperformed in fitting the data by a modified maximum-salience hypothesis, using unsigned utility magnitudes in place of signed utilities in the bias function.
Maximum-biomass prediction of homofermentative Lactobacillus.
Cui, Shumao; Zhao, Jianxin; Liu, Xiaoming; Chen, Yong Q; Zhang, Hao; Chen, Wei
2016-07-01
Fed-batch and pH-controlled cultures have been widely used for industrial production of probiotics. The aim of this study was to systematically investigate the relationship between the maximum biomass of different homofermentative Lactobacillus and lactate accumulation, and to develop a prediction equation for the maximum biomass concentration in such cultures. The accumulation of the end products and the depletion of nutrients by various strains were evaluated. In addition, the minimum inhibitory concentrations (MICs) of acid anions for various strains at pH 7.0 were examined. The lactate concentration at the point of complete inhibition was not significantly different from the MIC of lactate for all of the strains, although the inhibition mechanism of lactate and acetate on Lactobacillus rhamnosus was different from the other strains which were inhibited by the osmotic pressure caused by acid anions at pH 7.0. When the lactate concentration accumulated to the MIC, the strains stopped growing. The maximum biomass was closely related to the biomass yield per unit of lactate produced (YX/P) and the MIC (C) of lactate for different homofermentative Lactobacillus. Based on the experimental data obtained using different homofermentative Lactobacillus, a prediction equation was established as follows: Xmax - X0 = (0.59 ± 0.02)·YX/P·C.
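Applied at face value, the reported prediction equation is a one-liner; the inputs below are hypothetical, and only the coefficient 0.59 comes from the abstract:

    def max_biomass(x0, y_xp, mic_lactate, k=0.59):
        # X_max = X_0 + k * Y_X/P * C, per the reported prediction equation
        return x0 + k * y_xp * mic_lactate

    # hypothetical: inoculum 0.1 g/L, yield 0.15 g biomass per g lactate, MIC 90 g/L
    print(round(max_biomass(0.1, 0.15, 90.0), 1))  # ~8.1 g/L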
The Last Glacial Maximum experiment in PMIP4-CMIP6
Kageyama, Masa; Braconnot, Pascale; Abe-Ouchi, Ayako; Harrison, Sandy; Lambert, Fabrice; Peltier, W. Richard; Tarasov, Lev
2016-04-01
The Last Glacial Maximum (LGM), around 21,000 years ago, is a cold climate extreme. As such, it has been the focus of many modelling and climate reconstruction studies, which have brought knowledge of the mechanisms explaining this climate, in terms of climate on the continents and of the ocean state, and in terms of the relationships between climate changes over land, ice sheets and oceans. It is still a challenge for climate or Earth System models to represent the amplitude of climate changes for this period under the following forcings: ice sheets, which represent perturbations in land surface type, altitude and land/ocean distribution; atmospheric composition; and astronomical parameters. Feedbacks from vegetation and dust are also known to have played a role in setting up the LGM climate but have not been accounted for in previous PMIP experiments. In this poster, we will present the experimental set-up of the PMIP4 LGM experiment, which is presently being discussed and will be finalized by March 2016. For more information and discussion of the PMIP4-CMIP6 experimental design, please visit: https://wiki.lsce.ipsl.fr/pmip3/doku.php/pmip3:cmip6:design:index
Continuous maximum flow segmentation method for nanoparticle interaction analysis.
Marak, L; Tankyevych, O; Talbot, H
2011-10-01
In recent years, tomographic three-dimensional reconstruction approaches using electrons rather than X-rays have become popular. Such images, produced with a transmission electron microscope, make it possible to image nanometre-scale materials in three dimensions. However, they are also noisy, limited in contrast, and most often have a very poor resolution along the axis of the electron beam. The analysis of images stemming from such modalities, whether fully or semi-automated, is therefore more complicated. In particular, segmentation of objects is difficult. In this paper, we propose to use the continuous maximum flow segmentation method based on a globally optimal minimal surface model. The use of this fully automated segmentation and filtering procedure is illustrated on two different nanoparticle samples, with comparisons against other classical segmentation methods. The main objectives are the measurement of the attraction rate of polystyrene beads to silica nanoparticles (for the first sample) and the interaction of silica nanoparticles with large unilamellar liposomes (for the second sample). We also illustrate how precise measurements such as contact angles can be performed.
Ground movement at Somma-Vesuvius from Last Glacial Maximum
Marturano, Aldo; Aiello, Giuseppe; Barra, Diana; Fedele, Lorenzo; Morra, Vincenzo
2012-01-01
Detailed micropalaeontological and petrochemical analyses of rock samples from two boreholes drilled at the archaeological excavations of Herculaneum, ~ 7 km west of the Somma-Vesuvius crater, allowed reconstruction of the Late Quaternary palaeoenvironmental evolution of the site. The data provide clear evidence for ground uplift movements involving the studied area. The Holocene sedimentary sequence on which the archaeological remains of Herculaneum rest has risen several meters at an average rate of ~ 4 mm/yr. The uplift has involved the western apron of the volcano and the Sebeto-Volla Plain, a populous area including the eastern suburbs of Naples. This is consistent with earlier evidence for similar uplift in the areas of Pompeii and the Sarno valley (SE of the volcano) and the eastern apron of Somma-Vesuvius. An axisymmetric deep source of strain is considered responsible for the long-term uplift affecting the whole Somma-Vesuvius edifice. The deformation pattern can be modeled as a single pressure source, sited in the lower crust and surrounded by a shell of Maxwell viscoelastic medium, which experienced a pressure pulse that began at the Last Glacial Maximum.
Tomographic reconstruction of time-bin-entangled qudits
Nowierski, Samantha J.; Oza, Neal N.; Kumar, Prem; Kanter, Gregory S.
2016-10-01
We describe an experimental implementation to generate and measure high-dimensional time-bin-entangled qudits. Two-photon time-bin entanglement is generated via spontaneous four-wave mixing in single-mode fiber. Unbalanced Mach-Zehnder interferometers transform selected time bins to polarization entanglement, allowing standard polarization-projective measurements to be used for complete quantum state tomographic reconstruction. Here we generate maximally entangled qubits (d = 2), qutrits (d = 3), and ququarts (d = 4), as well as other phase-modulated nonmaximally entangled qubits and qutrits. We reconstruct and verify all generated states using maximum-likelihood estimation tomography.
Limited angle C-arm tomosynthesis reconstruction algorithms
Malalla, Nuhad A. Y.; Xu, Shiyu; Chen, Ying
2015-03-01
In this paper, C-arm tomosynthesis with a digital detector was investigated as a novel three-dimensional (3D) imaging technique. Digital tomosynthesis is an imaging technique that provides 3D information about an object by reconstructing slices passing through it, based on a series of angular projection views. C-arm tomosynthesis provides two-dimensional (2D) X-ray projection images over a rotation of both X-ray source and detector (±20° angular range). Four representative reconstruction algorithms were investigated: point-by-point back projection (BP), filtered back projection (FBP), simultaneous algebraic reconstruction technique (SART), and maximum likelihood expectation maximization (MLEM). A dataset of 25 projection views of a 3D spherical object located at the center of the C-arm imaging space was simulated from 25 angular locations over a total view angle of 40 degrees. For each reconstruction algorithm, a 3D mesh plot and a 2D line profile of normalized pixel intensities on the in-focus reconstruction plane crossing the center of the object were studied. Results demonstrated the capability to generate 3D information from limited-angle C-arm tomosynthesis. Since C-arm tomosynthesis is relatively compact and portable and can avoid moving patients, it has been investigated for clinical applications ranging from tumor surgery to interventional radiology. It is therefore important to evaluate C-arm tomosynthesis for such applications.
Gutierrez-Jurado, H. A.; Guan, H.; Wang, J.; Wang, H.; Bras, R. L.; Simmons, C. T.
2015-12-01
Quantification of evapotranspiration (ET) and its partition over regions of heterogeneous topography and canopy poses a challenge for traditional approaches. In this study, we report the results of a novel field experiment design guided by the Maximum Entropy Production model of ET (MEP-ET), formulated for estimating evaporation and transpiration from homogeneous soil and canopy. A catchment with complex terrain and patchy vegetation in South Australia was instrumented to measure temperature, humidity and net radiation at soil and canopy surfaces. The performance of the MEP-ET model in quantifying transpiration and soil evaporation was evaluated during wet and dry conditions against independently and directly measured transpiration from sapflow and soil evaporation from the Bowen Ratio Energy Balance (BREB) method. MEP-ET transpiration shows remarkable agreement with sapflow measurements during wet conditions, but consistently overestimates the flux during dry periods. However, an additional term introduced into the original MEP-ET model to account for stronger stomatal regulation during dry spells, based on differences between leaf and air vapor pressure deficits and temperatures, significantly improves the model performance. On the other hand, MEP-ET soil evaporation is in good agreement with BREB estimates regardless of moisture conditions. The experimental design allows quantification of evaporation at the plot scale and of transpiration at the tree scale. This study confirms for the first time that the MEP-ET model, originally developed for homogeneous open bare soil and closed canopy, can be used for modeling ET over heterogeneous land surfaces. Furthermore, we show that with the addition of an empirical function simulating the plant's ability to regulate transpiration, based on the same measurements of temperature and humidity, the method can produce reliable estimates of ET during both wet and dry conditions without compromising its parsimony.
Maximum power operation of interacting molecular motors
Golubeva, Natalia; Imparato, Alberto
2013-01-01
We study the mechanical and thermodynamic properties of different traffic models for kinesin which are relevant in biological and experimental contexts. We find that motor-motor interactions play a fundamental role by enhancing the thermodynamic efficiency at maximum power of the motors, as compared to the non-interacting system, in a wide range of biologically compatible scenarios. We furthermore consider the case where the motor-motor interaction directly affects the internal chemical cycle and investigate the effect on the system dynamics and thermodynamics.
Maximum a posteriori decoder for digital communications
Altes, Richard A. (Inventor)
1997-01-01
A system and method for decoding by identification of the most likely phase coded signal corresponding to received data. The present invention has particular application to communication with signals that experience spurious random phase perturbations. The generalized estimator-correlator uses a maximum a posteriori (MAP) estimator to generate phase estimates for correlation with incoming data samples and for correlation with mean phases indicative of unique hypothesized signals. The result is a MAP likelihood statistic for each hypothesized transmission, wherein the highest value statistic identifies the transmitted signal.
Kernel-based Maximum Entropy Clustering
JIANG Wei; QU Jiao; LI Benxi
2007-01-01
With the development of Support Vector Machines (SVM), the "kernel method" has been studied in a general way. In this paper, we present a novel Kernel-based Maximum Entropy Clustering algorithm (KMEC). Using Mercer kernel functions, the proposed algorithm first maps the data from their original space to a high-dimensional feature space where the data are expected to be more separable, then performs MEC clustering in the feature space. The experimental results show that the proposed method performs better on non-hyperspherical and complex data structures.
The sun and heliosphere at solar maximum.
Smith, E J; Marsden, R G; Balogh, A; Gloeckler, G; Geiss, J; McComas, D J; McKibben, R B; MacDowall, R J; Lanzerotti, L J; Krupp, N; Krueger, H; Landgraf, M
2003-11-14
Recent Ulysses observations from the Sun's equator to the poles reveal fundamental properties of the three-dimensional heliosphere at the maximum in solar activity. The heliospheric magnetic field originates from a magnetic dipole oriented nearly perpendicular to, instead of nearly parallel to, the Sun's rotation axis. Magnetic fields, solar wind, and energetic charged particles from low-latitude sources reach all latitudes, including the polar caps. The very fast high-latitude wind and polar coronal holes disappear and reappear together. Solar wind speed continues to be inversely correlated with coronal temperature. The cosmic ray flux is reduced symmetrically at all latitudes.
Conductivity maximum in a charged colloidal suspension
Bastea, S
2009-01-27
Molecular dynamics simulations of a charged colloidal suspension in the salt-free regime show that the system exhibits an electrical conductivity maximum as a function of colloid charge. We attribute this behavior to two main competing effects: colloid effective charge saturation due to counterion 'condensation' and diffusion slowdown due to the relaxation effect. In agreement with previous observations, we also find that the effective transported charge is larger than the one determined by the Stern layer and suggest that it corresponds to the boundary fluid layer at the surface of the colloidal particles.
Maximum entropy signal restoration with linear programming
Mastin, G.A.; Hanson, R.J.
1988-05-01
Dantzig's bounded-variable method is used to express the maximum entropy restoration problem as a linear programming problem. This is done by approximating the nonlinear objective function with piecewise linear segments, then bounding the variables as a function of the number of segments used. The use of a linear programming approach allows equality constraints found in the traditional Lagrange multiplier method to be relaxed. A robust revised simplex algorithm is used to implement the restoration. Experimental results from 128- and 512-point signal restorations are presented.
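As an illustration of the piecewise-linear trick, the following sketch poses a small maximum entropy restoration as a linear program: the concave entropy -x ln x is replaced by the lower envelope of its tangent lines, and the equality constraints are relaxed to a tolerance band. The blur matrix, tolerance and segment count are assumptions, and scipy's HiGHS solver stands in for the revised simplex code used in the paper.

    import numpy as np
    from scipy.optimize import linprog

    def maxent_restore_lp(A, y, eps=0.05, n_seg=8, xmax=1.0):
        m, n = A.shape
        # H(x) = -x ln x is concave, so it is the lower envelope of its
        # tangent lines; sample tangent points on (0, xmax].
        p = np.linspace(xmax / n_seg, xmax, n_seg)
        slope = -np.log(p) - 1.0            # H'(p)
        intercept = p                       # H(p) - H'(p) * p simplifies to p
        # Variables z = [x (n), t (n)]; maximize sum(t) <=> minimize -sum(t).
        c = np.concatenate([np.zeros(n), -np.ones(n)])
        rows, rhs = [], []
        for a_k, b_k in zip(slope, intercept):
            # Epigraph cuts: t_i - a_k * x_i <= b_k for every sample i.
            rows.append(np.hstack([-a_k * np.eye(n), np.eye(n)]))
            rhs.append(np.full(n, b_k))
        # Relaxed (banded) data constraints: |A x - y| <= eps, elementwise.
        rows.append(np.hstack([A, np.zeros((m, n))])); rhs.append(y + eps)
        rows.append(np.hstack([-A, np.zeros((m, n))])); rhs.append(eps - y)
        res = linprog(c, A_ub=np.vstack(rows), b_ub=np.concatenate(rhs),
                      bounds=[(1e-6, xmax)] * n + [(None, None)] * n,
                      method="highs")
        return res.x[:n]

    # Toy deconvolution: a 3-tap blur of a sparse spike train.
    n = 64
    A = sum(np.diag(w * np.ones(n - abs(k)), k) for k, w in [(-1, .25), (0, .5), (1, .25)])
    x_true = np.zeros(n); x_true[[20, 40]] = 1.0
    x_hat = maxent_restore_lp(A, A @ x_true)
    print(np.round(x_hat[18:23], 3))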
COMPARISON BETWEEN FORMULAS OF MAXIMUM SHIP SQUAT
PETRU SERGIU SERBAN
2016-06-01
Full Text Available Ship squat is a combined effect of ship's draft and trim increase due to ship motion in limited navigation conditions. Over time, researchers have conducted tests on models and ships to find a mathematical formula that can define squat. Various forms of calculating squat can be found in the literature. Among the most commonly used are those of Barrass, Millward, Eryuzlu and ICORELS. This paper presents a comparison between the squat formulas to examine the differences between them and determine which one provides the most satisfactory results. In this respect, a cargo ship at different speeds was considered as a model for maximum squat calculations in canal navigation conditions.
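For a flavor of such a comparison, the sketch below implements two of the formulas in simplified forms commonly quoted in the squat literature (Barrass's confined-water formula and the ICORELS formula). Both expressions are reproduced from memory and should be verified against the original sources before use, and the ship parameters are invented for illustration.

    import math

    def squat_barrass(cb, blockage, v_knots):
        # Barrass (confined water): S = Cb * S_b^(2/3) * Vk^2.08 / 20  [m]
        return cb * blockage ** (2.0 / 3.0) * v_knots ** 2.08 / 20.0

    def squat_icorels(disp_volume, lpp, v_ms, depth):
        # ICORELS: S = 2.4 * (V / Lpp^2) * Fnh^2 / sqrt(1 - Fnh^2)  [m]
        fnh = v_ms / math.sqrt(9.81 * depth)       # depth Froude number
        return 2.4 * (disp_volume / lpp ** 2) * fnh ** 2 / math.sqrt(1.0 - fnh ** 2)

    # Illustrative cargo ship in a canal (all values assumed).
    cb, lpp, disp = 0.80, 120.0, 12000.0           # block coeff., length [m], volume [m^3]
    for kn in (6, 8, 10):
        print(f"{kn:2d} kn: Barrass {squat_barrass(cb, 0.25, kn):.2f} m, "
              f"ICORELS {squat_icorels(disp, lpp, kn * 0.5144, depth=10.0):.2f} m")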
Multi-Channel Maximum Likelihood Pitch Estimation
Christensen, Mads Græsbøll
2012-01-01
In this paper, a method for multi-channel pitch estimation is proposed. The method is a maximum likelihood estimator and is based on a parametric model where the signals in the various channels share the same fundamental frequency but can have different amplitudes, phases, and noise characteristics. This essentially means that the model allows for different conditions in the various channels, like different signal-to-noise ratios, microphone characteristics and reverberation. Moreover, the method does not assume that a certain array structure is used but rather relies on a more general model and is hence …
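A crude way to see why sharing only the fundamental helps is harmonic summation: each channel contributes its own spectral power at the candidate harmonics, so channel-specific amplitudes and phases drop out. The sketch below is such an approximation, not the paper's exact estimator; the sampling rate, candidate grid and harmonic count are assumed.

    import numpy as np

    def multichannel_pitch(channels, fs, f0_grid, n_harm=5):
        n = channels.shape[1]
        power = np.abs(np.fft.rfft(channels, axis=1)) ** 2   # (n_ch, n_bins)
        scores = []
        for f0 in f0_grid:
            bins = np.round(np.arange(1, n_harm + 1) * f0 * n / fs).astype(int)
            bins = bins[bins < power.shape[1]]
            scores.append(power[:, bins].sum())              # sum over channels and harmonics
        return f0_grid[int(np.argmax(scores))]

    # Two noisy channels sharing f0 = 155 Hz with different amplitudes and phases.
    fs, n = 8000, 4096
    t = np.arange(n) / fs
    rng = np.random.default_rng(0)
    make = lambda a, ph: sum(a / k * np.cos(2 * np.pi * 155 * k * t + ph * k)
                             for k in range(1, 6)) + 0.1 * rng.standard_normal(n)
    x = np.vstack([make(1.0, 0.3), make(0.5, 1.2)])
    print(multichannel_pitch(x, fs, np.arange(60.0, 400.0, 1.0)))  # ~155.0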
Maximum entropy PDF projection: A review
Baggenstoss, Paul M.
2017-06-01
We review maximum entropy (MaxEnt) PDF projection, a method with wide potential applications in statistical inference. The method constructs a sampling distribution for a high-dimensional vector x based on knowing the sampling distribution p(z) of a lower-dimensional feature z = T (x). Under mild conditions, the distribution p(x) having highest possible entropy among all distributions consistent with p(z) may be readily found. Furthermore, the MaxEnt p(x) may be sampled, making the approach useful in Monte Carlo methods. We review the theorem and present a case study in model order selection and classification for handwritten character recognition.
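The core identity, as commonly stated in the PDF projection literature (notation assumed here, not taken from the abstract), can be written as follows.

    % PDF projection: a reference density p_0(x), with induced feature
    % density p_0(z) under z = T(x), is re-weighted to match the known
    % feature density p(z).
    \[
      p(x) \;=\; p_0(x)\,\frac{p\bigl(T(x)\bigr)}{p_0\bigl(T(x)\bigr)},
      \qquad z = T(x).
    \]
    % Integrating p(x) over a level set {x : T(x) = z} recovers p(z); the
    % MaxEnt construction chooses the reference p_0 so that the resulting
    % p(x) has the highest entropy consistent with p(z).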
CORA: Emission Line Fitting with Maximum Likelihood
Ness, Jan-Uwe; Wichmann, Rainer
2011-12-01
CORA analyzes emission line spectra with low count numbers and fits them to a line using the maximum likelihood technique. CORA uses a rigorous application of Poisson statistics. From the assumption of Poissonian noise, the software derives the probability for a model of the emission line spectrum to represent the measured spectrum. The likelihood function is used as a criterion for optimizing the parameters of the theoretical spectrum and a fixed point equation is derived allowing an efficient way to obtain line fluxes. CORA has been applied to an X-ray spectrum with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra observatory.
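As a minimal illustration of Poisson maximum likelihood line fitting in the spirit of CORA, the sketch below fits a Gaussian line on a flat background to low-count data; the model form, start values and the use of scipy's Nelder-Mead minimizer are assumptions, not the CORA implementation.

    import numpy as np
    from scipy.optimize import minimize

    def neg_log_like(theta, x, counts):
        amp, center, sigma, bkg = theta
        model = bkg + amp * np.exp(-0.5 * ((x - center) / sigma) ** 2)
        model = np.clip(model, 1e-12, None)
        # Poisson -log L up to a theta-independent constant: sum(m - n ln m)
        return np.sum(model - counts * np.log(model))

    rng = np.random.default_rng(2)
    x = np.linspace(20.0, 24.0, 80)                      # wavelength grid (assumed units)
    truth = 5.0 * np.exp(-0.5 * ((x - 21.6) / 0.15) ** 2) + 0.3
    counts = rng.poisson(truth)                          # low-count spectrum
    fit = minimize(neg_log_like, x0=[3.0, 21.5, 0.2, 0.2], args=(x, counts),
                   method="Nelder-Mead")
    amp, center, sigma, bkg = fit.x
    print(f"center {center:.3f}, line flux ~ {amp * sigma * np.sqrt(2 * np.pi):.2f}")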
Dynamical maximum entropy approach to flocking
Cavagna, Andrea; Giardina, Irene; Ginelli, Francesco; Mora, Thierry; Piovani, Duccio; Tavarone, Raffaele; Walczak, Aleksandra M.
2014-04-01
We derive a new method to infer from data the out-of-equilibrium alignment dynamics of collectively moving animal groups, by considering the maximum entropy model distribution consistent with temporal and spatial correlations of flight direction. When bird neighborhoods evolve rapidly, this dynamical inference correctly learns the parameters of the model, while a static one relying only on the spatial correlations fails. When neighbors change slowly and the detailed balance is satisfied, we recover the static procedure. We demonstrate the validity of the method on simulated data. The approach is applicable to other systems of active matter.
Maximum Temperature Detection System for Integrated Circuits
Frankiewicz, Maciej; Kos, Andrzej
2015-03-01
The paper describes the structure and measurement results of a system detecting the present maximum temperature on the surface of an integrated circuit. The system consists of a set of proportional-to-absolute-temperature sensors, a temperature processing path and a digital part designed in VHDL. Analogue parts of the circuit were designed with a full-custom technique. The system is part of a temperature-controlled oscillator circuit - a power management system based on the dynamic frequency scaling method. The oscillator cooperates with a microprocessor dedicated to thermal experiments. The whole system is implemented in UMC CMOS 0.18 μm (1.8 V) technology.
Zipf's law and maximum sustainable growth
Malevergne, Y; Sornette, D
2010-01-01
Zipf's law states that the number of firms with size greater than S is inversely proportional to S. Most explanations start with Gibrat's rule of proportional growth but require additional constraints. We show that Gibrat's rule, at all firm levels, yields Zipf's law under a balance condition between the effective growth rate of incumbent firms (which includes their possible demise) and the growth rate of investments in entrant firms. Remarkably, Zipf's law is the signature of the long-term optimal allocation of resources that ensures the maximum sustainable growth rate of an economy.
V. Forcella
2012-09-01
Full Text Available This paper deals with the reconstruction of a building, starting from a point cloud. The shape of this building is a non-stellar concave and multi-connected structure, composed of sowns and chains. A sown is the representation of a horizontal plane formed by dense points. A chain is a planar loop modeled by rare points. CCTV structure is defined only by the three orthogonal Cartesian coordinates. The reconstruction uses a sequence of procedures and the desired output is a consistent 3D model. The first procedure is devoted to attributing points to their voxel and to estimating the three values needed afterwards. The second procedure is devoted to analyzing clusters vertically and horizontally, to preliminarily distinguishing chains from sowns and to generating relational matching. The third procedure is devoted to building closed loops between all chains and all their projections on sowns. The fourth procedure is devoted to connecting points with triangles. The fifth procedure, still being implemented, is devoted to interpolating triangles with triangular splines.
Biomaterials for craniofacial reconstruction
Neumann, Andreas
2009-01-01
Full Text Available Biomaterials for reconstruction of bony defects of the skull comprise osteosynthetic materials applied after osteotomies or traumatic fractures and materials to fill bony defects which result from malformation, trauma or tumor resections. Other applications concern functional augmentations for dental implants or aesthetic augmentations in the facial region. For osteosynthesis, mini- and microplates made from titanium alloys provide major advantages concerning biocompatibility, stability and individual fitting to the implant bed. The necessity of removing asymptomatic plates and screws after fracture healing is still a controversial issue. The risks and costs of secondary surgery for removal are set against a low rate of complications (due to corrosion products) when the material remains in situ. Resorbable osteosynthesis systems have similar mechanical stability and are especially useful in the growing skull. The huge variety of biomaterials for the reconstruction of bony defects makes it difficult to decide which material is adequate for which indication and for which site. The optimal biomaterial that meets every requirement (e.g. biocompatibility, stability, intraoperative fitting, product safety, low costs etc.) does not exist. The different material types are autogenous bone and many alloplastics such as metals (mainly titanium), ceramics, plastics and composites. Future developments aim to improve physical and biological properties, especially regarding surface interactions. To date, tissue-engineered bone is far from routine clinical application.
Reconstructing Experiences through Sketching
Karapanos, Evangelos; Hassenzahl, Marc
2009-01-01
This paper presents iScale, a survey tool that aims at eliciting users' experiences with a product from memory. iScale employs sketching to impose a process on the reconstruction of one's experiences. Two versions of iScale, the Constructive and the Value-Account iScale, were motivated by two distinct theories on how people reconstruct emotional experiences from memory. These two versions were tested in two separate studies. Study 1 aimed at providing qualitative insight into the use of iScale and compared its performance to that of free-hand sketching. Study 2 compared the two iScale versions to a control condition: that of reporting one's experiences without employing any form of sketching. Significant differences between iScale and the "no-sketching" tool were found. Overall, iScale resulted in a) an increase in the number of experience reports that subjects provided, b) an increase in the amount of contextual information for the reported experiences, and c) an increase in subjects' accuracy in recalling …
Augusto, O
2012-01-01
The Large Hadron Collider (LHC) is the most powerful particle accelerator in the world. It has been designed to collide proton beams at an energy of up to 14 TeV in the center of mass. In 2011, data taking was done at a center of mass energy of 7 TeV; the instantaneous luminosity reached values greater than $4 \times 10^{32} cm^{-2} s^{-1}$ and the integrated luminosity reached 1.02 $fb^{-1}$ at LHCb. Jet reconstruction is fundamental to observing events that can be used to test perturbative QCD (pQCD). It also provides a way to observe standard model channels and to search for new physics such as SUSY. The anti-kt algorithm is a jet reconstruction algorithm based on the distance between particles in the $\eta \times \phi$ space and on the transverse momentum of the particles. To maximize the energy resolution, all information about the trackers and the calo…
Reconstruction in Fourier space
Burden, A.; Percival, W. J.; Howlett, C.
2015-10-01
We present a fast iterative fast Fourier transform (FFT) based reconstruction algorithm that allows for non-parallel redshift-space distortions (RSDs). We test our algorithm on both N-body dark matter simulations and mock distributions of galaxies designed to replicate galaxy survey conditions. We compare solenoidal and irrotational components of the redshift distortion and show that an approximation of this distortion leads to a better estimate of the real-space potential (and therefore faster convergence) than ignoring the RSD when estimating the displacement field. Our iterative reconstruction scheme converges in two iterations for the mock samples corresponding to Baryon Oscillation Spectroscopic Survey CMASS Data Release 11 when we start with an approximation of the RSD. The scheme takes six iterations when the initial estimate, measured from the redshift-space overdensity, has no RSD correction. Slower convergence would be expected for surveys covering a larger angle on the sky. We show that this FFT based method provides a better estimate of the real-space displacement field than a configuration space method that uses finite difference routines to compute the potential for the same grid resolution. Finally, we show that a lognormal transform of the overdensity, used as a proxy for the linear overdensity, is beneficial in estimating the full displacement field from a dense sample of tracers. However, the lognormal transform of the overdensity does not perform well when estimating the displacements from sparser simulations with a more realistic galaxy density.
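The core FFT step of such reconstruction schemes can be sketched in a few lines: the displacement field follows from the gridded overdensity by solving for the potential in Fourier space. The snippet below shows this Zel'dovich-type estimate only, omitting the paper's iterative RSD correction; the grid size, box size and smoothing scale are assumed.

    import numpy as np

    def displacement_field(delta, box_size, smooth=10.0):
        n = delta.shape[0]
        k1d = 2.0 * np.pi * np.fft.fftfreq(n, d=box_size / n)
        kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
        k2 = kx ** 2 + ky ** 2 + kz ** 2
        k2[0, 0, 0] = 1.0                                # avoid division by zero at k = 0
        dk = np.fft.fftn(delta) * np.exp(-0.5 * smooth ** 2 * k2)  # Gaussian smoothing
        dk[0, 0, 0] = 0.0
        # Continuity delta = -div(psi) gives psi_j(k) = i k_j delta(k) / k^2
        return [np.real(np.fft.ifftn(1j * kj * dk / k2)) for kj in (kx, ky, kz)]

    delta = 0.1 * np.random.default_rng(3).standard_normal((64, 64, 64))
    psi_x, psi_y, psi_z = displacement_field(delta, box_size=1000.0)  # Mpc/h units assumed
    print(psi_x.std())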
Accurate structural correlations from maximum likelihood superpositions.
Douglas L Theobald
2008-02-01
Full Text Available The cores of globular proteins are densely packed, resulting in complicated networks of structural interactions. These interactions in turn give rise to dynamic structural correlations over a wide range of time scales. Accurate analysis of these complex correlations is crucial for understanding biomolecular mechanisms and for relating structure to function. Here we report a highly accurate technique for inferring the major modes of structural correlation in macromolecules using likelihood-based statistical analysis of sets of structures. This method is generally applicable to any ensemble of related molecules, including families of nuclear magnetic resonance (NMR) models, different crystal forms of a protein, and structural alignments of homologous proteins, as well as molecular dynamics trajectories. Dominant modes of structural correlation are determined using principal components analysis (PCA) of the maximum likelihood estimate of the correlation matrix. The correlations we identify are inherently independent of the statistical uncertainty and dynamic heterogeneity associated with the structural coordinates. We additionally present an easily interpretable method ("PCA plots") for displaying these positional correlations by color-coding them onto a macromolecular structure. Maximum likelihood PCA of structural superpositions, and the structural PCA plots that illustrate the results, will facilitate the accurate determination of dynamic structural correlations analyzed in diverse fields of structural biology.
Maximum entropy production and the fluctuation theorem
Dewar, R C [Unite EPHYSE, INRA Centre de Bordeaux-Aquitaine, BP 81, 33883 Villenave d' Ornon Cedex (France)
2005-05-27
Recently the author used an information theoretical formulation of non-equilibrium statistical mechanics (MaxEnt) to derive the fluctuation theorem (FT) concerning the probability of second law violating phase-space paths. A less rigorous argument leading to the variational principle of maximum entropy production (MEP) was also given. Here a more rigorous and general mathematical derivation of MEP from MaxEnt is presented, and the relationship between MEP and the FT is thereby clarified. Specifically, it is shown that the FT allows a general orthogonality property of maximum information entropy to be extended to entropy production itself, from which MEP then follows. The new derivation highlights MEP and the FT as generic properties of MaxEnt probability distributions involving anti-symmetric constraints, independently of any physical interpretation. Physically, MEP applies to the entropy production of those macroscopic fluxes that are free to vary under the imposed constraints, and corresponds to selection of the most probable macroscopic flux configuration. In special cases MaxEnt also leads to various upper bound transport principles. The relationship between MaxEnt and previous theories of irreversible processes due to Onsager, Prigogine and Ziegler is also clarified in the light of these results. (letter to the editor)
Thermodynamic hardness and the maximum hardness principle
Franco-Pérez, Marco; Gázquez, José L.; Ayers, Paul W.; Vela, Alberto
2017-08-01
An alternative definition of hardness (called the thermodynamic hardness) within the grand canonical ensemble formalism is proposed in terms of the partial derivative of the electronic chemical potential with respect to the thermodynamic chemical potential of the reservoir, keeping the temperature and the external potential constant. This temperature-dependent definition may be interpreted as a measure of the propensity of a system to go through a charge transfer process when it interacts with other species, and thus it keeps the philosophy of the original definition. When the derivative is expressed in terms of the three-state ensemble model, in the regime of low temperatures and up to temperatures of chemical interest, one finds that for zero fractional charge, the thermodynamic hardness is proportional to T^{-1}(I − A), where I is the first ionization potential, A is the electron affinity, and T is the temperature. However, the thermodynamic hardness is nearly zero when the fractional charge is different from zero. Thus, through the present definition, one avoids the presence of the Dirac delta function. We show that the chemical hardness defined in this way provides meaningful and discernible information about the hardness properties of a chemical species exhibiting an integer or a fractional average number of electrons, and this analysis allowed us to establish a link between the maximum possible value of the hardness here defined and the minimum softness principle, showing that both principles are related to minimum fractional charge and maximum stability conditions.
Maximum Likelihood Analysis in the PEN Experiment
Lehman, Martin
2013-10-01
The experimental determination of the π⁺ → e⁺ν(γ) decay branching ratio currently provides the most accurate test of lepton universality. The PEN experiment at PSI, Switzerland, aims to improve the present world average experimental precision of 3.3×10⁻³ to 5×10⁻⁴ using a stopped beam approach. During runs in 2008-10, PEN acquired over 2×10⁷ πe2 events. The experiment includes active beam detectors (degrader, mini TPC, target), central MWPC tracking with plastic scintillator hodoscopes, and a spherical pure CsI electromagnetic shower calorimeter. The final branching ratio will be calculated using a maximum likelihood analysis. This analysis assigns each event a probability for 5 processes (π⁺ → e⁺ν, π⁺ → μ⁺ν, decay-in-flight, pile-up, and hadronic events) using Monte Carlo verified probability distribution functions of our observables (energies, times, etc.). A progress report on the PEN maximum likelihood analysis will be presented. Work supported by NSF grant PHY-0970013.
Solar Forcing of Greenland Climate during the Last Glacial Maximum
Adolphi, Florian; Muscheler, Raimund; Svensson, Anders; Aldahan, Ala; Possnert, Göran; Beer, Juerg; Sjolte, Jesper; Björck, Svante
2014-05-01
The role of solar forcing in climate changes is a matter of continuous debate. Challenges arise from the short period of direct observations of total solar irradiance (TSI), which indicate minor TSI variations of approximately 1 ‰ over an 11-year cycle, and from the limited understanding of possible feedback mechanisms. In contrast to this, there is evidence from paleoclimate records for a tight coupling of solar activity and regional climate (e.g., Bond et al. 2001, Martin-Puertas et al. 2012). One proposed mechanism to amplify the Sun's influence on climate involves the relatively large modulation of the solar UV output (Haigh et al. 2010). This alters the radiative balance in the stratosphere via ozone feedback processes and eventually propagates downwards, causing changes in the tropospheric circulation (Ineson et al. 2011). The regional response to this forcing may, however, also depend on orbital forcing of the mean state of the atmosphere (Dietrich et al. 2012). Prior to direct observations, cosmogenic radionuclides such as 10Be and 14C are the most reliable proxies of solar activity. Their atmospheric production rates depend on the flux of galactic cosmic rays into the atmosphere, which in turn is modulated by the strength of the Earth's and the solar magnetic fields. However, archives of 10Be and 14C are additionally affected by changes in their respective geochemical environments. Owing to their fundamentally different geochemistry, a combined analysis of 10Be and 14C records can help isolate production rate variations more reliably and thus lead to improved reconstructions of solar variability. Due to the absence of high-quality, high-resolution data, this approach has so far been limited to the Holocene. We will present the first solar activity reconstruction for the end of the last glacial (22.5-10 ka BP) based on the cosmogenic radionuclides 10Be and 14C. We will compare glacial solar activity variations to Holocene features through combined interpretation…
Exercises in PET Image Reconstruction
Nix, Oliver
These exercises are complementary to the theoretical lectures about positron emission tomography (PET) image reconstruction. They aim at providing some hands-on experience in PET image reconstruction and focus on demonstrating the different data preprocessing steps and reconstruction algorithms needed to obtain high-quality PET images. Normalisation and geometric, attenuation and scatter corrections are introduced. To explain the necessity of these, some basics about PET scanner hardware, data acquisition and organisation are reviewed. During the course the students use a software application based on the STIR (software for tomographic image reconstruction) library [1,2], which allows them to dynamically select or deselect corrections and reconstruction methods as well as to modify their most important parameters. Following the guided tutorial, the students get an impression of the effect the individual data precorrections have on image quality and of what happens if they are forgotten. Several data sets in sinogram format are provided, such as line source data, Jaszczak phantom data sets with high and low statistics, and NEMA whole body phantom data. The two most frequently used reconstruction algorithms in PET image reconstruction, filtered back projection (FBP) and the iterative OSEM (ordered subset expectation maximization) approach, are used to reconstruct images. The exercise should help the students gain an understanding of the reasons for inferior image quality and artefacts and of how to improve quality by a clever choice of reconstruction parameters.
Virtual 3-D Facial Reconstruction
Martin Paul Evison
2000-06-01
Full Text Available Facial reconstructions in archaeology allow empathy with people who lived in the past and enjoy considerable popularity with the public. It is a common misconception that facial reconstruction will produce an exact likeness; a resemblance is the best that can be hoped for. Research at Sheffield University is aimed at the development of a computer system for facial reconstruction that will be accurate, rapid, repeatable, accessible and flexible. This research is described and prototypical 3-D facial reconstructions are presented. Interpolation models simulating obesity, ageing and ethnic affiliation are also described. Some strengths and weaknesses in the models, and their potential for application in archaeology are discussed.
[Deferred breast reconstruction - soul surgery?].
Kydlíček, Tomáš; Třešková, Inka; Třeška, Vladislav; Holubec, Luboš
2013-01-01
The loss or mutilation of a breast as a result of surgical treatment of neoplastic disease always has a negative impact on a woman's psyche and negatively influences the quality of her remaining life. The goal of our work was to implement deferred breast reconstruction into routine practice and to objectify the influence of reconstruction on bodily integrity, quality of life, and the feeling of satisfaction in women. Women in remission from neoplastic disease after a radical mastectomy were indicated for breast reconstruction. Between January 2002 and December 2011, deferred breast reconstruction was carried out 174 times on 163 women, with an average age of 49.2 years (range 29-67 years). The most frequently used reconstruction method was a simple gel augmentation of the breast or a Becker expander/implant, in 51 (29.3%) and 37 (21.3%) cases respectively, or either in combination with a lateral thoracodorsal flap (31; 17.8% and 47; 27%); reconstruction using a free DIEP flap was carried out 7 times (4%). Complications occurred in 19 operations (10.9%), dominated by inflammation and pericapsular fibrosis. In a subjective analysis, satisfaction with the results prevailed, along with an increased quality of life after reconstruction. The growing number of deferred breast reconstructions, women's satisfaction with the results, and the positive influence of renewed bodily integrity on the feeling of life satisfaction and the quality of life have elevated breast reconstruction to a qualitatively higher level.
Lake Basin Fetch and Maximum Length/Width
Minnesota Department of Natural Resources — Linear features representing the Fetch, Maximum Length and Maximum Width of a lake basin. Fetch, maximum length and maximum width are calculated from the lake polygon...
Application of CT 3D reconstruction in diagnosing atlantoaxial subluxation
段少银; 林清池; 庞瑞麟
2004-01-01
Objective: To evaluate and compare the diagnostic value of CT three-dimensional (3D) reconstruction methods in atlantoaxial subluxation. Methods: 3D reconstruction findings of 41 patients with atlantoaxial subluxation were retrospectively analyzed, and comparisons were made among images of transverse section, multiplanar reformatting (MPR), surface shaded display (SSD), maximum intensity projection (MIP), and volume rendering (VR). Results: Of the 41 patients with atlantoaxial subluxation, 31 had rotary dislocation, 5 anterior dislocation, and 5 posterior dislocation. All the cases showed the dislocated joint facet of the atlantoaxial articulation. Fifteen cases showed deviation of the odontoid process and 8 cases a widened distance between the dens and the anterior arch of the atlas. The dislocated joint facet of the atlantoaxial articulation was seen more clearly with SSD 3D imaging than with any other method. Conclusions: Atlantoaxial subluxation can be well diagnosed by CT 3D reconstruction, in which SSD 3D imaging is optimal.
CONNJUR Workflow Builder: a software integration environment for spectral reconstruction
Fenwick, Matthew; Weatherby, Gerard; Vyas, Jay; Sesanker, Colbert [UConn Health, Department of Molecular Biology and Biophysics (United States); Martyn, Timothy O. [Rensselaer at Hartford, Department of Engineering and Science (United States); Ellis, Heidi J. C. [Western New England College, Department of Computer Science and Information Technology (United States); Gryk, Michael R., E-mail: gryk@uchc.edu [UConn Health, Department of Molecular Biology and Biophysics (United States)
2015-07-15
CONNJUR Workflow Builder (WB) is an open-source software integration environment that leverages existing spectral reconstruction tools to create a synergistic, coherent platform for converting biomolecular NMR data from the time domain to the frequency domain. WB provides data integration of primary data and metadata using a relational database, and includes a library of pre-built workflows for processing time domain data. WB simplifies maximum entropy reconstruction, facilitating the processing of non-uniformly sampled time domain data. As will be shown in the paper, the unique features of WB give it novel abilities to enhance the quality, accuracy, and fidelity of the spectral reconstruction process. WB also provides features that promote collaboration, education, and parameterization, supports non-uniform data sets, and integrates processing with the Rowland NMR Toolkit (RNMRTK) and NMRPipe software packages. WB is available free of charge in perpetuity, dual-licensed under the MIT and GPL open source licenses.
Anatomically-aided PET reconstruction using the kernel method
Hutchcroft, Will; Wang, Guobao; Chen, Kevin T.; Catana, Ciprian; Qi, Jinyi
2016-09-01
This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization algorithm.
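A toy version of the kernelized EM update helps make the construction concrete: the image is parameterized as x = Kα, with K built from an anatomical prior image, and the usual ML-EM update is applied to the coefficients. The 1D "scanner", kernel construction and sizes below are illustrative assumptions, not the authors' code.

    import numpy as np

    def gaussian_kernel_matrix(prior, sigma=0.2, radius=3):
        # Row-normalized similarity of anatomical prior values among nearby pixels.
        n = prior.size
        K = np.zeros((n, n))
        for i in range(n):
            lo, hi = max(0, i - radius), min(n, i + radius + 1)
            w = np.exp(-0.5 * ((prior[lo:hi] - prior[i]) / sigma) ** 2)
            K[i, lo:hi] = w / w.sum()
        return K

    def kernel_mlem(A, y, K, n_iter=50):
        P = A @ K                                  # effective system matrix
        sens = P.T @ np.ones_like(y)               # sensitivity image P^T 1
        a = np.ones(K.shape[1])                    # kernel coefficients
        for _ in range(n_iter):
            ratio = y / np.clip(P @ a, 1e-12, None)
            a *= (P.T @ ratio) / np.clip(sens, 1e-12, None)
        return K @ a                               # reconstructed image x = K a

    # 1D toy: a blurring 'scanner' and an anatomy-aligned activity edge at pixel 32.
    n = 64
    A = sum(np.diag(w * np.ones(n - abs(k)), k) for k, w in [(-1, .25), (0, .5), (1, .25)])
    prior = (np.arange(n) >= 32).astype(float)
    x_true = 1.0 + 2.0 * prior
    y = np.random.default_rng(4).poisson(20 * (A @ x_true)) / 20.0
    print(np.round(kernel_mlem(A, y, gaussian_kernel_matrix(prior))[30:35], 2))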
Position Reconstruction in a Dual Phase Xenon Scintillation Detector
Solovov, V N; Akimov, D Yu; Araújo, H M; Barnes, E J; Burenkov, A A; Chepel, V; Currie, A; DeViveiros, L; Edwards, B; Ghag, C; Hollingsworth, A; Horn, M; Kalmus, G E; Kobyakin, A S; Kovalenko, A G; Lebedenko, V N; Lindote, A; Lopes, M I; Lüscher, R; Majewski, P; Murphy, A St J; Neves, F; Paling, S M; da Cunha, J Pinto; Preece, R; Quenby, J J; Reichhart, L; Scovell, P R; Silva, C; Smith, N J T; Smith, P F; Stekhanov, V N; Sumner, T J; Thorne, C; Walker, R J
2011-01-01
We studied the application of statistical reconstruction algorithms, namely maximum likelihood and least squares methods, to the problem of event reconstruction in a dual phase liquid xenon detector. An iterative method was developed for in-situ reconstruction of the PMT light response functions from calibration data taken with an uncollimated gamma-ray source. Using the techniques described, the performance of the ZEPLIN-III dark matter detector was studied for 122 keV gamma-rays. For the inner part of the detector (R<100 mm), spatial resolutions of 13 mm and 1.6 mm FWHM were measured in the horizontal plane for primary and secondary scintillation, respectively. An energy resolution of 8.1% FWHM was achieved at that energy. The possibility of using this technique for improving performance and reducing cost of scintillation cameras for medical applications is currently under study.
Benvenuto, Federico
2012-01-01
In this paper we propose a new statistical stopping rule for constrained maximum likelihood iterative algorithms applied to ill-posed inverse problems. To this aim we extend the definition of Tikhonov regularization in a statistical framework and prove that the application of the proposed stopping rule to the Iterative Space Reconstruction Algorithm (ISRA) in the Gaussian case and Expectation Maximization (EM) in the Poisson case leads to well defined regularization methods according to the given definition. We also prove that, if an inverse problem is genuinely ill-posed in the sense of Tikhonov, the same definition is not satisfied when ISRA and EM are optimized by classical stopping rule like Morozov's discrepancy principle, Pearson's test and Poisson discrepancy principle. The stopping rule is illustrated in the case of image reconstruction from data recorded by the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI). First, by using a simulated image consisting of structures analogous to those ...
Dynamic characteristics of an NC table with phase space reconstruction
Linhong WANG; Bo WU; Runsheng DU; Shuzi YANG
2009-01-01
The dynamic properties of a numerical control (NC) table directly affect the accuracy and surface quality of workpieces machined by a computer numerical control (CNC) machine. Phase space reconstruction is an effective approach for studying the dynamic behavior of a system from measured time series. Based on the theory and method of phase space reconstruction, the correlation dimension, maximum Lyapunov exponent, and dynamic time series measured from the NC table were analyzed. The characteristic quantities such as the power spectrum, phase trajectories, correlation dimension, and maximum Lyapunov exponent were extracted from the measured time series. The chaotic character of the dynamic properties of the NC table is revealed via these various approaches; therefore, an NC table is a nonlinear dynamic system. This research establishes a basis for dynamic system discrimination of a CNC machine.
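The two basic ingredients, time-delay embedding and the correlation sum used to estimate the correlation dimension, can be sketched as follows; the delay, embedding dimension, radii and the surrogate signal are illustrative choices, not values from the study.

    import numpy as np

    def delay_embed(x, dim=3, tau=8):
        # Reconstruct the phase space as vectors (x_t, x_{t+tau}, ..., x_{t+(dim-1)tau}).
        n = len(x) - (dim - 1) * tau
        return np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])

    def correlation_sum(Y, r):
        # Fraction of point pairs closer than r (Grassberger-Procaccia C(r)).
        d = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=-1)
        iu = np.triu_indices(len(Y), k=1)
        return np.mean(d[iu] < r)

    # Surrogate 'measured' series: a noisy two-tone signal.
    t = np.linspace(0.0, 40.0 * np.pi, 800)
    x = (np.sin(t) + 0.5 * np.sin(2.1 * t)
         + 0.02 * np.random.default_rng(5).standard_normal(t.size))
    Y = delay_embed(x)
    for r in (0.1, 0.2, 0.4):
        print(r, correlation_sum(Y, r))
    # The slope of log C(r) vs log r over the scaling region estimates the
    # correlation dimension of the reconstructed attractor.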
Effect of reconstruction parameters on defect detection in fan-beam SPECT
Gregoriou, George K.
2002-05-01
The effect of reconstruction parameters on the fan-beam filtered backprojection method in myocardial defect detection was investigated using an observer performance study and receiver operating characteristic (ROC) analysis. A mathematical phantom of the human torso was used to model the anatomy and Thallium-201 (Tl-201) uptake in humans. Half-scan fan-beam realistic projections were simulated using a low-energy high-resolution (LEHR) collimator and incorporated the effects of photon attenuation, spatially varying detector response, scatter, and Poisson noise. A focal length of 55 cm and a radius of rotation of 25 cm were used, which resulted in a magnification of two at the center of rotation and a maximum magnification of three in the reconstructed region of interest. By changing the reconstruction pixel size, five different projection bin width to reconstruction pixel size (PBWRPS) ratios were obtained, which resulted in five classes of reconstructed images. Myocardial defects were simulated as Gaussian-shaped decreases in the Tl-201 uptake distribution. The total projection count per 3 mm image slice was 44,000. A total of 96 reconstructed transaxial images from each of the five classes were shown to eight observers for evaluation. The results indicate that the reconstruction pixel size has a significant effect on the quality of fan-beam SPECT images. Moreover, the study indicated that in order to ensure the best image quality the PBWRPS ratio should be at least as large as the maximum possible magnification inside the reconstructed image array.
Maximum entropy principle and texture formation
Arminjon, M; Arminjon, Mayeul; Imbault, Didier
2006-01-01
The macro-to-micro transition in a heterogeneous material is envisaged as the selection of a probability distribution by the Principle of Maximum Entropy (MAXENT). The material is made of constituents, e.g. given crystal orientations. Each constituent is itself made of a large number of elementary constituents. The relevant probability is the volume fraction of the elementary constituents that belong to a given constituent and undergo a given stimulus. Assuming only obvious constraints in MAXENT means describing a maximally disordered material. This is proved to have the same average stimulus in each constituent. By adding a constraint in MAXENT, a new model, potentially interesting e.g. for texture prediction, is obtained.
MLDS: Maximum Likelihood Difference Scaling in R
Kenneth Knoblauch
2008-01-01
Full Text Available The MLDS package in the R programming language can be used to estimate perceptual scales based on the results of psychophysical experiments using the method of difference scaling. In a difference scaling experiment, observers compare two supra-threshold differences (a,b) and (c,d) on each trial. The approach is based on a stochastic model of how the observer decides which perceptual difference (or interval), (a,b) or (c,d), is greater, and the parameters of the model are estimated using a maximum likelihood criterion. We also propose a method to test the model by evaluating the self-consistency of the estimated scale. The package includes an example in which an observer judges the differences in correlation between scatterplots. The example may be readily adapted to estimate perceptual scales for arbitrary physical continua.
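Since MLDS itself is an R package, the following is a compact Python re-implementation of the core idea, for illustration only: scale values are estimated by maximizing a Bernoulli likelihood under a Gaussian decision model for quadruple judgments. The simulated observer and optimizer settings are assumptions, not part of the package.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    def fit_mlds(quads, resp, n_levels):
        # psi_1 is fixed at 0 and psi_n at 1, as in MLDS; the remaining scale
        # values and the noise sd sigma are free parameters.
        def unpack(theta):
            psi = np.concatenate([[0.0], theta[:-1], [1.0]])
            return psi, np.exp(theta[-1])
        def nll(theta):
            psi, sigma = unpack(theta)
            a, b, c, d = psi[quads].T
            z = ((d - c) - (b - a)) / sigma            # Gaussian decision variable
            prob = np.clip(norm.cdf(z), 1e-9, 1 - 1e-9)
            return -np.sum(resp * np.log(prob) + (1 - resp) * np.log(1 - prob))
        theta0 = np.concatenate([np.linspace(0, 1, n_levels)[1:-1], [np.log(0.2)]])
        return unpack(minimize(nll, theta0, method="BFGS").x)

    # Simulated observer with a nonlinear true scale psi(s) = s^2.
    rng = np.random.default_rng(6)
    levels = np.linspace(0, 1, 8)
    quads = np.sort(rng.choice(8, size=(500, 4), replace=True), axis=1)
    true = levels ** 2
    dec = (true[quads[:, 3]] - true[quads[:, 2]]) - (true[quads[:, 1]] - true[quads[:, 0]])
    resp = (dec + 0.1 * rng.standard_normal(500) > 0).astype(float)
    psi_hat, sigma_hat = fit_mlds(quads, resp, 8)
    print(np.round(psi_hat, 2))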
Maximum Profit Configurations of Commercial Engines
Yiran Chen
2011-06-01
Full Text Available An investigation of commercial engines with finite-capacity low- and high-price economic subsystems and a generalized commodity transfer law [n ∝ Δ(P^m)] in commodity flow processes, in which the effects of the price elasticities of supply and demand are introduced, is presented in this paper. Optimal cycle configurations of commercial engines for maximum profit are obtained by applying optimal control theory. In some special cases, the eventual state—market equilibrium—is solely determined by the initial conditions and the inherent characteristics of the two subsystems, while the different ways of transfer affect the model in respect of the specific forms of the paths of prices and the instantaneous commodity flow, i.e., the optimal configuration.
Maximum Segment Sum, Monadically (distilled tutorial
Jeremy Gibbons
2011-09-01
Full Text Available The maximum segment sum problem is to compute, given a list of integers, the largest of the sums of the contiguous segments of that list. This problem specification maps directly onto a cubic-time algorithm; however, there is a very elegant linear-time solution too. The problem is a classic exercise in the mathematics of program construction, illustrating important principles such as calculational development, pointfree reasoning, algebraic structure, and datatype-genericity. Here, we take a sideways look at the datatype-generic version of the problem in terms of monadic functional programming, instead of the traditional relational approach; the presentation is tutorial in style, and leavened with exercises for the reader.
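For reference, the elegant linear-time solution mentioned above is the classic fold known as Kadane's algorithm; the Python version below illustrates the end result of such a derivation, not the paper's monadic, datatype-generic development.

    def max_segment_sum(xs):
        # Empty segments are allowed, so the answer is at least 0.
        best = cur = 0
        for x in xs:
            cur = max(0, cur + x)      # best sum of a segment ending here
            best = max(best, cur)      # best sum seen anywhere so far
        return best

    print(max_segment_sum([31, -41, 59, 26, -53, 58, 97, -93, -23, 84]))  # 187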
Maximum Information and Quantum Prediction Algorithms
McElwaine, J N
1997-01-01
This paper describes an algorithm for selecting a consistent set within the consistent histories approach to quantum mechanics and investigates its properties. The algorithm uses a maximum information principle to select from among the consistent sets formed by projections defined by the Schmidt decomposition. The algorithm unconditionally predicts the possible events in closed quantum systems and ascribes probabilities to these events. A simple spin model is described and a complete classification of all exactly consistent sets of histories formed from Schmidt projections in the model is proved. This result is used to show that for this example the algorithm selects a physically realistic set. Other tentative suggestions in the literature for set selection algorithms using ideas from information theory are discussed.
Maximum process problems in optimal control theory
Goran Peskir
2005-01-01
Full Text Available Given a standard Brownian motion (B_t)_{t≥0} and the equation of motion dX_t = v_t dt + 2 dB_t, we set S_t = max_{0≤s≤t} X_s and consider the optimal control problem sup_v E(S_τ − cτ), where c > 0 and the supremum is taken over all admissible controls v satisfying v_t ∈ [μ0, μ1] for all t up to τ = inf{t > 0 | X_t ∉ (ℓ0, ℓ1)}. The optimal control switches between μ0 and μ1 at the boundary g∗(S_t), where s ↦ g∗(s) is a switching curve that is determined explicitly (as the unique solution to a nonlinear differential equation). The solution found demonstrates that problem formulations based on a maximum functional can be successfully included in optimal control theory (the calculus of variations), in addition to the classic problem formulations due to Lagrange, Mayer, and Bolza.
Maximum Spectral Luminous Efficacy of White Light
Murphy, T W
2013-01-01
As lighting efficiency improves, it is useful to understand the theoretical limits to luminous efficacy for light that we perceive as white. Independent of the efficiency with which photons are generated, there exists a spectrally imposed limit to the luminous efficacy of any source of photons. We find that, depending on the acceptable bandpass and, to a lesser extent, the color temperature of the light, the ideal white light source achieves a spectral luminous efficacy of 250-370 lm/W. This is consistent with previous calculations, but here we explore the maximum luminous efficacy as a function of photopic sensitivity threshold, color temperature, and color rendering index, deriving peak performance as a function of all three parameters. We also present example experimental spectra from a variety of light sources, quantifying the intrinsic efficacy of their spectral distributions.
Maximum entropy model for business cycle synchronization
Xi, Ning; Muneepeerakul, Rachata; Azaele, Sandro; Wang, Yougui
2014-11-01
The global economy is a complex dynamical system whose cyclical fluctuations can mainly be characterized by simultaneous recessions or expansions of major economies. Research on the synchronization phenomenon is thus key to understanding and controlling the dynamics of the global economy. Based on a pairwise maximum entropy model, we analyze the business cycle synchronization of the G7 economic system. We obtain a pairwise-interaction network which exhibits a certain clustering structure and accounts for 45% of the entire structure of the interactions within the G7 system. We also find that the pairwise interactions become increasingly inadequate in capturing the synchronization as the size of the economic system grows. Thus, higher-order interactions must be taken into account when investigating the behavior of large economic systems.
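A small sketch of the underlying fitting problem: a pairwise maximum entropy (Ising-type) model for binary expansion/recession indicators, trained by gradient ascent on the exact likelihood, which is tractable for seven economies since the 2^7 states can be enumerated. The synthetic data below stand in for the G7 series used in the paper.

    import itertools
    import numpy as np

    def fit_pairwise_maxent(X, lr=0.1, n_iter=2000):
        n_samp, n = X.shape
        states = np.array(list(itertools.product([-1.0, 1.0], repeat=n)))
        h, J = np.zeros(n), np.zeros((n, n))
        m_data = X.mean(axis=0)
        C_data = X.T @ X / n_samp
        for _ in range(n_iter):
            E = states @ h + 0.5 * np.einsum('si,ij,sj->s', states, J, states)
            p = np.exp(E - E.max()); p /= p.sum()     # exact Boltzmann weights
            m_model = p @ states
            C_model = states.T @ (states * p[:, None])
            h += lr * (m_data - m_model)              # match means
            J += lr * (C_data - C_model)              # match pairwise correlations
            np.fill_diagonal(J, 0.0)
        return h, J

    rng = np.random.default_rng(7)
    common = rng.choice([-1.0, 1.0], size=(300, 1))             # shared global cycle
    X = np.where(rng.random((300, 7)) < 0.8, common, -common)   # 7 noisy 'economies'
    h, J = fit_pairwise_maxent(X)
    print(np.round(J[0, 1:4], 2))   # positive couplings indicate synchronization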
Quantum gravity momentum representation and maximum energy
Moffat, J. W.
2016-11-01
We use the idea of the symmetry between the spacetime coordinates x^μ and the energy-momentum p^μ in quantum theory to construct a momentum space quantum gravity geometry with a metric s_{μν} and a curvature tensor P^λ_{μνρ}. For a closed maximally symmetric momentum space with a constant 3-curvature, the volume of the p-space admits a cutoff with an invariant maximum momentum a. A Wheeler-DeWitt-type wave equation is obtained in the momentum space representation. The vacuum energy density and the self-energy of a charged particle are shown to be finite, and modifications of the electromagnetic radiation density and the entropy density of a system of particles occur for high frequencies.
Video segmentation using Maximum Entropy Model
QIN Li-juan; ZHUANG Yue-ting; PAN Yun-he; WU Fei
2005-01-01
Detecting objects of interest in a video sequence is a fundamental and critical task in automated visual surveillance. Most current approaches focus only on discriminating moving objects by background subtraction, regardless of whether the objects of interest are moving or stationary. In this paper, we propose layer segmentation to detect both moving and stationary target objects from surveillance video. We extend the Maximum Entropy (ME) statistical model to segment layers with features, which are collected by constructing a codebook with a set of codewords for each pixel. We also indicate how the trained models are used for the discrimination of target objects in surveillance video. Our experimental results are presented in terms of the success rate and the segmentation precision.
Helou, E. S.; Zibetti, M. V. W.; Miqueles, E. X.
2017-04-01
We propose the superiorization of incremental algorithms for tomographic image reconstruction. The resulting methods follow a better path on their way to finding the optimal solution for the maximum likelihood problem, in the sense that they are closer to the Pareto optimal curve than the non-superiorized techniques. A new scaled gradient iteration is proposed and three superiorization schemes are evaluated. Theoretical analysis of the methods as well as computational experiments with both synthetic and real data are provided.
Evaluation of pliers' grip spans in the maximum gripping task and sub-maximum cutting task.
Kim, Dae-Min; Kong, Yong-Ku
2016-12-01
A total of 25 males participated in an investigation of the effects of plier grip spans on the total grip force, individual finger forces and muscle activities in a maximum gripping task and wire-cutting tasks. In the maximum gripping task, results showed that the 50-mm grip span had significantly higher total grip strength than the other grip spans. In the cutting task, the 50-mm grip span also showed significantly higher grip strength than the 65-mm and 80-mm grip spans, whereas the muscle activities showed a higher value at the 80-mm grip span. The ratios of cutting force to maximum grip strength were also investigated. Ratios of 30.3%, 31.3% and 41.3% were obtained for grip spans of 50 mm, 65 mm and 80 mm, respectively. Thus, the 50-mm grip span for pliers might be recommended to provide maximum exertion in gripping tasks, as well as lower maximum-cutting force ratios in cutting tasks.
Cosmic shear measurement with maximum likelihood and maximum a posteriori inference
Hall, Alex
2016-01-01
We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with very promising results. We find that the introduction of an intrinsic shape prior mitigates noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely sub-dominant. We show how biases propagate to shear estima...
Biomatrices for bladder reconstruction.
Lin, Hsueh-Kung; Madihally, Sundar V; Palmer, Blake; Frimberger, Dominic; Fung, Kar-Ming; Kropp, Bradley P
2015-03-01
There is a demand for tissue engineering of the bladder from patients who experience a neurogenic bladder or idiopathic detrusor overactivity. To avoid complications from augmentation cystoplasty, the field of tissue engineering seeks optimal scaffolds for bladder reconstruction. Naturally derived biomaterials as well as synthetic and natural polymers have been explored as bladder substitutes. To improve regenerative properties, these biomaterials have been conjugated with functional molecules, combined with nanotechnology, or seeded with exogenous cells. Although most studies reported complete and functional bladder regeneration in small-animal models, results from large-animal models and human clinical trials varied. For functional bladder regeneration, procedures for biomaterial fabrication, incorporation of biologically active agents, introduction of nanotechnology, and application of stem-cell technology need to be standardized. Advanced molecular and medical technologies such as next generation sequencing and magnetic resonance imaging can be introduced for mechanistic understanding and non-invasive monitoring of regeneration processes, respectively.
Reconstructability analysis of epistasis.
Zwick, Martin
2011-01-01
The literature on epistasis describes various methods to detect epistatic interactions and to classify different types of epistasis. Reconstructability analysis (RA) has recently been used to detect epistasis in genomic data. This paper shows that RA offers a classification of types of epistasis at three levels of resolution (variable-based models without loops, variable-based models with loops, state-based models). These types can be defined by the simplest RA structures that model the data without information loss; a more detailed classification can be defined by the information content of multiple candidate structures. The RA classification can be augmented with structures from related graphical modeling approaches. RA can analyze epistatic interactions involving an arbitrary number of genes or SNPs and constitutes a flexible and effective methodology for genomic analysis.
Chatziioannou, A.; Qi, J.; Moore, A.; Annala, A.; Nguyen, K.; Leahy, R.M.; Cherry, S.R.
2000-01-01
We have evaluated the performance of two three-dimensional reconstruction algorithms with data acquired from microPET, a high-resolution tomograph dedicated to small animal imaging. The first was a linear filtered-backprojection algorithm (FBP) with reprojection of the missing data, and the second was a statistical maximum a posteriori probability algorithm (MAP). The two algorithms were evaluated in terms of their resolution performance, both in phantoms and in vivo. Sixty independent realizations of a phantom simulating the brain of a baby monkey were acquired, each containing 3 million counts. Each of these realizations was reconstructed independently with both algorithms. The ensemble of the sixty reconstructed realizations was used to estimate the standard deviation as a measure of the noise for each reconstruction algorithm. More detail was recovered in the MAP reconstruction without an increase in noise relative to FBP. Studies in a simple cylindrical compartment phantom demonstrated improved recovery of known activity ratios with MAP. Finally, in vivo studies also demonstrated a clear improvement in spatial resolution using the MAP algorithm. The quantitative accuracy of the MAP reconstruction was also evaluated by comparison with autoradiography and direct well counting of tissue samples, and was shown to be superior.
CRUCIATE LIGAMENT RECONSTRUCTION
A. V. Korolev
2016-01-01
Full Text Available Purpose: To evaluate long-term results of meniscal repair during arthroscopic ACL reconstruction. Materials and methods: 45 patients who underwent meniscal repair during arthroscopic ACL reconstruction between 2007 and 2013 by the same surgeon were included in the study. In total, fifty menisci were repaired (26 medial and 24 lateral). Procedures used one to four Fast-Fix implants (Smith & Nephew). In five cases both the medial and lateral menisci were repaired. The Cincinnati, IKDC and Lysholm scales were used for long-term outcome analysis. Results: 19 male and 26 female patients were included in the study, aged from 15 to 59 years (mean age 33.2±1.5). Median time from injury to surgical procedure was zero months (range zero to one). Mean time from surgery to scale analysis was 55.9±3 months (range 20-102). Median Cincinnati score was 97 (range 90-100), with excellent results in 93% of cases (43 patients) and good results in 7% (3 patients). Median IKDC score was 90.8 (range 86.2-95.4), with excellent outcomes in 51% of cases (23 patients), good in 33% (15 patients) and satisfactory in 16% (7 patients). Median Lysholm score was 95 (range 90-100), with excellent outcomes in 76% of cases (34 patients) and good in 24% (11 patients). The authors identified no statistical differences when comparing survey results by age, sex or time from trauma to surgery. Conclusions: The results of the present study match data from the orthopedic literature showing that meniscal repair is a safe and efficient procedure with good and excellent outcomes. All-inside meniscal repair can be used irrespective of patient age and is efficient even in the case of delayed procedures.
Ptychographic reconstruction of attosecond pulses
Lucchini, M; Ludwig, A; Gallmann, L; Keller, U; Feurer, T
2015-01-01
We demonstrate a new attosecond pulse reconstruction modality which uses an algorithm that is derived from ptychography. In contrast to other methods, energy and delay sampling are not correlated, and as a result, the number of electron spectra to record is considerably smaller. Together with the robust algorithm, this leads to a more precise and fast convergence of the reconstruction.
Wang, J.; Parolari, A.; Huang, S. Y.
2014-12-01
The objective of this study is to formulate and test plant water stress parameterizations for the recently proposed maximum entropy production (MEP) model of evapotranspiration (ET) over vegetated surfaces. The MEP model of ET is a parsimonious alternative to existing land surface parameterizations, predicting surface energy fluxes from net radiation, temperature, humidity, and a small number of parameters. The MEP model was previously tested for vegetated surfaces under well-watered and dry, dormant conditions, when the surface energy balance is relatively insensitive to plant physiological activity. Under water-stressed conditions, however, the plant water stress response strongly affects the surface energy balance. This effect occurs through plant physiological adjustments that reduce ET to maintain leaf turgor pressure as soil moisture is depleted during drought. To improve the MEP model's ET predictions under water stress, the model was modified to incorporate this plant-mediated feedback between soil moisture and ET. We compare MEP model predictions to observations under a range of field conditions, including bare soil, grassland, and forest. The results indicate that a water stress function combining the soil water potential in the surface soil layer with the atmospheric humidity successfully reproduces observed ET decreases during drought. In addition to its utility as a modeling tool, the calibrated water stress functions also provide a means to infer ecosystem influence on the land surface state. Challenges associated with sampling the model input data (i.e., net radiation, surface temperature, and surface humidity) are also discussed.
Are temperature reconstructions regionally biased?
Bothe, O
2012-01-01
Are temperature reconstructions possibly biased due to regionally differing density of utilized proxy-networks? This question is assessed utilizing a simple process-based forward model of tree growth in the virtual reality of two simulations of the climate of the last millennium with different amplitude of solar forcing variations. The pseudo-tree ring series cluster in high latitudes of the northern hemisphere and east Asia. Only weak biases are found for the full network. However, for a strong solar forcing amplitude the high latitudes indicate a warmer first half of the last millennium while mid-latitudes and Asia were slightly colder than the extratropical hemispheric average. Reconstruction skill is weak or non-existent for two simple reconstruction schemes, and comparison of virtual reality target and reconstructions reveals strong deficiencies. The temporal resolution of the proxies has an influence on the reconstruction task and results are sensitive to the construction of the proxy-network. Existing ...
Nuclear Enhanced X-ray Maximum Entropy Method Used to Analyze Local Distortions in Simple Structures
Christensen, Sebastian; Bindzus, Niels; Christensen, Mogens
… the ideal, undistorted rock-salt structure. NEXMEM employs a simple procedure to normalize extracted structure factors to the atomic form factors. The NDD is reconstructed by performing maximum entropy calculations on the normalized structure factors. NEXMEM has been validated by testing against simulated … In addition, we have applied NEXMEM to multi-temperature synchrotron powder X-ray diffraction collected on PbX. Based on powder diffraction data, our study demonstrates that NEXMEM successfully improves the atomic resolution over standard MEM. This new tool aids our understanding of the local distortions…
Blob-enhanced reconstruction technique
Castrillo, Giusy; Cafiero, Gioacchino; Discetti, Stefano; Astarita, Tommaso
2016-09-01
A method to enhance the quality of the tomographic reconstruction and, consequently, the 3D velocity measurement accuracy is presented. The technique is based on integrating information on the objects to be reconstructed within the algebraic reconstruction process. A first-guess intensity distribution is produced with a standard algebraic method; the distribution is then rebuilt as a sum of Gaussian blobs, based on the location, intensity and size of agglomerates of light intensity surrounding local maxima. The blob substitution regularizes the particle shape, allowing a reduction of the particle discretization errors and of their elongation in the depth direction. The performance of the blob-enhanced reconstruction technique (BERT) is assessed with a 3D synthetic experiment. The results have been compared with those obtained by applying the standard camera simultaneous multiplicative reconstruction technique (CSMART) to the same volume. Several blob-enhanced reconstruction processes, both substituting the blobs at the end of the CSMART algorithm and during the iterations (i.e. using the blob-enhanced reconstruction as a predictor for the following iterations), have been tested. The results confirm the enhancement in velocity measurement accuracy, demonstrating a reduction of the bias error due to ghost particles. The improvement is more remarkable at the largest tested seeding densities. Additionally, using the blob distributions as a predictor enables further improvement of the convergence of the reconstruction algorithm, the improvement being more considerable when substituting the blobs more than once during the process. The BERT process is also applied to multi-resolution (MR) CSMART reconstructions, permitting simultaneous achievement of remarkable improvements in the flow field measurements and of the reduction in computational time due to the MR approach. Finally, BERT is also tested on experimental data, obtaining an increase of the…
Porcelain three-dimensional shape reconstruction and its color reconstruction
Yu, Xiaoyang; Wu, Haibin; Yang, Xue; Yu, Shuang; Wang, Beiyi; Chen, Deyun
2013-01-01
In this paper, structured-light three-dimensional measurement technology is used to reconstruct the shape of porcelain, and the porcelain's color is reconstructed as well, realizing an accurate reconstruction of both shape and color. A drawing of our shape measurement installation is given. Because the porcelain surface has complex coloring and is highly reflective, binary Gray code encoding is used to reduce the influence of the surface on pattern decoding. A color camera is employed to obtain the color of the porcelain surface. The comprehensive reconstruction of shape and color is then realized in the Java3D runtime environment. In the reconstruction process, a space point-by-point coloration method is proposed and implemented; this coloration method ensures pixel-level correspondence accuracy between the shape and color data. Experimental results on porcelain surface shape and color reconstruction, obtained with the proposed method and our installation, show that the depth range is 860–980 mm, the relative error of the shape measurement is less than 0.1%, and the reconstructed color of the porcelain surface is realistic, refined and subtle, with the same visual effect as the measured surface.
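The role of the binary Gray code in such structured-light systems is easy to make concrete: adjacent code words differ in exactly one bit, so a decoding error at a stripe boundary displaces the measurement by at most one projector column. A minimal, generic Python sketch of the encoding and decoding (not the authors' pipeline):

    def binary_to_gray(n: int) -> int:
        # Adjacent integers map to Gray codes differing in a single bit.
        return n ^ (n >> 1)

    def gray_to_binary(g: int) -> int:
        # Invert the Gray code by cumulative XOR of the shifted bits.
        n = 0
        while g:
            n ^= g
            g >>= 1
        return n

    # Each projector column is projected as a sequence of bit-plane patterns of
    # its Gray code; decoding a pixel's observed bit sequence recovers its column.
    assert all(gray_to_binary(binary_to_gray(i)) == i for i in range(1024))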
Spatial methods for event reconstruction in CLEAN
Coakley, Kevin J.; McKinsey, Daniel N.
2004-01-01
In CLEAN (Cryogenic Low Energy Astrophysics with Noble gases), a proposed neutrino and dark matter detector, background discrimination is possible if one can determine the location of an ionizing radiation event with high accuracy. We simulate ionizing radiation events that produce multiple scintillation photons within a spherical detection volume filled with liquid neon. We estimate the radial location of a particular ionizing radiation event based on the observed count data corresponding to that event. The count data are collected by detectors mounted at the spherical boundary of the detection volume. We neglect absorption, but account for Rayleigh scattering. To account for wavelength-shifting of the scintillation light, we assume that photons are absorbed and re-emitted at the detectors. Here, we develop spatial Maximum Likelihood methods for event reconstruction, and study their performance in computer simulation experiments. We also study a method based on the centroid of the observed count data. We cal...
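The two estimators being compared can be sketched in a few lines. In the Python fragment below, the centroid method reduces the count-weighted mean of the detector positions to a radius, while the maximum likelihood method scans candidate radii under a Poisson model; expected_counts stands in for the paper's photon-transport simulation (including Rayleigh scattering and wavelength shifting) and is an assumed, user-supplied forward model:

    import numpy as np

    def centroid_radius(detector_xyz, counts):
        # Radial estimate: magnitude of the count-weighted centroid of the
        # detector positions on the spherical boundary.
        c = (detector_xyz * counts[:, None]).sum(axis=0) / counts.sum()
        return np.linalg.norm(c)

    def ml_radius(counts, expected_counts, radii):
        # Grid-search Poisson maximum likelihood over candidate event radii.
        best_r, best_ll = radii[0], -np.inf
        for r in radii:
            mu = np.maximum(expected_counts(r), 1e-12)  # expected counts per detector
            ll = np.sum(counts * np.log(mu) - mu)       # Poisson log L, constants dropped
            if ll > best_ll:
                best_r, best_ll = r, ll
        return best_r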
A new iterative algorithm to reconstruct the refractive index.
Liu, Y J; Zhu, P P; Chen, B; Wang, J Y; Yuan, Q X; Huang, W X; Shu, H; Li, E R; Liu, X S; Zhang, K; Ming, H; Wu, Z Y
2007-06-21
The latest developments in x-ray imaging are associated with techniques based on phase contrast. However, the image reconstruction procedures demand significant improvements of the traditional methods, and/or new algorithms have to be introduced to take advantage of the high contrast and sensitivity of the new experimental techniques. In this letter, an improved iterative reconstruction algorithm based on the maximum likelihood expectation maximization technique is presented and discussed in order to reconstruct the distribution of the refractive index from data collected by an analyzer-based imaging setup. The technique considered probes the partial derivative of the refractive index with respect to an axis lying in the meridional plane and perpendicular to the propagation direction. Computer simulations confirm the reliability of the proposed algorithm. In addition, the comparison between an analytical reconstruction algorithm and the iterative method is also discussed, together with the convergence characteristics of the latter. Finally, we show how the proposed algorithm may be applied to reconstruct the distribution of the refractive index of an epoxy cylinder containing small air bubbles about 300 μm in diameter.
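For readers unfamiliar with the underlying iteration, the generic MLEM update for a linear forward model y ≈ Ax with Poisson-distributed counts is short enough to state in full. The Python sketch below shows only this core multiplicative update; the letter's algorithm additionally accounts for the analyzer-based geometry and the derivative nature of the measured signal:

    import numpy as np

    def mlem(A, y, n_iter=50, eps=1e-12):
        # Maximum likelihood expectation maximization for y ~ Poisson(A @ x).
        x = np.ones(A.shape[1])           # non-negative initial guess
        sens = A.T @ np.ones(A.shape[0])  # sensitivity image (column sums of A)
        for _ in range(n_iter):
            ratio = y / np.maximum(A @ x, eps)          # measured / predicted
            x *= (A.T @ ratio) / np.maximum(sens, eps)  # multiplicative update
        return x

The multiplicative form preserves non-negativity automatically, which is one reason MLEM-type schemes are attractive for count-limited data.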
Ray tracing reconstruction investigation for C-arm tomosynthesis
Malalla, Nuhad A. Y.; Chen, Ying
2016-04-01
C-arm tomosynthesis is a three-dimensional imaging technique in which both the x-ray source and the detector are mounted on a wheeled C-arm structure, providing a wide range of movement around the object. In this paper, C-arm tomosynthesis is introduced to provide three-dimensional information over a limited view angle (less than 180°), reducing radiation exposure and examination time. Reconstruction algorithms based on ray tracing, namely ray tracing back projection (BP), the simultaneous algebraic reconstruction technique (SART) and maximum likelihood expectation maximization (MLEM), were developed for C-arm tomosynthesis. Projection images of a simulated spherical object were generated with a virtual geometric configuration spanning a total view angle of 40 degrees. This study demonstrates the sharpness of the in-plane reconstructed structure and the effectiveness of removing out-of-plane blur for each reconstruction algorithm. Results show the ability of ray-tracing-based reconstruction algorithms to provide three-dimensional information with limited-angle C-arm tomosynthesis.
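Of the three algorithms, SART is perhaps the easiest to summarize. A minimal Python sketch of the update follows, assuming the system matrix A (entries given by ray-voxel intersection lengths from the ray tracer) and the projections y are already built; the 40-degree C-arm geometry itself is omitted:

    import numpy as np

    def sart(A, y, n_iter=20, relax=0.5, eps=1e-12):
        # Simultaneous algebraic reconstruction technique for y ≈ A @ x.
        x = np.zeros(A.shape[1])
        row_sums = np.maximum(A.sum(axis=1), eps)  # per-ray normalization
        col_sums = np.maximum(A.sum(axis=0), eps)  # per-voxel normalization
        for _ in range(n_iter):
            residual = (y - A @ x) / row_sums      # normalized projection error
            x += relax * (A.T @ residual) / col_sums
        return x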
What to Expect After Breast Reconstruction Surgery
Women who have surgery as part of ... and after your surgery. Deciding Whether To Have Breast Reconstruction: Many women choose to have reconstruction surgery, but ...
20 CFR 211.14 - Maximum creditable compensation.
2010-04-01
20 CFR, Employees' Benefits, Part 211 (Creditable Railroad Compensation), § 211.14 Maximum creditable compensation: ... Employment Accounts shall notify each employer of the amount of maximum creditable compensation applicable ...
49 CFR 230.24 - Maximum allowable stress.
2010-10-01
49 CFR, Transportation, Part 230 (Allowable Stress), § 230.24 Maximum allowable stress: (a) Maximum allowable stress value. The maximum allowable stress value on any component of a steam locomotive boiler shall not exceed 1/4 of the ultimate ...
Theoretical Estimate of Maximum Possible Nuclear Explosion
Bethe, H. A.
1950-01-31
The maximum nuclear accident which could occur in a Na-cooled, Be-moderated, Pu- and power-producing reactor is estimated theoretically. (T.R.H.) Results of nuclear calculations for a variety of compositions of fast, heterogeneous, sodium-cooled, U-235-fueled, plutonium- and power-producing reactors are reported. Core compositions typical of plate-, pin-, or wire-type fuel elements and with uranium as metal, alloy, and oxide were considered. These compositions included atom ratios in the following ranges: U-238 to U-235 from 2 to 8; sodium to U-235 from 1.5 to 12; iron to U-235 from 5 to 18; and vanadium to U-235 from 11 to 33. Calculations were performed to determine the effect of lead and iron reflectors between the core and blanket. Both natural and depleted uranium were evaluated as the blanket fertile material. Reactors were compared on the basis of conversion ratio, specific power, and the product of both. The calculated results are in general agreement with the experimental results from fast reactor assemblies. An analysis of the effect of new cross-section values as they became available is included. (auth)
Proposed principles of maximum local entropy production.
Ross, John; Corlan, Alexandru D; Müller, Stefan C
2012-07-12
Articles have appeared that rely on the application of some form of "maximum local entropy production principle" (MEPP). This is usually an optimization principle that is supposed to compensate for the lack of structural information and measurements about complex systems, even systems as complex and as little characterized as the whole biosphere or the atmosphere of the Earth, or even of less known bodies in the solar system. We select a number of claims from a few well-known papers that advocate this principle, and we show with the help of simple examples of well-known chemical and physical systems that they are in error. These erroneous interpretations can be attributed to ignoring well-established and verified theoretical results, such as: (1) entropy does not necessarily increase in nonisolated systems, such as "local" subsystems; (2) macroscopic systems, as described by classical physics, are in general intrinsically deterministic; there are no "choices" in their evolution to be selected by using supplementary principles; (3) macroscopic deterministic systems are predictable to the extent to which their state and structure are sufficiently well known; usually they are not sufficiently known, and probabilistic methods need to be employed for their prediction; and (4) there is no causal relationship between the thermodynamic constraints and the kinetics of reaction systems. In conclusion, any predictions based on MEPP-like principles should not be considered scientifically founded.
Maximum entropy production and plant optimization theories.
Dewar, Roderick C
2010-05-12
Plant ecologists have proposed a variety of optimization theories to explain the adaptive behaviour and evolution of plants from the perspective of natural selection ('survival of the fittest'). Optimization theories identify some objective function--such as shoot or canopy photosynthesis, or growth rate--which is maximized with respect to one or more plant functional traits. However, the link between these objective functions and individual plant fitness is seldom quantified and there remains some uncertainty about the most appropriate choice of objective function to use. Here, plants are viewed from an alternative thermodynamic perspective, as members of a wider class of non-equilibrium systems for which maximum entropy production (MEP) has been proposed as a common theoretical principle. I show how MEP unifies different plant optimization theories that have been proposed previously on the basis of ad hoc measures of individual fitness--the different objective functions of these theories emerge as examples of entropy production on different spatio-temporal scales. The proposed statistical explanation of MEP, that states of MEP are by far the most probable ones, suggests a new and extended paradigm for biological evolution--'survival of the likeliest'--which applies from biomacromolecules to ecosystems, not just to individuals.
Maximum likelihood continuity mapping for fraud detection
Hogden, J.
1997-05-01
The author describes a novel time-series analysis technique called maximum likelihood continuity mapping (MALCOM), and focuses on one application of MALCOM: detecting fraud in medical insurance claims. Given a training data set composed of typical sequences, MALCOM creates a stochastic model of sequence generation, called a continuity map (CM). A CM maximizes the probability of the sequences in the training set given the model constraints. CMs can be used to estimate the likelihood of sequences not found in the training set, enabling anomaly detection and sequence prediction--important aspects of data mining. Since MALCOM can be used on sequences of categorical data (e.g., sequences of words) as well as real-valued data, MALCOM is also a potential replacement for database search tools such as N-gram analysis. In a recent experiment, MALCOM was used to evaluate the likelihood of patient medical histories, where "medical history" means the sequence of medical procedures performed on a patient. Physicians whose patients had anomalous medical histories (according to MALCOM) were evaluated for fraud by an independent agency. Of the small sample (12 physicians) that has been evaluated, 92% have been determined fraudulent or abusive. Despite the small sample, these results are encouraging.
Maximum life spiral bevel reduction design
Savage, M.; Prasanna, M. G.; Coe, H. H.
1992-07-01
Optimization is applied to the design of a spiral bevel gear reduction for maximum life at a given size. A modified feasible directions search algorithm permits a wide variety of inequality constraints and exact design requirements to be met with low sensitivity to initial values. Gear tooth bending strength and minimum contact ratio under load are included in the active constraints. The optimal design of the spiral bevel gear reduction includes the selection of bearing and shaft proportions in addition to gear mesh parameters. System life is maximized subject to a fixed back-cone distance of the spiral bevel gear set for a specified speed ratio, shaft angle, input torque, and power. Significant parameters in the design are: the spiral angle, the pressure angle, the numbers of teeth on the pinion and gear, and the location and size of the four support bearings. Interpolated polynomials expand the discrete bearing properties and proportions into continuous variables for gradient optimization. After finding the continuous optimum, a designer can analyze near optimal designs for comparison and selection. Design examples show the influence of the bearing lives on the gear parameters in the optimal configurations. For a fixed back-cone distance, optimal designs with larger shaft angles have larger service lives.
CORA - emission line fitting with Maximum Likelihood
Ness, J.-U.; Wichmann, R.
2002-07-01
The advent of pipeline-processed data from both space- and ground-based observatories often dispenses with the need for full-fledged data reduction software with its associated steep learning curve. In many cases, a simple tool doing just one task, and doing it right, is all one wishes. In this spirit we introduce CORA, a line fitting tool based on the maximum likelihood technique, which has been developed for the analysis of emission line spectra with low count numbers and has successfully been used in several publications. CORA uses a rigorous application of Poisson statistics. From the assumption of Poissonian noise we derive the probability for a model of the emission line spectrum to represent the measured spectrum. The likelihood function is used as a criterion for optimizing the parameters of the theoretical spectrum, and a fixed-point equation is derived that provides an efficient way to obtain line fluxes. As an example we demonstrate the functionality of the program on an X-ray spectrum of Capella obtained with the Low Energy Transmission Grating Spectrometer (LETGS) on board the Chandra observatory, analyzing the Ne IX triplet around 13.5 Å.
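The essence of the approach, fitting a line model by maximizing the Poisson likelihood instead of minimizing chi-squared, fits in a short script. The Python sketch below fits a Gaussian line on a flat background to synthetic low-count data; the model, parameter names and the Nelder-Mead optimizer are illustrative assumptions, whereas CORA itself derives a fixed-point equation for the line flux:

    import numpy as np
    from scipy.optimize import minimize

    def neg_log_likelihood(params, wav, counts):
        amp, center, width, bg = params
        model = bg + amp * np.exp(-0.5 * ((wav - center) / width) ** 2)
        model = np.maximum(model, 1e-12)                # keep predicted rates positive
        return -np.sum(counts * np.log(model) - model)  # Poisson log L, constants dropped

    wav = np.linspace(13.3, 13.7, 80)  # wavelength grid in Angstrom
    counts = np.random.poisson(0.5 + 4.0 * np.exp(-0.5 * ((wav - 13.45) / 0.02) ** 2))
    fit = minimize(neg_log_likelihood, x0=[3.0, 13.45, 0.02, 0.5],
                   args=(wav, counts), method="Nelder-Mead")
    print(fit.x)  # fitted amplitude, line center, width, background per bin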
Finding maximum JPEG image block code size
Lakhani, Gopal
2012-07-01
We present a study of JPEG baseline coding. It aims to determine the minimum storage needed to buffer the JPEG Huffman code bits of 8-bit image blocks. Since DC is coded separately, and the encoder represents each AC coefficient by a run-length/level pair, the net problem is to perform an efficient search for the optimal run-level pair sequence. We formulate it as a two-dimensional, nonlinear, integer programming problem and solve it using a branch-and-bound based search method. We derive two types of constraints to prune the search space. The first is an upper bound on the sum of squares of the AC coefficients of a block, used to discard sequences that cannot represent valid DCT blocks. The second type of constraints is based on some interesting properties of the Huffman code table, and these are used to prune sequences that cannot be part of optimal solutions. Our main result is that if the default JPEG compression setting is used, buffer space of a minimum of 346 bits and a maximum of 433 bits is sufficient to hold the AC code bits of 8-bit image blocks. Our implementation also pruned the search space extremely well: the first constraint alone reduced the initial search space by orders of magnitude, and the second set of constraints reduced it further by 97.8%.
Boedeker, Peter
2017-01-01
Hierarchical linear modeling (HLM) is a useful tool when analyzing data collected from groups. There are many decisions to be made when constructing and estimating a model in HLM including which estimation technique to use. Three of the estimation techniques available when analyzing data with HLM are maximum likelihood, restricted maximum…
Paleodust variability since the Last Glacial Maximum and implications for iron inputs to the ocean
Albani, S.; Mahowald, N. M.; Murphy, L. N.; Raiswell, R.; Moore, J. K.; Anderson, R. F.; McGee, D.; Bradtmiller, L. I.; Delmonte, B.; Hesse, P. P.; Mayewski, P. A.
2016-04-01
Changing climate conditions affect dust emissions and the global dust cycle, which in turn affects climate and biogeochemistry. In this study we use observationally constrained model reconstructions of the global dust cycle since the Last Glacial Maximum, combined with different simplified assumptions of atmospheric and sea ice processing of dust-borne iron, to provide estimates of soluble iron deposition to the oceans. For different climate conditions, we discuss uncertainties in model-based estimates of atmospheric processing and dust deposition to key oceanic regions, highlighting the large degree of uncertainty of this important variable for ocean biogeochemistry and the global carbon cycle. We also show the role of sea ice acting as a time buffer and processing agent, which results in a delayed and pulse-like soluble iron release into the ocean during the melting season, with monthly peaks up to ~17 Gg/month released into the Southern Oceans during the Last Glacial Maximum (LGM).
Daniel L. Rabosky
2006-01-01
Full Text Available Rates of species origination and extinction can vary over time during evolutionary radiations, and it is possible to reconstruct the history of diversification using molecular phylogenies of extant taxa only. Maximum likelihood methods provide a useful framework for inferring temporal variation in diversification rates. LASER is a package for the R programming environment that implements maximum likelihood methods based on the birth-death process to test whether diversification rates have changed over time. LASER contrasts the likelihood of phylogenetic data under models in which diversification rates have changed over time with the likelihood under alternative models in which rates have remained constant. Major strengths of the package include the ability to detect temporal increases in diversification rates and the inference of diversification parameters under multiple rate-variable models of diversification. The program and associated documentation are freely available from the R package archive at http://cran.r-project.org.
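The flavor of the likelihood ratio tests LASER performs can be conveyed with a stripped-down pure-birth example. In a Yule process, the waiting time to the next speciation while k lineages exist is exponential with rate k*lambda, so the maximum likelihood rate given waiting times and lineage counts has a closed form. The sketch below (in Python rather than R, and with a fixed shift point rather than one estimated from the data) compares a constant-rate model against a two-rate model; LASER itself fits richer birth-death and rate-variable models:

    import numpy as np
    from scipy.stats import chi2

    def yule_loglik(waits, ks):
        # Max log-likelihood of a constant-rate pure-birth model;
        # the MLE is lam = (number of events) / sum(k_i * t_i).
        lam = len(waits) / np.sum(ks * waits)
        return np.sum(np.log(ks * lam) - ks * lam * waits), lam

    def rate_shift_test(waits, ks, shift):
        # Likelihood ratio of a two-rate model (rate change after event `shift`)
        # against a single constant rate; one extra free parameter.
        ll0, _ = yule_loglik(waits, ks)
        ll1a, _ = yule_loglik(waits[:shift], ks[:shift])
        ll1b, _ = yule_loglik(waits[shift:], ks[shift:])
        lr = 2.0 * (ll1a + ll1b - ll0)
        return lr, chi2.sf(lr, df=1)  # approximate p-value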