Brendle, Joerg
2016-01-01
We show that, consistently, there can be maximal subtrees of P(omega) and P(omega)/fin of arbitrary regular uncountable size below the size of the continuum. We also show that there are no maximal subtrees of P(omega)/fin with countable levels. Our results answer several questions of Campero, Cancino, Hrusak, and Miranda.
Casewell, Nicholas R; Wagstaff, Simon C; Harrison, Robert A; Wüster, Wolfgang
2011-03-01
The proliferation of gene data from multiple loci of large multigene families has been greatly facilitated by considerable recent advances in sequence generation. The evolution of such gene families, which often undergo complex histories and different rates of change, combined with increases in sequence data, poses complex problems for traditional phylogenetic analyses, in particular those that aim to successfully recover species relationships from gene trees. Here, we implement gene tree parsimony analyses on multicopy gene family data sets of snake venom proteins for two separate groups of taxa, incorporating Bayesian posterior distributions as a rigorous strategy to account for the uncertainty present in gene trees. Gene tree parsimony largely failed to infer species trees congruent with each other or with species phylogenies derived from mitochondrial and single-copy nuclear sequences. Analysis of four toxin gene families from a large expressed sequence tag data set from the viper genus Echis failed to produce a consistent topology, and reanalysis of a previously published gene tree parsimony data set, from the family Elapidae, suggested that species tree topologies were predominantly unsupported. We suggest that gene tree parsimony failure in the family Elapidae is likely the result of unequal and/or incomplete sampling of paralogous genes and demonstrate that multiple parallel gene losses are likely responsible for the significant species tree conflict observed in the genus Echis. These results highlight the potential for gene tree parsimony analyses to be undermined by rapidly evolving multilocus gene families under strong natural selection.
Goloboff, Pablo A
2014-10-01
Three different types of data sets, for which the uniquely most parsimonious tree can be known exactly but is hard to find with heuristic tree search methods, are studied. Tree searches are complicated more by the shape of the tree landscape (i.e. the distribution of homoplasy on different trees) than by the sheer abundance of homoplasy or character conflict. Data sets of Type 1 are those constructed by Radel et al. (2013). Data sets of Type 2 present a very rugged landscape, with narrow peaks and valleys, but relatively low amounts of homoplasy. For such a tree landscape, subjecting the trees to TBR and saving suboptimal trees produces much better results when the sequence of clipping for the tree branches is randomized instead of fixed. An unexpected finding for data sets of Types 1 and 2 is that starting a search from a random tree instead of a random addition sequence Wagner tree may increase the probability that the search finds the most parsimonious tree; a small artificial example where these probabilities can be calculated exactly is presented. Data sets of Type 3, the most difficult data sets studied here, comprise only congruent characters, and a single island with only one most parsimonious tree. Even if there is a single island, missing entries create a very flat landscape which is difficult to traverse with tree search algorithms because the number of equally parsimonious trees that need to be saved and swapped to effectively move around the plateaus is too large. Minor modifications of the parameters of tree drifting, ratchet, and sectorial searches allow travelling around these plateaus much more efficiently than saving and swapping large numbers of equally parsimonious trees with TBR. For these data sets, two new related criteria for selecting taxon addition sequences in Wagner trees (the "selected" and "informative" addition sequences) produce much better results than the standard random or closest addition sequences. These new methods for Wagner
Holden, Clare Janaki
2002-04-22
Linguistic divergence occurs after speech communities divide, in a process similar to speciation among isolated biological populations. The resulting languages are hierarchically related, like genes or species. Phylogenetic methods developed in evolutionary biology can thus be used to infer language trees, with the caveat that 'borrowing' of linguistic elements between languages also occurs, to some degree. Maximum-parsimony trees for 75 Bantu and Bantoid African languages were constructed using 92 items of basic vocabulary. The level of character fit on the trees was high (consistency index was 0.65), indicating that a tree model fits Bantu language evolution well, at least for the basic vocabulary. The Bantu language tree reflects the spread of farming across this part of sub-Saharan Africa between ca. 3000 BC and AD 500. Modern Bantu subgroups, defined by clades on parsimony trees, mirror the earliest farming traditions both geographically and temporally. This suggests that the major subgroups of modern Bantu stem from the Neolithic and Early Iron Age, with little subsequent movement by speech communities.
Evolution of Shanghai STOCK Market Based on Maximal Spanning Trees
Yang, Chunxia; Shen, Ying; Xia, Bingying
2013-01-01
In this paper, using a moving window to scan through every stock price time series over the period from 2 January 2001 to 11 March 2011, and mutual information to measure the statistical interdependence between stock prices, we construct a weighted network for 501 Shanghai stocks in every window. Next, we extract its maximal spanning tree and track the structural variation of the Shanghai stock market by analyzing the average path length, the influence of the center node and the p-value of every maximal spanning tree. A further analysis of the structural properties of maximal spanning trees over different periods of the Shanghai stock market is carried out. All the results indicate that the periods around 8 August 2005, 17 October 2007 and 25 December 2008 are turning points of the Shanghai stock market, at which the topology of the maximal spanning tree changes markedly: the degree of separation between nodes increases, the structure becomes looser, and the influence of the center node shrinks, while the degree distribution of the maximal spanning tree is no longer a power law. Lastly, we analyze the variation of the single-step and multi-step survival ratios for all maximal spanning trees and find that pairs of stocks are closely bonded and hard to separate in the short term, whereas no pair of stocks remains closely bonded for a long time.
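The maximal spanning tree extraction described above can be sketched as a greedy Kruskal-style pass over the edges sorted by descending weight; the mutual-information values below are invented toy numbers, not data from the paper.

```python
# Kruskal-style extraction of a maximal spanning tree from pairwise
# similarity weights (e.g. mutual information between stock returns).

def maximal_spanning_tree(weights):
    """weights: dict mapping (i, j) node pairs to similarity scores.
    Returns the edge set of a maximal spanning tree."""
    parent = {}

    def find(x):  # union-find with path halving
        while parent.setdefault(x, x) != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tree = set()
    # visit edges from most to least similar, keeping acyclic ones
    for (i, j), w in sorted(weights.items(), key=lambda e: -e[1]):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            tree.add((i, j))
    return tree

# toy mutual-information values for four hypothetical stocks
toy_mi = {("A", "B"): 0.9, ("B", "C"): 0.8, ("A", "C"): 0.5, ("C", "D"): 0.7}
print(sorted(maximal_spanning_tree(toy_mi)))  # → [('A', 'B'), ('B', 'C'), ('C', 'D')]
```

The only difference from an ordinary minimum spanning tree is the sort direction, which is why MST libraries are often reused for this task by negating the weights.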
Mining Maximal Frequent Patterns in a Unidirectional FP-tree
SONG Jing-jing; LIU Rui-xin; WANG Yan; JIANG Bao-qing
2006-01-01
Because mining the complete set of frequent patterns from a dense database can be impractical, an interesting alternative has been proposed recently: instead of mining the complete set of frequent patterns, the new model finds only the maximal frequent patterns, from which all frequent patterns can be generated. FP-growth is one of the most efficient frequent-pattern mining methods published so far. However, because the FP-tree and the conditional FP-trees must be two-way traversable, a great deal of memory is needed during mining. This paper proposes an efficient algorithm, Unid_FP-Max, for mining maximal frequent patterns based on a unidirectional FP-tree. Owing to the way the unidirectional FP-tree and the conditional unidirectional FP-trees are generated, the algorithm reduces space consumption as far as possible. With two pruning techniques, single-path pruning and header-table pruning, which eliminate many of the conditional unidirectional FP-trees generated recursively during mining, Unid_FP-Max further lowers time and space costs.
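As a point of reference for what "maximal frequent patterns" means, here is a deliberately naive brute-force miner; it computes the same output Unid_FP-Max targets, but with none of the FP-tree machinery that makes the real algorithm economical. The transaction database is a made-up toy.

```python
from itertools import combinations

def maximal_frequent(transactions, minsup):
    """Enumerate all frequent itemsets, then keep only those not
    contained in a larger frequent itemset (the maximal ones)."""
    items = sorted({i for t in transactions for i in t})
    frequent = []
    for k in range(1, len(items) + 1):
        for cand in combinations(items, k):
            s = set(cand)
            # support = number of transactions containing the candidate
            if sum(s <= t for t in transactions) >= minsup:
                frequent.append(s)
    # a frequent itemset is maximal if no frequent superset exists
    return [s for s in frequent if not any(s < t for t in frequent)]

db = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}]
print(maximal_frequent(db, minsup=2))
```

Every frequent itemset is a subset of some maximal one, which is why the maximal patterns alone suffice to regenerate the full frequent set.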
Computing a Clique Tree with the Algorithm Maximal Label Search
Anne Berry
2017-01-01
The algorithm MLS (Maximal Label Search) is a graph search algorithm that generalizes the algorithms Maximum Cardinality Search (MCS), Lexicographic Breadth-First Search (LexBFS), Lexicographic Depth-First Search (LexDFS) and Maximal Neighborhood Search (MNS). On a chordal graph, MLS computes a PEO (perfect elimination ordering) of the graph. We show how the algorithm MLS can be modified to compute a PMO (perfect moplex ordering), as well as a clique tree and the minimal separators of a chordal graph. We give a necessary and sufficient condition on the labeling structure of MLS for the beginning of a new clique in the clique tree to be detected by a condition on labels. MLS is also used to compute a clique tree of the complement graph, and new cliques in the complement graph can be detected by a condition on labels for any labeling structure. We provide a linear-time algorithm computing a PMO and the corresponding generators of the maximal cliques and minimal separators of the complement graph. On a non-chordal graph, the algorithm MLSM, a graph search algorithm computing an MEO and a minimal triangulation of the graph, is used to compute an atom tree of the clique minimal separator decomposition of any graph.
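MLS generalizes Maximum Cardinality Search, and that special case is easy to sketch. Below is a minimal MCS that, on a chordal graph, returns a perfect elimination ordering (the reversed visit order), together with a PEO checker; the example graph is an invented chordal graph, and ties are broken by vertex label so the run is deterministic.

```python
def mcs_peo(adj):
    """Maximum Cardinality Search. adj: dict vertex -> set of neighbors.
    On a chordal graph, the reversed visit order is a PEO."""
    weight = {v: 0 for v in adj}
    visit, unvisited = [], set(adj)
    while unvisited:
        # pick an unvisited vertex with the most already-visited neighbors
        v = max(sorted(unvisited), key=lambda u: weight[u])
        visit.append(v)
        unvisited.remove(v)
        for w in adj[v]:
            if w in unvisited:
                weight[w] += 1
    return visit[::-1]

def is_peo(order, adj):
    """Check that each vertex's later neighbors form a clique."""
    pos = {v: i for i, v in enumerate(order)}
    for v in order:
        later = [w for w in adj[v] if pos[w] > pos[v]]
        if any(b not in adj[a] for a in later for b in later if a != b):
            return False
    return True

# two triangles sharing the edge 2-3 (a chordal graph)
adj = {1: {2, 3}, 2: {1, 3, 4}, 3: {1, 2, 4}, 4: {2, 3}}
peo = mcs_peo(adj)
print(peo, is_peo(peo, adj))
```

The paper's contribution sits on top of such searches: watching the label (here, the weight) sequence to detect where one maximal clique of the clique tree ends and the next begins.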
Salas-Leiva, Dayana E; Meerow, Alan W; Calonje, Michael; Griffith, M Patrick; Francisco-Ortega, Javier; Nakamura, Kyoko; Stevenson, Dennis W; Lewis, Carl E; Namoff, Sandra
2013-11-01
Despite a recent new classification, a stable phylogeny for the cycads has been elusive, particularly regarding resolution of Bowenia, Stangeria and Dioon. In this study, five single-copy nuclear genes (SCNGs) are applied to the phylogeny of the order Cycadales. The specific aim is to evaluate several gene tree-species tree reconciliation approaches for developing an accurate phylogeny of the order, to contrast them with concatenated parsimony analysis and to resolve the erstwhile problematic phylogenetic position of these three genera. DNA sequences of five SCNGs were obtained for 20 cycad species representing all ten genera of Cycadales. These were analysed with parsimony, maximum likelihood (ML) and three Bayesian methods of gene tree-species tree reconciliation, using Cycas as the outgroup. A calibrated date estimation was developed with Bayesian methods, and biogeographic analysis was also conducted. Concatenated parsimony, ML and three species tree inference methods resolve exactly the same tree topology with high support at most nodes. Dioon and Bowenia are the first and second branches of Cycadales after Cycas, respectively, followed by an encephalartoid clade (Macrozamia-Lepidozamia-Encephalartos), which is sister to a zamioid clade, of which Ceratozamia is the first branch, and in which Stangeria is sister to Microcycas and Zamia. A single, well-supported phylogenetic hypothesis of the generic relationships of the Cycadales is presented. However, massive extinction events inferred from the fossil record that eliminated broader ancestral distributions within Zamiaceae compromise accurate optimization of ancestral biogeographical areas for that hypothesis. While major lineages of Cycadales are ancient, crown ages of all modern genera are no older than 12 million years, supporting a recent hypothesis of mostly Miocene radiations. This phylogeny can contribute to an accurate infrafamilial classification of Zamiaceae.
Global preferential consistency for the topological sorting-based maximal spanning tree problem
Joseph, Rémy-Robert
2012-01-01
We introduce a new type of fully computable problems for decision support systems dedicated to maximal spanning tree problems, based on deduction and choice: preferential consistency problems. To show their interest, we describe a new compact representation of preferences specific to spanning trees, identifying an efficient maximal spanning tree sub-problem. We then compare this problem with the Pareto-based multiobjective one. Finally, we propose an efficient algorithm solving the associated preferential consistency problem.
Maximizing Adaptivity in Hierarchical Topological Models Using Cancellation Trees
Bremer, P; Pascucci, V; Hamann, B
2008-12-08
We present a highly adaptive hierarchical representation of the topology of functions defined over two-manifold domains. Guided by the theory of Morse-Smale complexes, we encode dependencies between cancellations of critical points using two independent structures: a traditional mesh hierarchy to store connectivity information and a new structure called cancellation trees to encode the configuration of critical points. Cancellation trees provide a powerful method to increase adaptivity while using a simple, easy-to-implement data structure. The resulting hierarchy is significantly more flexible than the one previously reported. In particular, the resulting hierarchy is guaranteed to be of logarithmic height.
Upper Bounds for Maximally Greedy Binary Search Trees
Fox, Kyle
2011-01-01
At SODA 2009, Demaine et al. presented a novel connection between binary search trees (BSTs) and subsets of points on the plane. This connection was independently discovered by Derryberry et al. As part of their results, Demaine et al. considered GreedyFuture, an offline BST algorithm that greedily rearranges the search path to minimize the cost of future searches. They showed that GreedyFuture is actually an online algorithm in their geometric view, and that there is a way to turn GreedyFuture into an online BST algorithm with only a constant factor increase in total search cost. Demaine et al. conjectured this algorithm was dynamically optimal, but no upper bounds were given in their paper. We prove the first non-trivial upper bounds for the cost of search operations using GreedyFuture, including an access lemma similar to that found in Sleator and Tarjan's classic paper on splay trees.
Intermediate tree cover can maximize groundwater recharge in the seasonally dry tropics
Ilstedt, U.; Bargués Tobella, A.; Bazié, H. R.; Bayala, J.; Verbeeten, E.; Nyberg, G.; Sanou, J.; Benegas, L.; Murdiyarso, D.; Laudon, H.; Sheil, D.; Malmer, A.
2016-02-01
Water scarcity contributes to the poverty of around one-third of the world’s people. Despite many benefits, tree planting in dry regions is often discouraged by concerns that trees reduce water availability. Yet relevant studies from the tropics are scarce, and the impacts of intermediate tree cover remain unexplored. We developed and tested an optimum tree cover theory in which groundwater recharge is maximized at an intermediate tree density. Below this optimal tree density the benefits from any additional trees on water percolation exceed their extra water use, leading to increased groundwater recharge, while above the optimum the opposite occurs. Our results, based on groundwater budgets calibrated with measurements of drainage and transpiration in a cultivated woodland in West Africa, demonstrate that groundwater recharge was maximized at intermediate tree densities. In contrast to the prevailing view, we therefore find that moderate tree cover can increase groundwater recharge, and that tree planting and various tree management options can improve groundwater resources. We evaluate the necessary conditions for these results to hold and suggest that they are likely to be common in the seasonally dry tropics, offering potential for widespread tree establishment and increased benefits for hundreds of millions of people.
Optimized ancestral state reconstruction using Sankoff parsimony
Valiente Gabriel
2009-02-01
Background: Parsimony methods are widely used in molecular evolution to estimate the most plausible phylogeny for a set of characters. Sankoff parsimony determines the minimum number of changes required in a given phylogeny when a cost is associated to transitions between character states. Although optimizations exist to reduce the computations in the number of taxa, the original algorithm takes time O(n²) in the number of states, making it impractical for large values of n. Results: In this study we introduce an optimization of Sankoff parsimony for the reconstruction of ancestral states when ultrametric or additive cost matrices are used. We analyzed its performance for randomly generated matrices, the Jukes-Cantor and Kimura two-parameter models of DNA evolution, and in the reconstruction of elongation factor-1α and ancestral metabolic states of a group of eukaryotes, showing that in all cases the execution time is significantly less than with the original implementation. Conclusion: The algorithms presented here provide a fast computation of Sankoff parsimony for a given phylogeny. Problems where the number of states is large, such as reconstruction of ancestral metabolism, are particularly suited to this optimization. Since we reduce the computations required to calculate the parsimony cost of a single tree, our method can be combined with optimizations in the number of taxa that aim at finding the most parsimonious tree.
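A minimal sketch of the unoptimized Sankoff recursion on a single character may help fix ideas; this is the quadratic-in-states dynamic program the paper speeds up. The tree shape, leaf states and unit-cost matrix below are toy assumptions.

```python
STATES = "ACGT"
# toy cost matrix: every substitution costs 1 (the Fitch-like special case)
cost = {(a, b): 0 if a == b else 1 for a in STATES for b in STATES}

def sankoff(node, leaves, children):
    """Return {state: minimum cost of the subtree rooted at node}."""
    if node in leaves:  # leaf: observed state costs 0, all others infinity
        return {s: 0 if s == leaves[node] else float("inf") for s in STATES}
    left, right = [sankoff(c, leaves, children) for c in children[node]]
    # for each parental state, pick the cheapest child state on each side
    return {s: min(cost[s, t] + left[t] for t in STATES)
               + min(cost[s, t] + right[t] for t in STATES)
            for s in STATES}

leaves = {"x": "A", "y": "C", "z": "A"}           # observed tip states
children = {"root": ["u", "z"], "u": ["x", "y"]}  # ((x, y), z) topology
root_costs = sankoff("root", leaves, children)
print(min(root_costs.values()))  # → 1 change suffices on this tree
```

Each inner `min` scans all n states for each of the n parental states, which is the O(n²) per-node cost the paper's ultrametric/additive-matrix optimization avoids.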
Christopher W. Woodall; Anthony W. D' Amato; John B. Bradford; Andrew O. Finley
2011-01-01
There is expanding interest in management strategies that maximize forest carbon (C) storage to mitigate increased atmospheric carbon dioxide. The tremendous tree species diversity and range of stand stocking found across the eastern United States presents a challenge for determining optimal combinations for the maximization of standing tree C storage. Using a...
Chapman, S. K.; Shaw, R.; Langley, A.
2008-12-01
Management of agroecosystems for the purpose of manipulating soil carbon stocks could be a viable approach for countering rising atmospheric carbon dioxide concentrations, while maximizing sustainability of the agroforestry industry. We investigated the carbon storage potential of Christmas tree farms in the southern Appalachian mountains as a potential model for the impacts of land management on soil carbon. We quantified soil carbon stocks across a gradient of cultivation duration and herbicide management. We compared soil carbon in farms to that in adjacent pastures and native forests that represent a control group to account for variability in other soil-forming factors. We partitioned tree farm soil carbon into fractions delineated by stability, an important determinant of long-term sequestration potential. Soil carbon stocks in the intermediate pool are significantly greater in the tree farms under cultivation for longer periods of time than in the younger tree farms. This pool can be quite large, yet has the ability to respond to biological and environmental changes on the centennial time scale. Pasture soil carbon was significantly greater than both forest and tree farm soil carbon, which were not different from each other. These data can help inform land management and soil carbon sequestration strategies.
Decentralized Lifetime Maximizing Tree with Clustering for Data Delivery in Wireless Sensor Networks
Deepali Virmani
2011-09-01
A wireless sensor network has a wide and ever-expanding application domain, and WSNs have been deployed according to their application areas. An application-independent approach is yet to come to terms with the ongoing exploitation of WSNs. In this paper we propose a decentralized lifetime-maximizing tree for application-independent data aggregation, using clustering for data delivery in WSNs. The proposed tree minimizes energy consumption, which has been a limiting factor in the smooth operation of WSNs, and also minimizes the distance between communicating nodes under the control of a sub-sink, which in turn communicates and transfers data to the sink node.
Research on the evolution of stock correlation based on maximal spanning trees
Yang, Chunxia; Zhu, Xueshuai; Li, Qian; Chen, Yanhua; Deng, Qiangqiang
2014-12-01
In this study, we choose as research objects the daily closing prices of 268 constituent stocks of the S&P 500 index, 221 stocks of the London Stock Exchange, 148 constituent stocks of the Shanghai Composite index and 152 constituent stocks of the Hang Seng index, sampled for all the stock markets from 2 January 2003 to 16 September 2013. For each stock market, first, using a moving window to scan through every stock return series and mutual information to measure the statistical interdependence between stock returns, we construct a corresponding weighted network in every given window. Then we study the evolution of stock correlation by analyzing the average mutual information, the mutual information distribution and the topological variation of the maximal spanning tree extracted from every weighted network. All the obtained results indicate that, for all the stock markets, both the average mutual information and the standard deviation of the mutual information distribution first gradually increase, reach a peak during the full-outbreak periods, and finally decrease again. In addition, the topology of the maximal spanning tree also changes, first from compact star-like to loose chain-like and then back to compact star-like. All these facts tell us that the crisis does change the stock correlation: the correlation goes from weak to strong first, and then becomes weak again.
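The abstracts above measure interdependence with mutual information but do not specify the estimator, so the sketch below uses the simplest plug-in estimate with equal-width histogram bins, a hypothetical choice for illustration only.

```python
import math
from collections import Counter

def mutual_information(x, y, bins=4):
    """Plug-in estimate of I(X;Y) in nats from two equal-length series,
    using equal-width binning of each series."""
    def binned(v):
        lo, hi = min(v), max(v)
        w = (hi - lo) / bins or 1.0  # guard against a constant series
        return [min(int((t - lo) / w), bins - 1) for t in v]
    bx, by, n = binned(x), binned(y), len(x)
    pxy, px, py = Counter(zip(bx, by)), Counter(bx), Counter(by)
    # I(X;Y) = sum p(x,y) * log( p(x,y) / (p(x) p(y)) )
    return sum(c / n * math.log(c * n / (px[i] * py[j]))
               for (i, j), c in pxy.items())

# a series carries log(bins) nats of information about itself
x = list(range(16))
print(round(mutual_information(x, x), 3))  # → 1.386 (= ln 4)
```

Unlike correlation, this quantity also captures nonlinear dependence, which is the usual motivation for preferring it when weighting stock networks.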
Pure parsimony xor haplotyping.
Bonizzoni, Paola; Della Vedova, Gianluca; Dondi, Riccardo; Pirola, Yuri; Rizzi, Romeo
2010-01-01
The haplotype resolution from xor-genotype data has been recently formulated as a new model for genetic studies. The xor-genotype data is a cheaply obtainable type of data distinguishing heterozygous from homozygous sites without identifying the homozygous alleles. In this paper, we propose a formulation based on a well-known model used in haplotype inference: pure parsimony. We exhibit exact solutions of the problem by providing polynomial time algorithms for some restricted cases and a fixed-parameter algorithm for the general case. These results are based on some interesting combinatorial properties of a graph representation of the solutions. Furthermore, we show that the problem has a polynomial time k-approximation, where k is the maximum number of xor-genotypes containing a given single nucleotide polymorphism (SNP). Finally, we propose a heuristic and produce an experimental analysis showing that it scales to real-world large instances taken from the HapMap project.
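The data model is easy to illustrate: with alleles coded 0/1, a xor-genotype is the sitewise XOR of the two underlying haplotypes, so heterozygous sites read 1 and homozygous sites 0. A toy sketch with invented haplotypes (this shows the data model only, not the pure-parsimony inference algorithm):

```python
def xor_genotype(h1, h2):
    """Sitewise XOR of two 0/1 haplotypes: 1 marks a heterozygous site."""
    return [a ^ b for a, b in zip(h1, h2)]

h1 = [0, 1, 1, 0, 1]
h2 = [0, 0, 1, 1, 1]
print(xor_genotype(h1, h2))  # → [0, 1, 0, 1, 0]

# the complementary haplotype pair yields the same xor-genotype, an
# ambiguity that pure-parsimony resolution has to cope with
c1 = [1 - a for a in h1]
c2 = [1 - a for a in h2]
print(xor_genotype(c1, c2))  # → [0, 1, 0, 1, 0]
```

Pure parsimony then asks for a minimum-size set of haplotypes such that every observed xor-genotype is the XOR of some pair from the set.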
Catalano, S A; Goloboff, P A
2012-05-01
All methods proposed to date for mapping landmark configurations on a phylogenetic tree start from an alignment generated by methods that make no use of phylogenetic information, usually by superimposing all configurations against a consensus configuration. In order to properly interpret differences between landmark configurations along the tree as changes in shape, the metric chosen to define the ancestral assignments should also form the basis to superimpose the configurations. Thus, we present here a method that merges both steps, map and align, into a single procedure that (for the given tree) produces a multiple alignment and ancestral assignments such that the sum of the Euclidean distances between the corresponding landmarks along tree nodes is minimized. This approach is an extension of the method proposed by Catalano et al. (2010. Phylogenetic morphometrics (I): the use of landmark data in a phylogenetic framework. Cladistics. 26:539-549) for mapping landmark data with parsimony as optimality criterion. In the context of phylogenetics, this method allows maximizing the degree to which similarity in landmark positions can be accounted for by common ancestry. In the context of morphometrics, this approach guarantees (heuristics aside) that all the transformations inferred on the tree represent changes in shape. The performance of the method was evaluated on different data sets, indicating that the method produces marked improvements in tree score (up to 5% compared with generalized superimpositions, up to 11% compared with ordinary superimpositions). These empirical results stress the importance of incorporating the phylogenetic information into the alignment step.
Parsimonious Language Models for Information Retrieval
Hiemstra, Djoerd; Robertson, Stephen; Zaragoza, Hugo
2004-01-01
We systematically investigate a new approach to estimating the parameters of language models for information retrieval, called parsimonious language models. Parsimonious language models explicitly address the relation between levels of language models that are typically used for smoothing. As such,
Tsai, Tein-Shun; Lee, How-Jing; Tu, Ming-Chung
2009-11-01
With bioenergetic modeling, we tested the hypothesis that reptiles maximize net energy gain by postprandial thermal selection. Previous studies have shown that Chinese green tree vipers (Trimeresurus s. stejnegeri) exhibit postprandial thermophily (mean preferred temperature T_p for males = 27.8 °C) in a linear thigmothermal gradient when seclusion sites and water are present. Using published empirical models of digestion-associated factors for this snake, we calculated the average rate (E_net) and efficiency (K_net) of net energy gain for possible combinations of meal size, activity level, and feeding frequency at each temperature. The simulations consistently revealed that E_net is maximized at the T_p of these snakes. Although K_net peaks at a lower temperature than E_net, the value of K_net remains high (≥ 0.85 relative to its maximum) at the peak temperature of E_net. This suggests that the demands of both E_net and K_net can be met by postprandial thermal selection in this snake. In conclusion, the data support our prediction that postprandial thermal selection may maximize net energy gain.
XU Jin
2016-01-01
A maximal planar graph is called a recursive maximal planar graph if it can be obtained from K4 by repeatedly embedding a 3-degree vertex in some triangular face. The uniquely 4-colorable maximal planar graph conjecture states that a planar graph is uniquely 4-colorable if and only if it is a recursive maximal planar graph. This conjecture, now with 43 years of history, is a very influential conjecture in graph coloring theory after the Four-Color Conjecture. In this paper, the structures and properties of dumbbell maximal planar graphs and recursive maximal planar graphs are studied, and an idea for proving the uniquely 4-colorable maximal planar graph conjecture is proposed, based on the extending-contracting operation introduced in part (2) of this series.
Parsimonious Refraction Interferometry and Tomography
Hanafy, Sherif
2017-02-04
We present parsimonious refraction interferometry and tomography, where a densely populated refraction data set can be obtained from two reciprocal and several infill shot gathers. The assumptions are that the refraction arrivals are head waves, and that a pair of reciprocal shot gathers and several infill shot gathers are recorded over the line of interest. Refraction traveltimes from these shot gathers are picked and spawned into O(N²) virtual refraction traveltimes generated by N virtual sources, where N is the number of geophones in the 2D survey. The virtual traveltimes can be inverted to give the velocity tomogram. This enormous increase in the number of traveltime picks and associated rays, compared to the many fewer traveltimes from the reciprocal and infill shot gathers, allows for increased model resolution and a better condition number for the system of normal equations. A significant benefit is that the parsimonious survey and the associated traveltime picking are far less time-consuming than those for a standard refraction survey with a dense distribution of sources.
Direct maximum parsimony phylogeny reconstruction from genotype data
Ravi R
2007-12-01
Background: Maximum parsimony phylogenetic tree reconstruction from genetic variation data is a fundamental problem in computational genetics, with many practical applications in population genetics, whole-genome analysis, and the search for genetic predictors of disease. Efficient methods are available for reconstructing maximum parsimony trees from haplotype data, but such data are difficult to determine directly for autosomal DNA. Data are more commonly available in the form of genotypes, which consist of conflated combinations of pairs of haplotypes from homologous chromosomes. Currently, there are no general algorithms for the direct reconstruction of maximum parsimony phylogenies from genotype data, so phylogenetic applications for autosomal data must rely on other methods to first computationally infer haplotypes from genotypes. Results: In this work, we develop the first practical method for computing maximum parsimony phylogenies directly from genotype data. We show that the standard practice of first inferring haplotypes from genotypes and then reconstructing a phylogeny on the haplotypes often substantially overestimates phylogeny size. As an immediate application, our method can be used to determine the minimum number of mutations required to explain a given set of observed genotypes. Conclusion: Phylogeny reconstruction directly from unphased data is computationally feasible for moderate-sized problem instances and can lead to substantially more accurate tree-size inferences than the standard practice of treating phasing and phylogeny construction as two separate analysis stages. The difference between the approaches is particularly important for downstream applications that require a lower bound on the number of mutations that the genetic region has undergone.
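For already-phased haplotypes on a fixed tree, the "minimum number of mutations" objective reduces to Fitch's small-parsimony count, sketched below as a baseline; the paper's contribution, working directly from unphased genotypes, is substantially harder. The tree and sequences here are invented.

```python
def fitch_count(node, children, seqs):
    """Return (per-site state sets, mutation count) for the subtree
    rooted at node, by Fitch's bottom-up pass over 0/1 sites."""
    if node in seqs:  # leaf: one singleton state set per site
        return [{c} for c in seqs[node]], 0
    (ls, lm), (rs, rm) = [fitch_count(c, children, seqs)
                          for c in children[node]]
    states, muts = [], lm + rm
    for a, b in zip(ls, rs):
        if a & b:
            states.append(a & b)  # children agree: intersect
        else:
            states.append(a | b)  # children conflict: union, +1 mutation
            muts += 1
    return states, muts

seqs = {"h1": "0011", "h2": "0010", "h3": "1110"}    # toy haplotypes
children = {"root": ["n", "h3"], "n": ["h1", "h2"]}  # ((h1, h2), h3)
_, m = fitch_count("root", children, seqs)
print(m)  # → 3 mutations minimum on this tree
```

Minimizing this count over all tree topologies is the (NP-hard) maximum parsimony problem the abstract refers to.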
Nor, Igor; Charlat, Sylvain; Engelstadter, Jan; Reuter, Max; Duron, Olivier; Sagot, Marie-France
2010-01-01
We address in this paper a new computational biology problem that aims at understanding a mechanism that could potentially be used to genetically manipulate natural insect populations infected by inherited, intracellular parasitic bacteria. In this problem, which we denote Mod/Resc Parsimony Inference, we are given a boolean matrix and the goal is to find two other boolean matrices with a minimum number of columns such that an appropriately defined operation on these matrices gives back the input. We show that this is formally equivalent to the Bipartite Biclique Edge Cover problem and derive some complexity results for our problem using this equivalence. We provide a new fixed-parameter tractability approach for solving both problems that slightly improves upon a previously published algorithm for Bipartite Biclique Edge Cover. Finally, we present experimental results in which we applied some of our techniques to a real-life data set.
PTree: pattern-based, stochastic search for maximum parsimony phylogenies
Ivan Gregor
2013-06-01
Full Text Available Phylogenetic reconstruction is vital to analyzing the evolutionary relationship of genes within and across populations of different species. Nowadays, with next-generation sequencing technologies producing sets comprising thousands of sequences, robust identification of the tree topology, which is optimal according to standard criteria such as maximum parsimony, maximum likelihood or posterior probability, with phylogenetic inference methods is a computationally very demanding task. Here, we describe a stochastic search method for a maximum parsimony tree, implemented in a software package we named PTree. Our method is based on a new pattern-based technique that enables us to infer intermediate sequences efficiently where the incorporation of these sequences in the current tree topology yields a phylogenetic tree with a lower cost. Evaluation across multiple datasets showed that our method is comparable to the algorithms implemented in PAUP* or TNT, which are widely used by the bioinformatics community, in terms of topological accuracy and runtime. We show that our method can process large-scale datasets of 1,000–8,000 sequences. We believe that our novel pattern-based method enriches the current set of tools and methods for phylogenetic tree inference. The software is available at http://algbio.cs.uni-duesseldorf.de/webapps/wa-download/.
Yan, Xin-Guo; Xie, Chi; Wang, Gang-Jin
2015-08-01
We study the topological stability of the stock market network by investigating its topological robustness, namely the ability of the network to resist structural or topological changes. The stock market network is extracted by the minimal spanning tree (MST) and the planar maximally filtered graph (PMFG). We find that specific delisting thresholds of the listed companies exist in both the MST and PMFG networks. In comparison with the MST, the PMFG provides more information and is better suited to exploring the stock market network's robustness. The PMFG before the US sub-prime crisis (i.e., from June 2005 to May 2007) has a stronger robustness against intentional topological damage than the other two sub-periods (i.e., from June 2007 to May 2009 and from June 2009 to May 2011). We also find that the nonfractal property exists in MSTs of the S&P 500, i.e., the highly connected nodes link with each other directly, which indicates that the MSTs are vulnerable to the removal of such important nodes. Moreover, the financial institutions and high technology companies are important in maintaining the stability of the S&P 500 network.
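The MST extraction step can be sketched concretely. The distance transform d_ij = sqrt(2(1 − ρ_ij)) used below is the standard Mantegna metric for correlation-based market networks; the abstract does not spell out its distance, so this choice, the ticker names, and the correlation values are all assumptions for illustration.

```python
import math

# Minimal spanning tree of a toy "stock market network": correlations are
# mapped to distances d_ij = sqrt(2 * (1 - rho_ij)), then Prim's algorithm
# extracts the MST. Correlation matrix and names are made up.

def mst_edges(dist):
    n = len(dist)
    in_tree, edges = {0}, []
    while len(in_tree) < n:
        # cheapest edge leaving the current tree
        u, v = min(((i, j) for i in in_tree for j in range(n) if j not in in_tree),
                   key=lambda e: dist[e[0]][e[1]])
        edges.append((u, v))
        in_tree.add(v)
    return edges

names = ["BANK1", "BANK2", "TECH1", "TECH2"]
rho = [[1.0, 0.8, 0.3, 0.2],
       [0.8, 1.0, 0.4, 0.3],
       [0.3, 0.4, 1.0, 0.7],
       [0.2, 0.3, 0.7, 1.0]]
dist = [[math.sqrt(2 * (1 - r)) for r in row] for row in rho]
for u, v in mst_edges(dist):
    print(names[u], "--", names[v])
```

On these toy values the MST keeps the within-sector links and a single bridge, which is why removing highly connected nodes can fragment such trees.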
The effect of natural selection on the performance of maximum parsimony
Ofria Charles
2007-06-01
Full Text Available Abstract Background Maximum parsimony is one of the most commonly used and extensively studied phylogeny reconstruction methods. While current evaluation methodologies such as computer simulations provide insight into how well maximum parsimony reconstructs phylogenies, they tell us little about how well maximum parsimony performs on taxa drawn from populations of organisms that evolved subject to natural selection in addition to the random factors of drift and mutation. It is clear that natural selection has a significant impact on Among Site Rate Variation (ASRV and the rate of accepted substitutions; that is, accepted mutations do not occur with uniform probability along the genome and some substitutions are more likely to occur than other substitutions. However, little is know about how ASRV and non-uniform character substitutions impact the performance of reconstruction methods such as maximum parsimony. To gain insight into these issues, we study how well maximum parsimony performs with data generated by Avida, a digital life platform where populations of digital organisms evolve subject to natural selective pressures. Results We first identify conditions where natural selection does affect maximum parsimony's reconstruction accuracy. In general, as we increase the probability that a significant adaptation will occur in an intermediate ancestor, the performance of maximum parsimony improves. In fact, maximum parsimony can correctly reconstruct small 4 taxa trees on data that have received surprisingly many mutations if the intermediate ancestor has received a significant adaptation. We demonstrate that this improved performance of maximum parsimony is attributable more to ASRV than to non-uniform character substitutions. Conclusion Maximum parsimony, as well as most other phylogeny reconstruction methods, may perform significantly better on actual biological data than is currently suggested by computer simulation studies because of natural
Guy Rostoker
Full Text Available Iron overload used to be considered rare among hemodialysis patients after the advent of erythropoiesis-stimulating agents, but recent MRI studies have challenged this view. The aim of this study, based on decision-tree learning and on MRI determination of hepatic iron content, was to identify a noxious pattern of parenteral iron administration in hemodialysis patients. We performed a prospective cross-sectional study from 31 January 2005 to 31 August 2013 in the dialysis centre of a French community-based private hospital. A cohort of 199 fit hemodialysis patients free of overt inflammation and malnutrition were treated for anemia with parenteral iron-sucrose and an erythropoiesis-stimulating agent (darbepoetin), in keeping with current clinical guidelines. Patients had blinded measurements of hepatic iron stores by means of T1 and T2* contrast MRI, without gadolinium, together with Chi-squared Automatic Interaction Detection (CHAID) analysis. The CHAID algorithm first split the patients according to their monthly infused iron dose, with a single cutoff of 250 mg/month. In the node comprising the 88 hemodialysis patients who received more than 250 mg/month of IV iron, 78 patients had iron overload on MRI (88.6%, 95% CI: 80% to 93%). The odds ratio for hepatic iron overload on MRI was 3.9 (95% CI: 1.81 to 8.4) with >250 mg/month of IV iron as compared to <250 mg/month. Age, gender (female sex) and the hepcidin level also influenced liver iron content on MRI. The standard maximal amount of iron infused per month should be lowered to 250 mg in order to lessen the risk of dialysis iron overload and to allow safer use of parenteral iron products.
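The core of a CHAID-style first split, as described in the abstract, is scanning candidate cutoffs on a predictor and scoring each resulting 2x2 table against the outcome with a chi-squared statistic. The sketch below shows only that step; the patient data are synthetic and the function names are hypothetical, not from the study.

```python
# CHAID-style first split: scan candidate cutoffs on one predictor (monthly
# IV iron dose) and keep the cutoff whose 2x2 table against the outcome
# (iron overload yes/no) has the largest chi-squared statistic.
# All data below are synthetic, purely for illustration.

def chi2_2x2(a, b, c, d):
    """Chi-squared statistic of the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den if den else 0.0

def best_cutoff(doses, overload):
    best = (-1.0, None)
    for cut in sorted(set(doses))[:-1]:     # candidate thresholds
        a = sum(1 for x, y in zip(doses, overload) if x <= cut and y)
        b = sum(1 for x, y in zip(doses, overload) if x <= cut and not y)
        c = sum(1 for x, y in zip(doses, overload) if x > cut and y)
        d = sum(1 for x, y in zip(doses, overload) if x > cut and not y)
        best = max(best, (chi2_2x2(a, b, c, d), cut))
    return best[1]

doses    = [100, 150, 200, 250, 300, 350, 400, 450]   # mg/month (synthetic)
overload = [False, False, False, False, True, True, True, True]
print(best_cutoff(doses, overload))  # -> 250 (split: <=250 vs >250)
```

Real CHAID additionally merges categories and applies Bonferroni-adjusted p-values before splitting; this sketch keeps only the cutoff search.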
Parsimony analysis of endemicity of enchodontoid fishes from the Cenomanian
Da Silva, Hilda; Gallo, Valéria
2007-01-01
8 pages; International audience; Parsimony analysis of endemicity was applied to analyze the distribution of enchodontoid fishes occurring strictly in the Cenomanian. The analysis was carried out using the computer program PAUP* 4.0b10, based on a data matrix built with 17 taxa and 12 areas. The rooting was made on a hypothetical all-zero outgroup. Applying the exact branch-and-bound algorithm, 47 trees were obtained with 26 steps, a consistency index of 0.73, and a retention index of 0.50. ...
Mixed integer linear programming for maximum-parsimony phylogeny inference.
Sridhar, Srinath; Lam, Fumei; Blelloch, Guy E; Ravi, R; Schwartz, Russell
2008-01-01
Reconstruction of phylogenetic trees is a fundamental problem in computational biology. While excellent heuristic methods are available for many variants of this problem, new advances in phylogeny inference will be required if we are to be able to continue to make effective use of the rapidly growing stores of variation data now being gathered. In this paper, we present two integer linear programming (ILP) formulations to find the most parsimonious phylogenetic tree from a set of binary variation data. One method uses a flow-based formulation that can produce exponential numbers of variables and constraints in the worst case. The method has, however, proven extremely efficient in practice on datasets that are well beyond the reach of the available provably efficient methods, solving several large mtDNA and Y-chromosome instances within a few seconds and giving provably optimal results in times competitive with fast heuristics that cannot guarantee optimality. An alternative formulation establishes that the problem can be solved with a polynomial-sized ILP. We further present a web server developed based on the exponential-sized ILP that performs fast maximum parsimony inferences and serves as a front end to a database of precomputed phylogenies spanning the human genome.
Parsimonious catchment and river flow modelling
Khatibi, R.H.; Moore, R.J.; Booij, Martijn J.; Cadman, D.; Boyce, G.; Rizzoli, A.E.; Jakeman, A.J.
2002-01-01
It is increasingly the case that models are being developed as “evolving” products rather than one-off application tools, such that auditable modelling versus ad hoc treatment of models becomes a pivotal issue. Auditable modelling is particularly vital to “parsimonious modelling” aimed at meeting
Connie Ko
2016-08-01
Full Text Available Recent research into improving the effectiveness of forest inventory management using airborne LiDAR data has focused on developing advanced theories in data analytics. Furthermore, supervised learning as a predictive model for classifying tree genera (and species, where possible) has been gaining popularity in order to minimize this labor-intensive task. However, bottlenecks remain that hinder the immediate adoption of supervised learning methods. With supervised classification, training samples are required for learning the parameters that govern the performance of a classifier, yet the selection of training data is often subjective and the quality of such samples is critically important. For LiDAR scanning in forest environments, the quantification of data quality is somewhat abstract, normally referring to some metric related to the completeness of individual tree crowns; however, this is not an issue that has received much attention in the literature. Intuitively, the choice of training samples having varying quality will affect classification accuracy. In this paper a Diversity Index (DI) is proposed that characterizes the diversity of data quality (Qi) among selected training samples required for constructing a classification model of tree genera. The training sample is diversified in terms of data quality as opposed to the number of samples per class. The diversified training sample allows the classifier to better learn the positive and negative instances and, therefore, has a higher classification accuracy in discriminating the “unknown” class samples from the “known” samples. Our algorithm is implemented within the Random Forests base classifiers with six derived geometric features from LiDAR data. The training sample contains three tree genera (pine, poplar, and maple) and the validation samples contain four labels (pine, poplar, maple, and “unknown”). Classification accuracy improved from 72.8% when training samples were
Al-Khaja, Nawal
2007-01-01
This is a thematic lesson plan for young learners about palm trees and the importance of taking care of them. The two part lesson teaches listening, reading and speaking skills. The lesson includes parts of a tree; the modal auxiliary, can; dialogues and a role play activity.
Cichy, Krzysztof [DESY, Zeuthen (Germany). NIC; Adam Mickiewicz Univ., Poznan (Poland). Faculty of Physics]; Jansen, Karl [DESY, Zeuthen (Germany). NIC]; Korcyl, Piotr [DESY, Zeuthen (Germany). NIC; Jagiellonian Univ., Krakow (Poland). M. Smoluchowski Inst. of Physics]
2012-07-15
We present results of a lattice QCD application of a coordinate space renormalization scheme for the extraction of renormalization constants for flavour non-singlet bilinear quark operators. The method consists in the analysis of the small-distance behaviour of correlation functions in Euclidean space and has several theoretical and practical advantages, in particular: it is gauge invariant, easy to implement and has relatively low computational cost. The values of renormalization constants in the X-space scheme can be converted to the MS scheme via 4-loop continuum perturbative formulae. Our results for N_f = 2 maximally twisted mass fermions with tree-level Symanzik improved gauge action are compared to the ones from the RI-MOM scheme and show full agreement with this method. (orig.)
Species Tree Inference Using a Mixture Model.
Ullah, Ikram; Parviainen, Pekka; Lagergren, Jens
2015-09-01
Species tree reconstruction has been a subject of substantial research due to its central role across biology and medicine. A species tree is often reconstructed using a set of gene trees or by directly using sequence data. In either of these cases, one of the main confounding phenomena is the discordance between a species tree and a gene tree due to evolutionary events such as duplications and losses. Probabilistic methods can resolve the discordance by coestimating gene trees and the species tree but this approach poses a scalability problem for larger data sets. We present MixTreEM-DLRS: A two-phase approach for reconstructing a species tree in the presence of gene duplications and losses. In the first phase, MixTreEM, a novel structural expectation maximization algorithm based on a mixture model is used to reconstruct a set of candidate species trees, given sequence data for monocopy gene families from the genomes under study. In the second phase, PrIME-DLRS, a method based on the DLRS model (Åkerborg O, Sennblad B, Arvestad L, Lagergren J. 2009. Simultaneous Bayesian gene tree reconstruction and reconciliation analysis. Proc Natl Acad Sci U S A. 106(14):5714-5719), is used for selecting the best species tree. PrIME-DLRS can handle multicopy gene families since DLRS, apart from modeling sequence evolution, models gene duplication and loss using a gene evolution model (Arvestad L, Lagergren J, Sennblad B. 2009. The gene evolution model and computing its associated probabilities. J ACM. 56(2):1-44). We evaluate MixTreEM-DLRS using synthetic and biological data, and compare its performance with a recent genome-scale species tree reconstruction method PHYLDOG (Boussau B, Szöllősi GJ, Duret L, Gouy M, Tannier E, Daubin V. 2013. Genome-scale coestimation of species and gene trees. Genome Res. 23(2):323-330) as well as with a fast parsimony-based algorithm Duptree (Wehe A, Bansal MS, Burleigh JG, Eulenstein O. 2008. Duptree: a program for large-scale phylogenetic
Henri Epstein
2016-01-01
An algebraic formalism, developed with V. Glaser and R. Stora for the study of the generalized retarded functions of quantum field theory, is used to prove a factorization theorem which provides a complete description of the generalized retarded functions associated with any tree graph. Integrating over the variables associated to internal vertices to obtain the perturbative generalized retarded functions for interacting fields arising from such graphs is shown to be possible for a large category of space-times.
Parsimonious modeling with information filtering networks
Barfuss, Wolfram; Massara, Guido Previde; Di Matteo, T.; Aste, Tomaso
2016-12-01
We introduce a methodology to construct parsimonious probabilistic models. This method makes use of information filtering networks to produce a robust estimate of the global sparse inverse covariance from a simple sum of local inverse covariances computed on small subparts of the network. Being based on local and low-dimensional inversions, this method is computationally very efficient and statistically robust, even for the estimation of inverse covariance of high-dimensional, noisy, and short time series. Applied to financial data, our method is computationally more efficient than state-of-the-art methodologies such as Glasso, producing, in a fraction of the computation time, models that can have equivalent or better performance but with a sparser inference structure. We also discuss performance with sparse factor models, where we notice that relative performance decreases with the number of factors. The local nature of this approach allows us to perform computations in parallel and provides a tool for dynamical adaptation by partial updating when the properties of some variables change, without the need to recompute the whole model. This makes this approach particularly suitable for handling big data sets with large numbers of variables. Examples of practical application for forecasting, stress testing, and risk allocation in financial systems are also provided.
Parsimonious Ways to Use Vision for Navigation
Paul Graham
2012-05-01
Full Text Available The use of visual information for navigation appears to be a universal strategy for sighted animals, amongst which, one particular group of expert navigators are the ants. The broad interest in studies of ant navigation is in part due to their small brains, thus biomimetic engineers expect to be impressed by elegant control solutions, and psychologists might hope for a description of the minimal cognitive requirements for complex spatial behaviours. In this spirit, we have been taking an interdisciplinary approach to the visually guided navigation of ants in their natural habitat. Behavioural experiments and natural image statistics show that visual navigation need not depend on the remembering or recognition of objects. Further modelling work suggests how simple behavioural routines might enable navigation using familiarity detection rather than explicit recall, and we present a proof of concept that visual navigation using familiarity can be achieved without specifying when or what to learn, nor separating routes into sequences of waypoints. We suggest that our current model represents the only detailed and complete model of insect route guidance to date. What's more, we believe the suggested mechanisms represent useful parsimonious hypotheses for visually guided navigation in larger-brained animals.
Quality Quandaries- Time Series Model Selection and Parsimony
Bisgaard, Søren; Kulahci, Murat
2009-01-01
Some of the issues involved in selecting adequate models for time series data are discussed using an example concerning the number of users of an Internet server. The process of selecting an appropriate model is subjective and requires experience and judgment. The authors believe an important consideration in model selection should be parameter parsimony. They favor the use of parsimonious mixed ARMA models, noting that research has shown that a model building strategy that considers only autoregressive representations will lead to non-parsimonious models and to loss of forecasting accuracy.
Predicting protein interactions via parsimonious network history inference.
Patro, Rob; Kingsford, Carl
2013-07-01
Reconstruction of the network-level evolutionary history of protein-protein interactions provides a principled way to relate interactions in several present-day networks. Here, we present a general framework for inferring such histories and demonstrate how it can be used to determine what interactions existed in the ancestral networks, which present-day interactions we might expect to exist based on evolutionary evidence and what information extant networks contain about the order of ancestral protein duplications. Our framework characterizes the space of likely parsimonious network histories. It results in a structure that can be used to find probabilities for a number of events associated with the histories. The framework is based on a directed hypergraph formulation of dynamic programming that we extend to enumerate many optimal and near-optimal solutions. The algorithm is applied to reconstructing ancestral interactions among bZIP transcription factors, imputing missing present-day interactions among the bZIPs and among proteins from five herpes viruses, and determining relative protein duplication order in the bZIP family. Our approach more accurately reconstructs ancestral interactions than existing approaches. In cross-validation tests, we find that our approach ranks the majority of the left-out present-day interactions among the top 2 and 17% of possible edges for the bZIP and herpes networks, respectively, making it a competitive approach for edge imputation. It also estimates relative bZIP protein duplication orders, using only interaction data and phylogenetic tree topology, which are significantly correlated with sequence-based estimates. The algorithm is implemented in C++, is open source and is available at http://www.cs.cmu.edu/ckingsf/software/parana2. Supplementary data are available at Bioinformatics online.
Finding Maximal Quasiperiodicities in Strings
Brodal, Gerth Stølting; Pedersen, Christian N. S.
2000-01-01
Apostolico and Ehrenfeucht defined the notion of a maximal quasiperiodic substring and gave an algorithm that finds all maximal quasiperiodic substrings in a string of length n in time O(n log² n). In this paper we give an algorithm that finds all maximal quasiperiodic substrings in a string of length n in time O(n log n) and space O(n). Our algorithm uses the suffix tree as the fundamental data structure combined with efficient methods for merging and performing multiple searches in search trees. Besides finding all maximal quasiperiodic substrings, our algorithm also marks the nodes ...
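The definition behind this abstract can be made concrete with a naive checker: a string is quasiperiodic when some proper prefix (a "cover") occurs so that its occurrences tile the whole string, overlaps allowed. This brute-force sketch only clarifies the definition; the paper's suffix-tree algorithm achieves the stated O(n log n) bound, which this code does not.

```python
# Naive illustration of quasiperiodicity: q is a cover of w if occurrences
# of q start at position 0, end at the last possible position, and every
# gap between consecutive occurrences is at most |q| (overlap or touch).

def is_cover(q, w):
    occ = [i for i in range(len(w) - len(q) + 1) if w.startswith(q, i)]
    if not occ or occ[0] != 0 or occ[-1] != len(w) - len(q):
        return False
    return all(b - a <= len(q) for a, b in zip(occ, occ[1:]))

def quasiperiods(w):
    """All proper prefixes of w that cover w."""
    return [w[:k] for k in range(1, len(w)) if is_cover(w[:k], w)]

print(quasiperiods("abaabaaba"))  # -> ['aba', 'abaaba']
```

Here "aba" covers "abaabaaba" via overlapping occurrences at positions 0, 3, and 6, which is exactly the kind of structure the suffix-tree method detects efficiently.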
de Queiroz, Kevin; Poe, Steven
2003-06-01
Kluge's (2001, Syst. Biol. 50:322-330) continued arguments that phylogenetic methods based on the statistical principle of likelihood are incompatible with the philosophy of science described by Karl Popper are based on false premises related to Kluge's misrepresentations of Popper's philosophy. Contrary to Kluge's conjectures, likelihood methods are not inherently verificationist; they do not treat every instance of a hypothesis as confirmation of that hypothesis. The historical nature of phylogeny does not preclude phylogenetic hypotheses from being evaluated using the probability of evidence. The low absolute probabilities of hypotheses are irrelevant to the correct interpretation of Popper's concept termed degree of corroboration, which is defined entirely in terms of relative probabilities. Popper did not advocate minimizing background knowledge; in any case, the background knowledge of both parsimony and likelihood methods consists of the general assumption of descent with modification and additional assumptions that are deterministic, concerning which tree is considered most highly corroborated. Although parsimony methods do not assume (in the sense of entailing) that homoplasy is rare, they do assume (in the sense of requiring to obtain a correct phylogenetic inference) certain things about patterns of homoplasy. Both parsimony and likelihood methods assume (in the sense of implying by the manner in which they operate) various things about evolutionary processes, although violation of those assumptions does not always cause the methods to yield incorrect phylogenetic inferences. Test severity is increased by sampling additional relevant characters rather than by character reanalysis, although either interpretation is compatible with the use of phylogenetic likelihood methods. Neither parsimony nor likelihood methods assess test severity (critical evidence) when used to identify a most highly corroborated tree(s) based on a single method or model and a
Maximum Parsimony and the Skewness Test: A Simulation Study of the Limits of Applicability
Määttä, Jussi; Roos, Teemu
2016-01-01
The maximum parsimony (MP) method for inferring phylogenies is widely used, but little is known about its limitations in non-asymptotic situations. This study employs large-scale computations with simulated phylogenetic data to estimate the probability that MP succeeds in finding the true phylogeny for up to twelve taxa and 256 characters. The set of candidate phylogenies are taken to be unrooted binary trees; for each simulated data set, the tree lengths of all (2n − 5)!! candidates are computed to evaluate quantities related to the performance of MP, such as the probability of finding the true phylogeny, the probability that the tree with the shortest length is unique, the probability that the true phylogeny has the shortest tree length, and the expected inverse of the number of trees sharing the shortest length. The tree length distributions are also used to evaluate and extend the skewness test of Hillis for distinguishing between random and phylogenetic data. The results indicate, for example, that the critical point after which MP achieves a success probability of at least 0.9 is roughly around 128 characters. The skewness test is found to perform well on simulated data and the study extends its scope to up to twelve taxa. PMID:27035667
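The exhaustive evaluation described in this abstract is feasible only because the candidate space, (2n − 5)!! unrooted binary trees on n taxa, is still enumerable at twelve taxa. A quick computation shows how fast that count grows:

```python
# Number of unrooted binary leaf-labelled trees on n taxa: (2n - 5)!!
# This growth is why exhaustive tree-length evaluation, as in the study,
# is only practical up to roughly a dozen taxa.

def double_factorial(m):
    out = 1
    while m > 1:
        out *= m
        m -= 2
    return out

def num_unrooted_trees(n):
    return double_factorial(2 * n - 5) if n >= 3 else 1

for n in (4, 8, 12):
    print(n, num_unrooted_trees(n))
# 4 -> 3, 8 -> 10395, 12 -> 654729075
```

At twelve taxa there are already about 6.5 × 10⁸ candidate topologies, each needing a tree-length computation per simulated data set.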
A temporal extension to the parsimonious covering theory.
Wainer, J; Rezende, A de M
1997-07-01
In this paper, parsimonious covering theory is extended in such a way that temporal knowledge can be accommodated. In addition to causally associating possible manifestations with disorders, temporal relationships about duration and the time elapsed before a manifestation comes into existence can be represented by a graph. Precise definitions of the solution of a temporal diagnostic problem as well as algorithms to compute the solutions are provided. The medical suitability of the extended parsimonious cover theory is studied in the domain of food-borne disease.
Statistical parsimony networks and species assemblages in Cephalotrichid nemerteans (Nemertea).
Chen, Haixia; Strand, Malin; Norenburg, Jon L; Sun, Shichun; Kajihara, Hiroshi; Chernyshev, Alexey V; Maslakova, Svetlana A; Sundberg, Per
2010-09-21
It has been suggested that statistical parsimony network analysis could be used to get an indication of species represented in a set of nucleotide data, and the approach has been used to discuss species boundaries in some taxa. Based on 635 base pairs of the mitochondrial protein-coding gene cytochrome c oxidase I (COI), we analyzed 152 nemertean specimens using statistical parsimony network analysis with the connection probability set to 95%. The analysis revealed 15 distinct networks together with seven singletons. Statistical parsimony yielded three networks supporting the species status of Cephalothrix rufifrons, C. major and C. spiralis as they currently have been delineated by morphological characters and geographical location. Many other networks contained haplotypes from nearby geographical locations. Cladistic structure by maximum likelihood analysis overall supported the network analysis, but indicated a false positive result where subnetworks should have been connected into one network/species. This is probably caused by undersampling of the intraspecific haplotype diversity. Statistical parsimony network analysis provides a rapid and useful tool for detecting possible undescribed/cryptic species among cephalotrichid nemerteans based on the COI gene. It should be combined with phylogenetic analysis to get indications of false positive results, i.e., subnetworks that would have been connected with more extensive haplotype sampling.
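The networks-and-singletons output described in this abstract can be sketched in simplified form: connect haplotypes whose pairwise differences fall within a mutational step limit and report the connected components. The real method (TCS-style statistical parsimony) derives that limit from a 95% connection probability; the fixed limit and the toy sequences below are stand-in assumptions for illustration.

```python
# Simplified core of a statistical parsimony network: link haplotypes whose
# Hamming distance is at most a fixed step limit, then report connected
# components ("networks"; size-1 components would be singletons).

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def networks(haps, limit):
    parent = list(range(len(haps)))          # union-find over haplotypes
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]    # path compression
            i = parent[i]
        return i
    for i in range(len(haps)):
        for j in range(i + 1, len(haps)):
            if hamming(haps[i], haps[j]) <= limit:
                parent[find(i)] = find(j)
    groups = {}
    for i in range(len(haps)):
        groups.setdefault(find(i), []).append(haps[i])
    return sorted(groups.values(), key=len, reverse=True)

haps = ["ACGT", "ACGA", "ACTA", "TTGG", "TTGC"]
print(networks(haps, limit=1))  # two networks, no singletons
```

With a one-step limit the toy data split into two networks, mirroring how distinct networks are read as candidate species boundaries.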
Statistical parsimony networks and species assemblages in Cephalotrichid nemerteans (Nemertea).
Haixia Chen
Full Text Available BACKGROUND: It has been suggested that statistical parsimony network analysis could be used to get an indication of species represented in a set of nucleotide data, and the approach has been used to discuss species boundaries in some taxa. METHODOLOGY/PRINCIPAL FINDINGS: Based on 635 base pairs of the mitochondrial protein-coding gene cytochrome c oxidase I (COI), we analyzed 152 nemertean specimens using statistical parsimony network analysis with the connection probability set to 95%. The analysis revealed 15 distinct networks together with seven singletons. Statistical parsimony yielded three networks supporting the species status of Cephalothrix rufifrons, C. major and C. spiralis as they currently have been delineated by morphological characters and geographical location. Many other networks contained haplotypes from nearby geographical locations. Cladistic structure by maximum likelihood analysis overall supported the network analysis, but indicated a false positive result where subnetworks should have been connected into one network/species. This is probably caused by undersampling of the intraspecific haplotype diversity. CONCLUSIONS/SIGNIFICANCE: Statistical parsimony network analysis provides a rapid and useful tool for detecting possible undescribed/cryptic species among cephalotrichid nemerteans based on the COI gene. It should be combined with phylogenetic analysis to get indications of false positive results, i.e., subnetworks that would have been connected with more extensive haplotype sampling.
K B Athreya
2009-09-01
It is shown that (i) every probability density is the unique maximizer of relative entropy in an appropriate class and (ii) in the class of all pdfs $f$ that satisfy $\int f h_i \, d\mu = \lambda_i$ for $i = 1, 2, \ldots, k$, the maximizer of entropy is an $f_0$ that is proportional to $\exp(\sum_i c_i h_i)$ for some choice of $c_i$. An extension of this to a continuum of constraints and many examples are presented.
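The exponential form in (ii) follows from a standard Lagrange-multiplier argument. A sketch, writing the moment constraints as $\int f h_i \, d\mu = \lambda_i$ (this notation is assumed, since the abstract's formula is garbled in transmission):

```latex
% Maximize the entropy  -\int f \ln f \, d\mu  subject to normalization and
% the k moment constraints, with Lagrange multipliers c_0, c_1, ..., c_k:
\mathcal{L}[f] = -\int f \ln f \, d\mu
  + c_0 \Big( \int f \, d\mu - 1 \Big)
  + \sum_{i=1}^{k} c_i \Big( \int f h_i \, d\mu - \lambda_i \Big)
% Stationarity in f gives  -\ln f - 1 + c_0 + \sum_i c_i h_i = 0,  hence
f_0 \;\propto\; \exp\!\Big( \sum_{i=1}^{k} c_i h_i \Big)
% with the c_i determined by the k moment constraints.
```

This is the familiar maximum-entropy exponential family; the abstract's contribution is the uniqueness statement and the extension to a continuum of constraints.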
Distribution patterns of Neotropical primates (Platyrrhini) based on Parsimony Analysis of Endemicity
A. Goldani
Full Text Available The Parsimony Analysis of Endemicity (PAE) is a method of historical biogeography that is used for detecting and connecting areas of endemism. Based on data on the distribution of Neotropical primates, we constructed matrices using quadrats, interfluvial regions and predetermined areas of endemism described for avians as Operative Geographic Units (OGUs). We codified the absence of a species from an OGU as 0 (zero) and its presence as 1 (one). A hypothetical area with a complete absence of primate species was used as outgroup to root the trees. All three analyses resulted in similar groupings of areas of endemism, which match the distribution of biomes in the Neotropical region. One area includes Central America and the extreme Northwest of South America, another the Amazon basin, and another the Atlantic Forest, Caatinga, Cerrado and Chaco.
Goldani, A; Carvalho, G S; Bicca-Marques, J C
2006-02-01
The Parsimony Analysis of Endemicity (PAE) is a method of historical biogeography that is used for detecting and connecting areas of endemism. Based on data on the distribution of Neotropical primates, we constructed matrices using quadrats, interfluvial regions and predetermined areas of endemism described for birds as Operative Geographic Units (OGUs). We codified the absence of a species from an OGU as 0 (zero) and its presence as 1 (one). A hypothetical area with a complete absence of primate species was used as an outgroup to root the trees. All three analyses resulted in similar groupings of areas of endemism, which match the distribution of biomes in the Neotropical region. One area includes Central America and the extreme northwest of South America, another the Amazon basin, and a third the Atlantic Forest, Caatinga, Cerrado and Chaco.
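The 0/1 coding with an all-absent outgroup described above can be made concrete. The sketch below uses an invented toy matrix (hypothetical OGU names and species counts, not the paper's data) and scores a candidate area cladogram with Fitch parsimony, the standard way PAE matrices are analyzed.

```python
# Hypothetical toy data: 4 OGUs scored for presence (1) / absence (0)
# of 3 species, plus the all-absent hypothetical outgroup "OUT".
matrix = {
    "Amazonia":  [1, 1, 0],
    "Atlantic":  [1, 0, 1],
    "Cerrado":   [1, 0, 1],
    "CentralAm": [0, 1, 0],
    "OUT":       [0, 0, 0],
}

def fitch(tree, char_index):
    """Fitch parsimony: return (state set, change count) for one 0/1 character."""
    if isinstance(tree, str):                       # leaf = OGU name
        return {matrix[tree][char_index]}, 0
    (ls, lc), (rs, rc) = (fitch(t, char_index) for t in tree)
    inter = ls & rs
    if inter:
        return inter, lc + rc
    return ls | rs, lc + rc + 1                     # disjoint sets force a change

# Candidate area cladogram rooted on the outgroup
cladogram = ("OUT", (("Amazonia", "CentralAm"), ("Atlantic", "Cerrado")))
total = sum(fitch(cladogram, i)[1] for i in range(3))
```

Competing area cladograms would be compared by this total score, the one with fewest changes being preferred.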
Danforth, B N; Sauquet, H; Packer, L
1999-12-01
We investigated higher-level phylogenetic relationships within the genus Halictus based on parsimony and maximum likelihood (ML) analysis of elongation factor-1alpha DNA sequence data. Our data set includes 41 OTUs representing 35 species of halictine bees from a diverse sample of outgroup genera and from the three widely recognized subgenera of Halictus (Halictus s.s., Seladonia, and Vestitohalictus). We analyzed 1513 total aligned nucleotide sites spanning three exons and two introns. Equal-weights parsimony analysis of the overall data set yielded 144 equally parsimonious trees. Major conclusions supported in this analysis (and in all subsequent analyses) included the following: (1) Thrincohalictus is the sister group to Halictus s.l., (2) Halictus s.l. is monophyletic, (3) Vestitohalictus renders Seladonia paraphyletic but together Seladonia + Vestitohalictus is monophyletic, (4) Michener's Groups 1 and 3 are monophyletic, and (5) Michener's Group 1 renders Group 2 paraphyletic. In order to resolve basal relationships within Halictus we applied various weighting schemes under parsimony (successive approximations character weighting and implied weights) and employed ML under 17 models of sequence evolution. Weighted parsimony yielded conflicting results but, in general, supported the hypothesis that Seladonia + Vestitohalictus is sister to Michener's Group 3 and renders Halictus s.s. paraphyletic. ML analyses using the GTR model with site-specific rates supported an alternative hypothesis: Seladonia + Vestitohalictus is sister to Halictus s.s. We mapped social behavior onto trees obtained under ML and parsimony in order to reconstruct the likely historical pattern of social evolution. Our results are unambiguous: the ancestral state for the genus Halictus is eusociality. Reversal to solitary behavior has occurred at least four times among the species included in our analysis. Copyright 1999 Academic Press.
Exactly computing the parsimony scores on phylogenetic networks using dynamic programming.
Kannan, Lavanya; Wheeler, Ward C
2014-04-01
Scoring a given phylogenetic network is the first step required in searching for the best evolutionary framework for a given dataset. Using the principle of maximum parsimony, we can score phylogenetic networks based on the minimum number of state changes, for each character, across a subset of edges of the network that are required to realize the input states at the leaves of the network. Two such subsets of edges are interesting in light of studying the evolutionary histories of datasets: (i) the set of all edges of the network, and (ii) the set of all edges of a spanning tree that minimizes the score. The problems of finding the parsimony scores under these two criteria define slightly different mathematical problems that are both NP-hard. In this article, we show that both problems, with scores generalized to adding substitution costs between states on the endpoints of the edges, can be solved exactly using dynamic programming. We show that our algorithms require O(m^p k) storage at each vertex (per character), where k is the number of states the character can take, p is the number of reticulate vertices in the network, m = k for the problem with edge set (i), and m = 2 for the problem with edge set (ii). This establishes an O(n m^p k^2) algorithm for both problems (n is the number of leaves in the network), which are extensions of Sankoff's algorithm for finding the parsimony scores for phylogenetic trees. We discuss improvements in the complexities and show that for phylogenetic networks whose underlying undirected graphs have disjoint cycles, the storage at each vertex can be reduced to O(mk), thus making the algorithm polynomial for this class of networks. We present some properties of the two approaches and guidance on choosing between the criteria, as well as how to traverse the network space using either definition. We show that our methodology provides an effective means to
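For reference, the tree version of Sankoff's dynamic program that these network algorithms extend can be sketched briefly: each node carries a vector of minimum costs, one per state, built bottom-up from its children through the substitution cost matrix. The toy tree, states, and unit costs below are invented for illustration.

```python
import math

def sankoff(tree, cost, leaf_state):
    """Return the min-cost vector over states at the root (Sankoff's DP)."""
    k = len(cost)
    if isinstance(tree, str):                        # leaf: 0 for observed state
        return [0 if s == leaf_state[tree] else math.inf for s in range(k)]
    left, right = (sankoff(t, cost, leaf_state) for t in tree)
    return [sum(min(child[t] + cost[s][t] for t in range(k))
                for child in (left, right))
            for s in range(k)]

unit = [[0, 1], [1, 0]]                              # unit substitution cost, k = 2
tree = (("A", "B"), ("C", "D"))
states = {"A": 0, "B": 0, "C": 1, "D": 1}
score = min(sankoff(tree, unit, states))             # parsimony score of the tree
```

The network variants in the abstract generalize this per-vertex vector to the O(m^p k) table over reticulation choices.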
Xiao-Lei Huang
2010-05-01
Parsimony analysis of endemicity (PAE) was used to identify areas of endemism (AOEs) for Chinese birds at the subregional level. Four AOEs were identified based on a distribution database of 105 endemic species and using 18 avifaunal subregions as the operating geographical units (OGUs). The four AOEs are the Qinghai-Zangnan Subregion, the Southwest Mountainous Subregion, the Hainan Subregion and the Taiwan Subregion. Cladistic analysis of subregions generally supports the division of China's avifauna into Palaearctic and Oriental realms. Two PAE area trees were produced from two different distribution datasets (from 1976 and 2007). The 1976 topology has four distinct subregional branches, whereas the 2007 topology has three. Moreover, three Palaearctic subregions in the 1976 tree clustered together with the Oriental subregions in the 2007 tree. Such topological differences may reflect changes in the distribution of bird species over roughly three decades.
Parsimonious extreme learning machine using recursive orthogonal least squares.
Wang, Ning; Er, Meng Joo; Han, Min
2014-10-01
Novel constructive and destructive parsimonious extreme learning machines (CP- and DP-ELM) are proposed in this paper. By virtue of the proposed ELMs, parsimonious structure and excellent generalization of multi-input multi-output single hidden-layer feedforward networks (SLFNs) are obtained. The proposed ELMs are developed by innovative decomposition of the recursive orthogonal least squares procedure into sequential partial orthogonalization (SPO). The salient features of the proposed approaches are as follows: 1) initial hidden nodes are randomly generated by the ELM methodology and recursively orthogonalized into an upper triangular matrix with dramatic reduction in matrix size; 2) the constructive SPO in the CP-ELM focuses on the partial matrix with the subcolumn of the selected regressor including nonzeros as the first column, while the destructive SPO in the DP-ELM operates on the partial matrix including elements determined by the removed regressor; 3) termination criteria for CP- and DP-ELM are simplified by the additional residual error reduction method; and 4) the output weights of the SLFN need not be solved in the model selection procedure and are instead derived from the final upper triangular equation by backward substitution. Both single- and multi-output real-world regression data sets are used to verify the effectiveness and superiority of the CP- and DP-ELM in terms of parsimonious architecture and generalization accuracy. Innovative applications to nonlinear time-series modeling demonstrate superior identification results.
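A baseline ELM, which the CP-/DP-ELM refine, is easy to sketch: the hidden layer is random and fixed, and only the output weights are solved, here by plain least squares rather than the paper's recursive orthogonal least squares. The target function, seed, and layer size below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression target (any smooth function serves the illustration)
X = np.linspace(-1, 1, 200).reshape(-1, 1)
y = np.sin(3 * X).ravel()

# ELM: random, untrained hidden layer; only output weights are fitted
n_hidden = 40
W = rng.normal(size=(1, n_hidden))           # random input weights
b = rng.normal(size=n_hidden)                # random biases
H = 1.0 / (1.0 + np.exp(-(X @ W + b)))       # sigmoid hidden activations

# Output weights via least squares (the paper replaces this step with the
# SPO-decomposed recursive orthogonal least squares to prune hidden nodes)
beta, *_ = np.linalg.lstsq(H, y, rcond=None)
rmse = float(np.sqrt(np.mean((H @ beta - y) ** 2)))
```

The constructive/destructive variants would then add or remove columns of H while monitoring the residual error reduction.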
Haseeb A. Khan
2008-01-01
This investigation aimed to compare the inference of antelope phylogenies resulting from the 16S rRNA, cytochrome-b (cyt-b) and d-loop segments of mitochondrial DNA using three different computational models: Bayesian (BA), maximum parsimony (MP) and unweighted pair group method with arithmetic mean (UPGMA). The respective nucleotide sequences of three Oryx species (Oryx leucoryx, Oryx dammah and Oryx gazella) and an outgroup (Addax nasomaculatus) were aligned and subjected to BA, MP and UPGMA models for comparing the topologies of the respective phylogenetic trees. The 16S rRNA region possessed the highest frequency of conserved sequences (97.65%), followed by cyt-b (94.22%) and d-loop (87.29%). There were few transitions (2.35%) and no transversions in 16S rRNA as compared to cyt-b (5.61% transitions and 0.17% transversions) and d-loop (11.57% transitions and 1.14% transversions) when comparing the four taxa. All three mitochondrial segments clearly differentiated the genus Addax from Oryx using the BA or UPGMA models. The topologies of all the gamma-corrected Bayesian trees were identical irrespective of the marker type. The UPGMA trees resulting from 16S rRNA and d-loop sequences were also identical (Oryx dammah grouped with Oryx leucoryx) to the Bayesian trees, except that the UPGMA tree based on cyt-b showed a slightly different phylogeny (Oryx dammah grouped with Oryx gazella) with low bootstrap support. However, the MP model failed to differentiate the genus Addax from Oryx. These findings demonstrate the efficiency and robustness of the BA and UPGMA methods for phylogenetic analysis of antelopes using mitochondrial markers.
Unimodular trees versus Einstein trees
Alvarez, Enrique; Gonzalez-Martin, Sergio [Universidad Autonoma, Instituto de Fisica Teorica, IFT-UAM/CSIC, Madrid (Spain); Universidad Autonoma de Madrid, Departamento de Fisica Teorica, Madrid (Spain); Martin, Carmelo P. [Universidad Complutense de Madrid (UCM), Departamento de Fisica Teorica I Facultad de Ciencias Fisicas, Madrid (Spain)
2016-10-15
The maximally helicity violating tree-level scattering amplitudes involving three, four or five gravitons are worked out in Unimodular Gravity. They are found to coincide with the corresponding amplitudes in General Relativity. This is a remarkable result, insofar as both the propagators and the vertices are quite different in the two theories. (orig.)
Unimodular Trees versus Einstein Trees
Alvarez, Enrique; Martin, Carmelo P
2016-01-01
The maximally helicity violating (MHV) tree-level scattering amplitudes involving three, four or five gravitons are worked out in Unimodular Gravity. They are found to coincide with the corresponding amplitudes in General Relativity. This is a remarkable result, insofar as both the propagators and the vertices are quite different in the two theories.
Unimodular trees versus Einstein trees
Álvarez, Enrique; González-Martín, Sergio; Martín, Carmelo P.
2016-10-01
The maximally helicity violating tree-level scattering amplitudes involving three, four or five gravitons are worked out in Unimodular Gravity. They are found to coincide with the corresponding amplitudes in General Relativity. This is a remarkable result, insofar as both the propagators and the vertices are quite different in the two theories.
Using genes as characters and a parsimony analysis to explore the phylogenetic position of turtles.
Bin Lu
The phylogenetic position of turtles within the vertebrate tree of life remains controversial. Conflicting conclusions from different studies are likely a consequence of systematic error in the tree construction process, rather than random error from small amounts of data. Using genomic data, we evaluate the phylogenetic position of turtles with both conventional concatenated data analysis and a "genes as characters" approach. Two datasets were constructed, one with seven species (human, opossum, zebra finch, chicken, green anole, Chinese pond turtle, and western clawed frog) and 4584 orthologous genes, and the second with four additional species (soft-shelled turtle, Nile crocodile, royal python, and tuatara) but only 1638 genes. Our concatenated data analysis strongly supported turtle as the sister-group to archosaurs (the archosaur hypothesis), similar to several recent genomic data based studies using similar methods. When using genes as characters and gene trees as character-state trees with equal weighting for each gene, however, our parsimony analysis suggested that turtles are possibly sister-group to diapsids, archosaurs, or lepidosaurs. None of these resolutions were strongly supported by bootstraps. Furthermore, our incongruence analysis clearly demonstrated that there is a large amount of inconsistency among genes and most of the conflict relates to the placement of turtles. We conclude that the uncertain placement of turtles is a reflection of the true state of nature. Concatenated data analysis of large and heterogeneous datasets likely suffers from systematic error and over-estimates of confidence as a consequence of a large number of characters. Using genes as characters offers an alternative for phylogenomic analysis. It has potential to reduce systematic error, such as data heterogeneity and long-branch attraction, and it can also avoid problems associated with computation time and model selection. Finally, treating genes as
Using genes as characters and a parsimony analysis to explore the phylogenetic position of turtles.
Lu, Bin; Yang, Weizhao; Dai, Qiang; Fu, Jinzhong
2013-01-01
The phylogenetic position of turtles within the vertebrate tree of life remains controversial. Conflicting conclusions from different studies are likely a consequence of systematic error in the tree construction process, rather than random error from small amounts of data. Using genomic data, we evaluate the phylogenetic position of turtles with both conventional concatenated data analysis and a "genes as characters" approach. Two datasets were constructed, one with seven species (human, opossum, zebra finch, chicken, green anole, Chinese pond turtle, and western clawed frog) and 4584 orthologous genes, and the second with four additional species (soft-shelled turtle, Nile crocodile, royal python, and tuatara) but only 1638 genes. Our concatenated data analysis strongly supported turtle as the sister-group to archosaurs (the archosaur hypothesis), similar to several recent genomic data based studies using similar methods. When using genes as characters and gene trees as character-state trees with equal weighting for each gene, however, our parsimony analysis suggested that turtles are possibly sister-group to diapsids, archosaurs, or lepidosaurs. None of these resolutions were strongly supported by bootstraps. Furthermore, our incongruence analysis clearly demonstrated that there is a large amount of inconsistency among genes and most of the conflict relates to the placement of turtles. We conclude that the uncertain placement of turtles is a reflection of the true state of nature. Concatenated data analysis of large and heterogeneous datasets likely suffers from systematic error and over-estimates of confidence as a consequence of a large number of characters. Using genes as characters offers an alternative for phylogenomic analysis. It has potential to reduce systematic error, such as data heterogeneity and long-branch attraction, and it can also avoid problems associated with computation time and model selection. Finally, treating genes as characters provides a
A Parsimonious and Universal Description of Turbulent Velocity Increments
Barndorff-Nielsen, O.E.; Blæsild, P.; Schmiegel, J.
This paper proposes a reformulation and extension of the concept of Extended Self-Similarity. In support of this new hypothesis, we discuss an analysis of the probability density function (pdf) of turbulent velocity increments based on the class of normal inverse Gaussian distributions. It allows for a parsimonious description of velocity increments that covers the whole range of amplitudes and all accessible scales from the finest resolution up to the integral scale. The analysis is performed for three different data sets obtained from a wind tunnel experiment, a free-jet experiment and an atmospheric...
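The normal inverse Gaussian density has a closed form involving a modified Bessel function. The sketch below assumes the standard (alpha, beta, mu, delta) parameterization with gamma = sqrt(alpha^2 - beta^2) and simply checks normalization numerically; the parameter values are illustrative, not fitted to any turbulence data.

```python
import numpy as np
from scipy.special import kv  # modified Bessel function of the second kind

def nig_pdf(x, alpha, beta, mu, delta):
    """Normal inverse Gaussian density (assumed standard parameterization)."""
    gamma = np.sqrt(alpha**2 - beta**2)
    q = np.sqrt(delta**2 + (x - mu)**2)
    return (alpha * delta / np.pi) * np.exp(delta * gamma + beta * (x - mu)) \
        * kv(1, alpha * q) / q

# Symmetric example; semi-heavy tails (~exponential) are the NIG signature
x = np.linspace(-20, 20, 20001)
f = nig_pdf(x, alpha=2.0, beta=0.0, mu=0.0, delta=1.0)
mass = float(np.sum((f[1:] + f[:-1]) * 0.5 * np.diff(x)))   # trapezoid rule
```

Fitting this family across scales is what yields the parsimonious description of increment pdfs the abstract refers to.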
Parsimony score of phylogenetic networks: hardness results and a linear-time heuristic.
Jin, Guohua; Nakhleh, Luay; Snir, Sagi; Tuller, Tamir
2009-01-01
Phylogenies, the evolutionary histories of groups of organisms, play a major role in representing the interrelationships among biological entities. Many methods for reconstructing and studying such phylogenies have been proposed, almost all of which assume that the underlying history of a given set of species can be represented by a binary tree. Although many biological processes can be effectively modeled and summarized in this fashion, others cannot: recombination, hybrid speciation, and horizontal gene transfer result in networks of relationships rather than trees of relationships. In previous works, we formulated a maximum parsimony (MP) criterion for reconstructing and evaluating phylogenetic networks, and demonstrated its quality on biological as well as synthetic data sets. In this paper, we provide further theoretical results as well as a very fast heuristic algorithm for the MP criterion of phylogenetic networks. In particular, we provide a novel combinatorial definition of phylogenetic networks in terms of "forbidden cycles," and provide detailed hardness and hardness of approximation proofs for the "small" MP problem. We demonstrate the performance of our heuristic in terms of time and accuracy on both biological and synthetic data sets. Finally, we explain the difference between our model and a similar one formulated by Nguyen et al., and describe the implications of this difference on the hardness and approximation results.
A Parsimonious Bootstrap Method to Model Natural Inflow Energy Series
Fernando Luiz Cyrino Oliveira
2014-01-01
The Brazilian energy generation and transmission system is quite peculiar in its dimension and characteristics. As such, it can be considered unique in the world. It is a high-dimension hydrothermal system with huge participation of hydro plants. Such strong dependency on hydrological regimes implies uncertainties in energy planning, requiring adequate modeling of the hydrological time series. This is carried out via stochastic simulations of monthly inflow series using the family of Periodic Autoregressive models, PAR(p), one for each period (month) of the year. This paper shows the problems in fitting these models in the current system, particularly the identification of the autoregressive order p and the corresponding parameter estimation. We then propose a new approach to set both the model order and the parameter estimates of the PAR(p) models, using a nonparametric computational technique known as the bootstrap. This technique allows the estimation of reliable confidence intervals for the model parameters. The results obtained using the Parsimonious Bootstrap Method of Moments (PBMOM) produced not only more parsimonious model orders but also adherent stochastic scenarios and, in the long range, lead to a better use of water resources in energy operation planning.
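The full periodic PAR(p) machinery is beyond a short sketch, but the core idea of bootstrapping confidence intervals for autoregressive parameters can be shown on an AR(1) residual bootstrap. The series, coefficient, seed, and replication count below are all illustrative assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate an AR(1) series with an assumed illustrative coefficient
phi_true, n = 0.7, 400
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi_true * y[t - 1] + rng.normal()

def fit_ar1(series):
    x, z = series[:-1], series[1:]
    return float(np.dot(x, z) / np.dot(x, x))      # OLS slope, no intercept

phi_hat = fit_ar1(y)
resid = y[1:] - phi_hat * y[:-1]

# Residual bootstrap: rebuild the series from resampled residuals, refit
boot = []
for _ in range(500):
    e = rng.choice(resid, size=n - 1, replace=True)
    yb = np.zeros(n)
    for t in range(1, n):
        yb[t] = phi_hat * yb[t - 1] + e[t - 1]
    boot.append(fit_ar1(yb))
lo, hi = np.percentile(boot, [2.5, 97.5])          # 95% bootstrap interval
```

A PBMOM-style procedure would apply the same resampling logic per month to choose the order p and its parameters.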
Fossils impact as hard as living taxa in parsimony analyses of morphology.
Cobbett, Andrea; Wilkinson, Mark; Wills, Matthew A
2007-10-01
Systematists disagree whether data from fossils should be included in parsimony analyses. In a handful of well-documented cases, the addition of fossil data radically overturns a hypothesis of relationships based on extant taxa alone. Fossils can break up long branches and preserve character combinations closer in time to deep splitting events. However, fossils usually require more interpretation than extant taxa, introducing greater potential for spurious codings. Moreover, because fossils often have more "missing" codings, they are frequently accused of increasing the number of MPTs, frustrating resolution and reducing support. Despite the controversy, remarkably little is known about the effects of fossils more generally. Here we provide the first systematic study, investigating empirically the behavior of fossil and extant taxa in 45 published morphological data sets. First-order jackknifing is used to determine the effects that each terminal has on inferred relationships, on the number of MPTs, and on CI' and RI as measures of homoplasy. Bootstrap leaf stabilities provide a proxy for the contribution of individual taxa to the branch support in the rest of the tree. There is no significant difference in the impact of fossil versus extant taxa on relationships, numbers of MPTs, and CI' or RI. However, adding individual fossil taxa is more likely to reduce the total branch support of the tree than adding extant taxa. This must be weighed against the superior taxon sampling afforded by including judiciously coded fossils, providing data from otherwise unsampled regions of the tree. We therefore recommend that investigators should include fossils, in the absence of compelling and case-specific reasons for their exclusion.
A Practical pedestrian approach to parsimonious regression with inaccurate inputs
Seppo Karrila
2014-04-01
A measurement result often dictates an interval containing the correct value. Interval data are also created by roundoff, truncation, and binning. We focus on such common interval uncertainty in data. Inaccuracy in model inputs is typically ignored in model fitting. We provide a practical approach for regression with inaccurate data: the mathematics is easy, and the linear programming formulations are simple to use, even in a spreadsheet. This self-contained elementary presentation introduces interval linear systems and requires only basic knowledge of algebra. Feature selection is automatic, but can be controlled to find only a few most relevant inputs, and joint feature selection is enabled for multiple modeled outputs. With more features than cases, a novel connection to compressed sensing emerges: robustness against interval errors-in-variables implies model parsimony, and the input inaccuracies determine the regularization term. A small numerical example highlights counterintuitive results and a dramatic difference from total least squares.
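One piece of this program is easy to sketch: parsimonious (L1-minimal) regression constrained so that every prediction lands inside its output interval, posed as a linear program. This is a simplified reading that treats only output intervals; the paper's handling of interval inputs and the spreadsheet formulation are omitted, and the data are invented.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)

# Toy interval data: the true model uses only the first of five features
X = rng.normal(size=(30, 5))
y = 2.0 * X[:, 0]
lo, hi = y - 0.1, y + 0.1          # measurement intervals around each output

# Minimize ||w||_1 subject to lo <= X w <= hi, with w = wp - wm, wp, wm >= 0
d = X.shape[1]
c = np.ones(2 * d)                 # objective: sum of wp + wm = ||w||_1
A = np.vstack([np.hstack([X, -X]),     #  X w <= hi
               np.hstack([-X, X])])    # -X w <= -lo
b = np.concatenate([hi, -lo])
res = linprog(c, A_ub=A, b_ub=b, bounds=(0, None))
w = res.x[:d] - res.x[d:]
```

The L1 objective drives the irrelevant coefficients toward zero, which is the automatic feature selection the abstract describes.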
A large version of the small parsimony problem
Fredslund, Jakob; Hein, Jotun; Scharling, Tejs
2003-01-01
Given a multiple alignment over $k$ sequences, an evolutionary tree relating the sequences, and a subadditive gap penalty function (e.g. an affine function), we reconstruct the internal nodes of the tree optimally: we find the optimal explanation in terms of indels of the observed gaps and find...... case time. E.g. for a tree with nine leaves and a random alignment of length 10.000 with 60% gaps, the running time is on average around 45 seconds. For a real alignment of length 9868 of nine HIV-1 sequences, the running time is less than one second....
A Large Version of the Small Parsimony Problem
Fredslund, Jacob; Hein, Jotun; Scharling, Tejs
2003-01-01
Given a multiple alignment over k sequences, an evolutionary tree relating the sequences, and a subadditive gap penalty function (e.g. an affine function), we reconstruct the internal nodes of the tree optimally: we find the optimal explanation in terms of indels of the observed gaps and find...... case time. E.g. for a tree with nine leaves and a random alignment of length 10.000 with 60% gaps, the running time is on average around 45 seconds. For a real alignment of length 9868 of nine HIV-1 sequences, the running time is less than one second....
Multivariate residues and maximal unitarity
Søgaard, Mads; Zhang, Yang
2013-12-01
We extend the maximal unitarity method to amplitude contributions whose cuts define multidimensional algebraic varieties. The technique is valid to all orders and is explicitly demonstrated at three loops in gauge theories with any number of fermions and scalars in the adjoint representation. Deca-cuts realized by replacement of real slice integration contours by higher-dimensional tori encircling the global poles are used to factorize the planar triple box onto a product of trees. We apply computational algebraic geometry and multivariate complex analysis to derive unique projectors for all master integral coefficients and obtain compact analytic formulae in terms of tree-level data.
Integrating Leadership Models Toward a More Comprehensive and Parsimonious Model
Miswanto Miswanti
2016-06-01
ABSTRACT: Through the leadership model offered by Locke et al. (1991), we can say that how good the vision of leaders in an organization is depends on the quality of the leaders' motives and traits, knowledge, skills, and abilities. In turn, how well leaders implement the vision depends on their motives and traits, knowledge, skills, abilities, and the vision itself. Strategic Leadership by Davies (1991) states that when the vision is implemented through strategic leadership, its meaning is much more complete than what Locke et al. describe in the fourth stage of leadership. Thus, the vision-implementation aspect of Locke et al. (1991) is incomplete compared with Davies (1991). With these considerations in mind, this article attempts to combine the leadership model of Locke et al. and the strategic leadership model of Davies. This modification is expected to yield an improved leadership model that is more comprehensive and parsimonious.
SEAPODYM-LTL: a parsimonious zooplankton dynamic biomass model
Conchon, Anna; Lehodey, Patrick; Gehlen, Marion; Titaud, Olivier; Senina, Inna; Séférian, Roland
2017-04-01
Mesozooplankton organisms are of critical importance for the understanding of the early life history of most fish stocks, as well as the nutrient cycles in the ocean. Ongoing climate change and the need for improved approaches to the management of living marine resources have driven recent advances in zooplankton modelling. The classical modelling approach tends to describe the whole biogeochemical and plankton cycle with increasing complexity. We propose here a different and parsimonious zooplankton dynamic biomass model (SEAPODYM-LTL) that is cost-efficient and can be advantageously coupled with primary production estimated either from satellite-derived ocean color data or biogeochemical models. In addition, the adjoint code of the model is developed, allowing a robust optimization approach for estimating the few parameters of the model. In this study, we run the first optimization experiments using a global database of climatological zooplankton biomass data and we make a comparative analysis to assess the importance of resolution and primary production inputs on model fit to observations. We also compare SEAPODYM-LTL outputs to those produced by a more complex biogeochemical model (PISCES) but sharing the same physical forcings.
Frederick H. Sheldon
2013-03-01
Insertion/deletion (indel) mutations, which are represented by gaps in multiple sequence alignments, have been used to examine phylogenetic hypotheses for some time. However, most analyses combine gap data with the nucleotide sequences in which they are embedded, probably because most phylogenetic datasets include few gap characters. Here, we report analyses of 12,030 gap characters from an alignment of avian nuclear genes using maximum parsimony (MP) and a simple maximum likelihood (ML) framework. Both trees were similar, and they exhibited almost all of the strongly supported relationships in the nucleotide tree, although neither gap tree supported many relationships that have proven difficult to recover in previous studies. Moreover, independent lines of evidence typically corroborated the nucleotide topology instead of the gap topology when they disagreed, although the number of conflicting nodes with high bootstrap support was limited. Filtering to remove short indels did not substantially reduce homoplasy or conflict. Combined analyses of nucleotides and gaps resulted in the nucleotide topology, but with increased support, suggesting that gap data may prove most useful when analyzed in combination with nucleotide substitutions.
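Extracting gap characters from an alignment can be sketched with a simplified version of gap coding: each distinct gap run (same start and end coordinates) becomes one presence/absence character. The mini-alignment is invented, and full simple indel coding would additionally treat nested or overlapping gaps as inapplicable, which this sketch omits.

```python
# Hypothetical mini-alignment; '-' marks alignment gaps
aln = {
    "taxon1": "ACG--TACGT",
    "taxon2": "ACG--TAC-T",
    "taxon3": "ACGT-TACGT",
    "taxon4": "ACGTTTACGT",
}

def gap_runs(seq):
    """Return the set of (start, end) coordinates of gap runs in one sequence."""
    runs, start = set(), None
    for i, ch in enumerate(seq + "X"):       # sentinel closes a trailing run
        if ch == "-" and start is None:
            start = i
        elif ch != "-" and start is not None:
            runs.add((start, i))
            start = None
    return runs

per_taxon = {t: gap_runs(s) for t, s in aln.items()}
characters = sorted(set().union(*per_taxon.values()))   # one column per gap
matrix = {t: [int(c in runs) for c in characters]
          for t, runs in per_taxon.items()}
```

The resulting 0/1 matrix is what a parsimony or ML analysis of gap characters, as in the abstract, would take as input.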
Junction trees of general graphs
Xiaofei WANG; Jianhua GUO
2008-01-01
In this paper, we study the maximal prime subgraphs and their corresponding structure for any undirected graph. We introduce the notion of junction trees and investigate their structural characteristics, including junction properties, induced-subtree properties, running-intersection properties and maximum-weight spanning tree properties. Furthermore, the characteristics of leaves and edges of junction trees are discussed.
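The maximum-weight spanning tree property mentioned above is constructive: a junction tree can be built by running a maximizing Kruskal pass over candidate edges weighted by separator size. The sketch below uses invented cliques as stand-ins for the maximal prime subgraphs of a decomposed graph.

```python
from itertools import combinations

# Hypothetical maximal prime subgraphs (here simple cliques) of a graph
cliques = [frozenset("AB"), frozenset("BCD"), frozenset("CDE"), frozenset("DF")]

# Candidate edges weighted by separator size, largest first
edges = sorted(((len(a & b), a, b) for a, b in combinations(cliques, 2)),
               key=lambda e: e[0], reverse=True)

# Kruskal with union-find: greedily keep edges that join distinct components
parent = {c: c for c in cliques}
def find(c):
    while parent[c] != c:
        c = parent[c]
    return c

tree = []
for w, a, b in edges:
    if w > 0 and find(a) != find(b):
        parent[find(a)] = find(b)
        tree.append((a, b))          # edge of the junction tree
```

Maximizing separator weight is what guarantees the running-intersection property: every variable shared by two nodes appears on the whole path between them.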
Dianfeng Liu; Zimei Dong; Yanze Gu; Lingxia Tao
2008-01-01
We studied patterns of distribution and relationships among distributional areas of Tetrigidae insects in China using parsimony analysis of endemism (PAE). We constructed a matrix based on distribution data for Chinese Tetrigidae insects and an area cladogram using the northeastern China area as an outgroup. Exhaustive searches were conducted under the maximum parsimony criterion. Cluster analysis divided eight biogeographic areas into four groups; group 1 was composed of northeast China, group 2 ...
Profit maximization mitigates competition
Dierker, Egbert; Grodal, Birgit
1996-01-01
We consider oligopolistic markets in which the notion of shareholders' utility is well-defined and compare the Bertrand-Nash equilibria in case of utility maximization with those under the usual profit maximization hypothesis. Our main result states that profit maximization leads to less price competition than utility maximization. Since profit maximization tends to raise prices, it may be regarded as beneficial for the owners as a whole. Moreover, if profit maximization is a good proxy for utility maximization, then there is no need for a general equilibrium analysis that takes the distribution of profits among consumers fully into account, and partial equilibrium analysis suffices.
Parsimonious Hydrologic and Nitrate Response Models For Silver Springs, Florida
Klammler, Harald; Yaquian-Luna, Jose Antonio; Jawitz, James W.; Annable, Michael D.; Hatfield, Kirk
2014-05-01
Silver Springs, with an approximate discharge of 25 m3/sec, is one of Florida's first magnitude springs and among the largest springs worldwide. Its 2500-km2 springshed overlies the mostly unconfined Upper Floridan Aquifer. The aquifer is approximately 100 m thick and predominantly consists of porous, fractured and cavernous limestone, which leads to excellent surface drainage properties (no major stream network other than Silver Springs run) and complex groundwater flow patterns through both rock matrix and fast conduits. Over the past few decades, discharge from Silver Springs has been observed to slowly but continuously decline, while nitrate concentrations in the spring water have enormously increased from a background level of 0.05 mg/l to over 1 mg/l. In combination with concurrent increases in algae growth and turbidity, for example, and despite an otherwise relatively stable water quality, this has given rise to concerns about the ecological equilibrium in and near the spring run as well as possible impacts on tourism. The purpose of the present work is to develop parsimonious lumped-parameter models that may be used by resource managers for evaluating the springshed's hydrologic and nitrate transport responses. Instead of attempting to explicitly consider the complex hydrogeologic features of the aquifer in a typical numerical and/or stochastic approach, we use a transfer function approach wherein input signals (i.e., time series of groundwater recharge and nitrate loading) are transformed into output signals (i.e., time series of spring discharge and spring nitrate concentrations) by some linear and time-invariant law. The dynamic response types and parameters are inferred from comparing input and output time series in the frequency domain (e.g., after Fourier transformation). Results are converted into impulse (or step) response functions, which describe at what time and to what magnitude a unitary change in input manifests at the output. For the
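The frequency-domain identification step can be sketched on noise-free synthetic data: divide the output spectrum by the input spectrum to estimate the transfer function, then invert back to an impulse response. The recharge-like input, exponential kernel, and circular convolution below are illustrative assumptions; real series would need spectral smoothing and noise handling.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic springshed: output = input (circularly) convolved with an
# assumed exponential, recession-like impulse response
n = 512
x = rng.normal(size=n) + 5.0                  # recharge-like input series
h_true = 0.3 * np.exp(-0.3 * np.arange(40))   # assumed impulse response
h_pad = np.zeros(n)
h_pad[:40] = h_true
y = np.fft.irfft(np.fft.rfft(x) * np.fft.rfft(h_pad), n)   # output series

# Transfer-function estimate: divide spectra, invert to the time domain
H = np.fft.rfft(y) / np.fft.rfft(x)
h_est = np.fft.irfft(H, n)[:40]
```

The recovered h_est plays the role of the impulse response function in the abstract: it states when and how strongly a unit change in recharge shows up in spring discharge.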
Tarasov, Sergei; Génier, François
2015-01-01
Scarabaeine dung beetles are the dominant dung-feeding group of insects and are widely used as model organisms in conservation, ecology and developmental biology. Due to the conflicts among 13 recently published phylogenies dealing with the higher-level relationships of dung beetles, the phylogeny of this lineage remains largely unresolved. In this study, we conduct rigorous phylogenetic analyses of dung beetles, based on an unprecedented taxon sample (110 taxa) and a detailed investigation of morphology (205 characters). We provide a description of the morphology and thoroughly illustrate the characters used. Along with parsimony, traditionally used in the analysis of morphological data, we also apply the Bayesian method with a novel approach that uses anatomy ontology for matrix partitioning. This approach allows for heterogeneity in evolutionary rates among characters from different anatomical regions. The anatomy ontology generates a number of parameter-partition schemes, which we compare using Bayes factors. We also test the effect of the inclusion of autapomorphies in the morphological analysis, which hitherto has not been examined. Generally, schemes with more parameters were favored in the Bayesian comparison, suggesting that characters located on different body regions evolve at different rates and that partitioning of the data matrix using anatomy ontology is reasonable; however, trees from the parsimony and all the Bayesian analyses were quite consistent. The hypothesized phylogeny reveals many novel clades and provides additional support for some clades recovered in previous analyses. Our results provide a solid basis for a new classification of dung beetles, in which the taxonomic limits of the tribes Dichotomiini, Deltochilini and Coprini are restricted and many new tribes must be described. Based on the consistency of the phylogeny with biogeography, we speculate that dung beetles may have originated in the Mesozoic, contrary to the traditional view pointing to a
Maximally incompatible quantum observables
Heinosaari, Teiko, E-mail: teiko.heinosaari@utu.fi [Turku Centre for Quantum Physics, Department of Physics and Astronomy, University of Turku, FI-20014 Turku (Finland); Schultz, Jussi, E-mail: jussi.schultz@gmail.com [Dipartimento di Matematica, Politecnico di Milano, Piazza Leonardo da Vinci 32, I-20133 Milano (Italy); Toigo, Alessandro, E-mail: alessandro.toigo@polimi.it [Dipartimento di Matematica, Politecnico di Milano, Piazza Leonardo da Vinci 32, I-20133 Milano (Italy); Istituto Nazionale di Fisica Nucleare, Sezione di Milano, Via Celoria 16, I-20133 Milano (Italy); Ziman, Mario, E-mail: ziman@savba.sk [RCQI, Institute of Physics, Slovak Academy of Sciences, Dúbravská cesta 9, 84511 Bratislava (Slovakia); Faculty of Informatics, Masaryk University, Botanická 68a, 60200 Brno (Czech Republic)
2014-05-01
The existence of maximally incompatible quantum observables, in the sense of a minimal joint measurability region, is investigated. Employing the universal quantum cloning device, it is argued that only infinite-dimensional quantum systems can accommodate maximal incompatibility. It is then shown that two of the most common pairs of complementary observables (position and momentum; number and phase) are maximally incompatible.
Does random tree puzzle produce Yule-Harding trees in the many-taxon limit?
Zhu, Sha; Steel, Mike
2013-05-01
It has been suggested that a random tree puzzle (RTP) process leads to a Yule-Harding (YH) distribution when the number of taxa becomes large. In this study, we formalize this conjecture, and we prove that the two tree distributions converge for two particular properties, which suggests that the conjecture may be true. However, we present statistical evidence that, while the two distributions are close, the RTP appears to converge on a different distribution than does the YH. By way of contrast, in the concluding section we show that the maximum parsimony method applied to random two-state data leads to a very different (PDA, or uniform) distribution on trees.
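The Yule-Harding process mentioned above is easy to simulate: grow a tree by repeatedly picking a uniformly random leaf and splitting it in two. A minimal sketch (representation and function name are our own, not from the paper):

```python
import random

def yule_harding_tree(n, rng):
    """Grow a random tree topology under the Yule-Harding model:
    repeatedly pick a uniformly random leaf and split it into two.
    Returns a children map (node id -> list of child ids) and the leaf ids."""
    children = {0: []}
    leaves, next_id = [0], 1
    while len(leaves) < n:
        v = leaves.pop(rng.randrange(len(leaves)))  # uniform leaf choice
        kids = [next_id, next_id + 1]
        next_id += 2
        children[v] = kids
        for k in kids:
            children[k] = []
        leaves.extend(kids)
    return children, leaves

rng = random.Random(42)
children, leaves = yule_harding_tree(10, rng)
```

Repeating this many times and tabulating topology frequencies is one way to compare the YH distribution against trees produced by an RTP process.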
刘丽娟; 陈果
2012-01-01
A one-class classification algorithm with multiple hyper-spheres, based on a maximal tree clustering algorithm, is presented. The training samples are first clustered into several sub-classes by the maximal tree clustering algorithm; each sub-class is then trained separately using a one-class SVM (OC-SVM), yielding a multi-hyper-sphere classification model. The new method was applied to a simulated data set, UCI data sets and rotor fault diagnosis, and the results show its effectiveness.
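The two-stage pipeline described above can be illustrated with a small sketch. One common reading of "maximal tree clustering" is: build a maximum spanning tree on pairwise similarity and cut the weak edges; the components are the sub-classes. The per-sub-class model here is a simple centroid-and-radius sphere standing in for an OC-SVM; the similarity measure, threshold, and data are all assumptions for illustration.

```python
import math

def maximal_tree_clusters(points, threshold):
    """'Maximal tree' clustering: build a maximum spanning tree on pairwise
    similarity (here 1/(1+Euclidean distance)), cut every tree edge whose
    similarity falls below the threshold, and return the connected
    components as sub-classes."""
    n = len(points)

    def sim(i, j):
        return 1.0 / (1.0 + math.dist(points[i], points[j]))

    # Prim's algorithm for the maximum spanning tree on similarity.
    in_tree, edges = {0}, []
    while len(in_tree) < n:
        i, j = max(((a, b) for a in in_tree for b in range(n) if b not in in_tree),
                   key=lambda e: sim(*e))
        edges.append((i, j, sim(i, j)))
        in_tree.add(j)

    # Keep only strong edges; union-find yields the components.
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for i, j, s in edges:
        if s >= threshold:
            parent[find(i)] = find(j)
    groups = {}
    for k in range(n):
        groups.setdefault(find(k), []).append(k)
    return list(groups.values())

def hypersphere(points, idx):
    """Centroid-plus-radius sphere for one sub-class: a crude stand-in
    for training an OC-SVM on that sub-class."""
    pts = [points[i] for i in idx]
    dims = range(len(pts[0]))
    center = tuple(sum(p[d] for p in pts) / len(pts) for d in dims)
    radius = max(math.dist(center, p) for p in pts)
    return center, radius

pts = [(0, 0), (0.1, 0), (0, 0.1), (5, 5), (5.1, 5), (5, 5.1)]
clusters = maximal_tree_clusters(pts, threshold=0.5)
spheres = [hypersphere(pts, c) for c in clusters]
```

A new sample is then accepted if it falls inside any of the spheres; using several small spheres instead of one large one is what lets the model cover a multi-modal target class tightly.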
A bicriterion Steiner tree problem on graph
Vujošević Mirko B.
2003-01-01
This paper presents a formulation of the bicriterion Steiner tree problem, stated as the task of finding a Steiner tree with maximal capacity and minimal length. It is treated as a lexicographic multicriteria problem: the bottleneck Steiner tree problem is solved first, and the next optimization problem is then stated as a classical minimum Steiner tree problem under a constraint on the capacity of the tree. The paper also presents some computational experiments with the multicriteria problem.
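The lexicographic two-stage scheme can be sketched on the spanning-tree special case of the Steiner problem (every vertex a terminal), where both stages are easy: a maximum spanning tree on capacity gives the best achievable bottleneck, and a minimum-length spanning tree on the capacity-filtered edges solves the constrained second stage. This is an illustration of the lexicographic idea under that simplifying assumption, not the paper's Steiner-tree algorithm.

```python
def lexicographic_tree(n, edges):
    """Lexicographic bicriterion optimization on the spanning-tree special
    case. edges: list of (u, v, capacity, length).
    Stage 1: best bottleneck capacity via a maximum spanning tree on capacity.
    Stage 2: minimum total length among trees meeting that bottleneck."""
    def kruskal(edge_list, key, reverse):
        parent = list(range(n))

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x

        tree = []
        for e in sorted(edge_list, key=key, reverse=reverse):
            ru, rv = find(e[0]), find(e[1])
            if ru != rv:
                parent[ru] = rv
                tree.append(e)
        return tree

    # Stage 1: the minimum edge capacity of a maximum spanning tree is the
    # best bottleneck any spanning tree can achieve.
    max_cap_tree = kruskal(edges, key=lambda e: e[2], reverse=True)
    bottleneck = min(e[2] for e in max_cap_tree)
    # Stage 2: minimize length using only edges preserving the bottleneck.
    good = [e for e in edges if e[2] >= bottleneck]
    tree = kruskal(good, key=lambda e: e[3], reverse=False)
    return bottleneck, tree

edges = [(0, 1, 10, 4), (1, 2, 10, 1), (0, 2, 3, 1), (2, 3, 8, 2), (1, 3, 8, 5)]
bottleneck, tree = lexicographic_tree(4, edges)
```

Note how the cheap edge (0, 2, 3, 1) is excluded in stage 2: using it would shorten the tree but destroy the optimal bottleneck, which the lexicographic ordering forbids.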
廖福蓉; 王成良
2012-01-01
The mining of frequent itemsets is limited by the large number of candidate itemsets and the high computational cost; in many application domains it is sufficient to mine only maximum-length frequent itemsets. An algorithm based on an ordered FP-tree structure is proposed for this problem. A max-level field is added to the header table to record the greatest height of each item in the ordered FP-tree. During mining, only items whose max-level is greater than or equal to the length of the current maximum-length frequent itemset are traversed; no conditional pattern bases are produced, no conditional FP-trees are constructed recursively, and the support of the maximum-length frequent itemsets is computed directly. Experimental results show that the algorithm traverses the tree quickly and improves mining efficiency.
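The key trick above is pruning any branch that cannot beat the longest frequent itemset found so far. The sketch below demonstrates that pruning idea with a plain depth-first search over items, a simplified stand-in for the ordered-FP-tree machinery (no FP-tree is built; the bound plays the role of the max-level test):

```python
def max_length_frequent(transactions, min_support):
    """Find one maximum-length frequent itemset by depth-first search,
    pruning branches that cannot exceed the best length found so far."""
    items = sorted({i for t in transactions for i in t})
    tsets = [frozenset(t) for t in transactions]

    def support(itemset):
        return sum(itemset <= t for t in tsets)

    best = (0, frozenset(), 0)  # (length, itemset, support)

    def dfs(prefix, start):
        nonlocal best
        # Prune: even adding every remaining item cannot beat the best.
        if len(prefix) + (len(items) - start) <= best[0]:
            return
        for k in range(start, len(items)):
            cand = prefix | {items[k]}
            s = support(cand)
            if s >= min_support:
                if len(cand) > best[0]:
                    best = (len(cand), cand, s)
                dfs(cand, k + 1)

    dfs(frozenset(), 0)
    return best

transactions = [["a", "b", "c"], ["a", "b", "c", "d"], ["a", "b", "d"], ["b", "c", "d"]]
length, itemset, support_count = max_length_frequent(transactions, min_support=2)
```

On this toy data the longest frequent itemset has length 3; the pruning bound cuts off every branch once a length-3 set is known, which is exactly the saving the max-level field buys in the FP-tree setting.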
Parker, Andrew M.; Wandi Bruine de Bruin; Baruch Fischhoff
2007-01-01
Our previous research suggests that people reporting a stronger desire to maximize obtain worse life outcomes (Bruine de Bruin et al., 2007). Here, we examine whether this finding may be explained by the decision-making styles of self-reported maximizers. Expanding on Schwartz et al. (2002), we find that self-reported maximizers are more likely to show problematic decision-making styles, as evidenced by self-reports of less behavioral coping, greater dependence on others when making decisions...
Ilić, Aleksandar; Feng, Lihua
2011-01-01
The Harary index of a graph $G$ is a recently introduced topological index, defined on the reciprocal distance matrix as $H(G)=\sum_{u,v \in V(G)}\frac{1}{d(u,v)}$, where $d(u,v)$ is the length of the shortest path between two distinct vertices $u$ and $v$. We present a partial ordering of starlike trees based on the Harary index and describe the trees with the second-maximal and the second-minimal Harary index. In this paper, we investigate the Harary index of trees with $k$ pendent vertices and determine the extremal trees with maximal Harary index. We also characterize the extremal trees with maximal Harary index with respect to the number of vertices of degree two, matching number, independence number, domination number, radius and diameter. In addition, we characterize the extremal trees with minimal Harary index and given maximum degree. We conclude that in all the classes presented, the trees with maximal Harary index are exactly the trees with minimal Wiener index, and vice versa.
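The Harary index is straightforward to compute: BFS from every vertex gives all shortest-path distances, and the index sums their reciprocals over unordered pairs. The small check below also illustrates the Harary/Wiener duality stated in the abstract on 4-vertex trees: the star (minimal Wiener index) has the larger Harary index, the path the smaller.

```python
from collections import deque

def harary_index(adj):
    """Harary index H(G): sum of 1/d(u,v) over unordered vertex pairs,
    with shortest-path distances from a BFS rooted at every vertex."""
    n = len(adj)
    total = 0.0
    for src in range(n):
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        # Count each unordered pair once (node > src).
        total += sum(1.0 / d for node, d in dist.items() if node > src)
    return total

# Path P4 (0-1-2-3) versus star S4 (center 0), as adjacency lists.
path = [[1], [0, 2], [1, 3], [2]]
star = [[1, 2, 3], [0], [0], [0]]
```

H(P4) = 1 + 1 + 1 + 1/2 + 1/2 + 1/3 = 13/3, while H(S4) = 3·1 + 3·(1/2) = 4.5 > 13/3, matching the extremal pattern described above.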
Callot, Laurent; Kristensen, Johannes Tang
the monetary policy response to inflation and business cycle fluctuations in the US by estimating a parsimoniously time-varying parameter Taylor rule. We document substantial changes in the policy response of the Fed in the 1970s and 1980s, and since 2007, but also document the stability of this response...
Wei Wu; James Clark; James Vose
2010-01-01
Hierarchical Bayesian (HB) modeling allows for multiple sources of uncertainty by factoring complex relationships into conditional distributions that can be used to draw inference and make predictions. We applied an HB model to estimate the parameters and state variables of a parsimonious hydrological model, GR4J, by coherently assimilating the uncertainties from the...
Matthews, Luke J; Rosenberger, Alfred L
2008-11-01
The classifications of primates, in general, and platyrrhine primates, in particular, have been greatly revised subsequent to the rationale for taxonomic decisions shifting from one rooted in the biological species concept to one rooted solely in phylogenetic affiliations. Given the phylogenetic justification provided for revised taxonomies, the scientific validity of taxonomic distinctions can be rightly judged by the robusticity of the phylogenetic results supporting them. In this study, we empirically investigated taxonomic-sampling effects on a cladogram previously inferred from craniodental data for the woolly monkeys (Lagothrix). We conducted the study primarily through much greater sampling of species-level taxa (OTUs) after improving some character codings and under a variety of outgroup choices. The results indicate that alternative selections of species subsets from within genera produce various tree topologies. These results stand even after adjusting the character set and considering the potential role of interobserver disagreement. We conclude that specific taxon combinations, in this case, generic or species pairings, of the primary study group has a biasing effect in parsimony analysis, and that the cladistic rationale for resurrecting the Oreonax generic distinction for the yellow-tailed woolly monkey (Lagothrix flavicauda) is based on an artifact of idiosyncratic sampling within the study group below the genus level. Some recommendations to minimize the problem, which is prevalent in all cladistic analyses, are proposed.
Smith, J F
2000-06-01
Generic relationships within Episcieae were assessed using ITS and ndhF sequences. Previous analyses of this tribe have focussed only on ndhF data and have excluded two genera, Rhoogeton and Oerstedina, which are included in this analysis. Data were analyzed using both parsimony and maximum-likelihood methods. Results from partition homogeneity tests imply that the two data sets are significantly incongruent, but when Rhoogeton is removed from the analysis, the data sets are not significantly different. The combined data sets reveal greater strength of relationships within the tribe with the exception of the position of Rhoogeton. Poorly or unresolved relationships based exclusively on ndhF data are more fully resolved with ITS data. These resolved clades include the monophyly of the genera Columnea and Paradrymonia and the sister-group relationship of Nematanthus and Codonanthe. A closer affinity between Neomortonia nummularia and N. rosea than has previously been seen is apparent from these data, although these two species are not monophyletic in any tree. Lastly, Capanea appears to be a member of Gloxinieae, although C. grandiflora remains within Episcieae. Evolution of fruit type, epiphytic habit, and presence of tubers is re-examined with the new data presented here.
Ming Yi WANG; Guo ZHAO
2005-01-01
A right R-module E over a ring R is said to be maximally injective in case for any maximal right ideal m of R, every R-homomorphism f : m → E can be extended to an R-homomorphism f' : R → E. In this paper, we first construct an example to show that maximal injectivity is a proper generalization of injectivity. Then we prove that any right R-module over a left perfect ring R is maximally injective if and only if it is injective. We also give a partial affirmative answer to Faith's conjecture by further investigating the property of maximally injective rings. Finally, we get an approximation to Faith's conjecture, which asserts that every injective right R-module over any left perfect right self-injective ring R is the injective hull of a projective submodule.
Andrew M. Parker
2007-12-01
Our previous research suggests that people reporting a stronger desire to maximize obtain worse life outcomes (Bruine de Bruin et al., 2007). Here, we examine whether this finding may be explained by the decision-making styles of self-reported maximizers. Expanding on Schwartz et al. (2002), we find that self-reported maximizers are more likely to show problematic decision-making styles, as evidenced by self-reports of less behavioral coping, greater dependence on others when making decisions, more avoidance of decision making, and a greater tendency to experience regret. Contrary to predictions, self-reported maximizers were more likely to report spontaneous decision making. However, the relationship between self-reported maximizing and worse life outcomes is largely unaffected by controls for measures of other decision-making styles, decision-making competence, and demographic variables.
Brüstle, Thomas; Pérotin, Matthieu
2012-01-01
Maximal green sequences are particular sequences of quiver mutations which were introduced by Keller in the context of quantum dilogarithm identities and independently by Cecotti-Cordova-Vafa in the context of supersymmetric gauge theory. Our aim is to initiate a systematic study of these sequences from a combinatorial point of view. Interpreting maximal green sequences as paths in various natural posets arising in representation theory, we prove the finiteness of the number of maximal green sequences for cluster finite quivers, affine quivers and acyclic quivers with at most three vertices. We also give results concerning the possible numbers and lengths of these maximal green sequences. Finally we describe an algorithm for computing maximal green sequences for arbitrary valued quivers which we used to obtain numerous explicit examples that we present.
Language trees support the express-train sequence of Austronesian expansion.
Gray, R D; Jordan, F M
2000-06-29
Languages, like molecules, document evolutionary history. Darwin observed that evolutionary change in languages greatly resembled the processes of biological evolution: inheritance from a common ancestor and convergent evolution operate in both. Despite many suggestions, few attempts have been made to apply the phylogenetic methods used in biology to linguistic data. Here we report a parsimony analysis of a large language data set. We use this analysis to test competing hypotheses--the "express-train" and the "entangled-bank" models--for the colonization of the Pacific by Austronesian-speaking peoples. The parsimony analysis of a matrix of 77 Austronesian languages with 5,185 lexical items produced a single most-parsimonious tree. The express-train model was converted into an ordered geographical character and mapped onto the language tree. We found that the topology of the language tree was highly compatible with the express-train model.
Rudiger Bubner
1998-12-01
Even though the theory of maxims is not at the center of Kant's ethics, it is the unavoidable basis of the formulation of the categorical imperative. Kant leans on the representations transmitted by modern moral theory. During the last decades, the notion of maxims has received more attention, due to the philosophy of language's debates on rules and to action theory's interest in this notion. I hereby briefly expound my views in these discussions.
Toda, M.; Yokozawa, M.; Richardson, A. D.; Kohyama, T.
2011-12-01
The effects of wind disturbance on interannual variability in ecosystem CO2 exchange have been assessed in two forests in northern Japan, a young, even-aged, monocultured deciduous forest and an uneven-aged mixed forest of evergreen and deciduous trees (including some over 200 years old), using eddy covariance (EC) measurements during 2004-2008. The EC measurements indicated that the photosynthetic recovery of trees after a huge typhoon in early September 2004 enhanced the annual carbon uptake of both forests, due to changes in the physiological response of tree leaves during their growth stages. However, little has been resolved about which biotic and abiotic factors regulate interannual variability in heat, water and carbon exchange between the atmosphere and forests. In recent years, inverse modeling has been utilized as a powerful tool to estimate the biotic and abiotic parameters of a parsimonious, physiologically based model that affect heat, water and CO2 exchange between the atmosphere and a forest. We conducted a Bayesian inverse-model analysis of such a model with the EC measurements. The preliminary result showed that the model-derived NEE values were consistent with observed ones on an hourly basis, with parameters optimized by Bayesian inversion. In the presentation, we examine interannual variability in biotic and abiotic parameters related to heat, water and carbon exchange between the atmosphere and forests after disturbance by the typhoon.
Equally parsimonious pathways through an RNA sequence space are not equally likely
Lee, Y. H.; DSouza, L. M.; Fox, G. E.
1997-01-01
An experimental system for determining the potential ability of sequences resembling 5S ribosomal RNA (rRNA) to perform as functional 5S rRNAs in vivo in the Escherichia coli cellular environment was devised previously. Presumably, the only 5S rRNA sequences that would have been fixed by ancestral populations are ones that were functionally valid, and hence the actual historical paths taken through RNA sequence space during 5S rRNA evolution would have most likely utilized valid sequences. Herein, we examine the potential validity of all sequence intermediates along alternative equally parsimonious trajectories through RNA sequence space which connect two pairs of sequences that had previously been shown to behave as valid 5S rRNAs in E. coli. The first trajectory requires a total of four changes. The 14 sequence intermediates provide 24 apparently equally parsimonious paths by which the transition could occur. The second trajectory involves three changes, six intermediate sequences, and six potentially equally parsimonious paths. In total, only eight of the 20 sequence intermediates were found to be clearly invalid. As a consequence of the position of these invalid intermediates in the sequence space, seven of the 30 possible paths consisted of exclusively valid sequences. In several cases, the apparent validity/invalidity of the intermediate sequences could not be anticipated on the basis of current knowledge of the 5S rRNA structure. This suggests that the interdependencies in RNA sequence space may be more complex than currently appreciated. If ancestral sequences predicted by parsimony are to be regarded as actual historical sequences, then the present results would suggest that they should also satisfy a validity requirement and that, in at least limited cases, this conjecture can be tested experimentally.
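The path counts in this abstract follow from simple combinatorics: four changes give 2⁴ − 2 = 14 intermediates and 4! = 24 equally parsimonious orderings, and marking intermediates invalid prunes the usable paths. A short enumeration sketch (the mutation labels and the invalid sets below are hypothetical, chosen only to show the pruning effect):

```python
from itertools import permutations

def count_valid_paths(changes, invalid):
    """Enumerate every ordering of a set of changes and count the paths
    whose intermediates (cumulative subsets of applied changes) all avoid
    the invalid subsets. Endpoints are assumed valid."""
    valid_paths = 0
    for order in permutations(changes):
        applied = frozenset()
        ok = True
        for c in order[:-1]:  # stop before the final, known-valid endpoint
            applied = applied | {c}
            if applied in invalid:
                ok = False
                break
        if ok:
            valid_paths += 1
    return valid_paths

changes = ("m1", "m2", "m3", "m4")
# With no invalid intermediates, all 4! = 24 orderings are usable.
total = count_valid_paths(changes, invalid=set())
# Marking two (hypothetical) intermediates invalid prunes the paths.
pruned = count_valid_paths(changes, invalid={frozenset({"m1"}), frozenset({"m1", "m2"})})
```

Here invalidating {m1} removes the 3! = 6 orderings starting with m1, and invalidating {m1, m2} removes 2 more (those starting m2, m1), leaving 16 of 24 paths, the same bookkeeping the study performs over its experimentally tested intermediates.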
Kuss, D.J.; Shorter, G. W.; Rooij, A.J. van; Griffiths, M.D.; Schoenmakers, T.M.
2014-01-01
Internet usage has grown exponentially over the last decade. Research indicates that excessive Internet use can lead to symptoms associated with addiction. To date, assessment of potential Internet addiction has varied regarding populations studied and instruments used, making reliable prevalence estimations difficult. To overcome the present problems a preliminary study was conducted testing a parsimonious Internet addiction components model based on Griffiths’ addiction components (Journal ...
The combinatorics of tandem duplication trees.
Gascuel, Olivier; Hendy, Michael D; Jean-Marie, Alain; McLachlan, Robert
2003-02-01
We developed a recurrence relation that counts the number of tandem duplication trees (either rooted or unrooted) that are consistent with a set of n tandemly repeated sequences generated under the standard unequal recombination (or crossover) model of tandem duplications. The number of rooted duplication trees is exactly twice the number of unrooted trees, which means that on average only two positions for a root on a duplication tree are possible. Using the recurrence, we tabulated these numbers for small values of n. We also developed an asymptotic formula that for large n provides estimates for these numbers. These numbers give a priori probabilities for phylogenies of the repeated sequences to be duplication trees. This work extends earlier studies where exhaustive counts of the numbers for small n were obtained. One application showed the significance of finding that most maximum-parsimony trees constructed from repeat sequences from human immunoglobins and T-cell receptors were tandem duplication trees. Those findings provided strong support to the proposed mechanisms of tandem gene duplication. The recurrence relation also suggests efficient algorithms to recognize duplication trees and to generate random duplication trees for simulation. We present a linear-time recognition algorithm.
Dallolio Laura
2006-08-01
Background: Cesarean section rates are often used as an indicator of quality of care in maternity hospitals, the assumption being that, in developed countries, lower rates reflect more appropriate clinical practice and generally better performance. Hospitals are thus often ranked on the basis of their cesarean section rates. The aim of this study is to assess whether adjustment for clinical and sociodemographic variables of the mother and the fetus is necessary for inter-hospital comparisons of cesarean section (c-section) rates, and whether a risk-adjustment model based on a limited number of variables can be identified and used. Methods: Discharge abstracts of labouring women without a prior cesarean were linked with abstracts of newborns discharged from 29 hospitals of the Emilia-Romagna Region (Italy) from 2003 to 2004. Adjusted ORs of cesarean by hospital were estimated using two logistic regression models: (1) a full model including the potential confounders selected by a backward procedure; (2) a parsimonious model including only actual confounders identified by the "change-in-estimate" procedure. Hospital rankings based on the ORs were examined. Results: 24 risk factors for c-section were included in the full model and 7 (marital status, maternal age, infant weight, fetopelvic disproportion, eclampsia or pre-eclampsia, placenta previa/abruptio placentae, malposition/malpresentation) in the parsimonious model. Hospital ranking using the adjusted ORs from both models was different from that obtained using the crude ORs. The correlation between the rankings of the two models was 0.92. The crude ORs were smaller than the ORs adjusted by both models, with the parsimonious model producing more precise estimates. Conclusion: Risk adjustment is necessary to compare hospital c-section rates; it changes the rankings and highlights the inappropriateness of some hospitals. By adjusting for only actual confounders, valid and more precise estimates
Schwartz, Carolyn E; Patrick, Donald L
2014-07-01
When planning a comparative effectiveness study comparing disease-modifying treatments, competing demands influence choice of outcomes. Current practice emphasizes parsimony, although understanding multidimensional treatment impact can help to personalize medical decision-making. We discuss both sides of this 'tug of war'. We discuss the assumptions, advantages and drawbacks of composite scores and multidimensional outcomes. We describe possible solutions to the multiple comparison problem, including conceptual hierarchy distinctions, statistical approaches, 'real-world' benchmarks of effectiveness and subgroup analysis. We conclude that comparative effectiveness research should consider multiple outcome dimensions and compare different approaches that fit the individual context of study objectives.
A Distributed Spanning Tree Algorithm
Johansen, Karl Erik; Jørgensen, Ulla Lundin; Nielsen, Sven Hauge
We present a distributed algorithm for constructing a spanning tree for connected undirected graphs. Nodes correspond to processors and edges correspond to two-way channels. Each processor initially has a distinct identity, and all processors perform the same algorithm. Computation as well as communication is asynchronous. The total number of messages sent during a construction of a spanning tree is at most 2E+3NlogN. The maximal message size is loglogN+log(maxid)+3, where maxid is the maximal processor identity.
A distributed spanning tree algorithm
Johansen, Karl Erik; Jørgensen, Ulla Lundin; Nielsen, Svend Hauge
1988-01-01
We present a distributed algorithm for constructing a spanning tree for connected undirected graphs. Nodes correspond to processors and edges correspond to two-way channels. Each processor initially has a distinct identity, and all processors perform the same algorithm. Computation as well as communication is asynchronous. The total number of messages sent during a construction of a spanning tree is at most 2E+3NlogN. The maximal message size is loglogN+log(maxid)+3, where maxid is the maximal processor identity.
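The message-counting flavor of such algorithms can be illustrated with a much-simplified analogue: flooding a wave from a single root over a graph and counting one message per directed edge use. This is our own rooted, synchronous sketch, not the paper's algorithm (which is asynchronous, has no distinguished root, and achieves the 2E+3NlogN bound by a leader-election-style mechanism), but it shows where a 2E term comes from: every channel carries the wave once in each direction.

```python
from collections import deque

def flood_spanning_tree(adj, root=0):
    """Build a spanning tree by flooding from a root, counting messages.
    Each node, on first hearing the wave, adopts the sender as its parent;
    every node forwards the wave on all of its edges exactly once."""
    parent = {root: None}
    messages = 0
    q = deque([root])
    while q:
        u = q.popleft()
        for v in adj[u]:
            messages += 1  # one message per directed edge use
            if v not in parent:
                parent[v] = u
                q.append(v)
    return parent, messages

# A 4-cycle plus a chord: E = 5, so flooding sends exactly 2*E = 10 messages.
adj = {0: [1, 3], 1: [0, 2, 3], 2: [1, 3], 3: [0, 1, 2]}
parent, messages = flood_spanning_tree(adj)
```

The resulting parent map has exactly N-1 tree edges; the extra 3NlogN term in the paper's bound pays for coordinating without a pre-agreed root.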
Reconciliation with non-binary species trees.
Vernot, Benjamin; Stolzer, Maureen; Goldman, Aiton; Durand, Dannie
2008-10-01
Reconciliation extracts information from the topological incongruence between gene and species trees to infer duplications and losses in the history of a gene family. The inferred duplication-loss histories provide valuable information for a broad range of biological applications, including ortholog identification, estimating gene duplication times, and rooting and correcting gene trees. While reconciliation for binary trees is a tractable and well-studied problem, there are no algorithms for reconciliation with non-binary species trees. Yet a striking proportion of species trees are non-binary. For example, 64% of branch points in the NCBI taxonomy have three or more children. When applied to non-binary species trees, current algorithms overestimate the number of duplications because they cannot distinguish between duplication and incomplete lineage sorting. We present the first algorithms for reconciling binary gene trees with non-binary species trees under a duplication-loss parsimony model. Our algorithms utilize an efficient mapping from gene to species trees to infer the minimum number of duplications in O(|V(G)| × (k(S) + h(S))) time, where |V(G)| is the number of nodes in the gene tree, h(S) is the height of the species tree and k(S) is the size of its largest polytomy. We present a dynamic programming algorithm which also minimizes the total number of losses. Although this algorithm is exponential in the size of the largest polytomy, it performs well in practice for polytomies with an outdegree of 12 or less. We also present a heuristic which estimates the minimal number of losses in polynomial time. In empirical tests, this algorithm finds an optimal loss history 99% of the time. Our algorithms have been implemented in NOTUNG, a robust, production-quality tree-fitting program, which provides a graphical user interface for exploratory analysis and also supports automated, high-throughput analysis of large data sets.
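The tractable binary-tree case that this paper extends works by LCA mapping: each internal gene-tree node maps to the species-tree LCA of its children's mappings, and a node is a duplication exactly when it maps to the same species node as one of its children. A self-contained sketch of that classical procedure (the example trees and names are ours):

```python
def lca_map_duplications(gene_tree, species_parent, leaf_to_species):
    """Binary-tree reconciliation by LCA mapping. gene_tree: nested pairs
    with gene names at the leaves; species_parent: child -> parent map
    (root maps to None); leaf_to_species: gene leaf -> species leaf."""
    # Precompute species-node depths for the walk-up LCA.
    depth = {}
    for s in species_parent:
        d, cur = 0, s
        while species_parent[cur] is not None:
            cur = species_parent[cur]
            d += 1
        depth[s] = d

    def lca(a, b):
        while a != b:
            if depth[a] >= depth[b]:
                a = species_parent[a]
            else:
                b = species_parent[b]
        return a

    duplications = []

    def mapping(node):
        if not isinstance(node, tuple):
            return leaf_to_species[node]
        left, right = mapping(node[0]), mapping(node[1])
        m = lca(left, right)
        if m in (left, right):        # mapping did not move up: duplication
            duplications.append(node)
        return m

    mapping(gene_tree)
    return duplications

# Species tree ((A,B),C); the gene family has two copies in both A and B.
species_parent = {"A": "ab", "B": "ab", "ab": "root", "C": "root", "root": None}
leaf_to_species = {"a1": "A", "b1": "B", "a2": "A", "b2": "B", "c1": "C"}
gene_tree = ((("a1", "b1"), ("a2", "b2")), "c1")
dups = lca_map_duplications(gene_tree, species_parent, leaf_to_species)
```

Here a single duplication (at the node joining the two (A,B) copies) explains the gene tree. The difficulty the paper addresses is that when a species node is a polytomy, this test wrongly flags incomplete lineage sorting as duplication.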
Janusz Brzozowski
2014-05-01
The atoms of a regular language are the non-empty intersections of complemented and uncomplemented quotients of the language. Tight upper bounds on the number of atoms of a language and on the quotient complexities of atoms are known. We introduce a new class of regular languages, called the maximally atomic languages, consisting of all languages meeting these bounds. We prove the following result: if L is a regular language of quotient complexity n and G is the subgroup of permutations in the transition semigroup T of the minimal DFA of L, then L is maximally atomic if and only if G is transitive on k-subsets of {1,...,n} for 0 <= k <= n and T contains a transformation of rank n-1.
Andersen, Klaus Ejner
1985-01-01
Guinea pig maximization tests (GPMT) with chlorocresol were performed to ascertain whether the sensitization rate was affected by minor changes in the Freund's complete adjuvant (FCA) emulsion used. Three types of emulsion were evaluated: the oil phase was mixed with propylene glycol, saline with... to the saline/oil emulsion. Placement of the challenge patches affected the response, as a simultaneous chlorocresol challenge on the flank, located 2 cm closer to the abdomen than the usual challenge site, gave decreased reactions.
Zak, Michail
2008-01-01
A report discusses an algorithm for a new kind of dynamics based on a quantum-classical hybrid: a quantum-inspired maximizer. The model is represented by a modified Madelung equation in which the quantum potential is replaced by a different, specially chosen 'computational' potential. As a result, the dynamics attains both quantum and classical properties: it preserves superposition and entanglement of random solutions, while allowing one to measure its state variables using classical methods. Such an optimal combination of characteristics is a perfect match for quantum-inspired computing. As an application, an algorithm for finding the global maximum of an arbitrary integrable function is proposed. The idea of the proposed algorithm is very simple: based upon the Quantum-inspired Maximizer (QIM), introduce the positive function to be maximized as the probability density to which the solution is attracted. Then larger values of this function will have a higher probability to appear. Special attention is paid to simulation of integer programming and NP-complete problems. It is demonstrated that the global maximum of an integrable function can be found in polynomial time by using the proposed quantum-classical hybrid. The result is extended to a constrained maximum with applications to integer programming and the TSP (Traveling Salesman Problem).
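A purely classical caricature of the framing, treating the function to be maximized as an unnormalized probability density so that larger values are visited more often, can be sketched with rejection sampling. This illustrates only the probabilistic idea, not the quantum-classical dynamics itself; names and parameters are ours:

```python
import random

def sample_argmax(f, lo, hi, n=20000, seed=0):
    """Draw points on [lo, hi] with probability proportional to f
    (rejection sampling) and report the best point seen.
    Assumes f >= 0 on the interval."""
    rng = random.Random(seed)
    # crude bound on f from a coarse scan of the interval
    grid = [lo + (hi - lo) * i / 1000 for i in range(1001)]
    fmax = max(f(x) for x in grid)
    best_x, best_f = None, float('-inf')
    for _ in range(n):
        x = rng.uniform(lo, hi)
        if rng.uniform(0, fmax) <= f(x):   # accept with prob f(x)/fmax
            if f(x) > best_f:
                best_x, best_f = x, f(x)
    return best_x
```

Because samples concentrate where f is large, the running best converges quickly toward the global maximizer for smooth unimodal f.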
Slowinski, J B; Knight, A; Rooney, A P
1997-12-01
Toward the goal of recovering the phylogenetic relationships among elapid snakes, we separately found the shortest trees from the amino acid sequences for the venom proteins phospholipase A2 and the short neurotoxin, collectively representing 32 species in 16 genera. We then applied a method we term gene tree parsimony for inferring species trees from gene trees that works by finding the species tree which minimizes the number of deep coalescences or gene duplications plus unsampled sequences necessary to fit each gene tree to the species tree. This procedure, which is both logical and generally applicable, avoids many of the problems of previous approaches for inferring species trees from gene trees. The results support a division of the elapids examined into sister groups of the Australian and marine (laticaudines and hydrophiines) species, and the African and Asian species. Within the former clade, the sea snakes are shown to be diphyletic, with the laticaudines and hydrophiines having separate origins. This finding is corroborated by previous studies, which provide support for the usefulness of gene tree parsimony.
Extremal Matching Energy of Complements of Trees
Wu Tingzeng
2016-08-01
Gutman and Wagner proposed the concept of the matching energy, which is defined as the sum of the absolute values of the zeros of the matching polynomial of a graph. They pointed out that the chemical applications of matching energy go back to the 1970s. Let T be a tree with n vertices. In this paper, we characterize the trees whose complements have the maximal, second-maximal and minimal matching energy. Furthermore, we determine the trees with edge-independence number p whose complements have the minimum matching energy for p = 1, 2, ..., [n/2]. When we restrict our consideration to all trees with a perfect matching, we determine the trees whose complements have the second-maximal matching energy.
Rooting gene trees without outgroups: EP rooting.
Sinsheimer, Janet S; Little, Roderick J A; Lake, James A
2012-01-01
Gene sequences are routinely used to determine the topologies of unrooted phylogenetic trees, but many of the most important questions in evolution require knowing both the topologies and the roots of trees. However, general algorithms for calculating rooted trees from gene and genomic sequences in the absence of gene paralogs are few. Using the principles of evolutionary parsimony (EP) (Lake JA. 1987a. A rate-independent technique for analysis of nucleic acid sequences: evolutionary parsimony. Mol Biol Evol. 4:167-181) and its extensions (Cavender, J. 1989. Mechanized derivation of linear invariants. Mol Biol Evol. 6:301-316; Nguyen T, Speed TP. 1992. A derivation of all linear invariants for a nonbalanced transversion model. J Mol Evol. 35:60-76), we explicitly enumerate all linear invariants that solely contain rooting information and derive algorithms for rooting gene trees directly from gene and genomic sequences. These new EP linear rooting invariants allow one to determine rooted trees, even in the complete absence of outgroups and gene paralogs. EP rooting invariants are explicitly derived for three taxon trees, and rules for their extension to four or more taxa are provided. The method is demonstrated using 18S ribosomal DNA to illustrate how the new animal phylogeny (Aguinaldo AMA et al. 1997. Evidence for a clade of nematodes, arthropods, and other moulting animals. Nature 387:489-493; Lake JA. 1990. Origin of the metazoa. Proc Natl Acad Sci USA 87:763-766) may be rooted directly from sequences, even when they are short and paralogs are unavailable. These results are consistent with the current root (Philippe H et al. 2011. Acoelomorph flatworms are deuterostomes related to Xenoturbella. Nature 470:255-260).
Commitment to Sport and Exercise: Re-examining the Literature for a Practical and Parsimonious Model
Williams, Lavon
2013-01-01
A commitment to physical activity is necessary for personal health, and is a primary goal of physical activity practitioners. Effective practitioners rely on theory and research as a guide to best practices. Thus, sound theory, which is both practical and parsimonious, is a key to effective practice. The purpose of this paper is to review the literature in search of such a theory - one that applies to and explains commitment to physical activity in the form of sport and exercise for youths and adults. The Sport Commitment Model has been commonly used to study commitment to sport and has more recently been applied to the exercise context. In this paper, research using the Sport Commitment Model is reviewed relative to its utility in both the sport and exercise contexts. Through this process, the relevance of the Investment Model for the study of physical activity commitment emerged, and a more parsimonious framework for studying commitment to physical activity is suggested. Lastly, links between the models of commitment and individuals' participation motives in physical activity are suggested and practical implications forwarded. PMID:23412904
Parsimonious wave-equation travel-time inversion for refraction waves
Fu, Lei
2017-02-14
We present a parsimonious wave-equation travel-time inversion technique for refraction waves. A dense virtual refraction dataset can be generated from just two reciprocal shot gathers for the sources at the endpoints of the survey line, with N geophones evenly deployed along the line. These two reciprocal shots contain approximately 2N refraction travel times, which can be expanded into O(N²) refraction travel times by an interferometric transformation. Then, these virtual refraction travel times are used with a source wavelet to create N virtual refraction shot gathers, which are the input data for wave-equation travel-time inversion. Numerical results show that the parsimonious wave-equation travel-time tomogram has about the same accuracy as the tomogram computed by standard wave-equation travel-time inversion. The most significant benefit is that a reciprocal survey is far less time-consuming than the standard refraction survey where a source is excited at each geophone location.
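The interferometric expansion can be illustrated with the standard head-wave identity for a pair of reciprocal endpoint shots, t(i, j) ≈ t(s1, j) + t(s2, i) - t(s1, s2). A sketch under that assumption (function and variable names are ours, not the paper's):

```python
def virtual_refraction_times(t1, t2):
    """Expand two reciprocal refraction gathers into O(N^2) virtual
    refraction travel times.

    t1[i]: head-wave pick at geophone i from the shot at left endpoint s1
    t2[i]: head-wave pick at geophone i from the shot at right endpoint s2
    The reciprocal time t(s1, s2) is read off the far geophone of either
    shot. Assumed identity for i left of j:
        t(i, j) = t1[j] + t2[i] - t(s1, s2)
    """
    n = len(t1)
    t_recip = t1[-1]          # shot at s1 recorded at s2's position
    virt = {}
    for i in range(n):
        for j in range(i + 1, n):
            virt[(i, j)] = t1[j] + t2[i] - t_recip
    return virt
```

For a flat refractor with refractor velocity v and intercept time τ, the identity gives exactly t(i, j) = (x_j - x_i)/v + τ, i.e. the virtual times look like picks from shots placed at every geophone.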
Chen, Shuo; Kang, Jian; Xing, Yishi; Wang, Guoqing
2015-12-01
Group-level functional connectivity analyses often aim to detect the altered connectivity patterns between subgroups with different clinical or psychological experimental conditions, for example, comparing cases and healthy controls. We present a new statistical method to detect differentially expressed connectivity networks with significantly improved power and lower false-positive rates. The goal of our method was to capture most differentially expressed connections within networks of constrained numbers of brain regions (by the rule of parsimony). By virtue of parsimony, the false-positive individual connectivity edges within a network are effectively reduced, whereas the informative (differentially expressed) edges are allowed to borrow strength from each other to increase the overall power of the network. We develop a test statistic for each network in light of combinatorics graph theory, and provide p-values for the networks (in the weak sense) by using permutation test with multiple-testing adjustment. We validate and compare this new approach with existing methods, including false discovery rate and network-based statistic, via simulation studies and a resting-state functional magnetic resonance imaging case-control study. The results indicate that our method can identify differentially expressed connectivity networks, whereas existing methods are limited.
Kucharczyk, Robert A
2012-01-01
In this note we discuss trees similar to the Calkin-Wilf tree, a binary tree that enumerates all positive rational numbers in a simple way. The original construction of Calkin and Wilf is reformulated in a more algebraic language, and an elementary application of methods from analytic number theory gives restrictions on possible analogues.
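The Calkin-Wilf tree itself is easy to state: the root is 1/1 and node a/b has children a/(a+b) and (a+b)/b; reading the tree breadth-first enumerates every positive rational exactly once. A minimal sketch:

```python
from fractions import Fraction
from collections import deque

def calkin_wilf(n):
    """First n positive rationals in Calkin-Wilf (breadth-first) order.
    Node a/b has children a/(a+b) and (a+b)/b; every positive rational
    appears exactly once in the tree."""
    out, queue = [], deque([Fraction(1, 1)])
    while len(out) < n:
        q = queue.popleft()
        out.append(q)
        a, b = q.numerator, q.denominator
        queue.append(Fraction(a, a + b))      # left child
        queue.append(Fraction(a + b, b))      # right child
    return out
```

The first seven values are 1, 1/2, 2, 1/3, 3/2, 2/3, 3.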
Social group utility maximization
Gong, Xiaowen; Yang, Lei; Zhang, Junshan
2014-01-01
This SpringerBrief explains how to leverage mobile users' social relationships to improve the interactions of mobile devices in mobile networks. It develops a social group utility maximization (SGUM) framework that captures diverse social ties of mobile users and diverse physical coupling of mobile devices. Key topics include random access control, power control, spectrum access, and location privacy.This brief also investigates SGUM-based power control game and random access control game, for which it establishes the socially-aware Nash equilibrium (SNE). It then examines the critical SGUM-b
Brandes, U; Gaertler, M; Goerke, R; Hoefer, M; Nikoloski, Z; Wagner, D
2006-01-01
Several algorithms have been proposed to compute partitions of networks into communities that score high on a graph clustering index called modularity. While publications on these algorithms typically contain experimental evaluations to emphasize the plausibility of results, none of these algorithms has been shown to actually compute optimal partitions. We here settle the unknown complexity status of modularity maximization by showing that the corresponding decision version is NP-complete in the strong sense. As a consequence, any efficient, i.e. polynomial-time, algorithm is only heuristic and yields suboptimal partitions on many instances.
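The clustering index in question is Newman-Girvan modularity, Q = Σ_c (e_c/m - (d_c/2m)²), where e_c is the number of intra-community edges, d_c the total degree of community c, and m the edge count. Evaluating Q for a given partition is easy; it is finding a Q-maximizing partition that is shown to be NP-complete. A minimal sketch of the evaluation:

```python
def modularity(edges, partition):
    """Newman-Girvan modularity Q of a partition of an undirected graph.
    edges: list of (u, v) pairs; partition: list of disjoint node sets."""
    m = len(edges)
    comm = {}                                  # node -> community index
    for cid, nodes in enumerate(partition):
        for v in nodes:
            comm[v] = cid
    intra = [0] * len(partition)               # e_c: intra-community edges
    degree = [0] * len(partition)              # d_c: total degree in c
    for u, v in edges:
        degree[comm[u]] += 1
        degree[comm[v]] += 1
        if comm[u] == comm[v]:
            intra[comm[u]] += 1
    return sum(e / m - (d / (2 * m)) ** 2 for e, d in zip(intra, degree))
```

For two triangles joined by a single bridge edge, splitting at the bridge scores Q = 6/7 - 1/2 ≈ 0.357, while the trivial one-community partition scores 0.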
de Queiroz, K; Poe, S
2001-06-01
Advocates of cladistic parsimony methods have invoked the philosophy of Karl Popper in an attempt to argue for the superiority of those methods over phylogenetic methods based on Ronald Fisher's statistical principle of likelihood. We argue that the concept of likelihood in general, and its application to problems of phylogenetic inference in particular, are highly compatible with Popper's philosophy. Examination of Popper's writings reveals that his concept of corroboration is, in fact, based on likelihood. Moreover, because probabilistic assumptions are necessary for calculating the probabilities that define Popper's corroboration, likelihood methods of phylogenetic inference--with their explicit probabilistic basis--are easily reconciled with his concept. In contrast, cladistic parsimony methods, at least as described by certain advocates of those methods, are less easily reconciled with Popper's concept of corroboration. If those methods are interpreted as lacking probabilistic assumptions, then they are incompatible with corroboration. Conversely, if parsimony methods are to be considered compatible with corroboration, then they must be interpreted as carrying implicit probabilistic assumptions. Thus, the non-probabilistic interpretation of cladistic parsimony favored by some advocates of those methods is contradicted by an attempt by the same authors to justify parsimony methods in terms of Popper's concept of corroboration. In addition to being compatible with Popperian corroboration, the likelihood approach to phylogenetic inference permits researchers to test the assumptions of their analytical methods (models) in a way that is consistent with Popper's ideas about the provisional nature of background knowledge.
Maximizing without difficulty: A modified maximizing scale and its correlates
Linda Lai
2010-01-01
This article presents several studies that replicate and extend previous research on maximizing. A modified scale for measuring individual maximizing tendency is introduced. The scale has adequate psychometric properties and reflects maximizers' aspirations for high standards and their preference for extensive alternative search, but not the decision difficulty aspect included in several previous studies. Based on this scale, maximizing is positively correlated with optimism, need for cogniti...
Tree compression with top trees
Bille, Philip; Gørtz, Inge Li; Landau, Gad M.;
2015-01-01
We introduce a new compression scheme for labeled trees based on top trees. Our compression scheme is the first to simultaneously take advantage of internal repeats in the tree (as opposed to the classical DAG compression that only exploits rooted subtree repeats) while also supporting fast...
HEMI: Hyperedge Majority Influence Maximization
Gangal, Varun; Narayanam, Ramasuri
2016-01-01
In this work, we consider the problem of influence maximization on a hypergraph. We first extend the Independent Cascade (IC) model to hypergraphs, and prove that the traditional influence maximization problem remains submodular. We then present a variant of the influence maximization problem (HEMI) where one seeks to maximize the number of hyperedges, a majority of whose nodes are influenced. We prove that HEMI is non-submodular under the diffusion model proposed.
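The HEMI objective can be stated concretely: given a set of influenced nodes, count the hyperedges in which influenced nodes form a strict majority. A minimal sketch of that count (the naming is ours, not the paper's):

```python
def majority_hyperedges(hyperedges, influenced):
    """Count hyperedges in which a strict majority of nodes is influenced
    (the quantity HEMI seeks to maximize over seed sets).
    hyperedges: iterable of node collections; influenced: node set."""
    influenced = set(influenced)
    return sum(
        1 for e in hyperedges
        if len(influenced & set(e)) * 2 > len(e)   # strict majority
    )
```

Note the strictness: influencing 2 of 4 nodes in a hyperedge does not count, which is one source of the objective's awkward (non-submodular) behavior.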
Singular Spectrum Analysis for astronomical time series: constructing a parsimonious hypothesis test
Greco, G; Kobayashi, S; Ghil, M; Branchesi, M; Guidorzi, C; Stratta, G; Ciszak, M; Marino, F; Ortolan, A
2015-01-01
We present a data-adaptive spectral method - Monte Carlo Singular Spectrum Analysis (MC-SSA) - and its modification to tackle astrophysical problems. Through numerical simulations we show the ability of MC-SSA to deal with $1/f^{\beta}$ power-law noise affected by photon counting statistics. Such a noise process is simulated by a first-order autoregressive AR(1) process corrupted by intrinsic Poisson noise. In doing so, we statistically estimate a basic stochastic variation of the source and the corresponding fluctuations due to the quantum nature of light. In addition, the MC-SSA test retains its effectiveness even when a significant percentage of the signal falls below a certain level of detection, e.g., caused by the instrument sensitivity. The parsimonious approach presented here may be broadly applied, from the search for extrasolar planets to the extraction of low-intensity coherent phenomena probably hidden in high-energy transients.
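The null model described, an AR(1) "red" source variation whose intensity is then observed through photon counting, can be simulated in a few lines. A sketch with illustrative parameter values (not the paper's):

```python
import math
import random

def ar1_poisson(n, phi=0.8, mean_level=50.0, sigma=5.0, seed=1):
    """Simulate n photon counts: an AR(1) source variation sets the
    intensity, and each observed count is a Poisson draw at that rate."""
    rng = random.Random(seed)
    x, series = 0.0, []
    for _ in range(n):
        x = phi * x + rng.gauss(0.0, sigma)     # AR(1) source variation
        rate = max(mean_level + x, 1e-9)        # intensity must be > 0
        # Poisson draw via Knuth's method (fine for modest rates)
        L, k, p = math.exp(-rate), 0, 1.0
        while p > L:
            k += 1
            p *= rng.random()
        series.append(k - 1)
    return series
```

Such a series has both the correlated (red) component and the white photon-counting scatter that the MC-SSA significance test must disentangle.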
State space parsimonious reconstruction of attractor produced by an electronic oscillator
Aguirre, Luis A.; Freitas, Ubiratan S.; Letellier, Christophe; Sceller, Lois Le; Maquet, Jean
2000-02-01
This work discusses the reconstruction, from a set of real data, of a chaotic attractor produced by a well-known electronic oscillator, Chua's circuit. The mathematical representation used is a nonlinear differential equation of the polynomial type. One of the contributions of the present study is that structure selection techniques have been applied to help determine the regressors in the model. Models of the chaotic attractor obtained with and without structure selection were compared. The main differences between structure-selected models and complete structure models are: i) the former are more parsimonious than the latter, ii) fixed-point symmetry is guaranteed for the former, iii) for structure-selected models a trivial fixed point is also guaranteed, and iv) the former set of models produce attractors that are topologically closer to the original attractor than those produced by the complete structure models.
Roth, Bradley J.
2017-01-01
The strength-interval curve plays a major role in understanding how cardiac tissue responds to an electrical stimulus. This complex behavior has been studied previously using the bidomain formulation incorporating the Beeler-Reuter and Luo-Rudy dynamic ionic current models. The complexity of these models renders the interpretation and extrapolation of simulation results problematic. Here we utilize a recently developed parsimonious ionic current model with only two currents: a sodium current that activates rapidly upon depolarization (INa) and a time-independent, inwardly rectifying repolarization current (IK). This model reproduces many experimentally measured action potential waveforms. Bidomain tissue simulations with this ionic current model reproduce the distinctive dip in the anodal (but not cathodal) strength-interval curve. Studying model variants elucidates the necessary and sufficient physiological conditions to predict the polarity-dependent dip: a voltage- and time-dependent INa, a nonlinear rectifying repolarization current, and bidomain tissue with unequal anisotropy ratios. PMID:28222136
Satisfiability Parsimoniously Reduces to the Tantrix(TM) Rotation Puzzle Problem
Baumeister, Dorothea
2007-01-01
Holzer and Holzer (Discrete Applied Mathematics 144(3):345--358, 2004) proved that the Tantrix(TM) rotation puzzle problem is NP-complete. They also showed that for infinite rotation puzzles, this problem becomes undecidable. We study the counting version and the unique version of this problem. We prove that the satisfiability problem parsimoniously reduces to the Tantrix(TM) rotation puzzle problem. In particular, this reduction preserves the uniqueness of the solution, which implies that the unique Tantrix(TM) rotation puzzle problem is as hard as the unique satisfiability problem, and so is DP-complete under polynomial-time randomized reductions, where DP is the second level of the boolean hierarchy over NP.
Time-Lapse Monitoring of Subsurface Fluid Flow using Parsimonious Seismic Interferometry
Hanafy, Sherif
2017-04-21
A typical small-scale seismic survey (such as 240 shot gathers) takes at least 16 working hours to complete, which is a major obstacle for time-lapse monitoring experiments. This is especially true if the subject that needs to be monitored is rapidly changing. In this work, we discuss how to decrease the recording time from 16 working hours to less than one hour. Here, the virtual data has the same accuracy as the conventional data. We validate the efficacy of parsimonious seismic interferometry with the time-lapse monitoring idea on field examples, where we were able to record 30 different data sets within a 2-hour period. The recorded data are then processed to generate 30 snapshots that show the spread of water from the ground surface down to a few meters.
Seifert, Michael; Gohr, André; Strickert, Marc; Grosse, Ivo
2012-01-01
Array-based comparative genomic hybridization (Array-CGH) is an important technology in molecular biology for the detection of DNA copy number polymorphisms between closely related genomes. Hidden Markov Models (HMMs) are popular tools for the analysis of Array-CGH data, but current methods are only based on first-order HMMs having constrained abilities to model spatial dependencies between measurements of closely adjacent chromosomal regions. Here, we develop parsimonious higher-order HMMs enabling the interpolation between a mixture model ignoring spatial dependencies and a higher-order HMM exhaustively modeling spatial dependencies. We apply parsimonious higher-order HMMs to the analysis of Array-CGH data of the accessions C24 and Col-0 of the model plant Arabidopsis thaliana. We compare these models against first-order HMMs and other existing methods using a reference of known deletions and sequence deviations. We find that parsimonious higher-order HMMs clearly improve the identification of these polymorphisms. Moreover, we perform a functional analysis of identified polymorphisms revealing novel details of genomic differences between C24 and Col-0. Additional model evaluations are done on widely considered Array-CGH data of human cell lines indicating that parsimonious HMMs are also well-suited for the analysis of non-plant specific data. All these results indicate that parsimonious higher-order HMMs are useful for Array-CGH analyses. An implementation of parsimonious higher-order HMMs is available as part of the open source Java library Jstacs (www.jstacs.de/index.php/PHHMM).
Parsimony and goodness-of-fit in multi-dimensional NMR inversion
Babak, Petro; Kryuchkov, Sergey; Kantzas, Apostolos
2017-01-01
Multi-dimensional nuclear magnetic resonance (NMR) experiments are often used for study of molecular structure and dynamics of matter in core analysis and reservoir evaluation. Industrial applications of multi-dimensional NMR involve a high-dimensional measurement dataset with complicated correlation structure and require rapid and stable inversion algorithms from the time domain to the relaxation rate and/or diffusion domains. In practice, applying existing inverse algorithms with a large number of parameter values leads to an infinite number of solutions with a reasonable fit to the NMR data. The interpretation of such variability of multiple solutions and selection of the most appropriate solution could be a very complex problem. In most cases the characteristics of materials have sparse signatures, and investigators would like to distinguish the most significant relaxation and diffusion values of the materials. To produce an easy to interpret and unique NMR distribution with the finite number of the principal parameter values, we introduce a new method for NMR inversion. The method is constructed based on the trade-off between the conventional goodness-of-fit approach to multivariate data and the principle of parsimony guaranteeing inversion with the least number of parameter values. We suggest performing the inversion of NMR data using the forward stepwise regression selection algorithm. To account for the trade-off between goodness-of-fit and parsimony, the objective function is selected based on Akaike Information Criterion (AIC). The performance of the developed multi-dimensional NMR inversion method and its comparison with conventional methods are illustrated using real data for samples with bitumen, water and clay.
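A generic version of the selection strategy described, forward stepwise inclusion of candidate components that stops when the Gaussian AIC, n·log(RSS/n) + 2k, no longer decreases, can be sketched as follows. This illustrates the parsimony/goodness-of-fit trade-off only; it is not the authors' implementation, and all names are ours:

```python
import numpy as np

def forward_stepwise_aic(A, y, max_terms=None):
    """Forward stepwise selection of columns of the design matrix A
    (candidate relaxation/diffusion components) to fit y, stopping when
    AIC = n*log(RSS/n) + 2k stops decreasing (Gaussian errors assumed)."""
    n, p = A.shape
    max_terms = max_terms or p
    chosen, best_aic = [], np.inf
    while len(chosen) < max_terms:
        best_j, best_try = None, best_aic
        for j in range(p):
            if j in chosen:
                continue
            cols = chosen + [j]
            coef, *_ = np.linalg.lstsq(A[:, cols], y, rcond=None)
            rss = float(np.sum((y - A[:, cols] @ coef) ** 2))
            aic = n * np.log(max(rss, 1e-300) / n) + 2 * len(cols)
            if aic < best_try:               # best candidate so far
                best_try, best_j = aic, j
        if best_j is None:                   # no candidate improves AIC
            break
        chosen.append(best_j)
        best_aic = best_try
    return chosen
```

The 2k penalty is what keeps the recovered distribution sparse: a component is admitted only if its fit improvement outweighs the cost of an extra parameter.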
MAXIMS VIOLATIONS IN LITERARY WORK
Widya Hanum Sari Pertiwi
2015-12-01
This study was qualitative research that focused on the flouting of Gricean maxims, and the functions of that flouting, in the tales included in the collection of children's literature entitled My Giant Treasury of Stories and Rhymes. The objective of the study is to identify violations of the maxims of quantity, quality, relevance, and manner in the data sources and to analyze the use of flouting in the tales included in the book. A qualitative design using categorizing strategies, specifically a coding strategy, was applied. Thus, the researcher, as the instrument in this investigation, selected the tales, read them, and gathered every item reflecting a violation of the Gricean maxims under the conditions for flouting maxims. On the basis of the data analysis, it was found that some utterances in the tales, in both narration and conversation, flout the four maxims of conversation: the maxim of quality, the maxim of quantity, the maxim of relevance, and the maxim of manner. The researcher also found that the flouting of maxims has one basic function, namely to engage the readers' imagination with the tales. This basic function is developed by six other functions: (1) generating a specific situation, (2) developing the plot, (3) enlivening the characters' utterances, (4) implicating a message, (5) indirectly characterizing characters, and (6) creating an ambiguous setting. Keywords: children literature, tales, flouting maxims
Ganzinger, Harald; Nieuwenhuis, Robert; Nivela, Pilar
2001-01-01
Indexing data structures are well-known to be crucial for the efficiency of the current state-of-the-art theorem provers. Examples are \\emph{discrimination trees}, which are like tries where terms are seen as strings and common prefixes are shared, and \\emph{substitution trees}, where terms keep their tree structure and all common \\emph{contexts} can be shared. Here we describe a new indexing data structure, \\emph{context trees}, where, by means of a limited kind of conte...
Cochrane, John. H.; Longstaff, Francis A.; Santa-Clara, Pedro
2004-01-01
We solve a model with two "Lucas trees." Each tree has i.i.d. dividend growth. The investor has log utility and consumes the sum of the two trees' dividends. This model produces interesting asset-pricing dynamics, despite its simple ingredients. Investors want to rebalance their portfolios after any change in value. Since the size of the trees is fixed, however, prices must adjust to offset this desire. As a result, expected returns, excess returns, and return volatility all vary throug...
Longest common extensions in trees
Bille, Philip; Gawrychowski, Pawel; Gørtz, Inge Li;
2016-01-01
to trees and suggest a few applications of LCE in trees to tries and XML databases. Given a labeled and rooted tree T of size n, the goal is to preprocess T into a compact data structure that support the following LCE queries between subpaths and subtrees in T. Let v1, v2, w1, and w2 be nodes of T...... such that w1 and w2 are descendants of v1 and v2 respectively. - LCEPP(v1, w1, v2, w2): (path-path LCE) return the longest common prefix of the paths v1 ~→ w1 and v2 ~→ w2. - LCEPT(v1, w1, v2): (path-tree LCE) return maximal path-path LCE of the path v1 ~→ w1 and any path from v2 to a descendant leaf. - LCETT......(v1, v2): (tree-tree LCE) return a maximal path-path LCE of any pair of paths from v1 and v2 to descendant leaves. We present the first non-trivial bounds for supporting these queries. For LCEPP queries, we present a linear-space solution with O(log* n) query time. For LCEPT queries, we present...
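The LCEPP query has an obvious O(n)-per-query baseline against which compact structures are measured: extract the two top-down label sequences and compare. A naive sketch (our names, not the paper's):

```python
def lce_pp(parent, label, v1, w1, v2, w2):
    """Naive path-path LCE: length of the longest common prefix of the
    label sequences along the downward paths v1 ~> w1 and v2 ~> w2,
    where each wi is a descendant of vi. O(n) per query."""

    def path(v, w):                  # labels on the path v -> w, top down
        seq = []
        while w != v:
            seq.append(label[w])
            w = parent[w]
        seq.append(label[v])
        return seq[::-1]

    p1, p2 = path(v1, w1), path(v2, w2)
    k = 0
    while k < min(len(p1), len(p2)) and p1[k] == p2[k]:
        k += 1
    return k
```

The data-structure versions answer the same query in O(log* n) time after linear-space preprocessing, which is the point of the paper.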
Swanepoel, Konrad J
2011-01-01
A subset of a normed space X is called equilateral if the distance between any two points is the same. Let m(X) be the smallest possible size of an equilateral subset of X maximal with respect to inclusion. We first observe that Petty's construction of an X of any dimension d >= 4 with m(X)=4 can be generalised to show that m(X\\oplus_1\\R)=4 for any X of dimension at least 2 which has a smooth point on its unit sphere. By a construction involving Hadamard matrices we then show that both m(\\ell_p) and m(\\ell_p^d) are finite and bounded above by a function of p, for all 1 <= p < \\infty, and that for each such p and dimension d there exists c > 1 such that m(X) <= d+1 for all d-dimensional X with Banach-Mazur distance less than c from \\ell_p^d. Using Brouwer's fixed-point theorem we show that m(X) <= d+1 for all d-dimensional X with Banach-Mazur distance less than 3/2 from \\ell_\\infty^d. A graph-theoretical argument furthermore shows that m(\\ell_\\infty^d)=d+1. The above results lead us to conjecture that m(X) <= 1+\\dim X.
Unified Maximally Natural Supersymmetry
Huang, Junwu
2016-01-01
Maximally Natural Supersymmetry, an unusual weak-scale supersymmetric extension of the Standard Model based upon the inherently higher-dimensional mechanism of Scherk-Schwarz supersymmetry breaking (SSSB), possesses remarkably good fine tuning given present LHC limits. Here we construct a version with precision $SU(2)_{\\rm L} \\times U(1)_{\\rm Y} $ unification: $\\sin^2 \\theta_W(M_Z) \\simeq 0.231$ is predicted to $\\pm 2\\%$ by unifying $SU(2)_{\\rm L} \\times U(1)_{\\rm Y} $ into a 5D $SU(3)_{\\rm EW}$ theory at a Kaluza-Klein scale of $1/R_5 \\sim 4.4\\,{\\rm TeV}$, where SSSB is simultaneously realised. Full unification with $SU(3)_{\\rm C}$ is accommodated by extending the 5D theory to a $N=4$ supersymmetric $SU(6)$ gauge theory on a 6D rectangular orbifold at $1/R_6 \\sim 40 \\,{\\rm TeV}$. TeV-scale states beyond the SM include exotic charged fermions implied by $SU(3)_{\\rm EW}$ with masses lighter than $\\sim 1.2\\,{\\rm TeV}$, and squarks in the mass range $1.4\\,{\\rm TeV} - 2.3\\,{\\rm TeV}$, providing distinct signature...
Tolman, Marvin
2005-01-01
Students love outdoor activities and will love them even more when they build confidence in their tree identification and measurement skills. Through these activities, students will learn to identify the major characteristics of trees and discover how the pace--a nonstandard measuring unit--can be used to estimate not only distances but also the…
Computing Maximally Supersymmetric Scattering Amplitudes
Stankowicz, James Michael, Jr.
This dissertation reviews work in computing N = 4 super-Yang--Mills (sYM) and N = 8 maximally supersymmetric gravity (mSUGRA) scattering amplitudes in D = 4 spacetime dimensions in novel ways. After a brief introduction and overview in Ch. 1, the various techniques used to construct amplitudes in the remainder of the dissertation are discussed in Ch. 2. This includes several new concepts such as d log and pure integrand bases, as well as how to construct the amplitude using exactly one kinematic point where it vanishes. Also included in this chapter is an outline of the Mathematica package on shell diagrams and numerics.m (osdn) that was developed for the computations herein. The rest of the dissertation is devoted to explicit examples. In Ch. 3, the starting point is tree-level sYM amplitudes that have integral representations with residues that obey amplitude relations. These residues are shown to have corresponding residue numerators that allow a double copy prescription that results in mSUGRA residues. In Ch. 4, the two-loop four-point sYM amplitude is constructed in several ways, showcasing many of the techniques of Ch. 2; this includes an example of how to use osdn. The two-loop five-point amplitude is also presented in a pure integrand representation with comments on how it was constructed from one homogeneous cut of the amplitude. On-going work on the two-loop n-point amplitude is presented at the end of Ch. 4. In Ch. 5, the three-loop four-point amplitude is presented in the d log representation and in the pure integrand representation. In Ch. 6, there are several examples of four- through seven-loop planar diagrams that illustrate how considerations of the singularity structure of the amplitude underpin dual-conformal invariance. Taken with the previous examples, this is additional evidence that the structure known to exist in the planar sector extends to the full theory. At the end of this chapter is a proof that all mSUGRA amplitudes have a pole at
Maximal subgroups of finite groups
S. Srinivasan
1990-01-01
In finite groups, maximal subgroups play a very important role. Results in the literature show that if a maximal subgroup has very small index in the whole group, then it influences the structure of the group itself. In this paper we study the case in which the indices of the maximal subgroups of the group bear a special type of relation to the Fitting subgroup of the group.
Maximizing Entropy over Markov Processes
Biondi, Fabrizio; Legay, Axel; Nielsen, Bo Friis
2013-01-01
computation reduces to finding a model of a specification with highest entropy. Entropy maximization for probabilistic process specifications has not been studied before, even though it is well known in Bayesian inference for discrete distributions. We give a characterization of the global entropy of a process as a reward function, a polynomial algorithm to verify the existence of a system maximizing entropy among those respecting a specification, a procedure for the maximization of reward functions over Interval Markov Chains, and its application to synthesize an implementation maximizing entropy. We show how...
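A small numeric sketch of the reward-function view of entropy mentioned above: for an ergodic Markov chain, the entropy rate is the stationary expectation of a per-state entropy reward. This only illustrates the characterization; it is not the paper's Interval Markov Chain synthesis procedure.

```python
import numpy as np

def entropy_rate(P, tol=1e-12):
    """Entropy rate of an ergodic Markov chain: the stationary expectation
    of the local reward H(s) = -sum_t P[s,t] * log2 P[s,t]."""
    n = P.shape[0]
    # Stationary distribution: solve pi P = pi together with sum(pi) = 1.
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    local = np.array([-sum(p * np.log2(p) for p in row if p > tol) for row in P])
    return float(pi @ local)

P = np.array([[0.5, 0.5],
              [0.5, 0.5]])  # fair coin: one bit of entropy per step
print(entropy_rate(P))  # -> 1.0
```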
Which trees should be removed in thinning?
Timo Pukkala
2015-12-01
Background: In economically optimal management, trees that are removed in a thinning treatment should be selected on the basis of their value, relative value increment and the effect of removal on the growth of remaining trees. Large valuable trees with decreased value increment should be removed, especially when they overtop smaller trees. Methods: This study optimized the tree selection rule in the thinning treatments of continuous cover management when the aim is to maximize the profitability of forest management. The weights of three criteria (stem value, relative value increment and effect of removal on the competition of remaining trees) were optimized together with thinning intervals. Results and conclusions: The results confirmed the hypothesis that optimal thinning involves removing predominantly large trees. Increasing stumpage value, decreasing relative value increment, and increasing competitive influence increased the likelihood that removal is the optimal decision. However, if the spatial distribution of trees is irregular, it is optimal to leave large trees in sparse places and remove somewhat smaller trees from dense places. The benefit of optimal thinning, as compared to diameter-limit cutting, is not usually large in pure one-species stands. On the contrary, removing the smallest trees from the stand may lead to significant (30–40 %) reductions in the net present value of harvest incomes. Keywords: Continuous cover forestry, Tree selection, High thinning, Optimal management, Spatial distribution, Spatial growth model
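A hedged sketch of the weighted selection rule described in the Methods: each tree receives a removal score from the three criteria, and candidates are ranked by it. The weights and the linear form are illustrative assumptions, not the optimized values from the paper.

```python
# Hypothetical weighted tree-selection rule: high stem value and high
# competitive influence favour removal, while a high relative value
# increment favours retention. Weights w1..w3 are placeholders.

def removal_score(stem_value, rel_value_increment, competition, w=(0.5, 0.3, 0.2)):
    w1, w2, w3 = w
    return w1 * stem_value - w2 * rel_value_increment + w3 * competition

trees = [
    {"id": 1, "stem_value": 80.0, "rel_value_increment": 0.01, "competition": 0.9},
    {"id": 2, "stem_value": 15.0, "rel_value_increment": 0.08, "competition": 0.2},
]
ranked = sorted(
    trees,
    key=lambda t: removal_score(
        t["stem_value"], t["rel_value_increment"], t["competition"]),
    reverse=True,
)
print(ranked[0]["id"])  # the large, slow-increment, dominant tree -> 1
```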
Baños, Hector; Bushek, Nathaniel; Davidson, Ruth; Gross, Elizabeth; Harris, Pamela E.; Krone, Robert; Long, Colby; Stewart, Allen; WALKER, Robert
2016-01-01
We introduce the package PhylogeneticTrees for Macaulay2 which allows users to compute phylogenetic invariants for group-based tree models. We provide some background information on phylogenetic algebraic geometry and show how the package PhylogeneticTrees can be used to calculate a generating set for a phylogenetic ideal as well as a lower bound for its dimension. Finally, we show how methods within the package can be used to compute a generating set for the join of any two ideals.
Urban water quality modelling: a parsimonious holistic approach for a complex real case study.
Freni, Gabriele; Mannina, Giorgio; Viviani, Gaspare
2010-01-01
In the past three decades, scientific research has focused on the preservation of water resources, and in particular, on the polluting impact of urban areas on natural water bodies. One approach to this research has involved the development of tools to describe the phenomena that take place on the urban catchment during both wet and dry periods. Research has demonstrated the importance of the integrated analysis of all the transformation phases that characterise the delivery and treatment of urban water pollutants from source to outfall. With this aim, numerous integrated urban drainage models have been developed to analyse the fate of pollution from urban catchments to the final receiving waters, simulating several physical and chemical processes. Such modelling approaches require calibration, and for this reason, researchers have tried to address two opposing needs: the need for reliable representation of complex systems, and the need to employ parsimonious approaches to cope with the usually insufficient, especially for urban sources, water quality data. The present paper discusses the application of a bespoke model to a complex integrated catchment: the Nocella basin (Italy). This system is characterised by two main urban areas served by two wastewater treatment plants, and has a small river as the receiving water body. The paper describes the monitoring approach that was used for model calibration, presents some interesting considerations about the monitoring needs for integrated modelling applications, and provides initial results useful for identifying the most relevant polluting sources.
Komatitsch, Dimitri; Xie, Zhinan; Bozdaǧ, Ebru; Sales de Andrade, Elliott; Peter, Daniel; Liu, Qinya; Tromp, Jeroen
2016-09-01
We introduce a technique to compute exact anelastic sensitivity kernels in the time domain using parsimonious disk storage. The method is based on a reordering of the time loop of time-domain forward/adjoint wave propagation solvers combined with the use of a memory buffer. It avoids instabilities that occur when time-reversing dissipative wave propagation simulations. The total number of required time steps is unchanged compared to usual acoustic or elastic approaches. The cost is reduced by a factor of 4/3 compared to the case in which anelasticity is partially accounted for by accommodating the effects of physical dispersion. We validate our technique by performing a test in which we compare the Kα sensitivity kernel to the exact kernel obtained by saving the entire forward calculation. This benchmark confirms that our approach is also exact. We illustrate the importance of including full attenuation in the calculation of sensitivity kernels by showing significant differences with physical-dispersion-only kernels.
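The time-loop reordering described above can be caricatured in a few lines: keep the forward states in a memory buffer, then step the adjoint field while traversing the buffer in reverse, accumulating a zero-lag correlation. The 1-D advection stand-in below only illustrates this data flow; the solver, stencils and kernel definition are placeholder assumptions, not the authors' anelastic wave propagation code.

```python
import numpy as np

def forward_states(u0, steps, c=0.5):
    """Run a toy forward solver, storing every state in a memory buffer."""
    buf = [u0.copy()]
    u = u0.copy()
    for _ in range(steps):
        u = u + c * (np.roll(u, 1) - u)    # toy upwind advection step
        buf.append(u.copy())
    return buf

def kernel_from_buffer(buf, adj_source, c=0.5):
    """Traverse the buffer backwards while stepping a toy adjoint field,
    correlating forward and adjoint states to build a 'kernel'."""
    K = np.zeros_like(buf[0])
    adj = adj_source.copy()
    for u in reversed(buf):                # time-reversed buffer traversal
        K += adj * u                       # zero-lag correlation per step
        adj = adj + c * (np.roll(adj, -1) - adj)  # reversed-stencil adjoint step
    return K

u0 = np.zeros(16)
u0[3] = 1.0
src = np.zeros(16)
src[8] = 1.0
K = kernel_from_buffer(forward_states(u0, 10), src)
print(K.shape)  # (16,)
```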
Weissling, B. P.; Xie, H.; Murray, K. E.
2007-01-01
Soil moisture condition plays a vital role in a watershed's hydrologic response to a precipitation event and is thus parameterized in most, if not all, rainfall-runoff models. Yet the soil moisture condition antecedent to an event has proven difficult to quantify both spatially and temporally. This study assesses the potential to parameterize a parsimonious streamflow prediction model solely utilizing precipitation records and multi-temporal remotely sensed biophysical variables (i.e., from the Moderate Resolution Imaging Spectroradiometer (MODIS)/Terra satellite). This study is conducted on a 1420 km2 rural watershed in the Guadalupe River basin of southcentral Texas, a basin prone to catastrophic flooding from convective precipitation events. A multiple regression model, accounting for 78% of the variance of observed streamflow for calendar year 2004, was developed based on gauged precipitation, land surface temperature, and Enhanced Vegetation Index (EVI), on an 8-day interval. These results compared favorably with streamflow estimations utilizing the Natural Resources Conservation Service (NRCS) curve number method and the 5-day antecedent moisture model. This approach has great potential for developing near real-time predictive models for flood forecasting and can be used as a tool for flood management in any region for which similar remotely sensed data are available.
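The regression described above can be sketched in outline. The synthetic data below (precipitation, land surface temperature and EVI at 46 eight-day intervals) are invented placeholders; only the model form, ordinary least squares with an intercept, follows the abstract.

```python
import numpy as np

# Multiple regression of streamflow on precipitation, land surface
# temperature and EVI, fitted by ordinary least squares. All data here
# are synthetic stand-ins, not the study's observations.
rng = np.random.default_rng(0)
n = 46  # one row per 8-day interval in a year
precip = rng.gamma(2.0, 10.0, n)
lst = rng.normal(300.0, 5.0, n)
evi = rng.uniform(0.2, 0.6, n)
flow = 3.0 * precip - 0.5 * (lst - 300.0) + 40.0 * evi + rng.normal(0, 2.0, n)

X = np.column_stack([np.ones(n), precip, lst, evi])  # design matrix with intercept
beta, *_ = np.linalg.lstsq(X, flow, rcond=None)
pred = X @ beta
r2 = 1 - np.sum((flow - pred) ** 2) / np.sum((flow - flow.mean()) ** 2)
print(round(r2, 2))  # fraction of variance explained, analogous to the study's 78%
```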
Gopal, Judy; Muthu, Manikandan; Chun, Sechul
2016-07-28
The development of thin film coatings has been a very important advance in materials science for the modification of native material surface properties. Thin film coatings are enabled through the use of sophisticated instruments and technologies that demand expertise and huge initial and running costs. Nano-thin films are a further refinement of thin films, requiring still more expertise and sophistication. In this work we present, for the first time, a one-pot, straightforward carbon thin film coating methodology for glass substrates. There is novelty in every aspect of the method: the carbon in the nanofilm is obtained from turmeric soot; the coating technique consists of a basic immersion technique, a dip-dry method; and the phytosoot-derived carbon has an inherent ability to self-assemble into a uniform, continuous and stable coating. The carbon nanofilm has been characterized using field emission scanning electron microscopy (FESEM), energy-dispersive X-ray (EDAX) analysis, a goniometer and X-ray diffraction (XRD). This study opens a new school of thought of using such naturally available, free nanomaterials as eco-friendly green coatings. The amorphous porous carbon film can be coated on any hydrophilic substrate and is not substrate specific. Its added advantages of being transparent and antibacterial, alongside being green and parsimonious, make it an ideal candidate for solar panels, medical implants and other construction applications.
Diodato, Nazzareno; Borrelli, Pasquale; Fiener, Peter; Bellocchi, Gianni; Romano, Nunzio
2017-01-01
An in-depth analysis of the interannual variability of storms is required to detect changes in the soil erosive power of rainfall, which can also result in severe on-site and off-site damages. Evaluating long-term rainfall erosivity is a challenging task, mainly because of the paucity of high-resolution historical precipitation observations, which are generally reported at coarser temporal resolutions (e.g., monthly to annual totals). In this paper we suggest overcoming this limitation through an analysis of the long-term processes governing rainfall erosivity, with an application to datasets available for the central Ruhr region (Western Germany) for the period 1701-2011. Based on a parsimonious interpretation of seasonal rainfall-related processes (from spring to autumn), a model was derived using 5-min erosivity data from 10 stations covering the period 1937-2002, and then used to reconstruct a long series of annual rainfall erosivity values. Change-points in the evolution of rainfall erosivity are revealed in the 1760s and the 1920s, marking three sub-periods characterized by increasing mean values. The results indicate that the erosive hazard tends to increase as a consequence of an increased frequency of extreme precipitation events during recent decades, characterized by short rain events grouped into prolonged wet spells.
A data parsimonious model for capturing snapshots of groundwater pollution sources
Chaubey, Jyoti; Kashyap, Deepak
2017-02-01
Presented herein is a data parsimonious model for identification of regional and local groundwater pollution sources at a reference time employing corresponding fields of head, concentration and its time derivative. The regional source flux, assumed to be uniformly distributed, is viewed as the causative factor for the widely prevalent background concentration. The localized concentration-excesses are attributed to flux from local sources distributed around the respective centroids. The groundwater pollution is parameterized by flux from regional and local sources, and distribution parameters of the latter. These parameters are estimated by minimizing the sum of squares of differences between the observed and simulated concentration fields. The concentration field is simulated by a numerical solution of the transient solute transport equation. The equation is solved assuming the temporal derivative term to be known a priori and merging it with the sink term. This strategy circumvents the requirement of dynamic concentration data. The head field is generated using discrete point head data employing a specially devised interpolator that controls the numerical-differentiation errors and simultaneously ensures micro-level mass balance. This measure eliminates the requirement of flow modeling without compromising the sanctity of head field. The model after due verification has been illustrated employing available and simulated data from an area lying between two rivers Yamuna and Krishni in India.
Cowden, Joshua R.; Watkins, David W., Jr.; Mihelcic, James R.
2008-10-01
Several parsimonious stochastic rainfall models are developed and compared for application to domestic rainwater harvesting (DRWH) assessment in West Africa. Worldwide, improved water access rates are lowest for Sub-Saharan Africa, including the West African region, and these low rates have important implications for the health and economy of the region. DRWH is proposed as a potential mechanism for water supply enhancement, especially for poor urban households in the region, which is essential for development planning and poverty alleviation initiatives. The stochastic rainfall models examined are Markov models and LARS-WG, selected for their availability and ease of use for water planners in the developing world. A first-order Markov occurrence model with a mixed exponential amount model is selected as the best option among unconditioned Markov models. However, there is no clear advantage in selecting Markov models over the LARS-WG model for DRWH in West Africa, with each model having distinct strengths and weaknesses. A multi-model approach is used in assessing DRWH in the region to illustrate the variability associated with the rainfall models. It is clear DRWH can be successfully used as a water enhancement mechanism in West Africa for certain times of the year. A 200 L drum storage capacity could potentially optimize these simple, small-roof-area systems for many locations in the region.
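A minimal sketch of the model class selected above: a first-order Markov chain for wet/dry occurrence combined with a mixed-exponential amount model for wet days. All parameter values are illustrative assumptions, not those fitted in the study.

```python
import random

def simulate_rain(days, p_wd=0.25, p_ww=0.55, alpha=0.7, mu1=3.0, mu2=15.0, seed=42):
    """Daily rainfall series (mm): first-order Markov occurrence,
    mixed-exponential amounts on wet days. Parameters are placeholders."""
    rng = random.Random(seed)
    wet, series = False, []
    for _ in range(days):
        p_wet = p_ww if wet else p_wd              # occurrence depends on yesterday
        wet = rng.random() < p_wet
        if wet:
            mu = mu1 if rng.random() < alpha else mu2  # mixed exponential amount
            series.append(rng.expovariate(1.0 / mu))
        else:
            series.append(0.0)
    return series

rain = simulate_rain(365)
wet_days = sum(1 for r in rain if r > 0)
print(wet_days, round(sum(rain), 1))  # wet-day count and annual total (mm)
```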
Game tree algorithms and solution trees
W.H.L.M. Pijls (Wim); A. de Bruin (Arie)
1998-01-01
In this paper, a theory of game tree algorithms is presented, based entirely upon the concept of a solution tree. Two types of solution trees are distinguished: max trees and min trees. Every game tree algorithm tries to prune as many nodes as possible from the game tree. A cut-off criterion in...
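For context, a minimal alpha-beta search shows the kind of pruning that solution-tree theory formalizes: the alpha and beta bounds play the roles of max (lower-bound) and min (upper-bound) solution trees. The nested-list game tree and leaf values are invented for this sketch.

```python
def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    """Minimax value with alpha-beta pruning; leaves are numbers,
    internal nodes are lists of children."""
    if isinstance(node, (int, float)):      # leaf: static evaluation
        return node
    if maximizing:
        best = float("-inf")
        for child in node:
            best = max(best, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, best)
            if alpha >= beta:               # cut-off: subtree cannot matter
                break
        return best
    best = float("inf")
    for child in node:
        best = min(best, alphabeta(child, True, alpha, beta))
        beta = min(beta, best)
        if alpha >= beta:
            break
    return best

# Classic textbook example: value 6; in the last min-node the leaf 2
# is never evaluated because leaf 1 already forces a cut-off.
tree = [[3, 5], [6, 9], [1, 2]]
print(alphabeta(tree, True))  # -> 6
```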
Appelt, Ane L; Rønde, Heidi S
2013-01-01
The photo shows a close-up of a Lichtenberg figure – popularly called an “electron tree” – produced in a cylinder of polymethyl methacrylate (PMMA). Electron trees are created by irradiating a suitable insulating material, in this case PMMA, with an intense high-energy electron beam. Upon discharge, during dielectric breakdown in the material, the electrons generate branching chains of fractures on leaving the PMMA, producing the tree pattern seen. To be able to create electron trees with a clinical linear accelerator, one needs to access the primary electron beam used for photon treatments. We appropriated a linac that was being decommissioned in our department and dismantled the head to circumvent the target and ion chambers. This is one of 24 electron trees produced before we had to stop the fun and allow the rest of the accelerator to be disassembled.
Gonzalez-Sanchez, Jon
2010-01-01
Let $w = w(x_1,..., x_n)$ be a word, i.e. an element of the free group $F$ on the $n$ generators $x_1,..., x_n$. The verbal subgroup $w(G)$ of a group $G$ is the subgroup generated by the set $\\{w (g_1,...,g_n)^{\\pm 1} | g_i \\in G, 1\\leq i\\leq n \\}$ of all $w$-values in $G$. We say that a (finite) group $G$ is $w$-maximal if $|G:w(G)|> |H:w(H)|$ for all proper subgroups $H$ of $G$ and that $G$ is hereditarily $w$-maximal if every subgroup of $G$ is $w$-maximal. In this text we study $w$-maximal and hereditarily $w$-maximal (finite) groups.
The key kinematic determinants of undulatory underwater swimming at maximal velocity.
Connaboy, Chris; Naemi, Roozbeh; Brown, Susan; Psycharakis, Stelios; McCabe, Carla; Coleman, Simon; Sanders, Ross
2016-01-01
The optimisation of undulatory underwater swimming is highly important in competitive swimming performance. Nineteen kinematic variables were identified from previous research undertaken to assess undulatory underwater swimming performance. The purpose of the present study was to determine which kinematic variables were key to the production of maximal undulatory underwater swimming velocity. Kinematic data at maximal undulatory underwater swimming velocity were collected from 17 skilled swimmers. A series of separate backward-elimination analysis of covariance models was produced with cycle frequency and cycle length as dependent variables (DVs) and participant as a fixed factor, as including cycle frequency and cycle length would explain 100% of the maximal swimming velocity variance. The covariates identified in the cycle-frequency and cycle-length models were used to form the saturated model for maximal swimming velocity. The final parsimonious model identified three covariates (maximal knee joint angular velocity, maximal ankle angular velocity and knee range of movement) as determinants of the variance in maximal swimming velocity (adjusted r2 = 0.929). However, when participant was removed as a fixed factor there was a large reduction in explained variance (adjusted r2 = 0.397) and only maximal knee joint angular velocity continued to contribute significantly, highlighting its importance to the production of maximal swimming velocity. The reduction in explained variance suggests an emphasis on inter-individual differences in undulatory underwater swimming technique and/or anthropometry. Future research should examine the efficacy of other anthropometric, kinematic and coordination variables to better understand the production of maximal swimming velocity and consider the importance of individual undulatory underwater swimming techniques when interpreting the data.
Tanner, Alastair R.; Fleming, James F.; Tarver, James E.; Pisani, Davide
2017-01-01
Morphological data provide the only means of classifying the majority of life's history, but the choice between competing phylogenetic methods for the analysis of morphology is unclear. Traditionally, parsimony methods have been favoured but recent studies have shown that these approaches are less accurate than the Bayesian implementation of the Mk model. Here we expand on these findings in several ways: we assess the impact of tree shape and maximum-likelihood estimation using the Mk model, as well as analysing data composed of both binary and multistate characters. We find that all methods struggle to correctly resolve deep clades within asymmetric trees, and when analysing small character matrices. The Bayesian Mk model is the most accurate method for estimating topology, but with lower resolution than other methods. Equal weights parsimony is more accurate than implied weights parsimony, and maximum-likelihood estimation using the Mk model is the least accurate method. We conclude that the Bayesian implementation of the Mk model should be the default method for phylogenetic estimation from phenotype datasets, and we explore the implications of our simulations in reanalysing several empirical morphological character matrices. A consequence of our finding is that high levels of resolution or the ability to classify species or groups with much confidence should not be expected when using small datasets. It is now necessary to depart from the traditional parsimony paradigms of constructing character matrices, towards datasets constructed explicitly for Bayesian methods. PMID:28077778
Maximizing without difficulty: A modified maximizing scale and its correlates
Lai, Linda
2010-01-01
... included in several previous studies. Based on this scale, maximizing is positively correlated with optimism, need for cognition, desire for consistency, risk aversion, intrinsic motivation, self-efficacy and perceived workload, whereas...
Maximizing and customer loyalty: Are maximizers less loyal?
Linda Lai
2011-06-01
Despite their efforts to choose the best of all available solutions, maximizers seem to be more inclined than satisficers to regret their choices and to experience post-decisional dissonance. Maximizers may therefore be expected to change their decisions more frequently and hence exhibit lower customer loyalty to providers of products and services compared to satisficers. Findings from the study reported here (N = 1978) support this prediction. Maximizers reported significantly higher intentions to switch to another service provider (television provider) than satisficers. Maximizers' intentions to switch appear to be intensified and mediated by higher proneness to regret, increased desire to discuss relevant choices with others, higher levels of perceived knowledge of alternatives, and higher ego involvement in the end product, compared to satisficers. Opportunities for future research are suggested.
Are maximizers really unhappy? The measurement of maximizing tendency,
Dalia L. Diab
2008-06-01
Recent research suggesting that people who maximize are less happy than those who satisfice has received considerable fanfare. The current study investigates whether this conclusion reflects the construct itself or rather how it is measured. We developed an alternative measure of maximizing tendency that is theory-based, has good psychometric properties, and predicts behavioral outcomes. In contrast to the existing maximization measure, our new measure did not correlate with life (dis)satisfaction, nor with most maladaptive personality and decision-making traits. We conclude that the interpretation of maximizers as unhappy may be due to poor measurement of the construct. We present a more reliable and valid measure for future researchers to use.
Principles of maximally classical and maximally realistic quantum mechanics
S M Roy
2002-08-01
Recently Auberson, Mahoux, Roy and Singh have proved a long-standing conjecture of Roy and Singh: in 2N-dimensional phase space, a maximally realistic quantum mechanics can have quantum probabilities of no more than N + 1 complete commuting sets (CCS) of observables coexisting as marginals of one positive phase space density. Here I formulate a stationary principle which gives a nonperturbative definition of a maximally classical as well as maximally realistic phase space density. I show that the maximally classical trajectories are in fact exactly classical in the simple examples of coherent states and bound states of an oscillator and Gaussian free particle states. In contrast, it is known that the de Broglie–Bohm realistic theory gives highly nonclassical trajectories.
Gaffin, Douglas D; Brayfield, Brad P
2016-01-01
The navigation of bees and ants from hive to food and back has captivated people for more than a century. Recently, the Navigation by Scene Familiarity Hypothesis (NSFH) has been proposed as a parsimonious approach that is congruent with the limited neural elements of these insects' brains. In the NSFH approach, an agent completes an initial training excursion, storing images along the way. To retrace the path, the agent scans the area and compares the current scenes to those previously experienced. By turning and moving to minimize the pixel-by-pixel differences between encountered and stored scenes, the agent is guided along the path without having memorized the sequence. An important premise of the NSFH is that the visual information of the environment is adequate to guide navigation without aliasing. Here we demonstrate that an image landscape of an indoor setting possesses ample navigational information. We produced a visual landscape of our laboratory and part of the adjoining corridor consisting of 2816 panoramic snapshots arranged in a grid at 12.7-cm centers. We show that pixel-by-pixel comparisons of these images yield robust translational and rotational visual information. We also produced a simple algorithm that tracks previously experienced routes within our lab based on an insect-inspired scene familiarity approach and demonstrate that adequate visual information exists for an agent to retrace complex training routes, including those where the path's end is not visible from its origin. We used this landscape to systematically test the interplay of sensor morphology, angles of inspection, and similarity threshold with the recapitulation performance of the agent. Finally, we compared the relative information content and chance of aliasing within our visually rich laboratory landscape to scenes acquired from indoor corridors with more repetitive scenery.
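The pixel-by-pixel comparison at the heart of the NSFH agent can be sketched for one-dimensional panoramas as follows (an illustrative toy of our own, not the authors' algorithm; real snapshots are 2-D images):

```python
def best_rotation(current, stored):
    """Return the rotation (in pixels) of `current` that minimizes the
    pixel-by-pixel sum of squared differences against `stored`.
    Both panoramas are equal-length 1-D sequences of pixel intensities."""
    n = len(current)

    def ssd(r):
        return sum((current[(i + r) % n] - stored[i]) ** 2 for i in range(n))

    return min(range(n), key=ssd)
```

An agent using this would turn by the angle corresponding to the best rotation against the most familiar stored scene, then step forward and repeat.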
Species tree inference by minimizing deep coalescences.
Cuong Than
2009-09-01
In a 1997 seminal paper, W. Maddison proposed minimizing deep coalescences, or MDC, as an optimization criterion for inferring the species tree from a set of incongruent gene trees, assuming the incongruence is exclusively due to lineage sorting. In a subsequent paper, Maddison and Knowles provided and implemented a search heuristic for optimizing the MDC criterion, given a set of gene trees. However, the heuristic is not guaranteed to compute optimal solutions, and its hill-climbing search makes it slow in practice. In this paper, we provide two exact solutions to the problem of inferring the species tree from a set of gene trees under the MDC criterion. In other words, our solutions are guaranteed to find the tree that minimizes the total number of deep coalescences from a set of gene trees. One solution is based on a novel integer linear programming (ILP) formulation, and another is based on a simple dynamic programming (DP) approach. Powerful ILP solvers, such as CPLEX, make the first solution appealing, particularly for very large-scale instances of the problem, whereas the DP-based solution eliminates dependence on proprietary tools, and its simplicity makes it easy to integrate with other genomic events that may cause gene tree incongruence. Using the exact solutions, we analyze a data set of 106 loci from eight yeast species, a data set of 268 loci from eight Apicomplexan species, and several simulated data sets. We show that the MDC criterion provides very accurate estimates of the species tree topologies, and that our solutions are very fast, thus allowing for the accurate analysis of genome-scale data sets. Further, the efficiency of the solutions allows for quick exploration of sub-optimal solutions, which is important for a parsimony-based criterion such as MDC, as we show. We show that searching for the species tree in the compatibility graph of the clusters induced by the gene trees may be sufficient in practice, a finding that helps
Interpreting Tree Ensembles with inTrees
Deng, Houtao
2014-01-01
Tree ensembles such as random forests and boosted trees are accurate but difficult to understand, debug and deploy. In this work, we provide the inTrees (interpretable trees) framework that extracts, measures, prunes and selects rules from a tree ensemble, and calculates frequent variable interactions. A rule-based learner, referred to as the simplified tree ensemble learner (STEL), can also be formed and used for future prediction. The inTrees framework can be applied to both classification an...
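The extract-and-measure idea can be sketched on toy trees (a hypothetical minimal tree format of our own, not the actual inTrees R implementation):

```python
from collections import Counter

# A toy decision tree: internal node = (feature_index, threshold, left, right);
# a leaf is just the predicted class label.
def extract_rules(tree, conds=()):
    """Enumerate (condition-path, prediction) pairs, one per leaf."""
    if not isinstance(tree, tuple):
        return [(conds, tree)]
    feat, thr, left, right = tree
    return (extract_rules(left,  conds + ((feat, "<=", thr),)) +
            extract_rules(right, conds + ((feat, ">",  thr),)))

def frequent_conditions(ensemble):
    """Count how often each split condition appears across the ensemble's rules."""
    counts = Counter()
    for tree in ensemble:
        for conds, _pred in extract_rules(tree):
            counts.update(conds)
    return counts
```

Pruning and rule selection would then operate on these (condition, prediction) pairs, scoring each rule by frequency and error on held-out data.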
Maximizing ROI with yield management
Neil Snyder
2001-01-01
.... the technology is based on the concept of yield management, which aims to sell the right product to the right customer at the right price and the right time therefore maximizing revenue, or yield...
Are CEOs Expected Utility Maximizers?
John List; Charles Mason
2009-01-01
Are individuals expected utility maximizers? This question represents much more than academic curiosity. In a normative sense, at stake are the fundamental underpinnings of the bulk of the last half-century's models of choice under uncertainty. From a positive perspective, the ubiquitous use of benefit-cost analysis across government agencies renders the expected utility maximization paradigm literally the only game in town. In this study, we advance the literature by exploring CEO's preferen...
Gaussian maximally multipartite entangled states
Facchi, Paolo; Lupo, Cosmo; Mancini, Stefano; Pascazio, Saverio
2009-01-01
We introduce the notion of maximally multipartite entangled states (MMES) in the context of Gaussian continuous variable quantum systems. These are bosonic multipartite states that are maximally entangled over all possible bipartitions of the system. By considering multimode Gaussian states with constrained energy, we show that perfect MMESs, which exhibit the maximum amount of bipartite entanglement for all bipartitions, only exist for systems containing n=2 or 3 modes. We further numerically investigate the structure of MMESs and their frustration for n <= 7.
All maximally entangling unitary operators
Cohen, Scott M. [Department of Physics, Duquesne University, Pittsburgh, Pennsylvania 15282 (United States); Department of Physics, Carnegie-Mellon University, Pittsburgh, Pennsylvania 15213 (United States)
2011-11-15
We characterize all maximally entangling bipartite unitary operators, acting on systems A and B of arbitrary finite dimensions d_A ≤ d_B, when ancillary systems are available to both parties. Several useful and interesting consequences of this characterization are discussed, including an understanding of why the entangling and disentangling capacities of a given (maximally entangling) unitary can differ and a proof that these capacities must be equal when d_A = d_B.
Salvio, Alberto; Strumia, Alessandro; Urbano, Alfredo
2016-01-01
Motivated by the 750 GeV diphoton excess found at LHC, we compute the maximal width into $\gamma\gamma$ that a neutral scalar can acquire through a loop of charged fermions or scalars as function of the maximal scale at which the theory holds, taking into account vacuum (meta)stability bounds. We show how an extra gauge symmetry can qualitatively weaken such bounds, and explore collider probes and connections with Dark Matter.
National Audubon Society, New York, NY.
Included are an illustrated student reader, "The Story of Trees," a leaders' guide, and a large tree chart with 37 colored pictures. The student reader reviews several aspects of trees: a definition of a tree; where and how trees grow; flowers, pollination and seed production; how trees make their food; how to recognize trees; seasonal changes;…
Predicting in ungauged basins using a parsimonious rainfall-runoff model
Skaugen, Thomas; Olav Peerebom, Ivar; Nilsson, Anna
2015-04-01
Prediction in ungauged basins is a demanding but necessary test for hydrological model structures. Ideally, the relationship between model parameters and catchment characteristics (CCs) should be hydrologically justifiable. Many studies, however, report on failure to obtain significant correlations between model parameters and CCs. Under the hypothesis that the lack of correlations stems from non-identifiability of model parameters caused by overparameterization, the relatively new parameter-parsimonious DDD (Distance Distribution Dynamics) model was tested for predictions in ungauged basins in Norway. In DDD, the capacity of the subsurface water reservoir M is the only parameter to be calibrated, whereas the runoff dynamics is completely parameterised from observed characteristics derived from GIS and runoff recession analysis. Water is conveyed through the soils to the river network by waves with celerities determined by the level of saturation in the catchment. The distributions of distances between points in the catchment to the nearest river reach and of the river network give, together with the celerities, distributions of travel times, and, consequently, unit hydrographs. DDD has six parameters fewer to calibrate in the runoff module than, for example, the well-known Swedish HBV model. In this study, multiple regression equations relating CCs and model parameters were trained from 84 calibrated catchments located all over Norway, and all model parameters showed significant correlations with catchment characteristics. The significant correlation coefficients (with p-value < 0.05) ranged from 0.22-0.55. The suitability of DDD for predictions in ungauged basins was tested for 17 catchments not used to estimate the multiple regression equations. For 10 of the 17 catchments, deviations in Nash-Sutcliffe Efficiency (NSE) criteria between the calibrated and regionalised model were less than 0.1. The median NSE for the regionalised DDD for the 17 catchments, for two
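The Nash-Sutcliffe Efficiency used to compare calibrated and regionalised runs is one minus the ratio of residual variance to the variance of the observations; a minimal sketch:

```python
def nse(observed, simulated):
    """Nash-Sutcliffe Efficiency: 1 is a perfect fit; 0 means the simulation
    is no better than predicting the mean of the observations."""
    mean_obs = sum(observed) / len(observed)
    resid = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    spread = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - resid / spread
```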
Urban micro-scale flood risk estimation with parsimonious hydraulic modelling and census data
C. Arrighi
2013-05-01
The adoption of the 2007/60/EC Directive requires European countries to implement flood hazard and flood risk maps by the end of 2013. Flood risk is the product of flood hazard, vulnerability and exposure, all three to be estimated with comparable levels of accuracy. The route to flood risk assessment is consequently much more than hydraulic modelling of inundation, that is, hazard mapping. While hazard maps have already been implemented in many countries, quantitative damage and risk maps are still at a preliminary level. A parsimonious quasi-2-D hydraulic model is here adopted, having many advantages in terms of easy set-up. It is here evaluated as being accurate in flood depth estimation in urban areas with a high-resolution and up-to-date Digital Surface Model (DSM). The accuracy, estimated by comparison with marble-plate records of a historic flood in the city of Florence, is characterized in the downtown's most flooded area by a bias of a very few centimetres and a determination coefficient of 0.73. The average risk is found to be about 14 € m⁻² yr⁻¹, corresponding to about 8.3% of residents' income. The spatial distribution of estimated risk highlights a complex interaction between the flood pattern and the building characteristics. As a final example application, the estimated risk values have been used to compare different retrofitting measures. Proceeding through the risk estimation steps, a new micro-scale potential damage assessment method is proposed. This is based on the georeferenced census system as the optimal compromise between spatial detail and open availability of socio-economic data. The results of flood risk assessment at the census section scale resolve most of the risk spatial variability, and they can be easily aggregated to whatever upper scale is needed given that they are geographically defined as contiguous polygons. Damage is calculated through stage–damage curves, starting from census data on building type and
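Average annual risk figures of this kind are typically obtained by integrating damage over the annual exceedance probability; a minimal sketch of that standard expected-annual-damage calculation (illustrative numbers, not the study's data):

```python
def expected_annual_damage(events):
    """Trapezoidal integration of the damage-probability curve.
    `events` is a list of (return_period_years, damage) pairs."""
    # convert to (annual exceedance probability, damage), most frequent event first
    pts = sorted(((1.0 / T, d) for T, d in events), reverse=True)
    ead = 0.0
    for (p1, d1), (p2, d2) in zip(pts, pts[1:]):
        ead += (p1 - p2) * (d1 + d2) / 2.0
    return ead
```

For instance, zero damage at the 10-year event and 100 units at the 100-year event integrate to (0.1 - 0.01) * 50 = 4.5 units per year.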
A search for model parsimony in a real time flood forecasting system
Grossi, G.; Balistrocchi, M.
2009-04-01
As regards the hydrological simulation of flood events, a physically based distributed approach is the most appealing one, especially in those areas where the spatial variability of the soil hydraulic properties as well as of the meteorological forcing cannot be left apart, such as in mountainous regions. On the other hand, dealing with real time flood forecasting systems, less detailed models requiring a minor number of parameters may be more convenient, reducing both the computational costs and the calibration uncertainty. In fact, in this case a precise quantification of the entire hydrograph pattern is not necessary, while the expected output of a real time flood forecasting system is just an estimate of the peak discharge, the time to peak and in some cases the flood volume. In this perspective a parsimonious model has to be found in order to increase the efficiency of the system. A suitable case study was identified in the northern Apennines: the Taro river is a right tributary to the Po river and drains about 2000 km² of mountains, hills and floodplain, equally distributed. The hydrometeorological monitoring of this medium-sized watershed is managed by ARPA Emilia Romagna through a dense network of up-to-date gauges (about 30 rain gauges and 10 hydrometers). Detailed maps of the surface elevation, land use and soil texture characteristics are also available. Five flood events were recorded by the new monitoring network in the years 2003-2007: during these events the peak discharge was higher than 1000 m³/s, which is actually quite a high value when compared to the mean discharge rate of about 30 m³/s. The rainfall spatial patterns of such storms were analyzed in previous works by means of geostatistical tools and a typical semivariogram was defined, with the aim of establishing a typical storm structure leading to flood events in the Taro river. The available information was implemented into a distributed flood event model with a spatial resolution of 90 m
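The typical semivariogram mentioned above is estimated from squared differences of gauge readings binned by separation distance; a minimal sketch of the empirical estimator (toy coordinates and values of our own, not the Taro network data):

```python
def empirical_semivariance(points, values, lag, tol):
    """Average half squared difference over station pairs whose separation
    falls within lag ± tol (points are (x, y) coordinates)."""
    diffs = []
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            dx = points[i][0] - points[j][0]
            dy = points[i][1] - points[j][1]
            dist = (dx * dx + dy * dy) ** 0.5
            if abs(dist - lag) <= tol:
                diffs.append((values[i] - values[j]) ** 2)
    return 0.5 * sum(diffs) / len(diffs) if diffs else None
```

Repeating this over a range of lags traces out the semivariogram, to which a model (spherical, exponential, etc.) is then fitted.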
Canfield, Elaine
2002-01-01
Describes a fifth-grade art activity that offers a new approach to creating pictures of Aspen trees. Explains that the students learned about art concepts, such as line and balance, in this lesson. Discusses the process in detail for creating the pictures. (CMK)
A. Garmroodi Asil
2017-09-01
To further reduce the sulfur dioxide emission of the entire refining process, two scenarios, acid gas preheat and air preheat, are investigated when either is used simultaneously with the third enrichment scheme. The maximum overall sulfur recovery efficiency and highest combustion chamber temperature are slightly higher for acid gas preheat, but air preheat is more favorable because it is more benign. To the best of our knowledge, optimization of the entire GTU + enrichment section and SRU processes has not been addressed previously.
Algebraic curves of maximal cyclicity
Caubergh, Magdalena; Dumortier, Freddy
2006-01-01
The paper deals with analytic families of planar vector fields, studying methods to detect the cyclicity of a non-isolated closed orbit, i.e. the maximum number of limit cycles that can locally bifurcate from it. It is known that this multi-parameter problem can be reduced to a single-parameter one, in the sense that there exist analytic curves in parameter space along which the maximal cyclicity can be attained. In that case one speaks about a maximal cyclicity curve (mcc) in case only the number is considered and of a maximal multiplicity curve (mmc) in case the multiplicity is also taken into account. In view of obtaining efficient algorithms for detecting the cyclicity, we investigate whether such mcc or mmc can be algebraic or even linear depending on certain general properties of the families or of their associated Bautin ideal. In any case by well chosen examples we show that prudence is appropriate.
BOUNDEDNESS OF MAXIMAL SINGULAR INTEGRALS
CHEN JIECHENG; ZHU XIANGRONG
2005-01-01
The authors study singular integrals under the Hörmander condition and a measure not satisfying the doubling condition. First, if the corresponding singular integral is bounded from L² to itself, it is proved that the maximal singular integral is bounded from L∞ to RBMO except that it is infinite μ-a.e. on ℝ^d. A sufficient condition and a necessary condition such that the maximal singular integral is bounded from L² to itself are also obtained. There is a small gap between the two conditions.
Maximal Unitarity at Two Loops
Kosower, David A
2012-01-01
We show how to compute the coefficients of the double box basis integrals in a massless four-point amplitude in terms of tree amplitudes. We show how to choose suitable multidimensional contours for performing the required cuts, and derive consistency equations from the requirement that integrals of total derivatives vanish. Our formulae for the coefficients can be used either analytically or numerically.
Understanding maximal repetitions in strings
Crochemore, Maxime
2008-01-01
The cornerstone of any algorithm computing all repetitions in a string of length n in O(n) time is the fact that the number of runs (or maximal repetitions) is O(n). We give a simple proof of this result. As a consequence of our approach, the stronger result concerning the linearity of the sum of exponents of all runs follows easily.
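A brute-force enumeration directly from the definition makes the notion of a run concrete (illustrative only; the abstract's point is the O(n) bound on the number of runs, and linear-time algorithms exist):

```python
def smallest_period(w):
    """Smallest p such that w[i] == w[i-p] for all i >= p."""
    n = len(w)
    for p in range(1, n + 1):
        if all(w[i] == w[i - p] for i in range(p, n)):
            return p
    return n

def runs(s):
    """All runs (maximal repetitions) of s as (start, end, period) triples,
    where a run is a substring whose smallest period p satisfies 2p <= length
    and which cannot be extended in either direction with the same period."""
    n = len(s)
    found = set()
    for i in range(n):
        for j in range(i + 2, n + 1):
            p = smallest_period(s[i:j])
            if 2 * p > j - i:
                continue  # not even a square: not a repetition
            left_maximal = i == 0 or s[i - 1] != s[i - 1 + p]
            right_maximal = j == n or s[j] != s[j - p]
            if left_maximal and right_maximal:
                found.add((i, j, p))
    return sorted(found)
```

On "mississippi" this finds four runs, including "ississi" with period 3, consistent with the theorem that the number of runs never exceeds the string length.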
Large-scale parsimony analysis of metazoan indels in protein-coding genes.
Belinky, Frida; Cohen, Ofir; Huchon, Dorothée
2010-02-01
Insertions and deletions (indels) are considered to be rare evolutionary events, the analysis of which may resolve controversial phylogenetic relationships. Indeed, indel characters are often assumed to be less homoplastic than amino acid and nucleotide substitutions and, consequently, more reliable markers for phylogenetic reconstruction. In this study, we analyzed indels from over 1,000 metazoan orthologous genes. We studied the impact of different species sampling, ortholog data sets, lengths of included indels, and indel-coding methods on the resulting metazoan tree. Our results show that, similar to sequence substitutions, indels are homoplastic characters, and their analysis is sensitive to the long-branch attraction artifact. Furthermore, improving the taxon sampling and choosing a closely related outgroup greatly impact the phylogenetic inference. Our indel-based inferences support the Ecdysozoa hypothesis over the Coelomata hypothesis and suggest that sponges are a sister clade to other animals.
Extracting Conflict-free Information from Multi-labeled Trees
Deepak, Akshay; McMahon, Michelle M
2012-01-01
A multi-labeled tree, or MUL-tree, is a phylogenetic tree where two or more leaves share a label, e.g., a species name. A MUL-tree can imply multiple conflicting phylogenetic relationships for the same set of taxa, but can also contain conflict-free information that is of interest and yet is not obvious. We define the information content of a MUL-tree T as the set of all conflict-free quartet topologies implied by T, and define the maximal reduced form of T as the smallest tree that can be obtained from T by pruning leaves and contracting edges while retaining the same information content. We show that any two MUL-trees with the same information content exhibit the same reduced form. This introduces an equivalence relation in MUL-trees with potential applications to comparing MUL-trees. We present an efficient algorithm to reduce a MUL-tree to its maximally reduced form and evaluate its performance on empirical datasets in terms of both quality of the reduced tree and the degree of data reduction achieved.
Achenbach-Richter, L.; Gupta, R.; Zillig, W.; Woese, C. R.
1988-01-01
The sequence of the 16S ribosomal RNA gene from the archaebacterium Thermococcus celer shows the organism to be related to the methanogenic archaebacteria rather than to its phenotypic counterparts, the extremely thermophilic archaebacteria. This conclusion turns on the position of the root of the archaebacterial phylogenetic tree, however. The problems encountered in rooting this tree are analyzed in detail. Under conditions that suppress evolutionary noise both the parsimony and evolutionary distance methods yield a root location (using a number of eubacterial or eukaryotic outgroup sequences) that is consistent with that determined by an "internal rooting" method, based upon an (approximate) determination of relative evolutionary rates.
Finite Sholander Trees, Trees, and their Betweenness
Chvátal, Vašek; Schäfer, Philipp Matthias
2011-01-01
We provide a proof of Sholander's claim (Trees, lattices, order, and betweenness, Proc. Amer. Math. Soc. 3, 369-381 (1952)) concerning the representability of collections of so-called segments by trees, which yields a characterization of the interval function of a tree. Furthermore, we streamline Burigana's characterization (Tree representations of betweenness relations defined by intersection and inclusion, Mathematics and Social Sciences 185, 5-36 (2009)) of tree betweenness and provide a relatively short proof.
Mirus, Benjamin B.; Nimmo, J.R.
2013-01-01
The impact of preferential flow on recharge and contaminant transport poses a considerable challenge to water-resources management. Typical hydrologic models require extensive site characterization, but can underestimate fluxes when preferential flow is significant. A recently developed source-responsive model incorporates film-flow theory with conservation of mass to estimate unsaturated-zone preferential fluxes with readily available data. The term source-responsive describes the sensitivity of preferential flow in response to water availability at the source of input. We present the first rigorous tests of a parsimonious formulation for simulating water table fluctuations using two case studies, both in arid regions with thick unsaturated zones of fractured volcanic rock. Diffuse flow theory cannot adequately capture the observed water table responses at both sites; the source-responsive model is a viable alternative. We treat the active area fraction of preferential flow paths as a scaled function of water inputs at the land surface then calibrate the macropore density to fit observed water table rises. Unlike previous applications, we allow the characteristic film-flow velocity to vary, reflecting the lag time between source and deep water table responses. Analysis of model performance and parameter sensitivity for the two case studies underscores the importance of identifying thresholds for initiation of film flow in unsaturated rocks, and suggests that this parsimonious approach is potentially of great practical value.
Note on maximal distance separable codes
YANG Jian-sheng; WANG De-xiu; JIN Qing-fang
2009-01-01
In this paper, the maximal length of maximal distance separable (MDS) codes is studied, and a new upper bound formula for the maximal length of MDS codes is obtained. In particular, the exact values of the maximal length of MDS codes for some parameters are given.
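A minimal concrete instance of the defining property (an MDS code meets the Singleton bound d <= n - k + 1 with equality) is the binary single parity-check code; this is a standard textbook example, not code from the paper:

```python
from itertools import product

def min_distance(code):
    """Minimum Hamming distance over all pairs of distinct codewords."""
    return min(sum(a != b for a, b in zip(u, v))
               for u in code for v in code if u != v)

def parity_code(k):
    """Binary [k+1, k] single parity-check code: append the XOR of the message bits."""
    return [tuple(m) + (sum(m) % 2,) for m in product((0, 1), repeat=k)]

code = parity_code(3)   # a [4, 3] binary code with 8 codewords
d = min_distance(code)  # Singleton bound gives d <= 4 - 3 + 1 = 2; equality holds
```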
Maximization, learning, and economic behavior.
Erev, Ido; Roth, Alvin E
2014-07-22
The rationality assumption that underlies mainstream economic theory has proved to be a useful approximation, despite the fact that systematic violations to its predictions can be found. That is, the assumption of rational behavior is useful in understanding the ways in which many successful economic institutions function, although it is also true that actual human behavior falls systematically short of perfect rationality. We consider a possible explanation of this apparent inconsistency, suggesting that mechanisms that rest on the rationality assumption are likely to be successful when they create an environment in which the behavior they try to facilitate leads to the best payoff for all agents on average, and most of the time. Review of basic learning research suggests that, under these conditions, people quickly learn to maximize expected return. This review also shows that there are many situations in which experience does not increase maximization. In many cases, experience leads people to underweight rare events. In addition, the current paper suggests that it is convenient to distinguish between two behavioral approaches to improve economic analyses. The first, and more conventional approach among behavioral economists and psychologists interested in judgment and decision making, highlights violations of the rational model and proposes descriptive models that capture these violations. The second approach studies human learning to clarify the conditions under which people quickly learn to maximize expected return. The current review highlights one set of conditions of this type and shows how the understanding of these conditions can facilitate market design.
Majority rule has transition ratio 4 on Yule trees under a 2-state symmetric model.
Mossel, Elchanan; Steel, Mike
2014-11-01
Inferring the ancestral state at the root of a phylogenetic tree from states observed at the leaves is a problem arising in evolutionary biology. The simplest technique - majority rule - estimates the root state by the most frequently occurring state at the leaves. Alternative methods - such as maximum parsimony - explicitly take the tree structure into account. Since either method can outperform the other on particular trees, it is useful to consider the accuracy of the methods on trees generated under some evolutionary null model, such as a Yule pure-birth model. In this short note, we answer a recently posed question concerning the performance of majority rule on Yule trees under a symmetric 2-state Markovian substitution model of character state change. We show that majority rule is accurate precisely when the ratio of the birth (speciation) rate of the Yule process to the substitution rate exceeds the value 4. By contrast, maximum parsimony has been shown to be accurate only when this ratio is at least 6. Our proof relies on a second moment calculation, coupling, and a novel application of a reflection principle.
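The transition-ratio result can be probed by forward simulation: grow a Yule tree while a binary character evolves, then check whether the majority leaf state matches the root. The sketch below is our own simplified simulation, not the authors' proof machinery; ratios well above the threshold should recover the root state more reliably than ratios below it.

```python
import math
import random

def simulate_once(n_leaves, birth, sub, rng):
    """Grow a Yule tree forward in time while a binary character (root state 0)
    evolves under the symmetric 2-state model; return True when the strict
    majority state at the leaves recovers the root state."""
    states = [0]
    while len(states) < n_leaves:
        # waiting time to the next speciation among the current lineages
        t = rng.expovariate(birth * len(states))
        flip = 0.5 * (1.0 - math.exp(-2.0 * sub * t))  # symmetric flip probability
        states = [s ^ (rng.random() < flip) for s in states]
        states.append(states[rng.randrange(len(states))])  # one lineage splits
    return sum(states) * 2 < len(states)  # strict majority of the root state 0

def accuracy(ratio, trials=300, n_leaves=60, seed=1):
    """Fraction of trials in which majority rule recovers the root state,
    for a given birth-rate-to-substitution-rate ratio."""
    rng = random.Random(seed)
    return sum(simulate_once(n_leaves, ratio, 1.0, rng) for _ in range(trials)) / trials
```

With a fixed seed, a ratio of 8 (above both the majority-rule threshold 4 and the parsimony threshold 6) yields noticeably higher accuracy than a ratio of 1.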
Smith, James J; Cheruvelil, Kendra Spence; Auvenshine, Stacie
2013-01-01
Phylogenetic trees provide visual representations of ancestor-descendant relationships, a core concept of evolutionary theory. We introduced "tree thinking" into our introductory organismal biology course (freshman/sophomore majors) to help teach organismal diversity within an evolutionary framework. Our instructional strategy consisted of designing and implementing a set of experiences to help students learn to read, interpret, and manipulate phylogenetic trees, with a particular emphasis on using data to evaluate alternative phylogenetic hypotheses (trees). To assess the outcomes of these learning experiences, we designed and implemented a Phylogeny Assessment Tool (PhAT), an open-ended response instrument that asked students to: 1) map characters on phylogenetic trees; 2) apply an objective criterion to decide which of two trees (alternative hypotheses) is "better"; and 3) demonstrate understanding of phylogenetic trees as depictions of ancestor-descendant relationships. A pre-post test design was used with the PhAT to collect data from students in two consecutive Fall semesters. Students in both semesters made significant gains in their abilities to map characters onto phylogenetic trees and to choose between two alternative hypotheses of relationship (trees) by applying the principle of parsimony (Occam's razor). However, learning gains were much lower in the area of student interpretation of phylogenetic trees as representations of ancestor-descendant relationships.
Bahr, Patrick
2012-01-01
Tree automata are traditionally used to study properties of tree languages and tree transformations. In this paper, we consider tree automata as the basis for modular and extensible recursion schemes. We show, using well-known techniques, how to derive from standard tree automata highly modular r...
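The recursion-scheme reading of a bottom-up deterministic tree automaton can be sketched as a fold parameterized by an "algebra" of per-symbol functions (a Python stand-in of our own for the paper's setting; swapping the algebra while reusing the traversal is the modularity being claimed):

```python
def fold(tree, alg):
    """Run a bottom-up 'tree automaton': literals are raw values,
    operators are tuples ("op", child, child, ...)."""
    if not isinstance(tree, tuple):
        return alg["lit"](tree)
    op, *children = tree
    return alg[op](*(fold(c, alg) for c in children))

# Two independent 'automata' (algebras) reusing the same traversal:
eval_alg = {"lit": lambda v: v,
            "add": lambda a, b: a + b,
            "mul": lambda a, b: a * b}
depth_alg = {"lit": lambda v: 1,
             "add": lambda a, b: 1 + max(a, b),
             "mul": lambda a, b: 1 + max(a, b)}

expr = ("add", 2, ("mul", 3, 4))
```

Here `fold(expr, eval_alg)` evaluates the expression while `fold(expr, depth_alg)` measures its depth, with no change to the traversal code.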
David J. Nowak; Jeffrey T. Walton; James Baldwin; Jerry. Bond
2015-01-01
Information on street trees is critical for management of this important resource. Sampling of street tree populations provides an efficient means to obtain street tree population information. Long-term repeat measures of street tree samples supply additional information on street tree changes and can be used to report damages from catastrophic events. Analyses of...
Efficient FPT Algorithms for (Strict) Compatibility of Unrooted Phylogenetic Trees.
Baste, Julien; Paul, Christophe; Sau, Ignasi; Scornavacca, Celine
2017-02-28
In phylogenetics, a central problem is to infer the evolutionary relationships between a set of species X; these relationships are often depicted via a phylogenetic tree-a tree having its leaves labeled bijectively by elements of X and without degree-2 nodes-called the "species tree." One common approach for reconstructing a species tree consists in first constructing several phylogenetic trees from primary data (e.g., DNA sequences originating from some species in X), and then constructing a single phylogenetic tree maximizing the "concordance" with the input trees. The obtained tree is our estimation of the species tree and, when the input trees are defined on overlapping-but not identical-sets of labels, is called "supertree." In this paper, we focus on two problems that are central when combining phylogenetic trees into a supertree: the compatibility and the strict compatibility problems for unrooted phylogenetic trees. These problems are strongly related, respectively, to the notions of "containing as a minor" and "containing as a topological minor" in the graph community. Both problems are known to be fixed parameter tractable in the number of input trees k, by using their expressibility in monadic second-order logic and a reduction to graphs of bounded treewidth. Motivated by the fact that the dependency on k of these algorithms is prohibitively large, we give the first explicit dynamic programming algorithms for solving these problems, both running in time [Formula: see text], where n is the total size of the input.
Efficient exploration of the space of reconciled gene trees.
Szöllõsi, Gergely J; Rosikiewicz, Wojciech; Boussau, Bastien; Tannier, Eric; Daubin, Vincent
2013-11-01
Gene trees record the combination of gene-level events, such as duplication, transfer and loss (DTL), and species-level events, such as speciation and extinction. Gene tree-species tree reconciliation methods model these processes by drawing gene trees into the species tree using a series of gene- and species-level events. The reconstruction of gene trees based on sequence alone almost always involves choosing between statistically equivalent or weakly distinguishable relationships that could be much better resolved based on a putative species tree. To exploit this potential for accurate reconstruction of gene trees, the space of reconciled gene trees must be explored according to a joint model of sequence evolution and gene tree-species tree reconciliation. Here we present amalgamated likelihood estimation (ALE), a probabilistic approach to exhaustively explore all reconciled gene trees that can be amalgamated as a combination of clades observed in a sample of gene trees. We implement the ALE approach in the context of a reconciliation model (Szöllősi et al. 2013), which allows for the DTL of genes. We use ALE to efficiently approximate the sum of the joint likelihood over amalgamations and to find the reconciled gene tree that maximizes the joint likelihood among all such trees. We demonstrate using simulations that gene trees reconstructed using the joint likelihood are substantially more accurate than those reconstructed using sequence alone. Using realistic gene tree topologies, branch lengths, and alignment sizes, we demonstrate that ALE produces more accurate gene trees even if the model of sequence evolution is greatly simplified. Finally, examining 1099 gene families from 36 cyanobacterial genomes, we find that joint likelihood-based inference results in a striking reduction in apparent phylogenetic discord, with 24%, 59%, and 46% reductions in the mean numbers of duplications, transfers, and losses per gene family, respectively. The open source
Gribov ambiguities at the Landau -- maximal Abelian interpolating gauge
Pereira, A D
2014-01-01
In a previous work, we presented a new method to account for the Gribov ambiguities in non-Abelian gauge theories. The method consists in introducing an extra constraint which directly eliminates the infinitesimal Gribov copies without the usual geometric approach. This strategy allows us to treat gauges with a non-Hermitian Faddeev-Popov operator. In this work, we apply the method to a gauge which interpolates between the Landau and maximal Abelian gauges. The result is a local and power-counting renormalizable action, free of infinitesimal Gribov copies. Moreover, the interpolating tree-level gluon propagator is derived.
Asymptotics of robust utility maximization
Knispel, Thomas
2012-01-01
For a stochastic factor model we maximize the long-term growth rate of robust expected power utility with parameter $\\lambda\\in(0,1)$. Using duality methods the problem is reformulated as an infinite time horizon, risk-sensitive control problem. Our results characterize the optimal growth rate, an optimal long-term trading strategy and an asymptotic worst-case model in terms of an ergodic Bellman equation. With these results we propose a duality approach to a "robust large deviations" criterion for optimal long-term investment.
Maximal Congruences on Some Semigroups
Jintana Sanwong; R.P. Sullivan
2007-01-01
In 1976 Howie proved that a finite congruence-free semigroup is a simple group if it has at least three elements but no zero element. Infinite congruence-free semigroups are far more complicated to describe, but some have been constructed using semigroups of transformations (for example, by Howie in 1981 and by Marques in 1983). Here, for certain semigroups S of numbers and of transformations, we determine all congruences ρ on S such that S/ρ is congruence-free; that is, we describe all maximal congruences on such semigroups S.
Carlos Alberto Gonçalves
2013-09-01
This work presents the configurations and degrees of intensity with which significant antecedents affect Organizational Performance in two sectors, manufacturing and services. It measures the effects and combinations of Managerial Factors, External Environment, Internal Organizational Efforts, and Strategy Process on Organizational Performance. Data were collected through interviews and survey research, and statistical analysis was performed with Structural Equation Modeling and Qualitative Comparative Analysis (QCA). The construct Strategy Process proved the most important in explaining Organizational Performance relative to the others. It was also observed that the manufacturing and service sectors have different parsimonious configurations explaining Organizational Performance.
Demirhan Erdal
2015-01-01
This paper aims to investigate the effect of exchange-rate stability on real export volume in Turkey, using monthly data for the period February 2001 to January 2010. The Johansen multivariate cointegration method and the parsimonious error-correction model are applied to determine long-run and short-run relationships between real export volume and its determinants. In this study, the conditional variance of the GARCH(1,1) model is taken as a proxy for exchange-rate stability, and generalized impulse-response functions and variance-decomposition analyses are applied to analyze the dynamic effects of variables on real export volume. The empirical findings suggest that exchange-rate stability has a significant positive effect on real export volume, both in the short and the long run.
Knowledge discovery by accuracy maximization.
Cacciatore, Stefano; Luchinat, Claudio; Tenori, Leonardo
2014-04-01
Here we describe KODAMA (knowledge discovery by accuracy maximization), an unsupervised and semisupervised learning algorithm that performs feature extraction from noisy and high-dimensional data. Unlike other data mining methods, the peculiarity of KODAMA is that it is driven by an integrated procedure of cross-validation of the results. The discovery of a local manifold's topology is led by a classifier through a Monte Carlo procedure of maximization of cross-validated predictive accuracy. Briefly, our approach differs from previous methods in that it has an integrated procedure of validation of the results. In this way, the method ensures the highest robustness of the obtained solution. This robustness is demonstrated on experimental datasets of gene expression and metabolomics, where KODAMA compares favorably with other existing feature extraction methods. KODAMA is then applied to an astronomical dataset, revealing unexpected features. Interesting and not easily predictable features are also found in the analysis of the State of the Union speeches by American presidents: KODAMA reveals an abrupt linguistic transition sharply separating all post-Reagan from all pre-Reagan speeches. The transition occurs during Reagan's presidency and not from its beginning.
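The core loop of accuracy-driven feature extraction can be sketched in a few lines. The example below is a schematic of the general idea only (a Monte Carlo search over feature subsets scored by cross-validated predictive accuracy), not the published KODAMA procedure; the toy 1-NN classifier, leave-one-out scheme, and synthetic data are invented for illustration.

```python
import random

def cv_accuracy(X, y, feats):
    """Leave-one-out accuracy of a 1-NN classifier restricted to `feats`."""
    correct = 0
    for i in range(len(X)):
        best_label, best_d = None, float("inf")
        for j in range(len(X)):
            if i == j:
                continue
            d = sum((X[i][f] - X[j][f]) ** 2 for f in feats)
            if d < best_d:
                best_label, best_d = y[j], d
        correct += (best_label == y[i])
    return correct / len(X)

random.seed(0)
# Toy data: feature 0 separates the classes; features 1 and 2 are noise
y = [0, 0, 0, 0, 1, 1, 1, 1]
X = [[c + random.gauss(0, 0.3), random.gauss(0, 1), random.gauss(0, 1)]
     for c in y]

best_feats, best_acc = None, -1.0
for _ in range(50):                  # Monte Carlo proposals of feature subsets
    feats = [f for f in range(3) if random.random() < 0.5] or [0]
    acc = cv_accuracy(X, y, feats)
    if acc > best_acc:               # keep the subset with best CV accuracy
        best_feats, best_acc = feats, acc
print(best_feats, best_acc)
```

The point of the sketch is the selection criterion: candidate representations are accepted or rejected purely on cross-validated accuracy, which is what gives the approach its robustness.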
Inapproximability of maximal strip recovery
Jiang, Minghui
2009-01-01
In comparative genomics, the first step of sequence analysis is usually to decompose two or more genomes into syntenic blocks that are segments of homologous chromosomes. For the reliable recovery of syntenic blocks, noise and ambiguities in the genomic maps need to be removed first. Maximal Strip Recovery (MSR) is an optimization problem proposed by Zheng, Zhu, and Sankoff for reliably recovering syntenic blocks from genomic maps in the midst of noise and ambiguities. Given $d$ genomic maps as sequences of gene markers, the objective of MSR-$d$ is to find $d$ subsequences, one subsequence of each genomic map, such that the total length of syntenic blocks in these subsequences is maximized. For any constant $d \ge 2$, a polynomial-time 2d-approximation for MSR-$d$ was previously known. In this paper, we show that for any $d \ge 2$, MSR-$d$ is APX-hard, even for the most basic version of the problem in which all gene markers are distinct and appear in positive orientation in each genomic map. Moreover, we provi...
Maximal right smooth extension chains
Huang, Yun Bao
2010-01-01
If $w=u\alpha$ for $\alpha\in \Sigma=\{1,2\}$ and $u\in \Sigma^*$, then $w$ is said to be a \textit{simple right extension} of $u$, denoted by $u\prec w$. Let $k$ be a positive integer and let $P^k(\epsilon)$ denote the set of all $C^\infty$-words of height $k$. For $u_{1},\,u_{2},..., u_{m}\in P^{k}(\epsilon)$, if $u_{1}\prec u_{2}\prec ...\prec u_{m}$ and there is no element $v$ of $P^{k}(\epsilon)$ such that $v\prec u_{1}$ or $u_{m}\prec v$, then $u_{1}\prec u_{2}\prec...\prec u_{m}$ is said to be a \textit{maximal right smooth extension (MRSE) chain} of height $k$. In this paper, we show that \textit{MRSE} chains of height $k$ constitute a partition of the smooth words of height $k$, and we give a formula for the number of \textit{MRSE} chains of height $k$ for each positive integer $k$. Moreover, since there exist a minimal height $h_1$ and a maximal height $h_2$ of smooth words of length $n$ for each positive integer $n$, we find that \textit{MRSE} chains of heights $h_1-1$ and $h_2+1$ are good candidates t...
C. Hahn
2013-10-01
Eutrophication of surface waters due to diffuse phosphorus (P) losses continues to be a severe water quality problem worldwide, causing the loss of ecosystem functions of the respective water bodies. Phosphorus in runoff often originates from a small fraction of a catchment only. Targeting mitigation measures to these critical source areas (CSAs) is expected to be most efficient and cost-effective, but requires suitable tools. Here we investigated the capability of the parsimonious Rainfall-Runoff-Phosphorus (RRP) model to identify CSAs in grassland-dominated catchments based on readily available soil and topographic data. After simultaneous calibration on runoff data from four small hilly catchments on the Swiss Plateau, the model was validated on a different catchment in the same region without further calibration. The RRP model adequately simulated the discharge and dissolved reactive P (DRP) export from the validation catchment. Sensitivity analysis showed that the model predictions were robust with respect to the classification of soils into "poorly drained" and "well drained", based on the available soil map. Comparing spatial hydrological model predictions with field data from the validation catchment provided further evidence that the assumptions underlying the model are valid and that the model adequately accounts for the dominant P export processes in the target region. Thus, the parsimonious RRP model is a valuable tool that can be used to determine CSAs. Despite the considerable predictive uncertainty regarding the spatial extent of CSAs, the RRP model can provide guidance for the implementation of mitigation measures. The model helps to identify those parts of a catchment where high DRP losses are expected or can be excluded with high confidence. Legacy P was predicted to be the dominant source of DRP losses and thus, in combination with hydrologically active areas, to pose a high risk to water quality.
Hydrologic behaviour of the Lake of Monate (Italy): a parsimonious modelling strategy
Tomesani, Giulia; Soligno, Irene; Castellarin, Attilio; Baratti, Emanuele; Cervi, Federico; Montanari, Alberto
2016-04-01
the second year as validation data and we compared two calibration strategies which maximize two different objective functions: (1) Nash-Sutcliffe efficiency of simulated daily water-level fluctuations, NSE, and (2) linear correlation coefficient between daily series of simulated groundwater inflow and observed water table elevation multiplied by NSE. The validation exercise seems to point out the value of incorporating groundwater measurements for improving the reliability and robustness of the conceptual model.
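Objective function (1), the Nash-Sutcliffe efficiency, has a standard textbook definition that can be computed directly; the snippet below is that generic definition with invented example series, not code from the study:

```python
def nse(sim, obs):
    """Nash-Sutcliffe efficiency:
    NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2).
    NSE = 1 is a perfect fit; NSE = 0 matches the mean-of-observations baseline."""
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

obs = [1.0, 2.0, 3.0, 4.0]
print(nse(obs, obs))            # perfect simulation -> 1.0
print(nse([2.5] * 4, obs))      # constant mean-of-obs simulation -> 0.0
```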
Survival associated pathway identification with group Lp penalized global AUC maximization
Liu Zhenqiu
2010-08-01
It has been demonstrated that genes in a cell do not act independently. They interact with one another to complete certain biological processes or to implement certain molecular functions. How to incorporate biological pathways or functional groups into the model and identify survival-associated gene pathways is still a challenging problem. In this paper, we propose a novel iterative gradient-based method for survival analysis with group Lp penalized global AUC summary maximization. Unlike LASSO, Lp (p 1. We first extend Lp for individual gene identification to a group Lp penalty for pathway selection, and then develop a novel iterative gradient algorithm for penalized global AUC summary maximization (IGGAUCS). This method incorporates genetic pathways into global AUC summary maximization and identifies survival-associated pathways instead of individual genes. The tuning parameters are determined using 10-fold cross-validation with training data only. The prediction performance is evaluated using test data. We apply the proposed method to survival outcome analysis with gene expression profiles and identify multiple pathways simultaneously. Experimental results with simulation and gene expression data demonstrate that the proposed procedures can be used for identifying important biological pathways that are related to survival phenotype and for building a parsimonious model for predicting the survival times.
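A toy version of the underlying objective may clarify the approach. The sketch below is not the IGGAUCS algorithm (no group penalty, and the data, step size, and smoothing are invented); it only illustrates the core step of maximizing a sigmoid-smoothed pairwise AUC for a linear risk score by gradient ascent:

```python
import math
import random

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def score(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def smoothed_auc(w, pos, neg):
    """Mean of sigmoid(w.p - w.n) over all positive/negative pairs:
    a differentiable surrogate for the Wilcoxon/AUC statistic."""
    return sum(sigmoid(score(w, p) - score(w, n))
               for p in pos for n in neg) / (len(pos) * len(neg))

def grad_step(w, pos, neg, lr=0.5):
    """One gradient-ascent step on the smoothed AUC."""
    g = [0.0] * len(w)
    for p in pos:
        for n in neg:
            s = sigmoid(score(w, p) - score(w, n))
            coef = s * (1.0 - s)                 # derivative of the sigmoid
            for k in range(len(w)):
                g[k] += coef * (p[k] - n[k])
    m = len(pos) * len(neg)
    return [wk + lr * gk / m for wk, gk in zip(w, g)]

random.seed(2)
pos = [[1.0 + random.gauss(0, 0.5), random.gauss(0, 1)] for _ in range(20)]
neg = [[-1.0 + random.gauss(0, 0.5), random.gauss(0, 1)] for _ in range(20)]

w = [0.1, 0.1]
before = smoothed_auc(w, pos, neg)
for _ in range(100):
    w = grad_step(w, pos, neg)
after = smoothed_auc(w, pos, neg)
print(round(before, 3), round(after, 3))
```

In the paper's setting the weights would additionally be shrunk pathway-wise by the group Lp penalty, so that whole pathways, rather than individual genes, enter or leave the model.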
Molecular phylogenetic trees - On the validity of the Goodman-Moore augmentation algorithm
Holmquist, R.
1979-01-01
A response is made to the reply of Nei and Tateno (1979) to the letter of Holmquist (1978) supporting the validity of the augmentation algorithm of Moore (1977) in reconstructions of nucleotide substitutions by means of the maximum parsimony principle. It is argued that the overestimation of the augmented numbers of nucleotide substitutions (augmented distances) found by Tateno and Nei (1978) is due to an unrepresentative data sample and that it is only necessary that evolution be stochastically uniform in different regions of the phylogenetic network for the augmentation method to be useful. The importance of the average value of the true distance over all links is explained, and the relative variances of the true and augmented distances are calculated to be almost identical. The effects of topological changes in the phylogenetic tree on the augmented distance and the question of the correctness of ancestral sequences inferred by the method of parsimony are also clarified.
Modeling Dynamic Height and Crown Growth in Trees
Franklin, O.; Fransson, P.; Brännström, Å.
2015-12-01
Previously we have shown how principles based on productivity maximization (e.g. maximization of net primary production, net growth maximization, or functional balance) can explain allocation responses to resources, such as nutrients and light (Franklin et al., 2012). However, the success of these approaches depends on how well they align with the ultimate driver of plant behavior: fitness, or lifetime reproductive success. Consequently, they may not fully explain how allocation changes during the life cycle of trees, where not only growth but also survival and reproduction are important. In addition, maximizing instantaneous productivity does not account for the path dependence of tree growth. For example, maximizing productivity during early growth in shade may delay emergence in the forest canopy and reduce lifetime fitness compared to a more height-oriented strategy. Here we present an approach to model how growth of stem diameter and leaf area in relation to stem height dynamically responds to light conditions in a way that maximizes lifetime fitness (rather than instantaneous growth). The model is able to predict growth of trees growing in different types of forests, including trees emerging under a closed canopy and seedlings planted in a clear-cut area. It can also predict the response to sudden changes in the light environment due to disturbances or harvesting. We envisage two main applications of the model: (i) modeling effects of forest management, including thinning and planting; and (ii) elucidating height growth strategies in trees and how they can be represented in vegetation models. Reference: Franklin O, Johansson J, Dewar RC, Dieckmann U, McMurtrie RE, Brännström Å, Dybzinski R. 2012. Modeling carbon allocation in trees: a search for principles. Tree Physiology 32(6): 648-666.
The maximal D = 4 supergravities
Wit, Bernard de [Institute for Theoretical Physics and Spinoza Institute, Utrecht University, Postbus 80.195, NL-3508 TD Utrecht (Netherlands); Samtleben, Henning [Laboratoire de Physique, ENS Lyon, 46 allee d' Italie, F-69364 Lyon CEDEX 07 (France); Trigiante, Mario [Dept. of Physics, Politecnico di Torino, Corso Duca degli Abruzzi 24, I-10129 Turin (Italy)
2007-06-15
All maximal supergravities in four space-time dimensions are presented. The ungauged Lagrangians can be encoded in an E_{7(7)}-Sp(56; R)/GL(28) matrix associated with the freedom of performing electric/magnetic duality transformations. The gauging is defined in terms of an embedding tensor θ which encodes the subgroup of E_{7(7)} that is realized as a local invariance. This embedding tensor may imply the presence of magnetic charges which require corresponding dual gauge fields. The latter can be incorporated by using a recently proposed formulation that involves tensor gauge fields in the adjoint representation of E_{7(7)}. In this formulation the results take a universal form irrespective of the electric/magnetic duality basis. We present the general class of supersymmetric and gauge invariant Lagrangians and discuss a number of applications.
Maximizing profit using recommender systems
Das, Aparna; Ricketts, Daniel
2009-01-01
Traditional recommendation systems make recommendations based solely on the customer's past purchases, product ratings, and demographic data, without considering the profitability of the items being recommended. In this work we study the question of how a vendor can directly incorporate the profitability of items into its recommender so as to maximize its expected profit while still providing accurate recommendations. Our approach uses the output of any traditional recommender system and adjusts it according to item profitabilities. Our approach is parameterized so the vendor can control how much the profit-incorporating recommendation can deviate from the traditional recommendation. We study our approach under two settings and show that it achieves approximately 22% more profit than traditional recommendations.
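One simple way to realize such a scheme is a parameterized blend of the traditional relevance score with item profit. The sketch below is our own illustration of the idea, not the authors' method or parameterization: `alpha` bounds how much profit may pull the ranking away from the purely relevance-based one, and all item names and numbers are invented.

```python
def profit_aware_rank(scores, profits, alpha=0.2):
    """scores, profits: dicts item -> value; alpha in [0, 1] weights profit.
    Returns items ranked by the blended, normalized score."""
    max_s = max(scores.values()) or 1.0     # guard against all-zero inputs
    max_p = max(profits.values()) or 1.0
    blended = {i: (1 - alpha) * scores[i] / max_s + alpha * profits[i] / max_p
               for i in scores}
    return sorted(blended, key=blended.get, reverse=True)

scores  = {"book": 0.9, "lamp": 0.8, "cable": 0.4}   # traditional recommender output
profits = {"book": 1.0, "lamp": 8.0, "cable": 2.0}   # vendor profit per item

print(profit_aware_rank(scores, profits, alpha=0.0))  # pure relevance order
print(profit_aware_rank(scores, profits, alpha=0.5))  # profit pulls lamp ahead
```

At `alpha = 0` the vendor recovers the traditional recommendation exactly; increasing `alpha` trades recommendation accuracy for expected profit.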
The maximal D=5 supergravities
de Wit, Bernard; Trigiante, M; Wit, Bernard de; Samtleben, Henning; Trigiante, Mario
2007-01-01
The general Lagrangian for maximal supergravity in five spacetime dimensions is presented with vector potentials in the \\bar{27} and tensor fields in the 27 representation of E_6. This novel tensor-vector system is subject to an intricate set of gauge transformations, describing 3(27-t) massless helicity degrees of freedom for the vector fields and 3t massive spin degrees of freedom for the tensor fields, where the (even) value of t depends on the gauging. The kinetic term of the tensor fields is accompanied by a unique Chern-Simons coupling which involves both vector and tensor fields. The Lagrangians are completely encoded in terms of the embedding tensor which defines the E_6 subgroup that is gauged by the vectors. The embedding tensor is subject to two constraints which ensure the consistency of the combined vector-tensor gauge transformations and the supersymmetry of the full Lagrangian. This new formulation encompasses all possible gaugings.
Constraint Propagation as Information Maximization
Abdallah, A Nait
2012-01-01
Dana Scott used the partial order among partial functions for his mathematical model of recursively defined functions. He interpreted the partial order as one of information content. In this paper we elaborate on Scott's suggestion of regarding computation as a process of information maximization by applying it to the solution of constraint satisfaction problems. Here the method of constraint propagation can be interpreted as decreasing uncertainty about the solution -- that is, as gain in information about the solution. As illustrative example we choose numerical constraint satisfaction problems to be solved by interval constraints. To facilitate this approach to constraint solving we formulate constraint satisfaction problems as formulas in predicate logic. This necessitates extending the usual semantics for predicate logic so that meaning is assigned not only to sentences but also to formulas with free variables.
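The "constraint propagation as information gain" view can be illustrated with a tiny interval example. This is a hypothetical sketch in the spirit of interval constraints, not the paper's formalism: each contraction shrinks the intervals, i.e. strictly decreases uncertainty about the solution.

```python
def contract_sum(x, y, z):
    """Narrow the intervals (lo, hi) under the constraint x + y = z."""
    z = (max(z[0], x[0] + y[0]), min(z[1], x[1] + y[1]))
    x = (max(x[0], z[0] - y[1]), min(x[1], z[1] - y[0]))
    y = (max(y[0], z[0] - x[1]), min(y[1], z[1] - x[0]))
    return x, y, z

# Start knowing only x, y in [0, 10] and z in [3, 5]
x, y, z = (0.0, 10.0), (0.0, 10.0), (3.0, 5.0)
for _ in range(3):                 # iterate the contraction to a fixed point
    x, y, z = contract_sum(x, y, z)
print(x, y, z)  # x and y narrow to (0.0, 5.0): information has been gained
```

In the information-ordering reading, each contracted triple of intervals sits strictly above the previous one in information content, and the fixed point is the maximal consistent information obtainable by this propagator.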
Anonymous
2000-01-01
Healthy trees are important to us all. Trees provide shade, beauty, and homes for wildlife. Trees give us products like paper and wood. Trees can give us all this only if they are healthy. They must be well cared for to remain healthy.
Classification and regression trees
Breiman, Leo; Olshen, Richard A; Stone, Charles J
1984-01-01
The methodology used to construct tree structured rules is the focus of this monograph. Unlike many other statistical procedures, which moved from pencil and paper to calculators, this text's use of trees was unthinkable before computers. Both the practical and theoretical sides have been developed in the authors' study of tree methods. Classification and Regression Trees reflects these two sides, covering the use of trees as a data analysis method, and in a more mathematical framework, proving some of their fundamental properties.
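The recursive split search at the heart of tree construction is easy to sketch. The toy code below (invented data, not the book's software) picks the threshold on a single feature that minimizes the weighted Gini impurity of the two resulting child nodes, which is the basic step CART repeats to grow a classification tree:

```python
def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def best_split(xs, ys):
    """Return (threshold, weighted_gini) of the best split 'x <= t'."""
    best = (None, float("inf"))
    for t in sorted(set(xs))[:-1]:          # candidate thresholds
        left  = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(ys)
        if score < best[1]:
            best = (t, score)
    return best

xs = [1.0, 2.0, 3.0, 10.0, 11.0, 12.0]
ys = ["a", "a", "a", "b", "b", "b"]
print(best_split(xs, ys))   # splits cleanly at t = 3.0 with impurity 0.0
```

Applying this search recursively to each child node, then pruning, yields the tree-structured rules the monograph studies.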
Which trees should be removed in thinning treatments?
Timo Pukkala; Erkki Lähde; Olavi Laiho
2016-01-01
Background: In economically optimal management, trees that are removed in a thinning treatment should be selected on the basis of their value, relative value increment, and the effect of removal on the growth of remaining trees. Large valuable trees with decreased value increment should be removed, especially when they overtop smaller trees. Methods: This study optimized the tree selection rule in the thinning treatments of continuous cover management when the aim is to maximize the profitability of forest management. The weights of three criteria (stem value, relative value increment, and effect of removal on the competition of remaining trees) were optimized together with thinning intervals. Results and conclusions: The results confirmed the hypothesis that optimal thinning involves removing predominantly large trees. Increasing stumpage value, decreasing relative value increment, and increasing competitive influence all increased the likelihood that removal is the optimal decision. However, if the spatial distribution of trees is irregular, it is optimal to leave large trees in sparse places and remove somewhat smaller trees from dense places. The benefit of optimal thinning, as compared to diameter-limit cutting, is not usually large in pure one-species stands. In contrast, removing the smallest trees from the stand may lead to significant (30-40%) reductions in the net present value of harvest incomes.
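A selection rule of this kind can be written as a weighted score per tree. The sketch below is a hypothetical illustration in the spirit of the study: the weights and tree attributes are invented, not the optimized values from the paper, and a higher score means higher priority for removal.

```python
def removal_priority(tree, w_value=0.5, w_increment=0.3, w_competition=0.2):
    """Score a tree for removal: high stem value and high competitive
    influence raise the score; high relative value increment lowers it.
    Weights are illustrative placeholders, not the paper's optimized values."""
    return (w_value * tree["value"]
            - w_increment * tree["rel_value_increment"]
            + w_competition * tree["competition"])

stand = [
    {"id": 1, "value": 120.0, "rel_value_increment": 0.01, "competition": 0.9},
    {"id": 2, "value": 40.0,  "rel_value_increment": 0.08, "competition": 0.3},
    {"id": 3, "value": 15.0,  "rel_value_increment": 0.12, "competition": 0.1},
]

to_remove = max(stand, key=removal_priority)
print(to_remove["id"])  # 1: the large, slow-growing, dominant tree goes first
```

In the study the three weights are themselves decision variables, optimized jointly with the thinning intervals to maximize net present value.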
Beeping a Maximal Independent Set
Afek, Yehuda; Alon, Noga; Bar-Joseph, Ziv; Cornejo, Alejandro; Haeupler, Bernhard; Kuhn, Fabian
2012-01-01
We consider the problem of computing a maximal independent set (MIS) in an extremely harsh broadcast model that relies only on carrier sensing. The model consists of an anonymous broadcast network in which nodes have no knowledge about the topology of the network or even an upper bound on its size. Furthermore, it is assumed that an adversary chooses at which time slot each node wakes up. At each time slot a node can either beep, that is, emit a signal, or be silent. At a particular time slot, beeping nodes receive no feedback, while silent nodes can only differentiate between none of its neighbors beeping, or at least one of its neighbors beeping. We start by proving a lower bound that shows that in this model, it is not possible to locally converge to an MIS in sub-polynomial time. We then study four different relaxations of the model which allow us to circumvent the lower bound and find an MIS in polylogarithmic time. First, we show that if a polynomial upper bound on the network size is known, it is possi...
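The beep-based election can be illustrated with a round-based simulation. The code below is a simplified Luby-style toy, not the paper's algorithm (it ignores adversarial wake-up times and assumes synchronized rounds on a known graph): undecided nodes beep with probability 1/2, a node that beeps while hearing silence joins the MIS, and its neighbours withdraw.

```python
import random

def beeping_mis(adj, rounds=200, seed=1):
    """adj: node -> list of neighbours. Returns an independent set that is
    maximal whenever all nodes decide within the round budget."""
    rng = random.Random(seed)
    undecided, mis = set(adj), set()
    while undecided and rounds > 0:
        rounds -= 1
        beeps = {v for v in undecided if rng.random() < 0.5}
        for v in list(beeps):
            if not (beeps & set(adj[v])):    # beeped and heard only silence
                mis.add(v)
                undecided.discard(v)
                undecided -= set(adj[v])     # beeping neighbours withdraw
    return mis

# 5-cycle: any maximal independent set has exactly two non-adjacent members
adj = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
mis = beeping_mis(adj)
# Independence is guaranteed by construction
assert all(u not in adj[v] for v in mis for u in mis if u != v)
print(sorted(mis))
```

The lower bound in the paper says precisely that, without extra assumptions (a size bound, synchronized clocks, or sender-side collision detection), no such scheme can locally converge in sub-polynomial time.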
Maximal switchability of centralized networks
Vakulenko, Sergei; Morozov, Ivan; Radulescu, Ovidiu
2016-08-01
We consider continuous-time Hopfield-like recurrent networks as dynamical models for gene regulation and neural networks. We are interested in networks that contain n high-degree nodes preferably connected to a large number N_s of weakly connected satellites, a property that we call n/N_s-centrality. If the hub dynamics is slow, we obtain that the large-time network dynamics is completely defined by the hub dynamics. Moreover, such networks are maximally flexible and switchable, in the sense that they can switch from a globally attractive rest state to any structurally stable dynamics when the response time of a special controller hub is changed. In particular, we show that a decrease of the controller hub response time can lead to a sharp variation in the network attractor structure: we can obtain a set of new local attractors, whose number can increase exponentially with N, the total number of nodes of the network. These new attractors can be periodic or even chaotic. We provide an algorithm which allows us to design networks with the desired switching properties, or to learn them from time series, by adjusting the interactions between hubs and satellites. Such switchable networks could be used as models for context-dependent adaptation in functional genetics or as models for cognitive functions in neuroscience.
A Maximally Supersymmetric Kondo Model
Harrison, Sarah; Kachru, Shamit; Torroba, Gonzalo; /Stanford U., Phys. Dept. /SLAC
2012-02-17
We study the maximally supersymmetric Kondo model obtained by adding a fermionic impurity to N = 4 supersymmetric Yang-Mills theory. While the original Kondo problem describes a defect interacting with a free Fermi liquid of itinerant electrons, here the ambient theory is an interacting CFT, and this introduces qualitatively new features into the system. The model arises in string theory by considering the intersection of a stack of M D5-branes with a stack of N D3-branes, at a point in the D3 worldvolume. We analyze the theory holographically, and propose a dictionary between the Kondo problem and antisymmetric Wilson loops in N = 4 SYM. We perform an explicit calculation of the D5 fluctuations in the D3 geometry and determine the spectrum of defect operators. This establishes the stability of the Kondo fixed point together with its basic thermodynamic properties. Known supergravity solutions for Wilson loops allow us to go beyond the probe approximation: the D5s disappear and are replaced by three-form flux piercing a new topologically non-trivial S3 in the corrected geometry. This describes the Kondo model in terms of a geometric transition. A dual matrix model reflects the basic properties of the corrected gravity solution in its eigenvalue distribution.
Ancestral state reconstruction for Dendroctonus bark beetles: evolution of a tree killer.
Reeve, John D; Anderson, Frank E; Kelley, Scott T
2012-06-01
While most bark beetles attack only dead or weakened trees, many species in the genus Dendroctonus have the ability to kill healthy conifers through mass attack of the host tree, and can exhibit devastating outbreaks. Other species in this group are able to successfully colonize trees in small numbers without killing the host. We reconstruct the evolution of these ecological and life history traits, first classifying the extant Dendroctonus species by attack type (mass or few), outbreaks (yes or no), host genus (Pinus and others), location of attacks on the tree (bole, base, etc.), whether the host is killed (yes or no), and if the larvae are gregarious or have individual galleries (yes or no). We then estimated a molecular phylogeny for a data set of cytochrome oxidase I sequences sampled from nearly all Dendroctonus species, and used this phylogeny to reconstruct the ancestral state at various nodes on the tree, employing maximum parsimony, maximum likelihood, and Bayesian methods. Our reconstructions suggest that extant Dendroctonus species likely evolved from an ancestor that killed host pines through mass attack of the bole, had individual larvae, and exhibited outbreaks. The ability to colonize a host tree in small numbers (as well as gregarious larvae and attacks at the tree base) apparently evolved later, possibly as two separate events in different clades. It is likely that tree mortality and outbreaks have been continuing features of the interaction between conifers and Dendroctonus bark beetles.
Unifying constructal theory of tree roots, canopies and forests.
Bejan, A; Lorente, S; Lee, J
2008-10-07
Here, we show that the most basic features of tree and forest architecture can be put on a unifying theoretical basis, which is provided by the constructal law. Key is the integrative approach to understanding the emergence of "designedness" in nature. Trees and forests are viewed as integral components (along with dendritic river basins, aerodynamic raindrops, and atmospheric and oceanic circulation) of the much greater global architecture that facilitates the cyclical flow of water in nature (Fig. 1) and the flow of stresses between wind and ground. Theoretical features derived in this paper are: the tapered shape of the root and longitudinally uniform diameter and density of internal flow tubes, the near-conical shape of tree trunks and branches, the proportionality between tree length and wood mass raised to 1/3, the proportionality between total water mass flow rate and tree length, the proportionality between the tree flow conductance and the tree length scale raised to a power between 1 and 2, the existence of forest floor plans that maximize ground-air flow access, the proportionality between the length scale of the tree and its rank raised to a power between -1 and -1/2, and the inverse proportionality between the tree size and number of trees of the same size. This paper further shows that there exists an optimal ratio of leaf volume divided by total tree volume, trees of the same size must have a larger wood volume fraction in windy climates, and larger trees must pack more wood per unit of tree volume than smaller trees. Comparisons with empirical correlations and formulas based on ad hoc models are provided. This theory predicts classical notions such as Leonardo's rule, Huber's rule, Zipf's distribution, and the Fibonacci sequence. The difference between modeling (description) and theory (prediction) is brought into evidence.
Gomez-Velez, J. D.; Harvey, J. W.
2014-12-01
Hyporheic exchange has been hypothesized to have basin-scale consequences; however, predictions throughout river networks are limited by available geomorphic and hydrogeologic data as well as models that can analyze and aggregate hyporheic exchange flows across large spatial scales. We developed a parsimonious but physically-based model of hyporheic flow for application in large river basins: Networks with EXchange and Subsurface Storage (NEXSS). At the core of NEXSS is a characterization of the channel geometry, geomorphic features, and related hydraulic drivers based on scaling equations from the literature and readily accessible information such as river discharge, bankfull width, median grain size, sinuosity, channel slope, and regional groundwater gradients. Multi-scale hyporheic flow is computed based on combining simple but powerful analytical and numerical expressions that have been previously published. We applied NEXSS across a broad range of geomorphic diversity in river reaches and synthetic river networks. NEXSS demonstrates that vertical exchange beneath submerged bedforms dominates hyporheic fluxes and turnover rates along the river corridor. Moreover, the hyporheic zone's potential for biogeochemical transformations is comparable across stream orders, but the abundance of lower-order channels results in a considerably higher cumulative effect for low-order streams. Thus, vertical exchange beneath submerged bedforms has more potential for biogeochemical transformations than lateral exchange beneath banks, although lateral exchange through meanders may be important in large rivers. These results have implications for predicting outcomes of river and basin management practices.
Reed, David L; Carpenter, Kent E; deGravelle, Martin J
2002-06-01
The Carangidae represent a diverse family of marine fishes that include both ecologically and economically important species. Currently, there are four recognized tribes within the family, but phylogenetic relationships among them based on morphology are not resolved. In addition, the tribe Carangini contains species with a variety of body forms and no study has tried to interpret the evolution of this diversity. We used DNA sequences from the mitochondrial cytochrome b gene to reconstruct the phylogenetic history of 50 species from each of the four tribes of Carangidae and four carangoid outgroup taxa. We found support for the monophyly of three tribes within the Carangidae (Carangini, Naucratini, and Trachinotini); however, monophyly of the fourth tribe (Scomberoidini) remains questionable. A sister group relationship between the Carangini and the Naucratini is well supported. This clade is apparently sister to the Trachinotini plus Scomberoidini but there is uncertain support for this relationship. Additionally, we examined the evolution of body form within the tribe Carangini and determined that each of the predominant clades has a distinct evolutionary trend in body form. We tested three methods of phylogenetic inference: parsimony, maximum likelihood, and Bayesian inference. Whereas the three analyses produced largely congruent hypotheses, they differed in several important relationships. Maximum-likelihood and Bayesian methods produced hypotheses with higher support values for deep branches. The Bayesian analysis was computationally much faster and yet produced phylogenetic hypotheses that were very similar to those of the maximum-likelihood analysis. (c) 2002 Elsevier Science (USA).
Butler, Ricky W.; Boerschlein, David P.
1993-01-01
The Fault-Tree Compiler (FTC) program is a software tool used to calculate the probability of the top event in a fault tree. Gates of five different types are allowed in a fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N. The high-level input language is easy to understand and use. In addition, the program supports a hierarchical fault-tree definition feature, which simplifies the tree-description process and reduces execution time. A set of programs was created forming the basis for a reliability-analysis workstation: SURE, ASSIST, PAWS/STEM, and the FTC fault-tree tool (LAR-14586). Written in PASCAL, ANSI-compliant C, and FORTRAN 77. Other versions available upon request.
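The gate semantics listed above (AND, OR, EXCLUSIVE OR, INVERT, M OF N) can be sketched in a few lines, assuming statistically independent basic events. This is a toy evaluator, not the FTC program itself; the tuple encoding and function name are invented for illustration.

```python
from itertools import combinations

def gate_prob(node):
    """Probability of the event at `node`, assuming independent basic
    events. A basic event is a float; a gate is a tuple (TYPE, ...)."""
    if isinstance(node, float):
        return node
    kind = node[0]
    if kind == "M_OF_N":                      # ("M_OF_N", m, [inputs])
        _, m, inputs = node
        p = [gate_prob(c) for c in inputs]
        n = len(p)
        total = 0.0
        for k in range(m, n + 1):             # at least m inputs occur
            for chosen in combinations(range(n), k):
                term = 1.0
                for i in range(n):
                    term *= p[i] if i in chosen else 1.0 - p[i]
                total += term
        return total
    ps = [gate_prob(c) for c in node[1:]]
    if kind == "AND":                         # all inputs occur
        prod = 1.0
        for q in ps:
            prod *= q
        return prod
    if kind == "OR":                          # at least one input occurs
        none = 1.0
        for q in ps:
            none *= 1.0 - q
        return 1.0 - none
    if kind == "XOR":                         # exactly one of two inputs
        a, b = ps
        return a * (1.0 - b) + b * (1.0 - a)
    if kind == "INVERT":
        return 1.0 - ps[0]
    raise ValueError(f"unknown gate type {kind!r}")

# Top event: (A AND B) OR a 2-of-3 vote among three 0.1-probability events
top = ("OR", ("AND", 0.1, 0.2),
             ("M_OF_N", 2, [0.1, 0.1, 0.1]))
```

The hierarchical definition feature mentioned in the abstract corresponds here to simply nesting tuples.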
Categorizing Ideas about Trees: A Tree of Trees
Fisler, Marie; Lecointre, Guillaume
2013-01-01
The aim of this study is to explore whether matrices and MP trees used to produce systematic categories of organisms could be useful to produce categories of ideas in history of science. We study the history of the use of trees in systematics to represent the diversity of life from 1766 to 1991. We apply to those ideas a method inspired by coding homologous parts of organisms. We discretize conceptual parts of ideas, writings and drawings about trees contained in 41 main writings; we detect shared parts among authors and code them into a 91-character matrix and use a tree representation to show who shares what with whom. In other words, we propose a hierarchical representation of the shared ideas about trees among authors: this produces a “tree of trees.” Then, we categorize schools of tree-representations. Classical schools like “cladists” and “pheneticists” are recovered but others are not: “gradists” are separated into two blocks, one of them being called here “grade theoreticians.” We propose new interesting categories like the “buffonian school,” the “metaphoricians,” and those using “strictly genealogical classifications.” We consider that networks are not useful to represent shared ideas at the present step of the study. A cladogram is made to show who shares what with whom, but also the heterobathmy and homoplasy of characters. The present cladogram is not modelling processes of transmission of ideas about trees, and here it is mostly used to test for proximity of ideas of the same age and for categorization. PMID:23950877
Maximal inequalities for demimartingales and their applications
WANG XueJun; HU ShuHe
2009-01-01
In this paper, we establish some maximal inequalities for demimartingales which generalize and improve the results of Christofides. The maximal inequalities for demimartingales are used as key inequalities to establish other results including Doob's type maximal inequality for demimartingales, strong laws of large numbers and growth rate for demimartingales and associated random variables. Finally, we give an equivalent condition of uniform integrability for demisubmartingales.
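For orientation, the Doob-type bound referred to in this abstract extends the classical submartingale maximal inequality. A standard statement for demimartingales, usually attributed to Newman and Wright (quoted here from memory as a reference point, not necessarily the exact form established in the paper), reads:

```latex
% Chow-type maximal inequality for a demimartingale (S_n)_{n \ge 1}:
\lambda \, P\!\Big(\max_{1 \le k \le n} S_k \ge \lambda\Big)
  \;\le\; E\Big[S_n \,\mathbf{1}\big\{\max_{1 \le k \le n} S_k \ge \lambda\big\}\Big]
  \;\le\; E\,S_n^{+}, \qquad \lambda > 0 .
```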
A new decision tree learning algorithm
FANG Yong; QI Fei-hu
2005-01-01
In order to improve the generalization ability of binary decision trees, a new learning algorithm, the MMDT algorithm, is presented. Based on statistical learning theory the generalization performance of binary decision trees is analyzed, and the assessment rule is proposed. Under the direction of the assessment rule, the MMDT algorithm is implemented. The algorithm maps training examples from an original space to a high-dimensional feature space, and constructs a decision tree in it. In the feature space, a new decision node splitting criterion, the max-min rule, is used, and the margin of each decision node is maximized using a support vector machine, to improve the generalization performance. Experimental results show that the new learning algorithm is much superior to others such as C4.5 and OC1.
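The max-min idea, reduced to its simplest setting, can be illustrated on one-dimensional data. This sketch is not the MMDT algorithm (which works in a kernel-induced feature space via support vector machines); it is just a toy max-margin threshold chooser, with invented names.

```python
def max_margin_split(xs, ys):
    """Among thresholds that separate a 1-D two-class sample perfectly,
    return (threshold, margin) with the largest margin, i.e. the
    largest distance from the threshold to the nearest point on either
    side. Returns None if no perfect separating threshold exists."""
    pts = sorted(zip(xs, ys))
    best = None
    for i in range(len(pts) - 1):
        (x0, _), (x1, _) = pts[i], pts[i + 1]
        if x0 == x1:
            continue
        t = (x0 + x1) / 2.0                    # midpoint candidate
        left = {y for x, y in pts if x < t}
        right = {y for x, y in pts if x > t}
        if len(left) == 1 and len(right) == 1 and left != right:
            margin = min(t - x0, x1 - t)       # distance to nearest point
            if best is None or margin > best[1]:
                best = (t, margin)
    return best
```

For separable data the best candidate is the midpoint of the widest gap between the two classes, which is exactly the max-min behaviour the abstract describes at each decision node.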
Task-oriented maximally entangled states
Agrawal, Pankaj; Pradhan, B, E-mail: agrawal@iopb.res.i, E-mail: bpradhan@iopb.res.i [Institute of Physics, Sachivalaya Marg, Bhubaneswar, Orissa 751 005 (India)
2010-06-11
We introduce the notion of a task-oriented maximally entangled state (TMES). This notion depends on the task for which a quantum state is used as the resource. TMESs are the states that can be used to carry out the task maximally. This concept may be more useful than that of a general maximally entangled state in the case of a multipartite system. We illustrate this idea by giving an operational definition of maximally entangled states on the basis of communication tasks of teleportation and superdense coding. We also give examples and a procedure to obtain such TMESs for n-qubit systems.
Inflation in maximal gauged supergravities
Kodama, Hideo [Theory Center, KEK,Tsukuba 305-0801 (Japan); Department of Particles and Nuclear Physics,The Graduate University for Advanced Studies,Tsukuba 305-0801 (Japan); Nozawa, Masato [Dipartimento di Fisica, Università di Milano, and INFN, Sezione di Milano,Via Celoria 16, 20133 Milano (Italy)
2015-05-18
We discuss the dynamics of multiple scalar fields and the possibility of realistic inflation in the maximal gauged supergravity. In this paper, we address this problem in the framework of recently discovered 1-parameter deformation of SO(4,4) and SO(5,3) dyonic gaugings, for which the base point of the scalar manifold corresponds to an unstable de Sitter critical point. In the gauge-field frame where the embedding tensor takes the value in the sum of the 36 and 36’ representations of SL(8), we present a scheme that allows us to derive an analytic expression for the scalar potential. With the help of this formalism, we derive the full potential and gauge coupling functions in analytic forms for the SO(3)×SO(3)-invariant subsectors of SO(4,4) and SO(5,3) gaugings, and argue that there exist no new critical points in addition to those discovered so far. For the SO(4,4) gauging, we also study the behavior of 6-dimensional scalar fields in this sector near the Dall’Agata-Inverso de Sitter critical point at which the negative eigenvalue of the scalar mass square with the largest modulus goes to zero as the deformation parameter s approaches a critical value s_c. We find that when the deformation parameter s is taken sufficiently close to the critical value, inflation lasts more than 60 e-folds even if the initial point of the inflaton allows an O(0.1) deviation in Planck units from the Dall’Agata-Inverso critical point. It turns out that the spectral index n_s of the curvature perturbation at the time of the 60 e-folding number is always about 0.96 and within the 1σ range n_s = 0.9639±0.0047 obtained by Planck, irrespective of the value of the η parameter at the critical saddle point. The tensor-scalar ratio predicted by this model is around 10^-3 and is close to the value in the Starobinsky model.
Yen Hung Chen
2012-01-01
minimum cost spanning tree T in G such that the total weight in T is at most a given bound B. In this paper, we present two polynomial time approximation schemes (PTASs) for the constrained minimum spanning tree problem.
Making Tree Ensembles Interpretable
Hara, Satoshi; Hayashi, Kohei
2016-01-01
Tree ensembles, such as random forests and boosted trees, are renowned for their high prediction performance, whereas their interpretability is critically limited. In this paper, we propose a post-processing method that improves the model interpretability of tree ensembles. After learning a complex tree ensemble in a standard way, we approximate it by a simpler model that is interpretable for humans. To obtain the simpler model, we derive the EM algorithm minimizing the KL divergence from the ...
Mitchell, William
1992-01-01
This paper, dating from May 1991, contains preliminary (and unpublishable) notes on investigations about iteration trees. They will be of interest only to the specialist. In the first two sections I define notions of support and embeddings for tree iterations, proving for example that every tree iteration is a direct limit of finite tree iterations. This is a generalization to models with extenders of basic ideas of iterated ultrapowers using only ultrapowers. In the final section (which is m...
Meij, E.; Trieschnigg, D.; Rijke, M.de; Kraaij, W.
2008-01-01
In many collections, documents are annotated using concepts from a structured knowledge source such as an ontology or thesaurus. Examples include the news domain [7], where each news item is categorized according to the nature of the event that took place, and Wikipedia, with its per-article categor
Baumbach, Jan; Guo, Jian-Ying; Ibragimov, Rashid
2013-01-01
We study the tree edit distance problem with edge deletions and edge insertions as edit operations. We reformulate a special case of this problem as Covering Tree with Stars (CTS): given a tree T and a set of stars, can we connect the stars in by adding edges between them such that the resulting ...
Baumbach, Jan; Guo, Jiong; Ibragimov, Rashid
2015-01-01
We study the tree edit distance problem with edge deletions and edge insertions as edit operations. We reformulate a special case of this problem as Covering Tree with Stars (CTS): given a tree T and a set of stars, can we connect the stars in by adding edges between them such that the resulting ...
Engelfriet, Joost; Vogler, Heiko
1985-01-01
Macro tree transducers are a combination of top-down tree transducers and macro grammars. They serve as a model for syntax-directed semantics in which context information can be handled. In this paper the formal model of macro tree transducers is studied by investigating typical automata theoretical
Sweeney, Debra; Rounds, Judy
2011-01-01
Trees are great inspiration for artists. Many art teachers find themselves inspired and maybe somewhat obsessed with the natural beauty and elegance of the lofty tree, and how it changes through the seasons. One such tree that grows in several regions and always looks magnificent, regardless of the time of year, is the birch. In this article, the…
Finbow, Arthur; Frendrup, Allan; Vestergaard, Preben D.
cardinality then G is a total well dominated graph. In this paper we study composition and decomposition of total well dominated trees. By a reversible process we prove that any total well dominated tree can both be reduced to and constructed from a family of three small trees....
Brooks, Sarah DeWitt
2010-01-01
This article describes the author's experience in implementing a Wish Tree project in her school in an effort to bring the school community together with a positive art-making experience during a potentially stressful time. The concept of a wish tree is simple: plant a tree; provide tags and pencils for writing wishes; and encourage everyone to…
Are all maximally entangled states pure?
Cavalcanti, D; Terra-Cunha, M O
2005-01-01
In this Letter we study whether all maximally entangled states are pure through several entanglement monotones. Our conclusions allow us to generalize the idea of monogamy of entanglement. Then we propose a polygamy of entanglement, which expresses that if a general multipartite state is maximally entangled it is necessarily factorized by any other system.
Sampling and Representation Complexity of Revenue Maximization
Dughmi, Shaddin; Han, Li; Nisan, Noam
2014-01-01
We consider (approximate) revenue maximization in auctions where the distribution on input valuations is given via "black box" access to samples from the distribution. We observe that the number of samples required -- the sample complexity -- is tightly related to the representation complexity of an approximately revenue-maximizing auction. Our main results are upper bounds and an exponential lower bound on these complexities.
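A minimal, textbook special case of revenue maximization from samples is the single-bidder posted-price problem, where the empirical optimum is attained at a sample point. This hedged sketch (invented function name) illustrates the sample-based flavour of the setting, not the auctions analyzed in the paper.

```python
def best_sample_price(samples):
    """Posted price maximizing empirical revenue p * Pr[value >= p]
    over sampled valuations. Since the empirical revenue curve only
    changes at sample values, it suffices to scan the samples."""
    n = len(samples)
    best_p, best_rev = None, -1.0
    for p in sorted(set(samples)):
        rev = p * sum(1 for v in samples if v >= p) / n
        if rev > best_rev:
            best_p, best_rev = p, rev
    return best_p, best_rev
```

The connection to the abstract: the more samples are drawn, the closer this empirical maximizer gets to the revenue-optimal price for the underlying distribution, and the sample complexity of that convergence is the quantity the paper relates to representation complexity.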
Lisonek, Petr
1996-01-01
our classifications confirm the maximality of previously known sets, the results in E^7 and E^8 are new. Their counterpart in dimension larger than 10 is a set of unit vectors with only two values of inner products in the Lorentz space R^{d,1}. The maximality of this set again follows from a bound due...
An ethical justification of profit maximization
Koch, Carsten Allan
2010-01-01
In much of the literature on business ethics and corporate social responsibility, it is more or less taken for granted that attempts to maximize profits are inherently unethical. The purpose of this paper is to investigate whether an ethical argument can be given in support of profit maximizing b...
Alternative trailer configurations for maximizing payloads
Jason D. Thompson; Dana Mitchell; John Klepac
2017-01-01
In order for harvesting contractors to stay ahead of increasing costs, it is imperative that they employ all options to maximize productivity and efficiency. Transportation can account for half the cost to deliver wood to a mill. Contractors seek to maximize truck payload to increase productivity. The Forest Operations Research Unit, Southern Research Station, USDA...
Cohomology of Weakly Reducible Maximal Triangular Algebras
董浙; 鲁世杰
2000-01-01
In this paper, we introduce the concept of weakly reducible maximal triangular algebras φ, which form a large class of maximal triangular algebras. Let B be a weakly closed algebra containing φ; we prove that the cohomology spaces Hn(φ, B) (n≥1) are trivial.
Iqbal, Abdullah; Valous, Nektarios A; Sun, Da-Wen; Allen, Paul
2011-02-01
Lacunarity is about quantifying the degree of spatial heterogeneity in the visual texture of imagery through the identification of the relationships between patterns and their spatial configurations in a two-dimensional setting. The computed lacunarity data can designate a mathematical index of spatial heterogeneity; therefore, the corresponding feature vectors should possess the necessary inter-class statistical properties that would enable them to be used for pattern recognition purposes. The objective of this study is to construct a supervised parsimonious classification model of binary lacunarity data, computed by Valous et al. (2009), from pork ham slice surface images, with the aid of kernel principal component analysis (KPCA) and artificial neural networks (ANNs), using a portion of informative salient features. At first, the dimension of the initial space (510 features) was reduced by 90% in order to avoid any noise effects in the subsequent classification. Then, using KPCA, the first nineteen kernel principal components (99.04% of total variance) were extracted from the reduced feature space, and were used as input in the ANN. An adaptive feedforward multilayer perceptron (MLP) classifier was employed to obtain a suitable mapping from the input dataset. The correct classification percentages for the training, test and validation sets were 86.7%, 86.7%, and 85.0%, respectively. The results confirm that the classification performance was satisfactory. The binary lacunarity spatial metric captured relevant information that provided a good level of differentiation among pork ham slice images. Copyright © 2010 The American Meat Science Association. Published by Elsevier Ltd. All rights reserved.
Van Meter, Kimberly J; Basu, Nandita B
2015-01-01
Nutrient legacies in anthropogenic landscapes, accumulated over decades of fertilizer application, lead to time lags between implementation of conservation measures and improvements in water quality. Quantification of such time lags has remained difficult, however, due to an incomplete understanding of controls on nutrient depletion trajectories after changes in land-use or management practices. In this study, we have developed a parsimonious watershed model for quantifying catchment-scale time lags based on both soil nutrient accumulations (biogeochemical legacy) and groundwater travel time distributions (hydrologic legacy). The model accurately predicted the time lags observed in an Iowa watershed that had undergone a 41% conversion of area from row crop to native prairie. We explored the time scales of change for stream nutrient concentrations as a function of both natural and anthropogenic controls, from topography to spatial patterns of land-use change. Our results demonstrate that the existence of biogeochemical nutrient legacies increases time lags beyond those due to hydrologic legacy alone. In addition, we show that the maximum concentration reduction benefits vary according to the spatial pattern of intervention, with preferential conversion of land parcels having the shortest catchment-scale travel times providing proportionally greater concentration reductions as well as faster response times. In contrast, a random pattern of conversion results in a 1:1 relationship between percent land conversion and percent concentration reduction, irrespective of denitrification rates within the landscape. Our modeling framework allows for the quantification of tradeoffs between costs associated with implementation of conservation measures and the time needed to see the desired concentration reductions, making it of great value to decision makers regarding optimal implementation of watershed conservation measures.
Maximum Leaf Spanning Trees of Growing Sierpinski Networks Models
Yao, Bing; Xu, Jin
2016-01-01
The dynamical phenomena of complex networks are very difficult to predict from local information due to the rich microstructures and corresponding complex dynamics. On the other hand, it is a formidable job to compute stochastic parameters of a large network having thousands and thousands of nodes. We design several recursive algorithms for finding spanning trees having maximal leaves (MLS-trees) in the investigation of topological structures of Sierpinski growing network models, and use MLS-trees to determine the kernels, dominating and balanced sets of the models. We propose a new stochastic method for the models, called the edge-cumulative distribution, and show that it obeys a power law distribution.
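For contrast with the recursive MLS-tree algorithms described above, here is a naive baseline (invented names): grow an arbitrary BFS spanning tree and report its leaves. Finding a spanning tree with the maximum number of leaves is NP-hard in general, so this sketch makes no optimality claim.

```python
from collections import deque

def bfs_spanning_tree_leaves(adj, root):
    """Grow a BFS spanning tree of a connected graph `adj`
    (dict: node -> iterable of neighbours) rooted at `root`, and
    return the set of tree leaves (non-root nodes with no children)."""
    parent = {root: None}
    children = {root: []}
    q = deque([root])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                children[v] = []
                children[u].append(v)
                q.append(v)
    return {u for u in parent
            if not children[u] and parent[u] is not None}
```

On the star graph this baseline already finds the leafiest possible tree; on structured models such as the Sierpinski networks of the abstract, dedicated recursive algorithms are needed to approach the maximum leaf count.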
Compact Ancestry Labeling Schemes for Trees of Small Depth
2009-01-01
An {\\em ancestry labeling scheme} labels the nodes of any tree in such a way that ancestry queries between any two nodes in a tree can be answered just by looking at their corresponding labels. The common measure to evaluate the quality of an ancestry labeling scheme is by its {\\em label size}, that is the maximal number of bits stored in a label, taken over all $n$-node trees. The design of ancestry labeling schemes finds applications in XML search engines. In the context of these applicatio...
Inclusive fitness maximization: An axiomatic approach.
Okasha, Samir; Weymark, John A; Bossert, Walter
2014-06-07
Kin selection theorists argue that evolution in social contexts will lead organisms to behave as if maximizing their inclusive, as opposed to personal, fitness. The inclusive fitness concept allows biologists to treat organisms as akin to rational agents seeking to maximize a utility function. Here we develop this idea and place it on a firm footing by employing a standard decision-theoretic methodology. We show how the principle of inclusive fitness maximization and a related principle of quasi-inclusive fitness maximization can be derived from axioms on an individual's 'as if preferences' (binary choices) for the case in which phenotypic effects are additive. Our results help integrate evolutionary theory and rational choice theory, help draw out the behavioural implications of inclusive fitness maximization, and point to a possible way in which evolution could lead organisms to implement it. Copyright © 2014 Elsevier Ltd. All rights reserved.
Maximal Hypersurfaces in Spacetimes with Translational Symmetry
Bulawa, Andrew
2016-01-01
We consider four-dimensional vacuum spacetimes which admit a free isometric spacelike R-action. Taking a quotient with respect to the R-action produces a three-dimensional quotient spacetime. We establish several results regarding maximal hypersurfaces (spacelike hypersurfaces of zero mean curvature) in quotient spacetimes. First, we show that complete noncompact maximal hypersurfaces must either be flat cylinders S^1 x R or conformal to the Euclidean plane. Second, we establish a positive mass theorem for certain maximal hypersurfaces. Finally, while it is meaningful to use a bounded lapse when adopting the maximal hypersurface gauge condition in the four-dimensional (asymptotically flat) setting, it is shown here that nontrivial quotient spacetimes admit the maximal hypersurface gauge only with an unbounded lapse.
Phylogenetic Trees From Sequences
Ryvkin, Paul; Wang, Li-San
In this chapter, we review important concepts and approaches for phylogeny reconstruction from sequence data. We first cover some basic definitions and properties of phylogenetics, and briefly explain how scientists model sequence evolution and measure sequence divergence. We then discuss three major approaches for phylogenetic reconstruction: distance-based phylogenetic reconstruction, maximum parsimony, and maximum likelihood. In the third part of the chapter, we review how multiple phylogenies are compared by consensus methods and how to assess confidence using bootstrapping. At the end of the chapter are two sections that list popular software packages and additional reading.
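The maximum parsimony approach mentioned above is easiest to see in Fitch's small-parsimony algorithm for a single site. A minimal sketch, assuming a rooted binary tree encoded with nested tuples (an encoding chosen here purely for illustration):

```python
def fitch(tree, leaf_states):
    """Fitch's small-parsimony algorithm for one site on a rooted
    binary tree. `tree` is a leaf name or a (left, right) tuple;
    `leaf_states` maps leaf name -> observed character. Returns the
    candidate state set at the root and the minimum substitution count."""
    if not isinstance(tree, tuple):
        return {leaf_states[tree]}, 0
    (ls, lc) = fitch(tree[0], leaf_states)
    (rs, rc) = fitch(tree[1], leaf_states)
    inter = ls & rs
    if inter:                      # children agree: no extra change needed
        return inter, lc + rc
    return ls | rs, lc + rc + 1    # disagreement: one substitution somewhere
```

Summing this score over all sites gives the parsimony score of a candidate tree; tree search then looks for the topology minimizing that total.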
Degyi
2008-01-01
Trees are flourishing in Lhasa wherever history exists. There is such a man: he has already been through customs after his annual trek to Lhasa, which he has been making for over twenty years in succession to visit his tree. Although he has made this journey for so long, it is neither to visit friends or family, nor is it his hometown. It is a tree that is tied so profoundly to his heart. When the wind blows fiercely on the bare tree and winter snow falls, he stands before the tree with tears of jo...
Morozov, Dmitriy; Weber, Gunther H.
2014-03-31
Topological techniques provide robust tools for data analysis. They are used, for example, for feature extraction, for data de-noising, and for comparison of data sets. This chapter concerns contour trees, a topological descriptor that records the connectivity of the isosurfaces of scalar functions. These trees are fundamental to analysis and visualization of physical phenomena modeled by real-valued measurements. We study the parallel analysis of contour trees. After describing a particular representation of a contour tree, called local-global representation, we illustrate how different problems that rely on contour trees can be solved in parallel with minimal communication.
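The connectivity bookkeeping behind contour trees can be sketched with a high-to-low sweep and union-find. This toy version (invented names, distinct function values assumed) records the maxima and merge saddles of the join tree of a scalar function on a graph; it is not the local-global representation studied in the chapter.

```python
def join_tree_saddles(values, edges):
    """Sweep vertices from high to low function value, uniting
    superlevel-set components with union-find. Returns (maxima,
    saddles): vertices that start a component, and vertices at which
    two existing components merge (internal join-tree nodes).
    `values`: dict vertex -> scalar (assumed distinct);
    `edges`: iterable of (u, v) pairs."""
    parent = {}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]    # path halving
            x = parent[x]
        return x
    nbrs = {v: set() for v in values}
    for u, v in edges:
        nbrs[u].add(v)
        nbrs[v].add(u)
    maxima, saddles = [], []
    for v in sorted(values, key=lambda x: -values[x]):
        parent[v] = v
        roots = {find(w) for w in nbrs[v] if w in parent} - {v}
        if not roots:
            maxima.append(v)       # a new superlevel component is born
        if len(roots) >= 2:
            saddles.append(v)      # two or more components merge at v
        for r in roots:
            parent[r] = v
    return maxima, saddles
```

Running the same sweep from low to high gives the split tree; combining the two yields the contour tree, which is the step the chapter parallelizes.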
Baumeister, Dorothea
2007-01-01
Holzer and Holzer (Discrete Applied Mathematics 144(3):345--358, 2004) proved the Tantrix(TM) rotation puzzle problem with four colors NP-complete. Baumeister and Rothe (MCU 2007) modified their construction to achieve a parsimonious reduction from satisfiability to this problem. Since parsimonious reductions preserve the number of solutions, it follows that the unique version of the four-color Tantrix(TM) rotation puzzle problem is DP-complete under randomized reductions. In this paper, we study the three-color and the two-color Tantrix(TM) rotation puzzle problem. Restricting the number of allowed colors to three (respectively, to two) reduces the set of available Tantrix(TM) tiles from 56 to 14 (respectively, to 8). We prove that both the three-color and the two-color Tantrix(TM) rotation puzzle problem is NP-complete, which answers a question raised by Holzer and Holzer in the affirmative. Since both these reductions are parsimonious, it follows that both the unique three-color and the unique two-color Ta...
A tree-based model for homogeneous groupings of multinomials.
Yang, Tae Young
2005-11-30
The motivation of this paper is to provide a tree-based method for grouping multinomial data according to their classification probability vectors. We produce an initial tree by binary recursive partitioning whereby multinomials are successively split into two subsets and the splits are determined by maximizing the likelihood function. If the number of multinomials k is too large, we propose to order the multinomials, and then build the initial tree based on a dramatically smaller number k-1 of possible splits. The tree is then pruned from the bottom up. The pruning process involves a sequence of hypothesis tests of a single homogeneous group against the alternative that there are two distinct, internally homogeneous groups. As pruning criteria, the Bayesian information criterion and the Wilcoxon rank-sum test are proposed. The tree-based model is illustrated on genetic sequence data. Homogeneous groupings of genetic sequences present new opportunities to understand and align these sequences.
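The split step of such a tree, maximizing the likelihood over the k-1 ordered binary splits, can be sketched as follows. Function names are invented, and multinomial coefficients are dropped since they do not affect which split wins.

```python
from math import log

def pooled_loglik(rows):
    """Log-likelihood (up to constants) of count rows sharing one
    multinomial probability vector, evaluated at its MLE, i.e. the
    pooled cell proportions."""
    totals = [sum(col) for col in zip(*rows)]
    grand = sum(totals)
    return sum(t * log(t / grand) for t in totals if t > 0)

def best_split(rows):
    """With rows kept in a fixed order, evaluate the k-1 binary splits
    rows[:i] vs rows[i:] and return (i, log-likelihood) of the best."""
    best = None
    for i in range(1, len(rows)):
        ll = pooled_loglik(rows[:i]) + pooled_loglik(rows[i:])
        if best is None or ll > best[1]:
            best = (i, ll)
    return best
```

Recursing on each side yields the initial tree of the abstract; the ordering of the multinomials is what reduces the candidate splits from exponentially many subsets to k-1.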
Rollinson, Susan Wells
2012-01-01
The growth of a pine tree is examined by preparing "tree cookies" (cross-sectional disks) between whorls of branches. The use of Christmas trees allows the tree cookies to be obtained with inexpensive, commonly available tools. Students use the tree cookies to investigate the annual growth of the tree and how it corresponds to the number of whorls…
Programming macro tree transducers
Bahr, Patrick; Day, Laurence E.
2013-01-01
A tree transducer is a set of mutually recursive functions transforming an input tree into an output tree. Macro tree transducers extend this recursion scheme by allowing each function to be defined in terms of an arbitrary number of accumulation parameters. In this paper, we show how macro tree transducers can be concisely represented in Haskell, and demonstrate the benefits of utilising such an approach with a number of examples. In particular, tree transducers afford a modular programming style as they can be easily composed and manipulated. Our Haskell representation generalises the original definition of (macro) tree transducers, abolishing a restriction on finite state spaces. However, as we demonstrate, this generalisation does not affect compositionality.
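The abstract's Haskell representation is not reproduced here, but the accumulation-parameter idea can be illustrated with a rough Python analogue (a hypothetical `flatten` function, not the authors' code):

```python
# A binary tree is either ('leaf', value) or ('node', left, right).
def flatten(t, acc):
    """Macro-tree-transducer style: one recursive function carrying an
    accumulation parameter. Flattens leaves into a cons-list, threading
    the accumulator from the right subtree into the left."""
    if t[0] == 'leaf':
        return ('cons', t[1], acc)
    _, left, right = t
    return flatten(left, flatten(right, acc))

tree = ('node', ('node', ('leaf', 1), ('leaf', 2)), ('leaf', 3))
# flatten(tree, ('nil',)) builds ('cons', 1, ('cons', 2, ('cons', 3, ('nil',))))
```

Without the accumulation parameter, the same traversal would need an explicit list-append at every node; the parameter is what makes the scheme a *macro* tree transducer rather than a plain one.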
A note on a conjecture concerning tree-partitioning 3-regular graphs
Bohme, T.; Broersma, Hajo; Tuinstra, Hilde
1998-01-01
If G is a 4-connected maximal planar graph, then G is hamiltonian (by a theorem of Whitney), implying that its dual graph G* is a cyclically 4-edge-connected 3-regular planar graph admitting a partition of the vertex set into two parts, each inducing a tree in G*, a so-called tree-partition.
Liran Carmel
2010-01-01
Evolutionary binary characters are features of species or genes, indicating the absence (value zero) or presence (value one) of some property. Examples include eukaryotic gene architecture (the presence or absence of an intron in a particular locus), gene content, and morphological characters. In many studies, the acquisition of such binary characters is assumed to represent a rare evolutionary event, and consequently, their evolution is analyzed using various flavors of parsimony. However, when gain and loss of the character are not rare enough, a probabilistic analysis becomes essential. Here, we present a comprehensive probabilistic model to describe the evolution of binary characters on a bifurcating phylogenetic tree. A fast software tool, EREM, is provided, using maximum likelihood to estimate the parameters of the model and to reconstruct ancestral states (presence and absence in internal nodes) and events (gain and loss events along branches).
Are all maximally entangled states pure?
Cavalcanti, D.; Brandão, F. G. S. L.; Terra Cunha, M. O.
2005-10-01
We study whether all maximally entangled states are pure through several entanglement monotones. In the bipartite case, we find that the same conditions which lead to the uniqueness of the entropy of entanglement as a measure of entanglement exclude the existence of maximally mixed entangled states. In the multipartite scenario, our conclusions allow us to generalize the idea of the monogamy of entanglement: we establish the polygamy of entanglement, expressing that if a general state is maximally entangled with respect to some kind of multipartite entanglement, then it is necessarily factorized from any other system.
An ethical justification of profit maximization
Koch, Carsten Allan
2010-01-01
In much of the literature on business ethics and corporate social responsibility, it is more or less taken for granted that attempts to maximize profits are inherently unethical. The purpose of this paper is to investigate whether an ethical argument can be given in support of profit-maximizing behaviour. It is argued that some form of consequential ethics must be applied, and that both profit seeking and profit maximization can be defended from a rule-consequential point of view. It is noted, however, that the result does not apply unconditionally, but requires that certain form of profit (and...
Robust utility maximization in a discontinuous filtration
Jeanblanc, Monique; Ngoupeyou, Armand
2012-01-01
We study a problem of utility maximization under model uncertainty with information including jumps. We prove first that the value process of the robust stochastic control problem is described by the solution of a quadratic-exponential backward stochastic differential equation with jumps. Then, we establish a dynamic maximum principle for the optimal control of the maximization problem. The characterization of the optimal model and the optimal control (consumption-investment) is given via a forward-backward system which generalizes the result of Duffie and Skiadas (1994) and El Karoui, Peng and Quenez (2001) in the case of maximization of recursive utilities including model with jumps.
Pattern Avoidance in Ternary Trees
Gabriel, Nathan; Pudwell, Lara; Tay, Samuel
2011-01-01
This paper considers the enumeration of ternary trees (i.e. rooted ordered trees in which each vertex has 0 or 3 children) avoiding a contiguous ternary tree pattern. We begin by finding recurrence relations for several simple tree patterns; then, for more complex trees, we compute generating functions by extending a known algorithm for pattern-avoiding binary trees. Next, we present an alternate one-dimensional notation for trees which we use to find bijections that explain why certain pairs of tree patterns yield the same avoidance generating function. Finally, we compare our bijections to known "replacement rules" for binary trees and generalize these bijections to a larger class of trees.
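As a point of reference for the enumeration, unrestricted ternary trees satisfy a simple convolution recurrence; a sketch of the standard counting (not the pattern-avoiding generating functions of the paper):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def ternary_trees(n):
    """Number of ternary trees with n internal vertices, each internal
    vertex having exactly 3 children:
    t(0) = 1 and t(n) = sum over i+j+k = n-1 of t(i)*t(j)*t(k)."""
    if n == 0:
        return 1  # the empty tree
    return sum(ternary_trees(i) * ternary_trees(j) * ternary_trees(n - 1 - i - j)
               for i in range(n) for j in range(n - i))

# First values: 1, 1, 3, 12, 55, matching binomial(3n, n) / (2n + 1).
```

Avoidance of a contiguous pattern modifies this convolution by excluding configurations that complete a copy of the pattern at the root, which is what the recurrence relations in the paper capture.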
All Tree-level Amplitudes in Massless QCD
Dixon, Lance J.; /CERN /SLAC; Henn, Johannes M.; Plefka, Jan; Schuster, Theodor; /Humboldt U., Berlin
2010-10-25
We derive compact analytical formulae for all tree-level color-ordered gauge theory amplitudes involving any number of external gluons and up to three massless quark-anti-quark pairs. A general formula is presented based on the combinatorics of paths along a rooted tree and associated determinants. Explicit expressions are displayed for the next-to-maximally helicity violating (NMHV) and next-to-next-to-maximally helicity violating (NNMHV) gauge theory amplitudes. Our results are obtained by projecting the previously-found expressions for the super-amplitudes of the maximally supersymmetric Yang-Mills theory (N = 4 SYM) onto the relevant components yielding all gluon-gluino tree amplitudes in N = 4 SYM. We show how these results carry over to the corresponding QCD amplitudes, including massless quarks of different flavors as well as a single electroweak vector boson. The public Mathematica package GGT is described, which encodes the results of this work and yields analytical formulae for all N = 4 SYM gluon-gluino trees. These in turn yield all QCD trees with up to four external arbitrary-flavored massless quark-anti-quark-pairs.
Skaugen, Thomas; Haddeland, Ingjerd
2014-05-01
A new parameter-parsimonious rainfall-runoff model, DDD (Distance Distribution Dynamics), has been run operationally at the Norwegian Flood Forecasting Service for approximately a year. DDD has been calibrated for, altogether, 104 catchments throughout Norway, and provides runoff forecasts 8 days ahead at a daily temporal resolution, driven by precipitation and temperature from the meteorological forecast models AROME (48 hrs) and EC (192 hrs). The current version of DDD differs from the standard model used for flood forecasting in Norway, the HBV model, in its description of the subsurface and runoff dynamics. In DDD, the capacity of the subsurface water reservoir, M, is the only parameter to be calibrated, whereas the runoff dynamics are completely parameterised from observed characteristics derived from GIS and runoff recession analysis. Water is conveyed through the soils to the river network by waves with celerities determined by the level of saturation in the catchment. The distributions of distances between points in the catchment and the nearest river reach, and of distances along the river network, give, together with the celerities, distributions of travel times and, consequently, unit hydrographs. DDD has 6 fewer parameters to calibrate in the runoff module than the HBV model. Experience using DDD shows that especially the timing of flood peaks has improved considerably, and in a comparison between DDD and HBV over time series of 64 years for 75 catchments, DDD had a higher hit rate and a lower false alarm rate than HBV. For flood peaks higher than the mean annual flood, the median hit rates are 0.45 and 0.41 for the DDD and HBV models, respectively; the corresponding false alarm rates are 0.62 and 0.75. For floods over the five-year return interval, the median hit rates are 0.29 and 0.28 for the DDD and HBV models, respectively, with false alarm rates equal to 0.67 and 0.80. During 2014 the Norwegian flood forecasting service will run DDD operationally at a 3h temporal
M. Coustau
2012-04-01
Rainfall-runoff models are crucial tools for the statistical prediction of flash floods and for real-time forecasting. This paper focuses on a karstic basin in the South of France and proposes a distributed parsimonious event-based rainfall-runoff model, coherent with the poor knowledge of both evaporative and underground fluxes. The model combines an SCS runoff model and a Lag and Route routing model for each cell of a regular grid mesh. The efficiency of the model is discussed not only with respect to satisfactorily simulating floods but also to obtaining powerful relationships between the initial condition of the model and various predictors of the initial wetness state of the basin, such as the base flow, the Hu2 index from the Meteo-France SIM model, and the piezometric levels of the aquifer. The advantage of using meteorological radar rainfall in flood modelling is also assessed. Model calibration proved satisfactory at an hourly time step, with Nash criterion values ranging between 0.66 and 0.94 for eighteen of the twenty-one selected events. The radar rainfall inputs significantly improved the simulations or the assessment of the initial condition of the model for 5 events at the beginning of autumn, mostly in September–October (mean improvement of Nash is 0.09; correction in the initial condition ranges from −205 to 124 mm), but were less efficient for the events at the end of autumn. In this period, the weak vertical extension of the precipitation system and the low altitude of the 0 °C isotherm could affect the efficiency of radar measurements due to the distance between the basin and the radar (~60 km). The model initial condition S is correlated with the three tested predictors (R^{2} > 0.6). The interpretation of the model suggests that groundwater does not affect the first peaks of the flood, but can strongly impact subsequent peaks in the case of a multi-storm event. Because this kind of model is based on a limited
HEALTH INSURANCE: CONTRIBUTIONS AND REIMBURSEMENT MAXIMAL
HR Division
2000-01-01
Affected by both the salary adjustment index on 1.1.2000 and the evolution of the staff members and fellows population, the average reference salary, which is used as an index for fixed contributions and reimbursement maxima, has changed significantly. An adjustment of the amounts of the reimbursement maxima and the fixed contributions is therefore necessary, as from 1 January 2000.
Reimbursement maxima
The revised reimbursement maxima will appear on the leaflet summarising the benefits for the year 2000, which will soon be available from the divisional secretariats and from the AUSTRIA office at CERN.
Fixed contributions
The fixed contributions, applicable to some categories of voluntarily insured persons, are set as follows (amounts in CHF for monthly contributions):
voluntarily insured member of the personnel, with complete coverage: 815,- (was 803,- in 1999)
voluntarily insured member of the personnel, with reduced coverage: 407,- (was 402,- in 1999)
voluntarily insured no longer dependent child: 326,- (was 321...
Maximizing throughput by evaluating critical utilization paths
Weeda, P.J.
1991-01-01
Recently the relationship between batch structure, bottleneck machine, and maximum throughput has been explored for serial, convergent, and divergent process configurations consisting of two machines and three processes. In three of the seven possible configurations a multiple batch structure maximizes throughput.
Relationship between maximal exercise parameters and individual ...
Relationship between maximal exercise parameters and individual time trial ... It is widely accepted that the ventilatory threshold (VT) is an important ... This study investigated whether the physiological responses during a 20km time trial (TT) ...
Simple technique for maximal thoracic muscle harvest.
Marshall, M Blair; Kaiser, Larry R; Kucharczuk, John C
2004-04-01
We present a modification of the technique for standard muscle flap harvest: the placement of cutaneous traction sutures. This technique allows maximal dissection of the thoracic muscles even through minimal incisions. Through improved exposure and traction, complete dissection of the muscle bed can be performed and the amount of tissue obtained maximized. Because more muscle bulk is obtained with this technique, the need for a second muscle may be avoided.
MAXIMAL POINTS OF A REGULAR TRUTH FUNCTION
Every canonical linearly separable truth function is a regular function, but not every regular truth function is linearly separable. The most promising method of determining which of the regular truth functions are linearly separable requires finding their maximal and minimal points. In this report is developed a quick, systematic method of finding the maximal points of any regular truth function in terms of its arithmetic invariants. (Author)
Maximal Subgroups of Skew Linear Groups
M. Mahdavi-Hezavehi
2002-01-01
Let D be an infinite division algebra of finite dimension over its centre Z(D) = F, and n a positive integer. The structure of maximal subgroups of skew linear groups is investigated. In particular, assume N is a normal subgroup of GLn(D) and M is a maximal subgroup of N containing Z(N). It is shown that if M/Z(N) is finite, then N is central.
Additive Approximation Algorithms for Modularity Maximization
Kawase, Yasushi; Matsui, Tomomi; Miyauchi, Atsushi
2016-01-01
Modularity is a quality function for community detection, introduced by Newman and Girvan (2004). Community detection in graphs is now often conducted through modularity maximization: given an undirected graph $G=(V,E)$, we are asked to find a partition $\mathcal{C}$ of $V$ that maximizes the modularity. Although numerous algorithms have been developed to date, most of them have no theoretical approximation guarantee. Recently, to overcome this issue, the design of modularity max...
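The quality function being maximized can be computed directly from a partition; a minimal sketch of Newman-Girvan modularity (illustrative only, not the authors' approximation algorithm):

```python
from collections import defaultdict

def modularity(edges, partition):
    """Newman-Girvan modularity Q = sum over communities c of
    e_c/m - (d_c/(2m))^2, where e_c is the number of intra-community
    edges, d_c the total degree of c, and m the total edge count.
    edges: list of undirected (u, v) pairs; partition: node -> community."""
    m = len(edges)
    internal = defaultdict(int)    # intra-community edge counts e_c
    degree_sum = defaultdict(int)  # total degrees d_c
    for u, v in edges:
        degree_sum[partition[u]] += 1
        degree_sum[partition[v]] += 1
        if partition[u] == partition[v]:
            internal[partition[u]] += 1
    return sum(internal[c] / m - (degree_sum[c] / (2 * m)) ** 2
               for c in degree_sum)

# Two triangles joined by one bridge, split into the two triangles:
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
part = {0: 'A', 1: 'A', 2: 'A', 3: 'B', 4: 'B', 5: 'B'}
# modularity(edges, part) == 5/14 ≈ 0.357
```

Maximization then searches over partitions for the highest Q, which is what makes the problem computationally hard.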
Maximal Frequent Itemset Generation Using Segmentation Apporach
M.Rajalakshmi
2011-09-01
Finding frequent itemsets in a data source is a fundamental operation behind association rule mining. Generally, many algorithms use either the bottom-up or top-down approach for finding these frequent itemsets. When the length of the frequent itemsets to be found is large, the traditional algorithms find all the frequent itemsets from length 1 to length n, which is a costly process. This problem can be solved by mining only the Maximal Frequent Itemsets (MFS). Maximal Frequent Itemsets are frequent itemsets which have no proper frequent superset. Thus, generating only maximal frequent itemsets reduces the number of itemsets and also the time needed for the generation of all frequent itemsets, as each maximal itemset of length m implies the presence of 2^m − 2 further frequent itemsets. Furthermore, mining only maximal frequent itemsets is sufficient in many data mining applications, such as minimal key discovery and theory extraction. In this paper, we suggest a novel method for finding the maximal frequent itemsets from huge data sources using the concept of segmentation of the data source and prioritization of segments. Empirical evaluation shows that this method outperforms various other known methods.
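The definition of maximality can be illustrated with a naive enumeration (not the segmentation method proposed in the paper, which avoids materializing all frequent itemsets):

```python
from itertools import combinations

def maximal_frequent_itemsets(transactions, minsup):
    """Naive baseline: enumerate all frequent itemsets, then keep only
    those with no frequent proper superset. Exponential in the number of
    items; for illustration of the MFS definition only."""
    items = sorted({i for t in transactions for i in t})
    frequent = []
    for r in range(1, len(items) + 1):
        for cand in combinations(items, r):
            s = set(cand)
            if sum(s <= t for t in transactions) >= minsup:
                frequent.append(s)
    return [s for s in frequent
            if not any(s < t for t in frequent)]
```

For transactions `[{'a','b','c'}, {'a','b'}, {'a','c'}, {'a','b','c'}]` with `minsup=2`, the single maximal itemset is `{'a','b','c'}` (length m = 3), and it indeed implies 2^3 − 2 = 6 further frequent itemsets: its nonempty proper subsets.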
Natural selection and the maximization of fitness.
Birch, Jonathan
2016-08-01
The notion that natural selection is a process of fitness maximization gets a bad press in population genetics, yet in other areas of biology the view that organisms behave as if attempting to maximize their fitness remains widespread. Here I critically appraise the prospects for reconciliation. I first distinguish four varieties of fitness maximization. I then examine two recent developments that may appear to vindicate at least one of these varieties. The first is the 'new' interpretation of Fisher's fundamental theorem of natural selection, on which the theorem is exactly true for any evolving population that satisfies some minimal assumptions. The second is the Formal Darwinism project, which forges links between gene frequency change and optimal strategy choice. In both cases, I argue that the results fail to establish a biologically significant maximization principle. I conclude that it may be a mistake to look for universal maximization principles justified by theory alone. A more promising approach may be to find maximization principles that apply conditionally and to show that the conditions were satisfied in the evolution of particular traits.
Blundell, Charles; Heller, Katherine A
2012-01-01
Hierarchical structure is ubiquitous in data across many domains. There are many hierarchical clustering methods, frequently used by domain experts, which strive to discover this structure. However, most of these methods limit discoverable hierarchies to those with binary branching structure. This limitation, while computationally convenient, is often undesirable. In this paper we explore a Bayesian hierarchical clustering algorithm that can produce trees with arbitrary branching structure at each node, known as rose trees. We interpret these trees as mixtures over partitions of a data set, and use a computationally efficient, greedy agglomerative algorithm to find the rose trees which have high marginal likelihood given the data. Lastly, we perform experiments which demonstrate that rose trees are better models of data than the typical binary trees returned by other hierarchical clustering algorithms.
Favre, Charles
2004-01-01
This volume is devoted to a beautiful object, called the valuative tree and designed as a powerful tool for the study of singularities in two complex dimensions. Its intricate yet manageable structure can be analyzed by both algebraic and geometric means. Many types of singularities, including those of curves, ideals, and plurisubharmonic functions, can be encoded in terms of positive measures on the valuative tree. The construction of these measures uses a natural tree Laplace operator of independent interest.
Cardona, Gabriel; Llabrés, Mercè; Rosselló, Francesc; Valiente, Gabriel
2011-01-01
Galled trees, directed acyclic graphs that model evolutionary histories with isolated hybridization events, have become very popular due to both their biological significance and the existence of polynomial-time algorithms for their reconstruction. In this paper, we establish to which extent several distance measures for the comparison of evolutionary networks are metrics for galled trees, and hence, when they can be safely used to evaluate galled tree reconstruction methods.
A theory of game trees, based on solution trees
W.H.L.M. Pijls (Wim); A. de Bruin (Arie); A. Plaat (Aske)
1996-01-01
In this paper a complete theory of game tree algorithms is presented, entirely based upon the notion of a solution tree. Two types of solution trees are distinguished: max and min solution trees, respectively. We show that most game tree algorithms construct a superposition of a max and a min solution tree.
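The solution-tree view refines plain minimax: a max solution tree keeps one child at each max node and all children at each min node (and dually for min solution trees). A minimal minimax sketch for context (tree shapes and values here are hypothetical):

```python
def minimax(tree, maximizing=True):
    """tree: either a number (leaf payoff) or a list of subtrees.
    Returns the game value of the tree; levels alternate max and min."""
    if isinstance(tree, (int, float)):
        return tree
    values = [minimax(t, not maximizing) for t in tree]
    return max(values) if maximizing else min(values)

game = [[3, 5], [2, 9], [6, 1]]  # a max node over three min nodes
# minimax(game) == max(min(3, 5), min(2, 9), min(6, 1)) == 3
```

A max solution tree through `game` that witnesses the value 3 keeps only the first child at the root but both leaves of that min node, which is exactly the superposition structure the paper analyzes.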
Brodal, Gerth Stølting; Sioutas, Spyros; Pantazos, Kostas;
2015-01-01
We present a new overlay, called the Deterministic Decentralized tree (D2-tree). The D2-tree compares favorably to other overlays for the following reasons: (a) it provides matching and better complexities, which are deterministic for the supported operations; (b) the management of nodes (peers) ... The load-balancing scheme of elements into nodes is deterministic and general enough to be applied to other hierarchical tree-based overlays. This load-balancing mechanism is based on an innovative lazy weight-balancing mechanism, which is interesting in its own right.
Sexton, Alan P
2010-01-01
The M-tree is a paged, dynamically balanced metric access method that responds gracefully to the insertion of new objects. To date, no algorithm has been published for the corresponding Delete operation. We believe this to be non-trivial because of the design of the M-tree's Insert algorithm. We propose a modification to Insert that overcomes this problem and give the corresponding Delete algorithm. The performance of the tree is comparable to the M-tree and offers additional benefits in terms of supported operations, which we briefly discuss.
Sitchinava, Nodar; Zeh, Norbert
2012-01-01
We present the parallel buffer tree, a parallel external memory (PEM) data structure for batched search problems. This data structure is a non-trivial extension of Arge's sequential buffer tree to a private-cache multiprocessor environment and reduces the number of I/O operations by the number of available processor cores compared to its sequential counterpart, thereby taking full advantage of multicore parallelism. The parallel buffer tree is a search tree data structure that supports the batched parallel processing of a sequence of N insertions, deletions, membership queries, and range queries...
Welfare-maximizing and revenue-maximizing tariffs with a few domestic firms
Bruno Larue; Jean-Philippe Gervais
2002-01-01
In this paper we compare the orthodox optimal tariff formula with the appropriate welfare-maximizing tariff when there are a few producing or importing firms. The welfare-maximizing tariff can be very low, or even negative in some cases, while in others it can exceed the maximum-revenue tariff. The relationship between the welfare-maximizing tariff and the number of firms need not be monotonically increasing, because the tariff is not strictly used to internalize the terms-of-trade externality...
Maximizing Complementary Quantities by Projective Measurements
M. Souza, Leonardo A.; Bernardes, Nadja K.; Rossi, Romeu
2017-04-01
In this work, we study the so-called quantitative complementarity quantities. We focus on the following physical situation: two qubits (q_A and q_B) are initially in a maximally entangled state. One of them (q_B) interacts with an N-qubit system (R). After the interaction, projective measurements are performed on each of the qubits of R, in a basis that is chosen after independent optimization procedures: maximization of the visibility, the concurrence, and the predictability. For a specific maximization procedure, we study in detail how each of the complementary quantities behaves, conditioned on the intensity of the coupling between q_B and the N qubits. We show that, if the coupling is sufficiently "strong," independent of the maximization procedure, the concurrence tends to decay quickly. Interestingly enough, the behavior of the concurrence in this model is similar to the entanglement dynamics of a two-qubit system subjected to a thermal reservoir, despite the fact that we consider finite N. However, the visibility shows a different behavior: its maximization is more efficient for stronger coupling constants. Moreover, we investigate how the distinguishability, or the information stored in different parts of the system, is distributed for different couplings.
Subgroup finding via Bayesian additive regression trees.
Sivaganesan, Siva; Müller, Peter; Huang, Bin
2017-03-09
We provide a Bayesian decision theoretic approach to finding subgroups that have elevated treatment effects. Our approach separates the modeling of the response variable from the task of subgroup finding and allows a flexible modeling of the response variable irrespective of potential subgroups of interest. We use Bayesian additive regression trees to model the response variable and use a utility function defined in terms of a candidate subgroup and the predicted response for that subgroup. Subgroups are identified by maximizing the expected utility, where the expectation is taken with respect to the posterior predictive distribution of the response, and the maximization is carried out over an a priori specified set of candidate subgroups. Our approach allows subgroups based on both quantitative and categorical covariates. We illustrate the approach using a simulation study and a real data set. Copyright © 2017 John Wiley & Sons, Ltd.
J.R. Simpson; E.G. McPherson
2011-01-01
Urban trees can produce a number of benefits, among them improved air quality. Biogenic volatile organic compounds (BVOCs) emitted by some species are ozone precursors. Modifying future tree planting to favor lower-emitting species can reduce these emissions and aid air management districts in meeting federally mandated emissions reductions for these compounds. Changes...
Matching Subsequences in Trees
Bille, Philip; Gørtz, Inge Li
2009-01-01
Given two rooted, labeled trees P and T the tree path subsequence problem is to determine which paths in P are subsequences of which paths in T. Here a path begins at the root and ends at a leaf. In this paper we propose this problem as a useful query primitive for XML data, and provide new...
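The query primitive can be sketched naively by enumerating root-to-leaf label paths and testing subsequence containment (illustrative only; the paper's algorithms are far more efficient than this quadratic enumeration):

```python
def root_to_leaf_paths(tree):
    """tree: (label, [children]); yield the label sequence of every
    root-to-leaf path."""
    label, children = tree
    if not children:
        yield [label]
    for child in children:
        for rest in root_to_leaf_paths(child):
            yield [label] + rest

def is_subsequence(p, q):
    """True if p occurs in q in order (not necessarily contiguously)."""
    it = iter(q)
    return all(x in it for x in p)  # 'in' advances the iterator

def path_subsequence(p, T):
    """Is the label sequence p a subsequence of some root-to-leaf path of T?"""
    return any(is_subsequence(p, q) for q in root_to_leaf_paths(T))
```

For an XML-style tree `('a', [('b', [('c', [])]), ('d', [])])`, the query `['a', 'c']` matches the path a-b-c, while `['b', 'd']` matches no root-to-leaf path.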
Structural Equation Model Trees
Brandmaier, Andreas M.; von Oertzen, Timo; McArdle, John J.; Lindenberger, Ulman
2013-01-01
In the behavioral and social sciences, structural equation models (SEMs) have become widely accepted as a modeling tool for the relation between latent and observed variables. SEMs can be seen as a unification of several multivariate analysis techniques. SEM Trees combine the strengths of SEMs and the decision tree paradigm by building tree…
Tree biology and dendrochemistry
Kevin T. Smith; Walter C. Shortle
1996-01-01
Dendrochemistry, the interpretation of elemental analysis of dated tree rings, can provide a temporal record of environmental change. Using the dendrochemical record requires an understanding of tree biology. In this review, we pose four questions concerning assumptions that underlie recent dendrochemical research: 1) Does the chemical composition of the wood directly...
The major tree nuts include almonds, Brazil nuts, cashew nuts, hazelnuts, macadamia nuts, pecans, pine nuts, pistachio nuts, and walnuts. Tree nut oils are appreciated in food applications because of their flavors and are generally more expensive than other gourmet oils. Research during the last de...
Polyploidy Induction of Pteroceltis tatarinowii Maxim
Lin ZHANG; Feng WANG; Zhongkui SUN; Cuicui ZHU; Rongwei CHEN
2015-01-01
[Objective] This study was conducted to obtain tetraploid Pteroceltis tatarinowii Maxim. with excellent ornamental traits. [Method] The stem apex growing points of Pteroceltis tatarinowii Maxim. were treated with different concentrations of colchicine solution for different durations to determine a proper method and obtain polyploids. [Result] The most effective induction was obtained by treatment with 0.6%-0.8% colchicine for 72 h, with a 34.2% mutation rate. Flow cytometry and chromosome observation of the stem apex growing point of P. tatarinowii Maxim. proved that tetraploid plants were successfully obtained, with chromosome number 2n=4x=36. [Conclusion] The result not only fills the blank of polyploid breeding of P. tatarinowii, but also provides an effective way to broaden the methods of cultivation of fast-growing, high-quality, disease-resistant new varieties of Pteroceltis.
Quantum theory allows for absolute maximal contextuality
Amaral, Barbara; Cunha, Marcelo Terra; Cabello, Adán
2015-12-01
Contextuality is a fundamental feature of quantum theory and a necessary resource for quantum computation and communication. It is therefore important to investigate how large contextuality can be in quantum theory. Linear contextuality witnesses can be expressed as a sum S of n probabilities, and the independence number α and the Tsirelson-like number ϑ of the corresponding exclusivity graph are, respectively, the maximum of S for noncontextual theories and for the theory under consideration. A theory allows for absolute maximal contextuality if it has scenarios in which ϑ /α approaches n . Here we show that quantum theory allows for absolute maximal contextuality despite what is suggested by the examination of the quantum violations of Bell and noncontextuality inequalities considered in the past. Our proof is not constructive and does not single out explicit scenarios. Nevertheless, we identify scenarios in which quantum theory allows for almost-absolute-maximal contextuality.
The maximal process of nonlinear shot noise
Eliazar, Iddo; Klafter, Joseph
2009-05-01
In the nonlinear shot noise system-model, shots' statistics are governed by general Poisson processes, and shots' decay-dynamics are governed by general nonlinear differential equations. In this research we consider a nonlinear shot noise system and explore the process tracking, along time, the system's maximal shot magnitude. This 'maximal process' is a stationary Markov process following a decay-surge evolution; it is highly robust, and it is capable of displaying both a wide spectrum of statistical behaviors and a rich variety of random decay-surge sample-path trajectories. A comprehensive analysis of the maximal process is conducted, including its Markovian structure, its decay-surge structure, and its correlation structure. All results are obtained analytically and in closed form.
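The decay-surge evolution can be illustrated with a small simulation; here exponential decay stands in for the general nonlinear decay dynamics of the model, and the rate, decay, and magnitude distributions are hypothetical choices:

```python
import math
import random

def maximal_shot_process(rate, decay, t_max, dt=0.05, seed=1):
    """Simulate the maximal process M(t) = max_i m_i * exp(-decay*(t - t_i))
    over shots arriving as a Poisson(rate) process with i.i.d. unit-mean
    exponential magnitudes. Surges occur at arrivals; M(t) decays between
    them. Returns a sampled path as (t, M(t)) pairs."""
    rng = random.Random(seed)
    shots, path = [], []
    next_arrival = rng.expovariate(rate)
    for k in range(int(t_max / dt)):
        t = k * dt
        while next_arrival <= t:          # surge: a new shot arrives
            shots.append((next_arrival, rng.expovariate(1.0)))
            next_arrival += rng.expovariate(rate)
        m = max((mag * math.exp(-decay * (t - t0)) for t0, mag in shots),
                default=0.0)              # decay between arrivals
        path.append((t, m))
    return path
```

Plotting the returned path shows the characteristic sawtooth-like decay-surge trajectories the abstract describes.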
Energy Band Calculations for Maximally Even Superlattices
Krantz, Richard; Byrd, Jason
2007-03-01
Superlattices are multiple-well, semiconductor heterostructures that can be described by one-dimensional potential wells separated by potential barriers. We refer to a distribution of wells and barriers based on the theory of maximally even sets as a maximally even superlattice. The prototypical example of a maximally even set is the distribution of white and black keys on a piano keyboard. Black keys may represent wells and the white keys represent barriers. As the number of wells and barriers increase, efficient and stable methods of calculation are necessary to study these structures. We have implemented a finite-element method using the discrete variable representation (FE-DVR) to calculate E versus k for these superlattices. Use of the FE-DVR method greatly reduces the amount of calculation necessary for the eigenvalue problem.
Coded Splitting Tree Protocols
Sørensen, Jesper Hemming; Stefanovic, Cedomir; Popovski, Petar
2013-01-01
This paper presents a novel approach to multiple access control called the coded splitting tree protocol. The approach builds on the known tree splitting protocols, code structure, and successive interference cancellation (SIC). Several instances of the tree splitting protocol are initiated; each instance is terminated prematurely and subsequently iterated. The combined set of leaves from all the tree instances can then be viewed as a graph code, which is decodable using belief propagation. The main design problem is determining the order of splitting, which enables successful decoding as early as possible. Evaluations show that the proposed protocol provides considerable gains over the standard tree splitting protocol applying SIC. The improvement comes at the expense of increased feedback and receiver complexity.
Absence of parasympathetic reactivation after maximal exercise.
de Oliveira, Tiago Peçanha; de Alvarenga Mattos, Raphael; da Silva, Rhenan Bartels Ferreira; Rezende, Rafael Andrade; de Lima, Jorge Roberto Perrout
2013-03-01
The ability of the human organism to recover its autonomic balance soon after physical exercise cessation has an important impact on the individual's health status. Although the dynamics of heart rate recovery after maximal exercise has been studied, little is known about heart rate variability after this type of exercise. The aim of this study is to analyse the dynamics of heart rate and heart rate variability recovery after maximal exercise in healthy young men. Fifteen healthy male subjects (21.7 ± 3.4 years; 24.0 ± 2.1 kg m(-2)) participated in the study. The experimental protocol consisted of an incremental maximal exercise test on a cycle ergometer, until maximal voluntary exhaustion. After the test, recovery R-R intervals were recorded for 5 min. The heart rate recovery was calculated from the absolute differences between peak heart rate values and the heart rate values at 1 and 5 min of recovery. Postexercise heart rate variability was analysed from calculations of the SDNN and RMSSD indexes, in 30-s windows (SDNN(30s) and RMSSD(30s)) throughout recovery. One and 5 min after maximal exercise cessation, the heart rate had recovered 34.7 (±6.6) and 75.5 (±6.1) bpm, respectively. With regard to HRV recovery, while the SDNN(30s) index showed a slight increase, the RMSSD(30s) index remained totally suppressed throughout the recovery, suggesting an absence of vagal modulation reactivation and, possibly, a discrete sympathetic withdrawal. Therefore, it is possible that the main mechanism associated with the fall of HR after maximal exercise is sympathetic withdrawal or a vagal tone restoration without vagal modulation recovery. © 2012 The Authors. Clinical Physiology and Functional Imaging © 2012 Scandinavian Society of Clinical Physiology and Nuclear Medicine.
Oil Palm Tree Detection with High Resolution Multi-Spectral Satellite Imagery
Panu Srestasathiern
2014-10-01
Oil palm is an important cash crop in Thailand. To maximize productivity, oil palm plantation managers need to know the number of oil palm trees in the plantation area. In order to obtain this information, an approach for palm tree detection using high resolution satellite images is proposed. This approach makes it possible to count the number of oil palm trees in a plantation. The process begins with the selection of the vegetation index having the highest discriminating power between oil palm trees and background. This index is then used as the primary feature for palm tree detection. We hypothesize that oil palm trees are located at local peaks within the oil palm area. To enhance the separability between oil palm tree crowns and background, the rank transformation is applied to the index image. Local peaks on the enhanced index image are then detected using a non-maximal suppression algorithm. Since both the rank transformation and non-maximal suppression are window based, semi-variogram analysis is used to determine the appropriate window size. The performance of the proposed method was tested on high resolution satellite images. In general, our approach produced very accurate results, e.g., about a 90 percent detection rate when compared with manual labeling.
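The core detection step, window-based non-maximal suppression on the index image, can be sketched as follows. The window size and the tiny test image are illustrative assumptions, and the rank transformation and semi-variogram window selection steps are omitted:

```python
def local_peaks(img, win=1):
    """Window-based non-maximal suppression: keep (r, c) where the pixel
    is the strict maximum of its (2*win+1) x (2*win+1) neighbourhood.
    Sketch of the detection step only."""
    rows, cols = len(img), len(img[0])
    peaks = []
    for r in range(rows):
        for c in range(cols):
            v = img[r][c]
            neighbours = [
                img[i][j]
                for i in range(max(0, r - win), min(rows, r + win + 1))
                for j in range(max(0, c - win), min(cols, c + win + 1))
                if (i, j) != (r, c)
            ]
            if all(v > n for n in neighbours):
                peaks.append((r, c))
    return peaks

# hypothetical enhanced index image; each local peak ~ one palm crown
index_image = [
    [1, 2, 1, 0],
    [2, 9, 2, 1],
    [1, 2, 1, 7],
    [0, 1, 2, 1],
]
print(local_peaks(index_image))  # → [(1, 1), (2, 3)]
```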
Maximizing band gaps in plate structures
Halkjær, Søren; Sigmund, Ole; Jensen, Jakob Søndergaard
2006-01-01
Band gaps, i.e., frequency ranges in which waves cannot propagate, can be found in elastic structures for which there is a certain periodic modulation of the material properties or structure. In this paper, we maximize the band gap size for bending waves in a Mindlin plate. We analyze an infinite periodic plate using Bloch theory, which conveniently reduces the maximization problem to that of a single base cell. Secondly, we construct a finite periodic plate using a number of the optimized base cells in a postprocessed version. The dynamic properties of the finite plate are investigated…
Maximal and Minimal Congruences on Some Semigroups
Jintana SANWONG; Boorapa SINGHA; R.P.SULLIVAN
2009-01-01
In 2006, Sanwong and Sullivan described the maximal congruences on the semigroup N consisting of all non-negative integers under standard multiplication, and on the semigroup T(X) consisting of all total transformations of an infinite set X under composition. Here, we determine all maximal congruences on the semigroup Zn under multiplication modulo n. And, when Y ⊆ X, we do the same for the semigroup T(X,Y) consisting of all elements of T(X) whose range is contained in Y. We also characterise the minimal congruences on T(X,Y).
Maximizing oil yields may not optimize economics
1987-03-01
The Los Alamos National Laboratory has used the ASPEN computer code to calculate the economics of different hydroretorting conditions. When the oil yield was maximized and an oil shale plant was designed around this process, the costs turned out much higher than expected. However, calculations based on runs of less than maximum yield showed lower cost estimates. It is recommended that future efforts be concentrated on minimizing production costs rather than maximizing yields. An oil shale plant has been designed around minimum production cost, but it has not yet been tested experimentally.
Maximal Inequalities for Dependent Random Variables
Hoffmann-Jorgensen, Jorgen
2016-01-01
Maximal inequalities play a crucial role in many probabilistic limit theorems; for instance, the law of large numbers, the law of the iterated logarithm, the martingale limit theorem and the central limit theorem. Let X_1, X_2, ... be random variables with partial sums S_k = X_1 + ... + X_k. …
Singularity Structure of Maximally Supersymmetric Scattering Amplitudes
Arkani-Hamed, Nima; Bourjaily, Jacob L.; Cachazo, Freddy
2014-01-01
We present evidence that loop amplitudes in maximally supersymmetric (N=4) Yang-Mills theory (SYM) beyond the planar limit share some of the remarkable structures of the planar theory. In particular, we show that through two loops, the four-particle amplitude in full N=4 SYM has only logarithmic singularities and is free of any poles at infinity—properties closely related to uniform transcendentality and the UV finiteness of the theory. We also briefly comment on implications for maximal (N=8) supergravity theory (SUGRA).
Building phylogenetic trees from molecular data with MEGA.
Hall, Barry G
2013-05-01
Phylogenetic analysis is sometimes regarded as being an intimidating, complex process that requires expertise and years of experience. In fact, it is a fairly straightforward process that can be learned quickly and applied effectively. This Protocol describes the several steps required to produce a phylogenetic tree from molecular data for novices. In the example illustrated here, the program MEGA is used to implement all those steps, thereby eliminating the need to learn several programs, and to deal with multiple file formats from one step to another (Tamura K, Peterson D, Peterson N, Stecher G, Nei M, Kumar S. 2011. MEGA5: molecular evolutionary genetics analysis using maximum likelihood, evolutionary distance, and maximum parsimony methods. Mol Biol Evol. 28:2731-2739). The first step, identification of a set of homologous sequences and downloading those sequences, is implemented by MEGA's own browser built on top of the Google Chrome toolkit. For the second step, alignment of those sequences, MEGA offers two different algorithms: ClustalW and MUSCLE. For the third step, construction of a phylogenetic tree from the aligned sequences, MEGA offers many different methods. Here we illustrate the maximum likelihood method, beginning with MEGA's Models feature, which permits selecting the most suitable substitution model. Finally, MEGA provides a powerful and flexible interface for the final step, actually drawing the tree for publication. Here a step-by-step protocol is presented in sufficient detail to allow a novice to start with a sequence of interest and to build a publication-quality tree illustrating the evolution of an appropriate set of homologs of that sequence. MEGA is available for use on PCs and Macs from www.megasoftware.net.
Phylogenetic trees in bioinformatics
Burr, Tom L [Los Alamos National Laboratory
2008-01-01
Genetic data is often used to infer evolutionary relationships among a collection of viruses, bacteria, animal or plant species, or other operational taxonomic units (OTU). A phylogenetic tree depicts such relationships and provides a visual representation of the estimated branching order of the OTUs. Tree estimation is unique for several reasons, including: the types of data used to represent each OTU; the use of probabilistic nucleotide substitution models; the inference goals involving both tree topology and branch length; and the huge number of possible trees for a given sample of a very modest number of OTUs, which implies that finding the best tree(s) to describe the genetic data for each OTU is computationally demanding. Bioinformatics is too large a field to review here. We focus on that aspect of bioinformatics that includes study of similarities in genetic data from multiple OTUs. Although research questions are diverse, a common underlying challenge is to estimate the evolutionary history of the OTUs. Therefore, this paper reviews the role of phylogenetic tree estimation in bioinformatics, available methods and software, and identifies areas for additional research and development.
Brodal, Gerth Stølting; Moruz, Gabriel
2006-01-01
It is well-known that to minimize the number of comparisons a binary search tree should be perfectly balanced. Previous work has shown that a dominating factor over the running time for a search is the number of cache faults performed, and that an appropriate memory layout of a binary search tree can reduce the number of cache faults by several hundred percent. Motivated by the fact that during a search branching to the left or right at a node does not necessarily have the same cost, e.g. because of branch prediction schemes, we in this paper study the class of skewed binary search trees. For all nodes in a skewed binary search tree the ratio between the size of the left subtree and the size of the tree is a fixed constant (a ratio of 1/2 gives perfectly balanced trees). In this paper we present an experimental study of various memory layouts of static skewed binary search trees, where each…
Böcker, Sebastian; Dührkop, Kai
2016-01-01
Untargeted metabolomics commonly uses liquid chromatography mass spectrometry to measure abundances of metabolites; subsequent tandem mass spectrometry is used to derive information about individual compounds. One of the bottlenecks in this experimental setup is the interpretation of fragmentation spectra to accurately and efficiently identify compounds. Fragmentation trees have become a powerful tool for the interpretation of tandem mass spectrometry data of small molecules. These trees are determined from the data using combinatorial optimization, and aim at explaining the experimental data via fragmentation cascades. Fragmentation tree computation does not require spectral or structural databases. To obtain biochemically meaningful trees, one needs an elaborate optimization function (scoring). We present a new scoring for computing fragmentation trees, transforming the combinatorial optimization into a Maximum A Posteriori estimator. We demonstrate the superiority of the new scoring for two tasks: both for the de novo identification of molecular formulas of unknown compounds, and for searching a database for structurally similar compounds, our method, SIRIUS 3, performs significantly better than the previous version of our method, as well as other methods for this task. SIRIUS 3 can be a part of an untargeted metabolomics workflow, allowing researchers to investigate unknowns using automated computational methods. Graphical abstract: We present a new scoring for computing fragmentation trees from tandem mass spectrometry data based on Bayesian statistics. The best-scoring fragmentation tree most likely explains the molecular formula of the measured parent ion.
Cycle-maximal triangle-free graphs
Durocher, Stephane; Gunderson, David S.; Li, Pak Ching;
2015-01-01
We conjecture that the balanced complete bipartite graph K_{⌊n/2⌋,⌈n/2⌉} contains more cycles than any other n-vertex triangle-free graph, and we make some progress toward proving this. We give equivalent conditions for cycle-maximal triangle-free graphs; show bounds…
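Cycle counts for small triangle-free graphs can be checked directly by exhaustive search, which makes the conjecture easy to probe on toy cases. The following sketch (a standard anchored-DFS cycle count, not the authors' method) compares K_{2,3} with the 5-cycle:

```python
def count_cycles(n, edges):
    """Count simple cycles in an undirected graph by DFS, anchoring each
    cycle at its smallest vertex so no cycle is counted at two anchors."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    def dfs(start, current, visited):
        total = 0
        for nxt in adj[current]:
            if nxt == start and len(visited) >= 3:
                total += 1          # closed a cycle back at the anchor
            elif nxt > start and nxt not in visited:
                total += dfs(start, nxt, visited | {nxt})
        return total

    # each cycle is traversed in both directions; halve the count
    return sum(dfs(s, s, {s}) for s in range(n)) // 2

# K_{2,3}: complete bipartite graph on 5 vertices (triangle-free)
k23 = [(a, b) for a in (0, 1) for b in (2, 3, 4)]
# C_5: the 5-cycle, also triangle-free
c5 = [(i, (i + 1) % 5) for i in range(5)]
# K_{2,3} has three 4-cycles, while C_5 has a single cycle.
```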
Gradient dynamics and entropy production maximization
Janečka, Adam
2016-01-01
Gradient dynamics describes irreversible evolution by means of a dissipation potential, which leads to several advantageous features such as Maxwell–Onsager relations, a distinction between thermodynamic forces and fluxes, and a geometrical interpretation of the dynamics. Entropy production maximization is a powerful tool for predicting constitutive relations in engineering. In this paper, both approaches are compared and their shortcomings and advantages are discussed.
Robust Utility Maximization Under Convex Portfolio Constraints
Matoussi, Anis, E-mail: anis.matoussi@univ-lemans.fr [Université du Maine, Risk and Insurance institut of Le Mans Laboratoire Manceau de Mathématiques (France); Mezghani, Hanen, E-mail: hanen.mezghani@lamsin.rnu.tn; Mnif, Mohamed, E-mail: mohamed.mnif@enit.rnu.tn [University of Tunis El Manar, Laboratoire de Modélisation Mathématique et Numérique dans les Sciences de l’Ingénieur, ENIT (Tunisia)
2015-04-15
We study a robust utility maximization problem from terminal wealth and consumption under convex constraints on the portfolio. We state the existence and the uniqueness of the consumption–investment strategy by studying the associated quadratic backward stochastic differential equation. We characterize the optimal control by using the duality method and deriving a dynamic maximum principle.
Maximizing the Motivated Mind for Emergent Giftedness.
Rea, Dan
2001-01-01
This article explains how the theory of the motivated mind conceptualizes the productive interaction of intelligence, creativity, and achievement motivation and how this theory can help educators to maximize students' emergent potential for giftedness. It discusses the integration of cold-order thinking and hot-chaotic thinking into fluid-adaptive…
The Winning Edge: Maximizing Success in College.
Schmitt, David E.
This book offers college students ideas on how to maximize their success in college by examining the personal management techniques a student needs to succeed. Chapters are as follows: "Getting and Staying Motivated"; "Setting Goals and Tapping Your Resources"; "Conquering Time"; "Think Yourself to College Success"; "Understanding and Remembering…
MAXIMAL ELEMENTS AND EQUILIBRIUM OF ABSTRACT ECONOMY
刘心歌; 蔡海涛
2001-01-01
An existence theorem of maximal elements for a new type of preference correspondences which are Qθ-majorized is given. Then some existence theorems of equilibrium for abstract economy and qualitative game in which the constraint or preference correspondences are Qθ-majorized are obtained in locally convex topological vector spaces.
DNA solution of the maximal clique problem.
Ouyang, Q; Kaplan, P D; Liu, S; Libchaber, A
1997-10-17
The maximal clique problem has been solved by means of molecular biology techniques. A pool of DNA molecules corresponding to the total ensemble of six-vertex cliques was built, followed by a series of selection processes. The algorithm is highly parallel and has satisfactory fidelity. This work represents further evidence for the ability of DNA computing to solve NP-complete search problems.
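For comparison with the molecular selection procedure, the classical in-silico approach enumerates maximal cliques with the Bron-Kerbosch algorithm. The six-vertex graph below is a hypothetical example, not the instance encoded in the DNA experiment:

```python
def bron_kerbosch(r, p, x, adj, out):
    """Enumerate all maximal cliques (Bron-Kerbosch, no pivoting).
    r = current clique, p = candidates, x = already-processed vertices."""
    if not p and not x:
        out.append(sorted(r))
        return
    for v in list(p):
        bron_kerbosch(r | {v}, p & adj[v], x & adj[v], adj, out)
        p.remove(v)
        x.add(v)

def maximal_cliques(n, edges):
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    out = []
    bron_kerbosch(set(), set(range(n)), set(), adj, out)
    return sorted(out)

# hypothetical six-vertex graph: two triangles joined by the edge (2, 3)
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (3, 5), (4, 5)]
cliques = maximal_cliques(6, edges)
# → [[0, 1, 2], [2, 3], [3, 4, 5]]
```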
Maximal workload capacity on moving platforms
Heus, R.; Wertheim, A.H.
1996-01-01
Physical tasks on a moving platform required more energy than the same tasks on a non-moving platform. In this study the maximum aerobic performance (defined as V_O2max) of people working on a moving floor was established compared to the maximal aerobic performance on a non-moving floor. The main
Maximizing Resource Utilization in Video Streaming Systems
Alsmirat, Mohammad Abdullah
2013-01-01
Video streaming has recently grown dramatically in popularity over the Internet, Cable TV, and wire-less networks. Because of the resource demanding nature of video streaming applications, maximizing resource utilization in any video streaming system is a key factor to increase the scalability and decrease the cost of the system. Resources to…
Maximizing throughput in an automated test system
朱君
2007-01-01
Overview: This guide is a collection of whitepapers designed to help you develop test systems that lower your cost, increase your test throughput, and can scale with future requirements. This whitepaper provides strategies for maximizing system throughput. To download the complete developers guide (120 pages), visit ni.com/automatedtest.
The gaugings of maximal D=6 supergravity
Bergshoeff, E.; Samtleben, H.; Sezgin, E.
2008-01-01
We construct the most general gaugings of the maximal D = 6 supergravity. The theory is ( 2, 2) supersymmetric, and possesses an on-shell SO( 5, 5) duality symmetry which plays a key role in determining its couplings. The field content includes 16 vector fields that carry a chiral spinor representat
WEIGHTED BOUNDEDNESS OF A ROUGH MAXIMAL OPERATOR
Anonymous
2000-01-01
In this note the authors give the weighted Lp-boundedness for a class of maximal singular integral operators with rough kernel. The result in this note is an improvement and extension of the result obtained by Chen and Lin in 1990.
Maximizing the Range of a Projectile.
Brown, Ronald A.
1992-01-01
Discusses solutions to the problem of maximizing the range of a projectile. Presents three references that solve the problem with and without the use of calculus. Offers a fourth solution suitable for introductory physics courses that relies more on trigonometry and the geometry of the problem. (MDH)
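The trigonometric solution can be verified numerically: with the flat-ground range formula R = v0² sin(2θ)/g, a brute-force sweep over launch angles recovers the 45° optimum. This is standard introductory physics, independent of the article's specific derivations:

```python
import math

def projectile_range(v0, theta_deg, g=9.81):
    """Range of a projectile launched from ground level on flat ground:
    R = v0^2 * sin(2*theta) / g, maximized at theta = 45 degrees."""
    theta = math.radians(theta_deg)
    return v0 ** 2 * math.sin(2 * theta) / g

# sweep integer launch angles; the calculus-free answer is 45 degrees
best = max(range(0, 91), key=lambda a: projectile_range(20.0, a))
```

Complementary angles give equal ranges (sin(2θ) = sin(180° − 2θ)), which is the geometric observation the trigonometric solution exploits.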
Ehrenfest's Lottery--Time and Entropy Maximization
Ashbaugh, Henry S.
2010-01-01
Successful teaching of the Second Law of Thermodynamics suffers from limited simple examples linking equilibrium to entropy maximization. I describe a thought experiment connecting entropy to a lottery that mixes marbles amongst a collection of urns. This mixing obeys diffusion-like dynamics. Equilibrium is achieved when the marble distribution is…
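The marble-mixing lottery described here is closely related to the classical Ehrenfest urn model; a minimal two-urn simulation (an illustrative sketch, not the article's exact lottery) shows the drift toward the entropy-maximizing equal split:

```python
import random

def ehrenfest(n_marbles=100, steps=20000, seed=1):
    """Ehrenfest urn model: at each step a uniformly chosen marble hops to
    the other urn. The occupancy of urn A drifts toward the
    entropy-maximizing equal split between the urns."""
    rng = random.Random(seed)
    in_a = n_marbles            # start with every marble in urn A
    history = []
    for _ in range(steps):
        if rng.randrange(n_marbles) < in_a:
            in_a -= 1           # the drawn marble was in A; move it to B
        else:
            in_a += 1           # the drawn marble was in B; move it to A
        history.append(in_a)
    return history

hist = ehrenfest()
late = hist[len(hist) // 2:]
avg = sum(late) / len(late)    # hovers near 50 of 100 at equilibrium
```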
Testing maximality in muon neutrino flavor mixing
Choubey, S; Choubey, Sandhya; Roy, Probir
2003-01-01
The small difference between the survival probabilities of muon neutrino and antineutrino beams, traveling through earth matter in a long baseline experiment such as MINOS, is shown to be an important measure of any possible deviation from maximality in the flavor mixing of those states.
Average utility maximization: A preference foundation
A.V. Kothiyal (Amit); V. Spinu (Vitalie); P.P. Wakker (Peter)
2014-01-01
This paper provides necessary and sufficient preference conditions for average utility maximization over sequences of variable length. We obtain full generality by using a new algebraic technique that exploits the richness structure naturally provided by the variable length of the sequences.
On the Hardy-Littlewood maximal theorem
Shinji Yamashita
1982-01-01
The Hardy-Littlewood maximal theorem is extended to functions of class PL in the sense of E. F. Beckenbach and T. Radó, with a more precise expression of the absolute constant in the inequality. As applications we deduce some results on hyperbolic Hardy classes in terms of the non-Euclidean hyperbolic distance in the unit disk.
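For context, the operator in question is the Hardy-Littlewood maximal function, whose standard definition and L^p bound are:

```latex
Mf(x) = \sup_{r>0} \frac{1}{|B(x,r)|} \int_{B(x,r)} |f(y)|\,\mathrm{d}y,
\qquad \|Mf\|_{L^p} \le C_p \|f\|_{L^p} \quad (1 < p \le \infty).
```

The paper's extension applies this circle of ideas to functions of class PL and gives a more precise expression for the constant.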
Maximal Cartel Pricing and Leniency Programs
Houba, H.E.D.; Motchenkova, E.; Wen, Q.
2008-01-01
For a general class of oligopoly models with price competition, we analyze the impact of ex-ante leniency programs in antitrust regulation on the endogenous maximal-sustainable cartel price. This impact depends upon industry characteristics including its cartel culture. Our analysis disentangles the
How to Generate Good Profit Maximization Problems
Davis, Lewis
2014-01-01
In this article, the author considers the merits of two classes of profit maximization problems: those involving perfectly competitive firms with quadratic and cubic cost functions. While relatively easy to develop and solve, problems based on quadratic cost functions are too simple to address a number of important issues, such as the use of…
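A generic version of the quadratic-cost case can be set up in a few lines; the numbers below are arbitrary illustrations, not taken from the article:

```python
def optimal_quantity(p, a, b):
    """Competitive firm with quadratic cost C(q) = a*q^2 + b*q + c.
    Profit p*q - C(q) is maximized where price equals marginal cost,
    p = C'(q) = 2*a*q + b, giving q* = (p - b) / (2*a).
    Generic textbook setup, not the article's specific exercises."""
    return (p - b) / (2 * a)

def profit(q, p, a, b, c):
    return p * q - (a * q ** 2 + b * q + c)

q_star = optimal_quantity(p=10.0, a=0.5, b=2.0)  # → 8.0
```

The cubic-cost case adds a quadratic marginal-cost equation, which is what lets it address issues (e.g. shutdown conditions) that the quadratic case is too simple to capture.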
Maximally entangled mixed states made easy
Aiello, A; Voigt, D; Woerdman, J P
2006-01-01
We show that, contrary to a recent claim [M. Ziman and V. Bužek, Phys. Rev. A 72, 052325 (2005)], it is possible to achieve maximally entangled mixed states of two qubits from the singlet state via the action of local nonunital quantum channels. Moreover, we present a simple, feasible linear optical implementation of one such channel.
Maximizing scientific knowledge from randomized clinical trials
Gustafsson, Finn; Atar, Dan; Pitt, Bertram
2010-01-01
Trialists have an ethical and financial responsibility to plan and conduct clinical trials in a manner that will maximize the scientific knowledge gained from the trial. However, the amount of scientific information generated by randomized clinical trials in cardiovascular medicine is highly...
Maximal Heat Generation in Nanoscale Systems
ZHOU Li-Ling; LI Shu-Shen; ZENG Zhao-Yang
2009-01-01
We investigate the heat generation in a nanoscale system coupled to normal leads and find that it is maximal when the average occupation of the electrons in the nanoscale system is 0.5, no matter what mechanism induces the heat generation.
Understanding violations of Gricean maxims in preschoolers and adults.
Okanda, Mako; Asada, Kosuke; Moriguchi, Yusuke; Itakura, Shoji
2015-01-01
This study used a revised Conversational Violations Test to examine Gricean maxim violations in 4- to 6-year-old Japanese children and adults. Participants' understanding of the following maxims was assessed: be informative (first maxim of quantity), avoid redundancy (second maxim of quantity), be truthful (maxim of quality), be relevant (maxim of relation), avoid ambiguity (second maxim of manner), and be polite (maxim of politeness). Sensitivity to violations of Gricean maxims increased with age: 4-year-olds' understanding of maxims was near chance, 5-year-olds understood some maxims (first maxim of quantity and maxims of quality, relation, and manner), and 6-year-olds and adults understood all maxims. Preschoolers acquired the maxim of relation first and had the greatest difficulty understanding the second maxim of quantity. Children and adults differed in their comprehension of the maxim of politeness. The development of the pragmatic understanding of Gricean maxims and implications for the construction of developmental tasks from early childhood to adulthood are discussed.
Core set approach to reduce uncertainty of gene trees
Okuhara Yoshiyasu
2006-05-01
Background: A genealogy based on gene sequences within a species plays an essential role in the estimation of the character, structure, and evolutionary history of that species. Because intraspecific sequences are more closely related than interspecific ones, detailed information on the evolutionary process may be available by determining all the node sequences of trees, providing insight into functional constraints and adaptations. However, strong evolutionary correlations on a few lineages make this determination difficult as a whole, and the maximum parsimony (MP) method frequently allows a number of topologies with the same total branching length. Results: Kitazoe et al. developed a multidimensional vector-space representation of phylogeny. It converts additivity of evolutionary distances to orthogonality among the vectors expressing branches, and provides a unified index to measure deviations from the orthogonality. In this paper, this index is used to detect and exclude sequences with large deviations from orthogonality, and then to select a maximum subset ("core set") of sequences for which MP generates a single solution. Once the core set tree is formed, with all of its node sequences given, the excluded sequences are found to have basically two phylogenetic positions on this tree, respectively. Fortunately, since multiple substitutions are rare in intra-species sequences, the variance of nucleotide transitions is confined to a small range. By applying the core set approach to 38 partial env sequences of HIV-1 in a single patient and also 198 mitochondrial COI and COII DNA sequences of Anopheles dirus, we demonstrate how consistently this approach constructs the tree. Conclusion: In the HIV dataset, we confirmed that the obtained core set tree is the unique maximum set for which MP proposes a single tree. In the mosquito dataset, the fluctuation of nucleotide transitions caused by the sequences excluded from the core set was very small.
Fan Aihua
2004-01-01
The vertices of an infinite locally finite tree T are labelled by a collection of i.i.d. real random variables {X_σ}_{σ∈T}, which defines a tree-indexed walk S_σ = Σ_{θ<r≤σ} X_r. We introduce and study the oscillations of the walk; the exact Hausdorff dimension of the set of such ξ's is calculated. An application is given to study the local variation of Brownian motion. A general limsup deviation problem on trees is also studied.
Modularity maximization and tree clustering: Novel ways to determine effective geographic borders
Grady, Daniel; Thiemann, Christian; Theis, Fabian; Brockmann, Dirk
2011-01-01
Territorial subdivisions and geographic borders are essential for understanding phenomena in sociology, political science, history, and economics. They influence the interregional flow of information and cross-border trade and affect the diffusion of innovation and technology. However, most existing administrative borders were determined by a variety of historic and political circumstances along with some degree of arbitrariness. Societies have changed drastically, and it is doubtful that currently existing borders reflect the most logical divisions. Fortunately, at this point in history we are in a position to actually measure some aspects of the geographic structure of society through human mobility. Large-scale transportation systems such as trains and airlines provide data about the number of people traveling between geographic locations, and many promising human mobility proxies are being discovered, such as cell phones, bank notes, and various online social networks. In this chapter we apply two optimiz...
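The quantity being optimized in the first of these approaches is Newman-Girvan modularity. A minimal sketch of the standard definition, evaluated on a toy two-community graph rather than the mobility network studied in the chapter:

```python
def modularity(n, edges, comm):
    """Newman-Girvan modularity Q of a partition:
    Q = (1/2m) * sum_ij (A_ij - k_i*k_j/(2m)) * delta(comm_i, comm_j).
    comm[v] is the community label of vertex v. Sketch of the standard
    definition; the chapter maximizes Q to find effective borders."""
    m = len(edges)
    deg = [0] * n
    a = [[0] * n for _ in range(n)]
    for u, v in edges:
        a[u][v] += 1
        a[v][u] += 1
        deg[u] += 1
        deg[v] += 1
    q = 0.0
    for i in range(n):
        for j in range(n):
            if comm[i] == comm[j]:
                q += a[i][j] - deg[i] * deg[j] / (2 * m)
    return q / (2 * m)

# toy network: two triangles joined by a single bridge edge
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
q = modularity(6, edges, comm=[0, 0, 0, 1, 1, 1])  # → 5/14 ≈ 0.357
```

Splitting along the bridge scores Q = 5/14, while lumping everything into one community scores Q = 0, which is why the bridge is recovered as the effective border.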
Principal components analysis in the space of phylogenetic trees
Nye, Tom M W
2012-01-01
Phylogenetic analysis of DNA or other data commonly gives rise to a collection or sample of inferred evolutionary trees. Principal Components Analysis (PCA) cannot be applied directly to collections of trees since the space of evolutionary trees on a fixed set of taxa is not a vector space. This paper describes a novel geometrical approach to PCA in tree-space that constructs the first principal path in an analogous way to standard linear Euclidean PCA. Given a data set of phylogenetic trees, a geodesic principal path is sought that maximizes the variance of the data under a form of projection onto the path. Due to the high dimensionality of tree-space and the nonlinear nature of this problem, the computational complexity is potentially very high, so approximate optimization algorithms are used to search for the optimal path. Principal paths identified in this way reveal and quantify the main sources of variation in the original collection of trees in terms of both topology and branch lengths. The approach is...
Tree-growth analyses to estimate tree species' drought tolerance
Eilmann, B.; Rigling, A.
2012-01-01
Climate change is challenging forestry management and practices. Among other things, tree species with the ability to cope with more extreme climate conditions have to be identified. However, while environmental factors may severely limit tree growth or even cause tree death, assessing a tree specie
Generalising tree traversals and tree transformations to DAGs
Bahr, Patrick; Axelsson, Emil
2017-01-01
We present a recursion scheme based on attribute grammars that can be transparently applied to trees and acyclic graphs. Our recursion scheme allows the programmer to implement a tree traversal or a tree transformation and then apply it to compact graph representations of trees instead...
0.7 and 3 T MRI and sap flow in intact trees: xylem and phloem in action
Homan, N.; Windt, C.W.; Vergeldt, F.J.; Gerkema, E.; As, van H.
2007-01-01
Dedicated magnetic resonance imaging (MRI) hardware is described that allows imaging of sap flow in intact trees with a maximal trunk diameter of 4 cm and height of several meters. This setup is used to investigate xylem and phloem flow in an intact tree quantitatively. Due to the fragile gradients
Kevin M. Potter; Christopher W. Woodall
2014-01-01
Biodiversity conveys numerous functional benefits to forested ecosystems, including community stability and resilience. In the context of managing forests for climate change mitigation/adaptation, maximizing and/or maintaining aboveground biomass will require understanding the interactions between tree biodiversity, site productivity, and the stocking of live trees....
Larson, David; Jacob, Sharon E
2012-01-01
Tea tree oil is an increasingly popular ingredient in a variety of household and cosmetic products, including shampoos, massage oils, skin and nail creams, and laundry detergents. Known for its potential antiseptic properties, it has been shown to be active against a variety of bacteria, fungi, viruses, and mites. The oil is extracted from the leaves of the tea tree via steam distillation. This essential oil possesses a sharp camphoraceous odor followed by a menthol-like cooling sensation. Most commonly an ingredient in topical products, it is used at a concentration of 5% to 10%. Even at this concentration, it has been reported to induce contact sensitization and allergic contact dermatitis reactions. In 1999, tea tree oil was added to the North American Contact Dermatitis Group screening panel. The latest prevalence rates suggest that 1.4% of patients referred for patch testing had a positive reaction to tea tree oil.
Aval, Jean-Christophe; Nadeau, Philippe
2011-01-01
In this work we introduce and study tree-like tableaux, which are certain fillings of Ferrers diagrams in simple bijection with permutation tableaux and alternative tableaux. We exhibit an elementary insertion procedure on our tableaux which gives a clear proof that tableaux of size n are counted by n!, and which moreover respects most of the well-known statistics studied originally on alternative and permutation tableaux. Our insertion procedure allows us to define in particular two simple new bijections between tree-like tableaux and permutations: the first one is conceived specifically to respect the generalized pattern 2-31, while the second one respects the underlying tree of a tree-like tableau.
Minnesota Department of Natural Resources — The National Land Cover Database 2001 tree canopy layer for Minnesota (mapping zones 39-42, 50-51) was produced through a cooperative project conducted by the...
Kelly, K.; White, K.
1981-03-01
An important harvesting alternative in North America is the Full Tree Method, in which trees are felled and transported to roadside, intermediate or primary landings with limbs and branches intact. The acceptance of Full Tree Systems is due to many factors including: labour productivity and increased demands on the forest for "new products". These conditions are shaping the future look for forest Harvesting Systems, but must not be the sole determinants. All harvesting implications, such as those affecting productivity and silviculture, should be thoroughly understood. This paper does not try to discuss every implication, nor any particular one in depth; its purpose is to highlight those areas requiring consideration and to review several current North American Full Tree Systems. (Refs. 5).
1981-01-01
[Garbled table residue from a fault-tree evaluation report; recoverable criteria include manufacturer, location, seismic susceptibility, flood susceptibility, temperature, humidity, radiation, and wear-out susceptibility. Cited: W.E. Vesely, "Analysis of Fault Trees by Kinetic Tree Theory," Idaho Nuclear; Hanford Company, Richland, Washington, ARH-ST-112, July 1975.]
Jaeger, Manfred
2006-01-01
We introduce type extension trees as a formal representation language for complex combinatorial features of relational data. Based on a very simple syntax, this language provides a unified framework for expressing features as diverse as embedded subgraphs on the one hand, and marginal counts of attribute values on the other. We show by various examples how many existing relational data mining techniques can be expressed as the problem of constructing a type extension tree and a discriminant function.
Somchaipeng, Kerawit; Sporring, Jon; Johansen, Peter
2007-01-01
We propose MultiScale Singularity Trees (MSSTs) as a structure to represent images, and we propose an algorithm for image comparison based on comparing MSSTs. The algorithm is tested on 3 public image databases and compared to 2 state-of-the-art methods. We conclude that the computational complexity of our algorithm only allows for the comparison of small trees, and that the results of our method are comparable with the state of the art using much fewer parameters for image representation.
Schmidt, Lars Holger
Forest tree improvement encompasses a number of scientific and technical areas like floral-, reproductive- and micro-biology, genetics, breeding methods and strategies, propagation, gene conservation, data analysis and statistics, each area with a comprehensive terminology. The terms selected for definition here are those most frequently used in tree improvement literature. Clonal propagation is included in view of the great expansion of that field as a means of mass multiplication of improved material.
Manwani, Naresh
2010-01-01
In this paper we present a new algorithm for learning oblique decision trees. Most of the current decision tree algorithms rely on impurity measures to assess the goodness of hyperplanes at each node while learning a decision tree in a top-down fashion. These impurity measures do not properly capture the geometric structures in the data. Motivated by this, our algorithm uses a strategy to assess the hyperplanes in such a way that the geometric structure in the data is taken into account. At each node of the decision tree, we find the clustering hyperplanes for both the classes and use their angle bisectors as the split rule at that node. We show through empirical studies that this idea leads to small decision trees and better performance. We also present some analysis to show that the angle bisectors of clustering hyperplanes that we use as the split rules at each node, are solutions of an interesting optimization problem and hence argue that this is a principled method of learning a decision tree.
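As a toy illustration of the split rule described above (a sketch based on my reading of the abstract, not the authors' implementation), the two angle bisectors of a pair of hyperplanes through the origin are the normalized sum and difference of their unit normals:

```python
import numpy as np

def angle_bisectors(w1, w2):
    """Angle bisectors of two hyperplanes with normal vectors w1 and w2.

    For unit normals u1, u2, the bisecting directions are u1 + u2 and
    u1 - u2 (renormalized); each makes equal angles with both normals.
    """
    u1 = np.asarray(w1, dtype=float)
    u2 = np.asarray(w2, dtype=float)
    u1 /= np.linalg.norm(u1)
    u2 /= np.linalg.norm(u2)
    b1, b2 = u1 + u2, u1 - u2
    return b1 / np.linalg.norm(b1), b2 / np.linalg.norm(b2)

# Hypothetical clustering-hyperplane normals for the two classes at a node:
b1, b2 = angle_bisectors([1.0, 0.0], [0.0, 1.0])
```

In the paper's setting, directions like these would serve as candidate split hyperplanes at each node of the decision tree.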
2014-01-01
With a view to creating new landscapes and making its population of trees safer and healthier, this winter CERN will complete the tree-felling campaign started in 2010. Tree felling will take place between 15 and 22 November on the Swiss part of the Meyrin site. This work is being carried out above all for safety reasons. The trees to be cut down are at risk of falling as they are too old and too tall to withstand the wind. In addition, the roots of poplar trees are very powerful and spread widely, potentially damaging underground networks, pavements and roadways. Compensatory tree planting campaigns will take place in the future, subject to the availability of funding, with the aim of creating coherent landscapes while also respecting the functional constraints of the site. These matters are being considered in close collaboration with the Geneva nature and countryside directorate (Direction générale de la nature et du paysage, DGNP). GS-SE Group
Maximal Unitarity for the Four-Mass Double Box
Johansson, Henrik; Larsen, Kasper J.
2014-01-01
We extend the maximal-unitarity formalism at two loops to double-box integrals with four massive external legs. These are relevant for higher-point processes, as well as for heavy vector rescattering, VV -> VV. In this formalism, the two-loop amplitude is expanded over a basis of integrals. We obtain formulas for the coefficients of the double-box integrals, expressing them as products of tree-level amplitudes integrated over specific complex multidimensional contours. The contours are subject to the consistency condition that integrals over them annihilate any integrand whose integral over real Minkowski space vanishes. These include integrals over parity-odd integrands and total derivatives arising from integration-by-parts (IBP) identities. We find that, unlike the zero- through three-mass cases, the IBP identities impose no constraints on the contours in the four-mass case. We also discuss the algebraic varieties connected with various double-box integrals, and show how discrete symmetries of these variet...
Attack Trees with Sequential Conjunction
Jhawar, Ravi; Kordy, Barbara; Mauw, Sjouke; Radomirović, Sasa; Trujillo-Rasua, Rolando
2015-01-01
We provide the first formal foundation of SAND attack trees, which are a popular extension of the well-known attack trees. The SAND attack tree formalism increases the expressivity of attack trees by introducing the sequential conjunctive operator SAND. This operator enables the modeling of
Measurable Maximal Energy and Minimal Time Interval
Dahab, Eiman Abou El
2014-01-01
The possibility of finding the measurable maximal energy and the minimal time interval is discussed in different quantum aspects. It is found that the linear generalized uncertainty principle (GUP) approach gives a non-physical result. Based on the large-scale Schwarzschild solution, the quadratic GUP approach is utilized. The calculations are performed at the shortest distance, at which general relativity is assumed to be a good approximation for quantum gravity, and at larger distances as well. It is found that both maximal energy and minimal time have the order of the Planck time. Then, the uncertainties in both quantities are accordingly bounded. Some physical insights are addressed. Also, the implications on the physics of the early Universe and on quantized mass are outlined. The results are related to the existence of a finite cosmological constant and a minimum mass (mass quanta).
Maximal temperature in a simple thermodynamical system
Dai, De-Chang
2016-01-01
Temperature in a simple thermodynamical system is not limited from above. It is also widely believed that it does not make sense talking about temperatures higher than the Planck temperature in the absence of the full theory of quantum gravity. Here, we demonstrate that there exists a maximal achievable temperature in a system where particles obey the laws of quantum mechanics and classical gravity, before we reach the realm of quantum gravity. Namely, if two particles with a given center of mass energy come closer than the Schwarzschild diameter apart, according to classical gravity they will form a black hole. It is possible to calculate that a simple thermodynamical system will be dominated by black holes at a critical temperature which is about three times lower than the Planck temperature. That represents the maximal achievable temperature in a simple thermodynamical system.
Hamiltonian formalism and path entropy maximization
Davis, Sergio; González, Diego
2015-10-01
Maximization of the path information entropy is a clear prescription for constructing models in non-equilibrium statistical mechanics. Here it is shown that, following this prescription under the assumption of arbitrary instantaneous constraints on position and velocity, a Lagrangian emerges which determines the most probable trajectory. Deviations from the probability maximum can be consistently described as slices in time by a Hamiltonian, according to a nonlinear Langevin equation and its associated Fokker-Planck equation. The connections unveiled between the maximization of path entropy and the Langevin/Fokker-Planck equations imply that missing information about the phase space coordinate never decreases in time, a purely information-theoretical version of the second law of thermodynamics. All of these results are independent of any physical assumptions, and thus valid for any generalized coordinate as a function of time, or any other parameter. This reinforces the view that the second law is a fundamental property of plausible inference.
Predicting Contextual Sequences via Submodular Function Maximization
Dey, Debadeepta; Hebert, Martial; Bagnell, J Andrew
2012-01-01
Sequence optimization, where the items in a list are ordered to maximize some reward, has many applications such as web advertisement placement, search, and control libraries in robotics. Previous work in sequence optimization produces a static ordering that does not take any features of the item or context of the problem into account. In this work, we propose a general approach to order the items within the sequence based on the context (e.g., perceptual information, environment description, and goals). We take a simple, efficient, reduction-based approach where the choice and order of the items is established by repeatedly learning simple classifiers or regressors for each "slot" in the sequence. Our approach leverages recent work on submodular function maximization to provide a formal regret reduction from submodular sequence optimization to simple cost-sensitive prediction. We apply our contextual sequence prediction algorithm to optimize control libraries and demonstrate results on two robotics problems: ...
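The submodular machinery this work leverages can be illustrated with the classic greedy rule for monotone submodular maximization (a generic textbook sketch, not the paper's contextual algorithm): repeatedly pick the item with the largest marginal gain, which for coverage-style objectives achieves a (1 - 1/e) approximation.

```python
def greedy_max_coverage(sets, k):
    """Greedy maximization of a (submodular) coverage function.

    sets: dict mapping item name -> set of elements it covers.
    Picks k items, each time taking the largest marginal gain over
    what is already covered.
    """
    chosen, covered = [], set()
    remaining = dict(sets)
    for _ in range(min(k, len(remaining))):
        name = max(remaining, key=lambda s: len(remaining[s] - covered))
        chosen.append(name)
        covered |= remaining.pop(name)
    return chosen, covered

# Hypothetical items: "c" is picked first (4 new elements), then "a" (3 new).
chosen, covered = greedy_max_coverage(
    {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5, 6, 7}}, k=2
)
```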
Nonlinear trading models through Sharpe Ratio maximization.
Choey, M; Weigend, A S
1997-08-01
While many trading strategies are based on price prediction, traders in financial markets are typically interested in optimizing risk-adjusted performance such as the Sharpe Ratio, rather than the price predictions themselves. This paper introduces an approach which generates a nonlinear strategy that explicitly maximizes the Sharpe Ratio. It is expressed as a neural network model whose output is the position size between a risky and a risk-free asset. The iterative parameter update rules are derived and compared to alternative approaches. The resulting trading strategy is evaluated and analyzed on both computer-generated data and real world data (DAX, the daily German equity index). Trading based on Sharpe Ratio maximization compares favorably to both profit optimization and probability matching (through cross-entropy optimization). The results show that the goal of optimizing out-of-sample risk-adjusted profit can indeed be achieved with this nonlinear approach.
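For reference, the objective being maximized is straightforward to state (a minimal sketch; the paper optimizes it through a neural network whose output is the position size, which is not reproduced here):

```python
import numpy as np

def sharpe_ratio(strategy_returns, risk_free=0.0):
    """Mean excess return divided by its standard deviation.

    Annualization and transaction costs are omitted for brevity; the
    network in the paper is trained by gradient ascent on this quantity.
    """
    excess = np.asarray(strategy_returns, dtype=float) - risk_free
    return excess.mean() / excess.std()

r = sharpe_ratio([0.01, 0.03])  # mean 0.02, population std 0.01 -> 2.0
```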
Maximally Symmetric Spacetimes emerging from thermodynamic fluctuations
Bravetti, A; Quevedo, H
2015-01-01
In this work we prove that the maximally symmetric vacuum solutions of General Relativity emerge from the geometric structure of statistical mechanics and thermodynamic fluctuation theory. To present our argument, we begin by showing that the pseudo-Riemannian structure of the Thermodynamic Phase Space is a solution to the vacuum Einstein-Gauss-Bonnet theory of gravity with a cosmological constant. Then, we use the geometry of equilibrium thermodynamics to demonstrate that the maximally symmetric vacuum solutions of Einstein's Field Equations -- Minkowski, de-Sitter and Anti-de-Sitter spacetimes -- correspond to thermodynamic fluctuations. Moreover, we argue that these might be the only possible solutions that can be derived in this manner. Thus, the results presented here are the first concrete examples of spacetimes effectively emerging from the thermodynamic limit over an unspecified microscopic theory without any further assumptions.
Consistent 4-form fluxes for maximal supergravity
Godazgar, Hadi; Krueger, Olaf; Nicolai, Hermann
2015-01-01
We derive new ansaetze for the 4-form field strength of D=11 supergravity corresponding to uplifts of four-dimensional maximal gauged supergravity. In particular, the ansaetze directly yield the components of the 4-form field strength in terms of the scalars and vectors of the four-dimensional maximal gauged supergravity---in this way they provide an explicit uplift of all four-dimensional consistent truncations of D=11 supergravity. The new ansaetze provide a substantially simpler method for uplifting d=4 flows compared to the previously available method using the 3-form and 6-form potential ansaetze. The ansatz for the Freund-Rubin term allows us to conjecture a `master formula' for the latter in terms of the scalar potential of d=4 gauged supergravity and its first derivative. We also resolve a long-standing puzzle concerning the antisymmetry of the flux obtained from uplift ansaetze.
Modularity maximization using completely positive programming
Yazdanparast, Sakineh; Havens, Timothy C.
2017-04-01
Community detection is one of the most prominent problems of social network analysis. In this paper, a novel method for Modularity Maximization (MM) for community detection is presented which exploits the Alternating Direction Augmented Lagrangian (ADAL) method for maximizing a generalized form of Newman's modularity function. We first transform Newman's modularity function into a quadratic program and then use Completely Positive Programming (CPP) to map the quadratic program to a linear program, which provides the globally optimal maximum modularity partition. In order to solve the proposed CPP problem, a closed-form solution using the ADAL merged with a rank minimization approach is proposed. The performance of the proposed method is evaluated on several real-world benchmark data sets for community detection. Simulation results show that the proposed technique provides outstanding results in terms of modularity value for crisp partitions.
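Newman's modularity, the quantity being maximized, can be stated compactly. A plain-numpy sketch for evaluating a given partition (the paper's contribution is the CPP/ADAL optimization, not this evaluation step):

```python
import numpy as np

def modularity(A, labels):
    """Newman's modularity Q of a partition.

    Q = (1/2m) * sum_ij [A_ij - k_i*k_j/(2m)] * delta(c_i, c_j),
    with A a symmetric adjacency matrix and labels the community
    assignment of each node.
    """
    A = np.asarray(A, dtype=float)
    labels = np.asarray(labels)
    k = A.sum(axis=1)        # node degrees
    two_m = A.sum()          # 2 * number of edges
    same = labels[:, None] == labels[None, :]
    return float(((A - np.outer(k, k) / two_m) * same).sum() / two_m)

# Two triangles joined by a single edge, split into their natural communities.
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
Q = modularity(A, [0, 0, 0, 1, 1, 1])  # 5/14, about 0.357
```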
Utility maximization in incomplete markets with default
Lim, Thomas
2008-01-01
We address the maximization problem of expected utility from terminal wealth. The special feature of this paper is that we consider a financial market where the price process of risky assets can have a default time. Using dynamic programming, we characterize the value function with a backward stochastic differential equation and the optimal portfolio policies. We separately treat the cases of exponential, power and logarithmic utility.
Operational Modal Analysis using Expectation Maximization Algorithm
Cara Cañas, Francisco Javier; Carpio Huertas, Jaime; Juan Ruiz, Jesús; Alarcón Álvarez, Enrique
2011-01-01
This paper presents a time-domain stochastic system identification method based on Maximum Likelihood Estimation and the Expectation Maximization algorithm. The effectiveness of this structural identification method is evaluated through numerical simulation in the context of the ASCE benchmark problem on structural health monitoring. Modal parameters (eigenfrequencies, damping ratios and mode shapes) of the benchmark structure have been estimated applying the proposed identification method...
Revenue Maximizing Head Starts in Contests
Franke, Jörg; Leininger, Wolfgang; Wasser, Cédric
2014-01-01
We characterize revenue maximizing head starts for all-pay auctions and lottery contests with many heterogeneous players. We show that under optimal head starts all-pay auctions revenue-dominate lottery contests for any degree of heterogeneity among players. Moreover, all-pay auctions with optimal head starts induce higher revenue than any multiplicatively biased all-pay auction or lottery contest. While head starts are more effective than multiplicative biases in all-pay auctions, they are l...
Approximate Revenue Maximization in Interdependent Value Settings
Chawla, Shuchi; Fu, Hu; Karlin, Anna
2014-01-01
We study revenue maximization in settings where agents' values are interdependent: each agent receives a signal drawn from a correlated distribution and agents' values are functions of all of the signals. We introduce a variant of the generalized VCG auction with reserve prices and random admission, and show that this auction gives a constant approximation to the optimal expected revenue in matroid environments. Our results do not require any assumptions on the signal distributions, however, ...
Maximal supersymmetry and B-mode targets
Kallosh, Renata; Linde, Andrei; Wrase, Timm; Yamada, Yusuke
2017-04-01
Extending the work of Ferrara and one of the authors [1], we present dynamical cosmological models of α-attractors with plateau potentials for 3α = 1, 2, 3, 4, 5, 6, 7. These models are motivated by geometric properties of maximally supersymmetric theories: M-theory, superstring theory, and maximal N = 8 supergravity. After a consistent truncation of maximal to minimal supersymmetry in a seven-disk geometry, we perform a two-step procedure: 1) we introduce a superpotential, which stabilizes the moduli of the seven-disk geometry in a supersymmetric minimum, 2) we add a cosmological sector with a nilpotent stabilizer, which breaks supersymmetry spontaneously and leads to a desirable class of cosmological attractor models. These models, with n_s consistent with observational data and with tensor-to-scalar ratio r ≈ 10^-2 to 10^-3, provide natural targets for future B-mode searches. We relate the issue of stability of inflationary trajectories in these models to tessellations of a hyperbolic geometry.
Maximal respiratory pressures among adolescent swimmers.
Rocha Crispino Santos, M A; Pinto, M L; Couto Sant'Anna, C; Bernhoeft, M
2011-01-01
Maximal inspiratory pressures (MIP) and maximal expiratory pressures (MEP) are useful indices of respiratory muscle strength in athletes. The aims of this study were: to describe the strength of the respiratory muscles of an Olympic junior swim team, at baseline and after a standard physical training; and to determine if there is a differential inspiratory and expiratory pressure response to the physical training. A cross-sectional study evaluated 28 international-level swimmers with ages ranging from 15 to 17 years, 19 (61 %) being males. At baseline, MIP was found to be lower in females (P = .001). The mean values reached by males and females were: MIP (cmH2O) = M: 100.4 (± 26.5)/F: 67.8 (± 23.2); MEP (cmH2O) = M: 87.4 (± 20.7)/F: 73.9 (± 17.3). After the physical training they reached: MIP (cmH2O) = M: 95.3 (± 30.3)/F: 71.8 (± 35.6); MEP (cmH2O) = M: 82.8 (± 26.2)/F: 70.4 (± 8.3). No differential pressure responses were observed in either males or females. These results suggest that swimmers can sustain the magnitude of the initial maximal pressures. Other studies should be developed to clarify if MIP and MEP could be used as a marker of an athlete's performance.
Voss, C. I.; Soliman, S. M.; Aggarwal, P. K.
2013-12-01
Important information for management of large aquifer systems can be obtained via a parsimonious approach to groundwater modeling, in part, employing isotope-interpreted groundwater ages. 'Parsimonious' modeling implies active avoidance of overly-complex representations when constructing models. This approach is essential for evaluation of aquifer systems that lack informative hydrogeologic databases. Even in the most remote aquifers, despite lack of typical data, groundwater ages can be interpreted from isotope samples at only a few downstream locations. These samples incorporate hydrogeologic information from the entire upstream groundwater flowpath; thus, interpreted ages are among the most effective information sources for groundwater model development. This approach is applied to the world's largest non-renewable aquifer, the transboundary Nubian Aquifer System (NAS) of Chad, Egypt, Libya and Sudan. In the NAS countries, water availability is a critical problem and NAS can reliably serve as a water supply for an extended future period. However, there are national concerns about transboundary impacts of water use by neighbors. These concerns include excessive depletion of shared groundwater by individual countries and the spread of water-table drawdown across borders, where neighboring country near-border shallow wells and oases may dry. Development of a parsimonious groundwater flow model, based on limited available NAS hydrogeologic data and on 81Kr groundwater ages below oases in Egypt, is a key step in providing a technical basis for international discussion concerning management of this non-renewable water resource. Simply-structured model analyses, undertaken as part of an IAEA/UNDP/GEF project, show that although the main transboundary issue is indeed drawdown crossing national boundaries, given the large scale of NAS and its plausible ranges of aquifer parameter values, the magnitude of transboundary drawdown will likely be small and may not be a
On the neighbourhoods of trees
Humphries, Peter J
2012-01-01
Tree rearrangement operations typically induce a metric on the space of phylogenetic trees. One important property of these metrics is the size of the neighbourhood, that is, the number of trees exactly one operation from a given tree. We present an expression for the size of the TBR (tree bisection and reconnection) neighbourhood, thus answering a question first posed in [Annals of Combinatorics, 5 (2001), 1-15].
Cardiorespiratory Coordination in Repeated Maximal Exercise
Sergi Garcia-Retortillo
2017-06-01
Increases in cardiorespiratory coordination (CRC) after training with no differences in performance and physiological variables have recently been reported using a principal component analysis approach. However, no research has yet evaluated the short-term effects of exercise on CRC. The aim of this study was to delineate the behavior of CRC under different physiological initial conditions produced by repeated maximal exercises. Fifteen participants performed 2 consecutive graded and maximal cycling tests. Test 1 was performed without any previous exercise, and Test 2 6 min after Test 1. Both tests started at 0 W and the workload was increased by 25 W/min in males and 20 W/min in females, until they were not able to maintain the prescribed cycling frequency of 70 rpm for more than 5 consecutive seconds. A principal component (PC) analysis of selected cardiovascular and cardiorespiratory variables (expired fraction of O2, expired fraction of CO2, ventilation, systolic blood pressure, diastolic blood pressure, and heart rate) was performed to evaluate the CRC defined by the number of PCs in both tests. In order to quantify the degree of coordination, the information entropy was calculated and the eigenvalues of the first PC (PC1) were compared between tests. Although no significant differences were found between the tests with respect to the performed maximal workload (Wmax), maximal oxygen consumption (VO2 max), or ventilatory threshold (VT), an increase in the number of PCs and/or a decrease of eigenvalues of PC1 (t = 2.95; p = 0.01; d = 1.08) was found in Test 2 compared to Test 1. Moreover, entropy was significantly higher (Z = 2.33; p = 0.02; d = 1.43) in the last test. In conclusion, despite the fact that no significant differences were observed in the conventionally explored maximal performance and physiological variables (Wmax, VO2 max, and VT) between tests, a reduction of CRC was observed in Test 2. These results emphasize the interest of CRC
Cardiorespiratory Coordination in Repeated Maximal Exercise.
Garcia-Retortillo, Sergi; Javierre, Casimiro; Hristovski, Robert; Ventura, Josep L; Balagué, Natàlia
2017-01-01
Increases in cardiorespiratory coordination (CRC) after training with no differences in performance and physiological variables have recently been reported using a principal component analysis approach. However, no research has yet evaluated the short-term effects of exercise on CRC. The aim of this study was to delineate the behavior of CRC under different physiological initial conditions produced by repeated maximal exercises. Fifteen participants performed 2 consecutive graded and maximal cycling tests. Test 1 was performed without any previous exercise, and Test 2 6 min after Test 1. Both tests started at 0 W and the workload was increased by 25 W/min in males and 20 W/min in females, until they were not able to maintain the prescribed cycling frequency of 70 rpm for more than 5 consecutive seconds. A principal component (PC) analysis of selected cardiovascular and cardiorespiratory variables (expired fraction of O2, expired fraction of CO2, ventilation, systolic blood pressure, diastolic blood pressure, and heart rate) was performed to evaluate the CRC defined by the number of PCs in both tests. In order to quantify the degree of coordination, the information entropy was calculated and the eigenvalues of the first PC (PC1) were compared between tests. Although no significant differences were found between the tests with respect to the performed maximal workload (Wmax), maximal oxygen consumption (VO2 max), or ventilatory threshold (VT), an increase in the number of PCs and/or a decrease of eigenvalues of PC1 (t = 2.95; p = 0.01; d = 1.08) was found in Test 2 compared to Test 1. Moreover, entropy was significantly higher (Z = 2.33; p = 0.02; d = 1.43) in the last test. In conclusion, despite the fact that no significant differences were observed in the conventionally explored maximal performance and physiological variables (Wmax, VO2 max, and VT) between tests, a reduction of CRC was observed in Test 2. These results emphasize the interest of CRC evaluation in
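One way to read the quantification step in this abstract (an assumption on my part; the exact entropy definition used in the study may differ) is: compute the eigenvalue spectrum of the variables' correlation matrix and take the Shannon entropy of the normalized spectrum, so that variance concentrated in few PCs gives low entropy and variance spread across many PCs gives high entropy.

```python
import numpy as np

def pc_spectrum_entropy(X):
    """Shannon entropy of the normalized PCA eigenvalue spectrum.

    X: samples x variables. Low entropy = variance concentrated in few
    PCs (high coordination); high entropy = variance spread out.
    Hypothetical reading of the method, not the study's exact code.
    """
    C = np.corrcoef(X, rowvar=False)
    eigvals = np.linalg.eigvalsh(C)
    p = eigvals / eigvals.sum()
    p = p[p > 1e-12]  # drop zero (or numerically tiny negative) eigenvalues
    return float(-(p * np.log(p)).sum())

t = np.arange(4.0)
coordinated = np.column_stack([t, 2.0 * t])                  # perfectly correlated
independent = np.column_stack([t, [1.0, -1.0, -1.0, 1.0]])   # uncorrelated columns
```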
Cost and Benefit Tradeoffs in Using a Shade Tree for Residential Building Energy Saving
Sappinandana Akamphon
2014-01-01
Global warming and urban heat islands result in increased cooling energy consumption in buildings. Previous literature shows that planting trees to shade a building can reduce its cooling load. This work proposes a model to determine the cost effectiveness and profitability of planting a shade tree by considering both its potential to reduce cooling energy and its purchase and maintenance cost. A comparison between six selected tree species is used for illustration. Using growth rates, crown sizes, and shading coefficients, cooling energy savings from the tree shades are computed using an industrial-standard building energy simulation program, offset by costs of purchase, planting, and maintenance of these trees. The result shows that the most worthwhile tree to plant should have a high shading coefficient and a moderate crown size to maximize shading while keeping maintenance costs manageable.
A. Townsend Peterson
2008-12-01
Parsimony analysis of endemism (PAE) has become a popular analytical approach in efforts to map the biogeography of Mexican biotas. Although attractive, the technique has serious drawbacks that make correct inferences of biogeographic history unlikely, which has been noted amply in the broader literature.
The inference of gene trees with species trees.
Szöllősi, Gergely J; Tannier, Eric; Daubin, Vincent; Boussau, Bastien
2015-01-01
This article reviews the various models that have been used to describe the relationships between gene trees and species trees. Molecular phylogeny has focused mainly on improving models for the reconstruction of gene trees based on sequence alignments. Yet, most phylogeneticists seek to reveal the history of species. Although the histories of genes and species are tightly linked, they are seldom identical, because genes duplicate, are lost or horizontally transferred, and because alleles can coexist in populations for periods that may span several speciation events. Building models describing the relationship between gene and species trees can thus improve the reconstruction of gene trees when a species tree is known, and vice versa. Several approaches have been proposed to solve the problem in one direction or the other, but in general neither gene trees nor species trees are known. Only a few studies have attempted to jointly infer gene trees and species trees. These models account for gene duplication and loss, transfer or incomplete lineage sorting. Some of them consider several types of events together, but none exists currently that considers the full repertoire of processes that generate gene trees along the species tree. Simulations as well as empirical studies on genomic data show that combining gene tree-species tree models with models of sequence evolution improves gene tree reconstruction. In turn, these better gene trees provide a more reliable basis for studying genome evolution or reconstructing ancestral chromosomes and ancestral gene sequences. We predict that gene tree-species tree methods that can deal with genomic data sets will be instrumental to advancing our understanding of genomic evolution.
The Impact of Missing Data on Species Tree Estimation.
Xi, Zhenxiang; Liu, Liang; Davis, Charles C
2016-03-01
Phylogeneticists are increasingly assembling genome-scale data sets that include hundreds of genes to resolve their focal clades. Although these data sets commonly include a moderate to high amount of missing data, there remains no consensus on their impact on species tree estimation. Here, using several simulated and empirical data sets, we assess the effects of missing data on species tree estimation under varying degrees of incomplete lineage sorting (ILS) and gene rate heterogeneity. We demonstrate that concatenation (RAxML), gene-tree-based coalescent (ASTRAL, MP-EST, and STAR), and supertree (matrix representation with parsimony [MRP]) methods perform reliably, so long as missing data are randomly distributed (by gene and/or by species) and a sufficiently large number of genes are sampled. When data sets are indecisive sensu Sanderson et al. (2010. Phylogenomics with incomplete taxon coverage: the limits to inference. BMC Evol Biol. 10:155) and/or ILS is high, however, high amounts of missing data that are randomly distributed require exhaustive levels of gene sampling, likely exceeding most empirical studies to date. Moreover, missing data become especially problematic when they are nonrandomly distributed. We demonstrate that STAR produces inconsistent results when the amount of nonrandom missing data is high, regardless of the degree of ILS and gene rate heterogeneity. Similarly, concatenation methods using maximum likelihood can be misled by nonrandom missing data in the presence of gene rate heterogeneity, which becomes further exacerbated when combined with high ILS. In contrast, ASTRAL, MP-EST, and MRP are more robust under all of these scenarios. These results underscore the importance of understanding the influence of missing data in the phylogenomics era.
Du, Ding-Zhu
2001-01-01
This book is a collection of articles studying various Steiner tree problems with applications in industries, such as the design of electronic circuits, computer networking, telecommunication, and perfect phylogeny. The Steiner tree problem was initiated in the Euclidean plane. Given a set of points in the Euclidean plane, the shortest network interconnecting the points in the set is called the Steiner minimum tree. The Steiner minimum tree may contain some vertices which are not the given points. Those vertices are called Steiner points while the given points are called terminals. The shortest network for three terminals was first studied by Fermat (1601-1665). Fermat proposed the problem of finding a point to minimize the total distance from it to three terminals in the Euclidean plane. The direct generalization is to find a point to minimize the total distance from it to n terminals, which is still called the Fermat problem today. The Steiner minimum tree problem is an indirect generalization. Sch...
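The Fermat problem stated above has a classical numerical solution, Weiszfeld's algorithm, which repeatedly replaces the current estimate by a distance-weighted average of the terminals. A minimal sketch (the function name, starting point, and stopping rule are our own, not from the book):

```python
import math

def fermat_point(points, iters=1000, eps=1e-12):
    """Approximate the point minimizing total Euclidean distance to the
    terminals, via Weiszfeld's fixed-point iteration."""
    # start at the centroid of the terminals
    x = sum(p[0] for p in points) / len(points)
    y = sum(p[1] for p in points) / len(points)
    for _ in range(iters):
        wx = wy = w = 0.0
        for px, py in points:
            d = math.hypot(x - px, y - py)
            if d < eps:            # iterate coincides with a terminal
                return x, y
            wx += px / d
            wy += py / d
            w += 1.0 / d
        x, y = wx / w, wy / w
    return x, y

# For an equilateral triangle the Fermat point is the centroid.
pts = [(0.0, 0.0), (1.0, 0.0), (0.5, math.sqrt(3) / 2)]
fx, fy = fermat_point(pts)
```

When one terminal angle is 120° or more, the minimizer is that terminal itself, which the terminal-coincidence check above accommodates.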
Bose, Prosenjit; Douieb, Karim; Dujmovic, Vida; King, James; Morin, Pat
2010-01-01
Let P : R^d -> A be a query problem over R^d for which there exists a data structure S that can compute P(q) in O(log n) time for any query point q in R^d. Let D be a probability measure over R^d representing a distribution of queries. We describe a data structure called the odds-on tree, of size O(n^\epsilon), that can be used as a filter that quickly computes P(q) for some query values q in R^d and relies on S for the remaining queries. With an odds-on tree, the expected query time for a point drawn according to D is O(H*+1), where H* is a lower-bound on the expected cost of any linear decision tree that solves P. Odds-on trees have a number of applications, including distribution-sensitive data structures for point location in 2-d, point-in-polytope testing in d dimensions, ray shooting in simple polygons, ray shooting in polytopes, nearest-neighbour queries in R^d, point-location in arrangements of hyperplanes in R^d, and many other geometric searching problems that can be solved in the linear-decision tree mo...
Zhu, Ruoqing; Zeng, Donglin; Kosorok, Michael R
In this paper, we introduce a new type of tree-based method, reinforcement learning trees (RLT), which exhibits significantly improved performance over traditional methods such as random forests (Breiman, 2001) under high-dimensional settings. The innovations are three-fold. First, the new method implements reinforcement learning at each selection of a splitting variable during the tree construction processes. By splitting on the variable that brings the greatest future improvement in later splits, rather than choosing the one with largest marginal effect from the immediate split, the constructed tree utilizes the available samples in a more efficient way. Moreover, such an approach enables linear combination cuts at little extra computational cost. Second, we propose a variable muting procedure that progressively eliminates noise variables during the construction of each individual tree. The muting procedure also takes advantage of reinforcement learning and prevents noise variables from being considered in the search for splitting rules, so that towards terminal nodes, where the sample size is small, the splitting rules are still constructed from only strong variables. Last, we investigate asymptotic properties of the proposed method under basic assumptions and discuss rationale in general settings.
Berestovskii, V N
2007-01-01
We show that every inner metric space X is the metric quotient of a complete R-tree via a free isometric action, which we call the covering R-tree of X. The quotient mapping is a weak submetry (hence, open) and light. In the case of a compact 1-dimensional geodesic space X, the free isometric action is via a subgroup of the fundamental group of X. In particular, the Sierpiński gasket and carpet, and the Menger sponge all have the same covering R-tree, which is complete and has at each point valency equal to the continuum. This latter R-tree is of particular interest because it is "universal" in at least two senses: First, every R-tree of valency at most the continuum can be isometrically embedded in it. Second, every Peano continuum is the image of it via an open light mapping. We provide a sketch of our previous construction of the uniform universal cover in the special case of inner metric spaces, the properties of which are used in the proof.
Tree-level split helicity amplitudes in ambitwistor space
Chen, Bin; Wu, Jun-Bao
2009-12-01
We study all tree-level split helicity gluon amplitudes by using the recently proposed Britto-Cachazo-Feng-Witten recursion relation and Hodges diagrams in ambitwistor space. We pick out the contributing diagrams and find that all of them can be divided into triangles in a suitable way. We give the explicit expressions for all of these amplitudes. As an example, we reproduce the six-gluon split next-to-maximally-helicity-violating amplitudes in momentum space.
Alexander Safatli
2015-06-01
Summary. Pylogeny is a cross-platform library for the Python programming language that provides an object-oriented application programming interface for phylogenetic heuristic searches. Its primary function is to permit both heuristic search and analysis of the phylogenetic tree search space, as well as to enable the design of novel algorithms to search this space. To this end, the framework supports the structural manipulation of phylogenetic trees, in particular using rearrangement operators such as NNI, SPR, and TBR, the scoring of trees using parsimony and likelihood methods, the construction of a tree search space graph, and the programmatic execution of a few existing heuristic programs. The library supports a range of common phylogenetic file formats and can be used for both nucleotide and protein data. Furthermore, it is also capable of supporting GPU likelihood calculation on nucleotide character data through the BEAGLE library. Availability. Existing development and source code are available for contribution and for download by the public from GitHub (http://github.com/AlexSafatli/Pylogeny). A stable release of this framework is available for download through PyPi (Python Package Index) at http://pypi.python.org/pypi/pylogeny.
Fast algorithm for the reconciliation of gene trees and LGT networks.
Scornavacca, Celine; Mayol, Joan Carles Pons; Cardona, Gabriel
2017-04-07
In phylogenomics, reconciliations aim at explaining the discrepancies between the evolutionary histories of genes and species. Several reconciliation models are available when the evolution of the species of interest is modelled via phylogenetic trees; the most commonly used are the DL model, accounting for duplications and losses in gene evolution and yielding polynomially-solvable problems, and the DTL model, which also accounts for gene transfers and implies NP-hard problems. However, when dealing with non-tree-like evolutionary events such as hybridisations, phylogenetic networks - and not phylogenetic trees - should be used to model species evolution. Reconciliation models involving phylogenetic networks are still in their early days. In this paper, we propose a new reconciliation model in which the evolution of species is modelled by a special kind of phylogenetic networks - the LGT networks. Our model considers duplications, losses and transfers of genes, but restricts transfers to happen through some specific arcs of the network, called secondary arcs. Moreover, we provide a polynomial algorithm to compute the most parsimonious reconciliation between a gene tree and an LGT network under this model. Our method, when combined with quartet decomposition methods to detect putative "highways" of transfers, makes it possible to refine their analyses by examining the two possible directions of a highway and even considering combinations of highways.
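As background to the DL model mentioned above, the classical tree-to-tree duplication count can be computed with the standard LCA mapping. The sketch below covers only that tree/tree case, not the paper's LGT-network algorithm, and the toy trees and name mapping are invented for illustration:

```python
def postorder(t):
    if isinstance(t, tuple):
        yield from postorder(t[0])
        yield from postorder(t[1])
    yield t

def leaves(t):
    return [x for x in postorder(t) if not isinstance(x, tuple)]

def count_duplications(gene_tree, species_tree, species_of):
    """LCA-mapping reconciliation: map each gene-tree node to the smallest
    species clade covering its species; an internal node is a duplication
    when it maps to the same clade as one of its children."""
    clades = {frozenset(leaves(s)) for s in postorder(species_tree)}
    def m(g):
        spp = frozenset(species_of[l] for l in leaves(g))
        return min((c for c in clades if spp <= c), key=len)
    return sum(1 for g in postorder(gene_tree)
               if isinstance(g, tuple) and m(g) in (m(g[0]), m(g[1])))

# Two gene copies from each of species a and b: one duplication at the root.
gene_tree = (('a1', 'b1'), ('a2', 'b2'))
species_tree = ('a', 'b')
species_of = {'a1': 'a', 'b1': 'b', 'a2': 'a', 'b2': 'b'}
n_dup = count_duplications(gene_tree, species_tree, species_of)
```

Losses can be counted from the same mapping by measuring how far each gene edge "skips" down the species tree; they are omitted here for brevity.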
Chemical classification of cattle. 2. Phylogenetic tree and specific status of the Zebu.
Manwell, C; Baker, C M
1980-01-01
Phylogenetic trees for the ten major breed groups of cattle were constructed by Farris's (1972) maximum parsimony method and Fitch & Margoliash's (1967) method, which averages out the deviation over the entire assemblage. Both techniques yield essentially identical trees. The phylogenetic tree for the ten major cattle breed groups can be superimposed on a map of Europe and western Asia, the root of the tree being close to the 'fertile crescent' in Asia Minor, believed to be a primary centre of bovine domestication. For some but not all protein variants there is a cline of gene frequencies as one proceeds from the British Isles and northwest Europe towards southeast Europe and Asia Minor, with the most extreme gene frequencies in the Zebu breeds of India. It is not clear to what extent the observed clines are primary or secondary, i.e., consequent to the initial migrations of cattle towards the end of the Pleistocene or consequent to the many migrations of man with his domesticated cattle. Such clines as exist are not in themselves sufficient to prove either selection versus genetic drift or to establish taxonomic ranking. Contrary to some suggestions in the literature, the biochemical evidence supports Linnaeus's original conclusions: Bos taurus and Bos indicus are distinct species.
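For readers unfamiliar with parsimony methods, the scoring step that a maximum-parsimony tree search repeats for each candidate tree is Fitch's small-parsimony count of state changes on a fixed tree. A minimal single-character sketch (the taxon names and states are invented for illustration, not the study's data):

```python
def fitch_score(tree, states):
    """Fitch small parsimony: minimum number of character-state changes
    needed on a fixed binary tree. Leaves are taxon names; `states` maps
    each taxon to its observed state."""
    def rec(node):
        if not isinstance(node, tuple):          # leaf
            return {states[node]}, 0
        (ls, lc), (rs, rc) = rec(node[0]), rec(node[1])
        inter = ls & rs
        if inter:                                # states agree: no change
            return inter, lc + rc
        return ls | rs, lc + rc + 1              # disagreement: one change

    return rec(tree)[1]

# Toy example: two changes are required on this tree.
score = fitch_score((('cow', 'zebu'), ('bison', 'yak')),
                    {'cow': 'A', 'zebu': 'A', 'bison': 'C', 'yak': 'G'})
```

A full parsimony analysis sums this score over all characters and searches for the tree minimizing the total.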
Visualization of Uncertain Contour Trees
Kraus, Martin
2010-01-01
Contour trees can represent the topology of large volume data sets in a relatively compact, discrete data structure. However, the resulting trees often contain many thousands of nodes; thus, many graph drawing techniques fail to produce satisfactory results. Therefore, several visualization methods were proposed recently for the visualization of contour trees. Unfortunately, none of these techniques is able to handle uncertain contour trees although any uncertainty of the volume data inevitably results in partially uncertain contour trees. In this work, we visualize uncertain contour trees by combining the contour trees of two morphologically filtered versions of a volume data set, which represent the range of uncertainty. These two contour trees are combined and visualized within a single image such that a range of potential contour trees is represented by the resulting visualization. Thus...
Postactivation Potentiation Biases Maximal Isometric Strength Assessment
Leonardo Coelho Rabello Lima
2014-01-01
Postactivation potentiation (PAP) is known to enhance force production. Maximal isometric strength assessment protocols usually consist of two or more maximal voluntary isometric contractions (MVCs). The objective of this study was to determine if PAP would influence isometric strength assessment. Healthy male volunteers (n=23) performed two five-second MVCs separated by a 180-second interval. Changes in isometric peak torque (IPT), time to achieve it (tPTI), contractile impulse (CI), root mean square of the electromyographic signal during PTI (RMS), and rate of torque development (RTD), in different intervals, were measured. Significant increases in IPT (240.6 ± 55.7 N·m versus 248.9 ± 55.1 N·m), RTD (746 ± 152 N·m·s−1 versus 727 ± 158 N·m·s−1), and RMS (59.1 ± 12.2% RMSMAX versus 54.8 ± 9.4% RMSMAX) were found on the second MVC. tPTI decreased significantly on the second MVC (2373 ± 1200 ms versus 2784 ± 1226 ms). We conclude that a first MVC leads to PAP that elicits significant enhancements in strength-related variables of a second MVC performed 180 seconds later. If disregarded, this phenomenon might bias maximal isometric strength assessment, overestimating some of these variables.
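The strength variables above (IPT, tPTI, RTD) can be computed from a torque-time series in a few lines. The sketch below uses synthetic data and a 200 ms RTD window; both the window and the signal are our own assumptions, not the study's exact processing:

```python
def isometric_metrics(time_ms, torque, win_ms=200):
    """Peak torque (N.m), time to peak torque (ms), and rate of torque
    development (N.m/s) over the first `win_ms` ms of the contraction."""
    ipt = max(torque)
    tpti = time_ms[torque.index(ipt)]            # first time peak is reached
    # RTD: slope of torque over the early window
    i_end = max(i for i, t in enumerate(time_ms) if t <= win_ms)
    rtd = (torque[i_end] - torque[0]) / ((time_ms[i_end] - time_ms[0]) / 1000.0)
    return ipt, tpti, rtd

# Synthetic 5 ms sampling: linear rise to 240 N.m at 2000 ms, then a plateau.
time_ms = list(range(0, 5000, 5))
torque = [min(240.0, 0.12 * t) for t in time_ms]
ipt, tpti, rtd = isometric_metrics(time_ms, torque)
```

Real signals would first be filtered and baseline-corrected; RTD is also commonly reported over several windows (e.g. 0-50, 0-100, 0-200 ms).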
Forrow, Aden; Dunkel, Jörn
2016-01-01
Coherent, large scale dynamics in many nonequilibrium physical, biological, or information transport networks are driven by small-scale local energy input. We introduce and explore a generic model for compressible active flows on tree networks. In contrast to thermally-driven systems, active friction selects discrete states with only a small number of oscillation modes activated at distinct fixed amplitudes. This state selection interacts with graph topology to produce different localized dynamical time scales in separate regions of large networks. Using perturbation theory, we systematically predict the stationary states of noisy networks and find good agreement with a Bayesian state estimation based on a hidden Markov model applied to simulated time series data on binary trees. While the number of stable states per tree scales exponentially with the number of edges, the mean number of activated modes in each state averages $\\sim 1/4$ the number of edges. More broadly, these results suggest that the macrosco...
Durhuus, Bergfinnur Jøgvan; Napolitano, George Maria
2012-01-01
The Ising model on a class of infinite random trees is defined as a thermodynamic limit of finite systems. A detailed description of the corresponding distribution of infinite spin configurations is given. As an application, we study the magnetization properties of such systems and prove that they exhibit no spontaneous magnetization. Furthermore, the values of the Hausdorff and spectral dimensions of the underlying trees are calculated and found to be, respectively, d̄_h = 2 and d̄_s = 4/3.
Roux, Kenneth H; Teuber, Suzanne S; Sathe, Shridhar K
2003-08-01
Allergic reactions to tree nuts can be serious and life threatening. Considerable research has been conducted in recent years in an attempt to characterize those allergens that are most responsible for allergy sensitization and triggering. Both native and recombinant nut allergens have been identified and characterized and, for some, the IgE-reactive epitopes described. Some allergens, such as lipid transfer proteins, profilins, and members of the Bet v 1-related family, represent minor constituents in tree nuts. These allergens are frequently cross-reactive with other food and pollen homologues, and are considered panallergens. Others, such as legumins, vicilins, and 2S albumins, represent major seed storage protein constituents of the nuts. The allergenic tree nuts discussed in this review include those most commonly responsible for allergic reactions such as hazelnut, walnut, cashew, and almond as well as those less frequently associated with allergies including pecan, chestnut, Brazil nut, pine nut, macadamia nut, pistachio, coconut, Nangai nut, and acorn.
DNA barcoding: species delimitation in tree peonies
ZHANG JinMei; WANG JianXiu; XIA Tao; ZHOU ShiLiang
2009-01-01
Delimitations of species are crucial for correct and precise identification of taxa. Unfortunately "species" is more a subjective than an objective concept in taxonomic practice due to difficulties in revealing patterns of infra- or inter-specific variations. Molecular phylogenetic studies at the population level solve this problem and lay a sound foundation for DNA barcoding. In this paper we exemplify the necessity of adopting a phylogenetic concept of species in DNA barcoding for tree peonies (Paeonia sect. Moutan). We used 40 samples representing all known populations of rare and endangered species and several populations of widely distributed tree peonies. All currently recognized species and major variants have been included in this study. Four chloroplast gene fragments, i.e. ndhF, rps16-trnQ, trnL-F and trnS-G (a total of 5040 characters, 96 variable and 69 parsimony-informative characters) and one variable and single-copy nuclear GPAT gene fragment (2093-2197 bp, 279 variable and 148 parsimony-informative characters) were used to construct phylogenetic relationships among the taxa. The evolutionary lineages revealed by the nuclear gene and the chloroplast genes are inconsistent with the current circumscriptions of P. decomposita, P. jishanensis, P. qiui, and P. rockii based on morphology. The inconsistencies come from (1) significant chloroplast gene divergence but little nuclear GPAT gene divergence among population systems of P. decomposita + P. rockii, and (2) a well-diverged nuclear GPAT gene but little chloroplast gene divergence between P. jishanensis and P. qiui. The incongruence of the phylogenies based on the chloroplast genes and the nuclear GPAT gene is probably due to a chloroplast capture event in evolutionary history, as no reproductive barriers exist to prevent inter-specific hybridization. We also evaluated the suitability of these genes for use as DNA barcodes for tree peonies. The variability of chloroplast genes among well
Maximizing versus satisficing: happiness is a matter of choice.
Schwartz, Barry; Ward, Andrew; Monterosso, John; Lyubomirsky, Sonja; White, Katherine; Lehman, Darrin R
2002-11-01
Can people feel worse off as the options they face increase? The present studies suggest that some people--maximizers--can. Study 1 reported a Maximization Scale, which measures individual differences in desire to maximize. Seven samples revealed negative correlations between maximization and happiness, optimism, self-esteem, and life satisfaction, and positive correlations between maximization and depression, perfectionism, and regret. Study 2 found maximizers less satisfied than nonmaximizers (satisficers) with consumer decisions, and more likely to engage in social comparison. Study 3 found maximizers more adversely affected by upward social comparison. Study 4 found maximizers more sensitive to regret and less satisfied in an ultimatum bargaining game. The interaction between maximizing and choice is discussed in terms of regret, adaptation, and self-blame.
Cycle-maximal triangle-free graphs
Durocher, Stephane; Gunderson, David S.; Li, Pak Ching
2015-01-01
We conjecture that the balanced complete bipartite graph K_{⌊n/2⌋,⌈n/2⌉} contains more cycles than any other n-vertex triangle-free graph, and we make some progress toward proving this. We give equivalent conditions for cycle-maximal triangle-free graphs; show bounds on the numbers of cycles in graphs depending on numbers of vertices and edges, girth, and homomorphisms to small fixed graphs; and use the bounds to show that among regular graphs, the conjecture holds. We also consider graphs that are close to being regular, with the minimum and maximum degrees differing...
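The conjecture can be spot-checked by brute force for tiny n. The sketch below (function and variable names are our own) counts simple cycles and compares two triangle-free graphs on five vertices, where K_{2,3} indeed contains more cycles than the 5-cycle:

```python
from itertools import permutations

def count_cycles(n, edges):
    """Brute-force count of simple cycles: every k-cycle appears as a
    closed ordered k-tuple exactly 2k times (k rotations x 2 directions)."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    total = 0
    for k in range(3, n + 1):
        closed = sum(1 for tup in permutations(range(n), k)
                     if all(tup[i + 1] in adj[tup[i]] for i in range(k - 1))
                     and tup[0] in adj[tup[-1]])
        total += closed // (2 * k)
    return total

# Two triangle-free graphs on n = 5 vertices
c5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]        # the 5-cycle: 1 cycle
k23 = [(a, b) for a in (0, 1) for b in (2, 3, 4)]    # K_{2,3}: 3 four-cycles
n_c5, n_k23 = count_cycles(5, c5), count_cycles(5, k23)
```

The enumeration is exponential in n, so it only serves to sanity-check small cases, not to attack the conjecture.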
ON THE SPACES OF THE MAXIMAL POINTS
梁基华; 刘应明
2003-01-01
For a continuous domain D, some characterization that the convex powerdomain CD is a domain hull of Max(CD) is given in terms of compact subsets of D. And in this case, it is proved that the set of the maximal points Max(CD) of CD with the relative Scott topology is homeomorphic to the set of all Scott compact subsets of Max(D) with the topology induced by the Hausdorff metric derived from a metric on Max(D) when Max(D) is metrizable.
Understanding of English Contracts through Relation Maxims
XU Chi-ying; JIANG Li-hui
2013-01-01
A contract is the legal evidence of the agreement between the business parties concerned, which leads to its unique characteristics: technical terms, archaisms, borrowed words, juxtaposition, and abbreviations. Understanding contracts is of vital importance for each party, because it concerns the sharing of interests. In order to avoid the ambiguity that some words or sentences in English contracts may cause, and to achieve the "best relevance and least effort" of communication, this paper applies the relation maxim to analyze in depth how to understand English contracts through the choice of words, modification, and the complexity and simplicity of sentences.
Maximizing results in reconstruction of cheek defects.
Mureau, Marc A M; Hofer, Stefan O P
2009-07-01
The face is exceedingly important, as it is the medium through which individuals interact with the rest of society. Reconstruction of cheek defects after trauma or surgery is a continuing challenge for surgeons who wish to reliably restore facial function and appearance. Important in aesthetic facial reconstruction are the aesthetic unit principles, by which the face can be divided in central facial units (nose, lips, eyelids) and peripheral facial units (cheeks, forehead, chin). This article summarizes established options for reconstruction of cheek defects and provides an overview of several modifications as well as tips and tricks to avoid complications and maximize aesthetic results.
Maximizing policy learning in international committees
Nedergaard, Peter
2007-01-01
This article demonstrates that valuable lessons can be learned about policy learning, in practice and theoretically, by analysing the cooperation in the OMC committees. Using the Advocacy Coalition Framework as the starting point of analysis, 15 hypotheses on policy learning are tested. Among other things, it is concluded that in order to maximize policy learning in international committees, empirical data should be made available to committees and provided by sources close to the participants (i.e. the Commission). In addition, the work in the committees should be made prestigious in order to attract well...
Springer, Mark S; Gatesy, John
2016-01-01
Higher-level relationships among placental mammals are mostly resolved, but several polytomies remain contentious. Song et al. (2012) claimed to have resolved three of these using shortcut coalescence methods (MP-EST, STAR) and further concluded that these methods, which assume no within-locus recombination, are required to unravel deep-level phylogenetic problems that have stymied concatenation. Here, we reanalyze Song et al.'s (2012) data and leverage these re-analyses to explore key issues in systematics including the recombination ratchet, gene tree stoichiometry, the proportion of gene tree incongruence that results from deep coalescence versus other factors, and simulations that compare the performance of coalescence and concatenation methods in species tree estimation. Song et al. (2012) reported an average locus length of 3.1 kb for the 447 protein-coding genes in their phylogenomic dataset, but the true mean length of these loci (start codon to stop codon) is 139.6 kb. Empirical estimates of recombination breakpoints in primates, coupled with consideration of the recombination ratchet, suggest that individual coalescence genes (c-genes) approach ∼12 bp or less for Song et al.'s (2012) dataset, three to four orders of magnitude shorter than the c-genes reported by these authors. This result has general implications for the application of coalescence methods in species tree estimation. We contend that it is illogical to apply coalescence methods to complete protein-coding sequences. Such analyses amalgamate c-genes with different evolutionary histories (i.e., exons separated by >100,000 bp), distort true gene tree stoichiometry that is required for accurate species tree inference, and contradict the central rationale for applying coalescence methods to difficult phylogenetic problems. In addition, Song et al.'s (2012) dataset of 447 genes includes 21 loci with switched taxonomic names, eight duplicated loci, 26 loci with non-homologous sequences that are
Adaptive Context Tree Weighting
O'Neill, Alexander; Shao, Wen; Sunehag, Peter
2012-01-01
We describe an adaptive context tree weighting (ACTW) algorithm, as an extension to the standard context tree weighting (CTW) algorithm. Unlike the standard CTW algorithm, which weights all observations equally regardless of the depth, ACTW gives increasing weight to more recent observations, aiming to improve performance in cases where the input sequence is from a non-stationary distribution. Data compression results show ACTW variants improving over CTW on merged files from standard compression benchmark tests while never being significantly worse on any individual file.
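The "increasing weight to more recent observations" idea can be illustrated with the leaf predictor that CTW uses, the Krichevsky-Trofimov (KT) estimator, by decaying its symbol counts before each update. The decay scheme below is our own illustrative assumption, not ACTW's exact weighting:

```python
class DiscountedKT:
    """KT estimator whose 0/1 counts are decayed by `gamma` before every
    update, so recent symbols dominate the prediction. gamma = 1 recovers
    the ordinary (stationary) KT estimator."""
    def __init__(self, gamma=0.99):
        self.gamma = gamma
        self.counts = [0.0, 0.0]          # pseudo-counts for bits 0 and 1

    def prob(self, bit):
        a, b = self.counts
        return (self.counts[bit] + 0.5) / (a + b + 1.0)

    def update(self, bit):
        self.counts = [self.gamma * c for c in self.counts]
        self.counts[bit] += 1.0

# Non-stationary source: 50 zeros, then 50 ones.
stream = [0] * 50 + [1] * 50
fast, plain = DiscountedKT(gamma=0.9), DiscountedKT(gamma=1.0)
for bit in stream:
    fast.update(bit)
    plain.update(bit)
p1, p_plain = fast.prob(1), plain.prob(1)
```

After the switch, the discounted estimator is confident in 1s while the stationary KT estimator still predicts 1 with probability 0.5; in CTW proper this predictor sits at every context-tree node and the node predictions are mixed recursively.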
Oflazer, K
1996-01-01
This paper presents an efficient algorithm for retrieving from a database of trees, all trees that match a given query tree approximately, that is, within a certain error tolerance. It has natural language processing applications in searching for matches in example-based translation systems, and retrieval from lexical databases containing entries of complex feature structures. The algorithm has been implemented on SparcStations, and for large randomly generated synthetic tree databases (some having tens of thousands of trees) it can associatively search for trees with a small error in a matter of tenths of a second to a few seconds.
Maximal subbundles, quot schemes, and curve counting
Gillam, W D
2011-01-01
Let $E$ be a rank 2, degree $d$ vector bundle over a genus $g$ curve $C$. The loci of stable pairs on $E$ in class $2[C]$ fixed by the scaling action are expressed as products of $\\Quot$ schemes. Using virtual localization, the stable pairs invariants of $E$ are related to the virtual intersection theory of $\\Quot E$. The latter theory is extensively discussed for an $E$ of arbitrary rank; the tautological ring of $\\Quot E$ is defined and is computed on the locus parameterizing rank one subsheaves. In case $E$ has rank 2, $d$ and $g$ have opposite parity, and $E$ is sufficiently generic, it is known that $E$ has exactly $2^g$ line subbundles of maximal degree. Doubling the zero section along such a subbundle gives a curve in the total space of $E$ in class $2[C]$. We relate this count of maximal subbundles with stable pairs/Donaldson-Thomas theory on the total space of $E$. This endows the residue invariants of $E$ with enumerative significance: they actually \\emph{count} curves in $E$.
Maximal coherence in a generic basis
Yao, Yao; Dong, G. H.; Ge, Li; Li, Mo; Sun, C. P.
2016-12-01
Since quantum coherence is an undoubted characteristic trait of quantum physics, the quantification and application of quantum coherence has been one of the long-standing central topics in quantum information science. Within the framework of a resource theory of quantum coherence proposed recently, a fiducial basis should be preselected for characterizing the quantum coherence in specific circumstances, namely, the quantum coherence is a basis-dependent quantity. Therefore, a natural question is raised: what are the maximum and minimum coherences contained in a certain quantum state with respect to a generic basis? While the minimum case is trivial, it is not so intuitive to verify in which basis the quantum coherence is maximal. Based on the coherence measure of relative entropy, we indicate the particular basis in which the quantum coherence is maximal for a given state, where the Fourier matrix (or more generally, complex Hadamard matrices) plays a critical role in determining the basis. Intriguingly, though we can prove that the basis associated with the Fourier matrix is a stationary point for optimizing the l1 norm of coherence, numerical simulation shows that it is not a global optimal choice.
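The two coherence measures discussed (the l1 norm, and the relative entropy of coherence C(ρ) = S(ρ_diag) − S(ρ)) are straightforward to evaluate numerically for a fixed basis. A sketch for a qubit in the computational basis, where the |+⟩ state attains the maximal values C_l1 = 1 and C_r = 1 bit (function names are our own):

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits, with the convention 0*log(0) = 0."""
    p = p[p > 1e-12]
    return float(-(p * np.log2(p)).sum())

def l1_coherence(rho):
    """Sum of absolute values of the off-diagonal entries."""
    return float(np.abs(rho - np.diag(np.diag(rho))).sum())

def rel_entropy_coherence(rho):
    """C_r(rho) = S(diag(rho)) - S(rho)."""
    return entropy(np.real(np.diag(rho))) - entropy(np.linalg.eigvalsh(rho))

# |+> = (|0> + |1>)/sqrt(2): maximally coherent qubit state in this basis.
plus = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(plus, plus.conj())
c_l1 = l1_coherence(rho)
c_re = rel_entropy_coherence(rho)
```

The basis optimization the abstract discusses corresponds to conjugating ρ by a unitary before applying these measures; the abstract's result is that bases built from Fourier or complex Hadamard matrices play a special role in the maximal case.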
Symmetry and approximability of submodular maximization problems
Vondrak, Jan
2011-01-01
A number of recent results on optimization problems involving submodular functions have made use of the multilinear relaxation of the problem. These results hold typically in the value oracle model, where the objective function is accessible via a black box returning f(S) for a given S. We present a general approach to deriving inapproximability results in the value oracle model, based on the notion of symmetry gap. Our main result is that for any fixed instance that exhibits a certain symmetry gap in its multilinear relaxation, there is a naturally related class of instances for which a better approximation factor than the symmetry gap would require exponentially many oracle queries. This unifies several known hardness results for submodular maximization, and implies several new ones. In particular, we prove that there is no constant-factor approximation for the problem of maximizing a non-negative submodular function over the bases of a matroid. We also provide a closely matching approximation algorithm for...
The Tree Core Problem with Multiple Constraints in a Tree Network
杨建芳; 刘建贞
2012-01-01
Motivated by practical applications, in which the processing capacity of each device in computer and communication networks is generally limited and the operating cost between devices must be controlled, this paper extends the tree core problem to a version with both degree and radius constraints in a tree network, denoted the (q,l)-DTC (Degree-constrained Tree Core) problem. The approach first constructs the set of maximal subtrees and then applies dynamic programming on these maximal subtrees to solve the (q,l)-DTC problem; an optimal solution can be obtained in O(n^2) time.
A Suffix Tree Or Not a Suffix Tree?
Starikovskaya, Tatiana; Vildhøj, Hjalte Wedel
2015-01-01
In particular, we do not require that S ends with a unique symbol. This corresponds to considering the more general definition of implicit or extended suffix trees. Such general suffix trees have many applications and are for example needed to allow efficient updates when suffix trees are built online. We prove...
Tree Modeling with Real Tree-Parts Examples.
Xie, Ke; Yan, Feilong; Sharf, Andrei; Deussen, Oliver; Huang, Hui; Chen, Baoquan
2016-12-01
We introduce a 3D tree modeling technique that utilizes examples of real trees to enhance tree creation with realistic structures and fine-level details. In contrast to previous works that use smooth generalized cylinders to represent tree branches, our method generates realistic looking tree models with complex branching geometry by employing an exemplar database consisting of real-life trees reconstructed from scanned data. These trees are sliced into representative parts (denoted as tree-cuts), representing trunk logs and branching structures. In the modeling process, tree-cuts are positioned in space in an intuitive manner, serving as efficient proxies that guide the creation of the complete tree. Allometry rules are taken into account to ensure reasonable relations between adjacent branches. Realism is further enhanced by automatically transferring geometric textures from our database onto tree branches as well as by guided growing of foliage. Our results demonstrate the complexity and variety of trees that can be generated with our method within a few minutes. We carry out a user study to test the effectiveness of our modeling technique.
Balgooy, van M.M.J.
1998-01-01
With the publication of the second volume of the series ‘Malesian Seed Plants’, entitled ‘Portraits of Tree Families’, I would like to refer to the Introduction of the first volume, ‘Spot-characters’ for a historical background and an explanation of the aims of this series. The present book treats 1
Certified Kruskal's Tree Theorem
Christian Sternagel
2014-07-01
Full Text Available This article presents the first formalization of Kruskal's tree theorem in a proof assistant. The Isabelle/HOL development is along the lines of Nash-Williams' original minimal bad sequence argument for proving the tree theorem. Along the way, proofs of Dickson's lemma and Higman's lemma, as well as some technical details of the formalization, are discussed.
2009-01-01
west of Tiananmen Square in Beijing, in Zhongshan Park, there stand several ancient cypress trees, each more than 1,000 years old. Their leafy crowns are all more than 20 meters high, while four have trunks that are 6 meters in circumference. The most unique of these
Assent, Ira; Krieger, Ralph; Afschari, Farzad;
2008-01-01
Continuous growth in sensor data and other temporal data increases the importance of retrieval and similarity search in time series data. Efficient time series query processing is crucial for interactive applications. Existing multidimensional indexes like the R-tree provide efficient querying fo...
Tree Transduction Tools for Cdec
Austin Matthews
2014-09-01
Full Text Available We describe a collection of open source tools for learning tree-to-string and tree-to-tree transducers and the extensions to the cdec decoder that enable translation with these. Our modular, easy-to-extend tools extract rules from trees or forests aligned to strings and trees subject to different structural constraints. A fast, multithreaded implementation of the Cohn and Blunsom (2009) model for extracting compact tree-to-string rules is also included. The implementation of the tree composition algorithm used by cdec is described, and translation quality and decoding time results are presented. Our experimental results add to the body of evidence suggesting that tree transducers are a compelling option for translation, particularly when decoding speed and translation model size are important.
Tree Formation Using Coordinate Method
Monika Choudhary
2015-06-01
Full Text Available In this paper we introduce a new method of tree formation: a coordinate-based method by which tree structures can be stored and accessed. In NLP, parsing is one of the most important modules, and its output is generally parsed trees. Currently, TAG (Tree Adjoining Grammar) is a widely used grammar due to its linguistic and formal nature; it is simply a tree-generating system whose unit structure is structured trees. We used our new method to store trees in an English-to-Hindi setting, working on different sentences from English to Hindi, and found it an easy way to manipulate trees. We implemented it on a small corpus with a finite number of structures, and it can be extended in the future.
Andersen, Esben Sloth
2002-01-01
The purpose of this paper is to bring forth an interaction between evolutionary economics and industrial systematics. The suggested solution is to reconstruct the "family tree" of the industries. Such a tree is based on similarities, but it may also reflect the evolutionary history in industries...... finding of optimal industrial trees. The results are presented as taxonomic trees that can easily be compared with the hierarchical structure of existing systems of industrial classification....
Maximal lattice free bodies, test sets and the Frobenius problem
Jensen, Anders Nedergaard; Lauritzen, Niels; Roune, Bjarke Hammersholt
Maximal lattice free bodies are maximal polytopes without interior integral points. Scarf initiated the study of maximal lattice free bodies relative to the facet normals in a fixed matrix. In this paper we give an efficient algorithm for computing the maximal lattice free bodies of an integral...... method is inspired by the novel algorithm by Einstein, Lichtblau, Strzebonski and Wagon and the Groebner basis approach by Roune....
Maximizing scientific knowledge from randomized clinical trials
Gustafsson, Finn; Atar, Dan; Pitt, Bertram;
2010-01-01
Trialists have an ethical and financial responsibility to plan and conduct clinical trials in a manner that will maximize the scientific knowledge gained from the trial. However, the amount of scientific information generated by randomized clinical trials in cardiovascular medicine is highly...... variable. Generation of trial databases and/or biobanks originating in large randomized clinical trials has successfully increased the knowledge obtained from those trials. At the 10th Cardiovascular Trialist Workshop, possibilities and pitfalls in designing and accessing clinical trial databases were......, in particular with respect to collaboration with the trial sponsor and to analytic pitfalls. The advantages of creating screening databases in conjunction with a given clinical trial are described; and finally, the potential for posttrial database studies to become a platform for training young scientists...
Characterizing maximally singular phase-space distributions
Sperling, J.
2016-07-01
Phase-space distributions are widely applied in quantum optics to access the nonclassical features of radiation fields. In particular, the inability to interpret the Glauber-Sudarshan distribution in terms of a classical probability density is the fundamental benchmark for quantum light. However, this phase-space distribution cannot be directly reconstructed for arbitrary states, because of its singular behavior. In this work, we perform a characterization of the Glauber-Sudarshan representation in terms of distribution theory. We address important features of such distributions: (i) the maximal degree of their singularities is studied, (ii) the ambiguity of representation is shown, and (iii) their dual space for nonclassicality tests is specified. In this view, we reconsider the methods for regularizing the Glauber-Sudarshan distribution for verifying its nonclassicality. This treatment is supported with comprehensive examples and counterexamples.
Maximization of eigenvalues using topology optimization
Pedersen, Niels Leergaard
2000-01-01
Topology optimization is used to optimize the eigenvalues of plates. The results are intended especially for MicroElectroMechanical Systems (MEMS) but can be seen as more general. The problem is not formulated as a case of reinforcement of an existing structure, so there is a problem related...... to localized modes in low density areas. The topology optimization problem is formulated using the SIMP method. Special attention is paid to a numerical method for removing localized eigenmodes in low density areas. The method is applied to numerical examples of maximizing the first eigenfrequency. One example...... is a practical MEMS application; a probe used in an Atomic Force Microscope (AFM). For the AFM probe the optimization is complicated by a constraint on the stiffness and constraints on higher order eigenvalues....
MAXIMIZING THE BENEFITS OF ERP SYSTEMS
Paulo André da Conceição Menezes
2010-04-01
Full Text Available The ERP (Enterprise Resource Planning systems have been consolidated in companies with different sizes and sectors, allowing their real benefits to be definitively evaluated. In this study, several interactions have been studied in different phases, such as the strategic priorities and strategic planning defined as ERP Strategy; business processes review and the ERP selection in the pre-implementation phase, the project management and ERP adaptation in the implementation phase, as well as the ERP revision and integration efforts in the post-implementation phase. Through rigorous use of case study methodology, this research led to developing and to testing a framework for maximizing the benefits of the ERP systems, and seeks to contribute for the generation of ERP initiatives to optimize their performance.
Reflection Quasilattices and the Maximal Quasilattice
Boyle, Latham
2016-01-01
We introduce the concept of a {\\it reflection quasilattice}, the quasiperiodic generalization of a Bravais lattice with irreducible reflection symmetry. Among their applications, reflection quasilattices are the reciprocal (i.e. Bragg diffraction) lattices for quasicrystals and quasicrystal tilings, such as Penrose tilings, with irreducible reflection symmetry and discrete scale invariance. In a follow-up paper, we will show that reflection quasilattices can be used to generate tilings in real space with properties analogous to those in Penrose tilings, but with different symmetries and in various dimensions. Here we prove that reflection quasilattices only exist in dimensions two, three and four, and we prove that there is a unique reflection quasilattice in dimension four: the "maximal reflection quasilattice" in terms of dimensionality and symmetry. We further show that, unlike crystallographic Bravais lattices, all reflection quasilattices are invariant under rescaling by certain discrete scale factors. W...
Distributed Maximality based CTL Model Checking
Djamel Eddine Saidouni
2010-05-01
Full Text Available In this paper we investigate an approach to performing a distributed CTL model-checking algorithm on a network of workstations using Kleene three-valued logic. The state space is partitioned among the network nodes, and we represent the incomplete state spaces as Maximality-based Labeled Transition Systems (MLTS), which are able to express true concurrency. We execute the same algorithm in parallel on each node for a given property on an incomplete MLTS; each node computes the set of states that satisfy the property, fail it, or receive the third value, unknown, because the partial state space lacks the information needed for a precise answer concerning the complete state space. To solve this problem, the nodes exchange the information needed to conclude the result for the complete state space. An experimental version of the algorithm is currently being implemented in the functional programming language Erlang.
Evolution of correlated multiplexity through stability maximization
Dwivedi, Sanjiv K
2016-01-01
Investigating the relation between various structural patterns found in real-world networks and the stability of the underlying systems is crucial to understanding the importance and evolutionary origin of such patterns. We evolve multiplex networks, comprising anti-symmetric couplings in one layer, depicting a predator-prey relation, and symmetric couplings in the other, depicting a mutualistic (or competitive) relation, based on stability maximization through the largest eigenvalue. We find that correlated multiplexity emerges as evolution progresses. The evolved values of the correlated multiplexity exhibit a dependence on the inter-layer coupling strength. Furthermore, the inter-layer coupling strength governs the evolution of the disassortativity property in the individual layers. We provide analytical understanding of these findings by considering star-like networks in both layers. The model and tools used here are useful for understanding the principles governing the stability as well as importance of such patterns in ...
Witten spinors on maximal, conformally flat hypersurfaces
Frauendiener, Jörg; Szabados, László B
2011-01-01
The boundary conditions that exclude zeros of the solutions of the Witten equation (and hence guarantee the existence of a 3-frame satisfying the so-called special orthonormal frame gauge conditions) are investigated. We determine the general form of the conformally invariant boundary conditions for the Witten equation, and find the boundary conditions that characterize the constant and the conformally constant spinor fields among the solutions of the Witten equations on compact domains in extrinsically and intrinsically flat, and on maximal, intrinsically globally conformally flat spacelike hypersurfaces, respectively. We also provide a number of exact solutions of the Witten equation with various boundary conditions (both at infinity and on inner or outer boundaries) that single out nowhere vanishing spinor fields on the flat, non-extreme Reissner--Nordstr\\"om and Brill--Lindquist data sets. Our examples show that there is an interplay between the boundary conditions, the global topology of the hypersurface...
Greedy Maximal Scheduling in Wireless Networks
Li, Qiao
2010-01-01
In this paper we consider greedy scheduling algorithms in wireless networks, i.e., the schedules are computed by adding links greedily based on some priority vector. Two special cases are considered: 1) Longest Queue First (LQF) scheduling, where the priorities are computed using queue lengths, and 2) Static Priority (SP) scheduling, where the priorities are pre-assigned. We first propose a closed-form lower bound stability region for LQF scheduling, and discuss the tightness result in some scenarios. We then propose a lower bound stability region for SP scheduling with multiple priority vectors, as well as a heuristic priority assignment algorithm, which is related to the well-known Expectation-Maximization (EM) algorithm. The performance gain of the proposed heuristic algorithm is finally confirmed by simulations.
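The LQF rule described above is easy to state concretely. The sketch below is a minimal illustration, not the paper's analysis: links are considered in decreasing queue length and added whenever they do not conflict with an already-scheduled link. The link names and conflict set are hypothetical.

```python
def lqf_schedule(queues, conflicts):
    """Greedy maximal (Longest Queue First) schedule.

    queues:    dict mapping link -> queue length
    conflicts: set of frozensets {a, b} meaning links a and b interfere
    Links are scanned in decreasing queue length; a link is added if it
    has packets and conflicts with no link already in the schedule.
    """
    schedule = set()
    for link in sorted(queues, key=queues.get, reverse=True):
        if queues[link] > 0 and all(
            frozenset((link, other)) not in conflicts for other in schedule
        ):
            schedule.add(link)
    return schedule
```

For example, with queues {'a': 5, 'b': 3, 'c': 4} and a conflicting with b, the schedule is {'a', 'c'}: a is picked first by queue length, b is blocked, and c is independent.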
Dispatch Scheduling to Maximize Exoplanet Detection
Johnson, Samson; McCrady, Nate; MINERVA
2016-01-01
MINERVA is a dedicated exoplanet detection telescope array using radial velocity measurements of nearby stars to detect planets. MINERVA will be a completely robotic facility, with a goal of maximizing the number of exoplanets detected. MINERVA requires a unique application of queue scheduling due to its automated nature and the requirement of high cadence observations. A dispatch scheduling algorithm is employed to create a dynamic and flexible selector of targets to observe, in which stars are chosen by assigning values through a weighting function. I designed and have begun testing a simulation which implements the functions of a dispatch scheduler and records observations based on target selections through the same principles that will be used at the commissioned site. These results will be used in a larger simulation that incorporates weather, planet occurrence statistics, and stellar noise to test the planet detection capabilities of MINERVA. This will be used to heuristically determine an optimal observing strategy for the MINERVA project.
A New Biflavone from Selaginella pulvinata Maxim
XU Kang-Ping; XU Zhi; DENG Yin-Hua; LI Fu-Shuang; ZHOU Ying-Jun; HU Gao-Yun; TAN Gui-Shan
2003-01-01
Selaginella pulvinata Maxim. is distributed all over China and is used for the treatment of haemorrhage. [1] We studied the chemical constituents of S. pulvinata in order to find the active compounds. Dried stems and leaves of S. pulvinata (6.5 kg) were extracted with 70% ethanol twice. The extract was evaporated under vacuum and then suspended in water and extracted with petroleum and EtOAc sequentially. The EtOAc extract was chromatographed on silica gel, eluted with CHCl3-MeOH. As a result, a novel biflavone, named pulvinatabiflavone, was obtained from fractions 75 to 78. Its structure was determined on the basis of spectroscopic analysis as 5,5″,4′″-trihydroxy-7,7″-dimethoxy-[4′-O-6″]-biflavone (compound 1).
Maximal energy extraction under discrete diffusive exchange
Hay, M. J., E-mail: hay@princeton.edu [Department of Astrophysical Sciences, Princeton University, Princeton, New Jersey 08544 (United States); Schiff, J. [Department of Mathematics, Bar-Ilan University, Ramat Gan 52900 (Israel); Fisch, N. J. [Department of Astrophysical Sciences, Princeton University, Princeton, New Jersey 08544 (United States); Princeton Plasma Physics Laboratory, Princeton, New Jersey 08543 (United States)
2015-10-15
Waves propagating through a bounded plasma can rearrange the densities of states in the six-dimensional velocity-configuration phase space. Depending on the rearrangement, the wave energy can either increase or decrease, with the difference taken up by the total plasma energy. In the case where the rearrangement is diffusive, only certain plasma states can be reached. It turns out that the set of reachable states through such diffusive rearrangements has been described in very different contexts. Building upon those descriptions, and making use of the fact that the plasma energy is a linear functional of the state densities, the maximal extractable energy under diffusive rearrangement can then be addressed through linear programming.
Maximally reliable Markov chains under energy constraints.
Escola, Sean; Eisele, Michael; Miller, Kenneth; Paninski, Liam
2009-07-01
Signal-to-noise ratios in physical systems can be significantly degraded if the outputs of the systems are highly variable. Biological processes for which highly stereotyped signal generations are necessary features appear to have reduced their signal variabilities by employing multiple processing steps. To better understand why this multistep cascade structure might be desirable, we prove that the reliability of a signal generated by a multistate system with no memory (i.e., a Markov chain) is maximal if and only if the system topology is such that the process steps irreversibly through each state, with transition rates chosen such that an equal fraction of the total signal is generated in each state. Furthermore, our result indicates that by increasing the number of states, it is possible to arbitrarily increase the reliability of the system. In a physical system, however, an energy cost is associated with maintaining irreversible transitions, and this cost increases with the number of such transitions (i.e., the number of states). Thus, an infinite-length chain, which would be perfectly reliable, is infeasible. To model the effects of energy demands on the maximally reliable solution, we numerically optimize the topology under two distinct energy functions that penalize either irreversible transitions or incommunicability between states, respectively. In both cases, the solutions are essentially irreversible linear chains, but with upper bounds on the number of states set by the amount of available energy. We therefore conclude that a physical system for which signal reliability is important should employ a linear architecture, with the number of states (and thus the reliability) determined by the intrinsic energy constraints of the system.
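The irreversible linear chain singled out above has a closed-form reliability measure: with n i.i.d. exponential stages, the total dwell time is Erlang-distributed, so its coefficient of variation is 1/sqrt(n), falling as states are added. A small numeric sketch under that standard assumption (not taken from the paper's own derivation):

```python
import math

def completion_time_cv(n_states, rate=1.0):
    """Coefficient of variation of the total completion time of an
    irreversible linear chain with n i.i.d. exponential stages.

    The sum of n Exp(rate) dwell times is Erlang(n, rate):
    mean = n/rate, variance = n/rate**2, hence CV = 1/sqrt(n),
    independent of the rate.
    """
    mean = n_states / rate
    var = n_states / rate ** 2
    return math.sqrt(var) / mean
```

A single state gives CV = 1 (fully exponential, maximally variable); four states halve the CV, and one hundred states reduce it to 0.1, illustrating why reliability grows without bound as states are added until energy constraints intervene.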
Protecting Trees Means Protecting Ourselves
刘国虹; 张超
2016-01-01
As everyone knows, spring is a planting season. Every year people all over China go out to plant trees. Trees can make our environment more beautiful. Trees can stop wind from blowing the earth and sand away. They can also prevent soil from being washed away by wa-
Tree decompositions with small cost
Bodlaender, H.L.; Fomin, F.V.
2002-01-01
The f-cost of a tree decomposition ({X_i | i ∈ I}, T = (I, F)) for a function f : N → R⁺ is defined as Σ_{i∈I} f(|X_i|). This measure is associated with the running time or memory use of some algorithms that use the tree decomposition. In this paper we investigate the problem of finding tree decompositions
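The definition above can be evaluated directly. A minimal sketch, using a hypothetical decomposition of a 4-cycle and the common choice f(x) = 2^x, reflecting the exponential dependence on bag size typical of dynamic programming over a decomposition:

```python
def f_cost(bags, f):
    """f-cost of a tree decomposition: the sum of f(|X_i|) over its bags."""
    return sum(f(len(bag)) for bag in bags)

# Hypothetical decomposition of the 4-cycle a-b-c-d-a into two bags
# sharing the chord {a, c}; the tree T is the single edge between them.
bags = [{"a", "b", "c"}, {"a", "c", "d"}]
cost = f_cost(bags, lambda x: 2 ** x)  # 2**3 + 2**3
```

Minimizing this quantity, rather than the maximum bag size (treewidth), is exactly the problem the abstract describes.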
Distance labeling schemes for trees
Alstrup, Stephen; Gørtz, Inge Li; Bistrup Halvorsen, Esben;
2016-01-01
We consider distance labeling schemes for trees: given a tree with n nodes, label the nodes with binary strings such that, given the labels of any two nodes, one can determine, by looking only at the labels, the distance in the tree between the two nodes. A lower bound by Gavoille et al. [Gavoill...
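A naive labeling scheme makes the model concrete: label every node with its path of node ids from the root, and recover the tree distance from the two labels alone via their longest common prefix. This toy scheme uses O(n log n)-bit labels, far larger than the compact schemes the paper studies; it only illustrates what "determine the distance by looking only at the labels" means. The helper names are mine.

```python
def make_labels(parent, root):
    """Label every node with the tuple of node ids on its root path."""
    labels = {root: (root,)}
    pending = list(parent)          # parent maps child -> parent
    while pending:
        rest = []
        for v in pending:
            p = parent[v]
            if p in labels:
                labels[v] = labels[p] + (v,)
            else:
                rest.append(v)      # parent not labeled yet; retry
        pending = rest
    return labels

def label_distance(la, lb):
    """Tree distance computed from two root-path labels alone:
    depth(u) + depth(v) - 2 * depth(lca), where the LCA depth is the
    length of the labels' longest common prefix."""
    common = 0
    for x, y in zip(la, lb):
        if x != y:
            break
        common += 1
    return (len(la) - common) + (len(lb) - common)
```

For the tree with root 0, children 1 and 2 of 0, and children 3 and 4 of 1, the labels give distance 2 between the siblings 3 and 4 and distance 3 between 3 and 2, without consulting the tree itself.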
孟凡洪; 苏耕; 杨继
2000-01-01
The present paper shows the coordinates of a tree and its vertices, defines a kind of Trees with Odd-Number Radiant Type (TONRT), deals with the gracefulness of TONRT by using the edge-moving theorem, and uses graceful TONRT to construct another class of graceful trees.
Generalising tree traversals to DAGs
Bahr, Patrick; Axelsson, Emil
2015-01-01
We present a recursion scheme based on attribute grammars that can be transparently applied to trees and acyclic graphs. Our recursion scheme allows the programmer to implement a tree traversal and then apply it to compact graph representations of trees instead. The resulting graph traversals avoid...
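The idea of running a tree traversal on a compact graph representation can be sketched in a few lines: memoizing on node identity makes an ordinary recursive fold visit each distinct node once, so a DAG that unfolds to an exponentially large tree is still processed in time proportional to its shared size. This is a hedged Python illustration of the general principle, not the attribute-grammar scheme of the paper; nested tuples stand in for the graph representation.

```python
def leaves(node, memo=None):
    """Count the leaves of a binary tree given as nested 2-tuples.

    Memoizing on object identity (id) means each shared node is
    traversed once, so the same code works on a DAG whose tree
    unfolding is exponentially larger.
    """
    if memo is None:
        memo = {}
    key = id(node)
    if key in memo:
        return memo[key]
    if isinstance(node, tuple):
        result = leaves(node[0], memo) + leaves(node[1], memo)
    else:
        result = 1
    memo[key] = result
    return result

# A DAG with only 41 distinct nodes that unfolds to a full binary
# tree of depth 40, i.e. 2**40 leaves when viewed as a tree.
node = "leaf"
for _ in range(40):
    node = (node, node)
```

Calling leaves(node) returns 2**40 after only about 40 recursive steps; a traversal without the identity memo would never finish.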
Spanning trees crossing few barriers
Asano, T.; Berg, M. de; Cheong, O.; Guibas, L.J.; Snoeyink, J.; Tamaki, H.
2002-01-01
We consider the problem of finding low-cost spanning trees for sets of n points in the plane, where the cost of a spanning tree is defined as the total number of intersections of tree edges with a given set of m barriers. We obtain the following results: (i) if the barriers are possibly intersecting
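A concrete brute-force instance of this problem can be sketched: assign each candidate edge a cost equal to the number of barriers it properly crosses, then run Kruskal's algorithm on the complete graph. This O(n²m) sketch is my own, not one of the paper's algorithms, and its orientation-based crossing test deliberately ignores collinear touching and shared endpoints.

```python
from itertools import combinations

def orient(p, q, r):
    """Signed area test: >0 if p->q->r turns left, <0 if right."""
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

def crosses(seg1, seg2):
    """True if the two segments properly cross (touching ignored)."""
    a, b = seg1
    c, d = seg2
    return (orient(a, b, c) * orient(a, b, d) < 0
            and orient(c, d, a) * orient(c, d, b) < 0)

def min_crossing_spanning_tree(points, barriers):
    """Kruskal's algorithm where an edge's cost is the number of
    barrier segments it crosses; returns (total crossings, edges)."""
    n = len(points)
    edges = sorted(
        (sum(crosses((points[i], points[j]), bar) for bar in barriers), i, j)
        for i, j in combinations(range(n), 2)
    )
    root = list(range(n))
    def find(x):
        while root[x] != x:
            root[x] = root[root[x]]   # path compression
            x = root[x]
        return x
    total, tree = 0, []
    for c, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            root[ri] = rj
            total += c
            tree.append((i, j))
    return total, tree
```

With four points at the corners of a square and one vertical barrier splitting them 2-2, the minimum spanning tree is forced to cross the barrier exactly once.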
Selecting Landscape Plants: Flowering Trees
Relf, Diane; Appleton, Bonnie Lee, 1948-2012
2009-01-01
This publication helps the reader to select wisely among the many species and varieties of flowering trees available. The following are considerations that should be taken into account when choosing flowering trees for the home landscape: selections factors, environmental responses, availability and adaptability, and flowering tree descriptions.
Rectilinear Full Steiner Tree Generation
Zachariasen, Martin
1999-01-01
The fastest exact algorithm (in practice) for the rectilinear Steiner tree problem in the plane uses a two-phase scheme: First, a small but sufficient set of full Steiner trees (FSTs) is generated and then a Steiner minimum tree is constructed from this set by using simple backtrack search, dynamic...
Nyhuis, Jane
Referring as often as possible to traditional Hopi practices and to materials readily available on the reservation, the illustrated booklet provides information on the care and maintenance of young fruit trees. An introduction to fruit trees explains the special characteristics of new trees, e.g., grafting, planting pits, and watering. The…
Wyse, Adam E.; Babcock, Ben
2016-01-01
A common suggestion made in the psychometric literature for fixed-length classification tests is that one should design tests so that they have maximum information at the cut score. Designing tests in this way is believed to maximize the classification accuracy and consistency of the assessment. This article uses simulated examples to illustrate…
From entropy-maximization to equality-maximization: Gauss, Laplace, Pareto, and Subbotin
Eliazar, Iddo
2014-12-01
The entropy-maximization paradigm of statistical physics is well known to generate the omnipresent Gauss law. In this paper we establish an analogous socioeconomic model which maximizes social equality, rather than physical disorder, in the context of the distributions of income and wealth in human societies. We show that, on a logarithmic scale, the Laplace law is the socioeconomic equality-maximizing counterpart of the physical entropy-maximizing Gauss law, and that this law manifests an optimized balance between two opposing forces: (i) the rich and powerful, striving to amass ever more wealth, and thus to increase social inequality; and (ii) the masses, struggling to form more egalitarian societies, and thus to increase social equality. Our results lead from log-Gauss statistics to log-Laplace statistics, yield Paretian power-law tails of income and wealth distributions, and show how the emergence of a middle class depends on the underlying levels of socioeconomic inequality and variability. Also, in the context of asset prices with Laplace-distributed returns, our results imply that financial markets generate an optimized balance between risk and predictability.
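The Gauss/Laplace correspondence invoked above can be made concrete with a standard variational computation (a textbook sketch, not the paper's socioeconomic model): maximizing Shannon entropy under a fixed mean absolute deviation, instead of a fixed variance, yields the Laplace density.

```latex
\max_{p}\; -\int p(x)\,\ln p(x)\,dx
\quad\text{s.t.}\quad
\int p(x)\,dx = 1,
\qquad
\int |x-\mu|\,p(x)\,dx = b .
```

Stationarity of the Lagrangian gives $p(x) \propto e^{-\lambda|x-\mu|}$, and fixing the constraints yields the Laplace density $p(x) = \frac{1}{2b}\,e^{-|x-\mu|/b}$. Replacing the absolute-deviation constraint with a variance constraint returns, by the same computation, the Gauss density; exponentiating the variable turns these into the log-Gauss and log-Laplace statistics discussed above.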
THE EFFECTS MAXIMAL AND SUB MAXIMAL AEROBIC EXERCISE ON THE BRONCHOSPASM INDICES IN NON ATHLETIC
Amir GANJİ
2012-08-01
Full Text Available Background: Exercise-induced bronchospasm (EIB) is a transient airway obstruction that occurs during and after exercise. Exercise-induced bronchospasm is observed in healthy individuals as well as in asthmatic and allergic rhinitis patients. Research question: The study compared the effects of one session of submaximal aerobic exercise and a maximal one on the prevalence of exercise-induced bronchospasm in non-athletic students. Type of study: An experimental study, using human subjects, was designed. Methods: 20 non-athletic male students participated in two sessions of aerobic exercise. The prevalence of EIB was investigated among them. The criteria for assessing exercise-induced bronchospasm were ≥10% fall in FEV1, ≥15% fall in FEF25-75%, or ≥25% fall in PEFR. Results: The results revealed that the maximal exercise did not affect FEF25-75% and PEF, but it led to a meaningful reduction in FEV1. By contrast, the submaximal exercise affected none of these indices; in both protocols the same result was obtained for PEF and FEF25-75%. Moreover, the prevalence of EIB was 15% after the submaximal exercise and 20% after the maximal one, a significant difference. Conclusion: This study demonstrated that, in contrast to the subjects who performed submaximal exercise, those who participated in the maximal protocol showed more changes in the pulmonary function indices, and the prevalence of EIB was greater among them.
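The spirometric cutoffs quoted in the abstract translate directly into a small decision helper. The function and variable names are mine; only the thresholds (≥10% FEV1, ≥15% FEF25-75%, ≥25% PEFR) come from the abstract.

```python
def percent_fall(pre, post):
    """Percent decline from the pre- to the post-exercise value."""
    return 100.0 * (pre - post) / pre

def has_eib(fev1_pre, fev1_post, fef_pre, fef_post, pefr_pre, pefr_post):
    """Flag exercise-induced bronchospasm using the abstract's criteria:
    >=10% fall in FEV1, >=15% fall in FEF25-75%, or >=25% fall in PEFR."""
    return (percent_fall(fev1_pre, fev1_post) >= 10
            or percent_fall(fef_pre, fef_post) >= 15
            or percent_fall(pefr_pre, pefr_post) >= 25)
```

For example, an FEV1 drop from 4.0 L to 3.5 L is a 12.5% fall and meets the first criterion on its own.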
Maximal elements of non necessarily acyclic binary relations
Josep Enric Peris Ferrando; Begoña Subiza Martínez
1992-01-01
The existence of maximal elements for binary preference relations is analyzed without imposing transitivity or convexity conditions. From each preference relation a new acyclic relation is defined in such a way that some maximal elements of this new relation characterize maximal elements of the original one. The result covers the case whereby the relation is acyclic.
Yao, W.; Krzystek, P.; Heurich, M.
2012-07-01
In forest ecology, a snag refers to a standing, partly or completely dead tree, often missing a top or most of the smaller branches. The accurate estimation of live and dead biomass in forested ecosystems is important for studies of carbon dynamics, biodiversity, and forest management. Therefore, an understanding of its availability and spatial distribution is required. So far, LiDAR remote sensing has been successfully used to assess live trees and their biomass, but studies focusing on dead trees are rare. The paper develops a methodology for retrieving individual dead trees in a mixed mountain forest using features that are derived from small-footprint airborne full waveform LiDAR data. First, 3D coordinates of the laser beam reflections and the pulse intensity and width are extracted by waveform decomposition. Secondly, 3D single trees are detected by an integrated approach, which delineates both dominant tree crowns and small understory trees in the canopy height model (CHM) using the watershed algorithm followed by applying normalized cuts segmentation to merged watershed areas. Thus, single trees can be obtained as 3D point segments associated with waveform-specific features per point. Furthermore, the tree segments are delivered to a feature definition process to derive geometric and reflectional features at the single-tree level, e.g. crown volume and maximal crown diameter, mean intensity, gap fraction, etc. Finally, the feature space spanned by the tree segments is forwarded to a binary classifier using a support vector machine (SVM) in order to discriminate dead trees from living ones. The methodology is applied to datasets that have been captured with the Riegl LMSQ560 laser scanner at a point density of 25 points/m² in the Bavarian Forest National Park, Germany, respectively under leaf-on and leaf-off conditions for Norway spruces, European beeches and Sycamore maples. The classification experiments lead in the best case to an overall accuracy of 73% in a leaf
A suffix tree or not a suffix tree?
Starikovskaya, Tatiana; Vildhøj, Hjalte Wedel
2015-01-01
In this paper we study the structure of suffix trees. Given an unlabeled tree τ on n nodes and suffix links of its internal nodes, we ask the question “Is τ a suffix tree?”, i.e., is there a string S whose suffix tree has the same topological structure as τ? We place no restrictions on S; in particular we do not require that S ends with a unique symbol. This corresponds to considering the more general definition of implicit or extended suffix trees. Such general suffix trees have many applications and are for example needed to allow efficient updates when suffix trees are built online. Deciding...
Fringe trees, Crump-Mode-Jagers branching processes and $m$-ary search trees
Holmgren, Cecilia; Janson, Svante
2016-01-01
This survey studies asymptotics of random fringe trees and extended fringe trees in random trees that can be constructed as family trees of a Crump-Mode-Jagers branching process, stopped at a suitable time. This includes random recursive trees, preferential attachment trees, fragmentation trees, binary search trees and (more generally) $m$-ary search trees, as well as some other classes of random trees. We begin with general results, mainly due to Aldous (1991) and Jagers and Nerman (1984). T...
PoInTree: A Polar and Interactive Phylogenetic Tree
Carreras Marco; Gianti Eleonora; Sartori Luca; Plyte Simon Edward; Isacchi Antonella; Bosotti Roberta
2005-01-01
PoInTree (Polar and Interactive Tree) is an application for building, visualizing, and customizing phylogenetic trees in a polar, interactive, and highly flexible view. It takes as input a FASTA file or multiple alignment formats. Phylogenetic tree calculation is based on a sequence distance method and uses the Neighbor Joining (NJ) algorithm. It can also display precalculated trees of the major protein families based on Pfam classification. In PoInTree, nodes can be dynamically opened and closed, and distances between genes are represented graphically. The tree root can be centered on a selected leaf. A text search mechanism, color-coding and labeling display are integrated. The visualizer can be connected to an Oracle database containing information on sequences and other biological data, helping to guide their interpretation within a given protein family across multiple species. The application is written in Borland Delphi and based on the VCL TeeChart Pro 6 graphical component (Steema Software).
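The Neighbor Joining step that PoInTree relies on can be illustrated with a minimal textbook implementation. This is a sketch on a classic four-taxon distance matrix, not PoInTree's Delphi code.

```python
# Hedged sketch of Neighbor Joining on a small distance matrix.
def neighbor_joining(labels, D):
    """labels: taxon names; D: symmetric distance matrix (list of lists).
    Internal joins are 4-tuples (left, len_left, right, len_right);
    the final unrooted join is returned as a 3-tuple (left, length, right)."""
    nodes = list(labels)
    D = [row[:] for row in D]
    while len(nodes) > 2:
        n = len(nodes)
        r = [sum(row) for row in D]
        # pick the pair (i, j) minimizing the Q criterion
        best, bi, bj = None, 0, 1
        for i in range(n):
            for j in range(i + 1, n):
                q = (n - 2) * D[i][j] - r[i] - r[j]
                if best is None or q < best:
                    best, bi, bj = q, i, j
        # branch lengths from the new internal node u to i and j
        li = 0.5 * D[bi][bj] + (r[bi] - r[bj]) / (2 * (n - 2))
        lj = D[bi][bj] - li
        u = (nodes[bi], round(li, 6), nodes[bj], round(lj, 6))
        keep = [k for k in range(n) if k not in (bi, bj)]
        # distances from u to every remaining node
        du = [0.5 * (D[bi][k] + D[bj][k] - D[bi][bj]) for k in keep]
        nodes = [nodes[k] for k in keep] + [u]
        D = [[D[a][b] for b in keep] + [du[ai]] for ai, a in enumerate(keep)]
        D.append(du + [0.0])
    return (nodes[0], D[0][1], nodes[1])

# classic four-taxon example with additive distances
labels = ["A", "B", "C", "D"]
D = [[0, 5, 9, 9],
     [5, 0, 10, 10],
     [9, 10, 0, 8],
     [9, 10, 8, 0]]
tree = neighbor_joining(labels, D)
```

Because the input distances are additive, NJ recovers the true tree exactly: A and B join with branch lengths 2 and 3, C and D with 4 and 4, and an internal edge of length 3.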
TREE SELECTING AND TREE RING MEASURING IN DENDROCHRONOLOGICAL INVESTIGATIONS
Sefa Akbulut
2004-04-01
Dendrochronology is a method of dating which makes use of the annual nature of tree growth. Dendrochronology may be divided into a number of subfields, each of which covers one or more aspects of the use of tree-ring data: dendroclimatology, dendrogeomorphology, dendrohydrology, dendroecology, dendroarchaeology, and dendroglaciology. The analysis of tree rings forms the basis of them all. Wood and tree rings can aid in dating past events in climatology, ecology, geology, and hydrology. Dendrochronological studies are conducted either on increment cores or on discs. Abnormalities such as false rings, missing rings, and reaction wood may be seen in tree rings during measurement. In such cases, increment cores should be extracted from four different sides of each tree, and as many trees as possible should be studied.
Rate of tree carbon accumulation increases continuously with tree size.
Stephenson, N L; Das, A J; Condit, R; Russo, S E; Baker, P J; Beckman, N G; Coomes, D A; Lines, E R; Morris, W K; Rüger, N; Alvarez, E; Blundo, C; Bunyavejchewin, S; Chuyong, G; Davies, S J; Duque, A; Ewango, C N; Flores, O; Franklin, J F; Grau, H R; Hao, Z; Harmon, M E; Hubbell, S P; Kenfack, D; Lin, Y; Makana, J-R; Malizia, A; Malizia, L R; Pabst, R J; Pongpattananurak, N; Su, S-H; Sun, I-F; Tan, S; Thomas, D; van Mantgem, P J; Wang, X; Wiser, S K; Zavala, M A
2014-03-06
Forests are major components of the global carbon cycle, providing substantial feedback to atmospheric greenhouse gas concentrations. Our ability to understand and predict changes in the forest carbon cycle--particularly net primary productivity and carbon storage--increasingly relies on models that represent biological processes across several scales of biological organization, from tree leaves to forest stands. Yet, despite advances in our understanding of productivity at the scales of leaves and stands, no consensus exists about the nature of productivity at the scale of the individual tree, in part because we lack a broad empirical assessment of whether rates of absolute tree mass growth (and thus carbon accumulation) decrease, remain constant, or increase as trees increase in size and age. Here we present a global analysis of 403 tropical and temperate tree species, showing that for most species mass growth rate increases continuously with tree size. Thus, large, old trees do not act simply as senescent carbon reservoirs but actively fix large amounts of carbon compared to smaller trees; at the extreme, a single big tree can add the same amount of carbon to the forest within a year as is contained in an entire mid-sized tree. The apparent paradoxes of individual tree growth increasing with tree size despite declining leaf-level and stand-level productivity can be explained, respectively, by increases in a tree's total leaf area that outpace declines in productivity per unit of leaf area and, among other factors, age-related reductions in population density. Our results resolve conflicting assumptions about the nature of tree growth, inform efforts to understand and model forest carbon dynamics, and have additional implications for theories of resource allocation and plant senescence.
T. R. Chandrasekhar
2012-01-01
No attempt has been made to date to model growth in girth of the rubber tree (Hevea brasiliensis). We evaluated a few widely used growth functions to identify the most parsimonious and biologically reasonable model for describing the girth growth of young rubber trees based on an incomplete set of young-age measurements. Monthly girth data for immature trees (age 2 to 12 years) from two locations were subjected to modelling. Re-parameterized, unconstrained and constrained growth functions of Richards (RM), Gompertz (GM) and the monomolecular model (MM) were fitted to the data. Duration of growth was the constraint introduced. In the first stage, we attempted a population-average (PA) model to capture the trend in growth. The best PA model was then fitted as a subject-specific (SS) model. We used an appropriate error variance-covariance structure to account for correlation due to repeated measurements over time. Unconstrained functions underestimated the asymptotic maximum, which did not reflect the carrying capacity of the locations. The underestimations were attributed to the partial set of measurements made during the early growth phase of the trees. MM proved superior to RM and GM. In the random coefficient models, both Gf and G0 appeared to be influenced by tree-level effects. Inclusion of a positive-definite diagonal matrix removed the correlation between random effects. The results were similar at both locations. In the overall assessment MM appeared to be the candidate model for studying the girth-age relationship in Hevea trees. Based on the fitted model we conclude that, in Hevea trees, the growth rate is maintained at its maximum value at t0, then decreases until the final state at dG/dt ≥ 0, resulting in a yield curve with no period of accelerating growth. One physiological explanation is that photosynthetic activity in Hevea trees decreases as girth increases and constructive metabolism exceeds destructive metabolism.
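The monomolecular (Mitscherlich) model named above can be illustrated with a plain nonlinear least-squares fit on synthetic girth data. This is a sketch only: the authors fitted constrained, subject-specific mixed models, and the data here are invented; the parameter names Gf, G0 and k follow the abstract.

```python
# Hedged sketch: fitting the monomolecular girth model
#   G(t) = Gf - (Gf - G0) * exp(-k * t)
# to synthetic data with SciPy. Not the authors' mixed-effects procedure.
import numpy as np
from scipy.optimize import curve_fit

def monomolecular(t, Gf, G0, k):
    return Gf - (Gf - G0) * np.exp(-k * t)

rng = np.random.default_rng(0)
t = np.linspace(2, 12, 30)                                # tree age, years
girth = monomolecular(t, 60.0, 5.0, 0.25) + rng.normal(0, 0.5, t.size)

(Gf, G0, k), _ = curve_fit(monomolecular, t, girth, p0=(50.0, 1.0, 0.1))
# dG/dt = k * (Gf - G) is maximal at t0 and decreases monotonically,
# i.e. the curve has no inflection: no period of accelerating growth.
```

The monotonically declining growth rate is exactly the property the abstract highlights for Hevea.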
Wood traits related to size and life history of trees in a Panamanian rainforest.
Hietz, Peter; Rosner, Sabine; Hietz-Seifert, Ursula; Wright, S Joseph
2017-01-01
Wood structure differs widely among tree species and species with faster growth, higher mortality and larger maximum size have been reported to have fewer but larger vessels and higher hydraulic conductivity (Kh). However, previous studies compiled data from various sources, often failed to control tree size and rarely controlled variation in other traits. We measured wood density, tree size and vessel traits for 325 species from a wet forest in Panama, and compared wood and leaf traits to demographic traits using species-level data and phylogenetically independent contrasts. Wood traits showed strong phylogenetic signal whereas pairwise relationships between traits were mostly phylogenetically independent. Trees with larger vessels had a lower fraction of the cross-sectional area occupied by vessel lumina, suggesting that the hydraulic efficiency of large vessels permits trees to dedicate a larger proportion of the wood to functions other than water transport. Vessel traits were more strongly correlated with the size of individual trees than with maximal size of a species. When individual tree size was included in models, Kh scaled positively with maximal size and was the best predictor for both diameter and biomass growth rates, but was unrelated to mortality.
Butler, Ricky W.; Martensen, Anna L.
1992-01-01
FTC, the Fault-Tree Compiler program, is a reliability-analysis software tool used to calculate the probability of the top event of a fault tree. Five different types of gates are allowed in the fault tree: AND, OR, EXCLUSIVE OR, INVERT, and M OF N. The high-level input language of FTC is easy to understand and use. The program supports a hierarchical fault-tree-definition feature that simplifies the description of the tree and reduces execution time. The solution technique is implemented in FORTRAN, and the user interface in Pascal. Written to run on a DEC VAX computer operating under the VMS operating system.
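The gate types listed above can be illustrated by computing a top-event probability for a small fault tree with independent basic events. This is a hedged sketch in Python; FTC itself is a FORTRAN/Pascal tool, and the function names here are invented for illustration.

```python
# Hedged sketch: top-event probability of a fault tree with independent
# basic events, using the five gate types from the abstract.
from itertools import combinations

def gate_and(ps):        # all inputs fail
    out = 1.0
    for p in ps:
        out *= p
    return out

def gate_or(ps):         # at least one input fails
    out = 1.0
    for p in ps:
        out *= (1.0 - p)
    return 1.0 - out

def gate_xor(p, q):      # exactly one of two inputs fails
    return p * (1.0 - q) + q * (1.0 - p)

def gate_invert(p):
    return 1.0 - p

def gate_m_of_n(m, ps):  # at least m of the n independent inputs fail
    n = len(ps)
    total = 0.0
    for k in range(m, n + 1):
        for idx in combinations(range(n), k):
            term = 1.0
            for i in range(n):
                term *= ps[i] if i in idx else (1.0 - ps[i])
            total += term
    return total

# example top event: (A AND B) OR (at least 2 of C, D, E)
top = gate_or([gate_and([0.1, 0.2]), gate_m_of_n(2, [0.1, 0.1, 0.1])])
```

Real fault-tree tools avoid this brute-force M-OF-N enumeration for large n, but the probabilities computed are the same.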
Zhu, Zhen; Puliga, Michelangelo; Cerina, Federica; Chessa, Alessandro; Riccaboni, Massimo
2015-01-01
The fragmentation of production across countries has become an important feature of globalization in recent decades and is often conceptualized by the term “global value chains” (GVCs). When empirically investigating GVCs, previous studies have mainly been interested in how global the GVCs are rather than in what the GVCs look like. From a complex-networks perspective, we use the World Input-Output Database (WIOD) to study the evolution of the global production system. We find that the industry-level GVCs are indeed not chain-like but are better characterized by a tree topology. Hence, we compute the global value trees (GVTs) for all the industries available in the WIOD. Moreover, we compute an industry importance measure based on the GVTs and compare it with other network centrality measures. Finally, we discuss some future applications of the GVTs.
Pushdown machines for the macro tree transducer
Engelfriet, Joost; Vogler, Heiko
1986-01-01
The macro tree transducer can be considered as a system of recursive function procedures with parameters, where the recursion is on a tree (e.g., the syntax tree of a program). We investigate characterizations of the class of tree (tree-to-string) translations which is induced by macro tree transduc
Tree Rings: Timekeepers of the Past.
Phipps, R. L.; McGowan, J.
One of a series of general interest publications on science issues, this booklet describes the uses of tree rings in historical and biological recordkeeping. Separate sections cover the following topics: dating of tree rings, dating with tree rings, tree ring formation, tree ring identification, sample collections, tree ring cross dating, tree…
Rob Garbutt
2013-10-01
Our paper focuses on the materiality, cultural history and cultural relations of selected artworks in the exhibition Wood for the trees (Lismore Regional Gallery, New South Wales, Australia, 10 June – 17 July 2011). The title of the exhibition, intentionally misreading the aphorism “Can’t see the wood for the trees” by reading the wood for the resource rather than the collective wood[s], implies conservation, preservation, and the need for sustaining the originating resource. These ideas have particular resonance on the NSW far north coast, a region once rich in rainforest. While the Indigenous population had sustainable practices of forest and land management, the colonists deployed felling and harvesting in order to convert the value of the local, abundant rainforest trees into high-value timber. By the late twentieth century, however, a new wave of settlers launched protest movements against the proposed logging of remnant rainforest at Terania Creek and elsewhere in the region. Wood for the trees, curated by Gallery Director Brett Adlington, plays on this dynamic relationship between wood, trees and people. We discuss the way selected artworks give expression to the themes or concepts of productive labour, nature and culture, conservation and sustainability, and memory. The artworks include Watjinbuy Marrawilil’s (1980) Carved ancestral figure ceremonial pole, Elizabeth Stops’ (2009/10) Explorations into colonisation, Hossein Valamanesh’s (2008) Memory stick, and AñA Wojak’s (2008) Unread book (in a forgotten language). Our art writing on the works, a practice informed by Bal (2002), Muecke (2008) and Papastergiadis (2004), becomes a conversation between the works and the themes or concepts. As a form of material excess of the most productive kind (Grosz, 2008, p. 7), art seeds a response to that which is in the air waiting to be said of the past, present and future.
Jonge, de, H.J.
2002-01-01
Dividing software systems into components improves software reusability as well as software maintainability. Components live at several levels; we concentrate on the implementation level, where components are formed by source files divided over directory structures. Such source code components are usually strongly coupled in the directory structure of a software system. Their compilation is usually controlled by a single global build process. This entangling of source trees and build processes ...
Grünewald, Stefan
2010-01-01
A classical problem in phylogenetic tree analysis is to decide whether there is a phylogenetic tree $T$ that contains all information of a given collection $\mathcal{P}$ of phylogenetic trees. If the answer is "yes" we say that $\mathcal{P}$ is compatible and that $T$ displays $\mathcal{P}$. This decision problem is NP-complete even if all input trees are quartets, that is, binary trees with exactly four leaves. In this paper, we prove a sufficient condition for a set of binary phylogenetic trees to be compatible. That result is used to give a short and self-contained proof of the known characterization of quartet sets of minimal cardinality which are displayed by a unique phylogenetic tree.
Making CSB + -Trees Processor Conscious
Samuel, Michael; Pedersen, Anders Uhl; Bonnet, Philippe
2005-01-01
Cache-conscious indexes, such as the CSB+-tree, are sensitive to the underlying processor architecture. In this paper, we focus on how to adapt the CSB+-tree so that it performs well on a range of different processor architectures. Previous work has focused on the impact of node size on the performance of the CSB+-tree. We argue that it is necessary to consider a larger group of parameters in order to adapt the CSB+-tree to processor architectures as different as Pentium and Itanium. We identify this group of parameters and study how it impacts the performance of the CSB+-tree on Itanium 2. Finally, we propose a systematic method for adapting the CSB+-tree to new platforms. This work is a first step towards integrating the CSB+-tree in MySQL’s heap storage manager.
Forrow, Aden; Woodhouse, Francis G.; Dunkel, Jörn
2016-11-01
Coherent, large-scale dynamics in many nonequilibrium physical, biological, or information transport networks are driven by small-scale local energy input. We introduce and explore a generic model for compressible active flows on tree networks. In contrast to thermally-driven systems, active friction selects discrete states with only a small number of oscillation modes activated at distinct fixed amplitudes. This state selection can interact with graph topology to produce different localized dynamical time scales in separate regions of large networks. Using perturbation theory, we systematically predict the stationary states of noisy networks. Our analytical predictions agree well with a Bayesian state estimation based on a hidden Markov model applied to simulated time series data on binary trees. While the number of stable states per tree scales exponentially with the number of edges, the number of activated modes in each state averages 1/4 of the number of edges. More broadly, these results suggest that the macroscopic response of active networks, from actin-myosin networks in cells to flow networks in Physarum polycephalum, can be dominated by a few select modes.
Khina, Anatoly
2016-08-15
We consider the problem of stabilizing an unstable plant driven by bounded noise over a digital noisy communication link, a scenario at the heart of networked control. To stabilize such a plant, one needs real-time encoding and decoding with an error probability profile that decays exponentially with the decoding delay. The works of Schulman and Sahai over the past two decades have developed the notions of tree codes and anytime capacity, and provided the theoretical framework for studying such problems. Nonetheless, there has been little practical progress in this area due to the absence of explicit constructions of tree codes with efficient encoding and decoding algorithms. Recently, linear time-invariant tree codes were proposed to achieve the desired result under maximum-likelihood decoding. In this work, we take one more step towards practicality, by showing that these codes can be efficiently decoded using sequential decoding algorithms, up to some loss in performance (and with some practical complexity caveats). We supplement our theoretical results with numerical simulations that demonstrate the effectiveness of the decoder in a control system setting.
Maximization Paradox: Result of Believing in an Objective Best.
Luan, Mo; Li, Hong
2017-05-01
The results from four studies provide reliable evidence of how beliefs in an objective best influence the decision process and subjective feelings. A belief in an objective best serves as the fundamental mechanism connecting the concept of maximizing and the maximization paradox (i.e., expending great effort but feeling bad when making decisions, Study 1), and randomly chosen decision makers operate similar to maximizers once they are manipulated to believe that the best is objective (Studies 2A, 2B, and 3). In addition, the effect of a belief in an objective best on the maximization paradox is moderated by the presence of a dominant option (Study 3). The findings of this research contribute to the maximization literature by demonstrating that believing in an objective best leads to the maximization paradox. The maximization paradox is indeed the result of believing in an objective best.
Hydraulic constraints modify optimal photosynthetic profiles in giant sequoia trees.
Ambrose, Anthony R; Baxter, Wendy L; Wong, Christopher S; Burgess, Stephen S O; Williams, Cameron B; Næsborg, Rikke R; Koch, George W; Dawson, Todd E
2016-11-01
Optimality theory states that whole-tree carbon gain is maximized when leaf N and photosynthetic capacity profiles are distributed along vertical light gradients such that the marginal gain of nitrogen investment is identical among leaves. However, observed photosynthetic N gradients in trees do not follow this prediction, and the causes for this apparent discrepancy remain uncertain. Our objective was to evaluate how hydraulic limitations potentially modify crown-level optimization in Sequoiadendron giganteum (giant sequoia) trees up to 90 m tall. Leaf water potential (Ψl) and branch sap flow closely followed diurnal patterns of solar radiation throughout each tree crown. Minimum leaf water potential correlated negatively with height above ground, while leaf mass per area (LMA), shoot mass per area (SMA), leaf nitrogen content (%N), and bulk leaf stable carbon isotope ratios (δ13C) correlated positively with height. We found no significant vertical trends in maximum leaf photosynthesis (A), stomatal conductance (gs), and intrinsic water-use efficiency (A/gs), nor in branch-averaged transpiration (EL), stomatal conductance (GS), and hydraulic conductance (KL). Adjustments in hydraulic architecture appear to partially compensate for increasing hydraulic limitations with height in giant sequoia, allowing them to sustain global maximum summer water use rates exceeding 2000 kg day-1. However, we found that leaf N and photosynthetic capacity do not follow the vertical light gradient, supporting the hypothesis that increasing limitations on water transport capacity with height modify photosynthetic optimization in tall trees.
Factors influencing bird foraging preferences among conspecific fruit trees
Foster, M.S.
1990-01-01
The rates at which birds visit fruiting individuals of Allophylus edulis (Sapindaceae) differ substantially among trees. Such avian feeding preferences are well-known, but usually involve fruits and trees of different species. Factors controlling avian preferences for particular trees in a population of conspecifics are generally undocumented. To address this issue, I attempted to correlate rates at which individual birds and species fed in trees of Allophylus with 27 fruit or plant characteristics. Birds that swallow fruits whole were considered separately from those that feed in other ways. Plant characters were selected on the basis of their potential influence on feeding efficiency or predation risk, assuming that birds would select feeding trees so as to maximize the net rate of energy or nutrient intake and to minimize predation. Correlations were found between feeding visits by some groups of birds and percent water in the pulp, milligrams of mineral ash in the pulp, and crop size. No character was correlated with feeding visits by all groups of birds in both years of the study. The correlations with water and mineral ash are unexplained and may be artifacts. The correlation with crop size may represent a tactic to minimize predation.
Gardner Benjamin
2012-08-01
Abstract Background The twelve-item Self-Report Habit Index (SRHI) is the most popular measure of energy-balance related habits. This measure characterises habit by automatic activation, behavioural frequency, and relevance to self-identity. Previous empirical research suggests that the SRHI may be abbreviated with no losses in reliability or predictive utility. Drawing on recent theorising suggesting that automaticity is the ‘active ingredient’ of habit-behaviour relationships, we tested whether an automaticity-specific SRHI subscale could capture habit-based behaviour patterns in self-report data. Methods A content validity task was undertaken to identify a subset of automaticity indicators within the SRHI. The reliability, convergent validity and predictive validity of the automaticity item subset were subsequently tested in secondary analyses of all previous SRHI applications, identified via systematic review, and in primary analyses of four raw datasets relating to energy-balance relevant behaviours (inactive travel, active travel, snacking, and alcohol consumption). Results A four-item automaticity subscale (the ‘Self-Report Behavioural Automaticity Index’, ‘SRBAI’) was found to be reliable and sensitive to two hypothesised effects of habit on behaviour: a habit-behaviour correlation, and a moderating effect of habit on the intention-behaviour relationship. Conclusion The SRBAI offers a parsimonious measure that adequately captures habitual behaviour patterns. The SRBAI may be of particular utility in predicting future behaviour and in studies tracking habit formation or disruption.
Voss, Clifford I.; Soliman, Safaa M.
2014-01-01
Parsimonious groundwater modeling provides insight into hydrogeologic functioning of the Nubian Aquifer System (NAS), the world’s largest non-renewable groundwater system (belonging to Chad, Egypt, Libya, and Sudan). Classical groundwater-resource issues exist (magnitude and lateral extent of drawdown near pumping centers) with joint international management questions regarding transboundary drawdown. Much of NAS is thick, containing a large volume of high-quality groundwater, but receives insignificant recharge, so water-resource availability is time-limited. Informative aquifer data are lacking regarding large-scale response, providing only local-scale information near pumps. Proxy data provide primary underpinning for understanding regional response: Holocene water-table decline from the previous pluvial period, after thousands of years, results in current oasis/sabkha locations where the water table still intersects the ground. Depletion is found to be controlled by two regional parameters, hydraulic diffusivity and vertical anisotropy of permeability. Secondary data that provide insight are drawdowns near pumps and isotope-groundwater ages (million-year-old groundwaters in Egypt). The resultant strong simply structured three-dimensional model representation captures the essence of NAS regional groundwater-flow behavior. Model forecasts inform resource management that transboundary drawdown will likely be minimal—a nonissue—whereas drawdown within pumping centers may become excessive, requiring alternative extraction schemes; correspondingly, significant water-table drawdown may occur in pumping centers co-located with oases, causing oasis loss and environmental impacts.
Nicola Magnavita
2012-01-01
Purpose. To perform a parsimonious measurement of workplace psychosocial stress in routine occupational health surveillance, this study tests the psychometric properties of a short version of the original Italian effort-reward imbalance (ERI) questionnaire. Methods. 1,803 employees (63 percent women) from 19 service companies in the Italian region of Latium participated in a cross-sectional survey containing the short version of the ERI questionnaire (16 items) and questions related to self-reported health, musculoskeletal complaints and job satisfaction. Exploratory factor analysis, internal consistency of scales and criterion validity were utilized. Results. The internal consistency of the scales was satisfactory. Principal component analysis enabled us to identify the model’s main factors. Significant associations with health and job satisfaction in the majority of cases support the notion of criterion validity. A high score on the effort-reward ratio was associated with an elevated odds ratio (OR = 2.71; 95% CI 1.86–3.95) of musculoskeletal complaints in the upper arm. Conclusions. The short form of the Italian ERI questionnaire provides a psychometrically useful tool for routine occupational health surveillance, although further validation is recommended.
Gene tree correction for reconciliation and species tree inference
Swenson Krister M
2012-11-01
Abstract Background Reconciliation is the commonly used method for inferring the evolutionary scenario for a gene family. It consists in “embedding” inferred gene trees into a known species tree, revealing the evolution of the gene family by duplications and losses. When a species tree is not known, a natural algorithmic problem is to infer a species tree from a set of gene trees, such that the corresponding reconciliation minimizes the number of duplications and/or losses. The main drawback of reconciliation is that the inferred evolutionary scenario is strongly dependent on the considered gene trees, as a few misplaced leaves may lead to a completely different history, with significantly more duplications and losses. Results In this paper, we take advantage of certain properties of gene trees in order to preprocess them for reconciliation or species tree inference. We flag certain duplication vertices of a gene tree, the “non-apparent duplication” (NAD) vertices, as resulting from the misplacement of leaves. In the case of species tree inference, we develop a polynomial-time heuristic for removing the minimum number of species leading to a set of gene trees that exhibit no NAD vertices with respect to at least one species tree. In the case of reconciliation, we consider the optimization problem of removing the minimum number of leaves or species leading to a tree without any NAD vertex. We develop a polynomial-time algorithm that is exact for two special classes of gene trees, and show good performance on simulated data sets in the general case.
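The duplication-flagging that reconciliation rests on can be illustrated with a minimal LCA-mapping sketch: each gene-tree vertex is mapped to the lowest common ancestor of its leaf species, and a vertex is a duplication if it maps to the same species vertex as one of its children. This is plain LCA reconciliation on toy nested-tuple trees, not the paper's NAD-removal heuristic; all names are invented.

```python
# Hedged sketch: LCA reconciliation of a gene tree against a species tree.
# Trees are nested tuples; leaves are species-name strings. Species-tree
# vertices are compared structurally, so all subtrees must be unique.
def species_ancestors(tree, path=(), out=None):
    """Map each species leaf to its root-to-leaf path of species vertices."""
    if out is None:
        out = {}
    path = path + (tree,)
    if isinstance(tree, str):
        out[tree] = path
    else:
        for child in tree:
            species_ancestors(child, path, out)
    return out

def lca(paths, species):
    """Deepest species-tree vertex ancestral to every species in `species`."""
    last = None
    for column in zip(*(paths[s] for s in species)):
        if any(v != column[0] for v in column):
            break
        last = column[0]
    return last

def leaves(t):
    return {t} if isinstance(t, str) else set().union(*map(leaves, t))

def count_duplications(gene_tree, paths):
    """LCA rule: a gene vertex is a duplication iff it maps to the same
    species vertex as one of its children."""
    if isinstance(gene_tree, str):
        return 0
    m = lca(paths, leaves(gene_tree))
    dups = sum(count_duplications(c, paths) for c in gene_tree)
    if any(lca(paths, leaves(c)) == m for c in gene_tree):
        dups += 1
    return dups

S = (("a", "b"), "c")            # species tree ((a,b),c)
paths = species_ancestors(S)
G = (("a", "b"), ("a", "c"))     # gene tree with a duplication at its root
n_dup = count_duplications(G, paths)
```

Here the gene-tree root and its child {a, c} both map to the species-tree root, so the root is flagged as one duplication; a gene tree congruent with the species tree yields zero.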
Trait Acclimation Mitigates Mortality Risks of Tropical Canopy Trees under Global Warming.
Sterck, Frank; Anten, Niels P R; Schieving, Feike; Zuidema, Pieter A
2016-01-01
There is a heated debate about the effect of global change on tropical forests. Many scientists predict large-scale tree mortality while others point to mitigating roles of CO2 fertilization and - the notoriously unknown - physiological trait acclimation of trees. In this opinion article we provided a first quantification of the potential of trait acclimation to mitigate the negative effects of warming on tropical canopy tree growth and survival. We applied a physiological tree growth model that incorporates trait acclimation through an optimization approach. Our model estimated the maximum effect of acclimation when trees optimize traits that are strongly plastic on a week to annual time scale (leaf photosynthetic capacity, total leaf area, stem sapwood area) to maximize carbon gain. We simulated tree carbon gain for temperatures (25-35°C) and ambient CO2 concentrations (390-800 ppm) predicted for the 21st century. Full trait acclimation increased simulated carbon gain by up to 10-20% and the maximum tolerated temperature by up to 2°C, thus reducing risks of tree death under predicted warming. Functional trait acclimation may thus increase the resilience of tropical trees to warming, but cannot prevent tree death during extremely hot and dry years at current CO2 levels. We call for incorporating trait acclimation in field and experimental studies of plant functional traits, and in models that predict responses of tropical forests to climate change.
EXPLANATORY VARIANCE IN MAXIMAL OXYGEN UPTAKE
Jacalyn J. Robert McComb
2006-06-01
The purpose of this study was to develop a prediction equation that could be used to estimate maximal oxygen uptake (VO2max) from a submaximal water running protocol. Thirty-two volunteers (n = 19 males, n = 13 females, ages 18-24 years) underwent the following testing procedures: (a) a 7-site skinfold assessment; (b) a land VO2max running treadmill test; and (c) a 6 min water running test. For the water running submaximal protocol, the participants were fitted with an Aqua Jogger Classic Uni-Sex Belt and a Polar Heart Rate Monitor; the participants' head, shoulders, hips and feet were vertically aligned, using a modified running/bicycle motion. A regression model was used to predict VO2max. The criterion variable, VO2max, was measured using open-circuit calorimetry with the Bruce Treadmill Protocol. Predictor variables included in the model were percent body fat (%BF), height, weight, gender, and heart rate following a 6 min water running protocol. Percent body fat accounted for 76% (r = -0.87, SEE = 3.27) of the variance in VO2max. No other variables significantly contributed to the explained variance in VO2max. The equation for the estimation of VO2max is as follows: VO2max (ml·kg-1·min-1) = 56.14 - 0.92 (%BF).
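The study's regression equation reduces to a single linear term in percent body fat, which can be sketched directly (the coefficients are taken from the abstract; the example input of 20% body fat is illustrative):

```python
def estimate_vo2max(percent_body_fat: float) -> float:
    """Estimate VO2max (ml/kg/min) from percent body fat using the
    reported regression: VO2max = 56.14 - 0.92 * %BF."""
    return 56.14 - 0.92 * percent_body_fat

# Example: a participant with 20% body fat
print(estimate_vo2max(20.0))  # 37.74
```

Note the negative slope: each additional percentage point of body fat lowers the estimated VO2max by 0.92 ml/kg/min, consistent with the strong negative correlation (r = -0.87) reported.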
Reflection quasilattices and the maximal quasilattice
Boyle, Latham; Steinhardt, Paul J.
2016-08-01
We introduce the concept of a reflection quasilattice, the quasiperiodic generalization of a Bravais lattice with irreducible reflection symmetry. Among their applications, reflection quasilattices are the reciprocal (i.e., Bragg diffraction) lattices for quasicrystals and quasicrystal tilings, such as Penrose tilings, with irreducible reflection symmetry and discrete scale invariance. In a follow-up paper, we will show that reflection quasilattices can be used to generate tilings in real space with properties analogous to those in Penrose tilings, but with different symmetries and in various dimensions. Here we explain that reflection quasilattices only exist in dimensions two, three, and four, and we prove that there is a unique reflection quasilattice in dimension four: the "maximal reflection quasilattice" in terms of dimensionality and symmetry. Unlike crystallographic Bravais lattices, all reflection quasilattices are invariant under rescaling by certain discrete scale factors. We tabulate the complete set of scale factors for all reflection quasilattices in dimension d > 2, and for all those with quadratic irrational scale factors in d = 2.
Viral quasispecies assembly via maximal clique enumeration.
Töpfer, Armin; Marschall, Tobias; Bull, Rowena A; Luciani, Fabio; Schönhuth, Alexander; Beerenwinkel, Niko
2014-03-01
Virus populations can display high genetic diversity within individual hosts. The intra-host collection of viral haplotypes, called viral quasispecies, is an important determinant of virulence, pathogenesis, and treatment outcome. We present HaploClique, a computational approach to reconstruct the structure of a viral quasispecies from next-generation sequencing data as obtained from bulk sequencing of mixed virus samples. We develop a statistical model for paired-end reads accounting for mutations, insertions, and deletions. Using an iterative maximal clique enumeration approach, read pairs are assembled into haplotypes of increasing length, eventually enabling global haplotype assembly. The performance of our quasispecies assembly method is assessed on simulated data for varying population characteristics and sequencing technology parameters. Owing to its paired-end handling, HaploClique compares favorably to state-of-the-art haplotype inference methods. It can reconstruct error-free full-length haplotypes from low coverage samples and detect large insertions and deletions at low frequencies. We applied HaploClique to sequencing data derived from a clinical hepatitis C virus population of an infected patient and discovered a novel deletion of length 357±167 bp that was validated by two independent long-read sequencing experiments. HaploClique is available at https://github.com/armintoepfer/haploclique. A summary of this paper appears in the proceedings of the RECOMB 2014 conference, April 2-5.
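The core combinatorial step named in the abstract, maximal clique enumeration, can be illustrated with a toy Bron-Kerbosch sketch on a hypothetical read-compatibility graph (nodes are reads, edges link reads that could belong to the same haplotype); this is an illustration of the general technique, not HaploClique's actual implementation:

```python
def bron_kerbosch(r, p, x, adj, cliques):
    """Enumerate all maximal cliques (basic Bron-Kerbosch, no pivoting).
    r: current clique, p: candidate vertices, x: excluded vertices."""
    if not p and not x:
        cliques.append(sorted(r))
        return
    for v in list(p):
        bron_kerbosch(r | {v}, p & adj[v], x & adj[v], adj, cliques)
        p.remove(v)
        x.add(v)

# Toy compatibility graph: reads 0, 1, 2 are mutually compatible
# (a triangle), and read 3 is compatible only with read 2.
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
adj = {v: set() for v in range(4)}
for a, b in edges:
    adj[a].add(b)
    adj[b].add(a)

cliques = []
bron_kerbosch(set(), set(adj), set(), adj, cliques)
print(sorted(cliques))  # [[0, 1, 2], [2, 3]]
```

Each maximal clique corresponds to a set of mutually compatible reads that can be merged into a longer haplotype fragment, which is the unit the iterative assembly extends.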
Network channel allocation and revenue maximization
Hamalainen, Timo; Joutsensalo, Jyrki
2002-09-01
This paper introduces a model that can be used to share link capacity among customers under different kinds of traffic conditions. The model is suitable for various networks, such as 4G networks (fast wireless access to a wired network), to support connections of a given duration that require a certain quality of service. We study different types of network traffic mixed on the same communication link. A single link is considered a bottleneck, and the goal is to find customer traffic profiles that maximize the revenue of the link. The presented allocation system accepts every call and there is no absolute blocking; instead, the offered data rate per user depends on the network load. The data arrival rate depends on the current link utilization, the user's payment (selected CoS class), and delay. The arrival rate is (i) increasing with respect to the offered data rate, (ii) decreasing with respect to the price, (iii) decreasing with respect to the network load, and (iv) decreasing with respect to the delay. As an example, an explicit formula obeying these conditions is given and analyzed.
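The four monotonicity conditions on the arrival rate can be sketched with a hypothetical functional form (the structure and the coefficients k, a, b, c below are illustrative assumptions, not the paper's explicit formula):

```python
def arrival_rate(rate_mbps: float, price: float, load: float, delay_s: float,
                 k: float = 1.0, a: float = 1.0, b: float = 2.0, c: float = 1.5) -> float:
    """Hypothetical arrival rate satisfying the four stated conditions:
    increasing in offered rate; decreasing in price, load, and delay."""
    return k * rate_mbps / ((1 + price) ** a * (1 + load) ** b * (1 + delay_s) ** c)

def revenue(rate_mbps: float, price: float, load: float, delay_s: float) -> float:
    """Link revenue: price paid per arrival times the arrival rate."""
    return price * arrival_rate(rate_mbps, price, load, delay_s)
```

Under such a form, revenue maximization becomes a trade-off: raising the price increases income per call but suppresses arrivals, so the optimal price balances the two effects for a given load and delay.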
Evolution of correlated multiplexity through stability maximization
Dwivedi, Sanjiv K.; Jalan, Sarika
2017-02-01
Investigating the relation between various structural patterns found in real-world networks and the stability of underlying systems is crucial to understand the importance and evolutionary origin of such patterns. We evolve multiplex networks, comprising antisymmetric couplings in one layer depicting predator-prey relationship and symmetric couplings in the other depicting mutualistic (or competitive) relationship, based on stability maximization through the largest eigenvalue of the corresponding adjacency matrices. We find that there is an emergence of the correlated multiplexity between the mirror nodes as the evolution progresses. Importantly, evolved values of the correlated multiplexity exhibit a dependence on the interlayer coupling strength. Additionally, the interlayer coupling strength governs the evolution of the disassortativity property in the individual layers. We provide analytical understanding to these findings by considering starlike networks representing both the layers. The framework discussed here is useful for understanding principles governing the stability as well as the importance of various patterns in the underlying networks of real-world systems ranging from the brain to ecology which consist of multiple types of interaction behavior.
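The stability measure used in the evolution, the largest eigenvalue of the supra-adjacency matrix of a two-layer multiplex with antisymmetric and symmetric layers, can be sketched as follows (network size, coupling strength, and the random couplings are illustrative assumptions, not the paper's parameters):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10  # nodes per layer

# Layer 1: antisymmetric couplings (predator-prey): A1 = U - U^T
u = np.triu(rng.normal(size=(n, n)), 1)
a1 = u - u.T

# Layer 2: symmetric couplings (mutualistic or competitive): A2 = S + S^T
s = np.triu(rng.normal(size=(n, n)), 1)
a2 = s + s.T

# Interlayer coupling of strength dx between mirror nodes (identity block)
dx = 0.5
supra = np.block([[a1, dx * np.eye(n)],
                  [dx * np.eye(n), a2]])

# Stability proxy: largest real part among the supra-adjacency eigenvalues;
# an evolutionary step would accept rewirings that decrease this value.
lam_max = np.max(np.linalg.eigvals(supra).real)
print(lam_max)
```

Repeating this evaluation across candidate rewirings and retaining the more stable variant is the selection loop under which correlated multiplexity between mirror nodes can emerge.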
Maximal respiratory pressure in healthy Japanese children
Tagami, Miki; Okuno, Yukako; Matsuda, Tadamitsu; Kawamura, Kenta; Shoji, Ryosuke; Tomita, Kazuhide
2017-01-01
[Purpose] Normal values for respiratory muscle pressures during development in Japanese children have not been reported. The purpose of this study was to investigate respiratory muscle pressures in Japanese children aged 3–12 years. [Subjects and Methods] We measured respiratory muscle pressure values using a manovacuometer without a nose clip, with subjects in a sitting position. Data were collected for ages 3–6 (Group I: 68 subjects), 7–9 (Group II: 86 subjects), and 10–12 (Group III: 64 subjects) years. [Results] Respiratory muscle pressures in children increased significantly with age in both sexes and were higher in boys than in girls. Correlation coefficients between maximal respiratory pressure and age, height, and weight were significant for each sex, ranging from 0.279 to 0.471. [Conclusion] In this study, we established pediatric respiratory muscle pressure reference values for each age group. The values for respiratory muscle pressures were lower than those reported in Brazilian studies, suggesting that respiratory muscle pressures vary with ethnicity. PMID:28356644
Maximizing exosome colloidal stability following electroporation.
Hood, Joshua L; Scott, Michael J; Wickline, Samuel A
2014-03-01
Development of exosome-based semisynthetic nanovesicles for diagnostic and therapeutic purposes requires novel approaches to load exosomes with cargo. Electroporation has previously been used to load exosomes with RNA. However, investigations into exosome colloidal stability following electroporation have not been considered. Herein, we report the development of a unique trehalose pulse media (TPM) that minimizes exosome aggregation following electroporation. Dynamic light scattering (DLS) and RNA absorbance were employed to determine the extent of exosome aggregation and electroextraction post electroporation in TPM compared to common PBS pulse media or sucrose pulse media (SPM). Use of TPM to disaggregate melanoma exosomes post electroporation was dependent on both exosome concentration and electric field strength. TPM maximized exosome dispersal post electroporation for both homogenous B16 melanoma and heterogeneous human serum-derived populations of exosomes. Moreover, TPM enabled heavy cargo loading of melanoma exosomes with 5 nm superparamagnetic iron oxide nanoparticles (SPION5) while maintaining original exosome size and minimizing exosome aggregation as evidenced by transmission electron microscopy. Loading exosomes with SPION5 increased exosome density on sucrose gradients. This provides a simple, label-free means of enriching exogenously modified exosomes and introduces the potential for MRI-driven theranostic exosome investigations in vivo.