WorldWideScience

Sample records for maximum common subgraph

  1. CytoMCS: A Multiple Maximum Common Subgraph Detection Tool for Cytoscape

    DEFF Research Database (Denmark)

    Larsen, Simon; Baumbach, Jan

    2017-01-01

    … such analyses we have developed CytoMCS, a Cytoscape app for computing inexact solutions to the maximum common edge subgraph problem for two or more graphs. Our algorithm uses an iterative local search heuristic for computing conserved subgraphs, optimizing a squared edge conservation score that is able to detect not only fully conserved edges but also partially conserved edges. It can be applied to any set of directed or undirected, simple graphs loaded as networks into Cytoscape, e.g. protein-protein interaction networks or gene regulatory networks. CytoMCS is available as a Cytoscape app at http://apps.cytoscape.org/apps/cytomcs.

  2. Robust cell tracking in epithelial tissues through identification of maximum common subgraphs.

    Science.gov (United States)

    Kursawe, Jochen; Bardenet, Rémi; Zartman, Jeremiah J; Baker, Ruth E; Fletcher, Alexander G

    2016-11-01

    Tracking of cells in live-imaging microscopy videos of epithelial sheets is a powerful tool for investigating fundamental processes in embryonic development. Characterizing cell growth, proliferation, intercalation and apoptosis in epithelia helps us to understand how morphogenetic processes such as tissue invagination and extension are locally regulated and controlled. Accurate cell tracking requires correctly resolving cells entering or leaving the field of view between frames, cell neighbour exchanges, cell removals and cell divisions. However, current tracking methods for epithelial sheets are not robust to large morphogenetic deformations and require significant manual interventions. Here, we present a novel algorithm for epithelial cell tracking, exploiting the graph-theoretic concept of a 'maximum common subgraph' to track cells between frames of a video. Our algorithm does not require the adjustment of tissue-specific parameters, and scales in sub-quadratic time with tissue size. It does not rely on precise positional information, permitting large cell movements between frames and enabling tracking in datasets acquired at low temporal resolution due to experimental constraints such as phototoxicity. To demonstrate the method, we perform tracking on the Drosophila embryonic epidermis and compare cell-cell rearrangements to previous studies in other tissues. Our implementation is open source and generally applicable to epithelial tissues. © 2016 The Authors.
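
    The abstract does not include the authors' implementation, but the graph-theoretic core is easy to demonstrate. Below is a minimal sketch (assuming networkx; exponential in the worst case, unlike the paper's sub-quadratic heuristic) of maximum common induced subgraph detection via the classic reduction to maximum clique in the modular product graph; the two toy frames stand in for cell-adjacency graphs of consecutive video frames.

```python
import networkx as nx
from itertools import product

def modular_product(g1, g2):
    """Modular product of two graphs: its cliques correspond one-to-one to
    common induced subgraphs of g1 and g2."""
    p = nx.Graph()
    p.add_nodes_from(product(g1.nodes, g2.nodes))
    nodes = list(p.nodes)
    for i, (u1, v1) in enumerate(nodes):
        for u2, v2 in nodes[i + 1:]:
            if u1 == u2 or v1 == v2:
                continue
            # a pair of pairs is compatible when both sides agree:
            # adjacent in both graphs, or non-adjacent in both
            if g1.has_edge(u1, u2) == g2.has_edge(v1, v2):
                p.add_edge((u1, v1), (u2, v2))
    return p

def maximum_common_subgraph(g1, g2):
    """Node correspondences of a maximum common induced subgraph,
    via a maximum clique in the modular product (fine for toy graphs)."""
    clique, _ = nx.max_weight_clique(modular_product(g1, g2), weight=None)
    return clique  # list of (node_in_g1, node_in_g2) pairs

# toy "frames": a 5-cycle of cells and a 5-path of cells share a 4-node path
frame1, frame2 = nx.cycle_graph(5), nx.path_graph(5)
print(maximum_common_subgraph(frame1, frame2))
```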

  3. Induced subgraph searching for geometric model fitting

    Science.gov (United States)

    Xiao, Fan; Xiao, Guobao; Yan, Yan; Wang, Xing; Wang, Hanzi

    2017-11-01

    In this paper, we propose a novel model fitting method based on graphs to fit and segment multiple-structure data. In the graph constructed on data, each model instance is represented as an induced subgraph. Following the idea of pursuing the maximum consensus, the multiple geometric model fitting problem is formulated as searching for a set of induced subgraphs including the maximum union set of vertices. After the generation and refinement of the induced subgraphs that represent the model hypotheses, the searching process is conducted on the "qualified" subgraphs. Multiple model instances can be simultaneously estimated by solving a converted problem. Then, we introduce the energy evaluation function to determine the number of model instances in data. The proposed method is able to effectively estimate the number and the parameters of model instances in data severely corrupted by outliers and noise. Experimental results on synthetic data and real images validate the favorable performance of the proposed method compared with several state-of-the-art fitting methods.

  4. An algorithm for finding a similar subgraph of all Hamiltonian cycles

    Science.gov (United States)

    Wafdan, R.; Ihsan, M.; Suhaimi, D.

    2018-01-01

    This paper discusses an algorithm, called the findSimSubG algorithm, to find a similar subgraph. A similar subgraph is a subgraph with a maximum number of edges that contains no isolated vertex and is contained in every Hamiltonian cycle of a Hamiltonian graph. The algorithm runs only on Hamiltonian graphs with at least two Hamiltonian cycles. The algorithm works by examining whether the initial subgraph of the first Hamiltonian cycle is a subgraph of the comparison graphs. If the initial subgraph is not in the comparison graphs, the algorithm removes edges and vertices of the initial subgraph that are not in the comparison graphs. There are two main processes in the algorithm: changing a Hamiltonian cycle into a cycle graph, and removing edges and vertices of the initial subgraph that are not in the comparison graphs. The findSimSubG algorithm can find the similar subgraph without using a backtracking method. The similar subgraph cannot be found on certain graphs, such as an n-antiprism graph, complete bipartite graph, complete graph, 2n-crossed prism graph, n-crown graph, n-Möbius ladder, prism graph, and wheel graph. The complexity of this algorithm is O(m|V|), where m is the number of Hamiltonian cycles and |V| is the number of vertices of the Hamiltonian graph.
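
    The abstract pins down what the output is, if not how findSimSubG computes it: the edges shared by every Hamiltonian cycle, with isolated vertices dropped. A brute-force sketch of that definition (assuming networkx; factorial-time cycle enumeration, not the paper's O(m|V|) procedure):

```python
import networkx as nx
from itertools import permutations

def hamiltonian_cycle_edge_sets(g):
    """Enumerate the edge sets of all Hamiltonian cycles by brute force
    (factorial time; only sensible for small graphs)."""
    nodes = list(g.nodes)
    seen = set()
    for perm in permutations(nodes[1:]):
        cycle = [nodes[0], *perm, nodes[0]]
        edges = frozenset(frozenset(e) for e in zip(cycle, cycle[1:]))
        if edges not in seen and all(g.has_edge(*e) for e in map(tuple, edges)):
            seen.add(edges)
            yield edges

def similar_subgraph(g):
    """Edges contained in every Hamiltonian cycle; isolated vertices are
    dropped automatically because only common edges are added."""
    cycles = list(hamiltonian_cycle_edge_sets(g))
    if len(cycles) < 2:
        return None  # the algorithm assumes at least two Hamiltonian cycles
    sub = nx.Graph()
    sub.add_edges_from(map(tuple, frozenset.intersection(*cycles)))
    return sub

# C6 plus chords (2,5) and (0,3): vertices 1 and 4 have degree 2, so their
# four incident edges must lie on every Hamiltonian cycle
g = nx.Graph([(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0), (2, 5), (0, 3)])
print(sorted(similar_subgraph(g).edges))  # [(0, 1), (1, 2), (3, 4), (4, 5)]
```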

  5. The index-based subgraph matching algorithm (ISMA): fast subgraph enumeration in large networks using optimized search trees.

    Directory of Open Access Journals (Sweden)

    Sofie Demeyer

    Subgraph matching algorithms are designed to find all instances of predefined subgraphs in a large graph or network and play an important role in the discovery and analysis of so-called network motifs, subgraph patterns which occur more often than expected by chance. We present the index-based subgraph matching algorithm (ISMA), a novel tree-based algorithm. ISMA realizes a speedup compared to existing algorithms by carefully selecting the order in which the nodes of a query subgraph are investigated. In order to achieve this, we developed a number of data structures and maximally exploited symmetry characteristics of the subgraph. We compared ISMA to a naive recursive tree-based algorithm and to a number of well-known subgraph matching algorithms. Our algorithm outperforms the other algorithms, especially on large networks and with large query subgraphs. An implementation of ISMA in Java is freely available at http://sourceforge.net/projects/isma/.

  6. The Index-Based Subgraph Matching Algorithm (ISMA): Fast Subgraph Enumeration in Large Networks Using Optimized Search Trees

    Science.gov (United States)

    Demeyer, Sofie; Michoel, Tom; Fostier, Jan; Audenaert, Pieter; Pickavet, Mario; Demeester, Piet

    2013-01-01

    Subgraph matching algorithms are designed to find all instances of predefined subgraphs in a large graph or network and play an important role in the discovery and analysis of so-called network motifs, subgraph patterns which occur more often than expected by chance. We present the index-based subgraph matching algorithm (ISMA), a novel tree-based algorithm. ISMA realizes a speedup compared to existing algorithms by carefully selecting the order in which the nodes of a query subgraph are investigated. In order to achieve this, we developed a number of data structures and maximally exploited symmetry characteristics of the subgraph. We compared ISMA to a naive recursive tree-based algorithm and to a number of well-known subgraph matching algorithms. Our algorithm outperforms the other algorithms, especially on large networks and with large query subgraphs. An implementation of ISMA in Java is freely available at http://sourceforge.net/projects/isma/. PMID:23620730
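
    ISMA's central idea, choosing the order in which query nodes are matched so that the search tree is pruned early, can be illustrated with a simplified backtracking matcher (a sketch, not the ISMA implementation; it enumerates monomorphisms and orders query nodes by descending degree):

```python
import networkx as nx

def enumerate_matches(query, target):
    """Enumerate subgraph monomorphisms query -> target, placing the most
    constrained (highest-degree) query nodes first to prune the search early."""
    order = sorted(query.nodes, key=query.degree, reverse=True)

    def extend(mapping):
        if len(mapping) == len(order):
            yield dict(mapping)
            return
        q = order[len(mapping)]
        used = set(mapping.values())
        for t in target.nodes:
            # each already-mapped query neighbour of q must map to a neighbour of t
            if t not in used and all(mapping[p] in target[t]
                                     for p in query[q] if p in mapping):
                mapping[q] = t
                yield from extend(mapping)
                del mapping[q]

    yield from extend({})

triangle = nx.complete_graph(3)
print(len(list(enumerate_matches(triangle, nx.complete_graph(4)))))
# -> 24: K4 contains 4 triangles, each found under 3! node orderings;
# ISMA additionally exploits such query symmetries to avoid duplicate reports
```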

  7. The Index-Based Subgraph Matching Algorithm (ISMA): Fast Subgraph Enumeration in Large Networks Using Optimized Search Trees

    OpenAIRE

    Demeyer, Sofie; Michoel, Tom; Fostier, Jan; Audenaert, Pieter; Pickavet, Mario; Demeester, Piet

    2013-01-01

    Subgraph matching algorithms are designed to find all instances of predefined subgraphs in a large graph or network and play an important role in the discovery and analysis of so-called network motifs, subgraph patterns which occur more often than expected by chance. We present the index-based subgraph matching algorithm (ISMA), a novel tree-based algorithm. ISMA realizes a speedup compared to existing algorithms by carefully selecting the order in which the nodes of a query subgraph are inve...

  8. Scalable Frequent Subgraph Mining

    KAUST Repository

    Abdelhamid, Ehab

    2017-01-01

    Given an input graph, the Frequent Subgraph Mining (FSM) task finds all subgraphs with frequencies exceeding a given threshold. FSM is crucial for graph analysis, and it is an essential building block in a variety ...

  9. Maximum r-Regular Induced Subgraph Problem: Fast Exponential Algorithms and Combinatorial Bounds

    DEFF Research Database (Denmark)

    Gupta, S.; Raman, V.; Saurabh, S.

    2012-01-01

    We show that for a fixed r, the number of maximal r-regular induced subgraphs in any graph with n vertices is upper bounded by O(c^n), where c is a positive constant strictly less than 2. This bound generalizes the well-known result of Moon and Moser, who showed an upper bound of 3^(n/3) on the n...

  10. Mining connected global and local dense subgraphs for bigdata

    Science.gov (United States)

    Wu, Bo; Shen, Haiying

    2016-01-01

    The problem of discovering connected dense subgraphs of natural graphs is important in data analysis. Discovering dense subgraphs that do not contain denser subgraphs or are not contained in denser subgraphs (called significant dense subgraphs) is also critical for wide-ranging applications. In spite of many works on discovering dense subgraphs, there are no algorithms that can guarantee the connectivity of the returned subgraphs or discover significant dense subgraphs. Hence, in this paper, we define two subgraph discovery problems to discover connected and significant dense subgraphs, propose polynomial-time algorithms and theoretically prove their validity. We also propose an algorithm to further improve the time and space efficiency of our basic algorithm for discovering significant dense subgraphs in big data by taking advantage of the unique features of large natural graphs. In the experiments, we use massive natural graphs to evaluate our algorithms in comparison with previous algorithms. The experimental results show the effectiveness of our algorithms for the two problems and their efficiency. This work is also the first that reveals the physical significance of significant dense subgraphs in natural graphs from different domains.
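
    For contrast, the standard greedy peeling heuristic for the densest-subgraph problem (a 1/2-approximation for maximizing |E|/|V|) shows exactly the gap this paper targets: the subgraph it returns need not be connected. A sketch assuming networkx, with a barbell graph as a toy input:

```python
import networkx as nx

def peel_densest_subgraph(g):
    """Greedy peeling: repeatedly delete a minimum-degree vertex and remember
    the densest intermediate graph (density = |E|/|V|)."""
    h = g.copy()
    best_nodes = list(h.nodes)
    best_density = h.number_of_edges() / max(h.number_of_nodes(), 1)
    while h.number_of_nodes() > 1:
        h.remove_node(min(h.nodes, key=h.degree))
        density = h.number_of_edges() / h.number_of_nodes()
        if density > best_density:
            best_density, best_nodes = density, list(h.nodes)
    return g.subgraph(best_nodes), best_density

# two 5-cliques joined by a path: peeling keeps BOTH cliques (density 2.0),
# a disconnected answer - the situation the algorithms above rule out
g = nx.barbell_graph(5, 2)
sub, density = peel_densest_subgraph(g)
print(density, nx.is_connected(sub))  # 2.0 False
```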

  11. Heavy subgraph pairs for traceability of block-chains

    Directory of Open Access Journals (Sweden)

    Li Binlong

    2014-05-01

    A graph is called traceable if it contains a Hamilton path, i.e., a path containing all its vertices. Let G be a graph on n vertices. We say that an induced subgraph of G is o−1-heavy if it contains two nonadjacent vertices which satisfy an Ore-type degree condition for traceability, i.e., with degree sum at least n−1 in G. A block-chain is a graph whose block graph is a path, i.e., it is either a P1, a P2, or a 2-connected graph, or a graph with at least one cut vertex and exactly two end-blocks. Obviously, every traceable graph is a block-chain, but the reverse does not hold. In this paper we characterize all pairs of connected o−1-heavy graphs that guarantee traceability of block-chains. Our main result is a common extension of earlier work on degree sum conditions, forbidden subgraph conditions and heavy subgraph conditions for traceability.

  12. Characterization of subgraph relationships and distribution in complex networks

    International Nuclear Information System (INIS)

    Antiqueira, Lucas; Fontoura Costa, Luciano da

    2009-01-01

    A network can be analyzed at different topological scales, ranging from single nodes to motifs, communities, up to the complete structure. We propose a novel approach which extends from single nodes to the whole network level by considering non-overlapping subgraphs (i.e. connected components) and their interrelationships and distribution through the network. Though such subgraphs can be completely general, our methodology focuses on the cases in which the nodes of these subgraphs share some special feature, such as being critical for the proper operation of the network. The methodology of subgraph characterization involves two main aspects: (i) the generation of histograms of subgraph sizes and distances between subgraphs and (ii) a merging algorithm, developed to assess the relevance of nodes outside subgraphs by progressively merging subgraphs until the whole network is covered. The latter procedure complements the histograms by taking into account the nodes lying between subgraphs, as well as the relevance of these nodes to the overall subgraph interconnectivity. Experiments were carried out using four types of network models and five instances of real-world networks, in order to illustrate how subgraph characterization can help complementing complex network-based studies.

  13. A New Augmentation Based Algorithm for Extracting Maximal Chordal Subgraphs.

    Science.gov (United States)

    Bhowmick, Sanjukta; Chen, Tzu-Yi; Halappanavar, Mahantesh

    2015-02-01

    A graph is chordal if every cycle of length greater than three contains an edge between non-adjacent vertices. Chordal graphs are of interest both theoretically, since they admit polynomial time solutions to a range of NP-hard graph problems, and practically, since they arise in many applications including sparse linear algebra, computer vision, and computational biology. A maximal chordal subgraph is a chordal subgraph that is not a proper subgraph of any other chordal subgraph. Existing algorithms for computing maximal chordal subgraphs depend on dynamically ordering the vertices, which is an inherently sequential process and therefore limits the algorithms' parallelizability. In this paper we explore techniques to develop a scalable parallel algorithm for extracting a maximal chordal subgraph. We demonstrate that an earlier attempt at developing a parallel algorithm may induce a non-optimal vertex ordering and is therefore not guaranteed to terminate with a maximal chordal subgraph. We then give a new algorithm that first computes and then repeatedly augments a spanning chordal subgraph. After proving that the algorithm terminates with a maximal chordal subgraph, we then demonstrate that this algorithm is more amenable to parallelization and that the parallel version also terminates with a maximal chordal subgraph. That said, the complexity of the new algorithm is higher than that of the previous parallel algorithm, although the earlier algorithm computes a chordal subgraph which is not guaranteed to be maximal. We experimented with our augmentation-based algorithm on both synthetic and real-world graphs. We provide scalability results and also explore the effect of different choices for the initial spanning chordal subgraph on both the running time and on the number of edges in the maximal chordal subgraph.
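
    A serial sketch of the augmentation idea, using networkx's chordality test (a simplification, not the paper's parallel algorithm): start from a spanning forest, which is trivially chordal, and keep adding edges of the input graph that preserve chordality until a full sweep adds nothing, which guarantees maximality.

```python
import networkx as nx

def maximal_chordal_subgraph(g):
    """Compute a spanning chordal subgraph (a spanning forest), then repeatedly
    augment it with edges of g that keep it chordal."""
    sub = nx.Graph()
    sub.add_nodes_from(g.nodes)  # keep isolated vertices
    sub.add_edges_from(nx.minimum_spanning_edges(g, data=False))
    remaining = {frozenset(e) for e in g.edges} - {frozenset(e) for e in sub.edges}
    changed = True
    while changed:          # repeat sweeps: an edge rejected early may fit later
        changed = False
        for e in list(remaining):
            u, v = tuple(e)
            sub.add_edge(u, v)
            if nx.is_chordal(sub):
                remaining.remove(e)
                changed = True
            else:
                sub.remove_edge(u, v)
    return sub

g = nx.cycle_graph(4)
g.add_edge(0, 2)  # C4 plus a chord is already chordal
print(sorted(maximal_chordal_subgraph(g).edges))  # all five edges survive
```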

  14. Incremental Frequent Subgraph Mining on Large Evolving Graphs

    KAUST Repository

    Abdelhamid, Ehab

    2017-08-22

    Frequent subgraph mining is a core graph operation used in many domains, such as graph data management and knowledge exploration, bioinformatics and security. Most existing techniques target static graphs. However, modern applications, such as social networks, utilize large evolving graphs. Mining these graphs using existing techniques is infeasible, due to the high computational cost. In this paper, we propose IncGM+, a fast incremental approach for the continuous frequent subgraph mining problem on a single large evolving graph. We adapt the notion of “fringe” to the graph context, that is, the set of subgraphs on the border between frequent and infrequent subgraphs. IncGM+ maintains fringe subgraphs and exploits them to prune the search space. To boost the efficiency, we propose an efficient index structure to maintain selected embeddings with minimal memory overhead. These embeddings are utilized to avoid redundant expensive subgraph isomorphism operations. Moreover, the proposed system supports batch updates. Using large real-world graphs, we experimentally verify that IncGM+ outperforms existing methods by up to three orders of magnitude, scales to much larger graphs and consumes less memory.

  15. GRAMI: Frequent subgraph and pattern mining in a single large graph

    KAUST Repository

    Elseidy, M.

    2014-01-01

    Mining frequent subgraphs is an important operation on graphs; it is defined as finding all subgraphs that appear frequently in a database according to a given frequency threshold. Most existing work assumes a database of many small graphs, but modern applications, such as social networks, citation graphs, or protein-protein interactions in bioinformatics, are modeled as a single large graph. In this paper we present GRAMI, a novel framework for frequent subgraph mining in a single large graph. GRAMI undertakes a novel approach that only finds the minimal set of instances to satisfy the frequency threshold and avoids the costly enumeration of all instances required by previous approaches. We accompany our approach with a heuristic and optimizations that significantly improve performance. Additionally, we present an extension of GRAMI that mines frequent patterns. Compared to subgraphs, patterns offer a more powerful version of matching that captures transitive interactions between graph nodes (like friend of a friend) which are very common in modern applications. Finally, we present CGRAMI, a version supporting structural and semantic constraints, and AGRAMI, an approximate version producing results with no false positives. Our experiments on real data demonstrate that our framework is up to 2 orders of magnitude faster and discovers more interesting patterns than existing approaches. © 2014 VLDB Endowment.
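
    GRAMI measures frequency in a single graph with the anti-monotone minimum-image-based (MNI) support. The naive reference implementation below (assuming networkx) makes the definition concrete by full enumeration, which is precisely the cost GRAMI is designed to avoid:

```python
import networkx as nx
from networkx.algorithms import isomorphism

def mni_support(pattern, graph):
    """MNI support: for each pattern node, count the distinct graph nodes it
    can be mapped to across all matches; the support is the minimum count."""
    images = {p: set() for p in pattern.nodes}
    matcher = isomorphism.GraphMatcher(graph, pattern)
    for mapping in matcher.subgraph_monomorphisms_iter():  # graph node -> pattern node
        for g_node, p_node in mapping.items():
            images[p_node].add(g_node)
    return min(len(image) for image in images.values())

# support of a 2-edge path pattern in a small stand-in network
print(mni_support(nx.path_graph(3), nx.karate_club_graph()))
```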

  16. Performance Evaluation of Frequent Subgraph Discovery Techniques

    Directory of Open Access Journals (Sweden)

    Saif Ur Rehman

    2014-01-01

    Due to the rapid development of Internet technology and new scientific advances, the number of applications that model data as graphs increases, because graphs have highly expressive power to model complicated structures. Graph mining is a well-explored area of research which is gaining popularity in the data mining community. A graph is a general model to represent data and has been used in many domains such as cheminformatics, web information management systems, computer networks, and bioinformatics, to name a few. In graph mining, frequent subgraph discovery is a challenging task. Frequent subgraph mining is concerned with discovery of those subgraphs from a graph dataset which have frequent or multiple instances within the given graph dataset. In the literature a large number of frequent subgraph mining algorithms have been proposed; these include FSG, AGM, gSpan, CloseGraph, SPIN, Gaston, and Mofa. The objective of this research work is to perform a quantitative comparison of the above listed techniques. The performances of these techniques have been evaluated through a number of experiments based on three different state-of-the-art graph datasets. This novel work will provide a basis for anyone who is working to design a new frequent subgraph discovery technique.

  17. Infinitely connected subgraphs in graphs of uncountable chromatic number

    DEFF Research Database (Denmark)

    Thomassen, Carsten

    2016-01-01

    Erdős and Hajnal conjectured in 1966 that every graph of uncountable chromatic number contains a subgraph of infinite connectivity. We prove that every graph of uncountable chromatic number has a subgraph which has uncountable chromatic number and infinite edge-connectivity. We also prove that, if each orientation of a graph G has a vertex of infinite outdegree, then G contains an uncountable subgraph of infinite edge-connectivity.

  18. Scalable Frequent Subgraph Mining

    KAUST Repository

    Abdelhamid, Ehab

    2017-06-19

    A graph is a data structure that contains a set of nodes and a set of edges connecting these nodes. Nodes represent objects while edges model relationships among these objects. Graphs are used in various domains due to their ability to model complex relations among several objects. Given an input graph, the Frequent Subgraph Mining (FSM) task finds all subgraphs with frequencies exceeding a given threshold. FSM is crucial for graph analysis, and it is an essential building block in a variety of applications, such as graph clustering and indexing. FSM is computationally expensive, and its existing solutions are extremely slow. Consequently, these solutions are incapable of mining modern large graphs. This slowness is caused by the underlying approaches of these solutions which require finding and storing an excessive amount of subgraph matches. This dissertation proposes a scalable solution for FSM that avoids the limitations of previous work. This solution is composed of four components. The first component is a single-threaded technique which, for each candidate subgraph, needs to find only a minimal number of matches. The second component is a scalable parallel FSM technique that utilizes a novel two-phase approach. The first phase quickly builds an approximate search space, which is then used by the second phase to optimize and balance the workload of the FSM task. The third component focuses on accelerating frequency evaluation, which is a critical step in FSM. To do so, a machine learning model is employed to predict the type of each graph node, and accordingly, an optimized method is selected to evaluate that node. The fourth component focuses on mining dynamic graphs, such as social networks. To this end, an incremental index is maintained during the dynamic updates. Only this index is processed and updated for the majority of graph updates. Consequently, search space is significantly pruned and efficiency is improved. The empirical evaluation shows that the

  19. Subgraphs in vertex neighborhoods of K_r-free graphs

    DEFF Research Database (Denmark)

    Bang-Jensen, J.; Brandt, Stephan

    2004-01-01

    In a K_r-free graph, the neighborhood of every vertex induces a K_(r−1)-free subgraph. The K_r-free graphs with the converse property that every induced K_(r−1)-free subgraph is contained in the neighborhood of a vertex are characterized, based on the characterization in the case r = 3 due to Pach [8].

  20. Finding Hierarchical and Overlapping Dense Subgraphs using Nucleus Decompositions

    Energy Technology Data Exchange (ETDEWEB)

    Seshadhri, Comandur [The Ohio State Univ., Columbus, OH (United States); Pinar, Ali [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Sariyuce, Ahmet Erdem [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Catalyurek, Umit [The Ohio State Univ., Columbus, OH (United States)

    2014-11-01

    Finding dense substructures in a graph is a fundamental graph mining operation, with applications in bioinformatics, social networks, and visualization to name a few. Yet most standard formulations of this problem (like clique, quasi-clique, k-densest subgraph) are NP-hard. Furthermore, the goal is rarely to find the "true optimum", but to identify many (if not all) dense substructures, understand their distribution in the graph, and ideally determine a hierarchical structure among them. Current dense subgraph finding algorithms usually optimize some objective, and only find a few such subgraphs without providing any hierarchy. It is also not clear how to account for overlaps in dense substructures. We define the nucleus decomposition of a graph, which represents the graph as a forest of nuclei. Each nucleus is a subgraph where smaller cliques are present in many larger cliques. The forest of nuclei is a hierarchy by containment, where the edge density increases as we proceed towards leaf nuclei. Sibling nuclei can have limited intersections, which allows for discovery of overlapping dense subgraphs. With the right parameters, the nucleus decomposition generalizes the classic notions of k-cores and k-trusses. We give provably efficient algorithms for nucleus decompositions, and empirically evaluate their behavior in a variety of real graphs. The tree of nuclei consistently gives a global, hierarchical snapshot of dense substructures, and outputs dense subgraphs of higher quality than other state-of-the-art solutions. Our algorithm can process graphs with tens of millions of edges in less than an hour.
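
    The k-core and k-truss end of that spectrum is available in off-the-shelf tooling and makes a handy baseline when exploring nucleus-style decompositions; a small example assuming networkx, with the karate-club graph as a stand-in dataset:

```python
import networkx as nx

g = nx.karate_club_graph()    # stand-in dataset
cores = nx.core_number(g)     # for each node, the largest k with the node in a k-core
k3 = nx.k_core(g, k=3)        # maximal subgraph with minimum degree >= 3
t4 = nx.k_truss(g, k=4)       # maximal subgraph where every edge is in >= k-2 triangles
print(max(cores.values()), k3.number_of_nodes(), t4.number_of_nodes())
```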

  1. Bell-type inequalities embedded in the subgraph of graph states

    International Nuclear Information System (INIS)

    Hsu, L.-Y.

    2006-01-01

    We investigate the Bell-type inequalities of graph states. In this paper, Bell-type inequalities can be derived based on two kinds of associated subgraphs of the graph states. First, the star subgraphs lead to the maximal violation of the modified Seevinck-Svetlichny inequalities. Second, cycle subgraphs lead to maximal violation of Bell-type inequalities. As a result, once the associated graph of a graph state is given, the corresponding Bell operators can be immediately determined using stabilizing generators. In the above Bell-type inequalities, two measurement settings for each party are required.

  2. Bounds for the b-Chromatic Number of Subgraphs and Edge-Deleted Subgraphs

    Directory of Open Access Journals (Sweden)

    Francis P.

    2016-11-01

    A b-coloring of a graph G with k colors is a proper coloring of G using k colors in which each color class contains a color dominating vertex, that is, a vertex which has a neighbor in each of the other color classes. The largest positive integer k for which G has a b-coloring using k colors is the b-chromatic number b(G) of G. In this paper, we obtain bounds for the b-chromatic number of induced subgraphs in terms of the b-chromatic number of the original graph. This turns out to be a generalization of the result due to R. Balakrishnan et al. [Bounds for the b-chromatic number of G−v, Discrete Appl. Math. 161 (2013) 1173-1179]. Also we show that for any connected graph G and any e ∈ E(G), b(G − e) ≤ b(G) + … − 2. Further, we determine all graphs which attain the upper bound. Finally, we conclude by finding a bound for the b-chromatic number of any subgraph.

  3. A Simulated Annealing Algorithm for Maximum Common Edge Subgraph Detection in Biological Networks

    DEFF Research Database (Denmark)

    Larsen, Simon; Alkærsig, Frederik G.; Ditzel, Henrik

    2016-01-01

    Network alignment is a challenging computational problem that identifies node or edge mappings between two or more networks, with the aim to unravel common patterns among them. Pairwise network alignment is already intractable, making multiple network comparison even more difficult. Here, we intr...

  4. GRAMI: Frequent subgraph and pattern mining in a single large graph

    KAUST Repository

    Elseidy, M.; Abdelhamid, Ehab; Skiadopoulos, S.; Kalnis, Panos

    2014-01-01

    Mining frequent subgraphs is an important operation on graphs; it is defined as finding all subgraphs that appear frequently in a database according to a given frequency threshold. Most existing work assumes a database of many small graphs ...

  5. Auditing SNOMED CT hierarchical relations based on lexical features of concepts in non-lattice subgraphs.

    Science.gov (United States)

    Cui, Licong; Bodenreider, Olivier; Shi, Jay; Zhang, Guo-Qiang

    2018-02-01

    We introduce a structural-lexical approach for auditing SNOMED CT using a combination of non-lattice subgraphs of the underlying hierarchical relations and enriched lexical attributes of fully specified concept names. Our goal is to develop a scalable and effective approach that automatically identifies missing hierarchical IS-A relations. Our approach involves 3 stages. In stage 1, all non-lattice subgraphs of SNOMED CT's IS-A hierarchical relations are extracted. In stage 2, lexical attributes of fully-specified concept names in such non-lattice subgraphs are extracted. For each concept in a non-lattice subgraph, we enrich its set of attributes with attributes from its ancestor concepts within the non-lattice subgraph. In stage 3, subset inclusion relations between the lexical attribute sets of each pair of concepts in each non-lattice subgraph are compared to existing IS-A relations in SNOMED CT. For concept pairs within each non-lattice subgraph, if a subset relation is identified but an IS-A relation is not present in SNOMED CT IS-A transitive closure, then a missing IS-A relation is reported. The September 2017 release of SNOMED CT (US edition) was used in this investigation. A total of 14,380 non-lattice subgraphs were extracted, from which we suggested a total of 41,357 missing IS-A relations. For evaluation purposes, 200 non-lattice subgraphs were randomly selected from 996 smaller subgraphs (of size 4, 5, or 6) within the "Clinical Finding" and "Procedure" sub-hierarchies. Two domain experts confirmed 185 (among 223) suggested missing IS-A relations, a precision of 82.96%. Our results demonstrate that analyzing the lexical features of concepts in non-lattice subgraphs is an effective approach for auditing SNOMED CT. Copyright © 2017 Elsevier Inc. All rights reserved.
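
    The stage-3 subset test can be shown in miniature. In this toy sketch (hypothetical inputs and concept names, not the authors' code), attrs maps each concept in a non-lattice subgraph to its enriched lexical attribute set, and isa_closure holds the transitive IS-A pairs as (descendant, ancestor):

```python
def suggest_missing_isa(attrs, isa_closure):
    """If concept a's enriched lexical attributes strictly contain concept b's,
    a should be a descendant of b; report the pair when the IS-A transitive
    closure lacks it."""
    suggestions = []
    for a, a_set in attrs.items():
        for b, b_set in attrs.items():
            if a != b and a_set > b_set and (a, b) not in isa_closure:
                suggestions.append((a, b))
    return suggestions

# toy non-lattice subgraph: one concept lexically subsumes the other
attrs = {
    "viral pneumonia": {"viral", "pneumonia"},
    "acute viral pneumonia": {"acute", "viral", "pneumonia"},
}
print(suggest_missing_isa(attrs, isa_closure=set()))
# -> [('acute viral pneumonia', 'viral pneumonia')]
```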

  6. Isolating highly connected induced subgraphs

    DEFF Research Database (Denmark)

    Penev, Irena; Thomasse, Stephan; Trotignon, Nicolas

    2016-01-01

    We prove that any graph G of minimum degree greater than 2k(2) - 1 has a (k + 1)-connected induced subgraph H such that the number of vertices of H that have neighbors outside of H is at most 2k(2) - 1. This generalizes a classical result of Mader, which states that a high minimum degree implies ...

  7. GRAMI: Generalized Frequent Subgraph Mining in Large Graphs

    KAUST Repository

    El Saeedy, Mohammed El Sayed

    2011-07-24

    Mining frequent subgraphs is an important operation on graphs. Most existing work assumes a database of many small graphs, but modern applications, such as social networks, citation graphs or protein-protein interaction in bioinformatics, are modeled as a single large graph. Interesting interactions in such applications may be transitive (e.g., friend of a friend). Existing methods, however, search for frequent isomorphic (i.e., exact match) subgraphs and cannot discover many useful patterns. In this paper we propose GRAMI, a framework that generalizes frequent subgraph mining in a large single graph. GRAMI discovers frequent patterns. A pattern is a graph where edges are generalized to distance-constrained paths. Depending on the definition of the distance function, many instantiations of the framework are possible. Both directed and undirected graphs, as well as multiple labels per vertex, are supported. We developed an efficient implementation of the framework that models the frequency resolution phase as a constraint satisfaction problem, in order to avoid the costly enumeration of all instances of each pattern in the graph. We also implemented CGRAMI, a version that supports structural and semantic constraints; and AGRAMI, an approximate version that supports very large graphs. Our experiments on real data demonstrate that our framework is up to 3 orders of magnitude faster and discovers more interesting patterns than existing approaches.

  8. Identification of Functional Information Subgraphs in Complex Networks

    International Nuclear Information System (INIS)

    Bettencourt, Luis M. A.; Gintautas, Vadas; Ham, Michael I.

    2008-01-01

    We present a general information theoretic approach for identifying functional subgraphs in complex networks. We show that the uncertainty in a variable can be written as a sum of information quantities, where each term is generated by successively conditioning mutual informations on new measured variables in a way analogous to a discrete differential calculus. The analogy to a Taylor series suggests efficient optimization algorithms for determining the state of a target variable in terms of functional groups of other nodes. We apply this methodology to electrophysiological recordings of cortical neuronal networks grown in vitro. Each cell's firing is generally explained by the activity of a few neurons. We identify these neuronal subgraphs in terms of their redundant or synergetic character and reconstruct neuronal circuits that account for the state of target cells

  9. Protein complex prediction based on k-connected subgraphs in protein interaction network

    OpenAIRE

    Habibi, Mahnaz; Eslahchi, Changiz; Wong, Limsoon

    2010-01-01

    Background: Protein complexes play an important role in cellular mechanisms. Recently, several methods have been presented to predict protein complexes in a protein interaction network. In these methods, a protein complex is predicted as a dense subgraph of protein interactions. However, interaction data are incomplete and a protein complex does not have to be a complete or dense subgraph. Results: We propose a more appropriate protein complex prediction method, CFA, that is based on ...

  10. Incremental Frequent Subgraph Mining on Large Evolving Graphs

    KAUST Repository

    Abdelhamid, Ehab; Canim, Mustafa; Sadoghi, Mohammad; Bhatta, Bishwaranjan; Chang, Yuan-Chi; Kalnis, Panos

    2017-01-01

    … such as social networks, utilize large evolving graphs. Mining these graphs using existing techniques is infeasible, due to the high computational cost. In this paper, we propose IncGM+, a fast incremental approach for the continuous frequent subgraph mining problem ...

  11. Advice Complexity of the Online Induced Subgraph Problem

    DEFF Research Database (Denmark)

    Komm, Dennis; Královič, Rastislav; Královič, Richard

    2016-01-01

    of the input can influence the solution quality. We evaluate the information in a quantitative way by considering the best possible advice of given size that describes the unknown input. Using a result from Boyar et al. we give a tight trade-off relationship stating that, for inputs of length n, roughly n … subgraph problem, preemption does not significantly help, by giving a lower bound of Ω(n/(c² log c)) on the bits of advice that are needed to obtain competitive ratio c, where c is any increasing function bounded from above by √(n/log n). We also give a linear lower bound for c close to 1. … these problems by investigating a generalized problem: for an arbitrary but fixed hereditary property, find some maximal induced subgraph having the property. We investigate this problem from the point of view of advice complexity, i.e. we ask how some additional information about the yet unrevealed parts …

  12. Protein complex prediction based on k-connected subgraphs in protein interaction network

    Directory of Open Access Journals (Sweden)

    Habibi Mahnaz

    2010-09-01

    Background: Protein complexes play an important role in cellular mechanisms. Recently, several methods have been presented to predict protein complexes in a protein interaction network. In these methods, a protein complex is predicted as a dense subgraph of protein interactions. However, interaction data are incomplete and a protein complex does not have to be a complete or dense subgraph. Results: We propose a more appropriate protein complex prediction method, CFA, that is based on connectivity number on subgraphs. We evaluate CFA using several protein interaction networks on reference protein complexes in two benchmark data sets (MIPS and Aloy), containing 1142 and 61 known complexes, respectively. We compare CFA to some existing protein complex prediction methods (CMC, MCL, PCP and RNSC) in terms of recall and precision. We show that CFA predicts more complexes correctly at a competitive level of precision. Conclusions: Many real complexes with different connectivity levels in a protein interaction network can be predicted based on connectivity number. Our CFA program and results are freely available from http://www.bioinf.cs.ipm.ir/softwares/cfa/CFA.rar.

  13. Heavy subgraph pairs for traceability of block-chains

    NARCIS (Netherlands)

    Li, Binlong; Li, Binlong; Broersma, Haitze J.; Zhang, Shenggui

    2014-01-01

    A graph is called traceable if it contains a Hamilton path, i.e., a path containing all its vertices. Let G be a graph on n vertices. We say that an induced subgraph of G is o-1-heavy if it contains two nonadjacent vertices which satisfy an Ore-type degree condition for traceability, i.e., with ...

  14. A Parallel Approach for Frequent Subgraph Mining in a Single Large Graph Using Spark

    Directory of Open Access Journals (Sweden)

    Fengcai Qiao

    2018-02-01

    Frequent subgraph mining (FSM) plays an important role in graph mining, attracting a great deal of attention in many areas, such as bioinformatics, web data mining and social networks. In this paper, we propose SSiGraM (Spark based Single Graph Mining), a Spark based parallel frequent subgraph mining algorithm in a single large graph. Aiming to approach the two computational challenges of FSM, we conduct the subgraph extension and support evaluation in parallel across all the distributed cluster worker nodes. In addition, we also employ a heuristic search strategy and three novel optimizations: load balancing, pre-search pruning and top-down pruning in the support evaluation process, which significantly improve the performance. Extensive experiments with four different real-world datasets demonstrate that the proposed algorithm outperforms the existing GraMi (Graph Mining) algorithm by an order of magnitude for all datasets and can work with a lower support threshold.

  15. The Fibonacci numbers of certain subgraphs of circulant graphs

    Directory of Open Access Journals (Sweden)

    Loiret Alejandría Dosal-Trujillo

    2015-11-01

    We prove that the total number of independent vertex sets of the family of graphs Cn[r], for all n ≥ r+1, and of several subgraphs of this family, is completely determined by sequences which are constructed recursively like the Fibonacci and Lucas sequences; moreover, these new sequences generalize the Fibonacci and Lucas sequences.

  16. Efficient heuristics for maximum common substructure search.

    Science.gov (United States)

    Englert, Péter; Kovács, Péter

    2015-05-26

    Maximum common substructure search is a computationally hard optimization problem with diverse applications in the field of cheminformatics, including similarity search, lead optimization, molecule alignment, and clustering. Most of these applications have strict constraints on running time, so heuristic methods are often preferred. However, the development of an algorithm that is both fast enough and accurate enough for most practical purposes is still a challenge. Moreover, in some applications, the quality of a common substructure depends not only on its size but also on various topological features of the one-to-one atom correspondence it defines. Two state-of-the-art heuristic algorithms for finding maximum common substructures have been implemented at ChemAxon Ltd., and effective heuristics have been developed to improve both their efficiency and the relevance of the atom mappings they provide. The implementations have been thoroughly evaluated and compared with existing solutions (KCOMBU and Indigo). The heuristics have been found to greatly improve the performance and applicability of the algorithms. The purpose of this paper is to introduce the applied methods and present the experimental results.

  17. GoFFish: A Sub-Graph Centric Framework for Large-Scale Graph Analytics

    Energy Technology Data Exchange (ETDEWEB)

    Simmhan, Yogesh; Kumbhare, Alok; Wickramaarachchi, Charith; Nagarkar, Soonil; Ravi, Santosh; Raghavendra, Cauligi; Prasanna, Viktor

    2014-08-25

    Large scale graph processing is a major research area for Big Data exploration. Vertex centric programming models like Pregel are gaining traction due to their simple abstraction that allows for scalable execution on distributed systems naturally. However, there are limitations to this approach which cause vertex centric algorithms to under-perform due to poor compute to communication overhead ratio and slow convergence of iterative supersteps. In this paper we introduce GoFFish, a scalable sub-graph centric framework co-designed with a distributed persistent graph storage for large scale graph analytics on commodity clusters. We introduce a sub-graph centric programming abstraction that combines the scalability of a vertex centric approach with the flexibility of shared memory sub-graph computation. We map Connected Components, SSSP and PageRank algorithms to this model to illustrate its flexibility. Further, we empirically analyze GoFFish using several real world graphs and demonstrate its significant performance improvement, orders of magnitude in some cases, compared to Apache Giraph, the leading open source vertex centric implementation.

  18. Do motifs reflect evolved function?--No convergent evolution of genetic regulatory network subgraph topologies.

    Science.gov (United States)

    Knabe, Johannes F; Nehaniv, Chrystopher L; Schilstra, Maria J

    2008-01-01

    Methods that analyse the topological structure of networks have recently become quite popular. Whether motifs (subgraph patterns that occur more often than in randomized networks) have specific functions as elementary computational circuits has been cause for debate. As the question is difficult to resolve with currently available biological data, we approach the issue using networks that abstractly model natural genetic regulatory networks (GRNs) which are evolved to show dynamical behaviors. Specifically one group of networks was evolved to be capable of exhibiting two different behaviors ("differentiation") in contrast to a group with a single target behavior. In both groups we find motif distribution differences within the groups to be larger than differences between them, indicating that evolutionary niches (target functions) do not necessarily mold network structure uniquely. These results show that variability operators can have a stronger influence on network topologies than selection pressures, especially when many topologies can create similar dynamics. Moreover, analysis of motif functional relevance by lesioning did not suggest that motifs were of greater importance to the functioning of the network than arbitrary subgraph patterns. Only when drastically restricting network size, so that one motif corresponds to a whole functionally evolved network, was preference for particular connection patterns found. This suggests that in non-restricted, bigger networks, entanglement with the rest of the network hinders topological subgraph analysis.

  19. Simultaneously Discovering and Localizing Common Objects in Wild Images.

    Science.gov (United States)

    Wang, Zhenzhen; Yuan, Junsong

    2018-09-01

    Motivated by the recent success of supervised and weakly supervised common object discovery, in this paper, we move forward one step further to tackle common object discovery in a fully unsupervised way. Generally, object co-localization aims at simultaneously localizing objects of the same class across a group of images. Traditional object localization/detection usually trains specific object detectors which require bounding box annotations of object instances, or at least image-level labels to indicate the presence/absence of objects in an image. Given a collection of images without any annotations, our proposed fully unsupervised method is to simultaneously discover images that contain common objects and also localize common objects in corresponding images. Without requiring to know the total number of common objects, we formulate this unsupervised object discovery as a sub-graph mining problem from a weighted graph of object proposals, where nodes correspond to object proposals, and edges represent the similarities between neighbouring proposals. The positive images and common objects are jointly discovered by finding sub-graphs of strongly connected nodes, with each sub-graph capturing one object pattern. The optimization problem can be efficiently solved by our proposed maximal-flow-based algorithm. Instead of assuming that each image contains only one common object, our proposed solution can better address wild images where each image may contain multiple common objects or even no common object. Moreover, our proposed method can be easily tailored to the task of image retrieval in which the nodes correspond to the similarity between query and reference images. Extensive experiments on PASCAL VOC 2007 and Object Discovery data sets demonstrate that even without any supervision, our approach can discover/localize common objects of various classes in the presence of scale, view point, appearance variation, and partial occlusions. We also conduct broad

  20. On the number of subgraphs of the Barabasi-Albert random graph

    Energy Technology Data Exchange (ETDEWEB)

    Ryabchenko, Aleksandr A; Samosvat, Egor A [Moscow Institute of Physics and Technology (State University), Dolgoprudnyi, Moscow Region (Russian Federation)

    2012-06-30

    We study a model of a random graph of the type of the Barabasi-Albert preferential attachment model. We develop a technique that makes it possible to estimate the mathematical expectation for a fairly wide class of random variables in the model under consideration. We use this technique to prove a theorem on the asymptotics of the mathematical expectation of the number of subgraphs isomorphic to a certain fixed graph in the random graphs of this model.

  1. Maximum likelihood as a common computational framework in tomotherapy

    International Nuclear Information System (INIS)

    Olivera, G.H.; Shepard, D.M.; Reckwerdt, P.J.; Ruchala, K.; Zachman, J.; Fitchard, E.E.; Mackie, T.R.

    1998-01-01

    Tomotherapy is a dose delivery technique using helical or axial intensity modulated beams. One of the strengths of the tomotherapy concept is that it can incorporate a number of processes into a single piece of equipment. These processes include treatment optimization planning, dose reconstruction and kilovoltage/megavoltage image reconstruction. A common computational technique that could be used for all of these processes would be very appealing. The maximum likelihood estimator, originally developed for emission tomography, can serve as a useful tool in imaging and radiotherapy. We believe that this approach can play an important role in the processes of optimization planning, dose reconstruction and kilovoltage and/or megavoltage image reconstruction. These processes involve computations that require comparable physical methods. They are also based on equivalent assumptions, and they have similar mathematical solutions. As a result, the maximum likelihood approach is able to provide a common framework for all three of these computational problems. We will demonstrate how maximum likelihood methods can be applied to optimization planning, dose reconstruction and megavoltage image reconstruction in tomotherapy. Results for planning optimization, dose reconstruction and megavoltage image reconstruction will be presented. Strengths and weaknesses of the methodology are analysed. Future directions for this work are also suggested. (author)
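
    The shared computational core the authors point to is the maximum-likelihood EM update for a Poisson measurement model, as used in emission tomography. A compact numpy sketch of that generic iteration (a random toy system matrix, not the tomotherapy code):

```python
import numpy as np

def mlem(A, y, iterations=500):
    """MLEM iteration for y ~ Poisson(A @ x):
    x <- x * A.T(y / (A @ x)) / A.T(1); x stays non-negative throughout."""
    x = np.ones(A.shape[1])
    sensitivity = A.sum(axis=0)              # A.T @ 1, per-voxel normalization
    for _ in range(iterations):
        forward = A @ x                      # forward projection of the estimate
        x *= (A.T @ (y / np.maximum(forward, 1e-12))) / np.maximum(sensitivity, 1e-12)
    return x

rng = np.random.default_rng(0)
A = rng.random((40, 10))                     # toy 40-measurement, 10-voxel system
x_true = rng.random(10)
x_hat = mlem(A, A @ x_true)                  # noise-free, consistent data
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))  # small residual
```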

  2. A Maximum Resonant Set of Polyomino Graphs

    Directory of Open Access Journals (Sweden)

    Zhang Heping

    2016-05-01

    A polyomino graph P is a connected finite subgraph of the infinite plane grid such that each finite face is surrounded by a regular square of side length one and each edge belongs to at least one square. A dimer covering of P corresponds to a perfect matching. Different dimer coverings can interact via an alternating cycle (or square) with respect to them. A set of disjoint squares of P is a resonant set if P has a perfect matching M so that each one of those squares is M-alternating. In this paper, we show that if K is a maximum resonant set of P, then P − K has a unique perfect matching. We further prove that the maximum forcing number of a polyomino graph is equal to the cardinality of a maximum resonant set. This confirms a conjecture of Xu et al. [26]. We also show that if K is a maximal alternating set of P, then P − K has a unique perfect matching.

  3. The n-component cubic model and flows: subgraph break-collapse method

    International Nuclear Information System (INIS)

    Essam, J.W.; Magalhaes, A.C.N. de.

    1988-01-01

    We generalise to the n-component cubic model the subgraph break-collapse method which we previously developed for the Potts model. The relations used are based on expressions which we recently derived for the Z(λ) model in terms of mod-λ flows. Our recursive algorithm is similar, for n = 2, to the break-collapse method for the Z(4) model proposed by Mariz and coworkers. It allows the exact calculation of the partition function and correlation functions for n-component cubic clusters with n as a variable, without the need to examine all of the spin configurations. (author)

  4. On the number of subgraphs of the Barabási-Albert random graph

    International Nuclear Information System (INIS)

    Ryabchenko, Aleksandr A; Samosvat, Egor A

    2012-01-01

    We study a model of a random graph of the type of the Barabási-Albert preferential attachment model. We develop a technique that makes it possible to estimate the mathematical expectation for a fairly wide class of random variables in the model under consideration. We use this technique to prove a theorem on the asymptotics of the mathematical expectation of the number of subgraphs isomorphic to a certain fixed graph in the random graphs of this model.

  5. The Potts models and flows. III, standard and subgraph break-collapse methods

    International Nuclear Information System (INIS)

    Magalhaes, A.C.N. de; Essam, J.W.

    1987-01-01

    An algorithm is developed for the exact calculation of the many-spin correlation functions of Potts model clusters which is more efficient than the standard break-collapse method traditionally used in real space renormalization group calculations. The improved performance is based on a relationship which, at any stage of the calculation, allows the replacement of certain subgraphs by single effective edges. Our method avoids, as in the standard one, the time-consuming summation over spin states and can be very useful in series expansions and real space renormalisation group calculations on crystal lattices. (Author)

  6. Research of Subgraph Estimation Page Rank Algorithm for Web Page Rank

    Directory of Open Access Journals (Sweden)

    LI Lan-yin

    2017-04-01

    The traditional PageRank algorithm cannot efficiently handle the ranking of Web pages over large datasets. This paper proposes an accelerated algorithm named topK-Rank, which is based on PageRank on the MapReduce platform. It can find the top k nodes efficiently for a given graph without sacrificing accuracy. In order to identify the top k nodes, the topK-Rank algorithm prunes unnecessary nodes and edges in each iteration to dynamically construct subgraphs, and iteratively estimates lower/upper bounds of PageRank scores through the subgraphs. Theoretical analysis shows that this method guarantees result exactness. Experiments show that the topK-Rank algorithm can find the top k nodes much faster than existing approaches.
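
    As a baseline for what topK-Rank accelerates, one can compute the full PageRank vector and keep the k best entries; a sketch assuming networkx (topK-Rank itself avoids the full computation by iterating lower/upper score bounds on pruned subgraphs):

```python
import networkx as nx

def top_k_pagerank(g, k, alpha=0.85):
    """Naive top-k: full power iteration, then a partial sort. topK-Rank
    instead maintains per-node score bounds and drops nodes whose upper bound
    falls below the current k-th lower bound."""
    scores = nx.pagerank(g, alpha=alpha)
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)[:k]

print(top_k_pagerank(nx.karate_club_graph(), k=3))
```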

  7. Pathway discovery in metabolic networks by subgraph extraction.

    Science.gov (United States)

    Faust, Karoline; Dupont, Pierre; Callut, Jérôme; van Helden, Jacques

    2010-05-01

    Subgraph extraction is a powerful technique to predict pathways from biological networks and a set of query items (e.g. genes, proteins, compounds, etc.). It can be applied to a variety of different data types, such as gene expression, protein levels, operons or phylogenetic profiles. In this article, we investigate different approaches to extract relevant pathways from metabolic networks. Although these approaches have been adapted to metabolic networks, they are generic enough to be adjusted to other biological networks as well. We comparatively evaluated seven sub-network extraction approaches on 71 known metabolic pathways from Saccharomyces cerevisiae and a metabolic network obtained from MetaCyc. The best performing approach is a novel hybrid strategy, which combines a random walk-based reduction of the graph with a shortest paths-based algorithm, and which recovers the reference pathways with an accuracy of approximately 77%. Most of the presented algorithms are available as part of the network analysis tool set (NeAT). The kWalks method is released under the GPL3 license.
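
    The shortest-paths half of that hybrid is straightforward to sketch with networkx (the random-walk-based graph reduction that gives the hybrid its accuracy is omitted; the grid graph and seed nodes are placeholders for a metabolic network and query compounds):

```python
import networkx as nx
from itertools import combinations

def shortest_path_subgraph(network, seeds, weight=None):
    """Union of pairwise shortest paths between seed nodes, taken as the
    extracted candidate pathway; unreachable seed pairs stay unconnected."""
    keep = set(seeds)
    for a, b in combinations(seeds, 2):
        try:
            keep.update(nx.shortest_path(network, a, b, weight=weight))
        except nx.NetworkXNoPath:
            pass
    return network.subgraph(keep)

network = nx.grid_2d_graph(5, 5)  # placeholder network
pathway = shortest_path_subgraph(network, [(0, 0), (4, 4), (0, 4)])
print(pathway.number_of_nodes(), pathway.number_of_edges())
```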

  8. Protein complex prediction via dense subgraphs and false positive analysis.

    Directory of Open Access Journals (Sweden)

    Cecilia Hernandez

    Many proteins work together with others in groups called complexes in order to achieve a specific function. Discovering protein complexes is important for understanding biological processes and predicting protein functions in living organisms. Large-scale and high-throughput techniques have made it possible to compile protein-protein interaction networks (PPI networks), which have been used in several computational approaches for detecting protein complexes. Those predictions might guide future biological experimental research. Some approaches are topology-based, where highly connected proteins are predicted to be complexes; some propose different clustering algorithms using partitioning, overlaps among clusters for networks modeled with unweighted or weighted graphs; and others use density of clusters and information based on protein functionality. However, some schemes still require much processing time or the quality of their results can be improved. Furthermore, most of the results obtained with computational tools are not accompanied by an analysis of false positives. We propose an effective and efficient mining algorithm for discovering highly connected subgraphs, which is our base for defining protein complexes. Our representation is based on transforming the PPI network into a directed acyclic graph that reduces the number of represented edges and the search space for discovering subgraphs. Our approach considers weighted and unweighted PPI networks. We compare our best alternative using PPI networks from Saccharomyces cerevisiae (yeast) and Homo sapiens (human) with state-of-the-art approaches in terms of clustering, biological metrics and execution times, as well as three gold standards for yeast and two for human. Furthermore, we analyze false positive predicted complexes by searching the PDBe (Protein Data Bank in Europe) database in order to identify matching protein complexes that have been purified and structurally characterized. Our analysis shows ...

  9. Constraints on signaling network logic reveal functional subgraphs on Multiple Myeloma OMIC data.

    Science.gov (United States)

    Miannay, Bertrand; Minvielle, Stéphane; Magrangeas, Florence; Guziolowski, Carito

    2018-03-21

    The integration of gene expression profiles (GEPs) and large-scale biological networks derived from pathway databases is a subject which is being widely explored. Existing methods are based on network distance measures among significantly measured species. Only a small number of them include the directionality and underlying logic existing in biological networks. In this study we approach the GEP-network integration problem by considering the network logic; however, our approach does not require a prior species selection according to their gene expression level. We start by modeling the biological network, representing its underlying logic using Logic Programming. This model points to reachable network discrete states that maximize a notion of harmony between the possible active or inactive states of the molecular species and the directionality of the pathway reactions according to their activator or inhibitor control role. Only then do we confront these network states with the GEP. From this confrontation independent graph components are derived, each of them related to a fixed and optimal assignment of active or inactive states. These components allow us to decompose a large-scale network into subgraphs, and their molecular species state assignments have different degrees of similarity when compared to the same GEP. We apply our method to study the set of possible states derived from a subgraph of the NCI-PID Pathway Interaction Database. This graph links Multiple Myeloma (MM) genes to known receptors for this blood cancer. We discover that the NCI-PID MM graph has 15 independent components, and when confronted with 611 MM GEPs, we find one component to be more specific in representing the difference between cancer and healthy profiles.

  10. Exact parallel maximum clique algorithm for general and protein graphs.

    Science.gov (United States)

    Depolli, Matjaž; Konc, Janez; Rozman, Kati; Trobec, Roman; Janežič, Dušanka

    2013-09-23

    A new exact parallel maximum clique algorithm, MaxCliquePara, which finds the maximum clique (the largest fully connected subgraph) in undirected general and protein graphs, is presented. First, a new branch-and-bound algorithm for finding a maximum clique on a single computer core, which builds on ideas from two published state-of-the-art sequential algorithms, is implemented. The new sequential MaxCliqueSeq algorithm is faster than the reference algorithms on both DIMACS benchmark graphs and protein-derived product graphs used for protein structural comparisons. Next, the MaxCliqueSeq algorithm is parallelized by splitting the branch-and-bound search tree across multiple cores, resulting in the MaxCliquePara algorithm. The ability to exploit all cores efficiently makes the new parallel MaxCliquePara algorithm markedly superior to other tested algorithms. On a 12-core computer, the parallelization provides up to two orders of magnitude faster execution on the large DIMACS benchmark graphs and up to an order of magnitude faster execution on protein product graphs. The algorithms are freely accessible at http://commsys.ijs.si/~matjaz/maxclique.
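
    The core branch-and-bound recursion is compact enough to sketch. The Python below uses only the simple size bound; the published MaxCliqueSeq/MaxCliquePara algorithms add much stronger colouring-based bounds and the parallel splitting of the search tree, both omitted here.

        def max_clique(adj):
            # adj: dict mapping each vertex to the set of its neighbours.
            best = set()

            def expand(clique, candidates):
                nonlocal best
                # Bound: even taking every candidate cannot beat the incumbent.
                if len(clique) + len(candidates) <= len(best):
                    return
                if not candidates:
                    best = set(clique)  # new incumbent clique
                    return
                while candidates:
                    if len(clique) + len(candidates) <= len(best):
                        return
                    v = candidates.pop()
                    # Branch on v: only common neighbours remain candidates.
                    expand(clique | {v}, candidates & adj[v])

            expand(set(), set(adj))
            return best

        g = {1: {2, 3, 4}, 2: {1, 3}, 3: {1, 2}, 4: {1}}
        print(max_clique(g))  # {1, 2, 3}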

  11. Theoretic derivation of directed acyclic subgraph algorithm and comparisons with message passing algorithm

    Science.gov (United States)

    Ha, Jeongmok; Jeong, Hong

    2016-07-01

    This study investigates the directed acyclic subgraph (DAS) algorithm, which solves discrete labeling problems much more rapidly than other Markov-random-field-based inference methods and at competitive accuracy. However, the mechanism by which the DAS algorithm simultaneously achieves competitive accuracy and fast execution has not been elucidated by a theoretical derivation. We analyze the DAS algorithm by comparing it with a message passing algorithm. Graphical models, inference methods, and energy-minimization frameworks are compared between the DAS and message passing algorithms. Moreover, the performances of DAS and other message passing methods [sum-product belief propagation (BP), max-product BP, and tree-reweighted message passing] are experimentally compared.

  12. Maximum Lateness Scheduling on Two-Person Cooperative Games with Variable Processing Times and Common Due Date

    OpenAIRE

    Liu, Peng; Wang, Xiaoli

    2017-01-01

    A new maximum lateness scheduling model in which both cooperative games and variable processing times exist simultaneously is considered in this paper. The job variable processing time is described by an increasing or a decreasing function dependent on the position of a job in the sequence. Two persons have to cooperate in order to process a set of jobs. Each of them has a single machine and their processing cost is defined as the minimum value of maximum lateness. All jobs have a common due ...

  13. The finite body triangulation: algorithms, subgraphs, homogeneity estimation and application.

    Science.gov (United States)

    Carson, Cantwell G; Levine, Jonathan S

    2016-09-01

    The concept of a finite body Dirichlet tessellation has been extended to that of a finite body Delaunay 'triangulation' to provide a more meaningful description of the spatial distribution of nonspherical secondary phase bodies in 2- and 3-dimensional images. A finite body triangulation (FBT) consists of a network of minimum edge-to-edge distances between adjacent objects in a microstructure. From this is also obtained the characteristic object chords formed by the intersection of the object boundary with the finite body tessellation. These two sets of distances form the basis of a parsimonious homogeneity estimation. The characteristics of the spatial distribution are then evaluated with respect to the distances between objects and the distances within them. Quantitative analysis shows that more physically representative distributions can be obtained by selecting subgraphs, such as the relative neighbourhood graph and the minimum spanning tree, from the finite body tessellation. To demonstrate their potential, we apply these methods to 3-dimensional X-ray computed tomographic images of foamed cement and their 2-dimensional cross sections. The Python computer code used to estimate the FBT is made available. Other applications for the algorithm - such as porous media transport and crack-tip propagation - are also discussed. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.

  14. Mean common or mean maximum carotid intima-media thickness as primary outcome in lipid-modifying intervention studies

    NARCIS (Netherlands)

    Dogan, Soner; Kastelein, Johannes Jacob Pieter; Grobbee, Diederick Egbertus; Bots, Michiel Leonardus

    2011-01-01

    Carotid intima-media thickness (CIMT) measurements are used as a disease outcome in randomized controlled trials that assess the effects of lipid-modifying treatment. It is unclear whether common CIMT or mean maximum CIMT should be used as the primary outcome. We directly compared both measurements

  15. Reuse-oriented common structure discovery in assembly models

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Pan; Zhang, Jie; Li, Yuan; Yu, Jian Feng [The Ministry of Education Key Lab of Contemporary Design and Integrated Manufacturing Technology, Northwestern Polytechnical University, Xi'an (China)]

    2017-01-15

    Discovering the common structures in assembly models provides designers with the commonalities that carry significant design knowledge across multiple products, which helps to improve design efficiency and accelerate the design process. In this paper, a discovery method has been developed to obtain the common structure in assembly models. First, this work proposes a graph descriptor that captures both the geometrical and topological information of the assembly model, in which shape vectors and link vectors quantitatively describe the part models and mating relationships, respectively. Then, a clustering step is introduced into the discovery, which clusters the similar parts by comparing the similarities between them. In addition, some rules are also provided to filter the frequent subgraphs in order to obtain the expected results. Compared with the existing method, the proposed approach could overcome the disadvantages by providing an independent description of the part model and taking into consideration the similar parts in assemblies, which leads to a more reasonable result. Finally, some experiments have been carried out and the experimental results demonstrate the effectiveness of the proposed approach.

  16. Reuse-oriented common structure discovery in assembly models

    International Nuclear Information System (INIS)

    Wang, Pan; Zhang, Jie; Li, Yuan; Yu, Jian Feng

    2017-01-01

    Discovering the common structures in assembly models provides designers with the commonalities that carry significant design knowledge across multiple products, which helps to improve design efficiency and accelerate the design process. In this paper, a discovery method has been developed to obtain the common structure in assembly models. First, this work proposes a graph descriptor that captures both the geometrical and topological information of the assembly model, in which shape vectors and link vectors quantitatively describe the part models and mating relationships, respectively. Then, a clustering step is introduced into the discovery, which clusters the similar parts by comparing the similarities between them. In addition, some rules are also provided to filter the frequent subgraphs in order to obtain the expected results. Compared with the existing method, the proposed approach could overcome the disadvantages by providing an independent description of the part model and taking into consideration the similar parts in assemblies, which leads to a more reasonable result. Finally, some experiments have been carried out and the experimental results demonstrate the effectiveness of the proposed approach.

  17. Maximum Lateness Scheduling on Two-Person Cooperative Games with Variable Processing Times and Common Due Date

    Directory of Open Access Journals (Sweden)

    Peng Liu

    2017-01-01

    Full Text Available A new maximum lateness scheduling model in which both cooperative games and variable processing times exist simultaneously is considered in this paper. The variable processing time of a job is described by an increasing or a decreasing function of the position of the job in the sequence. Two persons have to cooperate in order to process a set of jobs. Each of them has a single machine, and their processing cost is defined as the minimum value of the maximum lateness. All jobs have a common due date. The objective is to maximize the product of their rational positive cooperative profits. A division of the jobs should be negotiated to yield a reasonable cooperative profit allocation scheme acceptable to both. We propose necessary and sufficient conditions for the problems to have a positive integer solution.

  18. A dichotomy for upper domination in monogenic classes

    KAUST Repository

    AbouEisha, Hassan M.

    2014-01-01

    An upper dominating set in a graph is a minimal (with respect to set inclusion) dominating set of maximum cardinality. The problem of finding an upper dominating set is NP-hard for general graphs and in many restricted graph families. In the present paper, we study the computational complexity of this problem in monogenic classes of graphs (i.e. classes defined by a single forbidden induced subgraph) and show that the problem admits a dichotomy in this family. In particular, we prove that if the only forbidden induced subgraph is a P4 or a 2K2 (or any induced subgraph of these graphs), then the problem can be solved in polynomial time. Otherwise, it is NP-hard.

  19. Approximating maximum weight cycle covers in directed graphs with weights zero and one

    NARCIS (Netherlands)

    Bläser, Markus; Manthey, Bodo

    2005-01-01

    A cycle cover of a graph is a spanning subgraph each node of which is part of exactly one simple cycle. A $k$-cycle cover is a cycle cover where each cycle has length at least $k$. Given a complete directed graph with edge weights zero and one, Max-$k$-DCC(0, 1) is the problem of finding a k-cycle

  20. Fault tolerance in protein interaction networks: stable bipartite subgraphs and redundant pathways.

    Science.gov (United States)

    Brady, Arthur; Maxwell, Kyle; Daniels, Noah; Cowen, Lenore J

    2009-01-01

    As increasing amounts of high-throughput data for the yeast interactome become available, more system-wide properties are uncovered. One interesting question concerns the fault tolerance of protein interaction networks: whether there exist alternative pathways that can perform some required function if a gene essential to the main mechanism is defective, absent or suppressed. A signature pattern for redundant pathways is the BPM (between-pathway model) motif, introduced by Kelley and Ideker. Past methods proposed to search the yeast interactome for BPM motifs have had several important limitations. First, they have been driven heuristically by local greedy searches, which can lead to the inclusion of extra genes that may not belong in the motif; second, they have been validated solely by functional coherence of the putative pathways using GO enrichment, making it difficult to evaluate putative BPMs in the absence of already known biological annotation. We introduce stable bipartite subgraphs, and show they form a clean and efficient way of generating meaningful BPMs which naturally discard extra genes included by local greedy methods. We show by GO enrichment measures that our BPM set outperforms previous work, covering more known complexes and functional pathways. Perhaps most importantly, since our BPMs are initially generated by examining the genetic-interaction network only, the location of edges in the protein-protein physical interaction network can then be used to statistically validate each candidate BPM, even with sparse GO annotation (or none at all). We uncover some interesting biological examples of previously unknown putative redundant pathways in such areas as vesicle-mediated transport and DNA repair.

  1. Fault tolerance in protein interaction networks: stable bipartite subgraphs and redundant pathways.

    Directory of Open Access Journals (Sweden)

    Arthur Brady

    Full Text Available As increasing amounts of high-throughput data for the yeast interactome become available, more system-wide properties are uncovered. One interesting question concerns the fault tolerance of protein interaction networks: whether there exist alternative pathways that can perform some required function if a gene essential to the main mechanism is defective, absent or suppressed. A signature pattern for redundant pathways is the BPM (between-pathway model) motif, introduced by Kelley and Ideker. Past methods proposed to search the yeast interactome for BPM motifs have had several important limitations. First, they have been driven heuristically by local greedy searches, which can lead to the inclusion of extra genes that may not belong in the motif; second, they have been validated solely by functional coherence of the putative pathways using GO enrichment, making it difficult to evaluate putative BPMs in the absence of already known biological annotation. We introduce stable bipartite subgraphs, and show they form a clean and efficient way of generating meaningful BPMs which naturally discard extra genes included by local greedy methods. We show by GO enrichment measures that our BPM set outperforms previous work, covering more known complexes and functional pathways. Perhaps most importantly, since our BPMs are initially generated by examining the genetic-interaction network only, the location of edges in the protein-protein physical interaction network can then be used to statistically validate each candidate BPM, even with sparse GO annotation (or none at all). We uncover some interesting biological examples of previously unknown putative redundant pathways in such areas as vesicle-mediated transport and DNA repair.

  2. On the origin of distribution patterns of motifs in biological networks

    Directory of Open Access Journals (Sweden)

    Lesk Arthur M

    2008-08-01

    Full Text Available Abstract Background Inventories of small subgraphs in biological networks have identified commonly-recurring patterns, called motifs. The inference that these motifs have been selected for function rests on the idea that their occurrences are significantly more frequent than random. Results Our analysis of several large biological networks suggests, in contrast, that the frequencies of appearance of common subgraphs are similar in natural and corresponding random networks. Conclusion Indeed, certain topological features of biological networks give rise naturally to the common appearance of the motifs. We therefore question whether frequencies of occurrences are reasonable evidence that the structures of motifs have been selected for their functional contribution to the operation of networks.

  3. Outer-2-independent domination in graphs

    Indian Academy of Sciences (India)

    An outer-2-independent dominating set of a graph G is a set D of vertices of G such that every vertex of V(G)\D has a neighbor in D and the maximum vertex degree of the subgraph induced by V(G)\D is at most one. The outer-2-independent domination ...

  4. EPR spectrum deconvolution and dose assessment of fossil tooth enamel using maximum likelihood common factor analysis

    International Nuclear Information System (INIS)

    Vanhaelewyn, G.; Callens, F.; Gruen, R.

    2000-01-01

    In order to determine the components which give rise to the EPR spectrum around g = 2, we have applied Maximum Likelihood Common Factor Analysis (MLCFA) to the EPR spectra of enamel sample 1126, which has previously been analysed by continuous wave and pulsed EPR as well as EPR microscopy. MLCFA yielded consistent results on three sets of X-band spectra, and the following components were identified: an orthorhombic component attributed to CO2^-, an axial component CO3^3-, as well as four isotropic components, three of which could be attributed to SO2^-, a tumbling CO2^- and the central line of a dimethyl radical. The X-band results were confirmed by analysis of Q-band spectra, where three additional isotropic lines were found; however, these three components could not be attributed to known radicals. The orthorhombic component was used to establish dose response curves for the assessment of the past radiation dose, D_E. The results appear to be more reliable than those based on conventional peak-to-peak EPR intensity measurements or simple Gaussian deconvolution methods.

  5. Design of chemical space networks using a Tanimoto similarity variant based upon maximum common substructures.

    Science.gov (United States)

    Zhang, Bijun; Vogt, Martin; Maggiora, Gerald M; Bajorath, Jürgen

    2015-10-01

    Chemical space networks (CSNs) have recently been introduced as an alternative to other coordinate-free and coordinate-based chemical space representations. In CSNs, nodes represent compounds and edges pairwise similarity relationships. In addition, nodes are annotated with compound property information such as biological activity. CSNs have been applied to view biologically relevant chemical space in comparison to random chemical space samples and found to display well-resolved topologies at low edge density levels. The way in which molecular similarity relationships are assessed is an important determinant of CSN topology. Previous CSN versions were based on numerical similarity functions or the assessment of substructure-based similarity. Herein, we report a new CSN design that is based upon combined numerical and substructure similarity evaluation. This has been facilitated by calculating numerical similarity values on the basis of maximum common substructures (MCSs) of compounds, leading to the introduction of MCS-based CSNs (MCS-CSNs). This CSN design combines the advantages of continuous numerical similarity functions with a robust and chemically intuitive substructure-based assessment. Compared to earlier versions of CSNs, MCS-CSNs are characterized by a further improved organization of local compound communities, as exemplified by the delineation of drug-like subspaces in regions of biologically relevant chemical space.
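
    As a rough illustration of an MCS-based Tanimoto variant, the sketch below combines RDKit's FindMCS with a heavy-atom-count formula; the formula is our illustrative assumption and may differ from the exact variant defined in the paper.

        from rdkit import Chem
        from rdkit.Chem import rdFMCS

        def mcs_tanimoto(smiles_a, smiles_b):
            # Tanimoto-like similarity from the maximum common substructure:
            # T = n_mcs / (n_a + n_b - n_mcs), counted over heavy atoms.
            mol_a = Chem.MolFromSmiles(smiles_a)
            mol_b = Chem.MolFromSmiles(smiles_b)
            mcs = rdFMCS.FindMCS([mol_a, mol_b], timeout=10)
            n = mcs.numAtoms
            return n / (mol_a.GetNumAtoms() + mol_b.GetNumAtoms() - n)

        # An edge of the CSN would connect a compound pair whose
        # similarity exceeds a chosen threshold.
        print(mcs_tanimoto("c1ccccc1O", "c1ccccc1N"))  # phenol vs. aniline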

  6. Preliminarily study on the maximum handling size, prey size and species selectivity of growth hormone transgenic and non-transgenic common carp Cyprinus carpio when foraging on gastropods

    Science.gov (United States)

    Zhu, Tingbing; Zhang, Lihong; Zhang, Tanglin; Wang, Yaping; Hu, Wei; Olsen, Rolf Eric; Zhu, Zuoyan

    2017-10-01

    The present study preliminarily examined differences in maximum handling size, prey size and species selectivity between growth hormone transgenic and non-transgenic common carp Cyprinus carpio foraging on four gastropod species (Bellamya aeruginosa, Radix auricularia, Parafossarulus sinensis and Alocinma longicornis) under laboratory conditions. In the maximum handling size trial, five fish from each age group (1-year-old and 2-year-old) and each genotype (transgenic and non-transgenic) were individually allowed to feed on B. aeruginosa over a wide range of shell heights. Maximum handling size increased linearly with fish length, and there was no significant difference in maximum handling size between the two genotypes. In the size selection trial, three pairs of 2-year-old transgenic and non-transgenic carp were individually allowed to feed on three size groups of B. aeruginosa; both genotypes favored the small-sized group over the large-sized group. In the species selection trial, three pairs of 2-year-old transgenic and non-transgenic carp were individually allowed to feed on thick-shelled B. aeruginosa and thin-shelled R. auricularia, and five pairs were individually allowed to feed on two gastropod species (P. sinensis and A. longicornis) of similar size and shell strength. Both genotypes preferred thin-shelled R. auricularia to thick-shelled B. aeruginosa, but there was no significant difference in selectivity between the two genotypes when fed P. sinensis and A. longicornis. The present study indicates that transgenic and non-transgenic C. carpio show similar selectivity when preying on these size- and species-limited gastropods. While this information may be useful for assessing the environmental risk of transgenic carp, it does not necessarily demonstrate that transgenic common carp might

  7. Direct maximum parsimony phylogeny reconstruction from genotype data

    OpenAIRE

    Sridhar, Srinath; Lam, Fumei; Blelloch, Guy E; Ravi, R; Schwartz, Russell

    2007-01-01

    Abstract Background Maximum parsimony phylogenetic tree reconstruction from genetic variation data is a fundamental problem in computational genetics with many practical applications in population genetics, whole genome analysis, and the search for genetic predictors of disease. Efficient methods are available for reconstruction of maximum parsimony trees from haplotype data, but such data are difficult to determine directly for autosomal DNA. Data more commonly is available in the form of ge...

  8. Graph-theoretic approach to quantum correlations.

    Science.gov (United States)

    Cabello, Adán; Severini, Simone; Winter, Andreas

    2014-01-31

    Correlations in Bell and noncontextuality inequalities can be expressed as a positive linear combination of probabilities of events. Exclusive events can be represented as adjacent vertices of a graph, so correlations can be associated with a subgraph. We show that the maximum value of the correlations for classical, quantum, and more general theories is the independence number, the Lovász number, and the fractional packing number of this subgraph, respectively. We also show that, for any graph, there is always a correlation experiment such that the set of quantum probabilities is exactly the Grötschel-Lovász-Schrijver theta body. This identifies these combinatorial notions as fundamental physical objects and provides a method for singling out experiments with quantum correlations on demand.
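
    For intuition, the classical bound is the independence number of the exclusivity graph, computable as a maximum clique of the complement graph; a small sketch assuming networkx:

        import networkx as nx

        def independence_number(g):
            # A maximum independent set of g is a maximum clique of
            # its complement graph.
            clique, _ = nx.max_weight_clique(nx.complement(g), weight=None)
            return len(clique)

        # The 5-cycle is the exclusivity graph of the KCBS inequality:
        # the classical bound is 2; the quantum (Lovász) value is sqrt(5).
        print(independence_number(nx.cycle_graph(5)))  # 2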

  9. Improper colouring of (random) unit disk graphs

    NARCIS (Netherlands)

    Kang, R.J.; Müller, T.; Sereni, J.S.

    2008-01-01

    For any graph G, the k-improper chromatic number χ_k(G) is the smallest number of colours used in a colouring of G such that each colour class induces a subgraph of maximum degree k. We investigate χ_k for unit disk graphs and random unit disk graphs to generalise results of McDiarmid and Reed

  10. On solution concepts for matching games

    NARCIS (Netherlands)

    Biro, Peter; Kern, Walter; Paulusma, Daniël; Kratochvil, J.; Li, A.; Fiala, J.; Kolman, P.

    A matching game is a cooperative game $(N,v)$ defined on a graph $G = (N,E)$ with an edge weighting $\\omega : E \\to {\\Bbb R}_+$ . The player set is $N$ and the value of a coalition $S \\subseteq  N$ is defined as the maximum weight of a matching in the subgraph induced by $S$. First we present an

  11. Matching games: the least core and the nucleolus

    NARCIS (Netherlands)

    Kern, Walter; Paulusma, Daniël

    2003-01-01

    A matching game is a cooperative game defined by a graph G = (N, E). The player set is N and the value of a coalition S ⫅ N is defined as the size of a maximum matching in the subgraph induced by S. We show that the nucleolus of such games can be computed efficiently. The result is based on an

  12. Computing solutions for matching games

    NARCIS (Netherlands)

    Biró, Péter; Kern, Walter; Paulusma, Daniël

    2012-01-01

    A matching game is a cooperative game (N, v) defined on a graph G = (N, E) with an edge weighting w : E → R+. The player set is N and the value of a coalition S ⊆ N is defined as the maximum weight of a matching in the subgraph induced by S. First we present an O(nm + n^2 log n) algorithm that tests if

  13. Ancestral sequence reconstruction with Maximum Parsimony

    OpenAIRE

    Herbst, Lina; Fischer, Mareike

    2017-01-01

    One of the main aims in phylogenetics is the estimation of ancestral sequences based on present-day data like, for instance, DNA alignments. One way to estimate the data of the last common ancestor of a given set of species is to first reconstruct a phylogenetic tree with some tree inference method and then to use some method of ancestral state inference based on that tree. One of the best-known methods both for tree inference as well as for ancestral sequence inference is Maximum Parsimony (...

  14. Cycles through subsets with large degree sums

    NARCIS (Netherlands)

    Broersma, Haitze J.; Li, Hao; Li, Jianping; Tian, Feng; Veldman, H.J.

    1997-01-01

    Let G be a 2-connected graph on n vertices and let X ⊆ V(G). We say that G is X-cyclable if G has an X-cycle, i.e., a cycle containing all vertices of X. We denote by α(X) the maximum number of pairwise nonadjacent vertices in the subgraph G[X] of G induced by X. If G[X] is not complete, we denote by

  15. Pareto versus lognormal: a maximum entropy test.

    Science.gov (United States)

    Bee, Marco; Riccaboni, Massimo; Schiavo, Stefano

    2011-08-01

    It is commonly found that distributions that seem to be lognormal over a broad range change to a power-law (Pareto) distribution for the last few percentiles. The distributions of many physical, natural, and social events (earthquake size, species abundance, income and wealth, as well as file, city, and firm sizes) display this structure. We present a test for the occurrence of power-law tails in statistical distributions based on maximum entropy. This methodology allows one to identify the true data-generating processes even in the case when it is neither lognormal nor Pareto. The maximum entropy approach is then compared with other widely used methods and applied to different levels of aggregation of complex systems. Our results provide support for the theory that distributions with lognormal body and Pareto tail can be generated as mixtures of lognormally distributed units.
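
    A crude form of this model comparison, a log-likelihood check on the upper tail rather than the paper's maximum entropy test, can be sketched with scipy:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        data = rng.lognormal(mean=0.0, sigma=1.0, size=5000)

        # Fit both candidate models to the top 5% of the sample.
        tail = np.sort(data)[int(0.95 * len(data)):]
        ll_lognorm = stats.lognorm.logpdf(tail, *stats.lognorm.fit(tail)).sum()
        ll_pareto = stats.pareto.logpdf(tail, *stats.pareto.fit(tail)).sum()

        # Higher total log-likelihood indicates the better-fitting tail model.
        print(f"lognormal: {ll_lognorm:.1f}  Pareto: {ll_pareto:.1f}")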

  16. Broadcasting a Common Message with Variable-Length Stop-Feedback codes

    DEFF Research Database (Denmark)

    Trillingsgaard, Kasper Fløe; Yang, Wei; Durisi, Giuseppe

    2015-01-01

    We investigate the maximum coding rate achievable over a two-user broadcast channel for the scenario where a common message is transmitted using variable-length stop-feedback codes. Specifically, upon decoding the common message, each decoder sends a stop signal to the encoder, which transmits continuously until it receives both stop signals. For the point-to-point case, Polyanskiy, Poor, and Verdú (2011) recently demonstrated that variable-length coding combined with stop feedback significantly increases the speed at which the maximum coding rate converges to capacity. This speed-up manifests itself in the absence of a square-root penalty in the asymptotic expansion of the maximum coding rate for large blocklengths, a result also known as zero dispersion. In this paper, we show that this speed-up does not necessarily occur for the broadcast channel with common message. Specifically ...

  17. Maximum vehicle cabin temperatures under different meteorological conditions

    Science.gov (United States)

    Grundstein, Andrew; Meentemeyer, Vernon; Dowd, John

    2009-05-01

    A variety of studies have documented the dangerously high temperatures that may occur within the passenger compartment (cabin) of cars under clear sky conditions, even at relatively low ambient air temperatures. Our study, however, is the first to examine cabin temperatures under variable weather conditions. It uses a unique maximum vehicle cabin temperature dataset in conjunction with directly comparable ambient air temperature, solar radiation, and cloud cover data collected from April through August 2007 in Athens, GA. Maximum cabin temperatures, ranging from 41-76°C, varied considerably depending on the weather conditions and the time of year. Clear days had the highest cabin temperatures, with average values of 68°C in the summer and 61°C in the spring. Cloudy days in both the spring and summer were on average approximately 10°C cooler. Our findings indicate that even on cloudy days with lower ambient air temperatures, vehicle cabin temperatures may reach deadly levels. Additionally, two predictive models of maximum daily vehicle cabin temperatures were developed using commonly available meteorological data. One model uses maximum ambient air temperature and average daily solar radiation while the other uses cloud cover percentage as a surrogate for solar radiation. From these models, two maximum vehicle cabin temperature indices were developed to assess the level of danger. The models and indices may be useful for forecasting hazardous conditions, promoting public awareness, and to estimate past cabin temperatures for use in forensic analyses.

  18. Maximum power point tracker based on fuzzy logic

    International Nuclear Information System (INIS)

    Daoud, A.; Midoun, A.

    2006-01-01

    Solar energy is used as the power source in photovoltaic power systems, and an intelligent power management system is needed to obtain the maximum power from the limited solar panels. Since the sun's illumination changes with the angle of incidence of solar radiation and with the temperature of the panels, a Maximum Power Point Tracker (MPPT) enables optimization of solar power generation. The MPPT is a sub-system designed to extract the maximum power from a power source. In the case of a solar panel power source, the maximum power point varies as a result of changes in the panel's electrical characteristics, which in turn are functions of radiation dose, temperature, ageing and other effects. The MPPT maximizes the power output from the panels for a given set of conditions by detecting the best working point of the power characteristic and then controlling the current through the panels or the voltage across them. Many MPPT methods have been reported in the literature. These techniques can be classified into three main categories: lookup table methods, hill climbing methods and computational methods. The techniques vary in their degree of sophistication, processing time and memory requirements. The perturbation and observation algorithm (a hill climbing technique) is commonly used due to its ease of implementation and relative tracking efficiency. However, it has been shown that when the insolation changes rapidly, the perturbation and observation method is slow to track the maximum power point. In recent years, fuzzy controllers have been used for maximum power point tracking. This method requires only linguistic control rules for the maximum power point; no mathematical model is required, and the control method is therefore easy to implement in a real control system. In this paper, we present a simple robust MPPT using fuzzy set theory, where the hardware consists of a Microchip microcontroller unit control card and
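
    For reference, the perturbation and observation baseline amounts to a few lines of hill climbing; in the sketch below the read_voltage/read_current/set_voltage hardware hooks, the step size and the cycle count are illustrative placeholders, not details from the paper:

        def perturb_and_observe(read_voltage, read_current, set_voltage,
                                step=0.1, cycles=1000):
            # Keep perturbing the operating voltage in the direction that
            # last increased power; reverse direction when power drops.
            v_prev = read_voltage()
            p_prev = v_prev * read_current()
            direction = 1.0
            for _ in range(cycles):
                set_voltage(v_prev + direction * step)
                v = read_voltage()
                p = v * read_current()
                if p < p_prev:      # stepped past the maximum power point
                    direction = -direction
                v_prev, p_prev = v, p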

  19. Approximate maximum parsimony and ancestral maximum likelihood.

    Science.gov (United States)

    Alon, Noga; Chor, Benny; Pardi, Fabio; Rapoport, Anat

    2010-01-01

    We explore the maximum parsimony (MP) and ancestral maximum likelihood (AML) criteria in phylogenetic tree reconstruction. Both problems are NP-hard, so we seek approximate solutions. We formulate the two problems as Steiner tree problems under appropriate distances. The gist of our approach is the succinct characterization of Steiner trees for a small number of leaves for the two distances. This enables the use of known Steiner tree approximation algorithms. The approach leads to a 16/9 approximation ratio for AML and asymptotically to a 1.55 approximation ratio for MP.

  20. Maximum neutron flux at thermal nuclear reactors

    International Nuclear Information System (INIS)

    Strugar, P.

    1968-10-01

    Since actual research reactors are technically complicated and expensive facilities, it is important to achieve savings through appropriate reactor lattice configurations. A number of papers, and practical examples of reactors with a central reflector, deal with spatial distributions of fuel elements that would result in a higher neutron flux. A common disadvantage of all these solutions is that the best one is chosen from among anticipated spatial distributions of fuel elements; the weakness of such approaches is the lack of defined optimization criteria. The direct approach is defined as follows: determine the spatial distribution of fuel concentration starting from the condition of maximum neutron flux while fulfilling the thermal constraints. The problem of determining the maximum neutron flux is thus a variational problem beyond the possibilities of classical variational calculation. This variational problem has been successfully solved by applying Pontryagin's maximum principle. The optimum distribution of fuel concentration was obtained in explicit analytical form; thus, the spatial distribution of the neutron flux and the critical dimensions of a quite complex reactor system are calculated in a relatively simple way. In addition to the innovative results, this approach is interesting because of the optimization procedure itself

  1. Motif-role-fingerprints: the building-blocks of motifs, clustering-coefficients and transitivities in directed networks.

    Directory of Open Access Journals (Sweden)

    Mark D McDonnell

    Full Text Available Complex networks are frequently characterized by metrics for which particular subgraphs are counted. One statistic from this category, which we refer to as motif-role fingerprints, differs from global subgraph counts in that the number of subgraphs in which each node participates is counted. As with global subgraph counts, it can be important to distinguish between motif-role fingerprints that are 'structural' (induced subgraphs) and 'functional' (partial subgraphs). Here we show mathematically that a vector of all functional motif-role fingerprints can readily be obtained from an arbitrary directed adjacency matrix, and then converted to structural motif-role fingerprints by multiplying that vector by a specific invertible conversion matrix. This result demonstrates that a unique structural motif-role fingerprint exists for any given functional motif-role fingerprint. We demonstrate a similar result for the cases of functional and structural motif-fingerprints without node roles, and global subgraph counts that form the basis of standard motif analysis. We also explicitly highlight that motif-role fingerprints are elemental to several popular metrics for quantifying the subgraph structure of directed complex networks, including motif distributions, directed clustering coefficient, and transitivity. The relationships between each of these metrics and motif-role fingerprints also suggest new subtypes of directed clustering coefficients and transitivities. Our results have potential utility in analyzing directed synaptic networks constructed from neuronal connectome data, such as in terms of centrality. Other potential applications include anomaly detection in networks, identification of similar networks and identification of similar nodes within networks. Matlab code for calculating all stated metrics following calculation of functional motif-role fingerprints is provided as S1 Matlab File.
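
    As a small illustration of counting functional (partial) subgraphs directly from a directed adjacency matrix, the sketch below counts each node's participation in the feed-forward loop in the driver role; it is an illustrative fragment, not the paper's full fingerprint vector or conversion matrix:

        import numpy as np

        def ffl_driver_counts(a):
            # a: binary adjacency matrix of a simple directed graph
            # (a[i, j] = 1 iff i -> j, no self-loops).
            paths2 = a @ a          # paths2[i, k] = number of 2-paths i -> j -> k
            ffl = paths2 * a        # keep 2-paths closed by a shortcut edge i -> k
            return ffl.sum(axis=1)  # entry i: feed-forward loops driven by node i

        a = np.array([[0, 1, 1],
                      [0, 0, 1],
                      [0, 0, 0]])   # one feed-forward loop: 0 -> 1 -> 2 with 0 -> 2
        print(ffl_driver_counts(a))  # [1 0 0]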

  2. Variable-Length Coding with Stop-Feedback for the Common-Message Broadcast Channel

    DEFF Research Database (Denmark)

    Trillingsgaard, Kasper Fløe; Yang, Wei; Durisi, Giuseppe

    2016-01-01

    This paper investigates the maximum coding rate over a K-user discrete memoryless broadcast channel for the scenario where a common message is transmitted using variable-length stop-feedback codes. Specifically, upon decoding the common message, each decoder sends a stop signal to the encoder, which transmits continuously until it receives all K stop signals. We present nonasymptotic achievability and converse bounds for the maximum coding rate, which strengthen and generalize the bounds previously reported in Trillingsgaard et al. (2015) for the two-user case. An asymptotic analysis of these bounds reveals that, contrary to the point-to-point case, the second-order term in the asymptotic expansion of the maximum coding rate decays inversely proportional to the square root of the average blocklength. This holds for certain nontrivial common-message broadcast channels, such as the binary ...

  3. Unification of field theory and maximum entropy methods for learning probability densities

    Science.gov (United States)

    Kinney, Justin B.

    2015-09-01

    The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.

  4. Unification of field theory and maximum entropy methods for learning probability densities.

    Science.gov (United States)

    Kinney, Justin B

    2015-09-01

    The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.

  5. Maximum permissible dose

    International Nuclear Information System (INIS)

    Anon.

    1979-01-01

    This chapter presents a historical overview of the establishment of radiation guidelines by various national and international agencies. The use of maximum permissible dose and maximum permissible body burden limits to derive working standards is discussed.

  6. Algorithms of maximum likelihood data clustering with applications

    Science.gov (United States)

    Giada, Lorenzo; Marsili, Matteo

    2002-12-01

    We address the problem of data clustering by introducing an unsupervised, parameter-free approach based on the maximum likelihood principle. Starting from the observation that data sets belonging to the same cluster share common information, we construct an expression for the likelihood of any possible cluster structure. The likelihood in turn depends only on the Pearson correlation coefficients of the data. We discuss clustering algorithms that provide a fast and reliable approximation to maximum likelihood configurations. Compared to standard clustering methods, our approach has the advantages that (i) it is parameter free, (ii) the number of clusters need not be fixed in advance and (iii) the interpretation of the results is transparent. In order to test our approach and compare it with standard clustering algorithms, we analyze two very different data sets: time series of financial market returns and gene expression data. We find that different maximization algorithms produce similar cluster structures, whereas the outcome of standard algorithms shows much wider variability.

  7. Direct maximum parsimony phylogeny reconstruction from genotype data.

    Science.gov (United States)

    Sridhar, Srinath; Lam, Fumei; Blelloch, Guy E; Ravi, R; Schwartz, Russell

    2007-12-05

    Maximum parsimony phylogenetic tree reconstruction from genetic variation data is a fundamental problem in computational genetics with many practical applications in population genetics, whole genome analysis, and the search for genetic predictors of disease. Efficient methods are available for reconstruction of maximum parsimony trees from haplotype data, but such data are difficult to determine directly for autosomal DNA. Data are more commonly available in the form of genotypes, which consist of conflated combinations of pairs of haplotypes from homologous chromosomes. Currently, there are no general algorithms for the direct reconstruction of maximum parsimony phylogenies from genotype data. Phylogenetic applications for autosomal data must therefore rely on other methods for first computationally inferring haplotypes from genotypes. In this work, we develop the first practical method for computing maximum parsimony phylogenies directly from genotype data. We show that the standard practice of first inferring haplotypes from genotypes and then reconstructing a phylogeny on the haplotypes often substantially overestimates phylogeny size. As an immediate application, our method can be used to determine the minimum number of mutations required to explain a given set of observed genotypes. Phylogeny reconstruction directly from unphased data is computationally feasible for moderate-sized problem instances and can lead to substantially more accurate tree size inferences than the standard practice of treating phasing and phylogeny construction as two separate analysis stages. The difference between the approaches is particularly important for downstream applications that require a lower bound on the number of mutations that the genetic region has undergone.

  8. Direct maximum parsimony phylogeny reconstruction from genotype data

    Directory of Open Access Journals (Sweden)

    Ravi R

    2007-12-01

    Full Text Available Abstract Background Maximum parsimony phylogenetic tree reconstruction from genetic variation data is a fundamental problem in computational genetics with many practical applications in population genetics, whole genome analysis, and the search for genetic predictors of disease. Efficient methods are available for reconstruction of maximum parsimony trees from haplotype data, but such data are difficult to determine directly for autosomal DNA. Data are more commonly available in the form of genotypes, which consist of conflated combinations of pairs of haplotypes from homologous chromosomes. Currently, there are no general algorithms for the direct reconstruction of maximum parsimony phylogenies from genotype data. Phylogenetic applications for autosomal data must therefore rely on other methods for first computationally inferring haplotypes from genotypes. Results In this work, we develop the first practical method for computing maximum parsimony phylogenies directly from genotype data. We show that the standard practice of first inferring haplotypes from genotypes and then reconstructing a phylogeny on the haplotypes often substantially overestimates phylogeny size. As an immediate application, our method can be used to determine the minimum number of mutations required to explain a given set of observed genotypes. Conclusion Phylogeny reconstruction directly from unphased data is computationally feasible for moderate-sized problem instances and can lead to substantially more accurate tree size inferences than the standard practice of treating phasing and phylogeny construction as two separate analysis stages. The difference between the approaches is particularly important for downstream applications that require a lower bound on the number of mutations that the genetic region has undergone.

  9. A Method of Forming the Optimal Set of Disjoint Path in Computer Networks

    Directory of Open Access Journals (Sweden)

    As'ad Mahmoud As'ad ALNASER

    2017-04-01

    Full Text Available This work provides a short analysis of multipath routing algorithms. A modified algorithm for forming the maximum set of disjoint paths, taking their metrics into account, is offered. Paths are optimized by reconfiguring them with an adjacent deadlock path. Reconfigurations are realized within subgraphs that include only the vertices of the main path and of an adjacent deadlock path. This reduces the search space for forming an optimal path and the time complexity of its formation.
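
    The flow-based primitive underlying such methods is readily available; the sketch below computes a maximal set of edge-disjoint paths with networkx, i.e. the standard max-flow approach rather than the modified metric-aware algorithm of the paper:

        import networkx as nx

        # Small mesh-like topology; edges are bidirectional links.
        g = nx.Graph()
        g.add_edges_from([("s", "a"), ("a", "t"), ("s", "b"), ("b", "t"),
                          ("s", "c"), ("c", "t"), ("a", "b")])

        # Each returned path shares no edge with the others.
        for path in nx.edge_disjoint_paths(g, "s", "t"):
            print(path)  # e.g. ['s', 'a', 't'], ['s', 'b', 't'], ['s', 'c', 't']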

  10. Capacitated Hub Routing Problem in Hub-and-Feeder Network Design: Modeling and Solution Algorithm

    OpenAIRE

    Gelareh , Shahin; Neamatian Monemi , Rahimeh; Semet , Frédéric

    2015-01-01

    In this paper, we address the Bounded Cardinality Hub Location Routing with Route Capacity wherein each hub acts as a transshipment node for one directed route. The number of hubs lies between a minimum and a maximum and the hub-level network is a complete subgraph. The transshipment operations take place at the hub nodes and flow transfer time from a hub-level transporter to a spoke-level vehicle influences spoke-to-hub allocations. We propose a mathematical model and a b...

  11. Hardness and Approximation for Network Flow Interdiction

    OpenAIRE

    Chestnut, Stephen R.; Zenklusen, Rico

    2015-01-01

    In the Network Flow Interdiction problem an adversary attacks a network in order to minimize the maximum s-t-flow. Very little is known about the approximability of this problem despite decades of interest in it. We present the first approximation hardness, showing that Network Flow Interdiction and several of its variants cannot be much easier to approximate than Densest k-Subgraph. In particular, any $n^{o(1)}$-approximation algorithm for Network Flow Interdiction would imply an $n^{o(1)}

  12. Two-Agent Scheduling to Minimize the Maximum Cost with Position-Dependent Jobs

    Directory of Open Access Journals (Sweden)

    Long Wan

    2015-01-01

    Full Text Available This paper investigates a single-machine two-agent scheduling problem to minimize the maximum cost with position-dependent jobs. There are two agents, each with a set of independent jobs, competing to perform their jobs on a common machine. In our scheduling setting, the actual position-dependent processing time of a job is characterized by a variable function of the position of the job in the sequence. Each agent wants to minimize the maximum cost of its own jobs. We develop a feasible method to achieve all the Pareto optimal points in polynomial time.

  13. A novel frequent probability pattern mining algorithm based on a circuit simulation method in uncertain biological networks

    Science.gov (United States)

    2014-01-01

    Background Motif mining has always been a hot research topic in bioinformatics. Most current research on biological networks focuses on exact motif mining. However, due to inevitable experimental error and noisy data, biological network data represented with a probability model can better reflect authenticity and biological significance; it is therefore more biologically meaningful to discover probability motifs in uncertain biological networks. One of the key steps in probability motif mining is frequent pattern discovery, which is usually based on the possible world model and has relatively high computational complexity. Methods In this paper, we present a novel method for detecting frequent probability patterns based on circuit simulation in uncertain biological networks. First, a partition-based efficient search is applied to mining non-tree-like subgraphs, whose probability of occurrence in random networks is small. Then, an algorithm for probability isomorphism based on circuit simulation is proposed. It combines the analysis of circuit topology with related physical properties of voltage in order to evaluate the probability isomorphism between probability subgraphs, and it avoids the traditional possible world model. Finally, based on the probability subgraph isomorphism algorithm, a two-step hierarchical clustering method is used to cluster subgraphs and discover frequent probability patterns from the clusters. Results Experiments on data sets of Protein-Protein Interaction (PPI) networks and the transcriptional regulatory networks of E. coli and S. cerevisiae show that the proposed method can efficiently discover the frequent probability subgraphs. The discovered subgraphs in our study contain all probability motifs reported in the experiments published in other related papers. Conclusions The algorithm of probability graph isomorphism

  14. Ancestral Sequence Reconstruction with Maximum Parsimony.

    Science.gov (United States)

    Herbst, Lina; Fischer, Mareike

    2017-12-01

    One of the main aims in phylogenetics is the estimation of ancestral sequences based on present-day data like, for instance, DNA alignments. One way to estimate the data of the last common ancestor of a given set of species is to first reconstruct a phylogenetic tree with some tree inference method and then to use some method of ancestral state inference based on that tree. One of the best-known methods both for tree inference and for ancestral sequence inference is Maximum Parsimony (MP). In this manuscript, we focus on this method and on ancestral state inference for fully bifurcating trees. In particular, we investigate a conjecture published by Charleston and Steel in 1995 concerning the number of species which need to have a particular state, say a, at a particular site in order for MP to unambiguously return a as an estimate for the state of the last common ancestor. We prove the conjecture for all even numbers of character states, which is the most relevant case in biology. We also show that the conjecture does not hold in general for odd numbers of character states, but also present some positive results for this case.
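
    For context, the standard MP machinery for ancestral states on a fully bifurcating tree is Fitch's bottom-up pass, sketched below for a single site; the tree encoding and names are ours for illustration:

        def fitch_root_states(tree, leaf_states):
            # tree: nested 2-tuples for a fully bifurcating tree; leaves are
            # labels. Returns the candidate state set at the root; MP is
            # unambiguous exactly when this set is a singleton.
            if not isinstance(tree, tuple):            # leaf
                return {leaf_states[tree]}
            left = fitch_root_states(tree[0], leaf_states)
            right = fitch_root_states(tree[1], leaf_states)
            # Intersection when possible (no extra mutation), else union.
            return left & right if left & right else left | right

        tree = (("s1", "s2"), ("s3", "s4"))
        states = {"s1": "a", "s2": "a", "s3": "a", "s4": "c"}
        print(fitch_root_states(tree, states))  # {'a'}: MP returns 'a'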

  15. An Efficient Monte Carlo Approach to Compute PageRank for Large Graphs on a Single PC

    Directory of Open Access Journals (Sweden)

    Sonobe Tomohiro

    2016-03-01

    Full Text Available This paper describes a novel Monte Carlo based random walk to compute PageRanks of nodes in a large graph on a single PC. The target graphs of this paper are ones whose size is larger than the physical memory. In such an environment, memory management is a difficult task for simulating the random walk among the nodes. We propose a novel method that partitions the graph into subgraphs in order to make them fit into the physical memory, and conducts the random walk for each subgraph. By evaluating the walks lazily, we can conduct the walks only in a subgraph and approximate the random walk by rotating the subgraphs. In computational experiments, the proposed method exhibits good performance for existing large graphs with several passes of the graph data.
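
    The Monte Carlo estimator underneath such methods is short; the sketch below runs complete random walks on an in-memory graph, omitting the paper's contribution of partitioning the graph and lazily rotating subgraphs through memory:

        import random
        from collections import Counter

        def mc_pagerank(out_links, walks_per_node=200, damping=0.85):
            # Simulate walks that continue with probability `damping`;
            # normalized visit counts approximate PageRank as walks accumulate.
            visits = Counter()
            for start in out_links:
                for _ in range(walks_per_node):
                    v = start
                    while True:
                        visits[v] += 1
                        if random.random() > damping or not out_links[v]:
                            break
                        v = random.choice(out_links[v])
            total = sum(visits.values())
            return {n: visits[n] / total for n in out_links}

        g = {"a": ["b"], "b": ["a", "c"], "c": ["a"]}
        print(mc_pagerank(g))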

  16. Common views of potentially attractive fusion concepts

    International Nuclear Information System (INIS)

    Piet, S.J.

    1986-01-01

    Fusion is viewed through three windows to help determine what constitutes a very attractive fusion concept. These windows are economics, maintenance and reliability, and safety and environment. Achievement of many desired features cannot yet be quantified. Although these differing perspectives can lead to some conflicting desires, five common desired features are apparent - (a) minimum failure rates, (b) minimum failure effects, (c) minimum complexity, (d) minimum short-term radioactivity, and (e) maximum mass power density. Some innovative fusion concepts are briefly examined in the light of these commonalities

  17. Capacitated Bounded Cardinality Hub Routing Problem: Model and Solution Algorithm

    OpenAIRE

    Gelareh, Shahin; Neamatian Monemi, Rahimeh; Semet, Frédéric

    2017-01-01

    In this paper, we address the Bounded Cardinality Hub Location Routing with Route Capacity wherein each hub acts as a transshipment node for one directed route. The number of hubs lies between a minimum and a maximum and the hub-level network is a complete subgraph. The transshipment operations take place at the hub nodes and flow transfer time from a hub-level transporter to a spoke-level vehicle influences spoke-to-hub allocations. We propose a mathematical model and a branch-and-cut algor...

  18. Semi-strong split domination in graphs

    Directory of Open Access Journals (Sweden)

    Anwar Alwardi

    2014-06-01

    Full Text Available Given a graph $G = (V,E)$, a dominating set $D \subseteq V$ is called a semi-strong split dominating set of $G$ if $|V \setminus D| \geq 1$ and the maximum degree of the subgraph induced by $V \setminus D$ is 1. The minimum cardinality of a semi-strong split dominating set (SSSDS) of $G$ is the semi-strong split domination number of $G$, denoted $\gamma_{sss}(G)$. In this work, we introduce the concept and prove several results regarding it.
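
    The definition translates directly into a checker; in the sketch below (networkx assumed, function name ours) the induced subgraph is required to have maximum degree exactly 1, following the abstract's wording:

        import networkx as nx

        def is_sssds(g, d):
            # d dominates g, V \ d is nonempty, and the subgraph induced
            # by V \ d has maximum degree 1.
            rest = set(g.nodes) - set(d)
            if not rest:
                return False
            dominated = all(v in d or any(u in d for u in g[v])
                            for v in g.nodes)
            max_deg = max(deg for _, deg in g.subgraph(rest).degree())
            return dominated and max_deg == 1

        g = nx.path_graph(5)        # vertices 0-1-2-3-4
        print(is_sssds(g, {1, 4}))  # True: {0, 2, 3} induces one edge, 2-3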

  19. Maximum-likelihood estimation of recent shared ancestry (ERSA).

    Science.gov (United States)

    Huff, Chad D; Witherspoon, David J; Simonson, Tatum S; Xing, Jinchuan; Watkins, W Scott; Zhang, Yuhua; Tuohy, Therese M; Neklason, Deborah W; Burt, Randall W; Guthery, Stephen L; Woodward, Scott R; Jorde, Lynn B

    2011-05-01

    Accurate estimation of recent shared ancestry is important for genetics, evolution, medicine, conservation biology, and forensics. Established methods estimate kinship accurately for first-degree through third-degree relatives. We demonstrate that chromosomal segments shared by two individuals due to identity by descent (IBD) provide much additional information about shared ancestry. We developed a maximum-likelihood method for the estimation of recent shared ancestry (ERSA) from the number and lengths of IBD segments derived from high-density SNP or whole-genome sequence data. We used ERSA to estimate relationships from SNP genotypes in 169 individuals from three large, well-defined human pedigrees. ERSA is accurate to within one degree of relationship for 97% of first-degree through fifth-degree relatives and 80% of sixth-degree and seventh-degree relatives. We demonstrate that ERSA's statistical power approaches the maximum theoretical limit imposed by the fact that distant relatives frequently share no DNA through a common ancestor. ERSA greatly expands the range of relationships that can be estimated from genetic data and is implemented in a freely available software package.

  20. Computational Support for Creative Design

    KAUST Repository

    Liu, Han

    2015-01-01

    the structures of input models, we propose new automatic operations to discover replaceable substructures across models or within a model. We enforce a pair of subgraphs matching along their boundaries so that switching two subgraphs results in topological

  1. Cosmic shear measurement with maximum likelihood and maximum a posteriori inference

    Science.gov (United States)

    Hall, Alex; Taylor, Andy

    2017-06-01

    We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with promising results. We find that the introduction of an intrinsic shape prior can help with mitigation of noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely subdominant. We show how biases propagate to shear estimates, demonstrating in our simple set-up that shear biases can be reduced by orders of magnitude and potentially to within the requirements of planned space-based surveys at mild signal-to-noise ratio. We find that second-order terms can exhibit significant cancellations at low signal-to-noise ratio when Gaussian noise is assumed, which has implications for inferring the performance of shear-measurement algorithms from simplified simulations. We discuss the viability of our point estimators as tools for lensing inference, arguing that they allow for the robust measurement of ellipticity and shear.

  2. A new sentence generator providing material for maximum reading speed measurement.

    Science.gov (United States)

    Perrin, Jean-Luc; Paillé, Damien; Baccino, Thierry

    2015-12-01

    A new method is proposed to generate text material for assessing maximum reading speed of adult readers. The described procedure allows one to generate a vast number of equivalent short sentences. These sentences can be displayed for different durations in order to determine the reader's maximum speed using a psychophysical threshold algorithm. Each sentence is built so that it is either true or false according to common knowledge. The actual reading is verified by asking the reader to determine the truth value of each sentence. We based our design on the generator described by Crossland et al. and upgraded it. The new generator handles concepts distributed in an ontology, which allows an easy determination of the sentences' truth value and control of lexical and psycholinguistic parameters. In this way many equivalent sentences can be generated and displayed to perform the measurement. Maximum reading speed scores obtained with pseudo-randomly chosen sentences from the generator were strongly correlated with maximum reading speed scores obtained with traditional MNREAD sentences (r = .836). Furthermore, the large number of sentences that can be generated makes it possible to perform repeated measurements, since the possibility of a reader learning individual sentences is eliminated. Researchers interested in within-reader performance variability could use the proposed method for this purpose.
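
    The psychophysical threshold step can be illustrated with a simple one-up/one-down staircase on presentation duration; the simulated reader below is a crude placeholder for a real observer judging generated sentences:

      import random

      def staircase(true_threshold_ms=180.0, start_ms=600.0, factor=0.8,
                    trials=60, seed=3):
          rng = random.Random(seed)
          dur, going_down, reversals = start_ms, True, []
          for _ in range(trials):
              # Crude observer: reliable above threshold, guessing below it.
              correct = rng.random() < (1.0 if dur >= true_threshold_ms else 0.25)
              if correct != going_down:          # direction change = reversal
                  reversals.append(dur)
                  going_down = correct
              dur = dur * factor if correct else dur / factor
          last = reversals[-6:]                  # average over the last reversals
          return sum(last) / len(last)

      print(f"estimated duration threshold ~ {staircase():.0f} ms")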

  3. Superfast maximum-likelihood reconstruction for quantum tomography

    Science.gov (United States)

    Shang, Jiangwei; Zhang, Zhengyun; Ng, Hui Khoon

    2017-06-01

    Conventional methods for computing maximum-likelihood estimators (MLE) often converge slowly in practical situations, leading to a search for simplifying methods that rely on additional assumptions for their validity. In this work, we provide a fast and reliable algorithm for maximum-likelihood reconstruction that avoids this slow convergence. Our method utilizes the state-of-the-art convex optimization scheme, an accelerated projected-gradient method, that allows one to accommodate the quantum nature of the problem in a different way than in the standard methods. We demonstrate the power of our approach by comparing its performance with other algorithms for n-qubit state tomography. In particular, an eight-qubit situation that purportedly took weeks of computation time in 2005 can now be completed in under a minute for a single set of data, with far higher accuracy than previously possible. This refutes the common claim that MLE reconstruction is slow and reduces the need for alternative methods that often come with difficult-to-verify assumptions. In fact, recent methods assuming Gaussian statistics or relying on compressed sensing ideas are demonstrably inapplicable for the situation under consideration here. Our algorithm can be applied to general optimization problems over the quantum state space; the philosophy of projected gradients can further be utilized for optimization contexts with general constraints.
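
    The projected-gradient idea can be sketched compactly: ascend the log-likelihood, then project back onto the set of density matrices (positive semidefinite, unit trace) by projecting the eigenvalue vector onto the probability simplex. This is a bare illustration with a hypothetical single-qubit measurement, not the accelerated algorithm of the paper:

      import numpy as np

      def project_to_density(H):
          # Project a Hermitian matrix onto {rho >= 0, tr(rho) = 1} via
          # Euclidean projection of its eigenvalues onto the simplex.
          w, V = np.linalg.eigh(H)
          u = np.sort(w)[::-1]
          css = np.cumsum(u)
          k = np.nonzero(u - (css - 1.0) / np.arange(1, u.size + 1) > 0)[0][-1] + 1
          lam = np.maximum(w - (css[k - 1] - 1.0) / k, 0.0)
          return (V * lam) @ V.conj().T

      def mle_tomography(E, f, steps=500, lr=0.1):
          rho = np.eye(E[0].shape[0]) / E[0].shape[0]
          for _ in range(steps):
              probs = [np.real(np.trace(Ek @ rho)) for Ek in E]
              grad = sum(fk / pk * Ek for fk, pk, Ek in zip(f, probs, E))
              rho = project_to_density(rho + lr * grad)   # ascent step + projection
          return rho

      P0, P1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])   # Pauli-Z projectors
      print(np.round(mle_tomography([P0, P1], f=[0.8, 0.2]), 3))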

  4. Maximum Acceleration Recording Circuit

    Science.gov (United States)

    Bozeman, Richard J., Jr.

    1995-01-01

    Coarsely digitized maximum levels recorded in blown fuses. Circuit feeds power to accelerometer and makes nonvolatile record of maximum level to which output of accelerometer rises during measurement interval. In comparison with inertia-type single-preset-trip-point mechanical maximum-acceleration-recording devices, circuit weighs less, occupies less space, and records accelerations within narrower bands of uncertainty. In comparison with prior electronic data-acquisition systems designed for same purpose, circuit is simpler, less bulky, consumes less power, costs less, and eliminates recovery and analysis of data recorded in magnetic or electronic memory devices. Circuit used, for example, to record accelerations to which commodities subjected during transportation on trucks.

  5. Neutron spectra unfolding with maximum entropy and maximum likelihood

    International Nuclear Information System (INIS)

    Itoh, Shikoh; Tsunoda, Toshiharu

    1989-01-01

    A new unfolding theory has been established on the basis of the maximum entropy principle and the maximum likelihood method. This theory correctly embodies the Poisson statistics of neutron detection, and always brings a positive solution over the whole energy range. Moreover, the theory unifies both the overdetermined and the underdetermined problems. For the latter, the ambiguity in assigning a prior probability, i.e. the initial guess in the Bayesian sense, is eliminated by virtue of the principle. An approximate expression of the covariance matrix for the resultant spectra is also presented. An efficient algorithm to solve the nonlinear system, which appears in the present study, has been established. Results of computer simulation showed the effectiveness of the present theory. (author)

  6. Maximum Power from a Solar Panel

    Directory of Open Access Journals (Sweden)

    Michael Miller

    2010-01-01

    Full Text Available Solar energy has become a promising alternative to conventional fossil fuel sources. Solar panels are used to collect solar radiation and convert it into electricity. One of the techniques used to maximize the effectiveness of this energy alternative is to maximize the power output of the solar collector. In this project the maximum power is calculated by determining the voltage and current at maximum power. These quantities are determined by finding the maximum value of the equation for power using differentiation. After the maximum values are found for each time of day, each individual quantity (voltage at maximum power, current at maximum power, and maximum power itself) is plotted as a function of the time of day.
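
    The differentiation step amounts to locating the zero of dP/dV for P(V) = V * I(V); a hedged numerical sketch with a hypothetical single-diode panel model:

      import numpy as np
      from scipy.optimize import minimize_scalar

      Isc, I0, Vt = 8.0, 1e-9, 1.8   # hypothetical panel parameters

      def power(V):
          # Single-diode approximation: I(V) = Isc - I0 * (exp(V / Vt) - 1)
          return V * (Isc - I0 * (np.exp(V / Vt) - 1.0))

      res = minimize_scalar(lambda V: -power(V), bounds=(0.0, 45.0), method="bounded")
      Vmp = res.x
      print(f"V_mp = {Vmp:.2f} V, I_mp = {power(Vmp) / Vmp:.2f} A, "
            f"P_max = {power(Vmp):.1f} W")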

  7. Approximation Algorithms for k-Connected Graph Factors

    NARCIS (Netherlands)

    Manthey, Bodo; Waanders, Marten; Sanita, Laura; Skutella, Martin

    2016-01-01

    Finding low-cost spanning subgraphs with given degree and connectivity requirements is a fundamental problem in the area of network design. We consider the problem of finding d-regular spanning subgraphs (or d-factors) of minimum weight with connectivity requirements. For the case of

  8. Evaluation of regulatory variation and theoretical health risk for pesticide maximum residue limits in food.

    Science.gov (United States)

    Li, Zijian

    2018-08-01

    To evaluate whether pesticide maximum residue limits (MRLs) can protect public health, a deterministic dietary risk assessment of maximum pesticide legal exposure was conducted to convert global MRLs to theoretical maximum dose intake (TMDI) values by estimating the average food intake rate and human body weight for each country. A total of 114 nations (58% of the total nations in the world) and two international organizations, including the European Union (EU) and Codex (WHO), have regulated at least one of the most currently used pesticides in at least one of the most consumed agricultural commodities. In this study, 14 of the most commonly used pesticides and 12 of the most commonly consumed agricultural commodities were identified and selected for analysis. A health risk analysis indicated that nearly 30% of the computed pesticide TMDI values were greater than the acceptable daily intake (ADI) values; however, many nations lack common pesticide MRLs in many commonly consumed foods, and other human exposure pathways, such as soil, water, and air, were not considered. Normality tests of the TMDI value sets indicated that all distributions were right-skewed due to large TMDI clusters at the low end of the distribution, which were caused by some strict pesticide MRLs regulated by the EU (normally a default MRL of 0.01 mg/kg when essential data are missing). The Box-Cox transformation and optimal lambda (λ) were applied to these TMDI distributions, and normality tests of the transformed data set indicated that the power-transformed TMDI values of at least eight pesticides presented a normal distribution. It was concluded that unifying strict pesticide MRLs across nations worldwide could significantly skew the distribution of TMDI values to the right, lower the legal exposure to pesticides, and effectively control human health risks. Copyright © 2018 Elsevier Ltd. All rights reserved.
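
    The core computation described above reduces to a few lines; all numbers below are hypothetical placeholders rather than values from the study:

      import numpy as np
      from scipy import stats

      body_weight = 60.0                          # kg, assumed average adult
      intake = {"rice": 0.30, "apple": 0.10}      # kg/day, hypothetical intake rates
      mrl = {"rice": 0.05, "apple": 0.01}         # mg/kg, hypothetical MRLs

      # TMDI = sum over commodities of MRL * intake, per kg body weight
      tmdi = sum(mrl[c] * intake[c] for c in mrl) / body_weight
      adi = 0.001                                 # mg/kg bw/day, hypothetical ADI
      print(f"TMDI = {tmdi:.6f} mg/kg bw/day; exceeds ADI: {tmdi > adi}")

      # Box-Cox transform of a right-skewed synthetic TMDI distribution
      values = np.random.default_rng(0).lognormal(-8.0, 1.0, size=114)
      transformed, lam = stats.boxcox(values)
      print(f"optimal Box-Cox lambda = {lam:.3f}")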

  9. Comparison of standard maximum likelihood classification and polytomous logistic regression used in remote sensing

    Science.gov (United States)

    John Hogland; Nedret Billor; Nathaniel Anderson

    2013-01-01

    Discriminant analysis, referred to as maximum likelihood classification within popular remote sensing software packages, is a common supervised technique used by analysts. Polytomous logistic regression (PLR), also referred to as multinomial logistic regression, is an alternative classification approach that is less restrictive, more flexible, and easy to interpret. To...

  10. The maximum entropy method of moments and Bayesian probability theory

    Science.gov (United States)

    Bretthorst, G. Larry

    2013-08-01

    The problem of density estimation occurs in many disciplines. For example, in MRI it is often necessary to classify the types of tissues in an image. To perform this classification one must first identify the characteristics of the tissues to be classified. These characteristics might be the intensity of a T1 weighted image, and in MRI many other types of characteristic weightings (classifiers) may be generated. In a given tissue type there is no single intensity that characterizes the tissue; rather, there is a distribution of intensities. Often these distributions can be characterized by a Gaussian, but just as often they are much more complicated. Either way, estimating the distribution of intensities is an inference problem. In the case of a Gaussian distribution, one must estimate the mean and standard deviation. However, in the non-Gaussian case the shape of the density function itself must be inferred. Three common techniques for estimating density functions are binned histograms [1, 2], kernel density estimation [3, 4], and the maximum entropy method of moments [5, 6]. In the introduction, the maximum entropy method of moments will be reviewed. Some of its problems and conditions under which it fails will be discussed. Then in later sections, the functional form of the maximum entropy method of moments probability distribution will be incorporated into Bayesian probability theory. It will be shown that Bayesian probability theory solves all of the problems with the maximum entropy method of moments. One gets posterior probabilities for the Lagrange multipliers, and, finally, one can put error bars on the resulting estimated density function.
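
    The method itself fits a density of exponential-family form whose Lagrange multipliers are chosen so the moments match; a minimal sketch on [0, 1] with two hypothetical target moments:

      import numpy as np
      from scipy.optimize import minimize

      x = np.linspace(0.0, 1.0, 2001)
      dx = x[1] - x[0]
      targets = np.array([0.4, 0.2])          # desired E[x] and E[x^2]
      powers = np.vstack([x, x ** 2])

      def dual(lam):
          # Convex dual of the maximum entropy problem: log Z + lam . targets;
          # at its minimum, the moments of p ~ exp(-lam . x^j) match the targets.
          logZ = np.log(np.exp(-lam @ powers).sum() * dx)
          return logZ + lam @ targets

      lam = minimize(dual, x0=np.zeros(2), method="BFGS").x
      p = np.exp(-lam @ powers)
      p /= p.sum() * dx
      print("fitted moments:", [round(float((p * x ** k).sum() * dx), 4) for k in (1, 2)])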

  11. Partitioning graphs into connected parts

    NARCIS (Netherlands)

    Hof, van 't P.; Paulusma, D.; Woeginger, G.J.; Frid, A.; Morozov, A.S.; Rybalchenko, A.; Wagner, K.W.

    2009-01-01

    The 2-DISJOINT CONNECTED SUBGRAPHS problem asks if a given graph has two vertex-disjoint connected subgraphs containing pre-specified sets of vertices. We show that this problem is NP-complete even if one of the sets has cardinality 2. The LONGEST PATH CONTRACTIBILITY problem asks for the largest

  12. Image Analysis and Modeling

    Science.gov (United States)

    1975-05-01

    place "subgraphs" with more complicated subgraphs. There have been many results reported which extend string-grammar theorems to web grammar...W. Bacus and E. E. Gose, "Leukocyte Pattern Recognition," IEEE SMC-2, No. 2, pp. 513-536, September 1972. [4] J. K. Hawkins

  13. Maximum concentrations at work and maximum biologically tolerable concentration for working materials 1991

    International Nuclear Information System (INIS)

    1991-01-01

    The meaning of the term 'maximum concentration at work' in regard to various pollutants is discussed. Specifically, a number of dusts and smokes are dealt with. The valuation criteria for maximum biologically tolerable concentrations for working materials are indicated. The working materials in question are carcinogenic substances or substances liable to cause allergies or to mutate the genome. (VT)

  14. 40 CFR 1042.140 - Maximum engine power, displacement, power density, and maximum in-use engine speed.

    Science.gov (United States)

    2010-07-01

    ... cylinders having an internal diameter of 13.0 cm and a 15.5 cm stroke length, the rounded displacement would... 40 Protection of Environment 32 2010-07-01 2010-07-01 false Maximum engine power, displacement... Maximum engine power, displacement, power density, and maximum in-use engine speed. This section describes...

  15. Credal Networks under Maximum Entropy

    OpenAIRE

    Lukasiewicz, Thomas

    2013-01-01

    We apply the principle of maximum entropy to select a unique joint probability distribution from the set of all joint probability distributions specified by a credal network. In detail, we start by showing that the unique joint distribution of a Bayesian tree coincides with the maximum entropy model of its conditional distributions. This result, however, does not hold anymore for general Bayesian networks. We thus present a new kind of maximum entropy models, which are computed sequentially. ...

  16. Maximum Entropy in Drug Discovery

    Directory of Open Access Journals (Sweden)

    Chih-Yuan Tseng

    2014-07-01

    Full Text Available Drug discovery applies multidisciplinary approaches either experimentally, computationally or both ways to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA-approved drugs, researchers continue investigating and designing better approaches to increase the success rate in the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its root in thermodynamics, yet since Jaynes’ pioneering work in the 1950s, the maximum entropy principle has not only been used as a physics law, but also as a reasoning tool that allows us to process information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on the ways of utilizing maximum entropy.

  17. Maximum Quantum Entropy Method

    OpenAIRE

    Sim, Jae-Hoon; Han, Myung Joon

    2018-01-01

    Maximum entropy method for analytic continuation is extended by introducing quantum relative entropy. This new method is formulated in terms of matrix-valued functions and therefore invariant under arbitrary unitary transformation of input matrix. As a result, the continuation of off-diagonal elements becomes straightforward. Without introducing any further ambiguity, the Bayesian probabilistic interpretation is maintained just as in the conventional maximum entropy method. The applications o...

  18. Weighted Maximum-Clique Transversal Sets of Graphs

    OpenAIRE

    Chuan-Min Lee

    2011-01-01

    A maximum-clique transversal set of a graph G is a subset of vertices intersecting all maximum cliques of G. The maximum-clique transversal set problem is to find a maximum-clique transversal set of G of minimum cardinality. Motivated by the placement of transmitters for cellular telephones, Chang, Kloks, and Lee introduced the concept of maximum-clique transversal sets on graphs in 2001. In this paper, we study the weighted version of the maximum-clique transversal set problem for split grap...

  19. Maximum power demand cost

    International Nuclear Information System (INIS)

    Biondi, L.

    1998-01-01

    The charging for a service is a supplier's remuneration for the expenses incurred in providing it. There are currently two charges for electricity: consumption and maximum demand. While no problem arises about the former, the issue is more complicated for the latter, and the analysis in this article tends to show that the annual charge for maximum demand arbitrarily discriminates among consumer groups, to the disadvantage of some.

  20. Polyhedral Computations for the Simple Graph Partitioning Problem

    DEFF Research Database (Denmark)

    Sørensen, Michael Malmros

    The simple graph partitioning problem is to partition an edge-weighted graph into mutually disjoint subgraphs, each containing no more than b nodes, such that the sum of the weights of all edges in the subgraphs is maximal. In this paper we present a branch-and-cut algorithm for the problem that ...

  1. Equipackable graphs

    DEFF Research Database (Denmark)

    Vestergaard, Preben Dahl; Hartnell, Bert L.

    2006-01-01

    There are many results dealing with the problem of decomposing a fixed graph into isomorphic subgraphs. There has also been work on characterizing graphs with the property that one can delete the edges of a number of edge disjoint copies of the subgraph and, regardless of how that is done, the gr...

  2. A New Fuzzy-Based Maximum Power Point Tracker for a Solar Panel Based on Datasheet Values

    Directory of Open Access Journals (Sweden)

    Ali Kargarnejad

    2013-01-01

    Full Text Available Tracking the maximum power point of a solar panel is of interest in most photovoltaic applications. Modeling a solar panel exclusively from manufacturer datasheet values is also of interest. Since manufacturers generally give the electrical specifications of their products at a single operating condition, there are many cases in which the specifications at other conditions are needed. In this research, a comprehensive one-diode model for a solar panel with maximum obtainable accuracy is fully developed based only on datasheet values. The dependencies of the model parameters on environmental conditions are taken into consideration as far as possible. Comparison between real data and simulation results shows that the proposed model has maximum obtainable accuracy. A new fuzzy-based controller to track the maximum power point of the solar panel is then proposed, which shows a better response in terms of speed, accuracy, and stability than the common controller developed previously.

  3. Chromatic number via Turán number

    DEFF Research Database (Denmark)

    Alishahi, Meysam; Hajiabolhassan, Hossein

    2017-01-01

    For a graph G and a family of graphs F, the general Kneser graph KG(G, F) is a graph with the vertex set consisting of all subgraphs of G isomorphic to some member of F and two vertices are adjacent if their corresponding subgraphs are edge disjoint. In this paper, we introduce some generalizatio...

  4. 49 CFR 230.24 - Maximum allowable stress.

    Science.gov (United States)

    2010-10-01

    ... 49 Transportation 4 2010-10-01 2010-10-01 false Maximum allowable stress. 230.24 Section 230.24... Allowable Stress § 230.24 Maximum allowable stress. (a) Maximum allowable stress value. The maximum allowable stress value on any component of a steam locomotive boiler shall not exceed 1/4 of the ultimate...

  5. Total mercury concentration in common fish species of Lake Victoria ...

    African Journals Online (AJOL)

    Total mercury (THg) concentration was analysed in muscles of common fish species of Lake Victoria in the eastern and southern parts of the lake using cold vapour Atomic Absorption Spectrophotometric technique. Mercury concentration in all fish species was generally lower than the WHO maximum allowable ...

  6. Path-fan Ramsey numbers

    NARCIS (Netherlands)

    Salman, M.; Broersma, Haitze J.

    2003-01-01

    For two given graphs $G$ and $H$, the Ramsey number $R(G,H)$ is the smallest positive integer $p$ such that for every graph $F$ on $p$ vertices the following holds: either $F$ contains $G$ as a subgraph or the complement of $F$ contains $H$ as a subgraph. In this paper, we study the Ramsey numbers

  7. Path-fan Ramsey numbers

    NARCIS (Netherlands)

    Salman, M.; Broersma, Haitze J.

    For two given graphs $F$ and $H$, the Ramsey number $R(F,H)$ is the smallest positive integer $p$ such that for every graph $G$ on $p$ vertices the following holds: either $G$ contains $F$ as a subgraph or the complement of $G$ contains $H$ as a subgraph. In this paper, we study the Ramsey numbers

  8. Path-kipas Ramsey numbers

    NARCIS (Netherlands)

    Salman, M.; Broersma, Haitze J.

    2007-01-01

    For two given graphs $F$ and $H$, the Ramsey number $R(F,H)$ is the smallest positive integer $p$ such that for every graph $G$ on $p$ vertices the following holds: either $G$ contains $F$ as a subgraph or the complement of $G$ contains $H$ as a subgraph. In this paper, we study the Ramsey numbers

  9. A maximum power point tracking for photovoltaic-SPE system using a maximum current controller

    Energy Technology Data Exchange (ETDEWEB)

    Muhida, Riza [Osaka Univ., Dept. of Physical Science, Toyonaka, Osaka (Japan); Osaka Univ., Dept. of Electrical Engineering, Suita, Osaka (Japan); Park, Minwon; Dakkak, Mohammed; Matsuura, Kenji [Osaka Univ., Dept. of Electrical Engineering, Suita, Osaka (Japan); Tsuyoshi, Akira; Michira, Masakazu [Kobe City College of Technology, Nishi-ku, Kobe (Japan)

    2003-02-01

    Processes to produce hydrogen from solar photovoltaic (PV)-powered water electrolysis using solid polymer electrolysis (SPE) are reported. An alternative control of maximum power point tracking (MPPT) in the PV-SPE system, based on maximum current searching methods, has been designed and implemented. Based on the voltage-current characteristics and a theoretical analysis of the SPE, it can be shown that tracking the maximum current output of the DC-DC converter on the SPE side will simultaneously track the maximum power point of the photovoltaic panel. This method uses a proportional-integral controller to control the duty factor of the DC-DC converter with a pulse-width modulator (PWM). The MPPT performance and hydrogen production performance of this method have been evaluated and discussed based on the results of the experiment. (Author)

  10. Parallelization of maximum likelihood fits with OpenMP and CUDA

    CERN Document Server

    Jarp, S; Leduc, J; Nowak, A; Pantaleo, F

    2011-01-01

    Data analyses based on maximum likelihood fits are commonly used in the high energy physics community for fitting statistical models to data samples. This technique requires the numerical minimization of the negative log-likelihood function. MINUIT is the most common package used for this purpose in the high energy physics community. The main algorithm in this package, MIGRAD, searches the minimum by using the gradient information. The procedure requires several evaluations of the function, depending on the number of free parameters and their initial values. The whole procedure can be very CPU-time consuming in case of complex functions, with several free parameters, many independent variables and large data samples. Therefore, it becomes particularly important to speed-up the evaluation of the negative log-likelihood function. In this paper we present an algorithm and its implementation which benefits from data vectorization and parallelization (based on OpenMP) and which was also ported to Graphics Processi...
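
    The key point, that per-event likelihood terms are independent and can be evaluated in bulk, can be mimicked with NumPy vectorization (a stand-in for the OpenMP/CUDA parallelism in the paper); the Gaussian-plus-uniform model and data are hypothetical:

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(1)
      data = np.concatenate([rng.normal(5.0, 0.5, 8000), rng.uniform(0, 10, 2000)])

      def nll(params):
          frac, mu, sigma = params
          sig = abs(sigma)
          gauss = np.exp(-0.5 * ((data - mu) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))
          pdf = frac * gauss + (1.0 - frac) * 0.1   # uniform density on [0, 10]
          return -np.log(np.clip(pdf, 1e-300, None)).sum()  # one term per event

      print(minimize(nll, x0=[0.5, 4.0, 1.0], method="Nelder-Mead").x)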

  11. Bistability, non-ergodicity, and inhibition in pairwise maximum-entropy models.

    Science.gov (United States)

    Rostami, Vahid; Porta Mana, PierGianLuca; Grün, Sonja; Helias, Moritz

    2017-10-01

    Pairwise maximum-entropy models have been used in neuroscience to predict the activity of neuronal populations, given only the time-averaged correlations of the neuron activities. This paper provides evidence that the pairwise model, applied to experimental recordings, would produce a bimodal distribution for the population-averaged activity, and for some population sizes the second mode would peak at high activities, which experimentally would be equivalent to 90% of the neuron population being active within time windows of a few milliseconds. Several problems are connected with this bimodality: 1. The presence of the high-activity mode is unrealistic in view of observed neuronal activity and on neurobiological grounds. 2. Boltzmann learning becomes non-ergodic, hence the pairwise maximum-entropy distribution cannot be found: in fact, Boltzmann learning would produce an incorrect distribution; similarly, common variants of mean-field approximations also produce an incorrect distribution. 3. The Glauber dynamics associated with the model is unrealistically bistable and cannot be used to generate realistic surrogate data. This bimodality problem is first demonstrated for an experimental dataset from 159 neurons in the motor cortex of macaque monkey. Evidence is then provided that this problem affects typical neural recordings of population sizes of a couple of hundred or more neurons. The cause of the bimodality problem is identified as the inability of standard maximum-entropy distributions with a uniform reference measure to model neuronal inhibition. To eliminate this problem a modified maximum-entropy model is presented, which reflects a basic effect of inhibition in the form of a simple but non-uniform reference measure. This model does not lead to unrealistic bimodalities, can be found with Boltzmann learning, and has an associated Glauber dynamics which incorporates a minimal asymmetric inhibition.

  12. Lake Basin Fetch and Maximum Length/Width

    Data.gov (United States)

    Minnesota Department of Natural Resources — Linear features representing the Fetch, Maximum Length and Maximum Width of a lake basin. Fetch, maximum length and average width are calcuated from the lake polygon...

  13. Maximum permissible voltage of YBCO coated conductors

    Energy Technology Data Exchange (ETDEWEB)

    Wen, J.; Lin, B.; Sheng, J.; Xu, J.; Jin, Z. [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Hong, Z., E-mail: zhiyong.hong@sjtu.edu.cn [Department of Electrical Engineering, Shanghai Jiao Tong University, Shanghai (China); Wang, D.; Zhou, H.; Shen, X.; Shen, C. [Qingpu Power Supply Company, State Grid Shanghai Municipal Electric Power Company, Shanghai (China)

    2014-06-15

    Highlights: • We examine the maximum permissible voltage of three kinds of tapes. • We examine the relationship between quenching duration and maximum permissible voltage. • Continuous I_c degradation under repetitive quenching where tapes reach the maximum permissible voltage. • The relationship between maximum permissible voltage and resistance, temperature. - Abstract: Superconducting fault current limiters (SFCL) could reduce short-circuit currents in electrical power systems. One of the most important things in developing an SFCL is to find out the maximum permissible voltage of each limiting element. The maximum permissible voltage is defined as the maximum voltage per unit length at which the YBCO coated conductors (CC) do not suffer from critical current (I_c) degradation or burnout. In this research, the duration of the quenching process is changed and the voltage is raised until I_c degradation or burnout happens. The YBCO coated conductors tested in the experiment are from American Superconductor (AMSC) and Shanghai Jiao Tong University (SJTU). As the quenching duration increases, the maximum permissible voltage of the CC decreases. When the quenching duration is 100 ms, the maximum permissible voltages of the SJTU CC, 12 mm AMSC CC and 4 mm AMSC CC are 0.72 V/cm, 0.52 V/cm and 1.2 V/cm respectively. Based on the results of the samples, the whole length of CCs used in the design of an SFCL can be determined.

  14. Introduction to graph theory

    CERN Document Server

    Trudeau, Richard J

    1994-01-01

    Preface; 1. Pure Mathematics: Introduction; Euclidean Geometry as Pure Mathematics; Games; Why Study Pure Mathematics?; What's Coming; Suggested Reading; 2. Graphs: Introduction; Sets; Paradox; Graphs; Graph Diagrams; Cautions; Common Graphs; Discovery; Complements and Subgraphs; Isomorphism; Recognizing Isomorphic Graphs; Semantics; The Number of Graphs Having a Given nu; Exercises; Suggested Reading; 3. Planar Graphs: Introduction; UG, K_5, and the Jordan Curve Theorem; Are There More Nonplanar Graphs?; Expansions; Kuratowski's Theorem; Determining Whether a Graph is Planar or

  15. Receiver function estimated by maximum entropy deconvolution

    Institute of Scientific and Technical Information of China (English)

    吴庆举; 田小波; 张乃铃; 李卫平; 曾融生

    2003-01-01

    Maximum entropy deconvolution is presented to estimate the receiver function, with maximum entropy as the rule to determine the auto-correlation and cross-correlation functions. The Toeplitz equation and the Levinson algorithm are used to calculate the iterative formula of the error-predicting filter, and the receiver function is then estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps the maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method for measuring receiver functions in the time domain.
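
    The Toeplitz step mentioned above is classically solved by the Levinson-Durbin recursion; a generic sketch (toy AR(1) autocorrelation, not the authors' receiver-function code):

      import numpy as np

      def levinson_durbin(r, order):
          # Solve the Toeplitz normal equations for a prediction-error filter
          # a (with a[0] = 1) given autocorrelations r[0..order]; returns (a, err).
          a = np.zeros(order + 1)
          a[0] = 1.0
          err = r[0]
          for m in range(1, order + 1):
              acc = r[m] + np.dot(a[1:m], r[m - 1:0:-1])
              k = -acc / err                       # reflection coefficient
              a[1:m] = a[1:m] + k * a[m - 1:0:-1]
              a[m] = k
              err *= 1.0 - k * k                   # |k| < 1 keeps the recursion stable
          return a, err

      r = 0.7 ** np.arange(4)                      # AR(1) autocorrelation, rho = 0.7
      a, err = levinson_durbin(r, order=3)
      print(np.round(a, 3), round(float(err), 3))  # ~ [1, -0.7, 0, 0], 0.51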

  16. On the relationship between Gaussian stochastic blockmodels and label propagation algorithms

    International Nuclear Information System (INIS)

    Zhang, Junhao; Hu, Junfeng; Chen, Tongfei

    2015-01-01

    The problem of community detection has received great attention in recent years. Many methods have been proposed to discover communities in networks. In this paper, we propose a Gaussian stochastic blockmodel that uses Gaussian distributions to fit weight of edges in networks for non-overlapping community detection. The maximum likelihood estimation of this model has the same objective function as general label propagation with node preference. The node preference of a specific vertex turns out to be a value proportional to the intra-community eigenvector centrality (the corresponding entry in principal eigenvector of the adjacency matrix of the subgraph inside that vertex's community) under maximum likelihood estimation. Additionally, the maximum likelihood estimation of a constrained version of our model is highly related to another extension of the label propagation algorithm, namely, the label propagation algorithm under constraint. Experiments show that the proposed Gaussian stochastic blockmodel performs well on various benchmark networks. (paper)
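
    The label-propagation side of this correspondence is easy to sketch: each node repeatedly adopts the label that maximizes the preference-weighted sum over its neighbours. The graph and the uniform preference below are toy placeholders, not the paper's estimator:

      import random
      import networkx as nx

      def label_propagation(G, preference, iters=30, seed=0):
          rng = random.Random(seed)
          labels = {v: v for v in G}          # start with unique labels
          nodes = list(G)
          for _ in range(iters):
              rng.shuffle(nodes)
              for v in nodes:
                  score = {}
                  for u in G[v]:
                      w = G[v][u].get("weight", 1.0)
                      score[labels[u]] = score.get(labels[u], 0.0) + w * preference[u]
                  if score:
                      labels[v] = max(score, key=score.get)
          return labels

      G = nx.barbell_graph(5, 0)              # two cliques joined by one edge
      print(label_propagation(G, {v: 1.0 for v in G}))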

  17. The Ramsey numbers of large cycles versus small wheels

    NARCIS (Netherlands)

    Surahmat,; Baskoro, E.T.; Broersma, H.J.

    2004-01-01

    For two given graphs G and H, the Ramsey number R(G;H) is the smallest positive integer N such that for every graph F of order N the following holds: either F contains G as a subgraph or the complement of F contains H as a subgraph. In this paper, we determine the Ramsey number R(Cn;Wm) for m = 4

  18. Graphs with not all possible path-kernels

    DEFF Research Database (Denmark)

    Aldred, Robert; Thomassen, Carsten

    2004-01-01

    The Path Partition Conjecture states that the vertices of a graph G with longest path of length c may be partitioned into two parts X and Y such that the longest path in the subgraph of G induced by X has length at most a and the longest path in the subgraph of G induced by Y has length at most b...

  19. The Ramsey Numbers of Paths Versus Fans

    NARCIS (Netherlands)

    Salman, M.; Broersma, Haitze J.; Faigle, U.; Hurink, Johann L.; Pickl, Stefan; Woeginger, Gerhard

    2003-01-01

    For two given graphs G and H, the Ramsey number R(G,H) is the smallest positive integer p such that for every graph F on p vertices the following holds: either F contains G as a subgraph or the complement of F contains H as a subgraph. In this paper, we study the Ramsey numbers R(Pn,Fm), where Pn is

  20. Heuristic algorithm for determination of local properties of scale-free networks

    CERN Document Server

    Mitrovic, M

    2006-01-01

    Complex networks are everywhere. Many phenomena in nature can be modeled as networks: - brain structures - protein-protein interaction networks - social interactions - the Internet and WWW. They can be represented in terms of nodes and edges connecting them. Important characteristics: - these networks are not random; they have a structured architecture. Structure of different networks are similar: - all have power law degree distribution (scale-free property) - despite large size there is usually relatively short path between any two nodes (small world property). Global characteristics: - degree distribution, clustering coefficient and the diameter. Local structure: - frequency of subgraphs of given type (subgraph of order k is a part of the network consisting of k nodes and edges between them). There are different types of subgraphs of the same order.
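
    Counting order-3 subgraphs, the simplest of these local properties, is a one-screen exercise; the scale-free test graph below is a toy stand-in:

      import networkx as nx

      G = nx.barabasi_albert_graph(200, 3, seed=42)   # toy scale-free network

      triangles = sum(nx.triangles(G).values()) // 3  # each triangle counted at 3 nodes
      two_paths = sum(d * (d - 1) // 2 for _, d in G.degree())
      open_paths = two_paths - 3 * triangles          # 2-paths not closed into triangles
      print(f"triangles: {triangles}, open 2-paths: {open_paths}")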

  1. Maximum Gene-Support Tree

    Directory of Open Access Journals (Sweden)

    Yunfeng Shan

    2008-01-01

    Full Text Available Genomes and genes diversify during evolution; however, it is unclear to what extent genes still retain the relationship among species. Model species for molecular phylogenetic studies include yeasts and viruses whose genomes were sequenced as well as plants that have the fossil-supported true phylogenetic trees available. In this study, we generated single gene trees of seven yeast species as well as single gene trees of nine baculovirus species using all the orthologous genes among the species compared. Homologous genes among seven known plants were used for validation of the finding. Four algorithms—maximum parsimony (MP), minimum evolution (ME), maximum likelihood (ML), and neighbor-joining (NJ)—were used. Trees were reconstructed before and after weighting the DNA and protein sequence lengths among genes. Rarely can a gene always generate the "true tree" by all four algorithms. However, the most frequent gene tree, termed the "maximum gene-support tree" (MGS tree, or WMGS tree for the weighted one), in yeasts, baculoviruses, or plants was consistently found to be the "true tree" among the species. The results provide insights into the overall degree of divergence of orthologous genes of the genomes analyzed and suggest the following: (1) the true tree relationship among the species studied is still maintained by the largest group of orthologous genes; (2) there are usually more orthologous genes with higher similarities between genetically closer species than between genetically more distant ones; and (3) the maximum gene-support tree reflects the phylogenetic relationship among species in comparison.

  2. Common-Message Broadcast Channels with Feedback in the Nonasymptotic Regime: Full Feedback

    DEFF Research Database (Denmark)

    Trillingsgaard, Kasper Fløe; Yang, Wei; Durisi, Giuseppe

    2018-01-01

    We investigate the maximum coding rate achievable on a two-user broadcast channel for the case where a common message is transmitted with feedback using either fixed-blocklength codes or variable-length codes. For the fixed-blocklength-code setup, we establish nonasymptotic converse and achievability bounds. An asymptotic analysis of these bounds reveals that feedback improves the second-order term compared to the no-feedback case. In particular, for a certain class of anti-symmetric broadcast channels, we show that the dispersion is halved. For the variable-length-code setup, we demonstrate...

  3. Maximum neutron flux in thermal reactors

    International Nuclear Information System (INIS)

    Strugar, P.V.

    1968-12-01

    The direct approach to the problem is to calculate the spatial distribution of fuel concentration in the reactor core using the condition of maximum neutron flux while complying with thermal limitations. This paper shows that the problem can be solved by applying the variational calculus, i.e. by using the maximum principle of Pontryagin. The mathematical model of the reactor core is based on two-group neutron diffusion theory with some simplifications that make it amenable to the maximum principle. The solution for the optimum distribution of fuel concentration in the reactor core is obtained in explicit analytical form. The reactor critical dimensions are roots of a system of nonlinear equations, and verification of the optimum conditions can be done only for specific examples.

  4. 75 FR 43840 - Inflation Adjustment of the Ordinary Maximum and Aggravated Maximum Civil Monetary Penalties for...

    Science.gov (United States)

    2010-07-27

    ...-17530; Notice No. 2] RIN 2130-ZA03 Inflation Adjustment of the Ordinary Maximum and Aggravated Maximum... remains at $250. These adjustments are required by the Federal Civil Penalties Inflation Adjustment Act... SUPPLEMENTARY INFORMATION: The Federal Civil Penalties Inflation Adjustment Act of 1990...

  5. Half-width at half-maximum, full-width at half-maximum analysis

    Indian Academy of Sciences (India)

    addition to the well-defined parameter full-width at half-maximum (FWHM). The distribution of ... optical side-lobes in the diffraction pattern resulting in steep central maxima [6], reduction of effects of ... and broad central peak. The idea of.

  6. Two-dimensional maximum entropy image restoration

    International Nuclear Information System (INIS)

    Brolley, J.E.; Lazarus, R.B.; Suydam, B.R.; Trussell, H.J.

    1977-07-01

    An optical check problem was constructed to test P LOG P maximum entropy restoration of an extremely distorted image. Useful recovery of the original image was obtained. Comparison with maximum a posteriori restoration is made. 7 figures

  7. Maximum power point tracking

    International Nuclear Information System (INIS)

    Enslin, J.H.R.

    1990-01-01

    A well-engineered renewable remote energy system, utilizing the principle of maximum power point tracking, can be more cost-effective, has a higher reliability and can improve the quality of life in remote areas. This paper reports that a highly efficient power electronic converter, for converting the output voltage of a solar panel or wind generator to the required DC battery bus voltage, has been realized. The converter is controlled to track the maximum power point of the input source under varying input and output parameters. Maximum power point tracking for relatively small systems is achieved by maximization of the output current in a battery charging regulator, using an optimized hill-climbing, inexpensive microprocessor-based algorithm. Through practical field measurements it is shown that a minimum input source saving of 15% on 3-5 kWh/day systems can easily be achieved. A total cost saving of at least 10-15% on the capital cost of these systems is achievable for relatively small Remote Area Power Supply systems. The advantages are much greater for larger temperature variations and higher-rated systems. Other advantages include optimal sizing and system monitoring and control.
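
    The hill-climbing idea reduces to perturb-and-observe: step the operating voltage, keep the direction while power rises, reverse it when power falls. A minimal sketch against a hypothetical single-diode panel curve:

      import math

      def panel_current(V, Isc=8.0, I0=1e-9, Vt=1.8):
          return Isc - I0 * (math.exp(V / Vt) - 1.0)

      def mppt_perturb_observe(V=10.0, step=0.05, iters=2000):
          direction = 1.0
          P_prev = V * panel_current(V)
          for _ in range(iters):
              V += direction * step
              P = V * panel_current(V)
              if P < P_prev:                # power dropped: reverse the perturbation
                  direction = -direction
              P_prev = P
          return V, P_prev

      V, P = mppt_perturb_observe()
      print(f"operating point ~ {V:.2f} V, {P:.1f} W")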

  8. 7 CFR 3565.210 - Maximum interest rate.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 15 2010-01-01 2010-01-01 false Maximum interest rate. 3565.210 Section 3565.210... AGRICULTURE GUARANTEED RURAL RENTAL HOUSING PROGRAM Loan Requirements § 3565.210 Maximum interest rate. The interest rate for a guaranteed loan must not exceed the maximum allowable rate specified by the Agency in...

  9. Similarity Measure of Graphs

    Directory of Open Access Journals (Sweden)

    Amine Labriji

    2017-07-01

    Full Text Available The topic of identifying the similarity of graphs has been considered a highly recommended research field in the semantic Web, artificial intelligence, shape recognition and information retrieval. One of the fundamental problems of graph databases is finding the graphs most similar to a query graph. Existing approaches dealing with this problem are usually based on the nodes and arcs of the two graphs, regardless of hierarchical semantic links. For instance, a common connection is not identified as being part of the similarity of two graphs in cases like two graphs without common concepts, whether the measure of similarity is based on the union of the two graphs, on the notion of the maximum common subgraph (MCS), or on the graph edit distance. This leads to an inadequate situation in the context of information retrieval. To overcome this problem, we suggest a new measure of similarity between graphs, based on the similarity measure of Wu and Palmer. We show that this new measure satisfies the properties of a similarity measure, and we apply it to examples. The results show that our measure runs faster than existing approaches. In addition, comparing the relevance of the similarity values obtained, this new graph measure appears advantageous and offers a contribution to solving the problem mentioned above.
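
    The Wu-Palmer measure underlying the proposal scores two concepts by the depth of their deepest common ancestor: sim(a, b) = 2 * depth(lcs) / (depth(a) + depth(b)). A self-contained sketch on a tiny hypothetical taxonomy:

      parent = {"dog": "mammal", "cat": "mammal", "mammal": "animal",
                "sparrow": "bird", "bird": "animal", "animal": None}

      def ancestors(c):
          path = []
          while c is not None:
              path.append(c)
              c = parent[c]
          return path                        # from c up to the root

      def depth(c):
          return len(ancestors(c))           # the root has depth 1

      def wu_palmer(a, b):
          anc_b = set(ancestors(b))
          lcs = next(c for c in ancestors(a) if c in anc_b)  # deepest common ancestor
          return 2.0 * depth(lcs) / (depth(a) + depth(b))

      print(wu_palmer("dog", "cat"))         # 2*2/(3+3) = 0.667
      print(wu_palmer("dog", "sparrow"))     # 2*1/(3+3) = 0.333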

  10. Probing Ionic Liquid Aqueous Solutions Using Temperature of Maximum Density Isotope Effects

    Directory of Open Access Journals (Sweden)

    Mohammad Tariq

    2013-03-01

    Full Text Available This work is a new development of an extensive research program that is investigating for the first time shifts in the temperature of maximum density (TMD of aqueous solutions caused by ionic liquid solutes. In the present case we have compared the shifts caused by three ionic liquid solutes with a common cation—1-ethyl-3-methylimidazolium coupled with acetate, ethylsulfate and tetracyanoborate anions—in normal and deuterated water solutions. The observed differences are discussed in terms of the nature of the corresponding anion-water interactions.

  11. MXLKID: a maximum likelihood parameter identifier

    International Nuclear Information System (INIS)

    Gavel, D.T.

    1980-07-01

    MXLKID (MaXimum LiKelihood IDentifier) is a computer program designed to identify unknown parameters in a nonlinear dynamic system. Using noisy measurement data from the system, the maximum likelihood identifier computes a likelihood function (LF). Identification of system parameters is accomplished by maximizing the LF with respect to the parameters. The main body of this report briefly summarizes the maximum likelihood technique and gives instructions and examples for running the MXLKID program. MXLKID is implemented in LRLTRAN on the CDC7600 computer at LLNL. A detailed mathematical description of the algorithm is given in the appendices. 24 figures, 6 tables

  12. A note on Ramsey numbers for fans

    NARCIS (Netherlands)

    Zhang, Yanbo; Broersma, Haitze J.; Chen, Yaojun

    For two given graphs G1 and G2, the Ramsey number R(G1,G2) is the smallest integer N such that, for any graph G of order N, either G contains G1 as a subgraph or the complement of G contains G2 as a subgraph. A fan Fl is l triangles sharing exactly one vertex. In this note, it is shown that R(Fn,

  13. 78 FR 9845 - Minimum and Ordinary Maximum and Aggravated Maximum Civil Monetary Penalties for a Violation of...

    Science.gov (United States)

    2013-02-12

    ... maximum penalty amount of $75,000 for each violation, except that if the violation results in death... the maximum civil penalty for a violation is $175,000 if the violation results in death, serious... Penalties for a Violation of the Hazardous Materials Transportation Laws or Regulations, Orders, Special...

  14. Integrative analysis of many weighted co-expression networks using tensor computation.

    Directory of Open Access Journals (Sweden)

    Wenyuan Li

    2011-06-01

    Full Text Available The rapid accumulation of biological networks poses new challenges and calls for powerful integrative analysis tools. Most existing methods capable of simultaneously analyzing a large number of networks were primarily designed for unweighted networks, and cannot easily be extended to weighted networks. However, it is known that transforming weighted into unweighted networks by dichotomizing the edges of weighted networks with a threshold generally leads to information loss. We have developed a novel, tensor-based computational framework for mining recurrent heavy subgraphs in a large set of massive weighted networks. Specifically, we formulate the recurrent heavy subgraph identification problem as a heavy 3D subtensor discovery problem with sparse constraints. We describe an effective approach to solving this problem by designing a multi-stage, convex relaxation protocol, and a non-uniform edge sampling technique. We applied our method to 130 co-expression networks, and identified 11,394 recurrent heavy subgraphs, grouped into 2,810 families. We demonstrated that the identified subgraphs represent meaningful biological modules by validating against a large set of compiled biological knowledge bases. We also showed that the likelihood for a heavy subgraph to be meaningful increases significantly with its recurrence in multiple networks, highlighting the importance of the integrative approach to biological network analysis. Moreover, our approach based on weighted graphs detects many patterns that would be overlooked using unweighted graphs. In addition, we identified a large number of modules that occur predominately under specific phenotypes. This analysis resulted in a genome-wide mapping of gene network modules onto the phenome. Finally, by comparing module activities across many datasets, we discovered high-order dynamic cooperativeness in protein complex networks and transcriptional regulatory networks.

  15. Performance analysis and comparison of an Atkinson cycle coupled to variable temperature heat reservoirs under maximum power and maximum power density conditions

    International Nuclear Information System (INIS)

    Wang, P.-Y.; Hou, S.-S.

    2005-01-01

    In this paper, performance analysis and comparison based on the maximum power and maximum power density conditions have been conducted for an Atkinson cycle coupled to variable temperature heat reservoirs. The Atkinson cycle is internally reversible but externally irreversible, since there is external irreversibility of heat transfer during the processes of constant volume heat addition and constant pressure heat rejection. This study is based purely on classical thermodynamic analysis methodology, and all results and conclusions follow from classical thermodynamics. The power density, defined as the ratio of power output to maximum specific volume in the cycle, is taken as the optimization objective because it considers the effects of engine size as related to investment cost. The results show that an engine design based on maximum power density, with constant effectiveness of the hot- and cold-side heat exchangers or constant inlet temperature ratio of the heat reservoirs, will have smaller size but higher efficiency, compression ratio, expansion ratio and maximum temperature than one based on maximum power. From the viewpoints of engine size and thermal efficiency, an engine design based on maximum power density is better than one based on maximum power conditions. However, due to the higher compression ratio and maximum temperature in the cycle, an engine design based on maximum power density conditions requires tougher materials for engine construction than one based on maximum power conditions.

  16. Uniform coverings of 2-paths with 4-cycles

    Directory of Open Access Journals (Sweden)

    Midori Kobayashi

    2015-07-01

    Full Text Available Let G be a graph [a digraph] and H be a subgraph of G. A D(G,H,λ) design is a multiset D of subgraphs of G each isomorphic to H so that every 2-path [directed 2-path] of G lies in exactly λ subgraphs in D. In this paper, we show that there exists a D(Kn,n,C4,λ) design if and only if (i) n is even, or (ii) n is odd and λ is even. We also show that there exists a D(Kn,n∗,C⃗4,λ) design for every n and λ, where Kn,n and Kn,n∗ are the complete bipartite graph and the complete bipartite digraph, respectively; C4 and C⃗4 are a 4-cycle and a directed 4-cycle, respectively.

  17. Epidemics in networks: a master equation approach

    International Nuclear Information System (INIS)

    Cotacallapa, M; Hase, M O

    2016-01-01

    A problem closely related to epidemiology, where a subgraph of ‘infected’ links is defined inside a larger network, is investigated. This subgraph is generated from the underlying network by a random variable, which decides whether a link is able to propagate a disease/information. The relaxation timescale of this random variable is examined in both annealed and quenched limits, and the effectiveness of propagation of disease/information is analyzed. The dynamics of the model is governed by a master equation and two types of underlying network are considered: one is scale-free and the other has exponential degree distribution. We have shown that the relaxation timescale of the contagion variable has a major influence on the topology of the subgraph of infected links, which determines the efficiency of spreading of disease/information over the network. (paper)

  18. Epidemics in networks: a master equation approach

    Science.gov (United States)

    Cotacallapa, M.; Hase, M. O.

    2016-02-01

    A problem closely related to epidemiology, where a subgraph of ‘infected’ links is defined inside a larger network, is investigated. This subgraph is generated from the underlying network by a random variable, which decides whether a link is able to propagate a disease/information. The relaxation timescale of this random variable is examined in both annealed and quenched limits, and the effectiveness of propagation of disease/information is analyzed. The dynamics of the model is governed by a master equation and two types of underlying network are considered: one is scale-free and the other has exponential degree distribution. We have shown that the relaxation timescale of the contagion variable has a major influence on the topology of the subgraph of infected links, which determines the efficiency of spreading of disease/information over the network.

  19. Marginal Maximum Likelihood Estimation of Item Response Models in R

    Directory of Open Access Journals (Sweden)

    Matthew S. Johnson

    2007-02-01

    Full Text Available Item response theory (IRT) models are a class of statistical models used by researchers to describe the response behaviors of individuals to a set of categorically scored items. The most common IRT models can be classified as generalized linear fixed- and/or mixed-effect models. Although IRT models appear most often in the psychological testing literature, researchers in other fields have successfully utilized IRT-like models in a wide variety of applications. This paper discusses the three major methods of estimation in IRT and develops R functions utilizing the built-in capabilities of the R environment to find the marginal maximum likelihood estimates of the generalized partial credit model. The currently available R package ltm is also discussed.

  20. LCLS Maximum Credible Beam Power

    International Nuclear Information System (INIS)

    Clendenin, J.

    2005-01-01

    The maximum credible beam power is defined as the highest credible average beam power that the accelerator can deliver to the point in question, given the laws of physics, the beam line design, and assuming all protection devices have failed. For a new accelerator project, the official maximum credible beam power is determined by project staff in consultation with the Radiation Physics Department, after examining the arguments and evidence presented by the appropriate accelerator physicist(s) and beam line engineers. The definitive parameter becomes part of the project's safety envelope. This technical note will first review the studies that were done for the Gun Test Facility (GTF) at SSRL, where a photoinjector similar to the one proposed for the LCLS is being tested. In Section 3 the maximum charge out of the gun for a single rf pulse is calculated. In Section 4, PARMELA simulations are used to track the beam from the gun to the end of the photoinjector. Finally in Section 5 the beam through the matching section and injected into Linac-1 is discussed

  1. [Evolutionary process unveiled by the maximum genetic diversity hypothesis].

    Science.gov (United States)

    Huang, Yi-Min; Xia, Meng-Ying; Huang, Shi

    2013-05-01

    As two major popular theories for explaining evolutionary facts, the neutral theory and Neo-Darwinism, despite their proven virtues in certain areas, still fail to offer comprehensive explanations of such fundamental evolutionary phenomena as the genetic equidistance result, abundant overlap sites, the increase in complexity over time, the incomplete understanding of genetic diversity, and inconsistencies with fossil and archaeological records. The maximum genetic diversity hypothesis (MGD), however, constructs a more complete evolutionary genetics theory that incorporates all of the proven virtues of existing theories and adds to them the novel concept of a maximum or optimum limit on genetic distance or diversity. It has yet to meet a contradiction, and it explained for the first time the half-century-old genetic equidistance phenomenon as well as most other major evolutionary facts. It provides practical and quantitative ways of studying complexity. Molecular interpretation using MGD-based methods reveals novel insights on the origins of humans and other primates that are consistent with fossil evidence and common sense, and it reestablishes the important role of China in the evolution of humans. The MGD theory has also uncovered an important genetic mechanism in the construction of complex traits and the pathogenesis of complex diseases. We here made a series of sequence comparisons among yeasts, fishes and primates to illustrate the concept of a limit on genetic distance. The idea of a limit or optimum is in line with the yin-yang paradigm in the traditional Chinese view of the universal creative law in nature.

  2. Bootstrap-based Support of HGT Inferred by Maximum Parsimony

    Directory of Open Access Journals (Sweden)

    Nakhleh Luay

    2010-05-01

    Full Text Available Abstract Background Maximum parsimony is one of the most commonly used criteria for reconstructing phylogenetic trees. Recently, Nakhleh and co-workers extended this criterion to enable reconstruction of phylogenetic networks, and demonstrated its application to detecting reticulate evolutionary relationships. However, one of the major problems with this extension has been that it favors more complex evolutionary relationships over simpler ones, thus having the potential for overestimating the amount of reticulation in the data. An ad hoc solution to this problem that has been used entails inspecting the improvement in the parsimony length as more reticulation events are added to the model, and stopping when the improvement is below a certain threshold. Results In this paper, we address this problem in a more systematic way, by proposing a nonparametric bootstrap-based measure of support of inferred reticulation events, and using it to determine the number of those events, as well as their placements. A number of samples is generated from the given sequence alignment, and reticulation events are inferred based on each sample. Finally, the support of each reticulation event is quantified based on the inferences made over all samples. Conclusions We have implemented our method in the NEPAL software tool (available publicly at http://bioinfo.cs.rice.edu/), and studied its performance on both biological and simulated data sets. While our studies show very promising results, they also highlight issues that are inherently challenging when applying the maximum parsimony criterion to detect reticulate evolution.

  3. Bootstrap-based support of HGT inferred by maximum parsimony.

    Science.gov (United States)

    Park, Hyun Jung; Jin, Guohua; Nakhleh, Luay

    2010-05-05

    Maximum parsimony is one of the most commonly used criteria for reconstructing phylogenetic trees. Recently, Nakhleh and co-workers extended this criterion to enable reconstruction of phylogenetic networks, and demonstrated its application to detecting reticulate evolutionary relationships. However, one of the major problems with this extension has been that it favors more complex evolutionary relationships over simpler ones, thus having the potential for overestimating the amount of reticulation in the data. An ad hoc solution to this problem that has been used entails inspecting the improvement in the parsimony length as more reticulation events are added to the model, and stopping when the improvement is below a certain threshold. In this paper, we address this problem in a more systematic way, by proposing a nonparametric bootstrap-based measure of support of inferred reticulation events, and using it to determine the number of those events, as well as their placements. A number of samples is generated from the given sequence alignment, and reticulation events are inferred based on each sample. Finally, the support of each reticulation event is quantified based on the inferences made over all samples. We have implemented our method in the NEPAL software tool (available publicly at http://bioinfo.cs.rice.edu/), and studied its performance on both biological and simulated data sets. While our studies show very promising results, they also highlight issues that are inherently challenging when applying the maximum parsimony criterion to detect reticulate evolution.
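
    Both records describe the same nonparametric bootstrap scheme: resample alignment columns with replacement, rerun the inference on each pseudo-alignment, and report how often each reticulation event recurs. A generic sketch follows; the infer_reticulations callable is a stand-in for the actual NEPAL inference, which is not reproduced here.

        import random
        from collections import Counter

        def bootstrap_support(alignment, infer_reticulations, n_replicates=100, seed=1):
            """alignment: list of equal-length sequences (rows = taxa).
            infer_reticulations: callable returning a set of hashable event labels."""
            rng = random.Random(seed)
            n_cols = len(alignment[0])
            counts = Counter()
            for _ in range(n_replicates):
                cols = [rng.randrange(n_cols) for _ in range(n_cols)]  # resample columns
                pseudo = ["".join(seq[c] for c in cols) for seq in alignment]
                counts.update(infer_reticulations(pseudo))
            # support = fraction of replicates in which each event was inferred
            return {event: k / n_replicates for event, k in counts.items()}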

  4. 13 CFR 107.840 - Maximum term of Financing.

    Science.gov (United States)

    2010-01-01

    ... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Maximum term of Financing. 107.840... COMPANIES Financing of Small Businesses by Licensees Structuring Licensee's Financing of An Eligible Small Business: Terms and Conditions of Financing § 107.840 Maximum term of Financing. The maximum term of any...

  5. 20 CFR 226.52 - Total annuity subject to maximum.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Total annuity subject to maximum. 226.52... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Railroad Retirement Family Maximum § 226.52 Total annuity subject to maximum. The total annuity amount which is compared to the maximum monthly amount to...

  6. An in silico approach combined with in vivo experiments enables the identification of a new protein whose overexpression can compensate for specific respiratory defects in Saccharomyces cerevisiae

    Directory of Open Access Journals (Sweden)

    Meunier Brigitte

    2011-10-01

    Full Text Available Abstract Background The mitochondrial inner membrane contains five large complexes that are essential for oxidative phosphorylation. Although the structure and the catalytic mechanisms of the respiratory complexes have been progressively established, their biogenesis is far from being fully understood. Very few complex III assembly factors have been identified so far. It is probable that more factors are needed for the assembly of a functional complex, but that the genetic approaches used to date have not been able to identify them. We have developed a systems biology approach to identify new factors controlling complex III biogenesis. Results We collected all the physical protein-protein interactions (PPI involving the core subunits, the supernumerary subunits and the assembly factors of complex III and used Cytoscape 2.6.3 and its plugins to construct a network. It was then divided into overlapping and highly interconnected sub-graphs with clusterONE. One sub-graph contained the core and the supernumerary subunits of complex III; it also contained some subunits of complex IV and proteins participating in the assembly of complex IV. This sub-graph was then split with another algorithm into two sub-graphs. The subtraction of these two sub-graphs from the previous sub-graph allowed us to identify a protein of unknown function, Usb1p/Ylr132p, that interacts with the complex III subunits Qcr2p and Cor1p. We then used genetic and cell biology approaches to investigate the function of Usb1p. Preliminary results indicated that Usb1p is an essential protein with a dual localization in the nucleus and in the mitochondria, and that the over-expression of this protein can compensate for defects in the biogenesis of the respiratory complexes. Conclusions Our systems biology approach has highlighted the multiple associations between subunits and assembly factors of complexes III and IV during their biogenesis. In addition, this approach has allowed the
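
    The subtraction step, removing two known dense sub-clusters from their parent sub-graph and inspecting what remains, can be reproduced with any graph library. A toy sketch with networkx follows; the interaction edges below are invented placeholders, not the actual PPI data.

        import networkx as nx

        # Toy PPI network: complex III subunits, complex IV subunits, one extra node
        ppi = nx.Graph()
        ppi.add_edges_from([
            ("Qcr2p", "Cor1p"), ("Qcr2p", "Cob"), ("Cor1p", "Cob"),   # complex III core
            ("Cox1", "Cox2"), ("Cox2", "Cox3"), ("Cox1", "Cox3"),     # complex IV
            ("Usb1p", "Qcr2p"), ("Usb1p", "Cor1p"),                   # unknown protein
        ])

        cluster_iii = {"Qcr2p", "Cor1p", "Cob"}
        cluster_iv = {"Cox1", "Cox2", "Cox3"}

        # Subtract the two known clusters from the parent sub-graph: what is left
        # over, plus its attachments, points at candidate assembly factors.
        leftover = set(ppi) - cluster_iii - cluster_iv
        for node in leftover:
            print(node, "interacts with", sorted(ppi.neighbors(node)))
        # -> Usb1p interacts with ['Cor1p', 'Qcr2p']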

  7. ANALYSIS OF RELIABILITY OF NONRECTORABLE REDUNDANT POWER SYSTEMS TAKING INTO ACCOUNT COMMON FAILURES

    Directory of Open Access Journals (Sweden)

    V. A. Anischenko

    2014-01-01

    Full Text Available A reliability analysis of nonrestorable redundant power systems of industrial plants and other consumers of electric energy was carried out. The main attention was paid to the influence of common-cause failures, in which all elements of the system fail for one shared reason; the main possible causes of such failures are noted. Two main reliability indicators of nonrestorable systems are considered: the mean time to failure and the mean probability of failure-free operation. Failures were modeled by dividing the investigated system into two subsystems connected in series, one representing independent failures and the other representing common-cause failures. With single and common failures modeled jointly, the resulting failure intensity is the sum of two incompatible components: the intensity of statistically independent failures and the intensity of common-cause failures of the elements and of the system as a whole. The influence of common-cause failures of elements on the mean time to failure of the system is shown. A preference ranking of systems is built according to the criterion of maximum mean time to failure, depending on the share of common-cause failures. It is noted that such common failures do not change the preference ranking itself, but they do change the time intervals that determine the moments of system failures, excluding some systems from comparison. Two problems of conditional optimization of system redundancy are discussed, taking into account reliability and cost. The first problem is solved by the criterion of minimum system cost subject to a required mean probability of failure-free operation; the second by the criterion of maximum mean probability of failure-free operation subject to a cost limit.
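
    A common way to split element failures into independent and common-cause parts is the beta-factor model, in which a fraction beta of the total failure rate is attributed to the shared cause. The abstract does not name this model, so the sketch below is only an illustration of the general idea, for a duplicated (1-out-of-2) nonrestorable system with illustrative rates.

        import math

        def reliability_1oo2(t, lam, beta):
            """Survival probability of a redundant pair under a beta-factor model.
            lam: total failure rate per element; beta: common-cause fraction."""
            lam_i = (1 - beta) * lam   # independent part
            lam_c = beta * lam         # common-cause part (fails both at once)
            r_single = math.exp(-lam_i * t)
            # pair survives if not both elements fail independently AND no common failure
            return (1 - (1 - r_single) ** 2) * math.exp(-lam_c * t)

        for beta in (0.0, 0.05, 0.1):
            print(beta, round(reliability_1oo2(t=1000.0, lam=1e-4, beta=beta), 4))
        # reliability (and hence mean time to failure) drops as the share beta grows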

  8. Maximum inflation of the type 1 error rate when sample size and allocation rate are adapted in a pre-planned interim look.

    Science.gov (United States)

    Graf, Alexandra C; Bauer, Peter

    2011-06-30

    We calculate the maximum type 1 error rate of the pre-planned conventional fixed sample size test for comparing the means of independent normal distributions (with common known variance) which can be yielded when sample size and allocation rate to the treatment arms can be modified in an interim analysis. Thereby it is assumed that the experimenter fully exploits knowledge of the unblinded interim estimates of the treatment effects in order to maximize the conditional type 1 error rate. The 'worst-case' strategies require knowledge of the unknown common treatment effect under the null hypothesis. Although this is a rather hypothetical scenario it may be approached in practice when using a standard control treatment for which precise estimates are available from historical data. The maximum inflation of the type 1 error rate is substantially larger than derived by Proschan and Hunsberger (Biometrics 1995; 51:1315-1324) for design modifications applying balanced samples before and after the interim analysis. Corresponding upper limits for the maximum type 1 error rate are calculated for a number of situations arising from practical considerations (e.g. restricting the maximum sample size, not allowing sample size to decrease, allowing only increase in the sample size in the experimental treatment). The application is discussed for a motivating example. Copyright © 2011 John Wiley & Sons, Ltd.
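
    The mechanism can be reproduced in a stylized one-sample version (not the two-arm, allocation-rate setting of the paper): at the interim the experimenter sees the stage-one data and picks the second-stage size that maximizes the conditional chance that the conventional fixed-sample z-test rejects. Assumptions below: known unit variance, one-sided alpha = 0.025, and an arbitrary grid of candidate second-stage sizes.

        import numpy as np
        from scipy.stats import norm

        alpha = 0.025
        z_a = norm.ppf(1 - alpha)
        n1 = 50
        n2_grid = np.arange(10, 501, 10)

        rng = np.random.default_rng(0)
        s1 = rng.standard_normal(20_000) * np.sqrt(n1)   # stage-one sums under H0

        # conditional probability that the conventional fixed-sample test
        # z = (S1 + S2) / sqrt(n1 + n2) > z_a, given S1, for each candidate n2
        cond = norm.sf((z_a * np.sqrt(n1 + n2_grid)[None, :] - s1[:, None])
                       / np.sqrt(n2_grid)[None, :])
        worst = cond.max(axis=1)     # adversary picks the best n2 per interim outcome
        print("nominal:", alpha, "worst-case type 1 error:", round(worst.mean(), 4))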

  9. High-performance analysis of filtered semantic graphs

    OpenAIRE

    Buluç, A; Fox, A; Gilbert, JR; Kamil, S; Lugowski, A; Oliker, L; Williams, S

    2012-01-01

    High performance is a crucial consideration when executing a complex analytic query on a massive semantic graph. In a semantic graph, vertices and edges carry "attributes" of various types. Analytic queries on semantic graphs typically depend on the values of these attributes; thus, the computation must either view the graph through a filter that passes only those individual vertices and edges of interest, or else must first materialize a subgraph or subgraphs consisting of only the vertices ...

  10. On H-irregularity strengths of G-amalgamation of graphs

    Directory of Open Access Journals (Sweden)

    Faraha Ashraf

    2017-10-01

    Full Text Available A simple graph G = (V(G), E(G)) admits an H-covering if every edge in E(G) belongs to at least one subgraph of G isomorphic to a given graph H. A graph G admitting an H-covering admits an H-irregular total k-labeling f: V(G) ∪ E(G) → {1, 2, ..., k} if for every two different subgraphs H' and H'' isomorphic to H the weights differ, that is, wt_f(H') ≠ wt_f(H'').

  11. Searching via walking: How to find a marked clique of a complete graph using quantum walks

    International Nuclear Information System (INIS)

    Hillery, Mark; Reitzner, Daniel; Buzek, Vladimir

    2010-01-01

    We show how a quantum walk can be used to find a marked edge or a marked complete subgraph of a complete graph. We employ a version of a quantum walk, the scattering walk, which lends itself to experimental implementation. The edges are marked by adding elements to them that impart a specific phase shift to the particle as it enters or leaves the edge. If the complete graph has N vertices and the subgraph has K vertices, the particle becomes localized on the subgraph in O(N/K) steps. This leads to a quantum search that is quadratically faster than a corresponding classical search. We show how to implement the quantum walk using a quantum circuit and a quantum oracle, which allows us to specify the resources needed for a quantitative comparison of the efficiency of classical and quantum searches--the number of oracle calls.

  12. Maximum likely scale estimation

    DEFF Research Database (Denmark)

    Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo

    2005-01-01

    A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or ...

  13. A Boundary Property for Upper Domination

    KAUST Repository

    AbouEisha, Hassan M.

    2016-08-08

    An upper dominating set in a graph is a minimal (with respect to set inclusion) dominating set of maximum cardinality. The problem of finding an upper dominating set is generally NP-hard, but can be solved in polynomial time in some restricted graph classes, such as P4-free graphs or 2K2-free graphs. For classes defined by finitely many forbidden induced subgraphs, the boundary separating difficult instances of the problem from polynomially solvable ones consists of the so-called boundary classes. However, no such class has been identified so far for the upper dominating set problem. In the present paper, we discover the first boundary class for this problem.
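
    For intuition, an upper dominating set can be found by brute force on small graphs: enumerate vertex subsets from largest to smallest and return the first minimal dominating set found. A sketch (exponential time, fine only for toy inputs):

        from itertools import combinations
        import networkx as nx

        def is_dominating(g, s):
            return all(v in s or any(u in s for u in g[v]) for v in g)

        def is_minimal_dominating(g, s):
            # minimal w.r.t. set inclusion: no proper subset still dominates
            return is_dominating(g, s) and all(
                not is_dominating(g, s - {v}) for v in s)

        def upper_dominating_set(g):
            for r in range(len(g), 0, -1):
                for cand in combinations(g, r):
                    if is_minimal_dominating(g, set(cand)):
                        return set(cand)  # first hit at the largest r is the answer

        print(upper_dominating_set(nx.path_graph(5)))  # e.g. {0, 2, 4}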

  14. Subgraph detection using graph signals

    KAUST Repository

    Chepuri, Sundeep Prabhakar

    2017-03-06

    In this paper we develop statistical detection theory for graph signals. In particular, given two graphs, namely, a background graph that represents a usual activity and an alternative graph that represents some unusual activity, we are interested in answering the following question: To which of the two graphs does the observed graph signal fit the best? To begin with, we assume both the graphs are known, and derive an optimal Neyman-Pearson detector. Next, we derive a suboptimal detector for the case when the alternative graph is not known. The developed theory is illustrated with numerical experiments.

  15. Subgraph detection using graph signals

    KAUST Repository

    Chepuri, Sundeep Prabhakar; Leus, Geert

    2017-01-01

    In this paper we develop statistical detection theory for graph signals. In particular, given two graphs, namely, a background graph that represents a usual activity and an alternative graph that represents some unusual activity, we are interested in answering the following question: To which of the two graphs does the observed graph signal fit the best? To begin with, we assume both the graphs are known, and derive an optimal Neyman-Pearson detector. Next, we derive a suboptimal detector for the case when the alternative graph is not known. The developed theory is illustrated with numerical experiments.
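
    When both graphs are known, the Neyman-Pearson detector reduces to a log-likelihood ratio between the two signal models. The sketch below assumes a smooth-signal model in which the signal is zero-mean Gaussian with covariance given by a regularized inverse Laplacian of each candidate graph; this particular model choice is an illustration, not taken from the paper.

        import numpy as np
        import networkx as nx
        from scipy.stats import multivariate_normal

        def covariance(graph, eps=0.1):
            # smooth graph signals: covariance = (L + eps*I)^{-1}
            L = nx.laplacian_matrix(graph).toarray().astype(float)
            return np.linalg.inv(L + eps * np.eye(len(graph)))

        g_background = nx.cycle_graph(8)
        g_alternative = nx.complete_graph(8)

        cov0, cov1 = covariance(g_background), covariance(g_alternative)
        rng = np.random.default_rng(3)
        x = rng.multivariate_normal(np.zeros(8), cov1)   # observe a signal from H1

        # log-likelihood ratio test: positive favors the alternative graph
        llr = (multivariate_normal.logpdf(x, cov=cov1)
               - multivariate_normal.logpdf(x, cov=cov0))
        print("decide:", "alternative" if llr > 0 else "background")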

  16. Performance Comparison of Widely-Used Maximum Power Point Tracker Algorithms under Real Environmental Conditions

    Directory of Open Access Journals (Sweden)

    DURUSU, A.

    2014-08-01

    Full Text Available Maximum power point trackers (MPPTs) play an essential role in extracting power from photovoltaic (PV) panels, as they make the solar panels operate at the maximum power point (MPP) whatever the changes in environmental conditions are. For this reason, they take an important place in the increase of PV system efficiency. MPPTs are driven by MPPT algorithms, and a number of MPPT algorithms have been proposed in the literature. Comparisons of MPPT algorithms in the literature are made with sun-simulator-based test systems under laboratory conditions for short durations. In this study, however, the performances of the four most commonly used MPPT algorithms are compared under real environmental conditions for longer periods. A dual identical experimental setup is designed to compare two of the considered MPPT algorithms at a time, synchronized. As a result of this study, the ranking among these algorithms is presented, and the results show that the Incremental Conductance (IC) algorithm gives the best performance.
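
    The incremental conductance method tracks the MPP by comparing the instantaneous conductance I/V with the incremental conductance dI/dV; at the MPP, dP/dV = 0, which is equivalent to dI/dV = -I/V. A minimal sketch of the update rule (step size and thresholds are illustrative choices):

        def incremental_conductance_step(v, i, v_prev, i_prev, v_ref, step=0.1):
            """One update of the reference voltage toward the maximum power point."""
            dv, di = v - v_prev, i - i_prev
            if abs(dv) < 1e-9:                      # voltage unchanged
                if abs(di) > 1e-9:
                    v_ref += step if di > 0 else -step
            else:
                dconductance = di / dv
                if abs(dconductance + i / v) < 1e-6:
                    pass                            # at the MPP: dI/dV == -I/V
                elif dconductance > -i / v:
                    v_ref += step                   # left of the MPP: increase V
                else:
                    v_ref -= step                   # right of the MPP: decrease V
            return v_ref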

  17. Maximum gravitational redshift of white dwarfs

    International Nuclear Information System (INIS)

    Shapiro, S.L.; Teukolsky, S.A.

    1976-01-01

    The stability of uniformly rotating, cold white dwarfs is examined in the framework of the Parametrized Post-Newtonian (PPN) formalism of Will and Nordtvedt. The maximum central density and gravitational redshift of a white dwarf are determined as functions of five of the nine PPN parameters (γ, β, ζ₂, ζ₃, and ζ₄), the total angular momentum J, and the composition of the star. General relativity predicts that the maximum redshift is 571 km s⁻¹ for nonrotating carbon and helium dwarfs, but is lower for stars composed of heavier nuclei. Uniform rotation can increase the maximum redshift to 647 km s⁻¹ for carbon stars (the neutronization limit) and to 893 km s⁻¹ for helium stars (the uniform rotation limit). The redshift distribution of a larger sample of white dwarfs may help determine the composition of their cores

  18. Revealing the Maximum Strength in Nanotwinned Copper

    DEFF Research Database (Denmark)

    Lu, L.; Chen, X.; Huang, Xiaoxu

    2009-01-01

    boundary–related processes. We investigated the maximum strength of nanotwinned copper samples with different twin thicknesses. We found that the strength increases with decreasing twin thickness, reaching a maximum at 15 nanometers, followed by a softening at smaller values that is accompanied by enhanced...

  19. 49 CFR 195.406 - Maximum operating pressure.

    Science.gov (United States)

    2010-10-01

    ... 49 Transportation 3 2010-10-01 2010-10-01 false Maximum operating pressure. 195.406 Section 195.406 Transportation Other Regulations Relating to Transportation (Continued) PIPELINE AND HAZARDOUS... HAZARDOUS LIQUIDS BY PIPELINE Operation and Maintenance § 195.406 Maximum operating pressure. (a) Except for...

  20. Feedback Halves the Dispersion for Some Two-User Broadcast Channels with Common Message

    DEFF Research Database (Denmark)

    Trillingsgaard, Kasper Fløe; Yang, Wei; Durisi, Giuseppe

    2017-01-01

    We investigate the maximum coding rate achievable on a two-user broadcast channel for the case where a common message is transmitted using fixed-blocklength codes with feedback. Specifically, we focus on a family of broadcast channels composed of two antisymmetric Z-channels. For this setup,...

  1. Maximum spectral demands in the near-fault region

    Science.gov (United States)

    Huang, Yin-Nan; Whittaker, Andrew S.; Luco, Nicolas

    2008-01-01

    The Next Generation Attenuation (NGA) relationships for shallow crustal earthquakes in the western United States predict a rotated geometric mean of horizontal spectral demand, termed GMRotI50, and not maximum spectral demand. Differences between strike-normal, strike-parallel, geometric-mean, and maximum spectral demands in the near-fault region are investigated using 147 pairs of records selected from the NGA strong motion database. The selected records are for earthquakes with moment magnitude greater than 6.5 and for closest site-to-fault distance less than 15 km. Ratios of maximum spectral demand to NGA-predicted GMRotI50 for each pair of ground motions are presented. The ratio shows a clear dependence on period and the Somerville directivity parameters. Maximum demands can substantially exceed NGA-predicted GMRotI50 demands in the near-fault region, which has significant implications for seismic design, seismic performance assessment, and the next-generation seismic design maps. Strike-normal spectral demands are a significantly unconservative surrogate for maximum spectral demands for closest distance greater than 3 to 5 km. Scale factors that transform NGA-predicted GMRotI50 to a maximum spectral demand in the near-fault region are proposed.

  2. Maximum allowable load on wheeled mobile manipulators

    International Nuclear Information System (INIS)

    Habibnejad Korayem, M.; Ghariblu, H.

    2003-01-01

    This paper develops a computational technique for finding the maximum allowable load of a mobile manipulator during a given trajectory. The maximum allowable load which can be achieved by a mobile manipulator during a given trajectory is limited by a number of factors; the dynamic properties of the mobile base and the mounted manipulator, their actuator limitations, and the additional constraints applied to resolving the redundancy are probably the most important factors. To resolve the extra degrees of freedom introduced by the base mobility, additional constraint functions are proposed directly in the task space of the mobile manipulator. Finally, in two numerical examples involving a two-link planar manipulator mounted on a differentially driven mobile base, application of the method to determining the maximum allowable load is verified. The simulation results demonstrate that the maximum allowable load on a desired trajectory does not have a unique value and directly depends on the additional constraint functions which are applied to resolve the motion redundancy

  3. Robust Maximum Association Estimators

    NARCIS (Netherlands)

    A. Alfons (Andreas); C. Croux (Christophe); P. Filzmoser (Peter)

    2017-01-01

    textabstractThe maximum association between two multivariate variables X and Y is defined as the maximal value that a bivariate association measure between one-dimensional projections αX and αY can attain. Taking the Pearson correlation as projection index results in the first canonical correlation

  4. Selection of the Maximum Spatial Cluster Size of the Spatial Scan Statistic by Using the Maximum Clustering Set-Proportion Statistic.

    Science.gov (United States)

    Ma, Yue; Yin, Fei; Zhang, Tao; Zhou, Xiaohua Andrew; Li, Xiaosong

    2016-01-01

    Spatial scan statistics are widely used in various fields. The performance of these statistics is influenced by parameters, such as maximum spatial cluster size, and can be improved by parameter selection using performance measures. Current performance measures are based on the presence of clusters and are thus inapplicable to data sets without known clusters. In this work, we propose a novel overall performance measure called maximum clustering set-proportion (MCS-P), which is based on the likelihood of the union of detected clusters and the applied dataset. MCS-P was compared with existing performance measures in a simulation study to select the maximum spatial cluster size. Results of other performance measures, such as sensitivity and misclassification, suggest that the spatial scan statistic achieves accurate results in most scenarios with the maximum spatial cluster sizes selected using MCS-P. Given that previously known clusters are not required in the proposed strategy, selection of the optimal maximum cluster size with MCS-P can improve the performance of the scan statistic in applications without identified clusters.

  5. Maximum-Likelihood Detection Of Noncoherent CPM

    Science.gov (United States)

    Divsalar, Dariush; Simon, Marvin K.

    1993-01-01

    Simplified detectors proposed for use in maximum-likelihood-sequence detection of symbols in alphabet of size M transmitted by uncoded, full-response continuous phase modulation over radio channel with additive white Gaussian noise. Structures of receivers derived from particular interpretation of maximum-likelihood metrics. Receivers include front ends, structures of which depends only on M, analogous to those in receivers of coherent CPM. Parts of receivers following front ends have structures, complexity of which would depend on N.

  6. Maximum Entropy Fundamentals

    Directory of Open Access Journals (Sweden)

    F. Topsøe

    2001-09-01

    Full Text Available Abstract: In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions and it appears advantageous to bring information theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow-up on here. It leads to the consideration of a certain game, the Code Length Game and, via standard game theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature with hundreds of applications pertaining to several different fields and will also here serve as important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over
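
    For the mean energy model, the maximum entropy distribution over finitely many states with a prescribed mean energy is the Gibbs/Boltzmann family p_i proportional to exp(-beta * E_i), with beta chosen so that the constraint holds. A sketch that solves for beta numerically; the energy levels and target mean are arbitrary examples.

        import numpy as np
        from scipy.optimize import brentq

        E = np.array([0.0, 1.0, 2.0, 4.0])   # energy levels (example values)
        target_mean = 1.2                    # prescribed mean energy

        def mean_energy(beta):
            w = np.exp(-beta * E)
            return float(E @ w / w.sum())

        # mean_energy is decreasing in beta; bracket the root and solve
        beta = brentq(lambda b: mean_energy(b) - target_mean, -50, 50)
        p = np.exp(-beta * E)
        p /= p.sum()
        print("beta:", beta, "distribution:", p, "mean:", p @ E)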

  7. Proposals for common definitions of reference points in gynecological brachytherapy

    International Nuclear Information System (INIS)

    Chassagne, D.; Horiot, J.C.

    1977-01-01

    In May 1975 the report of the European Curietherapy Group made recommendations for computer-based dosimetry in gynecology: use of reference points (a lymphatic trapezoid figure with 6 points, and pelvic-wall points, all points referring to bony structures); use of critical-organ reference points (maximum rectum dose, bladder dose, mean rectal dose); and use of the 6,000 rads reference isodose described by its height, width, and thickness dimensions. These proposals are the basis of a common language in gynecological brachytherapy [fr

  8. 22 CFR 201.67 - Maximum freight charges.

    Science.gov (United States)

    2010-04-01

    ..., commodity rate classification, quantity, vessel flag category (U.S.-or foreign-flag), choice of ports, and... the United States. (2) Maximum charter rates. (i) USAID will not finance ocean freight under any... owner(s). (4) Maximum liner rates. USAID will not finance ocean freight for a cargo liner shipment at a...

  9. Simulated population responses of common carp to commercial exploitation

    Energy Technology Data Exchange (ETDEWEB)

    Weber, Michael J.; Hennen, Matthew J.; Brown, Michael L.

    2011-12-01

    Common carp Cyprinus carpio is a widespread invasive species that can become highly abundant and impose deleterious ecosystem effects. Thus, aquatic resource managers are interested in controlling common carp populations. Control of invasive common carp populations is difficult, due in part to the inherent uncertainty of how populations respond to exploitation. To understand how common carp populations respond to exploitation, we evaluated common carp population dynamics (recruitment, growth, and mortality) in three natural lakes in eastern South Dakota. Common carp exhibited similar population dynamics across these three systems that were characterized by consistent recruitment (ages 3 to 15 years present), fast growth (K = 0.37 to 0.59), and low mortality (A = 1 to 7%). We then modeled the effects of commercial exploitation on size structure, abundance, and egg production to determine its utility as a management tool to control populations. All three populations responded similarly to exploitation simulations with a 575-mm length restriction, representing commercial gear selectivity. Simulated common carp size structure modestly declined (9 to 37%) in all simulations. Abundance of common carp declined dramatically (28 to 56%) at low levels of exploitation (0 to 20%) but exploitation >40% had little additive effect and populations were only reduced by 49 to 79% despite high exploitation (>90%). Maximum lifetime egg production was reduced from 77 to 89% at a moderate level of exploitation (40%), indicating the potential for recruitment overfishing. Exploitation further reduced common carp size structure, abundance, and egg production when simulations were not size selective. Our results provide insights to how common carp populations may respond to exploitation. Although commercial exploitation may be able to partially control populations, an integrated removal approach that removes all sizes of common carp has a greater chance of controlling population abundance
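
    The exploitation simulations can be miniaturized with a simple age-structured model: constant recruitment, von Bertalanffy growth to decide which ages are vulnerable to the 575-mm gear, and survival reduced by exploitation only for vulnerable ages. All parameter values below are illustrative stand-ins, not the fitted South Dakota estimates.

        import numpy as np

        ages = np.arange(1, 16)
        l_inf, k, t0 = 800.0, 0.45, 0.0            # von Bertalanffy (illustrative)
        length = l_inf * (1 - np.exp(-k * (ages - t0)))
        vulnerable = length >= 575.0               # selected by commercial gear

        def equilibrium_abundance(exploitation, natural_mortality=0.05):
            surv = (1 - natural_mortality) * np.where(vulnerable, 1 - exploitation, 1.0)
            n = np.ones_like(ages, dtype=float)    # recruitment scaled to 1 at age 1
            for i in range(1, len(ages)):
                n[i] = n[i - 1] * surv[i - 1]
            return n.sum()

        base = equilibrium_abundance(0.0)
        for u in (0.2, 0.4, 0.9):
            print(u, round(100 * (1 - equilibrium_abundance(u) / base), 1), "% reduction")
        # reductions flatten out as exploitation rises: only large fish are vulnerable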

  10. Determination of the maximum-depth to potential field sources by a maximum structural index method

    Science.gov (United States)

    Fedi, M.; Florio, G.

    2013-01-01

    A simple and fast determination of the limiting depth to the sources may represent a significant help to the data interpretation. To this end we explore the possibility of determining those source parameters shared by all the classes of models fitting the data. One approach is to determine the maximum depth-to-source compatible with the measured data, by using for example the well-known Bott-Smith rules. These rules involve only the knowledge of the field and its horizontal gradient maxima, and are independent from the density contrast. Thanks to the direct relationship between structural index and depth to sources we work out a simple and fast strategy to obtain the maximum depth by using the semi-automated methods, such as Euler deconvolution or depth-from-extreme-points method (DEXP). The proposed method consists in estimating the maximum depth as the one obtained for the highest allowable value of the structural index (Nmax). Nmax may be easily determined, since it depends only on the dimensionality of the problem (2D/3D) and on the nature of the analyzed field (e.g., gravity field or magnetic field). We tested our approach on synthetic models against the results obtained by the classical Bott-Smith formulas and the results are in fact very similar, confirming the validity of this method. However, while Bott-Smith formulas are restricted to the gravity field only, our method is applicable also to the magnetic field and to any derivative of the gravity and magnetic field. Our method yields a useful criterion to assess the source model based on the (∂f/∂x)max/fmax ratio. The usefulness of the method in real cases is demonstrated for a salt wall in the Mississippi basin, where the estimation of the maximum depth agrees with the seismic information.

  11. Decomposing highly edge-connected graphs into homomorphic copies of a fixed tree

    DEFF Research Database (Denmark)

    Merker, Martin

    2016-01-01

    The Tree Decomposition Conjecture by Barát and Thomassen states that for every tree T there exists a natural number k(T) such that the following holds: If G is a k(T)-edge-connected simple graph with size divisible by the size of T, then G can be edge-decomposed into subgraphs isomorphic to T. So far this conjecture has only been verified for paths, stars, and a family of bistars. We prove a weaker version of the Tree Decomposition Conjecture, where we require the subgraphs in the decomposition to be isomorphic to graphs that can be obtained from T by vertex-identifications. We call such a subgraph a homomorphic copy of T. This implies the Tree Decomposition Conjecture under the additional constraint that the girth of G is greater than the diameter of T. As an application, we verify the Tree Decomposition Conjecture for all trees of diameter at most 4.

  12. Classification of urine sediment based on convolution neural network

    Science.gov (United States)

    Pan, Jingjing; Jiang, Cunbo; Zhu, Tiantian

    2018-04-01

    By designing a new convolutional neural network framework, this paper removes the constraints of the original framework, which required large training samples of identical size. The input images are shifted and cropped to generate sub-images of the same size. Dropout is then applied to the generated sub-images, increasing the diversity of the samples and preventing overfitting. Proper subsets of the sub-image set are selected at random such that all subsets have the same number of elements but no two subsets are identical. These subsets serve as input layers for the convolutional neural network. Through the convolution layers, pooling, the fully connected layer, and the output layer, the classification loss rates of the test set and training set are obtained. In an experiment classifying red blood cells, white blood cells, and calcium oxalate crystals, the classification accuracy reached 97% or more.
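
    A minimal sketch of the kind of small classifier described (fixed-size sub-images, three classes, convolution plus pooling plus a fully connected layer) using PyTorch; the layer sizes and 64x64 grayscale input are assumptions, not the paper's architecture.

        import torch
        import torch.nn as nn

        # Three sediment classes: red blood cell, white blood cell, calcium oxalate
        model = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                    # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                    # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 3),         # logits for the three classes
        )

        x = torch.randn(8, 1, 64, 64)           # a batch of 8 grayscale sub-images
        loss = nn.CrossEntropyLoss()(model(x), torch.randint(0, 3, (8,)))
        loss.backward()                          # gradients for one training step
        print(float(loss))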

  13. 40 CFR 141.13 - Maximum contaminant levels for turbidity.

    Science.gov (United States)

    2010-07-01

    ... turbidity. 141.13 Section 141.13 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER... Maximum contaminant levels for turbidity. The maximum contaminant levels for turbidity are applicable to... part. The maximum contaminant levels for turbidity in drinking water, measured at a representative...

  14. Modelling maximum canopy conductance and transpiration in ...

    African Journals Online (AJOL)

    There is much current interest in predicting the maximum amount of water that can be transpired by Eucalyptus trees. It is possible that industrial waste water may be applied as irrigation water to eucalypts and it is important to predict the maximum transpiration rates of these plantations in an attempt to dispose of this ...

  15. 40 CFR 141.62 - Maximum contaminant levels for inorganic contaminants.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Maximum contaminant levels for inorganic contaminants. 141.62 Section 141.62 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY... Water Regulations: Maximum Contaminant Levels and Maximum Residual Disinfectant Levels § 141.62 Maximum...

  16. EFFECTS OF A SAND RUNNING SURFACE ON THE KINEMATICS OF SPRINTING AT MAXIMUM VELOCITY

    Directory of Open Access Journals (Sweden)

    P E Alcaraz

    2011-05-01

    Full Text Available Performing sprints on a sand surface is a common training method for improving sprint-specific strength. For maximum specificity of training, the athlete’s movement patterns during the training exercise should closely resemble those used when performing the sport. The aim of this study was to compare the kinematics of sprinting at maximum velocity on a dry sand surface to the kinematics of sprinting on an athletics track. Five men and five women participated in the study, and flying sprints over 30 m were recorded by video and digitized using biomechanical analysis software. We found that sprinting on a sand surface was substantially different to sprinting on an athletics track. When sprinting on sand the athletes tended to ‘sit’ during the ground contact phase of the stride. This action was characterized by a lower centre of mass, a greater forward lean in the trunk, and an incomplete extension of the hip joint at take-off. We conclude that sprinting on a dry sand surface may not be an appropriate method for training the maximum velocity phase in sprinting. Although this training method exerts a substantial overload on the athlete, as indicated by reductions in running velocity and stride length, it also induces detrimental changes to the athlete’s running technique which may transfer to competition sprinting.

  17. Weak scale from the maximum entropy principle

    Science.gov (United States)

    Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu

    2015-03-01

    The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the radiation of the S³ universe at the final stage, S_rad, becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the SM we can check whether S_rad actually becomes maximum at the observed values. In this paper, we regard S_rad at the final stage as a function of the weak scale (the Higgs expectation value) v_h, and show that it becomes maximum around v_h = O(300 GeV) when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by v_h ∼ T_BBN² / (M_pl y_e⁵), where y_e is the Yukawa coupling of the electron, T_BBN is the temperature at which the Big Bang nucleosynthesis starts, and M_pl is the Planck mass.
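
    As a quick plausibility check of the quoted scaling, one can plug in rough numbers; T_BBN of about 1 MeV and y_e of about 2.9e-6 are order-of-magnitude assumptions, not values taken from the paper.

        # Rough numerical check of v_h ~ T_BBN**2 / (M_pl * y_e**5), all in GeV
        t_bbn = 1e-3          # ~1 MeV, onset of Big Bang nucleosynthesis
        m_pl = 1.22e19        # Planck mass
        y_e = 2.9e-6          # electron Yukawa coupling

        v_h = t_bbn**2 / (m_pl * y_e**5)
        print(f"v_h ~ {v_h:.0f} GeV")   # lands at a few hundred GeV, i.e. O(300 GeV)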

  18. Accurate modeling and maximum power point detection of ...

    African Journals Online (AJOL)

    Accurate modeling and maximum power point detection of photovoltaic ... Determination of MPP enables the PV system to deliver maximum available power. ..... adaptive artificial neural network: Proposition for a new sizing procedure.

  19. 40 CFR 141.61 - Maximum contaminant levels for organic contaminants.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Maximum contaminant levels for organic contaminants. 141.61 Section 141.61 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER... Regulations: Maximum Contaminant Levels and Maximum Residual Disinfectant Levels § 141.61 Maximum contaminant...

  20. The power and robustness of maximum LOD score statistics.

    Science.gov (United States)

    Yoo, Y J; Mendell, N R

    2008-07-01

    The maximum LOD score statistic is extremely powerful for gene mapping when calculated using the correct genetic parameter value. When the mode of genetic transmission is unknown, the maximum of the LOD scores obtained using several genetic parameter values is reported. This latter statistic requires a higher critical value than the maximum LOD score statistic calculated from a single genetic parameter value. In this paper, we compare the power of maximum LOD scores based on three fixed sets of genetic parameter values with the power of the LOD score obtained after maximizing over the entire range of genetic parameter values. We simulate family data under nine generating models. For generating models with non-zero phenocopy rates, LOD scores maximized over the entire range of genetic parameters yielded greater power than maximum LOD scores for fixed sets of parameter values with zero phenocopy rates. No maximum LOD score was consistently more powerful than the others for generating models with a zero phenocopy rate. The power loss of the LOD score maximized over the entire range of genetic parameters, relative to the maximum LOD score calculated using the correct genetic parameter value, appeared to be robust to the generating models.

  1. Maximum Power Training and Plyometrics for Cross-Country Running.

    Science.gov (United States)

    Ebben, William P.

    2001-01-01

    Provides a rationale for maximum power training and plyometrics as conditioning strategies for cross-country runners, examining: an evaluation of training methods (strength training and maximum power training and plyometrics); biomechanic and velocity specificity (role in preventing injury); and practical application of maximum power training and…

  2. The last glacial maximum

    Science.gov (United States)

    Clark, P.U.; Dyke, A.S.; Shakun, J.D.; Carlson, A.E.; Clark, J.; Wohlfarth, B.; Mitrovica, J.X.; Hostetler, S.W.; McCabe, A.M.

    2009-01-01

    We used 5704 ¹⁴C, ¹⁰Be, and ³He ages that span the interval from 10,000 to 50,000 years ago (10 to 50 ka) to constrain the timing of the Last Glacial Maximum (LGM) in terms of global ice-sheet and mountain-glacier extent. Growth of the ice sheets to their maximum positions occurred between 33.0 and 26.5 ka in response to climate forcing from decreases in northern summer insolation, tropical Pacific sea surface temperatures, and atmospheric CO₂. Nearly all ice sheets were at their LGM positions from 26.5 ka to 19 to 20 ka, corresponding to minima in these forcings. The onset of Northern Hemisphere deglaciation 19 to 20 ka was induced by an increase in northern summer insolation, providing the source for an abrupt rise in sea level. The onset of deglaciation of the West Antarctic Ice Sheet occurred between 14 and 15 ka, consistent with evidence that this was the primary source for an abrupt rise in sea level ∼14.5 ka.

  3. Maximum physical capacity testing in cancer patients undergoing chemotherapy

    DEFF Research Database (Denmark)

    Knutsen, L.; Quist, M; Midtgaard, J

    2006-01-01

    BACKGROUND: Over the past few years there has been a growing interest in the field of physical exercise in rehabilitation of cancer patients, leading to requirements for objective maximum physical capacity measurement (maximum oxygen uptake (VO(2max)) and one-repetition maximum (1RM)) to determine ... in performing maximum physical capacity tests as these motivated them through self-perceived competitiveness and set a standard that served to encourage peak performance. CONCLUSION: The positive attitudes in this sample towards maximum physical capacity open the possibility of introducing physical testing early in the treatment process. However, the patients were self-referred and thus highly motivated and as such are not necessarily representative of the whole population of cancer patients treated with chemotherapy.

  4. Networked and Distributed Control Method with Optimal Power Dispatch for Islanded Microgrids

    DEFF Research Database (Denmark)

    Li, Qiang; Peng, Congbo; Chen, Minyou

    2017-01-01

    of controllable agents. The distributed control laws derived from the first subgraph guarantee the supply-demand balance, while further control laws from the second subgraph reassign the outputs of controllable distributed generators, which ensure active and reactive power are dispatched optimally. However...... according to our proposition. Finally, the method is evaluated over seven cases via simulation. The results show that the system performs as desired, even if environmental conditions and load demand fluctuate significantly. In summary, the method can rapidly respond to fluctuations resulting in optimal...

  5. MAXIMUM PRINCIPLE FOR SUBSONIC FLOW WITH VARIABLE ENTROPY

    Directory of Open Access Journals (Sweden)

    B. Sizykh Grigory

    2017-01-01

    Full Text Available The maximum principle for subsonic flow holds for stationary irrotational subsonic gas flows. According to this principle, if the value of the velocity is not constant everywhere, then its maximum is achieved on the boundary, and only on the boundary, of the considered domain. This property is used when designing the form of an aircraft with a maximum critical value of the Mach number: it is believed that if the local Mach number is less than unity in the incoming flow and on the body surface, then the Mach number is less than unity at all points of the flow. The known proof of the maximum principle for subsonic flow is based on the assumption that in the whole considered area of the flow the pressure is a function of density. For an ideal and perfect gas (the role of diffusion is negligible, and the Mendeleev-Clapeyron law is fulfilled), the pressure is a function of density if the entropy is constant in the entire considered area of the flow. An example is shown of a stationary subsonic irrotational flow in which the entropy has different values on different stream lines, and the pressure is not a function of density. Applying the maximum principle for subsonic flow to such a flow would be unjustified. This example shows the relevance of the question of where the points of maximum velocity are located when the entropy is not constant. To clarify the regularities of the location of these points, an analysis of the complete Euler equations (without any simplifying assumptions) was performed in the 3-D case. A new proof of the maximum principle for subsonic flow is proposed, which does not rely on the assumption that the pressure is a function of density. Thus, it is shown that the maximum principle for subsonic flow is true for stationary subsonic irrotational flows of an ideal perfect gas with variable entropy.

  6. Ancient humans influenced the current spatial genetic structure of common walnut populations in Asia

    Science.gov (United States)

    Paola Pollegioni; Keith E. Woeste; Francesca Chiocchini; Stefano Del Lungo; Irene Olimpieri; Virginia Tortolano; Jo Clark; Gabriel E. Hemery; Sergio Mapelli; Maria Emilia Malvolti; Gyaneshwer. Chaubey

    2015-01-01

    Common walnut (Juglans regia L) is an economically important species cultivated worldwide for its wood and nuts. It is generally accepted that J. regia survived and grew spontaneously in almost completely isolated stands in its Asian native range after the Last Glacial Maximum. Despite its natural geographic isolation, J....

  7. Maximum entropy methods

    International Nuclear Information System (INIS)

    Ponman, T.J.

    1984-01-01

    For some years now two different expressions have been in use for maximum entropy image restoration and there has been some controversy over which one is appropriate for a given problem. Here two further entropies are presented and it is argued that there is no single correct algorithm. The properties of the four different methods are compared using simple 1D simulations with a view to showing how they can be used together to gain as much information as possible about the original object. (orig.)

  8. Maximum entropy deconvolution of low count nuclear medicine images

    International Nuclear Information System (INIS)

    McGrath, D.M.

    1998-12-01

    Maximum entropy is applied to the problem of deconvolving nuclear medicine images, with special consideration for very low count data. The physics of the formation of scintigraphic images is described, illustrating the phenomena which degrade planar estimates of the tracer distribution. Various techniques which are used to restore these images are reviewed, outlining the relative merits of each. The development and theoretical justification of maximum entropy as an image processing technique is discussed. Maximum entropy is then applied to the problem of planar deconvolution, highlighting the question of the choice of error parameters for low count data. A novel iterative version of the algorithm is suggested which allows the errors to be estimated from the predicted Poisson mean values. This method is shown to produce the exact results predicted by combining Poisson statistics and a Bayesian interpretation of the maximum entropy approach. A facility for total count preservation has also been incorporated, leading to improved quantification. In order to evaluate this iterative maximum entropy technique, two comparable methods, Wiener filtering and a novel Bayesian maximum likelihood expectation maximisation technique, were implemented. The comparison of results obtained indicated that this maximum entropy approach may produce equivalent or better measures of image quality than the compared methods, depending upon the accuracy of the system model used. The novel Bayesian maximum likelihood expectation maximisation technique was shown to be preferable over many existing maximum a posteriori methods due to its simplicity of implementation. A single parameter is required to define the Bayesian prior, which suppresses noise in the solution and may reduce the processing time substantially. Finally, maximum entropy deconvolution was applied as a pre-processing step in single photon emission computed tomography reconstruction of low count data. Higher contrast results were

  9. Maximum Principles for Discrete and Semidiscrete Reaction-Diffusion Equation

    Directory of Open Access Journals (Sweden)

    Petr Stehlík

    2015-01-01

    Full Text Available We study reaction-diffusion equations with a general reaction function f on one-dimensional lattices with continuous or discrete time, u_x' (or Δ_t u_x) = k(u_{x-1} - 2u_x + u_{x+1}) + f(u_x), x ∈ Z. We prove weak and strong maximum and minimum principles for the corresponding initial-boundary value problems. Whereas the maximum principles in the semidiscrete case (continuous time) exhibit similar features to those of the fully continuous reaction-diffusion model, in the discrete case the weak maximum principle holds for a smaller class of functions and the strong maximum principle is valid in a weaker sense. We describe in detail how the validity of maximum principles depends on the nonlinearity and the time step. We illustrate our results on the Nagumo equation with the bistable nonlinearity.
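
    A small experiment makes the discrete-time weak maximum principle tangible: iterate the lattice Nagumo equation u <- u + τ[k(u_{x-1} - 2u_x + u_{x+1}) + u(1-u)(u-a)] and check that the values stay within [0, 1] for a small enough time step τ. The grid size, k, a, and τ below are illustrative choices.

        import numpy as np

        k, a, tau = 1.0, 0.3, 0.1          # diffusion, bistability threshold, time step
        u = np.random.default_rng(7).random(50)   # initial data in [0, 1]

        def nagumo(u):
            return u * (1 - u) * (u - a)   # bistable nonlinearity with zeros 0, a, 1

        for _ in range(1000):
            lap = np.roll(u, 1) - 2 * u + np.roll(u, -1)   # lattice Laplacian (periodic)
            u = u + tau * (k * lap + nagumo(u))

        # weak maximum principle: the solution never escapes the invariant interval
        print(u.min() >= 0.0 and u.max() <= 1.0)   # True for this tau; fails if tau is large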

  10. Minimal length, Friedmann equations and maximum density

    Energy Technology Data Exchange (ETDEWEB)

    Awad, Adel [Center for Theoretical Physics, British University of Egypt,Sherouk City 11837, P.O. Box 43 (Egypt); Department of Physics, Faculty of Science, Ain Shams University,Cairo, 11566 (Egypt); Ali, Ahmed Farag [Centre for Fundamental Physics, Zewail City of Science and Technology,Sheikh Zayed, 12588, Giza (Egypt); Department of Physics, Faculty of Science, Benha University,Benha, 13518 (Egypt)

    2014-06-16

    Inspired by Jacobson’s thermodynamic approach, Cai et al. have shown the emergence of Friedmann equations from the first law of thermodynamics. We extend the Akbar-Cai derivation http://dx.doi.org/10.1103/PhysRevD.75.084003 of the Friedmann equations to accommodate a general entropy-area law. Studying the resulting Friedmann equations using a specific entropy-area law, which is motivated by the generalized uncertainty principle (GUP), reveals the existence of a maximum energy density close to the Planck density. Allowing for a general continuous pressure p(ρ,a) leads to bounded curvature invariants and a generally nonsingular evolution. In this case, the maximum energy density is reached in a finite time and there is no cosmological evolution beyond this point, which leaves the big bang singularity inaccessible from a spacetime perspective. The existence of a maximum energy density and a generally nonsingular evolution is independent of the equation of state and the spatial curvature k. As an example we study the evolution of the equation of state p=ωρ through its phase-space diagram to show the existence of a maximum energy which is reachable in a finite time.

  11. Maximum-Entropy Inference with a Programmable Annealer

    Science.gov (United States)

    Chancellor, Nicholas; Szoke, Szilard; Vinci, Walter; Aeppli, Gabriel; Warburton, Paul A.

    2016-03-01

    Optimisation problems typically involve finding the ground state (i.e. the minimum energy configuration) of a cost function with respect to many variables. If the variables are corrupted by noise then this maximises the likelihood that the solution is correct. The maximum entropy solution on the other hand takes the form of a Boltzmann distribution over the ground and excited states of the cost function to correct for noise. Here we use a programmable annealer for the information decoding problem which we simulate as a random Ising model in a field. We show experimentally that finite temperature maximum entropy decoding can give slightly better bit-error-rates than the maximum likelihood approach, confirming that useful information can be extracted from the excited states of the annealer. Furthermore we introduce a bit-by-bit analytical method which is agnostic to the specific application and use it to show that the annealer samples from a highly Boltzmann-like distribution. Machines of this kind are therefore candidates for use in a variety of machine learning applications which exploit maximum entropy inference, including language processing and image recognition.

  12. Probabilistic maximum-value wind prediction for offshore environments

    DEFF Research Database (Denmark)

    Staid, Andrea; Pinson, Pierre; Guikema, Seth D.

    2015-01-01

    statistical models to predict the full distribution of the maximum-value wind speeds in a 3 h interval. We take a detailed look at the performance of linear models, generalized additive models and multivariate adaptive regression splines models using meteorological covariates such as gust speed, wind speed......, convective available potential energy, Charnock, mean sea-level pressure and temperature, as given by the European Center for Medium-Range Weather Forecasts forecasts. The models are trained to predict the mean value of maximum wind speed, and the residuals from training the models are used to develop...... the full probabilistic distribution of maximum wind speed. Knowledge of the maximum wind speed for an offshore location within a given period can inform decision-making regarding turbine operations, planned maintenance operations and power grid scheduling in order to improve safety and reliability...

  13. Common sunflower (Helianthus annuus) interference in soybean (Glycine max)

    International Nuclear Information System (INIS)

    Geier, P.W.; Maddux, L.D.; Moshier, L.J.; Stahlman, P.W.

    1996-01-01

    Multiple weed species in the field combine to cause yield losses and can be described using one of several empirical models. Field studies were conducted to compare observed corn yield loss caused by common sunflower and shattercane populations with predicted yield losses modeled using a multiple species rectangular hyperbola model, an additive model, or the yield loss model in the decision support system, WeedSOFT, and to derive competitive indices for common sunflower and shattercane. Common sunflower and shattercane emerged with corn, and selected densities were established in field experiments at Scandia and Rossville, KS, between 2000 and 2002. The multiple species rectangular hyperbola model fit pooled data from three of five location-years with a predicted maximum corn yield loss of 60%. The initial slope parameter estimate for common sunflower was 49.2 and 4.2% for shattercane. A ratio of these estimates indicated that common sunflower was 11 times more competitive than shattercane. When common sunflower was assigned a competitive index (CI) value of 10, the shattercane CI was 0.9. Predicted yield losses modeled for separate common sunflower or shattercane populations were additive when compared with observed yield losses caused by low-density mixed populations of common sunflower (0 to 0.5 plants m⁻²) and shattercane (0 to 4 plants m⁻²). However, a ratio of estimates of these models indicated that common sunflower was only four times as competitive as shattercane, with a CI of 2.5 for shattercane. The yield loss model in WeedSOFT underpredicted the same corn losses by 7.5%. Clearly, both the CI for shattercane and the yield loss model in WeedSOFT need to be reevaluated, and the multiple species rectangular hyperbola model is proposed. (author)
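
    The rectangular hyperbola yield-loss model referenced here is commonly written, following Cousens, as Y_L = I·d / (1 + I·d/A), with a multiple-species form that sums the I·d terms before applying the asymptote. A sketch using the parameter estimates quoted in the abstract (A = 60% maximum loss, initial slopes I = 49.2 for common sunflower and 4.2 for shattercane); whether this exact multiple-species form matches the authors' fitted model is an assumption.

        def yield_loss(densities, slopes, a_max=60.0):
            """Multiple-species rectangular hyperbola (Cousens-style) yield loss, %.
            densities: plants per m^2 per species; slopes: initial slopes I."""
            load = sum(i * d for i, d in zip(slopes, densities))
            return load / (1 + load / a_max)

        # mixed low-density population: 0.5 sunflower + 4 shattercane plants per m^2
        print(round(yield_loss([0.5, 4.0], [49.2, 4.2]), 1), "% predicted corn yield loss")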

  14. Identification of common allergens for united airway disease by skin prick test

    Directory of Open Access Journals (Sweden)

    Vikas Deep Mishra

    2016-01-01

    Full Text Available Objective: Identification of common allergens by skin prick test in patients of united airway disease. Materials and Methods: Skin prick testing was performed in 60 patients of united airway disease to identify the common allergens. A total of 62 allergens consisting of 36 types of pollen, 5 fungi, 4 insects, 8 types of dust, 4 danders, 3 fabrics, dust mite and Parthenium leaves were tested. Result: The most common allergens were dust mite (60%) followed by Parthenium leaves (45%), insects (18.75%), pollen (14.81%), dust allergens (8.51%), fabrics (8.33%), fungi (5.66%), and dander (5%). The most common insect allergens were cockroach (female) (30%) and cockroach (male) (23.33%). Common pollens were Ricinus communis (28.33%), Amaranthus spinosus (28.33%), Parthenium hysterophorus (26.66%), Eucalyptus tereticornis (26.66%) and Cynodon dactylon (25%). Common dust allergens were house dust (21.66%), paper dust (11.66%) and cotton mill dust (10%). Among fabrics, kapok cotton (13.33%) showed maximum positivity. Among fungi, Aspergillus fumigatus (10%) followed by A. niger (6.66%) were most common. In the animal dander group, the common allergens were cat dander followed by dog dander. Conclusion: The knowledge gained from this study will help to treat patients by immunotherapy or an avoidance strategy.

  15. Characterizing graphs of maximum matching width at most 2

    DEFF Research Database (Denmark)

    Jeong, Jisu; Ok, Seongmin; Suh, Geewon

    2017-01-01

The maximum matching width is a width-parameter that is defined on a branch-decomposition over the vertex set of a graph. The size of a maximum matching in the bipartite graph is used as a cut-function. In this paper, we characterize the graphs of maximum matching width at most 2 using the minor o...

  16. An efficient, large-scale, non-lattice-detection algorithm for exhaustive structural auditing of biomedical ontologies.

    Science.gov (United States)

    Zhang, Guo-Qiang; Xing, Guangming; Cui, Licong

    2018-04-01

One of the basic challenges in developing structural methods for systematic auditing of the quality of biomedical ontologies is the computational cost usually involved in exhaustive sub-graph analysis. We introduce ANT-LCA, a new algorithm for computing all non-trivial lowest common ancestors (LCA) of each pair of concepts in the hierarchical order induced by an ontology. The computation of LCAs is a fundamental step in the non-lattice approach to ontology quality assurance. Distinct from existing approaches, ANT-LCA only computes LCAs for non-trivial pairs, those having at least one common ancestor. To skip all trivial pairs that may be of no practical interest, ANT-LCA employs a simple but innovative algorithmic strategy combining topological order and dynamic programming to keep track of non-trivial pairs. We provide correctness proofs and demonstrate a substantial reduction in computational time for the two largest biomedical ontologies: SNOMED CT and the Gene Ontology (GO). ANT-LCA achieved an average computation time of 30 and 3 sec per version for SNOMED CT and GO, respectively, about 2 orders of magnitude faster than the best known approaches. Our algorithm overcomes a fundamental computational barrier in sub-graph based structural analysis of large ontological systems. It enables the implementation of a new breed of structural auditing methods that not only identify potential problematic areas, but also automatically suggest changes to fix the issues. Such structural auditing methods can lead to more effective tools supporting ontology quality assurance work. Copyright © 2018 Elsevier Inc. All rights reserved.
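
ANT-LCA itself is not spelled out in the abstract, but the notions it relies on are easy to illustrate. The sketch below is a naive baseline, not the paper's algorithm: it enumerates non-trivial pairs (pairs with at least one common ancestor) and reports their lowest common ancestors for a small is-a hierarchy.

```python
from itertools import combinations

def all_lcas(parents):
    """parents: dict node -> set of direct parents (is-a edges).
    Returns {(a, b): set of lowest common ancestors} for every
    non-trivial pair, i.e. pairs with at least one common ancestor."""
    anc = {}  # memoized strict-ancestor sets
    def ancestors(v):
        if v not in anc:
            a = set()
            for p in parents[v]:
                a.add(p)
                a |= ancestors(p)
            anc[v] = a
        return anc[v]
    out = {}
    for a, b in combinations(list(parents), 2):
        common = (ancestors(a) | {a}) & (ancestors(b) | {b})
        if not common:
            continue  # trivial pair: skipped, as in ANT-LCA
        # lowest = no other common element lies below it
        lcas = {c for c in common
                if not any(c in ancestors(d) for d in common)}
        out[(a, b)] = lcas
    return out

# Tiny hierarchy: two roots, one shared child
h = {"r1": set(), "r2": set(), "x": {"r1"}, "y": {"r1", "r2"}}
print(all_lcas(h)[("x", "y")])  # {'r1'}
```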

  17. Acute appendicitis: most common clinical presentation and causative microorganism

    International Nuclear Information System (INIS)

    Awan, M.Y.; Shukr, I.; Mahmood, M.A.; Qasmi, S.A.

    2013-01-01

Objective: To determine the most common clinical presentation and causative microorganism for acute appendicitis. Study Design: Descriptive. Place and duration of study: Department of Surgery, Combined Military Hospital Multan, from June 2002 to May 2004. Patients and Methods: Clinical features of all patients older than 5 years of age diagnosed with acute appendicitis were recorded. Patients presenting with other pathologies that mimic acute appendicitis were excluded from the study. Surgery was done under general anaesthesia. Appendices of all the patients, as well as pus swabs from the abdominal cavity, were sent to the laboratory for histopathology and microbiological cultures to confirm the diagnosis of acute appendicitis and the causative organism. Results: The mean age of the 75 subjects was 32.56 +- 11.93 years. The most common symptom was pain in the right iliac fossa (80% of cases) and the most common physical sign was tenderness (92% of cases). Some of the patients (9.3%) had a histologically normal appendix. The most frequent isolate on culture was E. coli. Conclusion: The most common presentation of acute appendicitis was pain in the right iliac fossa, while the most sensitive sign was tenderness. A proper history and sharp clinical examination are the key to diagnosis. The most frequent causative organism was Escherichia coli. (author)

  18. A general formula for computing maximum proportion correct scores in various psychophysical paradigms with arbitrary probability distributions of stimulus observations.

    Science.gov (United States)

    Dai, Huanping; Micheyl, Christophe

    2015-05-01

    Proportion correct (Pc) is a fundamental measure of task performance in psychophysics. The maximum Pc score that can be achieved by an optimal (maximum-likelihood) observer in a given task is of both theoretical and practical importance, because it sets an upper limit on human performance. Within the framework of signal detection theory, analytical solutions for computing the maximum Pc score have been established for several common experimental paradigms under the assumption of Gaussian additive internal noise. However, as the scope of applications of psychophysical signal detection theory expands, the need is growing for psychophysicists to compute maximum Pc scores for situations involving non-Gaussian (internal or stimulus-induced) noise. In this article, we provide a general formula for computing the maximum Pc in various psychophysical experimental paradigms for arbitrary probability distributions of sensory activity. Moreover, easy-to-use MATLAB code implementing the formula is provided. Practical applications of the formula are illustrated, and its accuracy is evaluated, for two paradigms and two types of probability distributions (uniform and Gaussian). The results demonstrate that Pc scores computed using the formula remain accurate even for continuous probability distributions, as long as the conversion from continuous probability density functions to discrete probability mass functions is supported by a sufficiently high sampling resolution. We hope that the exposition in this article, and the freely available MATLAB code, facilitates calculations of maximum performance for a wider range of experimental situations, as well as explorations of the impact of different assumptions concerning internal-noise distributions on maximum performance in psychophysical experiments.
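
The quantity in question is easy to cross-check numerically. The sketch below estimates the maximum Pc of a maximum-likelihood observer in a one-interval, two-class task by Monte Carlo and compares it with the classical Gaussian result Pc = Φ(d'/2); it is an illustration in Python rather than the paper's MATLAB formula.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def max_pc_two_class(pdf0, pdf1, sample0, sample1, n=200_000):
    """Monte Carlo maximum proportion correct for an optimal
    (maximum-likelihood) observer, two equally likely classes."""
    x0, x1 = sample0(n), sample1(n)
    correct0 = pdf0(x0) >= pdf1(x0)   # ML decision on class-0 trials
    correct1 = pdf1(x1) > pdf0(x1)    # ML decision on class-1 trials
    return 0.5 * (correct0.mean() + correct1.mean())

d = 1.0  # separation of two unit-variance Gaussians (d')
pc = max_pc_two_class(lambda x: norm.pdf(x, 0), lambda x: norm.pdf(x, d),
                      lambda n: rng.normal(0, 1, n),
                      lambda n: rng.normal(d, 1, n))
print(pc, norm.cdf(d / 2))  # Monte Carlo vs analytic Phi(d'/2) ~ 0.691
```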

  19. The Maximum Resource Bin Packing Problem

    DEFF Research Database (Denmark)

    Boyar, J.; Epstein, L.; Favrholdt, L.M.

    2006-01-01

    Usually, for bin packing problems, we try to minimize the number of bins used or in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used...... algorithms, First-Fit-Increasing and First-Fit-Decreasing for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find...
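
As an illustration of one of the two algorithms analysed, here is a straightforward First-Fit-Increasing sketch; the paper's competitive-ratio analysis is, of course, not reproduced.

```python
def first_fit_increasing(items, capacity=1.0):
    """First-Fit-Increasing: sort items by size, then place each into
    the first bin it fits in, opening a new bin otherwise.
    Returns the list of bins (each a list of item sizes)."""
    bins = []
    for size in sorted(items):
        for b in bins:
            if sum(b) + size <= capacity:
                b.append(size)
                break
        else:
            bins.append([size])
    return bins

print(len(first_fit_increasing([0.6, 0.5, 0.4, 0.3, 0.2])))  # 3 bins
```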

  20. Maximum entropy analysis of EGRET data

    DEFF Research Database (Denmark)

    Pohl, M.; Strong, A.W.

    1997-01-01

EGRET data are usually analysed on the basis of the Maximum-Likelihood method [ma96] in a search for point sources in excess of a model for the background radiation (e.g. [hu97]). This method depends strongly on the quality of the background model, and thus may have high systematic uncertainties in regions of strong and uncertain background like the Galactic Center region. Here we show images of such regions obtained by the quantified Maximum-Entropy method. We also discuss a possible further use of MEM in the analysis of problematic regions of the sky....

  1. Density estimation by maximum quantum entropy

    International Nuclear Information System (INIS)

    Silver, R.N.; Wallstrom, T.; Martz, H.F.

    1993-01-01

A new Bayesian method for non-parametric density estimation is proposed, based on a mathematical analogy to quantum statistical physics. The mathematical procedure is related to maximum entropy methods for inverse problems and image reconstruction. The information divergence enforces global smoothing toward default models, convexity, positivity, extensivity and normalization. The novel feature is the replacement of classical entropy by quantum entropy, so that local smoothing is enforced by constraints on differential operators. The linear response of the estimate is proportional to the covariance. The hyperparameters are estimated by type-II maximum likelihood (evidence). The method is demonstrated on textbook data sets.

  2. Presenting results as dynamically generated co-authorship subgraphs in semantic digital library collections

    Directory of Open Access Journals (Sweden)

    James Powell

    2012-02-01

    Full Text Available Semantic web representations of data are by definition graphs, and these graphs can be explored using concepts from graph theory. This paper demonstrates how semantically mapped bibliographic metadata, combined with a lightweight software architecture and Web-based graph visualization tools, can be used to generate dynamic authorship graphs in response to typical user queries, as an alternative to more common text-based results presentations. It also shows how centrality measures and path analysis techniques from social network analysis can be used to enhance the visualization of query results. The resulting graphs require modestly more cognitive engagement from the user but offer insights not available from text.
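
A minimal sketch of the pipeline described above, using networkx and an invented three-record query result: build a weighted co-authorship graph, then apply a centrality measure and the kind of path analysis used to enhance the visualization.

```python
import networkx as nx
from itertools import combinations

# Hypothetical query result: each record carries an author list
records = [["Alice", "Bob"], ["Bob", "Carol"], ["Alice", "Bob", "Dave"]]

G = nx.Graph()
for authors in records:
    for a, b in combinations(authors, 2):
        # edge weight counts the number of co-authored records
        w = G.get_edge_data(a, b, {"weight": 0})["weight"]
        G.add_edge(a, b, weight=w + 1)

# Centrality and path analysis from social network analysis
print(nx.betweenness_centrality(G))
print(nx.shortest_path(G, "Alice", "Carol"))  # ['Alice', 'Bob', 'Carol']
```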

  3. ReactionMap: an efficient atom-mapping algorithm for chemical reactions.

    Science.gov (United States)

    Fooshee, David; Andronico, Alessio; Baldi, Pierre

    2013-11-25

Large databases of chemical reactions provide new data-mining opportunities and challenges. Key challenges result from the imperfect quality of the data and the fact that many of these reactions are not properly balanced or atom-mapped. Here, we describe ReactionMap, an efficient atom-mapping algorithm. Our approach uses a combination of maximum common chemical subgraph search and minimization of an assignment cost function derived empirically from training data. We use a set of over 259,000 balanced atom-mapped reactions from the SPRESI commercial database to train the system, and we validate it on random sets of 1000 and 17,996 reactions sampled from this pool. These large test sets represent a broad range of chemical reaction types, and ReactionMap correctly maps about 99% of the atoms and about 96% of the reactions, with a mean time per mapping of 2 s. Most correctly mapped reactions are mapped with high confidence. Mapping accuracy compares favorably with ChemAxon's AutoMapper, versions 5 and 6.1, and the DREAM Web tool. These approaches correctly map 60.7%, 86.5%, and 90.3% of the reactions, respectively, on the same data set. A ReactionMap server is available on the ChemDB Web portal at http://cdb.ics.uci.edu.
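
The maximum-common-subgraph stage can be illustrated with RDKit's FindMCS; this is not the ReactionMap code, and the empirically trained assignment cost function is not reproduced.

```python
from rdkit import Chem
from rdkit.Chem import rdFMCS

reactant = Chem.MolFromSmiles("CCO")   # ethanol
product = Chem.MolFromSmiles("CC=O")   # acetaldehyde

# Maximum common substructure between reactant and product
mcs = rdFMCS.FindMCS([reactant, product])
print(mcs.numAtoms, mcs.smartsString)

# Atoms matched by the MCS give a partial atom mapping; unmatched
# atoms would remain to be assigned by a cost function.
core = Chem.MolFromSmarts(mcs.smartsString)
print(reactant.GetSubstructMatch(core), product.GetSubstructMatch(core))
```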

  4. Maximum Acceptable Weight of Lift reflects peak lumbosacral extension moments in a Functional Capacity Evaluation test using free style, stoop, and squat lifting

    NARCIS (Netherlands)

    Kuijer, P.P.F.M.; van Oostrom, S.H.; Duijzer, K.; van Dieen, J.H.

    2012-01-01

    It is unclear whether the maximum acceptable weight of lift (MAWL), a common psychophysical method, reflects joint kinetics when different lifting techniques are employed. In a within-participants study (n = 12), participants performed three lifting techniques - free style, stoop and squat lifting

  5. Maximum acceptable weight of lift reflects peak lumbosacral extension moments in a functional capacity evaluation test using free style, stoop and squat lifting

    NARCIS (Netherlands)

    Kuijer, P. P. F. M.; van Oostrom, S. H.; Duijzer, K.; van Dieën, J. H.

    2012-01-01

    It is unclear whether the maximum acceptable weight of lift (MAWL), a common psychophysical method, reflects joint kinetics when different lifting techniques are employed. In a within-participants study (n = 12), participants performed three lifting techniques - free style, stoop and squat lifting

  6. The Partial Mapping of the Web Graph

    Directory of Open Access Journals (Sweden)

    Kristina Machova

    2009-06-01

Full Text Available The paper presents an approach to partial mapping of a web sub-graph. This sub-graph contains the nearest surroundings of an actual web page. Our work deals with acquiring relevant hyperlinks of a base web site, generation of the adjacency matrix, the nearest distance matrix and the matrix of converted distances of hyperlinks, detection of the compactness of the web representation, and visualization of its graphical representation. The paper introduces the LWP algorithm, a technique for hyperlink filtration. This work attempts to help users with orientation within the web graph.

  7. Planar graphs theory and algorithms

    CERN Document Server

    Nishizeki, T

    1988-01-01

Collected in this volume are most of the important theorems and algorithms currently known for planar graphs, together with constructive proofs for the theorems. Many of the algorithms are written in Pidgin PASCAL, and are the best-known ones; the complexities are linear or O(n log n). The first two chapters provide the foundations of graph theoretic notions and algorithmic techniques. The remaining chapters discuss the topics of planarity testing, embedding, drawing, vertex- or edge-coloring, maximum independent set, subgraph listing, the planar separator theorem, Hamiltonian cycles, and single- or multicommodity flows. Suitable for a course on algorithms, graph theory, or planar graphs, the volume will also be useful for computer scientists and graph theorists at the research level. An extensive reference section is included.

  8. Detecting Network Vulnerabilities Through Graph Theoretical Methods

    Energy Technology Data Exchange (ETDEWEB)

    Cesarz, Patrick; Pomann, Gina-Maria; Torre, Luis de la; Villarosa, Greta; Flournoy, Tamara; Pinar, Ali; Meza, Juan

    2007-09-30

    Identifying vulnerabilities in power networks is an important problem, as even a small number of vulnerable connections can cause billions of dollars in damage to a network. In this paper, we investigate a graph theoretical formulation for identifying vulnerabilities of a network. We first try to find the most critical components in a network by finding an optimal solution for each possible cutsize constraint for the relaxed version of the inhibiting bisection problem, which aims to find loosely coupled subgraphs with significant demand/supply mismatch. Then we investigate finding critical components by finding a flow assignment that minimizes the maximum among flow assignments on all edges. We also report experiments on IEEE 30, IEEE 118, and WSCC 179 benchmark power networks.
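
The inhibiting-bisection formulation is not spelled out in the abstract; as a generic illustration of finding loosely coupled subgraphs, the sketch below applies plain spectral bisection via the Fiedler vector to a toy network whose single bridging edge is the obvious vulnerability.

```python
import networkx as nx
import numpy as np

def spectral_bisect(G):
    """Split G into two loosely coupled halves using the Fiedler
    vector (eigenvector of the 2nd smallest Laplacian eigenvalue)."""
    nodes = list(G)
    L = nx.laplacian_matrix(G, nodelist=nodes).toarray().astype(float)
    vals, vecs = np.linalg.eigh(L)
    fiedler = vecs[:, 1]
    side_a = {n for n, v in zip(nodes, fiedler) if v >= 0}
    return side_a, set(nodes) - side_a

# Two cliques joined by a single (vulnerable) edge
G = nx.union(nx.complete_graph(4), nx.complete_graph(range(4, 8)))
G.add_edge(0, 4)
a, b = spectral_bisect(G)
print(sorted(a), sorted(b))
print(list(nx.edge_boundary(G, a)))  # the critical connection
```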

  9. Analogue of Pontryagin's maximum principle for multiple integrals minimization problems

    OpenAIRE

    Zelikin, Mikhail

    2016-01-01

A theorem analogous to Pontryagin's maximum principle is proved for multiple integrals. Unlike the usual maximum principle, the maximum is taken not over all matrices, but only over matrices of rank one. Examples are given.

  10. Microprocessor Controlled Maximum Power Point Tracker for Photovoltaic Application

    International Nuclear Information System (INIS)

    Jiya, J. D.; Tahirou, G.

    2002-01-01

This paper presents a microprocessor controlled maximum power point tracker for a photovoltaic module. Input current and voltage are measured and multiplied within the microprocessor, which contains an algorithm to seek the maximum power point. The duty cycle of the DC-DC converter at which the maximum power occurs is obtained, noted and adjusted. The microprocessor constantly seeks to improve the obtained power by varying the duty cycle.
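
The abstract does not name the seeking algorithm; a common choice for this kind of duty-cycle hill climbing is perturb-and-observe, sketched here with a hypothetical read_power() measurement callback.

```python
def perturb_and_observe(read_power, duty=0.5, step=0.01, iters=100):
    """Hill-climbing MPPT: perturb the DC-DC duty cycle and keep the
    direction that increases measured PV power. read_power(duty) is a
    hypothetical measurement of module power at that duty cycle."""
    p_prev = read_power(duty)
    direction = 1
    for _ in range(iters):
        duty = min(max(duty + direction * step, 0.0), 1.0)
        p = read_power(duty)
        if p < p_prev:          # got worse: reverse the perturbation
            direction = -direction
        p_prev = p
    return duty

# Toy power curve with its maximum at duty = 0.63; P&O settles near it,
# oscillating by one step, which is characteristic of the method.
print(round(perturb_and_observe(lambda d: 1 - (d - 0.63) ** 2), 2))
```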

  11. Maximum entropy reconstructions for crystallographic imaging; Cristallographie et reconstruction d'images par maximum d'entropie

    Energy Technology Data Exchange (ETDEWEB)

    Papoular, R

    1997-07-01

The Fourier Transform is of central importance to Crystallography since it allows the visualization in real space of three-dimensional scattering densities pertaining to physical systems from diffraction data (powder or single-crystal diffraction, using x-rays, neutrons or electrons). In turn, this visualization makes it possible to model and parametrize these systems, the crystal structures of which are eventually refined by Least-Squares techniques (e.g., the Rietveld method in the case of Powder Diffraction). The Maximum Entropy Method (sometimes called MEM or MaxEnt) is a general imaging technique, related to solving ill-conditioned inverse problems. It is ideally suited for tackling underdetermined systems of linear equations (for which the number of variables is much larger than the number of equations). It is already being applied successfully in Astronomy, Radioastronomy and Medical Imaging. The advantages of using Maximum Entropy over conventional Fourier and 'difference Fourier' syntheses stem from the following facts: MaxEnt takes the experimental error bars into account; MaxEnt incorporates prior knowledge (e.g., the positivity of the scattering density in some instances); MaxEnt allows density reconstructions from incompletely phased data, as well as from overlapping Bragg reflections; MaxEnt substantially reduces truncation errors to which conventional experimental Fourier reconstructions are usually prone. The principles of Maximum Entropy imaging as applied to Crystallography are first presented. The method is then illustrated by a detailed example specific to Neutron Diffraction: the search for protons in solids. (author). 17 refs.

  12. Comprehensive performance analyses and optimization of the irreversible thermodynamic cycle engines (TCE) under maximum power (MP) and maximum power density (MPD) conditions

    International Nuclear Information System (INIS)

    Gonca, Guven; Sahin, Bahri; Ust, Yasin; Parlak, Adnan

    2015-01-01

This paper presents comprehensive performance analyses and comparisons for air-standard irreversible thermodynamic cycle engines (TCE) based on the power output, power density, thermal efficiency, maximum dimensionless power output (MP), maximum dimensionless power density (MPD) and maximum thermal efficiency (MEF) criteria. Internal irreversibility of the cycles, occurring during the irreversible-adiabatic processes, is considered by using isentropic efficiencies of the compression and expansion processes. The performances of the cycles are obtained by using engine design parameters such as the isentropic temperature ratio of the compression process, pressure ratio, stroke ratio, cut-off ratio, Miller cycle ratio, exhaust temperature ratio, cycle temperature ratio and cycle pressure ratio. The effects of engine design parameters on the maximum and optimal performances are investigated. - Highlights: • Performance analyses are conducted for irreversible thermodynamic cycle engines. • Comprehensive computations are performed. • Maximum and optimum performances of the engines are shown. • The effects of design parameters on performance and power density are examined. • The results obtained may serve as guidelines for engine designers

  13. Shower maximum detector for SDC calorimetry

    International Nuclear Information System (INIS)

    Ernwein, J.

    1994-01-01

A prototype for the SDC end-cap (EM) calorimeter complete with a pre-shower and a shower maximum detector was tested in beams of electrons and π's at CERN by an SDC subsystem group. The prototype was manufactured from scintillator tiles and strips read out with 1 mm diameter wave-length shifting fibers. The design and construction of the shower maximum detector are described, and results of laboratory tests on the light yield and performance of the scintillator-fiber system are given. Preliminary results on energy and position measurements with the shower maximum detector in the test beam are shown. (authors). 4 refs., 5 figs

  14. Maximum Power Point Tracking in Variable Speed Wind Turbine Based on Permanent Magnet Synchronous Generator Using Maximum Torque Sliding Mode Control Strategy

    Institute of Scientific and Technical Information of China (English)

    Esmaeil Ghaderi; Hossein Tohidi; Behnam Khosrozadeh

    2017-01-01

The present study was carried out in order to track the maximum power point in a variable speed turbine by minimizing electromechanical torque changes using a sliding mode control strategy. In this strategy, first, the rotor speed is set at an optimal point for different wind speeds, as a result of which the tip speed ratio reaches an optimal point, the mechanical power coefficient is maximized, and the wind turbine produces its maximum power and mechanical torque. Then, the maximum mechanical torque is tracked using electromechanical torque. In this technique, the tracking error integral of maximum mechanical torque, the error, and the derivative of the error are used as state variables. During changes in wind speed, sliding mode control is designed to absorb the maximum energy from the wind and minimize the response time of maximum power point tracking (MPPT). In this method, the actual control input signal is formed from a second-order integral operation of the original sliding mode control input signal. The result of the second-order integral in this model includes control signal integrity, full chattering attenuation, and prevention of large fluctuations in the generator power output. The simulation results, obtained using MATLAB/m-file software, show the effectiveness of the proposed control strategy for wind energy systems based on the permanent magnet synchronous generator (PMSG).

  15. 32 CFR 842.35 - Depreciation and maximum allowances.

    Science.gov (United States)

    2010-07-01

32 CFR 842.35, Litigation, Administrative Claims, Personnel Claims (31 U.S.C. 3701, 3721), § 842.35 Depreciation and maximum allowances. The military services have jointly established the “Allowance List-Depreciation Guide” to...

  16. Probabilistic Performance Guarantees for Distributed Self-Assembly

    KAUST Repository

    Fox, Michael J.

    2015-04-01

    In distributed self-assembly, a multitude of agents seek to form copies of a particular structure, modeled here as a labeled graph. In the model, agents encounter each other in spontaneous pairwise interactions and decide whether or not to form or sever edges based on their two labels and a fixed set of local interaction rules described by a graph grammar. The objective is to converge on a graph with a maximum number of copies of a given target graph. Our main result is the introduction of a simple algorithm that achieves an asymptotically maximum yield in a probabilistic sense. Notably, agents do not need to update their labels except when forming or severing edges. This contrasts with certain existing approaches that exploit information propagating rules, effectively addressing the decision problem at the level of subgraphs as opposed to individual vertices. We are able to obey more stringent locality requirements while also providing smaller rule sets. The results can be improved upon if certain requirements on the labels are relaxed. We discuss limits of performance in self-assembly in terms of rule set characteristics and achievable maximum yield.

  17. Maximum Entropy, Word-Frequency, Chinese Characters, and Multiple Meanings

    Science.gov (United States)

    Yan, Xiaoyong; Minnhagen, Petter

    2015-01-01

    The word-frequency distribution of a text written by an author is well accounted for by a maximum entropy distribution, the RGF (random group formation)-prediction. The RGF-distribution is completely determined by the a priori values of the total number of words in the text (M), the number of distinct words (N) and the number of repetitions of the most common word (kmax). It is here shown that this maximum entropy prediction also describes a text written in Chinese characters. In particular it is shown that although the same Chinese text written in words and Chinese characters have quite differently shaped distributions, they are nevertheless both well predicted by their respective three a priori characteristic values. It is pointed out that this is analogous to the change in the shape of the distribution when translating a given text to another language. Another consequence of the RGF-prediction is that taking a part of a long text will change the input parameters (M, N, kmax) and consequently also the shape of the frequency distribution. This is explicitly confirmed for texts written in Chinese characters. Since the RGF-prediction has no system-specific information beyond the three a priori values (M, N, kmax), any specific language characteristic has to be sought in systematic deviations from the RGF-prediction and the measured frequencies. One such systematic deviation is identified and, through a statistical information theoretical argument and an extended RGF-model, it is proposed that this deviation is caused by multiple meanings of Chinese characters. The effect is stronger for Chinese characters than for Chinese words. The relation between Zipf’s law, the Simon-model for texts and the present results are discussed. PMID:25955175

  18. 40 CFR 141.63 - Maximum contaminant levels (MCLs) for microbiological contaminants.

    Science.gov (United States)

    2010-07-01

40 CFR 141.63, Protection of Environment, Environmental Protection Agency... Water Regulations: Maximum Contaminant Levels and Maximum Residual Disinfectant Levels, § 141.63 Maximum...

  19. Topological structure of dictionary graphs

    International Nuclear Information System (INIS)

    Fuks, Henryk; Krzeminski, Mark

    2009-01-01

We investigate the topological structure of the subgraphs of dictionary graphs constructed from WordNet and Moby thesaurus data. In the process of learning a foreign language, the learner knows only a subset of all words of the language, corresponding to a subgraph of a dictionary graph. When this subgraph grows with time, its topological properties change. We introduce the notion of the pseudocore and argue that the growth of the vocabulary roughly follows decreasing pseudocore numbers; that is, one first learns words with a high pseudocore number, followed by smaller pseudocores. We also propose an alternative strategy for vocabulary growth, involving decreasing core numbers as opposed to pseudocore numbers. We find that as the core or pseudocore grows in size, the clustering coefficient first decreases, then reaches a minimum and starts increasing again. The minimum occurs when the vocabulary reaches a size between 10³ and 10⁴. A simple model exhibiting similar behavior is proposed. The model is based on a generalized geometric random graph. Possible implications for language learning are discussed.
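
The core-number variant of the growth strategy is simple to reproduce with networkx; the sketch below uses a scale-free toy graph as a stand-in for a dictionary graph (the pseudocore, which is specific to the paper, is not implemented).

```python
import networkx as nx

# Toy stand-in for a dictionary graph; WordNet/Moby data would be
# loaded here in practice.
G = nx.barabasi_albert_graph(2000, 3, seed=1)

core = nx.core_number(G)  # core number of every vertex
for k in sorted(set(core.values()), reverse=True):
    # "vocabulary" grown by decreasing core number
    sub = G.subgraph(n for n, c in core.items() if c >= k)
    print(k, sub.number_of_nodes(), round(nx.average_clustering(sub), 3))
```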

  20. Particle Swarm Optimization Based of the Maximum Photovoltaic ...

    African Journals Online (AJOL)

Photovoltaic electricity is seen as an important source of renewable energy. The photovoltaic array is an unstable source of power since the peak power point depends on the temperature and the irradiation level. Maximum power point tracking is therefore necessary for maximum efficiency. In this work, a Particle Swarm ...

  1. 40 CFR 1045.140 - What is my engine's maximum engine power?

    Science.gov (United States)

    2010-07-01

    ...) Maximum engine power for an engine family is generally the weighted average value of maximum engine power... engine family's maximum engine power apply in the following circumstances: (1) For outboard or personal... value for maximum engine power from all the different configurations within the engine family to...

  2. A Single Phase Doubly Grounded Semi-Z-Source Inverter for Photovoltaic (PV Systems with Maximum Power Point Tracking (MPPT

    Directory of Open Access Journals (Sweden)

    Tofael Ahmed

    2014-06-01

Full Text Available In this paper, a single phase doubly grounded semi-Z-source inverter with maximum power point tracking (MPPT) is proposed for photovoltaic (PV) systems. This proposed system utilizes a single-ended primary inductor (SEPIC) converter as the DC-DC converter to implement the MPPT algorithm for tracking the maximum power from a PV array, and a single phase semi-Z-source inverter for integrating the PV with AC power utilities. The MPPT controller utilizes a fast-converging algorithm to track the maximum power point (MPP), and the semi-Z-source inverter utilizes a nonlinear SPWM to produce sinusoidal voltage at the output. The proposed system is able to track the MPP of PV arrays and produce an AC voltage at its output by utilizing only three switches. Experimental results show that the fast-converging MPPT algorithm has fast tracking response with appreciable MPP efficiency. In addition, the inverter shows the minimization of common mode leakage current with its ground sharing feature and reduction of the THD as well as DC current components at the output during DC-AC conversion.

  3. 42 CFR 409.62 - Lifetime maximum on inpatient psychiatric care.

    Science.gov (United States)

    2010-10-01

42 CFR 409.62 Lifetime maximum on inpatient psychiatric care. There is a lifetime maximum of 190 days on inpatient psychiatric hospital services available to any beneficiary. Therefore, once an individual receives...

  4. Maximum-entropy description of animal movement.

    Science.gov (United States)

    Fleming, Chris H; Subaşı, Yiğit; Calabrese, Justin M

    2015-03-01

    We introduce a class of maximum-entropy states that naturally includes within it all of the major continuous-time stochastic processes that have been applied to animal movement, including Brownian motion, Ornstein-Uhlenbeck motion, integrated Ornstein-Uhlenbeck motion, a recently discovered hybrid of the previous models, and a new model that describes central-place foraging. We are also able to predict a further hierarchy of new models that will emerge as data quality improves to better resolve the underlying continuity of animal movement. Finally, we also show that Langevin equations must obey a fluctuation-dissipation theorem to generate processes that fall from this class of maximum-entropy distributions when the constraints are purely kinematic.
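
One member of this family is easy to simulate. A minimal Euler-Maruyama sketch of one-dimensional Ornstein-Uhlenbeck motion, with parameter values chosen only for illustration:

```python
import numpy as np

def ornstein_uhlenbeck(n, dt=0.1, tau=5.0, sigma=1.0, seed=0):
    """Euler-Maruyama simulation of 1-D Ornstein-Uhlenbeck motion,
    one member of the maximum-entropy family described above:
    dx = -(x / tau) dt + sigma dW."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = (x[i-1] - (x[i-1] / tau) * dt
                + sigma * np.sqrt(dt) * rng.standard_normal())
    return x

track = ornstein_uhlenbeck(10_000)
print(track.var(), 5.0 / 2)  # stationary variance ~ sigma^2 * tau / 2
```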

  5. Extreme Maximum Land Surface Temperatures.

    Science.gov (United States)

    Garratt, J. R.

    1992-09-01

There are numerous reports in the literature of observations of land surface temperatures. Some of these, almost all made in situ, reveal maximum values in the 50°-70°C range, with a few, made in desert regions, near 80°C. Consideration of a simplified form of the surface energy balance equation, utilizing likely upper values of absorbed shortwave flux (1000 W m⁻²) and screen air temperature (55°C), suggests that surface temperatures in the vicinity of 90°-100°C may occur for dry, darkish soils of low thermal conductivity (0.1-0.2 W m⁻¹ K⁻¹). Numerical simulations confirm this and suggest that temperature gradients in the first few centimeters of soil may reach 0.5°-1°C mm⁻¹ under these extreme conditions. The study bears upon the intrinsic interest of identifying extreme maximum temperatures and yields interesting information regarding the comfort zone of animals (including man).
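
The surface energy balance argument can be reproduced numerically. The sketch below solves a deliberately simplified balance (latent and ground heat flux neglected, with an assumed small bulk sensible-heat coefficient for near-calm extreme conditions); the coefficient values are illustrative, not those of the paper.

```python
from scipy.optimize import brentq

SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def surface_temp(absorbed=1000.0, t_air=328.0, h=2.0, emiss=0.95):
    """Solve the simplified surface energy balance
        absorbed = emiss * SIGMA * Ts**4 + h * (Ts - Tair)
    for the surface temperature Ts (K). h is an assumed bulk
    sensible-heat coefficient; latent and ground heat are neglected,
    as for dry soil of very low thermal conductivity."""
    f = lambda ts: emiss * SIGMA * ts**4 + h * (ts - t_air) - absorbed
    return brentq(f, 250.0, 450.0)

print(surface_temp() - 273.15)  # deg C, close to the 90-100 C range
```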

  6. On Maximum Entropy and Inference

    Directory of Open Access Journals (Sweden)

    Luigi Gresele

    2017-11-01

Full Text Available Maximum entropy is a powerful concept that entails a sharp separation between relevant and irrelevant variables. It is typically invoked in inference, once an assumption is made on what the relevant variables are, in order to estimate a model from data that affords predictions on all other (dependent) variables. Conversely, maximum entropy can be invoked to retrieve the relevant variables (sufficient statistics) directly from the data, once a model is identified by Bayesian model selection. We explore this approach in the case of spin models with interactions of arbitrary order, and we discuss how relevant interactions can be inferred. In this perspective, the dimensionality of the inference problem is not set by the number of parameters in the model, but by the frequency distribution of the data. We illustrate the method by showing its ability to recover the correct model in a few prototype cases and discuss its application to a real dataset.

  7. The optimal shape of an object for generating maximum gravity field at a given point in space

    OpenAIRE

    Wang, Xiao-Wei; Su, Yue

    2014-01-01

    How can we design the shape of an object, in the framework of Newtonian gravity, in order to generate maximum gravity at a given point in space? In this work we present a study on this interesting problem. We obtain compact solutions for all dimensional cases. The results are commonly characterized by a simple "physical" feature that any mass element unit on the object surface generates the same gravity strength at the considered point, in the direction along the rotational symmetry axis.

  8. Fractal Dimension and Maximum Sunspot Number in Solar Cycle

    Directory of Open Access Journals (Sweden)

    R.-S. Kim

    2006-09-01

Full Text Available The fractal dimension is a quantitative parameter describing the characteristics of irregular time series. In this study, we use this parameter to analyze the irregular aspects of solar activity and to predict the maximum sunspot number in the following solar cycle by examining time series of the sunspot number. For this, we considered the daily sunspot number since 1850 from SIDC (Solar Influences Data analysis Center) and then estimated the cycle variation of the fractal dimension by using Higuchi's method. We examined the relationship between this fractal dimension and the maximum monthly sunspot number in each solar cycle. As a result, we found that there is a strong inverse relationship between the fractal dimension and the maximum monthly sunspot number. By using this relation we predicted the maximum sunspot number in the solar cycle from the fractal dimension of the sunspot numbers during the solar activity increasing phase. The successful prediction is proven by a good correlation (r=0.89) between the observed and predicted maximum sunspot numbers in the solar cycles.
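
Higuchi's method itself is compact enough to show in full. The sketch below computes the Higuchi fractal dimension of a series; for white noise the estimate should be close to the theoretical value of 2.

```python
import numpy as np

def higuchi_fd(x, kmax=10):
    """Higuchi's fractal dimension of a 1-D time series."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    lk = []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):
            idx = np.arange(m, n, k)       # subsampled series
            if len(idx) < 2:
                continue
            dist = np.abs(np.diff(x[idx])).sum()
            norm = (n - 1) / ((len(idx) - 1) * k)
            lengths.append(dist * norm / k)
        lk.append(np.mean(lengths))
    # L(k) ~ k^(-D), so D is the slope of log L(k) vs log(1/k)
    k_arr = np.arange(1, kmax + 1)
    slope, _ = np.polyfit(np.log(1.0 / k_arr), np.log(lk), 1)
    return slope

rng = np.random.default_rng(0)
print(higuchi_fd(rng.standard_normal(5000)))  # white noise: ~2.0
```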

  9. Automatic maximum entropy spectral reconstruction in NMR

    International Nuclear Information System (INIS)

    Mobli, Mehdi; Maciejewski, Mark W.; Gryk, Michael R.; Hoch, Jeffrey C.

    2007-01-01

    Developments in superconducting magnets, cryogenic probes, isotope labeling strategies, and sophisticated pulse sequences together have enabled the application, in principle, of high-resolution NMR spectroscopy to biomolecular systems approaching 1 megadalton. In practice, however, conventional approaches to NMR that utilize the fast Fourier transform, which require data collected at uniform time intervals, result in prohibitively lengthy data collection times in order to achieve the full resolution afforded by high field magnets. A variety of approaches that involve nonuniform sampling have been proposed, each utilizing a non-Fourier method of spectrum analysis. A very general non-Fourier method that is capable of utilizing data collected using any of the proposed nonuniform sampling strategies is maximum entropy reconstruction. A limiting factor in the adoption of maximum entropy reconstruction in NMR has been the need to specify non-intuitive parameters. Here we describe a fully automated system for maximum entropy reconstruction that requires no user-specified parameters. A web-accessible script generator provides the user interface to the system

  10. Stationary neutrino radiation transport by maximum entropy closure

    International Nuclear Information System (INIS)

    Bludman, S.A.

    1994-11-01

    The authors obtain the angular distributions that maximize the entropy functional for Maxwell-Boltzmann (classical), Bose-Einstein, and Fermi-Dirac radiation. In the low and high occupancy limits, the maximum entropy closure is bounded by previously known variable Eddington factors that depend only on the flux. For intermediate occupancy, the maximum entropy closure depends on both the occupation density and the flux. The Fermi-Dirac maximum entropy variable Eddington factor shows a scale invariance, which leads to a simple, exact analytic closure for fermions. This two-dimensional variable Eddington factor gives results that agree well with exact (Monte Carlo) neutrino transport calculations out of a collapse residue during early phases of hydrostatic neutron star formation

  11. Analysis to determine the maximum dimensions of flexible apertures in sensored security netting products.

    Energy Technology Data Exchange (ETDEWEB)

    Murton, Mark; Bouchier, Francis A.; vanDongen, Dale T.; Mack, Thomas Kimball; Cutler, Robert P; Ross, Michael P.

    2013-08-01

Although technological advances provide new capabilities to increase the robustness of security systems, they also potentially introduce new vulnerabilities. New capability sometimes requires new performance requirements. This paper outlines an approach to establishing a key performance requirement for an emerging intrusion detection sensor: the sensored net. Throughout the security industry, the commonly adopted standard for maximum opening size through barriers is a requirement based on square inches, typically 96 square inches. Unlike a standard rigid opening, the dimensions of a flexible aperture are not fixed, but variable and conformable. It is demonstrably simple for a human intruder to move through a 96-square-inch opening that is conformable to the human body. The longstanding 96-square-inch requirement itself, though firmly embedded in policy and best practice, lacks a documented empirical basis. This analysis concluded that the traditional 96-square-inch standard for openings is insufficient for flexible openings that are conformable to the human body. Instead, a circumference standard is recommended for these newer types of sensored barriers. The recommended maximum circumference for a flexible opening should be no more than 26 inches, as measured on the inside of the netting material.

  12. [Basic laws of blood screw motion in human common carotid arteries].

    Science.gov (United States)

    Kulikov, V P; Kirsanov, R I

    2008-08-01

The basic laws of blood screw motion in the common carotid arteries of humans were determined by means of modern ultrasound techniques for the first time. 92 healthy adults, aged 18-30, were examined. The blood flow in the middle one-third of the common carotid arteries was registered by means of Color Doppler Imaging and impulse Doppler with the help of an ultrasound Medison 8000EX scanner with a linear transducer of 5-9 MHz. Steady registration of blood screw motion in both common carotid arteries in the Color Doppler Imaging regimen was observed in 54.3% of cases. The direction of screw stream rotation was in most cases (54%) differently directed: right in the right common carotid artery and left in the left common carotid artery (48%), and in 6% of cases it was reversed. In 46% of cases blood rotation in both common carotid arteries was one-directed (26% right, 20% left). The velocity parameters of the rotation component of blood motion were determined: maximum velocity 19.68 ± 5.84 cm/sec, minimum 4.57 ± 2.89 cm/sec, average 7.48 ± 2.49 cm/sec, angular 10.7 ± 2.49 s⁻¹. The estimated velocity of blood cell motion along the helical streamlines, relative to the vessel's vertical axis, ranged from 158.67 ± 32.79 to 224.39 ± 46.37 cm/sec.

  13. Regional maximum rainfall analysis using L-moments at the Titicaca Lake drainage, Peru

    Science.gov (United States)

    Fernández-Palomino, Carlos Antonio; Lavado-Casimiro, Waldo Sven

    2017-08-01

    The present study investigates the application of the index flood L-moments-based regional frequency analysis procedure (RFA-LM) to the annual maximum 24-h rainfall (AM) of 33 rainfall gauge stations (RGs) to estimate rainfall quantiles at the Titicaca Lake drainage (TL). The study region was chosen because it is characterised by common floods that affect agricultural production and infrastructure. First, detailed quality analyses and verification of the RFA-LM assumptions were conducted. For this purpose, different tests for outlier verification, homogeneity, stationarity, and serial independence were employed. Then, the application of RFA-LM procedure allowed us to consider the TL as a single, hydrologically homogeneous region, in terms of its maximum rainfall frequency. That is, this region can be modelled by a generalised normal (GNO) distribution, chosen according to the Z test for goodness-of-fit, L-moments (LM) ratio diagram, and an additional evaluation of the precision of the regional growth curve. Due to the low density of RG in the TL, it was important to produce maps of the AM design quantiles estimated using RFA-LM. Therefore, the ordinary Kriging interpolation (OK) technique was used. These maps will be a useful tool for determining the different AM quantiles at any point of interest for hydrologists in the region.
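
The first computational step, sample L-moments via probability-weighted moments, is easy to sketch; the regionalization, homogeneity testing and GNO fitting steps of RFA-LM are not shown. The synthetic annual-maximum series below is illustrative only.

```python
import numpy as np

def sample_lmoments(data):
    """Sample L-moments via unbiased probability-weighted moments
    (b0, b1, b2). Returns mean (l1), L-scale (l2) and the
    L-skewness ratio t3 = l3 / l2."""
    x = np.sort(np.asarray(data, dtype=float))
    n = len(x)
    j = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((j - 1) / (n - 1) * x) / n
    b2 = np.sum((j - 1) * (j - 2) / ((n - 1) * (n - 2)) * x) / n
    l1, l2, l3 = b0, 2 * b1 - b0, 6 * b2 - 6 * b1 + b0
    return l1, l2, l3 / l2

rng = np.random.default_rng(1)
am = rng.gumbel(30.0, 8.0, size=60)   # synthetic AM rainfall series, mm
print([round(v, 3) for v in sample_lmoments(am)])
```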

  14. Maximum power per VA control of vector controlled interior ...

    Indian Academy of Sciences (India)

    Thakur Sumeet Singh

    2018-04-11

Apr 11, 2018 ... Department of Electrical Engineering, Indian Institute of Technology Delhi, New ... The MPVA operation allows maximum utilization of the drive system. ... Permanent magnet motor; unity power factor; maximum VA utilization; ...

  15. Application of maximum entropy to neutron tunneling spectroscopy

    International Nuclear Information System (INIS)

    Mukhopadhyay, R.; Silver, R.N.

    1990-01-01

    We demonstrate the maximum entropy method for the deconvolution of high resolution tunneling data acquired with a quasielastic spectrometer. Given a precise characterization of the instrument resolution function, a maximum entropy analysis of lutidine data obtained with the IRIS spectrometer at ISIS results in an effective factor of three improvement in resolution. 7 refs., 4 figs

  16. Production of rare sugars from common sugars in subcritical aqueous ethanol.

    Science.gov (United States)

    Gao, Da-Ming; Kobayashi, Takashi; Adachi, Shuji

    2015-05-15

A new isomerization reaction was developed to synthesize rare ketoses. D-tagatose, D-xylulose, and D-ribulose were obtained in maximum yields of 24%, 38%, and 40%, respectively, from the corresponding aldoses, D-galactose, D-xylose, and D-ribose, by treating the aldoses with 80% (v/v) subcritical aqueous ethanol at 180°C. The maximum productivity of D-tagatose was ca. 80 g/(L·h). Increasing the concentration of ethanol significantly increased the isomerization of D-galactose. Variation in the reaction temperature did not significantly affect the production of D-tagatose from D-galactose. Subcritical aqueous ethanol converted both 2,3-threo and 2,3-erythro aldoses to the corresponding C-2 ketoses in high yields. Thus, the treatment of common aldoses in subcritical aqueous ethanol can be regarded as a new method to synthesize the corresponding rare sugars. Copyright © 2014 Elsevier Ltd. All rights reserved.

  17. The maximum economic depth of groundwater abstraction for irrigation

    Science.gov (United States)

    Bierkens, M. F.; Van Beek, L. P.; de Graaf, I. E. M.; Gleeson, T. P.

    2017-12-01

    Over recent decades, groundwater has become increasingly important for agriculture. Irrigation accounts for 40% of the global food production and its importance is expected to grow further in the near future. Already, about 70% of the globally abstracted water is used for irrigation, and nearly half of that is pumped groundwater. In many irrigated areas where groundwater is the primary source of irrigation water, groundwater abstraction is larger than recharge and we see massive groundwater head decline in these areas. An important question then is: to what maximum depth can groundwater be pumped for it to be still economically recoverable? The objective of this study is therefore to create a global map of the maximum depth of economically recoverable groundwater when used for irrigation. The maximum economic depth is the maximum depth at which revenues are still larger than pumping costs or the maximum depth at which initial investments become too large compared to yearly revenues. To this end we set up a simple economic model where costs of well drilling and the energy costs of pumping, which are a function of well depth and static head depth respectively, are compared with the revenues obtained for the irrigated crops. Parameters for the cost sub-model are obtained from several US-based studies and applied to other countries based on GDP/capita as an index of labour costs. The revenue sub-model is based on gross irrigation water demand calculated with a global hydrological and water resources model, areal coverage of crop types from MIRCA2000 and FAO-based statistics on crop yield and market price. We applied our method to irrigated areas in the world overlying productive aquifers. Estimated maximum economic depths range between 50 and 500 m. Most important factors explaining the maximum economic depth are the dominant crop type in the area and whether or not initial investments in well infrastructure are limiting. In subsequent research, our estimates of
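
The economic model reduces to comparing depth-dependent costs with revenue. The sketch below is a toy break-even calculation with invented coefficient values (crop revenue per m³, pumping energy cost per m³ per metre of lift, drilling cost per metre), not the study's calibrated sub-models.

```python
def max_economic_depth(revenue_per_m3, demand_m3, energy_cost_per_m3_per_m,
                       drill_cost_per_m, years=20):
    """Deepest water level (m) at which irrigation still breaks even:
    yearly revenue must cover pumping energy plus amortized drilling.
    All coefficient values passed in below are illustrative."""
    for depth in range(1, 2000):
        pumping = energy_cost_per_m3_per_m * depth * demand_m3
        drilling = drill_cost_per_m * depth / years
        if revenue_per_m3 * demand_m3 < pumping + drilling:
            return depth - 1
    return None

# e.g. 0.05 $/m3 crop revenue, 1e5 m3/yr demand,
# 0.0005 $ per m3 per m of lift, 100 $/m drilling cost
print(max_economic_depth(0.05, 1e5, 0.0005, 100.0))  # ~90 m
```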

  18. What controls the maximum magnitude of injection-induced earthquakes?

    Science.gov (United States)

    Eaton, D. W. S.

    2017-12-01

    Three different approaches for estimation of maximum magnitude are considered here, along with their implications for managing risk. The first approach is based on a deterministic limit for seismic moment proposed by McGarr (1976), which was originally designed for application to mining-induced seismicity. This approach has since been reformulated for earthquakes induced by fluid injection (McGarr, 2014). In essence, this method assumes that the upper limit for seismic moment release is constrained by the pressure-induced stress change. A deterministic limit is given by the product of shear modulus and the net injected fluid volume. This method is based on the assumptions that the medium is fully saturated and in a state of incipient failure. An alternative geometrical approach was proposed by Shapiro et al. (2011), who postulated that the rupture area for an induced earthquake falls entirely within the stimulated volume. This assumption reduces the maximum-magnitude problem to one of estimating the largest potential slip surface area within a given stimulated volume. Finally, van der Elst et al. (2016) proposed that the maximum observed magnitude, statistically speaking, is the expected maximum value for a finite sample drawn from an unbounded Gutenberg-Richter distribution. These three models imply different approaches for risk management. The deterministic method proposed by McGarr (2014) implies that a ceiling on the maximum magnitude can be imposed by limiting the net injected volume, whereas the approach developed by Shapiro et al. (2011) implies that the time-dependent maximum magnitude is governed by the spatial size of the microseismic event cloud. Finally, the sample-size hypothesis of Van der Elst et al. (2016) implies that the best available estimate of the maximum magnitude is based upon observed seismicity rate. The latter two approaches suggest that real-time monitoring is essential for effective management of risk. A reliable estimate of maximum
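
The first (deterministic) approach is a one-line calculation. A sketch of the McGarr (2014) bound, converting seismic moment to moment magnitude with the standard Hanks-Kanamori relation:

```python
import math

def mcgarr_max_magnitude(injected_m3, shear_modulus=3e10):
    """Deterministic upper bound of McGarr (2014): maximum seismic
    moment is the product of shear modulus (Pa) and net injected
    volume (m^3). Magnitude via Mw = (2/3)(log10 M0 - 9.1), M0 in N m."""
    m0 = shear_modulus * injected_m3
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

print(round(mcgarr_max_magnitude(1e5), 2))  # 100,000 m3 -> Mw ~ 4.25
```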

  19. Topics in Bayesian statistics and maximum entropy

    International Nuclear Information System (INIS)

    Mutihac, R.; Cicuttin, A.; Cerdeira, A.; Stanciulescu, C.

    1998-12-01

    Notions of Bayesian decision theory and maximum entropy methods are reviewed with particular emphasis on probabilistic inference and Bayesian modeling. The axiomatic approach is considered as the best justification of Bayesian analysis and maximum entropy principle applied in natural sciences. Particular emphasis is put on solving the inverse problem in digital image restoration and Bayesian modeling of neural networks. Further topics addressed briefly include language modeling, neutron scattering, multiuser detection and channel equalization in digital communications, genetic information, and Bayesian court decision-making. (author)

  20. Combining Experiments and Simulations Using the Maximum Entropy Principle

    DEFF Research Database (Denmark)

    Boomsma, Wouter; Ferkinghoff-Borg, Jesper; Lindorff-Larsen, Kresten

    2014-01-01

    are not in quantitative agreement with experimental data. The principle of maximum entropy is a general procedure for constructing probability distributions in the light of new data, making it a natural tool in cases when an initial model provides results that are at odds with experiments. The number of maximum entropy...... in the context of a simple example, after which we proceed with a real-world application in the field of molecular simulations, where the maximum entropy procedure has recently provided new insight. Given the limited accuracy of force fields, macromolecular simulations sometimes produce results....... Three very recent papers have explored this problem using the maximum entropy approach, providing both new theoretical and practical insights to the problem. We highlight each of these contributions in turn and conclude with a discussion on remaining challenges....

  1. Maximum entropy beam diagnostic tomography

    International Nuclear Information System (INIS)

    Mottershead, C.T.

    1985-01-01

    This paper reviews the formalism of maximum entropy beam diagnostic tomography as applied to the Fusion Materials Irradiation Test (FMIT) prototype accelerator. The same formalism has also been used with streak camera data to produce an ultrahigh speed movie of the beam profile of the Experimental Test Accelerator (ETA) at Livermore

  2. Common approach to common interests

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2001-06-01

In referring to issues confronting the energy field in this region and options to be exercised in the future, I would like to mention a fundamental condition of the utmost importance. It can be summed up as follows: no issue in the energy area can be solved by one country alone, given the geographical and geopolitical characteristics intrinsic to energy. A regional approach is therefore needed, and it is especially necessary for the main players in the region to jointly address problems common to them. Though it may be a matter to be pursued in the distant future, I am personally dreaming of a 'Common Energy Market for Northeast Asia,' in which member countries' interests are adjusted so that the market can be integrated and the region can become a most economically efficient market, thus forming an effective power with respect to the outside world. It should be noted that Europe needed forty years to integrate its market as the unified common market. It is likewise necessary for us to follow a number of steps over a long period to eventually materialize our own common market concept. Now is the time for us to take the first step and lay the foundation for our descendants to enjoy prosperity from such a common market.

  3. A Maximum Radius for Habitable Planets.

    Science.gov (United States)

    Alibert, Yann

    2015-09-01

We compute the maximum radius a planet can have in order to fulfill two constraints that are likely necessary conditions for habitability: (1) a surface temperature and pressure compatible with the existence of liquid water, and (2) no ice layer at the bottom of a putative global ocean, which would prevent the geologic carbon cycle from operating. We demonstrate that, above a given radius, these two constraints cannot be met: in the Super-Earth mass range (1-12 Mearth), the overall maximum radius a planet can have varies between 1.8 and 2.3 Rearth. This radius is reduced when considering planets with higher Fe/Si ratios, and taking into account irradiation effects on the structure of the gas envelope.

  4. Polynomial-time computability of the edge-reliability of graphs using Gilbert's formula

    Directory of Open Access Journals (Sweden)

    Thomas J. Marlowe

    1998-01-01

Full Text Available Reliability is an important consideration in analyzing computer and other communication networks, but current techniques are extremely limited in the classes of graphs which can be analyzed efficiently. While Gilbert's formula establishes a theoretically elegant recursive relationship between the edge reliability of a graph and the reliability of its subgraphs, naive evaluation requires consideration of all sequences of deletions of individual vertices, and for many graphs has time complexity essentially Θ(N!). We discuss a general approach which significantly reduces complexity, encoding subgraph isomorphism in a finer partition by invariants, and recursing through the set of invariants.
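
Gilbert's recursion is not reproduced here, but the quantity it targets can be estimated by brute force. A Monte Carlo sketch of all-terminal reliability under independent edge failures, useful as a baseline for small graphs:

```python
import random
import networkx as nx

def edge_reliability_mc(G, p_fail=0.1, trials=10_000, seed=0):
    """Monte Carlo estimate of all-terminal reliability: probability
    the graph stays connected when each edge fails independently.
    (A brute-force baseline, not Gilbert's recursive formula.)"""
    rng = random.Random(seed)
    ok = 0
    edges = list(G.edges)
    for _ in range(trials):
        H = nx.Graph()
        H.add_nodes_from(G)
        H.add_edges_from(e for e in edges if rng.random() > p_fail)
        ok += nx.is_connected(H)
    return ok / trials

# A 5-cycle survives at most one edge failure: exact value ~0.918
print(edge_reliability_mc(nx.cycle_graph(5)))
```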

  5. On semidefinite programming relaxations of maximum k-section

    NARCIS (Netherlands)

    de Klerk, E.; Pasechnik, D.V.; Sotirov, R.; Dobre, C.

    2012-01-01

We derive a new semidefinite programming bound for the maximum k-section problem. For k=2 (i.e. for maximum bisection), the new bound is at least as strong as a well-known bound by Poljak and Rendl (SIAM J Optim 5(3):467–487, 1995). For k ≥ 3, the new bound dominates a bound of Karisch and Rendl

  6. Modeling multisite streamflow dependence with maximum entropy copula

    Science.gov (United States)

    Hao, Z.; Singh, V. P.

    2013-10-01

    Synthetic streamflows at different sites in a river basin are needed for planning, operation, and management of water resources projects. Modeling the temporal and spatial dependence structure of monthly streamflow at different sites is generally required. In this study, the maximum entropy copula method is proposed for multisite monthly streamflow simulation, in which the temporal and spatial dependence structure is imposed as constraints to derive the maximum entropy copula. The monthly streamflows at different sites are then generated by sampling from the conditional distribution. A case study for the generation of monthly streamflow at three sites in the Colorado River basin illustrates the application of the proposed method. Simulated streamflow from the maximum entropy copula is in satisfactory agreement with observed streamflow.

  7. Maximum stellar iron core mass

    Indian Academy of Sciences (India)

Journal of physics, Vol. 60, No. 3, March 2003, pp. 415–422. Maximum stellar iron core mass. F W Giacobbe, Chicago Research Center/American Air Liquide ... iron core compression due to the weight of non-ferrous matter overlying the iron cores within large .... thermal equilibrium velocities will tend to be non-relativistic.

  8. The optimal shape of an object for generating maximum gravity field at a given point in space

    International Nuclear Information System (INIS)

    Wang, Xiao-Wei; Su, Yue

    2015-01-01

    How can we design the shape of an object, in the framework of Newtonian gravity, in order to generate maximum gravity at a given point in space? In this work we present a study on this interesting problem. We obtain compact solutions for all dimensional cases. The results are commonly characterized by a simple ‘physical’ feature that any mass element unit on the object surface generates the same gravity strength at the considered point, in the direction along the rotational symmetry axis. (paper)
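
The 'equal contribution' feature quoted in the abstract pins the 3-D shape down directly; the closed form below is the classical result under that condition and is stated here as an assumption rather than quoted from the paper.

```latex
% Axial pull of a surface mass element dm at distance r and polar
% angle theta from the field point: dg_z = G cos(theta) dm / r^2.
% Requiring dg_z/dm to be the same for every surface element gives
\[
\frac{\cos\theta}{r^{2}} = \text{const}
\quad\Longrightarrow\quad
r(\theta) = R\,\sqrt{\cos\theta}, \qquad 0 \le \theta \le \tfrac{\pi}{2},
\]
% where the scale R is fixed by the prescribed total mass.
```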

  9. Common Courses for Common Purposes:

    DEFF Research Database (Denmark)

    Schaub Jr, Gary John

    2014-01-01

    (PME)? I suggest three alternative paths that increased cooperation in PME at the level of the command and staff course could take: a Nordic Defence College, standardized national command and staff courses, and a core curriculum of common courses for common purposes. I conclude with a discussion of how...

  10. The regulation of starch accumulation in Panicum maximum Jacq ...

    African Journals Online (AJOL)

    ... decrease the starch level. These observations are discussed in relation to the photosynthetic characteristics of P. maximum. Keywords: accumulation; botany; carbon assimilation; co2 fixation; growth conditions; mesophyll; metabolites; nitrogen; nitrogen levels; nitrogen supply; panicum maximum; plant physiology; starch; ...

  11. Maximum entropy beam diagnostic tomography

    International Nuclear Information System (INIS)

    Mottershead, C.T.

    1985-01-01

    This paper reviews the formalism of maximum entropy beam diagnostic tomography as applied to the Fusion Materials Irradiation Test (FMIT) prototype accelerator. The same formalism has also been used with streak camera data to produce an ultrahigh speed movie of the beam profile of the Experimental Test Accelerator (ETA) at Livermore. 11 refs., 4 figs
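
    As a generic illustration of the formalism (not the FMIT implementation), the multiplicative ART iteration below converges, for consistent positive data, to the maximum-entropy image compatible with the measured beam projections.

        import numpy as np

        def mart(A, b, n_iter=50, damping=0.5):
            """A: (m, n) ray/pixel projection matrix; b: (m,) positive
            measured projections.  Multiplicative updates keep the image
            positive; for consistent systems the limit maximises the
            entropy -sum_j x_j log x_j among solutions of A x = b."""
            m, n = A.shape
            x = np.ones(n)
            for _ in range(n_iter):
                for i in range(m):
                    ax = A[i] @ x
                    if ax > 0 and b[i] > 0:
                        # damped multiplicative correction along ray i
                        x *= (b[i] / ax) ** (damping * A[i] / A[i].max())
            return x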

  12. A portable storage maximum thermometer

    International Nuclear Information System (INIS)

    Fayart, Gerard.

    1976-01-01

    A clinical thermometer storing the voltage corresponding to the maximum temperature in an analog memory is described. End of the measurement is shown by a lamp switch out. The measurement time is shortened by means of a low thermal inertia platinum probe. This portable thermometer is fitted with cell test and calibration system [fr

  13. Spatio-temporal observations of the tertiary ozone maximum

    Directory of Open Access Journals (Sweden)

    V. F. Sofieva

    2009-07-01

    We present spatio-temporal distributions of the tertiary ozone maximum (TOM), based on GOMOS (Global Ozone Monitoring by Occultation of Stars) ozone measurements in 2002–2006. The tertiary ozone maximum is typically observed in the high-latitude winter mesosphere at an altitude of ~72 km. Although the explanation for this phenomenon has been found recently – low concentrations of odd-hydrogen cause the subsequent decrease in odd-oxygen losses – models have had significant deviations from existing observations until recently. Good coverage of polar night regions by GOMOS data has allowed us, for the first time, to obtain spatial and temporal observational distributions of night-time ozone mixing ratio in the mesosphere.

    The distributions obtained from GOMOS data have specific features, which are variable from year to year. In particular, due to a long lifetime of ozone in polar night conditions, the downward transport of polar air by the meridional circulation is clearly observed in the tertiary ozone maximum time series. Although the maximum tertiary ozone mixing ratio is achieved close to the polar night terminator (as predicted by the theory), TOM can be observed also at very high latitudes, not only in the beginning and at the end, but also in the middle of winter. We have compared the observational spatio-temporal distributions of the tertiary ozone maximum with that obtained using WACCM (Whole Atmosphere Community Climate Model) and found that the specific features are reproduced satisfactorily by the model.

    Since ozone in the mesosphere is very sensitive to HOx concentrations, energetic particle precipitation can significantly modify the shape of the ozone profiles. In particular, GOMOS observations have shown that the tertiary ozone maximum was temporarily destroyed during the January 2005 and December 2006 solar proton events as a result of the HOx enhancement from the increased ionization.

  14. Nonequilibrium thermodynamics and maximum entropy production in the Earth system: applications and implications.

    Science.gov (United States)

    Kleidon, Axel

    2009-06-01

    The Earth system is maintained in a unique state far from thermodynamic equilibrium, as, for instance, reflected in the high concentration of reactive oxygen in the atmosphere. The myriad of processes that transform energy, that result in the motion of mass in the atmosphere, in oceans, and on land, processes that drive the global water, carbon, and other biogeochemical cycles, all have in common that they are irreversible in their nature. Entropy production is a general consequence of these processes and measures their degree of irreversibility. The proposed principle of maximum entropy production (MEP) states that systems are driven to steady states in which they produce entropy at the maximum possible rate given the prevailing constraints. In this review, the basics of nonequilibrium thermodynamics are described, as well as how these apply to Earth system processes. Applications of the MEP principle are discussed, ranging from the strength of the atmospheric circulation, the hydrological cycle, and biogeochemical cycles to the role that life plays in these processes. Nonequilibrium thermodynamics and the MEP principle have potentially wide-ranging implications for our understanding of Earth system functioning, how it has evolved in the past, and why it is habitable. Entropy production allows us to quantify an objective direction of Earth system change (closer to vs further away from thermodynamic equilibrium, or, equivalently, towards a state of MEP). When a maximum in entropy production is reached, MEP implies that the Earth system reacts to perturbations primarily with negative feedbacks. In conclusion, this nonequilibrium thermodynamic view of the Earth system shows great promise to establish a holistic description of the Earth as one system. This perspective is likely to allow us to better understand and predict its function as one entity, how it has evolved in the past, and how it is modified by human activities in the future.

  15. Fungic microflora of Panicum maximum and Stylosanthes spp. commercial seed

    Directory of Open Access Journals (Sweden)

    Larissa Rodrigues Fabris

    2010-09-01

    The sanitary quality of 26 lots of commercial seeds of tropical forages, produced in different regions (2004-05 and 2005-06), was analyzed. The lots were composed of seeds of Panicum maximum ('Massai', 'Mombaça' and 'Tanzânia') and stylo ('Estilosantes Campo Grande' - ECG). Additionally, seeds of two lots of P. maximum destined for exportation were analyzed. The blotter test was used, at 20ºC under alternating light and darkness in a 12 h photoperiod, for seven days. The genera Aspergillus, Cladosporium and Rhizopus were the secondary or saprophytic fungi (FSS) with the greatest frequency in the P. maximum lots. In general, there was low incidence of these fungi in the seeds. In relation to pathogenic fungi (FP), a high frequency of lots contaminated by the genera Bipolaris, Curvularia, Fusarium and Phoma was detected. Generally, there was high incidence of FP in P. maximum seeds. The occurrence of Phoma sp. was high: 81% of the lots showed incidence above 50%. In 'ECG' seeds, FSS (Aspergillus, Cladosporium and Penicillium) and FP (Bipolaris, Curvularia, Fusarium and Phoma) were detected, usually at low incidence. FSS and FP were associated with the P. maximum seeds destined for exportation, with significant incidence in some cases. The results indicated that sanitary quality of the seeds was a limiting factor in all producer regions.

  16. Maximum Likelihood Method for Predicting Environmental Conditions from Assemblage Composition: The R Package bio.infer

    Directory of Open Access Journals (Sweden)

    Lester L. Yuan

    2007-06-01

    This paper provides a brief introduction to the R package bio.infer, a set of scripts that facilitates the use of maximum likelihood (ML) methods for predicting environmental conditions from assemblage composition. Environmental conditions can often be inferred from only biological data, and these inferences are useful when other sources of data are unavailable. ML prediction methods are statistically rigorous and applicable to a broader set of problems than more commonly used weighted averaging techniques. However, ML methods require a substantially greater investment of time to program algorithms and to perform computations. This package is designed to reduce the effort required to apply ML prediction methods.
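
    The core computation is compact. The sketch below is an illustrative Python analogue (not the package's R code): maximise the Bernoulli likelihood of an observed presence/absence assemblage over a grid of candidate environmental values, given taxon response curves; all names here are made up for the example.

        import numpy as np

        def infer_env(presence, response, grid):
            """presence: (n_taxa,) 0/1 assemblage; response: (n_grid, n_taxa)
            occurrence probabilities along the gradient; returns the grid
            value maximising the log-likelihood."""
            p = np.clip(response, 1e-9, 1 - 1e-9)
            loglik = (presence * np.log(p)
                      + (1 - presence) * np.log(1 - p)).sum(axis=1)
            return grid[np.argmax(loglik)]

        # toy example: Gaussian response curves for three taxa
        grid = np.linspace(0.0, 10.0, 501)
        optima, tol = np.array([2.0, 5.0, 8.0]), 1.5
        curves = 0.9 * np.exp(-0.5 * ((grid[:, None] - optima) / tol) ** 2)
        print(infer_env(np.array([0, 1, 1]), curves, grid))  # ~6.5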

  17. Bearing Fault Detection Based on Maximum Likelihood Estimation and Optimized ANN Using the Bees Algorithm

    Directory of Open Access Journals (Sweden)

    Behrooz Attaran

    2015-01-01

    Rotating machinery is the most common machinery in industry. The root of the faults in rotating machinery is often faulty rolling element bearings. This paper presents a technique using an artificial neural network optimized by the Bees Algorithm for automated diagnosis of localized faults in rolling element bearings. The inputs of this technique are a number of features (maximum likelihood estimation values), which are derived from the vibration signals of test data. The results show that the performance of the proposed optimized system is better than most previous studies, even though it uses only two features. Effectiveness of the above method is illustrated using obtained bearing vibration data.

  18. 78 FR 49370 - Inflation Adjustment of Maximum Forfeiture Penalties

    Science.gov (United States)

    2013-08-14

    ... ``civil monetary penalties provided by law'' at least once every four years. DATES: Effective September 13... increases the maximum civil monetary forfeiture penalties available to the Commission under its rules... maximum civil penalties established in that section to account for inflation since the last adjustment to...

  19. 5 CFR 838.711 - Maximum former spouse survivor annuity.

    Science.gov (United States)

    2010-01-01

    ... 5 Administrative Personnel 2 2010-01-01 2010-01-01 false Maximum former spouse survivor annuity... Orders Awarding Former Spouse Survivor Annuities Limitations on Survivor Annuities § 838.711 Maximum former spouse survivor annuity. (a) Under CSRS, payments under a court order may not exceed the amount...

  20. A tropospheric ozone maximum over the equatorial Southern Indian Ocean

    Directory of Open Access Journals (Sweden)

    L. Zhang

    2012-05-01

    We examine the distribution of tropical tropospheric ozone (O3) from the Microwave Limb Sounder (MLS) and the Tropospheric Emission Spectrometer (TES) by using a global three-dimensional model of tropospheric chemistry (GEOS-Chem). MLS and TES observations of tropospheric O3 during 2005 to 2009 reveal a distinct, persistent O3 maximum, both in mixing ratio and tropospheric column, in May over the Equatorial Southern Indian Ocean (ESIO). The maximum is most pronounced in 2006 and 2008 and less evident in the other three years. This feature is also consistent with the total column O3 observations from the Ozone Mapping Instrument (OMI) and the Atmospheric Infrared Sounder (AIRS). Model results reproduce the observed May O3 maximum and the associated interannual variability. The origin of the maximum reflects a complex interplay of chemical and dynamic factors. The O3 maximum is dominated by the O3 production driven by lightning nitrogen oxides (NOx) emissions, which accounts for 62% of the tropospheric column O3 in May 2006. We find that the contributions from biomass burning, soil, anthropogenic and biogenic sources to the O3 maximum are rather small. The O3 production in the lightning outflow from Central Africa and South America peaks in May in both regions and is directly responsible for the O3 maximum over the western ESIO. The lightning outflow from Equatorial Asia dominates over the eastern ESIO. The interannual variability of the O3 maximum is driven largely by the anomalous anti-cyclones over the southern Indian Ocean in May 2006 and 2008. The lightning outflow from Central Africa and South America is effectively entrained by the anti-cyclones, followed by northward transport to the ESIO.

  1. Solar maximum observatory

    International Nuclear Information System (INIS)

    Rust, D.M.

    1984-01-01

    The successful retrieval and repair of the Solar Maximum Mission (SMM) satellite by Shuttle astronauts in April 1984 permitted continuance of solar flare observations that began in 1980. The SMM carries a soft X ray polychromator, gamma ray, UV and hard X ray imaging spectrometers, a coronagraph/polarimeter and particle counters. The data gathered thus far indicated that electrical potentials of 25 MeV develop in flares within 2 sec of onset. X ray data show that flares are composed of compressed magnetic loops that have come too close together. Other data have been taken on mass ejection, impacts of electron beams and conduction fronts with the chromosphere and changes in the solar radiant flux due to sunspots. 13 references

  2. Post optimization paradigm in maximum 3-satisfiability logic programming

    Science.gov (United States)

    Mansor, Mohd. Asyraf; Sathasivam, Saratha; Kasihmuddin, Mohd Shareduwan Mohd

    2017-08-01

    Maximum 3-Satisfiability (MAX-3SAT) is a counterpart of the Boolean satisfiability problem that can be treated as a constraint optimization problem. It deals with the problem of finding an assignment that satisfies the maximum number of clauses in a particular 3-SAT formula. This paper presents the implementation of an enhanced Hopfield network to hasten Maximum 3-Satisfiability (MAX-3SAT) logic programming. Four post optimization techniques are investigated, including the Elliot symmetric activation function, Gaussian activation function, Wavelet activation function and Hyperbolic tangent activation function. The performances of these post optimization techniques in accelerating MAX-3SAT logic programming are discussed in terms of the ratio of maximum satisfied clauses, Hamming distance and the computation time. Dev-C++ was used as the platform for training, testing and validating our proposed techniques. The results indicate that the Hyperbolic tangent activation function and the Elliot symmetric activation function can be used for MAX-3SAT logic programming.
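
    For readers unfamiliar with the objective, the sketch below shows the quantity being optimised: a clause-counting function plus a simple hill-climbing local search in Python. It illustrates the MAX-3SAT problem only and is unrelated to the Hopfield-network machinery of the paper.

        import random

        def n_satisfied(clauses, assign):
            """clauses: list of 3-tuples of nonzero ints; literal k means
            variable |k|, negated if k < 0.  assign: dict var -> bool."""
            return sum(any((lit > 0) == assign[abs(lit)] for lit in c)
                       for c in clauses)

        def max3sat_local_search(clauses, n_vars, restarts=20, flips=1000):
            rng, best = random.Random(1), 0
            for _ in range(restarts):
                a = {v: rng.random() < 0.5 for v in range(1, n_vars + 1)}
                for _ in range(flips):
                    v = rng.randint(1, n_vars)
                    before = n_satisfied(clauses, a)
                    a[v] = not a[v]              # tentative flip
                    if n_satisfied(clauses, a) < before:
                        a[v] = not a[v]          # revert if it got worse
                best = max(best, n_satisfied(clauses, a))
            return best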

  3. Mass mortality of the vermetid gastropod Ceraesignum maximum

    Science.gov (United States)

    Brown, A. L.; Frazer, T. K.; Shima, J. S.; Osenberg, C. W.

    2016-09-01

    Ceraesignum maximum (G.B. Sowerby I, 1825), formerly Dendropoma maximum, was subject to a sudden, massive die-off in the Society Islands, French Polynesia, in 2015. On Mo'orea, where we have detailed documentation of the die-off, these gastropods were previously found in densities up to 165 m-2. In July 2015, we surveyed shallow back reefs of Mo'orea before, during and after the die-off, documenting their swift decline. All censused populations incurred 100% mortality. Additional surveys and observations from Mo'orea, Tahiti, Bora Bora, and Huahine (but not Taha'a) suggested a similar, and approximately simultaneous, die-off. The cause(s) of this cataclysmic mass mortality are currently unknown. Given the previously documented negative effects of C. maximum on corals, we expect the die-off will have cascading effects on the reef community.

  4. Efficient Levenberg-Marquardt minimization of the maximum likelihood estimator for Poisson deviates

    International Nuclear Information System (INIS)

    Laurence, T.; Chromy, B.

    2010-01-01

    Histograms of counted events are Poisson distributed, but are typically fitted without justification using nonlinear least squares fitting. The more appropriate maximum likelihood estimator (MLE) for Poisson distributed data is seldom used. We extend the use of the Levenberg-Marquardt algorithm commonly used for nonlinear least squares minimization for use with the MLE for Poisson distributed data. In so doing, we remove any excuse for not using this more appropriate MLE. We demonstrate the use of the algorithm and the superior performance of the MLE using simulations and experiments in the context of fluorescence lifetime imaging. Scientists commonly form histograms of counted events from their data, and extract parameters by fitting to a specified model. Assuming that the probability of occurrence for each bin is small, event counts in the histogram bins will be distributed according to the Poisson distribution. We develop here an efficient algorithm for fitting event counting histograms using the maximum likelihood estimator (MLE) for Poisson distributed data, rather than the non-linear least squares measure. This algorithm is a simple extension of the common Levenberg-Marquardt (L-M) algorithm, is simple to implement, quick and robust. Fitting using a least squares measure is most common, but it is the maximum likelihood estimator only for Gaussian-distributed data. Non-linear least squares methods may be applied to event counting histograms in cases where the number of events is very large, so that the Poisson distribution is well approximated by a Gaussian. However, it is not easy to satisfy this criterion in practice, which requires a large number of events. It has been well known for years that least squares procedures lead to biased results when applied to Poisson-distributed data; a recent paper provides extensive characterization of these biases in exponential fitting. The more appropriate measure based on the maximum likelihood estimator (MLE) …
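
    A sketch of the estimator being advocated, using a general-purpose SciPy optimiser rather than the authors' modified Levenberg-Marquardt scheme: minimise the Poisson negative log-likelihood of the binned counts instead of a least-squares sum.

        import numpy as np
        from scipy.optimize import minimize

        def poisson_nll(theta, t, counts, model):
            # Poisson negative log-likelihood of bin counts; the constant
            # log(counts!) terms are dropped.
            mu = np.clip(model(t, theta), 1e-12, None)
            return np.sum(mu - counts * np.log(mu))

        # toy example: single-exponential decay plus flat background
        model = lambda t, th: th[0] * np.exp(-t / th[1]) + th[2]
        rng = np.random.default_rng(0)
        t = np.arange(0.0, 10.0, 0.1)
        counts = rng.poisson(model(t, (200.0, 2.5, 3.0)))
        fit = minimize(poisson_nll, x0=(100.0, 1.0, 1.0),
                       args=(t, counts, model), method="Nelder-Mead")
        print(fit.x)   # ~ (200, 2.5, 3): amplitude, lifetime, background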

  5. Fatigue life prediction method for contact wire using maximum local stress

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Yong Seok; Haochuang, Li; Seok, Chang Sung; Koo, Jae Mean [Sungkyunkwan University, Suwon (Korea, Republic of); Lee, Ki Won; Kwon, Sam Young; Cho, Yong Hyeon [Korea Railroad Research Institute, Uiwang (Korea, Republic of)

    2015-01-15

    Railway contact wires supplying electricity to trains are exposed to repeated mechanical strain and stress caused by their own weight and discontinuous contact with a pantograph during train operation. Since the speed of railway transportation has increased continuously, railway industries have recently reported a number of contact wire failures caused by mechanical fatigue fractures instead of normal wear, which has been a more common failure mechanism. To secure the safety and durability of contact wires in environments with increased train speeds, a bending fatigue test on contact wire has been performed. The test equipment is too complicated to evaluate the fatigue characteristics of contact wire. Thus, the axial tension fatigue test was performed for a standard specimen, and the bending fatigue life for the contact wire structure was then predicted using the maximum local stress occurring at the top of the contact wire. Lastly, the tested bending fatigue life of the structure was compared with the fatigue life predicted by the axial tension fatigue test for verification.

  6. Fatigue life prediction method for contact wire using maximum local stress

    International Nuclear Information System (INIS)

    Kim, Yong Seok; Haochuang, Li; Seok, Chang Sung; Koo, Jae Mean; Lee, Ki Won; Kwon, Sam Young; Cho, Yong Hyeon

    2015-01-01

    Railway contact wires supplying electricity to trains are exposed to repeated mechanical strain and stress caused by their own weight and discontinuous contact with a pantograph during train operation. Since the speed of railway transportation has increased continuously, railway industries have recently reported a number of contact wire failures caused by mechanical fatigue fractures instead of normal wear, which has been a more common failure mechanism. To secure the safety and durability of contact wires in environments with increased train speeds, a bending fatigue test on contact wire has been performed. The test equipment is too complicated to evaluate the fatigue characteristics of contact wire. Thus, the axial tension fatigue test was performed for a standard specimen, and the bending fatigue life for the contact wire structure was then predicted using the maximum local stress occurring at the top of the contact wire. Lastly, the tested bending fatigue life of the structure was compared with the fatigue life predicted by the axial tension fatigue test for verification.

  7. Food Insecurity and Common Mental Disorders among Ethiopian Youth: Structural Equation Modeling

    Science.gov (United States)

    Lindstrom, David; Belachew, Tefera; Hadley, Craig; Lachat, Carl; Verstraeten, Roos; De Cock, Nathalie; Kolsteren, Patrick

    2016-01-01

    Background: Although the consequences of food insecurity for the physical health and nutritional status of youth have been reported, its effect on their mental health remains less investigated in developing countries. The aim of this study was to examine the pathways through which food insecurity is associated with poor mental health status among youth living in Ethiopia. Methods: We used data from the Jimma Longitudinal Family Survey of Youth (JLFSY) collected in 2009/10. A total of 1,521 youth were included in the analysis. We measured food insecurity using a 5-item scale and common mental disorders using the 20-item Self-Reporting Questionnaire (SRQ-20). Structural and generalized equation modeling using the maximum likelihood estimation method was used to analyze the data. Results: The prevalence of common mental disorders was 30.8% (95% CI: 28.6, 33.2). Food insecurity was independently associated with common mental disorders (β = 0.323, P < …). The effect of food insecurity on common mental disorders was direct, and only 8.2% of their relationship was partially mediated by physical health. In addition, poor self-rated health (β = 0.285, P < …) was associated with common mental disorders. Conclusions: Food insecurity is directly associated with common mental disorders among youth in Ethiopia. Interventions that aim to improve the mental health status of youth should consider strategies to improve access to sufficient, safe and nutritious food. PMID:27846283

  8. Maximum phytoplankton concentrations in the sea

    DEFF Research Database (Denmark)

    Jackson, G.A.; Kiørboe, Thomas

    2008-01-01

    A simplification of plankton dynamics using coagulation theory provides predictions of the maximum algal concentration sustainable in aquatic systems. These predictions have previously been tested successfully against results from iron fertilization experiments. We extend the test to data collect...

  9. Selection of magnetorheological brake types via optimal design considering maximum torque and constrained volume

    International Nuclear Information System (INIS)

    Nguyen, Q H; Choi, S B

    2012-01-01

    This research focuses on optimal design of different types of magnetorheological brakes (MRBs), from which an optimal selection of MRB types is identified. In the optimization, common types of MRB such as disc-type, drum-type, hybrid types, and T-shaped type are considered. The optimization problem is to find the optimal value of significant geometric dimensions of the MRB that can produce a maximum braking torque. The MRB is constrained in a cylindrical volume of a specific radius and length. After a brief description of the configuration of MRB types, the braking torques of the MRBs are derived based on the Herschel–Bulkley model of the MR fluid. The optimal design of MRBs constrained in a specific cylindrical volume is then analysed. The objective of the optimization is to maximize the braking torque while the torque ratio (the ratio of maximum braking torque and the zero-field friction torque) is constrained to be greater than a certain value. A finite element analysis integrated with an optimization tool is employed to obtain optimal solutions of the MRBs. Optimal solutions of MRBs constrained in different volumes are obtained based on the proposed optimization procedure. From the results, discussions on the optimal selection of MRB types depending on constrained volumes are given.

  10. On the Pontryagin maximum principle for systems with delays. Economic applications

    Science.gov (United States)

    Kim, A. V.; Kormyshev, V. M.; Kwon, O. B.; Mukhametshin, E. R.

    2017-11-01

    The Pontryagin maximum principle [6] is the keystone of finite-dimensional optimal control theory [1, 2, 5]. Ever since its discovery it has therefore been important to extend the maximum principle to various classes of dynamical systems. In the paper we consider some aspects of the application of i-smooth analysis [3, 4] in the theory of the Pontryagin maximum principle [6] for systems with delays; the results obtained can be applied to elaborate optimal program controls in economic models with delays.

  11. A simple method for finding the scattering coefficients of quantum graphs

    International Nuclear Information System (INIS)

    Cottrell, Seth S.

    2015-01-01

    Quantum walks are roughly analogous to classical random walks, and similar to classical walks they have been used to find new (quantum) algorithms. When studying the behavior of large graphs or combinations of graphs, it is useful to find the response of a subgraph to signals of different frequencies. In doing so, we can replace an entire subgraph with a single vertex with variable scattering coefficients. In this paper, a simple technique for quickly finding the scattering coefficients of any discrete-time quantum graph will be presented. These scattering coefficients can be expressed entirely in terms of the characteristic polynomial of the graph’s time step operator. This is a marked improvement over previous techniques which have traditionally required finding eigenstates for a given eigenvalue, which is far more computationally costly. With the scattering coefficients we can easily derive the “impulse response” which is the key to predicting the response of a graph to any signal. This gives us a powerful set of tools for rapidly understanding the behavior of graphs or for reducing a large graph into its constituent subgraphs regardless of how they are connected

  12. A Hybrid Physical and Maximum-Entropy Landslide Susceptibility Model

    Directory of Open Access Journals (Sweden)

    Jerry Davis

    2015-06-01

    The clear need for accurate landslide susceptibility mapping has led to multiple approaches. Physical models are easily interpreted and have high predictive capabilities but rely on spatially explicit and accurate parameterization, which is commonly not possible. Statistical methods can include other factors influencing slope stability such as distance to roads, but rely on good landslide inventories. The maximum entropy (MaxEnt) model has been widely and successfully used in species distribution mapping, because data on absence are often uncertain. Similarly, knowledge about the absence of landslides is often limited due to mapping scale or methodology. In this paper a hybrid approach is described that combines the physically-based landslide susceptibility model “Stability INdex MAPping” (SINMAP) with MaxEnt. This method is tested in a coastal watershed in Pacifica, CA, USA, with a well-documented landslide history including 3 inventories of 154 scars on 1941 imagery, 142 in 1975, and 253 in 1983. Results indicate that SINMAP alone overestimated susceptibility due to insufficient data on root cohesion. Models were compared using the SINMAP stability index (SI) or slope alone, and SI or slope in combination with other environmental factors: curvature, a 50-m trail buffer, vegetation, and geology. For 1941 and 1975, using slope alone was similar to using SI alone; however, in 1983 SI alone creates an Area Under the receiver operator Curve (AUC) of 0.785, compared with 0.749 for slope alone. In maximum-entropy models created using all environmental factors, the stability index (SI) from SINMAP represented the greatest contributions in all three years (1941: 48.1%; 1975: 35.3%; and 1983: 48%), with AUC of 0.795, 0.822, and 0.859, respectively; however, using slope instead of SI created similar overall AUC values, likely due to the combined effect with plan curvature indicating focused hydrologic inputs and vegetation identifying the effect of root cohesion …

  13. Parametric optimization of thermoelectric elements footprint for maximum power generation

    DEFF Research Database (Denmark)

    Rezania, A.; Rosendahl, Lasse; Yin, Hao

    2014-01-01

    The development studies in thermoelectric generator (TEG) systems are mostly disconnected from parametric optimization of the module components. In this study, the optimum footprint ratio of n- and p-type thermoelectric (TE) elements is explored to achieve maximum power generation, maximum cost-performance, and variation of efficiency in the uni-couple over a wide range of the heat transfer coefficient on the cold junction. The three-dimensional (3D) governing equations of the thermoelectricity and the heat transfer are solved using the finite element method (FEM) for temperature-dependent properties of TE materials. The results, which are in good agreement with the previous computational studies, show that the maximum power generation and the maximum cost-performance in the module occur at An/Ap …

  14. Coronary ligation reduces maximum sustained swimming speed in Chinook salmon, Oncorhynchus tshawytscha

    DEFF Research Database (Denmark)

    Farrell, A P; Steffensen, J F

    1987-01-01

    The maximum aerobic swimming speed of Chinook salmon (Oncorhynchus tshawytscha) was measured before and after ligation of the coronary artery. Coronary artery ligation prevented blood flow to the compact layer of the ventricular myocardium, which represents 30% of the ventricular mass, and produced...... a statistically significant 35.5% reduction in maximum swimming speed. We conclude that the coronary circulation is important for maximum aerobic swimming and implicit in this conclusion is that maximum cardiac performance is probably necessary for maximum aerobic swimming performance....

  15. b-tree facets for the simple graph partitioning polytope

    DEFF Research Database (Denmark)

    Sørensen, Michael Malmros

    2004-01-01

    The simple graph partitioning problem is to partition an edge-weighted graph into mutually disjoint subgraphs, each consisting of no more than b nodes, such that the sum of the weights of all edges in the subgraphs is maximal. In this paper we introduce a large class of facet defining inequalities for the simple graph partitioning polytopes P_n(b), b >= 3, associated with the complete graph on n nodes. These inequalities are induced by a graph configuration which is built upon trees of cardinality b. We provide a closed-form theorem that states all necessary and sufficient conditions for the facet defining property of the inequalities.

  16. Effects of bruxism on the maximum bite force

    Directory of Open Access Journals (Sweden)

    Todić Jelena T.

    2017-01-01

    Background/Aim. Bruxism is a parafunctional activity of the masticatory system, which is characterized by clenching or grinding of teeth. The purpose of this study was to determine whether the presence of bruxism has an impact on maximum bite force, with particular reference to the potential impact of gender on bite force values. Methods. This study included two groups of subjects: without and with bruxism. The presence of bruxism in the subjects was registered using a specific clinical questionnaire on bruxism and physical examination. The subjects from both groups were submitted to the procedure of measuring the maximum bite pressure and occlusal contact area using single-sheet pressure-sensitive films (Fuji Prescale MS and HS Film). Maximal bite force was obtained by multiplying maximal bite pressure and occlusal contact area values. Results. The average values of maximal bite force were significantly higher in the subjects with bruxism compared to those without bruxism (p < 0.01). Maximal bite force was significantly higher in the males compared to the females in all segments of the research. Conclusion. The presence of bruxism influences the increase in the maximum bite force as shown in this study. Gender is a significant determinant of bite force. Registration of maximum bite force can be used in diagnosing and analysing pathophysiological events during bruxism.

  17. Litter accumulation in a Panicum maximum grassland and in a silvopastoral system of Panicum maximum and Leucaena leucocephala

    Directory of Open Access Journals (Sweden)

    Saray Sánchez

    2007-09-01

    A study was carried out at the Experimental Station of Pastures and Forages "Indio Hatuey", Matanzas, Cuba, with the objective of determining the litter accumulation in a pastureland of Panicum maximum Jacq cv. Likoni and in a silvopastoral system of Panicum maximum and Leucaena leucocephala (Lam) de Wit cv. Cunningham. In the P. maximum pasturelands of both systems the litter accumulation was determined by means of the technique proposed by Bruce and Ebershon (1982), while the litter of L. leucocephala accumulated in the silvopastoral system was determined according to Santa Regina et al. (1997). In general, the results showed that in both pasturelands guinea grass accumulated a smaller quantity of litter during the June-December period, the stage in which its greatest vegetative development takes place. In leucaena, the highest litter production occurred in the period from December to January, associated with the natural fall of its leaves caused by the lower temperatures and the scarce moisture in the soil. In the silvopastoral system, the leucaena litter represented the highest weight percentage within the total production, with a higher content of nitrogen and calcium than the litter of the herbaceous stratum. For guinea grass, rainfall was the climatic factor showing the strongest negative correlation with litter production in both systems, while for leucaena the strongest negative correlation was found with the minimum temperature.

  18. Parameters determining maximum wind velocity in a tropical cyclone

    International Nuclear Information System (INIS)

    Choudhury, A.M.

    1984-09-01

    The spiral structure of a tropical cyclone was earlier explained by a tangential velocity distribution which varies inversely as the distance from the cyclone centre outside the circle of maximum wind speed. The case has been extended in the present paper by adding a radial velocity. It has been found that a suitable combination of radial and tangential velocities can account for the spiral structure of a cyclone. This enables parametrization of the cyclone. Finally a formula has been derived relating maximum velocity in a tropical cyclone with angular momentum, radius of maximum wind speed and the spiral angle. The shapes of the spirals have been computed for various spiral angles. (author)

  19. Exercise-induced maximum metabolic rate scaled to body mass by ...

    African Journals Online (AJOL)

    Exercise-induced maximum metabolic rate scaled to body mass by the fractal ... rate scaling is that exercise-induced maximum aerobic metabolic rate (MMR) is ... muscle stress limitation, and maximized oxygen delivery and metabolic rates.

  20. Electron density distribution in Si and Ge using multipole, maximum ...

    Indian Academy of Sciences (India)

    Si and Ge have been studied using multipole, maximum entropy method (MEM) and … electron density distribution using the currently available versatile … data should be subjected to maximum possible utility for the characterization of …

  1. The maximum-entropy method in superspace

    Czech Academy of Sciences Publication Activity Database

    van Smaalen, S.; Palatinus, Lukáš; Schneider, M.

    2003-01-01

    Acta Crystallographica Section A, Vol. 59 (2003), pp. 459–469. ISSN 0108-7673. Keywords: maximum-entropy method; aperiodic crystals; electron density. Subject RIV: BM - Solid Matter Physics; Magnetism. Impact factor: 1.558 (2003).

  2. Maximum-entropy clustering algorithm and its global convergence analysis

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Constructing a batch of differentiable entropy functions to uniformly approximate an objective function by means of the maximum-entropy principle, a new clustering algorithm, called the maximum-entropy clustering algorithm, is proposed based on optimization theory. This algorithm is a soft generalization of the hard C-means algorithm and possesses global convergence. Its relations with other clustering algorithms are discussed.
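
    A minimal reconstruction of the generic idea (my sketch of an entropy-regularised, deterministic-annealing clustering step, not the authors' exact algorithm): memberships are the Gibbs distribution over centroids at inverse temperature beta, and letting beta grow recovers hard C-means.

        import numpy as np

        def maxent_cluster(X, k, beta=5.0, n_iter=100, seed=0):
            """Soft clustering with maximum-entropy (Gibbs) memberships."""
            rng = np.random.default_rng(seed)
            C = X[rng.choice(len(X), size=k, replace=False)]
            for _ in range(n_iter):
                d2 = ((X[:, None, :] - C[None]) ** 2).sum(-1)  # (n, k)
                logits = -beta * d2
                logits -= logits.max(axis=1, keepdims=True)    # stabilise
                P = np.exp(logits)
                P /= P.sum(axis=1, keepdims=True)              # memberships
                C = (P.T @ X) / P.sum(axis=0)[:, None]         # new centroids
            return C, P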

  3. Common dolphin (Delphinus delphis bycatch in New Zealand commercial trawl fisheries.

    Directory of Open Access Journals (Sweden)

    Finlay N Thompson

    Marine mammals are regularly reported as bycatch in commercial and artisanal fisheries, but data are often insufficient to allow assessment of these incidental mortalities. Observer coverage of the mackerel trawl fishery in New Zealand waters between 1995 and 2011 allowed evaluation of common dolphin Delphinus delphis bycatch on the North Island west coast, where this species is the most frequently caught cetacean. Observer data were used to develop a statistical model to estimate total captures and explore covariates related to captures. A two-stage Bayesian hurdle model was used, with a logistic generalised linear model predicting whether any common dolphin captures occurred on a given tow of the net, and a zero-truncated Poisson distribution to estimate the number of dolphin captures, given that there was a capture event. Over the 16-year study period, there were 119 common dolphin captures reported on 4299 observed tows. Capture events frequently involved more than one individual, with a maximum of nine common dolphin observed caught in a single tow. There was a peak of 141 estimated common dolphin captures (95% c.i.: 56 to 276; 6.27 captures per 100 tows) in 2002-03, following the marked expansion in annual effort in this fishery to over 2000 tows. Subsequently, the number of captures fluctuated although fishing effort remained relatively high. Of the observed capture events, 60% were during trawls where the top of the net (headline) was <40 m below the surface, and the model determined that this covariate best explained common dolphin captures. Increasing headline depth by 21 m would halve the probability of a dolphin capture event on a tow. While lack of abundance data prevents assessment of the impact of these mortalities on the local common dolphin population, a clear recommendation from this study is the increasing of headline depth to reduce common dolphin captures.
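
    A sketch of the two-stage likelihood described, written as a generic maximum likelihood hurdle model (logistic any-capture part plus zero-truncated Poisson count part) rather than the authors' Bayesian fit; the covariate names are illustrative.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.special import expit, gammaln

        def hurdle_nll(params, X, y):
            """X: (n, p) tow covariates (e.g. headline depth); y: captures."""
            p = X.shape[1]
            pi = expit(X @ params[:p])        # P(any capture on the tow)
            lam = np.exp(X @ params[p:])      # Poisson mean given an event
            zero, pos = y == 0, y > 0
            ll = np.log1p(-pi[zero]).sum()
            ll += (np.log(pi[pos])            # zero-truncated Poisson log-pmf
                   + y[pos] * np.log(lam[pos]) - lam[pos]
                   - gammaln(y[pos] + 1.0)
                   - np.log1p(-np.exp(-lam[pos]))).sum()
            return -ll

        # fit: minimize(hurdle_nll, np.zeros(2 * X.shape[1]), args=(X, y))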

  4. Is tube repair of aortic aneurysm followed by aneurysmal change in the common iliac arteries?

    Science.gov (United States)

    Provan, J L; Fialkov, J; Ameli, F M; St Louis, E L

    1990-10-01

    To address the concern that tube repair of an abdominal aortic aneurysm might be followed by aneurysmal change in the common iliac arteries, 23 patients who had undergone the operation were re-examined 3 to 5 years later. Although 9 had had minimal ectasia of these arteries preoperatively, in none of the 23 was there symptomatic or radiologic evidence of aneurysmal change on follow-up. Measurements of the maximum intraluminal diameters were made by computed tomography; they indicated no significant differences between the preoperative and follow-up sizes of the common iliac arteries. The variation in time to follow-up also showed no significant correlation with change in artery diameter.

  5. 5 CFR 534.203 - Maximum stipends.

    Science.gov (United States)

    2010-01-01

    ... maximum stipend established under this section. (e) A trainee at a non-Federal hospital, clinic, or medical or dental laboratory who is assigned to a Federal hospital, clinic, or medical or dental... Administrative Personnel OFFICE OF PERSONNEL MANAGEMENT CIVIL SERVICE REGULATIONS PAY UNDER OTHER SYSTEMS Student...

  6. Influence of maximum bite force on jaw movement during gummy jelly mastication.

    Science.gov (United States)

    Kuninori, T; Tomonari, H; Uehara, S; Kitashima, F; Yagi, T; Miyawaki, S

    2014-05-01

    It is known that maximum bite force has various influences on chewing function; however, there have not been studies in which the relationships between maximum bite force and masticatory jaw movement have been clarified. The aim of this study was to investigate the effect of maximum bite force on masticatory jaw movement in subjects with normal occlusion. Thirty young adults (22 men and 8 women; mean age, 22.6 years) with good occlusion were divided into two groups based on whether they had a relatively high or low maximum bite force according to the median. The maximum bite force was determined according to the Dental Prescale System using pressure-sensitive sheets. Jaw movement during mastication of hard gummy jelly (each 5.5 g) on the preferred chewing side was recorded using a six degrees of freedom jaw movement recording system. The motion of the lower incisal point of the mandible was computed, and the mean values of 10 cycles (cycles 2-11) were calculated. A masticatory performance test was conducted using gummy jelly. Subjects with a lower maximum bite force showed increased maximum lateral amplitude, closing distance, width and closing angle; wider masticatory jaw movement; and significantly lower masticatory performance. However, no differences in the maximum vertical or maximum anteroposterior amplitudes were observed between the groups. Although other factors, such as individual morphology, may influence masticatory jaw movement, our results suggest that subjects with a lower maximum bite force show increased lateral jaw motion during mastication. © 2014 John Wiley & Sons Ltd.

  7. Maximum parsimony on subsets of taxa.

    Science.gov (United States)

    Fischer, Mareike; Thatte, Bhalchandra D

    2009-09-21

    In this paper we investigate mathematical questions concerning the reliability (reconstruction accuracy) of Fitch's maximum parsimony algorithm for reconstructing the ancestral state given a phylogenetic tree and a character. In particular, we consider the question whether the maximum parsimony method applied to a subset of taxa can reconstruct the ancestral state of the root more accurately than when applied to all taxa, and we give an example showing that this indeed is possible. A surprising feature of our example is that ignoring a taxon closer to the root improves the reliability of the method. On the other hand, in the case of the two-state symmetric substitution model, we answer affirmatively a conjecture of Li, Steel and Zhang which states that under a molecular clock the probability that the state at a single taxon is a correct guess of the ancestral state is a lower bound on the reconstruction accuracy of Fitch's method applied to all taxa.
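
    For reference, Fitch's bottom-up pass is short enough to state in full; the sketch below (illustrative Python, toy inputs) scores one character on a rooted binary tree and returns the root state set together with the parsimony cost.

        def fitch(tree, leaf_states):
            """tree: nested 2-tuples with string leaves, e.g.
            (('a','b'),('c','d')); leaf_states: dict leaf -> state.
            Returns (root state set, minimum number of changes)."""
            def walk(node):
                if isinstance(node, str):                    # leaf
                    return {leaf_states[node]}, 0
                (ls, lc), (rs, rc) = walk(node[0]), walk(node[1])
                inter = ls & rs
                if inter:                                    # agreement
                    return inter, lc + rc
                return ls | rs, lc + rc + 1                  # one change

            return walk(tree)

        # states A, A, B, A on ((a,b),(c,d)) -> ({'A'}, 1)
        print(fitch((('a', 'b'), ('c', 'd')),
                    {'a': 'A', 'b': 'A', 'c': 'B', 'd': 'A'}))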

  8. An event- and network-level analysis of college students' maximum drinking day.

    Science.gov (United States)

    Meisel, Matthew K; DiBello, Angelo M; Balestrieri, Sara G; Ott, Miles Q; DiGuiseppi, Graham T; Clark, Melissa A; Barnett, Nancy P

    2018-04-01

    Heavy episodic drinking is common among college students and remains a serious public health issue. Previous event-level research among college students has examined behaviors and individual-level characteristics that drive consumption and related consequences but often ignores the social network of people with whom these heavy drinking episodes occur. The main aim of the current study was to investigate the network of social connections between drinkers on their heaviest drinking occasions. Sociocentric network methods were used to collect information from individuals in the first-year class (N=1342) at one university. Past-month drinkers (N=972) reported on the characteristics of their heaviest drinking occasion in the past month and indicated who else among their network connections was present during this occasion. Average max drinking day indegree, or the total number of times a participant was nominated as being present on another student's heaviest drinking occasion, was 2.50 (SD=2.05). Network autocorrelation models indicated that max drinking day indegree (e.g., popularity on heaviest drinking occasions) and peers' number of drinks on their own maximum drinking occasions were significantly associated with participant maximum number of drinks, after controlling for demographic variables, pregaming, and global network indegree (e.g., popularity in the entire first-year class). Being present at other peers' heaviest drinking occasions is associated with greater drinking quantities on one's own heaviest drinking occasion. These findings suggest the potential for interventions that target peer influences within close social networks of drinkers.

  9. Maximum length scale in density based topology optimization

    DEFF Research Database (Denmark)

    Lazarov, Boyan Stefanov; Wang, Fengwen

    2017-01-01

    The focus of this work is on two new techniques for imposing maximum length scale in topology optimization. Restrictions on the maximum length scale provide designers with full control over the optimized structure and open possibilities to tailor the optimized design for a broader range of manufacturing processes by fulfilling the associated technological constraints. One of the proposed methods is based on a combination of several filters and builds on top of the classical density filtering, which can be viewed as a low pass filter applied to the design parametrization. The main idea …

  10. Imaging VLBI polarimetry data from Active Galactic Nuclei using the Maximum Entropy Method

    Directory of Open Access Journals (Sweden)

    Coughlan Colm P.

    2013-12-01

    Mapping the relativistic jets emanating from AGN requires the use of a deconvolution algorithm to account for the effects of missing baseline spacings. The CLEAN algorithm is the most commonly used algorithm in VLBI imaging today and is suitable for imaging polarisation data. The Maximum Entropy Method (MEM) is presented as an alternative with some advantages over the CLEAN algorithm, including better spatial resolution and a more rigorous and unbiased approach to deconvolution. We have developed a MEM code suitable for deconvolving VLBI polarisation data. Monte Carlo simulations investigating the performance of CLEAN and the MEM code on a variety of source types are being carried out. Real polarisation (VLBA) data taken at multiple wavelengths have also been deconvolved using MEM, and several of the resulting polarisation and Faraday rotation maps are presented and discussed.

  11. Maximum Water Hammer Sensitivity Analysis

    OpenAIRE

    Jalil Emadi; Abbas Solemani

    2011-01-01

    Pressure waves and water hammer occur in a pumping system when valves are closed or opened suddenly or in the case of sudden failure of pumps. Determining the maximum water hammer is one of the most important technical and economic concerns for engineers and designers of pumping stations and conveyance pipelines. Hammer Software is a recent application used to simulate water hammer. The present study focuses on determining the significance of ...
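
    For orientation (this is the standard first estimate, not something stated in the abstract): the Joukowsky relation bounds the surge pressure for instantaneous valve closure,

        \Delta p_{max} = \rho \, a \, \Delta v ,

    where \rho is the fluid density, a the pressure-wave speed in the pipe and \Delta v the change in flow velocity. For water (\rho = 1000 kg/m^3, a \approx 1000 m/s), stopping a 2 m/s flow gives \Delta p \approx 2 MPa, about 20 bar; simulation software refines this estimate by modelling wave reflections, finite closure times and pump characteristics.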

  12. PTree: pattern-based, stochastic search for maximum parsimony phylogenies

    OpenAIRE

    Gregor, Ivan; Steinbrück, Lars; McHardy, Alice C.

    2013-01-01

    Phylogenetic reconstruction is vital to analyzing the evolutionary relationship of genes within and across populations of different species. Nowadays, with next generation sequencing technologies producing sets comprising thousands of sequences, robust identification of the tree topology, which is optimal according to standard criteria such as maximum parsimony, maximum likelihood or posterior probability, with phylogenetic inference methods is a computationally very demanding task. Here, we ...

  13. The maximum significant wave height in the Southern North Sea

    NARCIS (Netherlands)

    Bouws, E.; Tolman, H.L.; Holthuijsen, L.H.; Eldeberky, Y.; Booij, N.; Ferier, P.

    1995-01-01

    The maximum possible wave conditions along the Dutch coast, which seem to be dominated by the limited water depth, have been estimated in the present study with numerical simulations. Discussions with meteorologists suggest that the maximum possible sustained wind speed in North Sea conditions is …

  14. The narrow endemic Norwegian peat moss Sphagnum troendelagicum originated before the last glacial maximum.

    Science.gov (United States)

    Stenøien, H K; Shaw, A J; Stengrundet, K; Flatberg, K I

    2011-02-01

    It is commonly found that individual hybrid, polyploid species originate recurrently and that many polyploid species originated relatively recently. It has been previously hypothesized that the extremely rare allopolyploid peat moss Sphagnum troendelagicum has originated multiple times, possibly after the last glacial maximum in Scandinavia. This conclusion was based on low linkage disequilibrium in anonymous genetic markers within natural populations, in which sexual reproduction has never been observed. Here we employ microsatellite markers and chloroplast DNA (cpDNA)-encoded trnG sequence data to test hypotheses concerning the origin and evolution of this species. We find that S. tenellum is the maternal progenitor and S. balticum is the paternal progenitor of S. troendelagicum. Using various Bayesian approaches, we estimate that S. troendelagicum originated before the Holocene but not before c. 80,000 years ago (median expected time since speciation 40 000 years before present). The observed lack of complete linkage disequilibrium in the genome of this species suggests cryptic sexual reproduction and recombination. Several lines of evidence suggest multiple origins for S. troendelagicum, but a single origin is supported by approximate Bayesian computation analyses. We hypothesize that S. troendelagicum originated in a peat-dominated refugium before last glacial maximum, and subsequently immigrated to central Norway by means of spore flow during the last thousands of years.

  15. The narrow endemic Norwegian peat moss Sphagnum troendelagicum originated before the last glacial maximum

    Science.gov (United States)

    Stenøien, H K; Shaw, A J; Stengrundet, K; Flatberg, K I

    2011-01-01

    It is commonly found that individual hybrid, polyploid species originate recurrently and that many polyploid species originated relatively recently. It has been previously hypothesized that the extremely rare allopolyploid peat moss Sphagnum troendelagicum has originated multiple times, possibly after the last glacial maximum in Scandinavia. This conclusion was based on low linkage disequilibrium in anonymous genetic markers within natural populations, in which sexual reproduction has never been observed. Here we employ microsatellite markers and chloroplast DNA (cpDNA)-encoded trnG sequence data to test hypotheses concerning the origin and evolution of this species. We find that S. tenellum is the maternal progenitor and S. balticum is the paternal progenitor of S. troendelagicum. Using various Bayesian approaches, we estimate that S. troendelagicum originated before the Holocene but not before c. 80 000 years ago (median expected time since speciation 40 000 years before present). The observed lack of complete linkage disequilibrium in the genome of this species suggests cryptic sexual reproduction and recombination. Several lines of evidence suggest multiple origins for S. troendelagicum, but a single origin is supported by approximate Bayesian computation analyses. We hypothesize that S. troendelagicum originated in a peat-dominated refugium before last glacial maximum, and subsequently immigrated to central Norway by means of spore flow during the last thousands of years. PMID:20717162

  16. Maximum mass ratio of AM CVn-type binary systems and maximum white dwarf mass in ultra-compact X-ray binaries

    Directory of Open Access Journals (Sweden)

    Arbutina Bojan

    2011-01-01

    AM CVn-type stars and ultra-compact X-ray binaries are extremely interesting semi-detached close binary systems in which the Roche lobe filling component is a white dwarf transferring mass to another white dwarf, neutron star or a black hole. Earlier theoretical considerations show that there is a maximum mass ratio of AM CVn-type binary systems (qmax ≈ 2/3) below which the mass transfer is stable. In this paper we derive a slightly different value for qmax and, more interestingly, by applying the same procedure, we find the maximum expected white dwarf mass in ultra-compact X-ray binaries.
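
    The textbook argument behind a critical ratio near 2/3 can be reconstructed as follows (a standard derivation; the paper's refined value differs in detail). For conservative transfer from donor M2 to accretor M1 at constant orbital angular momentum J = M1 M2 (Ga/M)^{1/2},

        \frac{d\ln a}{d\ln M_2} = 2q - 2, \qquad
        \frac{R_L}{a} \simeq 0.462 \left(\frac{M_2}{M}\right)^{1/3}
        \;\Longrightarrow\;
        \zeta_L \equiv \frac{d\ln R_L}{d\ln M_2} = 2q - \frac{5}{3},

    with q = M2/M1. Stability requires the donor to shrink no slower than its Roche lobe, \zeta_{donor} \ge \zeta_L; for a low-mass white dwarf with R \propto M^{-1/3} (\zeta_{donor} = -1/3) this gives q \le 2/3.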

  17. Size dependence of efficiency at maximum power of heat engine

    KAUST Repository

    Izumida, Y.; Ito, N.

    2013-01-01

    We perform a molecular dynamics computer simulation of a heat engine model to study how the engine size difference affects its performance. Upon tactically increasing the size of the model anisotropically, we determine that there exists an optimum size at which the model attains the maximum power for the shortest working period. This optimum size locates between the ballistic heat transport region and the diffusive heat transport one. We also study the size dependence of the efficiency at the maximum power. Interestingly, we find that the efficiency at the maximum power around the optimum size attains a value that has been proposed as a universal upper bound, and it even begins to exceed the bound as the size further increases. We explain this behavior of the efficiency at maximum power by using a linear response theory for the heat engine operating under a finite working period, which naturally extends the low-dissipation Carnot cycle model [M. Esposito, R. Kawai, K. Lindenberg, C. Van den Broeck, Phys. Rev. Lett. 105, 150603 (2010)]. The theory also shows that the efficiency at the maximum power under an extreme condition may reach the Carnot efficiency in principle.
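
    For context, the bound in question is, in the low-dissipation framework of the cited Esposito et al. paper,

        \frac{\eta_C}{2} \;\le\; \eta^{*} \;\le\; \frac{\eta_C}{2 - \eta_C}, \qquad \eta_C = 1 - T_c/T_h ,

    where \eta^{*} is the efficiency at maximum power; the Curzon-Ahlborn value \eta_{CA} = 1 - \sqrt{1 - \eta_C} lies between the two expressions. The simulation's finding is thus that \eta^{*} reaches, and then exceeds, the upper expression as the engine grows past its optimum size.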

  18. Size dependence of efficiency at maximum power of heat engine

    KAUST Repository

    Izumida, Y.

    2013-10-01

    We perform a molecular dynamics computer simulation of a heat engine model to study how the engine size difference affects its performance. Upon tactically increasing the size of the model anisotropically, we determine that there exists an optimum size at which the model attains the maximum power for the shortest working period. This optimum size locates between the ballistic heat transport region and the diffusive heat transport one. We also study the size dependence of the efficiency at the maximum power. Interestingly, we find that the efficiency at the maximum power around the optimum size attains a value that has been proposed as a universal upper bound, and it even begins to exceed the bound as the size further increases. We explain this behavior of the efficiency at maximum power by using a linear response theory for the heat engine operating under a finite working period, which naturally extends the low-dissipation Carnot cycle model [M. Esposito, R. Kawai, K. Lindenberg, C. Van den Broeck, Phys. Rev. Lett. 105, 150603 (2010)]. The theory also shows that the efficiency at the maximum power under an extreme condition may reach the Carnot efficiency in principle.

  19. Maximum Principle for General Controlled Systems Driven by Fractional Brownian Motions

    International Nuclear Information System (INIS)

    Han Yuecai; Hu Yaozhong; Song Jian

    2013-01-01

    We obtain a maximum principle for stochastic control problem of general controlled stochastic differential systems driven by fractional Brownian motions (of Hurst parameter H>1/2). This maximum principle specifies a system of equations that the optimal control must satisfy (necessary condition for the optimal control). This system of equations consists of a backward stochastic differential equation driven by both fractional Brownian motions and the corresponding underlying standard Brownian motions. In addition to this backward equation, the maximum principle also involves the Malliavin derivatives. Our approach is to use conditioning and Malliavin calculus. To arrive at our maximum principle we need to develop some new results of stochastic analysis of the controlled systems driven by fractional Brownian motions via fractional calculus. Our approach of conditioning and Malliavin calculus is also applied to classical system driven by standard Brownian motions while the controller has only partial information. As a straightforward consequence, the classical maximum principle is also deduced in this more natural and simpler way.

  20. 20 CFR 10.806 - How are the maximum fees defined?

    Science.gov (United States)

    2010-04-01

    ... AMENDED Information for Medical Providers Medical Fee Schedule § 10.806 How are the maximum fees defined? For professional medical services, the Director shall maintain a schedule of maximum allowable fees.../Current Procedural Terminology (HCPCS/CPT) code which represents the relative skill, effort, risk and time...

  1. Maximum-likelihood estimation of the hyperbolic parameters from grouped observations

    DEFF Research Database (Denmark)

    Jensen, Jens Ledet

    1988-01-01

    a least-squares problem. The second procedure, Hypesti, first approaches the maximum-likelihood estimate by iterating on the profile log-likelihood function for the scale parameter. Close to the maximum of the likelihood function, the estimation is brought to an end by iteration using all four parameters...

  2. Scientific substantiation of maximum allowable concentration of fluopicolide in water

    Directory of Open Access Journals (Sweden)

    Pelo I.М.

    2014-03-01

    Full Text Available Research was carried out to substantiate the maximum allowable concentration of fluopicolide in the water of reservoirs. Methods of study: a laboratory hygienic experiment using organoleptic, sanitary-chemical, sanitary-toxicological, sanitary-microbiological and mathematical methods. The effects of fluopicolide on the organoleptic properties of water and on the sanitary regimen of reservoirs for household purposes are reported, and its subthreshold concentration in water by the sanitary-toxicological hazard index was calculated. The threshold concentration of the substance by the main hazard criteria was established, and the maximum allowable concentration in water was substantiated. The studies led to the following conclusions: the threshold concentration of fluopicolide in water by the organoleptic hazard index (limiting criterion: odour) is 0.15 mg/dm3; by the general sanitary hazard index (limiting criteria: impact on the number of saprophytic microflora, biochemical oxygen demand and nitrification) it is 0.015 mg/dm3; the maximum non-effective concentration is 0.14 mg/dm3; and the maximum allowable concentration is 0.015 mg/dm3.

  3. Maximum likelihood estimation of finite mixture model for economic data

    Science.gov (United States)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir

    2014-06-01

    A finite mixture model is a mixture model with a finite number of components. These models provide a natural representation of heterogeneity across a finite number of latent classes, and are also known as latent class models or unsupervised learning models. Recently, maximum likelihood estimation of finite mixture models has drawn considerable attention from statisticians, mainly because maximum likelihood estimation is a powerful statistical method that yields consistent estimates as the sample size increases to infinity. In the present paper, maximum likelihood estimation is used to fit a finite mixture model in order to explore the relationship between nonlinear economic data. A two-component normal mixture model is fitted by maximum likelihood in order to investigate the relationship between stock market prices and rubber prices for the sampled countries. The results indicate a negative relationship between rubber price and stock market price for Malaysia, Thailand, the Philippines and Indonesia.
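
    As an illustration of the estimation machinery such a study relies on (a minimal sketch, not the authors' code), one EM iteration scheme for a univariate two-component normal mixture, assuming a data array x, might look like this:

      import numpy as np

      def em_two_component(x, n_iter=200):
          # Crude initialization: extreme means, pooled spread, equal weights.
          mu = np.array([x.min(), x.max()], dtype=float)
          sigma = np.array([x.std(), x.std()]) + 1e-6
          pi = np.array([0.5, 0.5])
          for _ in range(n_iter):
              # E-step: posterior responsibility of each component per point.
              dens = np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) \
                     / (sigma * np.sqrt(2.0 * np.pi))
              r = pi * dens
              r /= r.sum(axis=1, keepdims=True) + 1e-300
              # M-step: weighted maximum likelihood updates.
              nk = r.sum(axis=0)
              pi = nk / len(x)
              mu = (r * x[:, None]).sum(axis=0) / nk
              sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-9
          return pi, mu, sigma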

  4. Efficiency of autonomous soft nanomachines at maximum power.

    Science.gov (United States)

    Seifert, Udo

    2011-01-14

    We consider nanosized artificial or biological machines working in steady state enforced by imposing nonequilibrium concentrations of solutes or by applying external forces, torques, or electric fields. For unicyclic and strongly coupled multicyclic machines, efficiency at maximum power is not bounded by the linear response value 1/2. For strong driving, it can even approach the thermodynamic limit 1. Quite generally, such machines fall into three different classes characterized, respectively, as "strong and efficient," "strong and inefficient," and "balanced." For weakly coupled multicyclic machines, efficiency at maximum power has lost any universality even in the linear response regime.

  5. Changing nest placement of Hawaiian Common Amakihi during the breeding cycle

    Science.gov (United States)

    van Riper, Charles; Kern, M. D.; Sogge, M. K.

    1993-01-01

    We studied the nesting behavior of the Common Amakihi (Hemignathus virens) from 1970-1981 on the island of Hawaii to determine whether the species alters nest placement over a protracted 9-month breeding season. Birds preferentially chose the southwest quadrant of trees in which to build nests during all phases of the breeding season. Ambient temperature (Ta) appeared to be a contributing factor to differential nest placement between the early and late phases of the annual breeding cycle. When Ta was low during the early (December-March) breeding period, Common Amakihi selected exposed nesting locations that gave them maximum solar insolation. In the later phase of the breeding period (April-July), when Ta was much higher, renesting birds selected nest sites deeper in the canopy in significantly taller trees. This is one of the few documented examples of a species changing nest location during a breeding season, allowing it to exploit temporally differing microclimatic conditions.

  6. Zipf's law, power laws and maximum entropy

    International Nuclear Information System (INIS)

    Visser, Matt

    2013-01-01

    Zipf's law, and power laws in general, have attracted and continue to attract considerable attention in a wide variety of disciplines—from astronomy to demographics to software structure to economics to linguistics to zoology, and even warfare. A recent model of random group formation (RGF) attempts a general explanation of such phenomena based on Jaynes' notion of maximum entropy applied to a particular choice of cost function. In the present paper I argue that the specific cost function used in the RGF model is in fact unnecessarily complicated, and that power laws can be obtained in a much simpler way by applying maximum entropy ideas directly to the Shannon entropy subject only to a single constraint: that the average of the logarithm of the observable quantity is specified. (paper)
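
    A hedged sketch of the calculation the abstract alludes to (the notation is illustrative): maximize the Shannon entropy of a distribution p_k over ranks k subject to normalization and a prescribed mean logarithm,

      \max_p \; -\sum_k p_k \ln p_k \quad \text{s.t.} \quad \sum_k p_k = 1, \qquad \sum_k p_k \ln k = \chi .

    Stationarity of the Lagrangian gives \ln p_k = -(1+\mu) - \alpha \ln k for multipliers \mu, \alpha, hence the power law

      p_k \propto k^{-\alpha},

    with the exponent \alpha fixed by the constraint \langle \ln k \rangle = \chi.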

  7. Einstein-Dirac theory in spin maximum I

    International Nuclear Information System (INIS)

    Crumeyrolle, A.

    1975-01-01

    A unitary Einstein-Dirac theory, first in spin maximum 1, is constructed. An original feature of this article is that it is written without any tetrad techniques; only basic notions and existence conditions for spinor structures on pseudo-Riemannian fibre bundles are used. A gravitation-electromagnetic field coupling is pointed out in the geometric setting of the tangent bundle over space-time. Generalized Maxwell equations for inductive media in the presence of a gravitational field are obtained. The enlarged Einstein-Schroedinger theory gives a particular case of this E.D. theory; E.S. theory is a truncated E.D. theory in spin maximum 1. A close relation between the torsion-vector and Schroedinger's potential exists, and the nullity of the torsion-vector has a spinor meaning. Finally, the Petiau-Duffin-Kemmer theory is incorporated into this geometric setting [fr

  8. CombiMotif: A new algorithm for network motifs discovery in protein-protein interaction networks

    Science.gov (United States)

    Luo, Jiawei; Li, Guanghui; Song, Dan; Liang, Cheng

    2014-12-01

    Discovering motifs in protein-protein interaction networks is a major current challenge in computational biology, since the distribution of network motif counts can reveal significant systemic differences among species. However, the task can be computationally expensive because it involves graph isomorphism detection. In this paper, we present a new algorithm (CombiMotif) that incorporates combinatorial techniques to count non-induced occurrences of subgraph topologies in the form of trees. The efficiency of our algorithm is demonstrated by comparing the obtained results with current state-of-the-art subgraph counting algorithms. We also show major differences between unicellular and multicellular organisms. The datasets and source code of CombiMotif are freely available upon request.
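
    To make combinatorial counting of non-induced tree occurrences concrete (an illustrative sketch, not the CombiMotif algorithm itself): the simplest tree topology, a path on three vertices, can be counted with no isomorphism testing at all, directly from vertex degrees.

      def count_noninduced_p3(adj):
          # adj: dict mapping each vertex to the set of its neighbours.
          # Each unordered pair of neighbours of a centre vertex v forms one
          # non-induced 3-vertex path, so the total is sum over v of C(deg(v), 2).
          return sum(len(n) * (len(n) - 1) // 2 for n in adj.values())

      # Toy interaction graph (vertex labels are hypothetical).
      g = {"a": {"b", "c"}, "b": {"a", "c", "d"}, "c": {"a", "b"}, "d": {"b"}}
      print(count_noninduced_p3(g))  # 1 + 3 + 1 + 0 = 5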

  9. Solved and unsolved problems of chemical graph theory

    International Nuclear Information System (INIS)

    Trinajstic, N.; Klein, D.J.; Randic, M.

    1986-01-01

    The development of several novel graph-theoretical concepts and their applications in different branches of chemistry is reviewed. After a few introductory remarks, the authors outline selected important graph-theoretical invariants, introduce some new results and indicate some open problems. They continue by discussing the problem of graph characterization and the construction of graphs of chemical interest, with particular emphasis on large systems. Finally, they consider various problems and difficulties associated with special subgraphs, including subgraphs representing Kekule valence structures. The paper ends with a brief review of structure-property and structure-activity correlations, a topic which is one of the prime motivations for applying graph theory to chemistry

  10. Fuzzy Controller Design Using FPGA for Photovoltaic Maximum Power Point Tracking

    OpenAIRE

    Basil M Hamed; Mohammed S. El-Moghany

    2012-01-01

    A photovoltaic cell has an optimum operating point at which maximum power can be extracted. To obtain maximum power from a photovoltaic array, a photovoltaic power system usually requires a Maximum Power Point Tracking (MPPT) controller. This paper presents a small-power photovoltaic control system based on fuzzy control, designed and implemented with FPGA technology for MPPT. The system is composed of a photovoltaic module, a buck converter and the fuzzy logic controller implemented on FPGA for controlling the on/off time of the MOSF...

  11. Maximum penetration level of distributed generation without violating voltage limits

    NARCIS (Netherlands)

    Morren, J.; Haan, de S.W.H.

    2009-01-01

    Connection of Distributed Generation (DG) units to a distribution network will result in a local voltage increase. As there will be a maximum on the allowable voltage increase, this will limit the maximum allowable penetration level of DG. By reactive power compensation (by the DG unit itself) a

  12. 25 CFR 141.36 - Maximum finance charges on pawn transactions.

    Science.gov (United States)

    2010-04-01

    ... 25 Indians 1 2010-04-01 2010-04-01 false Maximum finance charges on pawn transactions. 141.36... PRACTICES ON THE NAVAJO, HOPI AND ZUNI RESERVATIONS Pawnbroker Practices § 141.36 Maximum finance charges on pawn transactions. No pawnbroker may impose an annual finance charge greater than twenty-four percent...

  13. Microprocessor-controlled step-down maximum-power-point tracker for photovoltaic systems

    Science.gov (United States)

    Mazmuder, R. K.; Haidar, S.

    1992-12-01

    An efficient maximum power point tracker (MPPT) has been developed; it can be used with a photovoltaic (PV) array and a load that requires a lower voltage than the PV array voltage. The MPPT makes the PV array operate at the maximum power point (MPP) under all insolation and temperature conditions, ensuring that the maximum amount of available PV power is delivered to the load. The performance of the MPPT has been studied under different insolation levels.
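
    The abstract does not spell out the tracking algorithm; as a hedged sketch, the classic perturb-and-observe logic commonly used with step-down (buck) trackers looks like this, where the measurement and actuation functions and the step size are assumptions for illustration:

      def perturb_and_observe(read_voltage, read_current, set_duty,
                              duty=0.5, step=0.01, n_steps=1000):
          # Hill-climbing MPPT: nudge the buck converter duty cycle and keep
          # moving in whichever direction increased the PV output power.
          prev_power = read_voltage() * read_current()
          direction = +1
          for _ in range(n_steps):
              duty = min(max(duty + direction * step, 0.0), 1.0)
              set_duty(duty)
              power = read_voltage() * read_current()
              if power < prev_power:
                  direction = -direction  # stepped past the MPP; reverse
              prev_power = power
          return duty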

  14. A Selectivity based approach to Continuous Pattern Detection in Streaming Graphs

    Energy Technology Data Exchange (ETDEWEB)

    Choudhury, Sutanay; Holder, Larry; Chin, George; Agarwal, Khushbu; Feo, John T.

    2015-05-27

    Cyber security is one of the most significant technical challenges of our time. Detecting adversarial activity and preventing theft of intellectual property and customer data is a high priority for corporations and government agencies around the world. Cyber defenders need to analyze massive-scale, high-resolution network flows to identify, categorize, and mitigate attacks involving networks spanning institutional and national boundaries. Many cyber attacks can be described as subgraph patterns, prominent examples being insider infiltration (path queries), denial of service (parallel paths) and malicious spread (tree queries). This motivates us to explore subgraph matching on streaming graphs in a continuous setting. The novelty of our work lies in using subgraph distributional statistics collected from the streaming graph to determine the query processing strategy. We introduce a "Lazy Search" algorithm in which the search strategy is decided on a vertex-by-vertex basis, depending on the likelihood of a match in the vertex neighborhood. We also propose a metric named "Relative Selectivity" that is used to select between different query processing strategies. Our experiments on real online news and network traffic streams and a synthetic social network benchmark demonstrate 10-100x speedups over non-incremental, selectivity-agnostic approaches.
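
    An illustrative sketch of the per-vertex decision such a lazy strategy implies (assumed interfaces, not the paper's implementation):

      def on_streaming_edge(edge, partial_matches, match_likelihood,
                            expand, defer, threshold=0.1):
          # For each partial match touched by the new edge, join against a
          # vertex neighborhood now only if a match there is likely enough;
          # otherwise park the work until the distributional statistics improve.
          for pm in partial_matches(edge):
              for v in (edge.src, edge.dst):
                  if match_likelihood(pm, v) >= threshold:
                      expand(pm, v)   # eager neighborhood join
                  else:
                      defer(pm, v)    # lazy: revisit later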

  15. Event-based criteria in GT-STAF information indices: theory, exploratory diversity analysis and QSPR applications.

    Science.gov (United States)

    Barigye, S J; Marrero-Ponce, Y; Martínez López, Y; Martínez Santiago, O; Torrens, F; García Domenech, R; Galvez, J

    2013-01-01

    Versatile event-based approaches for the definition of novel information-theoretic indices (IFIs) are presented. An event in this context is the criterion followed in the "discovery" of molecular substructures, which in turn serve as the basis for the construction of the generalized incidence and relations frequency matrices, Q and F, respectively. From the resultant F, Shannon, mutual, conditional and joint entropy-based IFIs are computed. In previous reports, an event named connected subgraphs was presented. The present study extends this notion, introducing other events, namely: terminal paths, vertex path incidence, quantum subgraphs, walks of length k, Sachs subgraphs, MACCS, E-state and substructure fingerprints and, finally, Ghose and Crippen atom types for hydrophobicity and refractivity. Moreover, we define magnitude-based IFIs, introducing the use of the magnitude criterion in the definition of mutual, conditional and joint entropy-based IFIs. We also discuss the use of information-theoretic parameters as a measure of the dissimilarity of codified structural information of molecules. Finally, a comparison of the statistics for QSPR models obtained with the proposed IFIs and DRAGON's molecular descriptors for two physicochemical properties, log P and log K, of 34 derivatives of 2-furylethylenes demonstrates predictive ability similar to or better than the latter.
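
    A hedged sketch of the shared core computation (the matrix below is a hypothetical toy, not one of the paper's events): normalize a substructure-relations frequency matrix F into a probability table and take its Shannon entropy.

      import numpy as np

      def shannon_entropy_from_F(F):
          # Joint probability table from frequencies, then H = -sum p*log2(p).
          p = np.asarray(F, dtype=float)
          p = p / p.sum()
          p = p[p > 0]  # convention: 0*log(0) = 0
          return float(-(p * np.log2(p)).sum())

      F = [[3, 1, 0],   # rows: substructure events (hypothetical)
           [1, 2, 1]]   # cols: atoms (hypothetical)
      print(shannon_entropy_from_F(F))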

  16. Introduction to maximum entropy

    International Nuclear Information System (INIS)

    Sivia, D.S.

    1989-01-01

    The maximum entropy (MaxEnt) principle has been successfully used in image reconstruction in a wide variety of fields. The author reviews the need for such methods in data analysis and shows, by use of a very simple example, why MaxEnt is to be preferred over other regularizing functions. This leads to a more general interpretation of the MaxEnt method, and its use is illustrated with several different examples. Practical difficulties with non-linear problems still remain, this being highlighted by the notorious phase problem in crystallography. He concludes with an example from neutron scattering, using data from a filter difference spectrometer to contrast MaxEnt with a conventional deconvolution. 12 refs., 8 figs., 1 tab

  17. Probable maximum flood control

    International Nuclear Information System (INIS)

    DeGabriele, C.E.; Wu, C.L.

    1991-11-01

    This study proposes preliminary design concepts to protect the waste-handling facilities and all shaft and ramp entries to the underground from the probable maximum flood (PMF) in the current design configuration for the proposed Nevada Nuclear Waste Storage Investigation (NNWSI) repository. Protection provisions were furnished by the United States Bureau of Reclamation (USBR) or developed from USBR data. Proposed flood protection provisions include site grading, drainage channels, and diversion dikes. Figures are provided to show these proposed flood protection provisions at each area investigated. These areas are the central surface facilities (including the waste-handling building and waste treatment building), tuff ramp portal, waste ramp portal, men-and-materials shaft, emplacement exhaust shaft, and exploratory shafts facility

  18. Introduction to maximum entropy

    International Nuclear Information System (INIS)

    Sivia, D.S.

    1988-01-01

    The maximum entropy (MaxEnt) principle has been successfully used in image reconstruction in a wide variety of fields. We review the need for such methods in data analysis and show, by use of a very simple example, why MaxEnt is to be preferred over other regularizing functions. This leads to a more general interpretation of the MaxEnt method, and its use is illustrated with several different examples. Practical difficulties with non-linear problems still remain, this being highlighted by the notorious phase problem in crystallography. We conclude with an example from neutron scattering, using data from a filter difference spectrometer to contrast MaxEnt with a conventional deconvolution. 12 refs., 8 figs., 1 tab

  19. Application of Maximum Entropy Distribution to the Statistical Properties of Wave Groups

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    New distributions of the statistics of wave groups based on the maximum entropy principle are presented. The maximum entropy distributions appear to be superior to conventional distributions when applied to a limited amount of information. Applications to wave group properties show the effectiveness of the maximum entropy distribution. An FFT filtering method is employed to obtain the wave envelope quickly and efficiently. Comparisons of both the maximum entropy distribution and the distribution of Longuet-Higgins (1984) with laboratory wind-wave data show that the former gives a better fit.
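
    A hedged sketch of the envelope step, using one common FFT-based construction (the analytic-signal method; the paper's exact filter is not specified here):

      import numpy as np

      def wave_envelope(eta):
          # Analytic-signal envelope via FFT: suppress negative frequencies,
          # double positive ones, take the magnitude of the inverse FFT.
          n = len(eta)
          h = np.zeros(n)
          h[0] = 1.0
          h[1:(n + 1) // 2] = 2.0
          if n % 2 == 0:
              h[n // 2] = 1.0
          return np.abs(np.fft.ifft(np.fft.fft(eta) * h))

      t = np.linspace(0.0, 60.0, 3000)
      eta = np.cos(2 * np.pi * 0.5 * t) * (1.0 + 0.5 * np.cos(2 * np.pi * 0.05 * t))
      env = wave_envelope(eta)  # slowly varying group amplitude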

  20. Maximum-confidence discrimination among symmetric qudit states

    International Nuclear Information System (INIS)

    Jimenez, O.; Solis-Prosser, M. A.; Delgado, A.; Neves, L.

    2011-01-01

    We study the maximum-confidence (MC) measurement strategy for discriminating among nonorthogonal symmetric qudit states. Restricting to linearly dependent and equally likely pure states, we find the optimal positive operator valued measure (POVM) that maximizes our confidence in identifying each state in the set and minimizes the probability of obtaining inconclusive results. The physical realization of this POVM is completely determined and it is shown that after an inconclusive outcome, the input states may be mapped into a new set of equiprobable symmetric states, restricted, however, to a subspace of the original qudit Hilbert space. By applying the MC measurement again onto this new set, we can still gain some information about the input states, although with less confidence than before. This leads us to introduce the concept of sequential maximum-confidence (SMC) measurements, where the optimized MC strategy is iterated in as many stages as allowed by the input set, until no further information can be extracted from an inconclusive result. Within each stage of this measurement our confidence in identifying the input states is the highest possible, although it decreases from one stage to the next. In addition, the more stages we accomplish within the maximum allowed, the higher will be the probability of correct identification. We will discuss an explicit example of the optimal SMC measurement applied in the discrimination among four symmetric qutrit states and propose an optical network to implement it.

  1. Dinosaur Metabolism and the Allometry of Maximum Growth Rate

    OpenAIRE

    Myhrvold, Nathan P.

    2016-01-01

    The allometry of maximum somatic growth rate has been used in prior studies to classify the metabolic state of both extant vertebrates and dinosaurs. The most recent such studies are reviewed, and their data is reanalyzed. The results of allometric regressions on growth rate are shown to depend on the choice of independent variable; the typical choice used in prior studies introduces a geometric shear transformation that exaggerates the statistical power of the regressions. The maximum growth...

  2. The spectrum of R Cygni during its exceptionally low maximum of 1983

    International Nuclear Information System (INIS)

    Wallerstein, G.; Dominy, J.F.; Mattei, J.A.; Smith, V.V.

    1985-01-01

    In 1983 R Cygni experienced its faintest maximum ever recorded. A study of the light curve shows correlations between brightness at maximum and the interval from the previous cycle, in the sense that fainter maxima occur later than normal and are followed by maxima that occur earlier than normal. Emission and absorption lines in the optical and near infrared (2.2 μm region) reveal two significant correlations. The amplitude of line doubling is independent of the magnitude at maximum for m_v(max) = 7.1 to 9.8. The velocities of the emission lines, however, correlate with the magnitude at maximum, in that during bright maxima they are negatively displaced by 15 km s^-1 with respect to the red component of absorption lines, while during the faintest maximum there is no displacement. (author)

  3. Maximum likelihood estimation for integrated diffusion processes

    DEFF Research Database (Denmark)

    Baltazar-Larios, Fernando; Sørensen, Michael

    We propose a method for obtaining maximum likelihood estimates of parameters in diffusion models when the data is a discrete time sample of the integral of the process, while no direct observations of the process itself are available. The data are, moreover, assumed to be contaminated by measurement errors. Integrated volatility is an example of this type of observations. Another example is ice-core data on oxygen isotopes used to investigate paleo-temperatures. The data can be viewed as incomplete observations of a model with a tractable likelihood function. Therefore we propose a simulated EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works...

  4. [A retrieval method of drug molecules based on graph collapsing].

    Science.gov (United States)

    Qu, J W; Lv, X Q; Liu, Z M; Liao, Y; Sun, P H; Wang, B; Tang, Z

    2018-04-18

    To establish a compact and efficient hypergraph representation and a graph-similarity-based retrieval method for molecules, so as to achieve effective and efficient medicine information retrieval. The chemical structural formula (CSF) is a primary search target, being a unique and precise identifier for each compound at the molecular level in medicine information retrieval. To retrieve medicine information effectively and efficiently, a complete workflow of a graph-based CSF retrieval system is introduced. The system accepts photos taken with smartphones and sketches drawn on tablet personal computers as CSF inputs, and formalizes the CSFs as graphs. This paper then proposes a compact and efficient hypergraph representation for molecules, based on an analysis of the factors that directly affect the efficiency of graph matching. According to the characteristics of CSFs, a hierarchical collapsing method combining graph isomorphism and frequent subgraph mining is adopted. A fundamental challenge remains, however: subgraph overlapping during the collapsing procedure, which hinders the method from establishing the correct compact hypergraph of an original CSF graph. Therefore, a graph-isomorphism-based algorithm is proposed to select dominant acyclic subgraphs on the basis of overlapping analysis. Finally, the spatial similarity among graphical CSFs is evaluated by multi-dimensional measures of similarity. To evaluate the performance of the proposed method, the system was first compared with Wikipedia Chemical Structure Explorer (WCSE), the state-of-the-art system that allows CSF similarity searching within the Wikipedia molecules dataset, on retrieval accuracy. The system achieved higher values of mean average precision, discounted cumulative gain, rank-biased precision and expected reciprocal rank than WCSE from the top-2 to the top-10 retrieved results. Specifically, the system achieved 10%, 1.41, 6.42%, and 1

  5. 7 CFR 762.129 - Percent of guarantee and maximum loss.

    Science.gov (United States)

    2010-01-01

    ... loss. (a) General. The percent of guarantee will not exceed 90 percent based on the credit risk to the lender and the Agency both before and after the transaction. The Agency will determine the percentage of... PLP lenders will not be less than 80 percent. (d) Maximum loss. The maximum amount the Agency will pay...

  6. On the maximum entropy distributions of inherently positive nuclear data

    Energy Technology Data Exchange (ETDEWEB)

    Taavitsainen, A., E-mail: aapo.taavitsainen@gmail.com; Vanhanen, R.

    2017-05-11

    The multivariate log-normal distribution is used by many authors and statistical uncertainty propagation programs for inherently positive quantities. Sometimes it is claimed that the log-normal distribution results from the maximum entropy principle, if only means, covariances and inherent positiveness of quantities are known or assumed to be known. In this article we show that this is not true. Assuming a constant prior distribution, the maximum entropy distribution is in fact a truncated multivariate normal distribution – whenever it exists. However, its practical application to multidimensional cases is hindered by lack of a method to compute its location and scale parameters from means and covariances. Therefore, regardless of its theoretical disadvantage, use of other distributions seems to be a practical necessity. - Highlights: • Statistical uncertainty propagation requires a sampling distribution. • The objective distribution of inherently positive quantities is determined. • The objectivity is based on the maximum entropy principle. • The maximum entropy distribution is the truncated normal distribution. • Applicability of log-normal or normal distribution approximation is limited.
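
    A hedged sketch of the underlying variational fact (standard maximum entropy reasoning, consistent with the abstract's claim): with a flat prior on the positive orthant and constraints on the mean vector and covariance matrix, the entropy-maximizing density has the exponential-of-a-quadratic form

      p(\mathbf{x}) \;\propto\; \exp\!\left( \boldsymbol{\lambda}^{\mathsf T}\mathbf{x} \;-\; \mathbf{x}^{\mathsf T}\boldsymbol{\Lambda}\,\mathbf{x} \right), \qquad \mathbf{x} \ge \mathbf{0} \ \text{componentwise},

    i.e. a multivariate normal truncated to the positive orthant, whose multipliers \boldsymbol{\lambda}, \boldsymbol{\Lambda} must be solved for so that the truncated moments reproduce the prescribed means and covariances; locating those multipliers is precisely the practical obstacle the abstract points out.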

  7. Effects of fasting on maximum thermogenesis in temperature-acclimated rats

    Science.gov (United States)

    Wang, L. C. H.

    1981-09-01

    To further investigate the limiting effect of substrates on maximum thermogenesis in acute cold exposure, the present study examined the prevalence of this effect at different thermogenic capabilities consequent to cold- or warm-acclimation. Male Sprague-Dawley rats (n=11) were acclimated to 6, 16 and 26°C in succession, and their thermogenic capabilities after each acclimation temperature were measured under helium-oxygen (21% oxygen, balance helium) at -10°C after overnight fasting or feeding. Regardless of feeding condition, both maximum and total heat production were significantly greater in the order 6 > 16 > 26°C-acclimated conditions. In the fed state, total heat production was significantly greater than in the fasted state at all acclimation temperatures, but maximum thermogenesis was significantly greater only in the 6 and 16°C-acclimated states. The results indicate that the limiting effect of substrates on maximum and total thermogenesis is independent of the magnitude of thermogenic capability, suggesting a substrate-dependent component in restricting the effective expression of existing aerobic metabolic capability even under severe stress.

  8. Improvement of the detector resolution in X-ray spectrometry by using the maximum entropy method

    International Nuclear Information System (INIS)

    Fernández, Jorge E.; Scot, Viviana; Giulio, Eugenio Di; Sabbatucci, Lorenzo

    2015-01-01

    In every X-ray spectroscopy measurement the influence of the detection system causes a loss of information. Different mechanisms contribute to form the so-called detector response function (DRF): the detector efficiency, the escape of photons as a consequence of photoelectric or scattering interactions, the spectrum smearing due to the finite energy resolution, and, in solid state detectors (SSDs), charge collection artifacts. To recover the original spectrum, it is necessary to remove the detector influence by solving the so-called inverse problem. The maximum entropy unfolding technique solves this problem by imposing a set of constraints, taking advantage of known a priori information and preserving the positive-defined character of the X-ray spectrum. This method has been included in the tool UMESTRAT (Unfolding Maximum Entropy STRATegy), which adopts a semi-automatic strategy to solve the unfolding problem based on a suitable combination of the codes MAXED and GRAVEL, developed at PTB. Previously, UMESTRAT proved capable of resolving characteristic peaks that appeared overlapped in Si SSD measurements, giving good qualitative results. In order to obtain quantitative results, UMESTRAT has been modified to include the additional constraint of the total number of photons in the spectrum, which can easily be determined by inverting the diagonal efficiency matrix. The features of the improved code are illustrated with examples of unfolding from three commonly used SSDs: Si, Ge, and CdTe. The quantitative unfolding can be considered a software improvement of the detector resolution. - Highlights: • Radiation detection introduces distortions in X- and Gamma-ray spectrum measurements. • UMESTRAT is a graphical tool to unfold X- and Gamma-ray spectra. • UMESTRAT uses the maximum entropy method. • UMESTRAT's new version produces unfolded spectra with quantitative meaning. • UMESTRAT is a software tool to improve the detector resolution.

  9. Maximum gain of Yagi-Uda arrays

    DEFF Research Database (Denmark)

    Bojsen, J.H.; Schjær-Jacobsen, Hans; Nilsson, E.

    1971-01-01

    Numerical optimisation techniques have been used to find the maximum gain of some specific parasitic arrays. The gain of an array of infinitely thin, equispaced dipoles loaded with arbitrary reactances has been optimised. The results show that standard travelling-wave design methods are not optimum....... Yagi–Uda arrays with equal and unequal spacing have also been optimised with experimental verification....

  10. Solar Maximum Mission Experiment - Ultraviolet Spectroscopy and Polarimetry on the Solar Maximum Mission

    Science.gov (United States)

    Tandberg-Hanssen, E.; Cheng, C. C.; Woodgate, B. E.; Brandt, J. C.; Chapman, R. D.; Athay, R. G.; Beckers, J. M.; Bruner, E. C.; Gurman, J. B.; Hyder, C. L.

    1981-01-01

    The Ultraviolet Spectrometer and Polarimeter on the Solar Maximum Mission spacecraft is described. It is pointed out that the instrument, which operates in the wavelength range 1150-3600 A, has a spatial resolution of 2-3 arcsec and a spectral resolution of 0.02 A FWHM in second order. A Gregorian telescope, with a focal length of 1.8 m, feeds a 1 m Ebert-Fastie spectrometer. A polarimeter comprising rotating MgF2 waveplates can be inserted behind the spectrometer entrance slit; it permits all four Stokes parameters to be determined. Among the observing modes are rasters, spectral scans, velocity measurements, and polarimetry. Examples of initial observations made since launch are presented.

  11. Comparing fishers' and scientific estimates of size at maturity and maximum body size as indicators for overfishing.

    Science.gov (United States)

    Mclean, Elizabeth L; Forrester, Graham E

    2018-04-01

    We tested whether fishers' local ecological knowledge (LEK) of two fish life-history parameters, size at maturity (SAM) and maximum body size (MS), was comparable to scientific estimates (SEK) of the same parameters, and whether LEK influenced fishers' perceptions of sustainability. Local ecological knowledge was documented for 82 fishers from a small-scale fishery in Samaná Bay, Dominican Republic, whereas SEK was compiled from the scientific literature. Size at maturity estimates derived from LEK and SEK overlapped for most of the 15 commonly harvested species (10 of 15). In contrast, fishers' maximum size estimates were usually lower than (eight species), or overlapped with (five species) scientific estimates. Fishers' size-based estimates of catch composition indicate greater potential for overfishing than estimates based on SEK. Fishers' estimates of size at capture relative to size at maturity suggest routine inclusion of juveniles in the catch (9 of 15 species), and fishers' estimates suggest that harvested fish are substantially smaller than maximum body size for most species (11 of 15 species). Scientific estimates also suggest that harvested fish are generally smaller than maximum body size (13 of 15), but suggest that the catch is dominated by adults for most species (9 of 15 species), and that juveniles are present in the catch for fewer species (6 of 15). Most Samaná fishers characterized the current state of their fishery as poor (73%) and as having changed for the worse over the past 20 yr (60%). Fishers stated that concern about overfishing, catching small fish, and catching immature fish contributed to these perceptions, indicating a possible influence of catch-size composition on their perceptions. Future work should test this link more explicitly because we found no evidence that the minority of fishers with more positive perceptions of their fishery reported systematically different estimates of catch-size composition than those with the more

  12. Maximum β limited by ideal MHD ballooning instabilites in JT-60

    International Nuclear Information System (INIS)

    Seki, Shogo; Azumi, Masashi

    1986-03-01

    Maximum β limited by ideal MHD ballooning instabilities is investigated for divertor configurations in JT-60. The maximum β against ballooning modes in JT-60 depends strongly on the distribution of the safety factor over the magnetic surfaces: it is ∼ 2% for q_0 = 1.0, but more than 3% for q_0 = 1.5. These results suggest that profile control of the safety factor, especially on the magnetic axis, is attractive for higher-β operation in JT-60. (author)

  13. A maximum principle for time dependent transport in systems with voids

    International Nuclear Information System (INIS)

    Schofield, S.L.; Ackroyd, R.T.

    1996-01-01

    A maximum principle is developed for the first-order time dependent Boltzmann equation. The maximum principle is a generalization of Schofield's κ(θ) principle for the first-order steady state Boltzmann equation, and provides a treatment of time dependent transport in systems with void regions. The formulation comprises a direct least-squares minimization allied with a suitable choice of bilinear functional, and gives rise to a maximum principle whose functional is free of terms that have previously led to difficulties in treating void regions. (Author)

  14. The maximum sizes of large scale structures in alternative theories of gravity

    Energy Technology Data Exchange (ETDEWEB)

    Bhattacharya, Sourav [IUCAA, Pune University Campus, Post Bag 4, Ganeshkhind, Pune, 411 007 India (India); Dialektopoulos, Konstantinos F. [Dipartimento di Fisica, Università di Napoli ' Federico II' , Complesso Universitario di Monte S. Angelo, Edificio G, Via Cinthia, Napoli, I-80126 Italy (Italy); Romano, Antonio Enea [Instituto de Física, Universidad de Antioquia, Calle 70 No. 52–21, Medellín (Colombia); Skordis, Constantinos [Department of Physics, University of Cyprus, 1 Panepistimiou Street, Nicosia, 2109 Cyprus (Cyprus); Tomaras, Theodore N., E-mail: sbhatta@iitrpr.ac.in, E-mail: kdialekt@gmail.com, E-mail: aer@phys.ntu.edu.tw, E-mail: skordis@ucy.ac.cy, E-mail: tomaras@physics.uoc.gr [Institute of Theoretical and Computational Physics and Department of Physics, University of Crete, 70013 Heraklion (Greece)

    2017-07-01

    The maximum size of a cosmic structure is given by the maximum turnaround radius—the scale where the attraction due to its mass is balanced by the repulsion due to dark energy. We derive generic formulae for the estimation of the maximum turnaround radius in any theory of gravity obeying the Einstein equivalence principle, in two situations: on a spherically symmetric spacetime and on a perturbed Friedmann-Robertson-Walker spacetime. We show that the two formulae agree. As an application of our formula, we calculate the maximum turnaround radius in the case of the Brans-Dicke theory of gravity. We find that for this theory, such maximum sizes always lie above the ΛCDM value, by a factor 1 + 1/(3ω), where ω ≫ 1 is the Brans-Dicke parameter, implying consistency of the theory with current data.

  15. Is there a common SUV threshold in oncological FDG PET/CT, at least for some common indications? A retrospective study

    International Nuclear Information System (INIS)

    Nguyen, Nghi C.; Kaushik, Aarti; Wolverson, Michael K.; Osman, Medhat M.

    2011-01-01

    Purpose. We retrospectively compared the maximum standardized uptake value (SUVmax) of FDG PET in four different sites to evaluate whether a common diagnostic SUVmax threshold may exist across these tumor locations. We further postulated that SUVmax thresholds are higher in thoracic lesions than in extrathoracic lesions. Material and methods. N = 143 patients in four subgroups underwent FDG PET/CT: a) 42 patients for characterization of solitary pulmonary nodules (SPNs), with b) the respective mediastinal lymph nodes (LNs); c) 65 patients for LN staging of head and neck cancer; and d) 36 cancer patients diagnosed with adrenal lesions. Receiver operating characteristics of SUVmax values were evaluated. Results. SUVmax was significantly greater in malignant than in benign lesions. For SPNs and mediastinal LNs, a SUVmax > 3.6 resulted in sensitivities of 81% and 87% and specificities of 94% and 89%, respectively. For cervical LNs and adrenal glands, a SUVmax > 2.2 showed sensitivities of 98% and 100% and specificities of 83% and 93%, respectively. Conclusion. A common SUVmax threshold did not exist across the four studied subgroups. The variable FDG uptake in SPNs and mediastinal LNs is associated with the high prevalence of inflammation/infection within the chest. Similar SUVmax thresholds may nevertheless exist for extrathoracic regions, where the prevalence of inflammation/infection is low

  16. Comparison between maximum radial expansion of ultrasound contrast agents and experimental postexcitation signal results.

    Science.gov (United States)

    King, Daniel A; O'Brien, William D

    2011-01-01

    Experimental postexcitation signal data of collapsing Definity microbubbles are compared with the Marmottant theoretical model for large amplitude oscillations of ultrasound contrast agents (UCAs). After taking into account the insonifying pulse characteristics and size distribution of the population of UCAs, a good comparison between simulated results and previously measured experimental data is obtained by determining a threshold maximum radial expansion (Rmax) to indicate the onset of postexcitation. This threshold Rmax is found to range from 3.4 to 8.0 times the initial bubble radius, R0, depending on insonification frequency. These values are well above the typical free bubble inertial cavitation threshold commonly chosen at 2R0. The close agreement between the experiment and models suggests that lipid-shelled UCAs behave as unshelled bubbles during most of a large amplitude cavitation cycle, as proposed in the Marmottant equation.

  17. Maximum nondiffracting propagation distance of aperture-truncated Airy beams

    Science.gov (United States)

    Chu, Xingchun; Zhao, Shanghong; Fang, Yingwu

    2018-05-01

    Airy beams have attracted the attention of many researchers due to their non-diffracting, self-healing and transverse accelerating properties. A key issue in research on Airy beams and their applications is how to evaluate their nondiffracting propagation distance. In this paper, the critical transverse extent of physically realizable Airy beams is analyzed using the local spatial frequency methodology. The maximum nondiffracting propagation distance of aperture-truncated Airy beams is formulated and analyzed on the basis of their local spatial frequency. The validity of the formula is verified by comparing the maximum nondiffracting propagation distance of an aperture-truncated ideal Airy beam, an aperture-truncated exponentially decaying Airy beam and an exponentially decaying Airy beam. Results show that the formula accurately evaluates the maximum nondiffracting propagation distance of an aperture-truncated ideal Airy beam. It can therefore guide the selection of appropriate parameters to generate Airy beams with long nondiffracting propagation distances, which have potential applications in laser weapons and optical communications.

  18. Maximum Entropy and Theory Construction: A Reply to Favretti

    Directory of Open Access Journals (Sweden)

    John Harte

    2018-04-01

    Full Text Available In the maximum entropy theory of ecology (METE, the form of a function describing the distribution of abundances over species and metabolic rates over individuals in an ecosystem is inferred using the maximum entropy inference procedure. Favretti shows that an alternative maximum entropy model exists that assumes the same prior knowledge and makes predictions that differ from METE’s. He shows that both cannot be correct and asserts that his is the correct one because it can be derived from a classic microstate-counting calculation. I clarify here exactly what the core entities and definitions are for METE, and discuss the relevance of two critical issues raised by Favretti: the existence of a counting procedure for microstates and the choices of definition of the core elements of a theory. I emphasize that a theorist controls how the core entities of his or her theory are defined, and that nature is the final arbiter of the validity of a theory.

  19. A novel algorithm for single-axis maximum power generation sun trackers

    International Nuclear Information System (INIS)

    Lee, Kung-Yen; Chung, Chi-Yao; Huang, Bin-Juine; Kuo, Ting-Jung; Yang, Huang-Wei; Cheng, Hung-Yen; Hsu, Po-Chien; Li, Kang

    2017-01-01

    Highlights: • A novel algorithm for a single-axis sun tracker is developed to increase the efficiency. • Photovoltaic module is rotated to find the optimal angle for generating the maximum power. • Electric energy increases up to 8.3%, compared with that of the tracker with three fixed angles. • The rotation range is optimized to reduce energy consumption from the rotation operations. - Abstract: The purpose of this study is to develop a novel algorithm for a single-axis maximum power generation sun tracker in order to identify the optimal stopping angle for generating the maximum amount of daily electric energy. First, the photovoltaic modules of the single-axis maximum power generation sun tracker are automatically rotated from 50° east to 50° west. During the rotation, the instantaneous power generated at different angles is recorded and compared, meaning that the optimal angle for generating the maximum power can be determined. Once the rotation (detection) is completed, the photovoltaic modules are then rotated to the resulting angle for generating the maximum power. The photovoltaic module is rotated once per hour in an attempt to detect the maximum irradiation and overcome the impact of environmental effects such as shading from cloud cover, other photovoltaic modules and surrounding buildings. Furthermore, the detection range is halved so as to reduce the energy consumption from the rotation operations and to improve the reliability of the sun tracker. The results indicate that electric energy production is increased by 3.4% in spring and autumn, 5.4% in summer, and 8.3% in winter, compared with that of the same sun tracker with three fixed angles of 50° east in the morning, 0° at noon and 50° west in the afternoon.
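
    An illustrative sketch of the scan-and-hold logic described above (the angle range follows the abstract; the step size and hardware interfaces are assumptions):

      def find_best_angle(rotate_to, read_power, east=-50, west=50, step=5):
          # Sweep from 50 deg east to 50 deg west, recording instantaneous
          # power at each stop, then park the module at the best angle.
          best_angle, best_power = east, float("-inf")
          angle = east
          while angle <= west:
              rotate_to(angle)
              p = read_power()
              if p > best_power:
                  best_angle, best_power = angle, p
              angle += step
          rotate_to(best_angle)  # hold; repeat the (halved) scan each hour
          return best_angle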

  20. Effects of variability in probable maximum precipitation patterns on flood losses

    Science.gov (United States)

    Zischg, Andreas Paul; Felder, Guido; Weingartner, Rolf; Quinn, Niall; Coxon, Gemma; Neal, Jeffrey; Freer, Jim; Bates, Paul

    2018-05-01

    The assessment of the impacts of extreme floods is important for dealing with residual risk, particularly for critical infrastructure management and for insurance purposes. Thus, modelling of the probable maximum flood (PMF) from probable maximum precipitation (PMP) by coupling hydrological and hydraulic models has gained interest in recent years. Herein, we examine whether variability in precipitation patterns exceeds or is below selected uncertainty factors in flood loss estimation and if the flood losses within a river basin are related to the probable maximum discharge at the basin outlet. We developed a model experiment with an ensemble of probable maximum precipitation scenarios created by Monte Carlo simulations. For each rainfall pattern, we computed the flood losses with a model chain and benchmarked the effects of variability in rainfall distribution with other model uncertainties. The results show that flood losses vary considerably within the river basin and depend on the timing and superimposition of the flood peaks from the basin's sub-catchments. In addition to the flood hazard component, the other components of flood risk, exposure, and vulnerability contribute remarkably to the overall variability. This leads to the conclusion that the estimation of the probable maximum expectable flood losses in a river basin should not be based exclusively on the PMF. Consequently, the basin-specific sensitivities to different precipitation patterns and the spatial organization of the settlements within the river basin need to be considered in the analyses of probable maximum flood losses.

  1. Maximum potential preventive effect of hip protectors

    NARCIS (Netherlands)

    van Schoor, N.M.; Smit, J.H.; Bouter, L.M.; Veenings, B.; Asma, G.B.; Lips, P.T.A.M.

    2007-01-01

    OBJECTIVES: To estimate the maximum potential preventive effect of hip protectors in older persons living in the community or homes for the elderly. DESIGN: Observational cohort study. SETTING: Emergency departments in the Netherlands. PARTICIPANTS: Hip fracture patients aged 70 and older who

  2. 24 CFR 232.565 - Maximum loan amount.

    Science.gov (United States)

    2010-04-01

    ... URBAN DEVELOPMENT MORTGAGE AND LOAN INSURANCE PROGRAMS UNDER NATIONAL HOUSING ACT AND OTHER AUTHORITIES MORTGAGE INSURANCE FOR NURSING HOMES, INTERMEDIATE CARE FACILITIES, BOARD AND CARE HOMES, AND ASSISTED... Fire Safety Equipment Eligible Security Instruments § 232.565 Maximum loan amount. The principal amount...

  3. Achieving maximum sustainable yield in mixed fisheries

    NARCIS (Netherlands)

    Ulrich, Clara; Vermard, Youen; Dolder, Paul J.; Brunel, Thomas; Jardim, Ernesto; Holmes, Steven J.; Kempf, Alexander; Mortensen, Lars O.; Poos, Jan Jaap; Rindorf, Anna

    2017-01-01

    Achieving single species maximum sustainable yield (MSY) in complex and dynamic fisheries targeting multiple species (mixed fisheries) is challenging because achieving the objective for one species may mean missing the objective for another. The North Sea mixed fisheries are a representative example

  4. Maximum Power Point Tracking of Photovoltaic System for Traffic Light Application

    Directory of Open Access Journals (Sweden)

    Riza Muhida

    2013-07-01

    Full Text Available A photovoltaic traffic light system is a significant application of a renewable energy source. The development of the system is an alternative effort by local authorities to reduce expenditure on fees paid to the power supplier, whose power comes from a conventional energy source. Since photovoltaic (PV) modules still have relatively low conversion efficiency, an alternative control based on the maximum power point tracking (MPPT) method is applied to the traffic light system. MPPT is intended to capture the maximum power during the daytime in order to charge the battery at the maximum rate, the battery power being used at night or on cloudy days. The MPPT is implemented as a DC-DC converter that can step the voltage up or down in order to achieve maximum power, using pulse width modulation (PWM) control. From experiments, the operating voltage obtained using MPPT is 16.454 V, an error of 2.6% compared with the maximum power point voltage of the PV module, 16.9 V. Based on this result it can be said that the MPPT control works successfully, delivering near-maximal power from the PV module to the battery.

  5. Stochastic behavior of a cold standby system with maximum repair time

    Directory of Open Access Journals (Sweden)

    Ashish Kumar

    2015-09-01

    Full Text Available The main aim of the present paper is to analyze the stochastic behavior of a cold standby system with the concepts of preventive maintenance, priority and maximum repair time. For this purpose, a stochastic model is developed in which initially one unit is operative and the other is kept as a cold standby. A single server visits the system immediately whenever required. The server takes the unit under preventive maintenance after a maximum operation time in normal mode if one standby unit is available for operation. If repair of the failed unit is not possible within a maximum repair time, the failed unit is replaced by a new one. The failure time, maximum operation time and maximum repair time of the unit are exponentially distributed, while the repair and maintenance times are arbitrarily distributed. All random variables are statistically independent and repairs are perfect. Various measures of system effectiveness are obtained using the technique of semi-Markov processes and RPT. To highlight the importance of the study, numerical results are also obtained for MTSF, availability and the profit function.

  6. Estimation method for first excursion probability of secondary system with impact and friction using maximum response

    International Nuclear Information System (INIS)

    Shigeru Aoki

    2005-01-01

    Secondary systems such as piping, tanks and other mechanical equipment are installed in primary systems such as buildings. Important secondary systems should be designed to maintain their function even when subjected to destructive earthquake excitations. Secondary systems exhibit many nonlinear characteristics; impact and friction, observed in mechanical supports and joints, are common ones, and they are exploited in impact dampers and friction dampers for the reduction of seismic response. In this paper, analytical methods for the first excursion probability of a secondary system with impact and friction subjected to earthquake excitation are proposed. Using these methods, the effects of impact force, gap size and friction force on the first excursion probability are examined. When the tolerance level is normalized by the maximum response of the secondary system without impact or friction characteristics, the variation of the first excursion probability is very small across values of the natural period. In order to examine the effectiveness of the proposed method, the obtained results are compared with those obtained by simulation. Some estimation methods for the maximum response of secondary systems with nonlinear characteristics have also been developed. (author)

  7. Generic maximum likely scale selection

    DEFF Research Database (Denmark)

    Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo

    2007-01-01

    The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus in this work is on applying this selection principle under a Brownian image model. This image model provides a simple scale invariant prior for natural images and we provide illustrative examples of the behavior of our scale estimation on such images. In these illustrative examples, estimation is based...

  8. Maximum principle and convergence of central schemes based on slope limiters

    KAUST Repository

    Mehmetoglu, Orhan; Popov, Bojan

    2012-01-01

    A maximum principle and convergence of second order central schemes is proven for scalar conservation laws in dimension one. It is well known that to establish a maximum principle a nonlinear piecewise linear reconstruction is needed and a typical choice is the minmod limiter. Unfortunately, this implies that the scheme uses a first order reconstruction at local extrema. The novelty here is that we allow local nonlinear reconstructions which do not reduce to first order at local extrema and still prove maximum principle and convergence. © 2011 American Mathematical Society.
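
    For readers unfamiliar with the limiter mentioned, a hedged sketch of the baseline minmod reconstruction (the paper's point is precisely that more general slopes are admissible; this is only the standard choice):

      def minmod(a, b):
          # Smaller-magnitude one-sided slope when signs agree; zero at a
          # local extremum, which is what degrades accuracy to first order.
          if a > 0 and b > 0:
              return min(a, b)
          if a < 0 and b < 0:
              return max(a, b)
          return 0.0

      def limited_slopes(u, dx):
          # Cell averages u -> limited piecewise-linear slopes per cell.
          inner = [minmod((u[i] - u[i - 1]) / dx, (u[i + 1] - u[i]) / dx)
                   for i in range(1, len(u) - 1)]
          return [0.0] + inner + [0.0]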

  9. Information Entropy Production of Maximum Entropy Markov Chains from Spike Trains

    Science.gov (United States)

    Cofré, Rodrigo; Maldonado, Cesar

    2018-01-01

    We consider the maximum entropy Markov chain inference approach to characterize the collective statistics of neuronal spike trains, focusing on the statistical properties of the inferred model. We review large deviations techniques useful in this context to describe properties of accuracy and convergence in terms of sampling size. We use these results to study the statistical fluctuation of correlations, distinguishability and irreversibility of maximum entropy Markov chains. We illustrate these applications using simple examples where the large deviation rate function is explicitly obtained for maximum entropy models of relevance in this field.

  10. Maximum power point tracker for photovoltaic power plants

    Science.gov (United States)

    Arcidiacono, V.; Corsi, S.; Lambri, L.

    The paper describes two different closed-loop control criteria for the maximum power point tracking of the voltage-current characteristic of a photovoltaic generator. The two criteria are discussed and compared, inter alia, with regard to the setting-up problems that they pose. Although a detailed analysis is not embarked upon, the paper also provides some quantitative information on the energy advantages obtained by using electronic maximum power point tracking systems, as compared with the situation in which the point of operation of the photovoltaic generator is not controlled at all. Lastly, the paper presents two high-efficiency MPPT converters for experimental photovoltaic plants of the stand-alone and the grid-interconnected type.

  11. Maximum mass-particle velocities in Kantor's information mechanics

    International Nuclear Information System (INIS)

    Sverdlik, D.I.

    1989-01-01

    Kantor's information mechanics links phenomena previously regarded as not treatable by a single theory. It is used here to calculate the maximum velocities υ_m of single particles. For the electron, υ_m/c ∼ 1 − 1.253814 × 10^-77. The maximum υ_m corresponds to υ_m/c ∼ 1 − 1.097864 × 10^-122 for a single mass particle with a rest mass of 3.078496 × 10^-5 g. This is the fastest that matter can move. Either information mechanics or classical mechanics can be used to show that υ_m is less for heavier particles. That υ_m is less for lighter particles can be deduced from an information mechanics argument alone

  12. A comparison of methods of predicting maximum oxygen uptake.

    OpenAIRE

    Grant, S; Corbett, K; Amjad, A M; Wilson, J; Aitchison, T

    1995-01-01

    The aim of this study was to compare the results from a Cooper walk-run test, a multistage shuttle run test, and a submaximal cycle test with the direct measurement of maximum oxygen uptake on a treadmill. Three predictive tests of maximum oxygen uptake--linear extrapolation of heart rate against VO2 collected during a submaximal cycle ergometer test (predicted L/E), the Cooper 12 min walk-run test, and a multi-stage progressive shuttle run test (MST)--were performed by 22 young healthy males (mean...
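
    A hedged sketch of the linear-extrapolation (L/E) idea: fit the submaximal heart-rate/VO2 line and read it off at an age-predicted maximum heart rate (the 220 - age convention is an assumption for illustration, not taken from the study):

      import numpy as np

      def predict_vo2max(hr, vo2, age):
          # Fit VO2 as a linear function of heart rate over submaximal
          # stages, then extrapolate to the age-predicted maximum heart rate.
          slope, intercept = np.polyfit(hr, vo2, 1)
          hr_max = 220 - age  # common convention; an assumption here
          return slope * hr_max + intercept

      # Hypothetical submaximal stages: heart rate (bpm) vs VO2 (L/min).
      print(predict_vo2max([110, 130, 150], [1.4, 1.9, 2.4], age=25))  # ~3.53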

  13. A Maximum Entropy Method for a Robust Portfolio Problem

    Directory of Open Access Journals (Sweden)

    Yingying Xu

    2014-06-01

    Full Text Available We propose a continuous maximum entropy method to investigate the robust optimal portfolio selection problem for a market with transaction costs and dividends. This robust model aims to maximize the worst-case portfolio return in the case that all of the asset returns lie within some prescribed intervals. A numerical optimal solution to the problem is obtained by using a continuous maximum entropy method. Furthermore, some numerical experiments indicate that the robust model in this paper can result in better portfolio performance than a classical mean-variance model.

  14. The average size of ordered binary subgraphs

    NARCIS (Netherlands)

    van Leeuwen, J.; Hartel, Pieter H.

    To analyse the demands made on the garbage collector in a graph reduction system, the change in size of an average graph is studied when an arbitrary edge is removed. In ordered binary trees the average number of deleted nodes as a result of cutting a single edge is equal to the average size of a

  15. Hydraulic Limits on Maximum Plant Transpiration

    Science.gov (United States)

    Manzoni, S.; Vico, G.; Katul, G. G.; Palmroth, S.; Jackson, R. B.; Porporato, A. M.

    2011-12-01

    Photosynthesis occurs at the expense of water losses through transpiration. As a consequence of this basic carbon-water interaction at the leaf level, plant growth and ecosystem carbon exchanges are tightly coupled to transpiration. In this contribution, the hydraulic constraints that limit transpiration rates under well-watered conditions are examined across plant functional types and climates. The potential water flow through plants is proportional to both xylem hydraulic conductivity (which depends on plant carbon economy) and the difference in water potential between the soil and the atmosphere (the driving force that pulls water from the soil). Differently from previous works, we study how this potential flux changes with the amplitude of the driving force (i.e., we focus on xylem properties and not on stomatal regulation). Xylem hydraulic conductivity decreases as the driving force increases due to cavitation of the tissues. As a result of this negative feedback, more negative leaf (and xylem) water potentials would provide a stronger driving force for water transport, while at the same time limiting xylem hydraulic conductivity due to cavitation. Here, the leaf water potential value that allows an optimum balance between driving force and xylem conductivity is quantified, thus defining the maximum transpiration rate that can be sustained by the soil-to-leaf hydraulic system. To apply the proposed framework at the global scale, a novel database of xylem conductivity and cavitation vulnerability across plant types and biomes is developed. Conductivity and water potential at 50% cavitation are shown to be complementary (in particular between angiosperms and conifers), suggesting a tradeoff between transport efficiency and hydraulic safety. Plants from warmer and drier biomes tend to achieve larger maximum transpiration than plants growing in environments with lower atmospheric water demand. The predicted maximum transpiration and the corresponding leaf water

  16. correlation between maximum dry density and cohesion

    African Journals Online (AJOL)

    HOD

    The correlation involves the maximum dry density, the plastic limit and the liquid limit. Researchers [6, 7] estimate compaction parameters from such relations. Aside from the correlation existing between compaction parameters and other physical quantities, some other correlations have been investigated by other researchers. The well-known.

  17. Maximum Work of Free-Piston Stirling Engine Generators

    Science.gov (United States)

    Kojima, Shinji

    2017-04-01

    Using the method of adjoint equations described in Ref. [1], we have calculated the maximum thermal efficiencies that are theoretically attainable by free-piston Stirling and Carnot engine generators by considering the work loss due to friction and Joule heat. The net work done by the Carnot cycle is negative even when the duration of heat addition is optimized to give the maximum amount of heat addition, which is the same situation for the Brayton cycle described in our previous paper. For the Stirling cycle, the net work done is positive, and the thermal efficiency is greater than that of the Otto cycle described in our previous paper by a factor of about 2.7-1.4 for compression ratios of 5-30. The Stirling cycle is much better than the Otto, Brayton, and Carnot cycles. We have found that the optimized piston trajectories of the isothermal, isobaric, and adiabatic processes are the same when the compression ratio and the maximum volume of the same working fluid of the three processes are the same, which has facilitated the present analysis because the optimized piston trajectories of the Carnot and Stirling cycles are the same as those of the Brayton and Otto cycles, respectively.

  18. Valid analytical performance specifications for combined analytical bias and imprecision for the use of common reference intervals.

    Science.gov (United States)

    Hyltoft Petersen, Per; Lund, Flemming; Fraser, Callum G; Sandberg, Sverre; Sölétormos, György

    2018-01-01

    Background Many clinical decisions are based on comparison of patient results with reference intervals. Therefore, an estimation of the analytical performance specifications for the quality that would be required to allow sharing common reference intervals is needed. The International Federation of Clinical Chemistry (IFCC) recommended a minimum of 120 reference individuals to establish reference intervals. This number implies a certain level of quality, which could then be used for defining analytical performance specifications as the maximum combination of analytical bias and imprecision required for sharing common reference intervals, the aim of this investigation. Methods Two methods were investigated for defining the maximum combination of analytical bias and imprecision that would give the same quality of common reference intervals as the IFCC recommendation. Method 1 is based on a formula for the combination of analytical bias and imprecision and Method 2 is based on the Microsoft Excel formula NORMINV including the fractional probability of reference individuals outside each limit and the Gaussian variables of mean and standard deviation. The combinations of normalized bias and imprecision are illustrated for both methods. The formulae are identical for Gaussian and log-Gaussian distributions. Results Method 2 gives the correct results with a constant percentage of 4.4% for all combinations of bias and imprecision. Conclusion The Microsoft Excel formula NORMINV is useful for the estimation of analytical performance specifications for both Gaussian and log-Gaussian distributions of reference intervals.
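    The core computation behind Method 2 can be reproduced outside Excel; scipy's Gaussian CDF and inverse CDF play the role of NORMINV. A minimal sketch assuming the reference interval is the central 95% of a Gaussian, with bias and imprecision expressed in normalized units (the specific 4.4% level in the paper is tied to the IFCC n = 120 recommendation and is derived there):

      from scipy.stats import norm

      def fraction_outside(bias, imprecision, z=1.96):
          # Fraction of a standard-Gaussian population measured outside the true
          # limits +/- z when the assay adds a normalized bias and analytical SD.
          total_sd = (1.0 + imprecision ** 2) ** 0.5
          return norm.cdf((-z - bias) / total_sd) + norm.sf((z - bias) / total_sd)

      print(fraction_outside(0.0, 0.0))   # 0.05: no analytical error, 5% outside
      print(fraction_outside(0.25, 0.5))  # bias and imprecision push the fraction up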

  19. Counting Triangles to Sum Squares

    Science.gov (United States)

    DeMaio, Joe

    2012-01-01

    Counting complete subgraphs on three vertices in complete graphs yields combinatorial arguments for identities for sums of squares of integers, odd integers, and even integers, and for sums of the triangular numbers.
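    As a taste of the argument (the paper's own identities may be stated differently), one classical identity of this type says the sum of the first n squares equals C(n+1,2) + 2C(n+1,3), i.e. n(n+1)(2n+1)/6; a quick numerical check:

      from math import comb

      for n in range(1, 50):
          assert sum(k * k for k in range(1, n + 1)) == comb(n + 1, 2) + 2 * comb(n + 1, 3)
      print("sum of squares identity holds for n = 1..49")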

  20. 40 CFR 141.51 - Maximum contaminant level goals for inorganic contaminants.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Maximum contaminant level goals for inorganic contaminants. 141.51 Section 141.51 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS Maximum Contaminant Level...

  1. 40 CFR 141.50 - Maximum contaminant level goals for organic contaminants.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Maximum contaminant level goals for organic contaminants. 141.50 Section 141.50 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS Maximum Contaminant Level...

  2. 40 CFR 141.52 - Maximum contaminant level goals for microbiological contaminants.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 22 2010-07-01 2010-07-01 false Maximum contaminant level goals for microbiological contaminants. 141.52 Section 141.52 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) WATER PROGRAMS (CONTINUED) NATIONAL PRIMARY DRINKING WATER REGULATIONS Maximum Contaminant Level...

  3. Solar maximum mission

    International Nuclear Information System (INIS)

    Ryan, J.

    1981-01-01

    By understanding the sun, astrophysicists hope to expand this knowledge to understanding other stars. To study the sun, NASA launched a satellite on February 14, 1980. The project is named the Solar Maximum Mission (SMM). The satellite conducted detailed observations of the sun in collaboration with other satellites and ground-based optical and radio observations until its failure 10 months into the mission. The main objective of the SMM was to investigate one aspect of solar activity: solar flares. A brief description of the flare mechanism is given. The SMM satellite was valuable in providing information on where and how a solar flare occurs. A sequence of photographs of a solar flare taken from SMM satellite shows how a solar flare develops in a particular layer of the solar atmosphere. Two flares especially suitable for detailed observations by a joint effort occurred on April 30 and May 21 of 1980. These flares and observations of the flares are discussed. Also discussed are significant discoveries made by individual experiments

  4. Models of gene gain and gene loss for probabilistic reconstruction of gene content in the last universal common ancestor of life

    OpenAIRE

    Kannan, Lavanya; Li, Hua; Rubinstein, Boris; Mushegian, Arcady

    2013-01-01

    Background The problem of probabilistic inference of gene content in the last common ancestor of several extant species with completely sequenced genomes is: for each gene that is conserved in all or some of the genomes, assign the probability that its ancestral gene was present in the genome of their last common ancestor. Results We have developed a family of models of gene gain and gene loss in evolution, and applied the maximum-likelihood approach that uses phylogenetic tree of prokaryotes...

  5. How long do centenarians survive? Life expectancy and maximum lifespan.

    Science.gov (United States)

    Modig, K; Andersson, T; Vaupel, J; Rau, R; Ahlbom, A

    2017-08-01

    The purpose of this study was to explore the pattern of mortality above the age of 100 years. In particular, we aimed to examine whether Scandinavian data support the theory that mortality reaches a plateau at particularly old ages. Whether the maximum length of life increases with time was also investigated. The analyses were based on individual level data on all Swedish and Danish centenarians born from 1870 to 1901; in total 3006 men and 10 963 women were included. Birth cohort-specific probabilities of dying were calculated. Exact ages were used for calculations of maximum length of life. Whether maximum age changed over time was analysed taking into account increases in cohort size. The results confirm that there has not been any improvement in mortality amongst centenarians in the past 30 years and that the current rise in life expectancy is driven by reductions in mortality below the age of 100 years. The death risks seem to reach a plateau of around 50% at the age 103 years for men and 107 years for women. Despite the rising life expectancy, the maximum age does not appear to increase, in particular after accounting for the increasing number of individuals of advanced age. Mortality amongst centenarians is not changing despite improvements at younger ages. An extension of the maximum lifespan and a sizeable extension of life expectancy both require reductions in mortality above the age of 100 years. © 2017 The Association for the Publication of the Journal of Internal Medicine.

  6. Modelling information flow along the human connectome using maximum flow.

    Science.gov (United States)

    Lyoo, Youngwook; Kim, Jieun E; Yoon, Sujung

    2018-01-01

    The human connectome is a complex network that transmits information between interlinked brain regions. Using graph theory, previously well-known network measures of integration between brain regions have been constructed under the key assumption that information flows strictly along the shortest paths possible between two nodes. However, it is now apparent that information does flow through non-shortest paths in many real-world networks such as cellular networks, social networks, and the internet. In the current hypothesis, we present a novel framework using the maximum flow to quantify information flow along all possible paths within the brain, so as to implement an analogy to network traffic. We hypothesize that the connection strengths of brain networks represent a limit on the amount of information that can flow through the connections per unit of time. This allows us to compute the maximum amount of information flow between two brain regions along all possible paths. Using this novel framework of maximum flow, previous network topological measures are expanded to account for information flow through non-shortest paths. The most important advantage of the current approach using maximum flow is that it can integrate the weighted connectivity data in a way that better reflects the real information flow of the brain network. The current framework and its concept regarding maximum flow provides insight on how network structure shapes information flow in contrast to graph theory, and suggests future applications such as investigating structural and functional connectomes at a neuronal level. Copyright © 2017 Elsevier Ltd. All rights reserved.
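    A toy illustration of the proposed measure (not the authors' implementation): treating connection strengths as capacities, the maximum flow between two regions exceeds what the single shortest path carries, because parallel non-shortest paths contribute.

      import networkx as nx

      # Hypothetical mini-connectome: edge attributes act as information capacities.
      G = nx.DiGraph()
      G.add_edge("A", "B", capacity=3.0)
      G.add_edge("B", "C", capacity=2.0)  # shortest route A->B->C, bottleneck 2.0
      G.add_edge("A", "D", capacity=1.0)
      G.add_edge("D", "C", capacity=1.0)  # longer parallel route adds 1.0 more

      flow_value, flow_dict = nx.maximum_flow(G, "A", "C")
      print(flow_value)  # 3.0: non-shortest paths count, unlike shortest-path measures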

  7. Exploring high-density baryonic matter: Maximum freeze-out density

    Energy Technology Data Exchange (ETDEWEB)

    Randrup, Joergen [Lawrence Berkeley National Laboratory, Nuclear Science Division, Berkeley, CA (United States); Cleymans, Jean [University of Cape Town, UCT-CERN Research Centre and Department of Physics, Rondebosch (South Africa)

    2016-08-15

    The hadronic freeze-out line is calculated in terms of the net baryon density and the energy density instead of the usual T and μ_B. This analysis makes it apparent that the freeze-out density exhibits a maximum as the collision energy is varied. This maximum freeze-out density has μ_B = 400-500 MeV, which is above the critical value, and it is reached for a fixed-target bombarding energy of 20-30 GeV/N, well within the parameters of the proposed NICA collider facility. (orig.)

  8. Maximum organic carbon limits at different melter feed rates (U)

    International Nuclear Information System (INIS)

    Choi, A.S.

    1995-01-01

    This report documents the results of a study to assess the impact of varying melter feed rates on the maximum total organic carbon (TOC) limits allowable in the DWPF melter feed. Topics discussed include: carbon content; feed rate; feed composition; melter vapor space temperature; combustion and dilution air; off-gas surges; earlier work on maximum TOC; overview of models; and the results of the work completed

  9. Food Insecurity and Common Mental Disorders among Ethiopian Youth: Structural Equation Modeling.

    Directory of Open Access Journals (Sweden)

    Mulusew G Jebena

    Full Text Available Although the consequences of food insecurity for the physical health and nutritional status of youth have been reported, its effect on their mental health remains less investigated in developing countries. The aim of this study was to examine the pathways through which food insecurity is associated with poor mental health status among youth living in Ethiopia. We used data from the Jimma Longitudinal Family Survey of Youth (JLFSY) collected in 2009/10. A total of 1,521 youth were included in the analysis. We measured food insecurity using a 5-item scale and common mental disorders using the 20-item Self-Reporting Questionnaire (SRQ-20). Structural and generalized equation modeling using the maximum likelihood estimation method was used to analyze the data. The prevalence of common mental disorders was 30.8% (95% CI: 28.6, 33.2). Food insecurity was independently associated with common mental disorders (β = 0.323, P<0.05). Most (91.8%) of the effect of food insecurity on common mental disorders was direct, and only 8.2% of their relationship was partially mediated by physical health. In addition, poor self-rated health (β = 0.285, P<0.05), high socioeconomic status (β = -0.076, P<0.05), parental education (β = 0.183, P<0.05), living in an urban area (β = 0.139, P<0.05), and female-headed household (β = 0.192, P<0.05) were associated with common mental disorders. Food insecurity is directly associated with common mental disorders among youth in Ethiopia. Interventions that aim to improve the mental health status of youth should consider strategies to improve access to sufficient, safe and nutritious food.

  10. Keyword Query Expansion Paradigm Based on Recommendation and Interpretation in Relational Databases

    Directory of Open Access Journals (Sweden)

    Yingqi Wang

    2017-01-01

    Full Text Available Due to the ambiguity and imprecision of keyword queries in relational databases, research on keyword query expansion has attracted wide attention. Existing query expansion methods expose users' query intentions to a certain extent, but most of them cannot balance precision and recall. To address this problem, a novel two-step query expansion approach is proposed based on query recommendation and query interpretation. First, a probabilistic recommendation algorithm is put forward by constructing a term similarity matrix and a Viterbi model. Second, by using a translation algorithm for triples and a construction algorithm for query subgraphs, query keywords are translated into query subgraphs with structural and semantic information. Finally, experimental results on a real-world dataset demonstrate the effectiveness and rationality of the proposed method.

  11. Maximum speed of dewetting on a fiber

    NARCIS (Netherlands)

    Chan, Tak Shing; Gueudre, Thomas; Snoeijer, Jacobus Hendrikus

    2011-01-01

    A solid object can be coated by a nonwetting liquid since a receding contact line cannot exceed a critical speed. We theoretically investigate this forced wetting transition for axisymmetric menisci on fibers of varying radii. First, we use a matched asymptotic expansion and derive the maximum speed

  12. Maximum entropy estimation via Gauss-LP quadratures

    NARCIS (Netherlands)

    Thély, Maxime; Sutter, Tobias; Mohajerin Esfahani, P.; Lygeros, John; Dochain, Denis; Henrion, Didier; Peaucelle, Dimitri

    2017-01-01

    We present an approximation method to a class of parametric integration problems that naturally appear when solving the dual of the maximum entropy estimation problem. Our method builds up on a recent generalization of Gauss quadratures via an infinite-dimensional linear program, and utilizes a

  13. Maximum Power Tracking by VSAS approach for Wind Turbine, Renewable Energy Sources

    Directory of Open Access Journals (Sweden)

    Nacer Kouider Msirdi

    2015-08-01

    Full Text Available This paper gives a review of the most efficient algorithms designed to track the maximum power point (MPP) for capturing the maximum wind power with a variable speed wind turbine (VSWT). We then design a new maximum power point tracking (MPPT) algorithm using the Variable Structure Automatic Systems (VSAS) approach. The proposed approach leads to efficient algorithms, as shown by the analysis and simulations in this paper.

  14. Estimating the maximum potential revenue for grid connected electricity storage :

    Energy Technology Data Exchange (ETDEWEB)

    Byrne, Raymond Harry; Silva Monroy, Cesar Augusto.

    2012-12-01

    The valuation of an electricity storage device is based on the expected future cash flow generated by the device. Two potential sources of income for an electricity storage system are energy arbitrage and participation in the frequency regulation market. Energy arbitrage refers to purchasing (storing) energy when electricity prices are low, and selling (discharging) energy when electricity prices are high. Frequency regulation is an ancillary service geared towards maintaining system frequency, and is typically procured by the independent system operator in some type of market. This paper outlines the calculations required to estimate the maximum potential revenue from participating in these two activities. First, a mathematical model is presented for the state of charge as a function of the storage device parameters and the quantities of electricity purchased/sold as well as the quantities offered into the regulation market. Using this mathematical model, we present a linear programming optimization approach to calculating the maximum potential revenue from an electricity storage device. The calculation of the maximum potential revenue is critical in developing an upper bound on the value of storage, as a benchmark for evaluating potential trading strategies, and a tool for capital finance risk assessment. Then, we use historical California Independent System Operator (CAISO) data from 2010-2011 to evaluate the maximum potential revenue from the Tehachapi wind energy storage project, an American Recovery and Reinvestment Act of 2009 (ARRA) energy storage demonstration project. We investigate the maximum potential revenue from two different scenarios: arbitrage only and arbitrage combined with the regulation market. Our analysis shows that participation in the regulation market produces four times the revenue compared to arbitrage in the CAISO market using 2010 and 2011 data. Then we evaluate several trading strategies to illustrate how they compare to the
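    The arbitrage-only part of that calculation is a small linear program. A minimal sketch with made-up device parameters (capacity, rate limit, efficiency and prices are all hypothetical, and the regulation market is omitted):

      import numpy as np
      from scipy.optimize import linprog

      def max_arbitrage_revenue(prices, cap=10.0, pmax=2.0, eta=0.9, soc0=5.0):
          # Variables x = [charge_0..charge_{T-1}, discharge_0..discharge_{T-1}].
          # Maximize sum_t p_t*(d_t - c_t), i.e. minimize sum_t p_t*(c_t - d_t).
          T = len(prices)
          p = np.asarray(prices, dtype=float)
          cost = np.concatenate([p, -p])
          # State of charge: soc_t = soc0 + sum_{s<=t} (eta*c_s - d_s), kept in [0, cap].
          L = np.tril(np.ones((T, T)))
          A_soc = np.hstack([eta * L, -L])
          A_ub = np.vstack([A_soc, -A_soc])
          b_ub = np.concatenate([np.full(T, cap - soc0), np.full(T, soc0)])
          res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0.0, pmax)] * (2 * T))
          return -res.fun

      # Buy cheap overnight, sell the evening peak (hypothetical $/MWh prices):
      print(round(max_arbitrage_revenue([20, 18, 15, 22, 35, 60, 80, 55]), 2))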

  15. Maximum likelihood pedigree reconstruction using integer linear programming.

    Science.gov (United States)

    Cussens, James; Bartlett, Mark; Jones, Elinor M; Sheehan, Nuala A

    2013-01-01

    Large population biobanks of unrelated individuals have been highly successful in detecting common genetic variants affecting diseases of public health concern. However, they lack the statistical power to detect more modest gene-gene and gene-environment interaction effects or the effects of rare variants for which related individuals are ideally required. In reality, most large population studies will undoubtedly contain sets of undeclared relatives, or pedigrees. Although a crude measure of relatedness might sometimes suffice, having a good estimate of the true pedigree would be much more informative if this could be obtained efficiently. Relatives are more likely to share longer haplotypes around disease susceptibility loci and are hence biologically more informative for rare variants than unrelated cases and controls. Distant relatives are arguably more useful for detecting variants with small effects because they are less likely to share masking environmental effects. Moreover, the identification of relatives enables appropriate adjustments of statistical analyses that typically assume unrelatedness. We propose to exploit an integer linear programming optimisation approach to pedigree learning, which is adapted to find valid pedigrees by imposing appropriate constraints. Our method is not restricted to small pedigrees and is guaranteed to return a maximum likelihood pedigree. With additional constraints, we can also search for multiple high-probability pedigrees and thus account for the inherent uncertainty in any particular pedigree reconstruction. The true pedigree is found very quickly by comparison with other methods when all individuals are observed. Extensions to more complex problems seem feasible. © 2012 Wiley Periodicals, Inc.

  16. Maximum Acceptable Weight of Lift reflects peak lumbosacral extension moments in a Functional Capacity Evaluation test using free style, stoop, and squat lifting

    OpenAIRE

    Kuijer, P.P.F.M.; van Oostrom, S.H.; Duijzer, K.; van Dieen, J.H.

    2012-01-01

    It is unclear whether the maximum acceptable weight of lift (MAWL), a common psychophysical method, reflects joint kinetics when different lifting techniques are employed. In a within-participants study (n = 12), participants performed three lifting techniques - free style, stoop and squat lifting from knee to waist level - using the same dynamic functional capacity evaluation lifting test to assess MAWL and to calculate low back and knee kinetics. We assessed which knee and back kinetic para...

  17. Rule reversal: Ecogeographical patterns of body size variation in the common treeshrew (Mammalia, Scandentia)

    Science.gov (United States)

    Sargis, Eric J.; Millien, Virginie; Woodman, Neal; Olson, Link E.

    2018-01-01

    There are a number of ecogeographical “rules” that describe patterns of geographical variation among organisms. The island rule predicts that populations of larger mammals on islands evolve smaller mean body size than their mainland counterparts, whereas smaller‐bodied mammals evolve larger size. Bergmann's rule predicts that populations of a species in colder climates (generally at higher latitudes) have larger mean body sizes than conspecifics in warmer climates (at lower latitudes). These two rules are rarely tested together and neither has been rigorously tested in treeshrews, a clade of small‐bodied mammals in their own order (Scandentia) broadly distributed in mainland Southeast Asia and on islands throughout much of the Sunda Shelf. The common treeshrew, Tupaia glis, is an excellent candidate for study and was used to test these two rules simultaneously for the first time in treeshrews. This species is distributed on the Malay Peninsula and several offshore islands east, west, and south of the mainland. Using craniodental dimensions as a proxy for body size, we investigated how island size, distance from the mainland, and maximum sea depth between the mainland and the islands relate to body size of 13 insular T. glis populations while also controlling for latitude and correlation among variables. We found a strong negative effect of latitude on body size in the common treeshrew, indicating the inverse of Bergmann's rule. We did not detect any overall difference in body size between the island and mainland populations. However, there was an effect of island area and maximum sea depth on body size among island populations. Although there is a strong latitudinal effect on body size, neither Bergmann's rule nor the island rule applies to the common treeshrew. The results of our analyses demonstrate the necessity of assessing multiple variables simultaneously in studies of ecogeographical rules.

  18. 12 CFR 221.7 - Supplement: Maximum loan value of margin stock and other collateral.

    Science.gov (United States)

    2010-01-01

    ... value of margin stock and other collateral. (a) Maximum loan value of margin stock. The maximum loan... nonmargin stock and all other collateral. The maximum loan value of nonmargin stock and all other collateral... 12 Banks and Banking 3 2010-01-01 2010-01-01 false Supplement: Maximum loan value of margin stock...

  19. MODEL PREDICTIVE CONTROL FOR PHOTOVOLTAIC STATION MAXIMUM POWER POINT TRACKING SYSTEM

    Directory of Open Access Journals (Sweden)

    I. Elzein

    2015-01-01

    Full Text Available The purpose of this paper is to present an alternative maximum power point tracking (MPPT) algorithm for a photovoltaic module (PVM) to produce the maximum power (Pmax) using the optimal duty ratio (D) for different types of converters and load matching. We present a state-based approach to the design of the maximum power point tracker for a stand-alone photovoltaic power generation system. The system under consideration consists of a solar array with nonlinear time-varying characteristics and a step-up converter with an appropriate filter. The proposed algorithm has the advantages of maximizing the efficiency of power utilization, can be integrated with other MPPT algorithms without affecting PVM performance, is excellent for real-time applications, and is a robust analytical method, different from the traditional MPPT algorithms which are based more on trial and error, or on comparisons between present and past states. The procedure to calculate the optimal duty ratio for buck, boost and buck-boost converters, to transfer the maximum power from a PVM to a load, is presented in the paper. Additionally, the existence and uniqueness of the optimal internal impedance, to transfer the maximum power from a photovoltaic module using load matching, is proved.

  20. On the quirks of maximum parsimony and likelihood on phylogenetic networks.

    Science.gov (United States)

    Bryant, Christopher; Fischer, Mareike; Linz, Simone; Semple, Charles

    2017-03-21

    Maximum parsimony is one of the most frequently-discussed tree reconstruction methods in phylogenetic estimation. However, in recent years it has become more and more apparent that phylogenetic trees are often not sufficient to describe evolution accurately. For instance, processes like hybridization or lateral gene transfer that are commonplace in many groups of organisms and result in mosaic patterns of relationships cannot be represented by a single phylogenetic tree. This is why phylogenetic networks, which can display such events, are becoming of more and more interest in phylogenetic research. It is therefore necessary to extend concepts like maximum parsimony from phylogenetic trees to networks. Several suggestions for possible extensions can be found in recent literature, for instance the softwired and the hardwired parsimony concepts. In this paper, we analyze the so-called big parsimony problem under these two concepts, i.e. we investigate maximum parsimonious networks and analyze their properties. In particular, we show that finding a softwired maximum parsimony network is possible in polynomial time. We also show that the set of maximum parsimony networks for the hardwired definition always contains at least one phylogenetic tree. Lastly, we investigate some parallels of parsimony to different likelihood concepts on phylogenetic networks. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Maximum Likelihood and Restricted Likelihood Solutions in Multiple-Method Studies.

    Science.gov (United States)

    Rukhin, Andrew L

    2011-01-01

    A formulation of the problem of combining data from several sources is discussed in terms of random effects models. The unknown measurement precision is assumed not to be the same for all methods. We investigate maximum likelihood solutions in this model. By representing the likelihood equations as simultaneous polynomial equations, the exact form of the Groebner basis for their stationary points is derived when there are two methods. A parametrization of these solutions which allows their comparison is suggested. A numerical method for solving the likelihood equations is outlined, and an alternative to the maximum likelihood method, the restricted maximum likelihood, is studied. In the situation when the methods' variances are considered known, an upper bound on the between-method variance is obtained. The relationship between the likelihood equations and moment-type equations is also discussed.

  2. A Bayes-Maximum Entropy method for multi-sensor data fusion

    Energy Technology Data Exchange (ETDEWEB)

    Beckerman, M.

    1991-01-01

    In this paper we introduce a Bayes-Maximum Entropy formalism for multi-sensor data fusion, and present an application of this methodology to the fusion of ultrasound and visual sensor data as acquired by a mobile robot. In our approach the principle of maximum entropy is applied to the construction of priors and likelihoods from the data. Distances between ultrasound and visual points of interest in a dual representation are used to define Gibbs likelihood distributions. Both one- and two-dimensional likelihoods are presented, and cast into a form which makes explicit their dependence upon the mean. The Bayesian posterior distributions are used to test a null hypothesis, and Maximum Entropy Maps used for navigation are updated using the resulting information from the dual representation. 14 refs., 9 figs.

  3. Penalized maximum-likelihood sinogram restoration for dual focal spot computed tomography

    International Nuclear Information System (INIS)

    Forthmann, P; Koehler, T; Begemann, P G C; Defrise, M

    2007-01-01

    Due to various system non-idealities, the raw data generated by a computed tomography (CT) machine are not readily usable for reconstruction. Although the deterministic nature of corruption effects such as crosstalk and afterglow permits correction by deconvolution, there is a drawback because deconvolution usually amplifies noise. Methods that perform raw data correction combined with noise suppression are commonly termed sinogram restoration methods. The need for sinogram restoration arises, for example, when photon counts are low and non-statistical reconstruction algorithms such as filtered backprojection are used. Many modern CT machines offer a dual focal spot (DFS) mode, which serves the goal of increased radial sampling by alternating the focal spot between two positions on the anode plate during the scan. Although the focal spot mode does not play a role with respect to how the data are affected by the above-mentioned corruption effects, it needs to be taken into account if regularized sinogram restoration is to be applied to the data. This work points out the subtle difference in processing that sinogram restoration for DFS requires, how it is correctly employed within the penalized maximum-likelihood sinogram restoration algorithm and what impact it has on image quality

  4. What if the Diatoms of the Deep Chlorophyll Maximum Can Ascend?

    Science.gov (United States)

    Villareal, T. A.

    2016-02-01

    Buoyancy regulation is an integral part of diatom ecology via its role in sinking rates and is fundamental to understanding their distribution and abundance. Numerous studies have documented the effects of size and nutrition on sinking rates. Many pelagic diatoms have low intrinsic sinking rates when healthy and nutrient-replete, as in the deep chlorophyll maximum. The potential for ascending behavior adds an additional layer of complexity by allowing both active depth regulation similar to that observed in flagellated taxa and upward transport by some fraction of deep euphotic zone diatom blooms supported by nutrient injection. In this talk, I review the data documenting positive buoyancy in small diatoms, offer direct visual evidence of ascending behavior in common diatoms typical of both oceanic and coastal zones, and note the characteristics of sinking rate distributions within a single species. Buoyancy control leads to bidirectional movement at similar rates across a wide size spectrum of diatoms, although the frequency of ascending behavior may be only a small portion of the individual species' abundance. While much remains to be learned, the paradigm of unidirectional downward movement by diatoms is both inaccurate and an oversimplification.

  5. Functional Maximum Autocorrelation Factors

    DEFF Research Database (Denmark)

    Larsen, Rasmus; Nielsen, Allan Aasbjerg

    2005-01-01

    Purpose. We aim at data where samples of an underlying function are observed in a spatial or temporal layout. Examples of underlying functions are reflectance spectra and biological shapes. We apply functional models based on smoothing splines and generalize the functional PCA of [ramsay97] to functional maximum autocorrelation factors (MAF) [switzer85, larsen2001d]. We apply the method to biological shapes as well as reflectance spectra. Methods. MAF seeks linear combinations of the original variables that maximize autocorrelation between... Conclusions. Functional MAF analysis is a useful method for extracting low-dimensional models of temporally or spatially... MAF outperforms the functional PCA in concentrating the 'interesting' spectra/shape variation in one end of the eigenvalue spectrum and allows for easier interpretation of effects.

  6. Geometric constraint subsets and subgraphs in the analysis of assemblies and mechanisms

    Directory of Open Access Journals (Sweden)

    Oscar E Ruiz

    2006-06-01

    Full Text Available Geometric Reasoning ability is central to many applications in CAD/CAM/CAPP environments. An increasing demand exists for Geometric Reasoning systems which evaluate the feasibility of virtual scenes specified by geometric relations. Thus, the Geometric Constraint Satisfaction or Scene Feasibility (GCS/SF) problem consists of a basic scenario containing geometric entities, whose context is used to propose constraining relations among still undefined entities. If the constraint specification is consistent, the answer of the problem is one of finitely or infinitely many solution scenarios satisfying the prescribed constraints. Otherwise, a diagnostic of inconsistency is expected. The three main approaches used for this problem are numerical, procedural or operational, and mathematical. Numerical and procedural approaches answer only part of the problem, and are not complete in the sense that a failure to provide an answer does not preclude the existence of one. The mathematical approach previously presented by the authors describes the problem using a set of polynomial equations. The common roots of this set of polynomials characterize the solution space for such a problem. That work presents the use of Groebner basis techniques for verifying the consistency of the constraints. It also integrates subgroups of the Special Euclidean Group of Displacements SE(3) in the problem formulation to exploit the structure implied by geometric relations. Although theoretically sound, these techniques require large amounts of computing resources. This work proposes Divide-and-Conquer techniques applied to local GCS/SF subproblems to identify strongly constrained clusters of geometric entities. The identification and preprocessing of these clusters generally reduces the effort required in solving the overall problem. Cluster identification can be related to identifying short cycles in the Spatial Constraint graph for the GCS/SF problem. Their preprocessing

  7. Microbiological Quality of Panicum maximum Grass Silage with Addition of Lactobacillus sp. as Starter

    Science.gov (United States)

    Sumarsih, S.; Sulistiyanto, B.; Utama, C. S.

    2018-02-01

    The aim of the research was to evaluate the microbiological quality of Panicum maximum grass silage with the addition of Lactobacillus sp. as a starter. A completely randomized design with 4 treatments and 3 replications was used in this research. The treatments were P0 (Panicum maximum grass silage without added Lactobacillus sp.), P1 (silage with 2% Lactobacillus sp.), P2 (silage with 4% Lactobacillus sp.) and P3 (silage with 6% Lactobacillus sp.). The parameters were the microbial populations of the silage (total lactic acid bacteria, total bacteria, total fungi, and Coliform bacteria). The data obtained were analyzed by analysis of variance (ANOVA) followed by Duncan's multiple range test. The population of lactic acid bacteria was higher (P<0.05) with the addition of Lactobacillus sp., and the microbiological quality of Panicum maximum grass silage with added Lactobacillus sp. was better than without it.

  8. Estimating Rhododendron maximum L. (Ericaceae) Canopy Cover Using GPS/GIS Technology

    Science.gov (United States)

    Tyler J. Tran; Katherine J. Elliott

    2012-01-01

    In the southern Appalachians, Rhododendron maximum L. (Ericaceae) is a key evergreen understory species, often forming a subcanopy in forest stands. Little is known about the significance of R. maximum cover in relation to other forest structural variables. Only recently have studies used Global Positioning System (GPS) technology...

  9. CLINICAL PROFILE AND COMMON CAUSES OF HAEMOLYTIC ANAEMIA IN A TERTIARY CARE HOSPITAL, NORTHERN KERALA

    Directory of Open Access Journals (Sweden)

    Jog Antony

    2016-09-01

    Full Text Available BACKGROUND Haemolytic anaemia is a well-recognised clinical problem. This study looks into the clinical profile of haemolytic anaemia and also attempts to find out the common underlying causative diseases. It also tries to group the patients according to the clinical manifestations and underlying causes. MATERIALS AND METHODS This is a hospital-based observational study conducted in a tertiary care centre in Northern Kerala. Forty-four adult patients with clinical manifestations and laboratory evidence of haemolytic anaemia were identified and studied for a period of one year. RESULTS The largest number of cases was seen in the age group of 20-40 years. The overall male-female ratio was 1.1:1. The most common presenting symptoms were features of anaemia like breathlessness, easy fatigability, headache and tiredness. Family history of anaemia was present in 34.1%. The most common signs observed were pallor and jaundice. The most common causes were autoimmune haemolytic anaemia and sickle cell anaemia. CONCLUSION Haemolytic anaemia mostly affects individuals in their 3rd and 4th decade. There is no significant difference in the gender distribution of haemolytic anaemia. Haemolytic anaemia most commonly presents with symptoms of anaemia and jaundice. The commonest causes of haemolytic anaemia are autoimmune haemolytic anaemia and sickle cell anaemia.

  10. Relating Topological Determinants of Complex Networks to Their Spectral Properties: Structural and Dynamical Effects

    Science.gov (United States)

    Castellano, Claudio; Pastor-Satorras, Romualdo

    2017-10-01

    The largest eigenvalue of a network's adjacency matrix and its associated principal eigenvector are key elements for determining the topological structure and the properties of dynamical processes mediated by it. We present a physically grounded expression relating the value of the largest eigenvalue of a given network to the largest eigenvalue of two network subgraphs, considered as isolated: the hub with its immediate neighbors and the densely connected set of nodes with maximum K -core index. We validate this formula by showing that it predicts, with good accuracy, the largest eigenvalue of a large set of synthetic and real-world topologies. We also present evidence of the consequences of these findings for broad classes of dynamics taking place on the networks. As a by-product, we reveal that the spectral properties of heterogeneous networks built according to the linear preferential attachment model are qualitatively different from those of their static counterparts.
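    A rough numerical illustration of the claim (the paper gives the precise expression): on a heterogeneous synthetic network, the largest adjacency eigenvalue is close to the larger of the eigenvalues of the two subgraphs named above.

      import networkx as nx
      import numpy as np

      def lam_max(G):
          # Largest adjacency eigenvalue of an undirected graph.
          return float(np.linalg.eigvalsh(nx.to_numpy_array(G))[-1])

      G = nx.barabasi_albert_graph(500, 3, seed=1)

      hub = max(G.degree, key=lambda nd: nd[1])[0]
      star = G.subgraph([hub] + list(G[hub]))  # the hub with its immediate neighbors
      core = nx.k_core(G)                      # nodes with maximum K-core index

      print(lam_max(G), max(lam_max(star), lam_max(core)))  # typically close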

  11. 5 CFR 581.402 - Maximum garnishment limitations.

    Science.gov (United States)

    2010-01-01

    ... PROCESSING GARNISHMENT ORDERS FOR CHILD SUPPORT AND/OR ALIMONY Consumer Credit Protection Act Restrictions..., pursuant to section 1673(b)(2) (A) and (B) of title 15 of the United States Code (the Consumer Credit... local law, the maximum part of the aggregate disposable earnings subject to garnishment to enforce any...

  12. Finite mixture model: A maximum likelihood estimation approach on time series data

    Science.gov (United States)

    Yen, Phoong Seuk; Ismail, Mohd Tahir; Hamzah, Firdaus Mohamad

    2014-09-01

    Recently, statisticians have emphasized fitting finite mixture models by maximum likelihood estimation, as it provides desirable asymptotic properties: the estimator is consistent as the sample size increases to infinity, and it is asymptotically unbiased. Moreover, as the sample size increases, the parameter estimates obtained from maximum likelihood estimation have the smallest variance compared with other statistical methods. Thus, maximum likelihood estimation is adopted in this paper to fit a two-component mixture model in order to explore the relationship between rubber price and exchange rate for Malaysia, Thailand, the Philippines and Indonesia. The results show that there is a negative relationship between rubber price and exchange rate for all selected countries.
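    As a self-contained illustration of fitting a two-component mixture by maximum likelihood (the EM algorithm below maximizes the mixture likelihood; the data are synthetic stand-ins for the price and exchange-rate series):

      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(0)
      # Two regimes with different means and spreads, mixed 40/60:
      x = np.concatenate([rng.normal(-1.0, 0.5, 400),
                          rng.normal(2.0, 1.0, 600)]).reshape(-1, 1)

      gm = GaussianMixture(n_components=2, random_state=0).fit(x)
      print(gm.weights_.round(2), gm.means_.ravel().round(2))  # near 0.4/0.6 and -1/2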

  13. Decision Aggregation in Distributed Classification by a Transductive Extension of Maximum Entropy/Improved Iterative Scaling

    Directory of Open Access Journals (Sweden)

    George Kesidis

    2008-06-01

    Full Text Available In many ensemble classification paradigms, the function which combines local/base classifier decisions is learned in a supervised fashion. Such methods require common labeled training examples across the classifier ensemble. However, in some scenarios, where an ensemble solution is necessitated, common labeled data may not exist: (i) legacy/proprietary classifiers, and (ii) spatially distributed and/or multiple-modality sensors. In such cases, it is standard to apply fixed (untrained) decision aggregation such as voting, averaging, or naive Bayes rules. In recent work, an alternative transductive learning strategy was proposed. There, decisions on test samples were chosen aiming to satisfy constraints measured by each local classifier. This approach was shown to reliably correct for class prior mismatch and to robustly account for classifier dependencies. Significant gains in accuracy over fixed aggregation rules were demonstrated. There are two main limitations of that work. First, feasibility of the constraints was not guaranteed. Second, heuristic learning was applied. Here, we overcome these problems via a transductive extension of maximum entropy/improved iterative scaling for aggregation in distributed classification. This method is shown to achieve improved decision accuracy over the earlier transductive approach and fixed rules on a number of UC Irvine datasets.

  14. Mid-depth temperature maximum in an estuarine lake

    Science.gov (United States)

    Stepanenko, V. M.; Repina, I. A.; Artamonov, A. Yu; Gorin, S. L.; Lykossov, V. N.; Kulyamin, D. V.

    2018-03-01

    The mid-depth temperature maximum (TeM) was measured in the estuarine Bol'shoi Vilyui Lake (Kamchatka peninsula, Russia) in summer 2015. We applied the 1D k-ε model LAKE to the case and found that it successfully simulates the phenomenon. We argue that the main prerequisite for mid-depth TeM development is a salinity increase below the freshwater mixed layer, sharp enough that the increase of temperature with depth does not cause convective mixing and double diffusion there. Given that this condition is satisfied, the TeM magnitude is controlled by physical factors which we identify as: radiation absorption below the mixed layer, mixed-layer temperature dynamics, vertical heat conduction and water-sediment heat exchange. In addition to these, we formulate the mechanism of temperature maximum 'pumping', resulting from the phase shift between the diurnal cycles of mixed-layer depth and temperature maximum magnitude. Based on the LAKE model results we quantify the contribution of the above-listed mechanisms and find their individual significance highly sensitive to water turbidity. Relying on the physical mechanisms identified, we define the environmental conditions favouring summertime TeM development in salinity-stratified lakes as a shallow mixed layer, weak wind and cloudless weather. We exemplify the effect of mixed-layer depth on TeM with a set of selected lakes.

  15. Computing the Maximum Detour of a Plane Graph in Subquadratic Time

    DEFF Research Database (Denmark)

    Wulff-Nilsen, Christian

    Let G be a plane graph where each edge is a line segment. We consider the problem of computing the maximum detour of G, defined as the maximum over all pairs of distinct points p and q of G of the ratio between the distance between p and q in G and the distance |pq|. The fastest previously known algorithm for this problem has O(n^2) running time. We show how to obtain O(n^{3/2}*(log n)^3) expected running time. We also show that if G has bounded treewidth, its maximum detour can be computed in O(n*(log n)^3) expected time.

  16. Modeling of Maximum Power Point Tracking Controller for Solar Power System

    Directory of Open Access Journals (Sweden)

    Aryuanto Soetedjo

    2012-09-01

    Full Text Available In this paper, a Maximum Power Point Tracking (MPPT) controller for a solar power system is modeled using MATLAB Simulink. The model consists of a PV module, a buck converter, and an MPPT controller. The contribution of the work lies in the modeling of the buck converter, which allows the input voltage of the converter, i.e. the output voltage of the PV, to be changed by varying the duty cycle, so that the maximum power point can be tracked when the environment changes. The simulation results show that the developed model performs well in tracking the maximum power point (MPP) of the PV module using the Perturb and Observe (P&O) algorithm.
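    The P&O algorithm used in the model amounts to a one-variable hill climb on the duty cycle: perturb, observe the power, and keep the perturbation direction only while power increases. A minimal Python sketch of the idea (the PV curve below is a hypothetical stand-in, not the Simulink model):

      def perturb_and_observe(read_power, duty, step=0.01, dmin=0.1, dmax=0.9):
          # Yields successive (duty, power) pairs; reverses direction after a bad step.
          p_prev = read_power(duty)
          direction = 1.0
          while True:
              duty = min(max(duty + direction * step, dmin), dmax)
              p = read_power(duty)
              if p < p_prev:
                  direction = -direction  # last perturbation reduced power
              p_prev = p
              yield duty, p

      curve = lambda d: 100.0 - 400.0 * (d - 0.55) ** 2  # toy single-peak PV curve
      tracker = perturb_and_observe(curve, duty=0.3)
      for _ in range(60):
          duty, power = next(tracker)
      print(round(duty, 2), round(power, 1))  # oscillates near the MPP at 0.55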

  17. Accurate Maximum Power Tracking in Photovoltaic Systems Affected by Partial Shading

    Directory of Open Access Journals (Sweden)

    Pierluigi Guerriero

    2015-01-01

    Full Text Available A maximum power tracking algorithm exploiting operating point information gained from individual solar panels is presented. The proposed algorithm recognizes the presence of multiple local maxima in the power-voltage curve of a shaded solar field and evaluates the coordinates of the absolute maximum. The effectiveness of the proposed approach is evidenced by means of circuit-level simulation and experimental results. Experiments showed that, in comparison with a standard perturb and observe algorithm, the method achieves faster convergence in normal operating conditions (when the solar field is uniformly illuminated) and accurately locates the absolute maximum power point in partial shading conditions, thus avoiding convergence on local maxima.

  18. Discontinuity of maximum entropy inference and quantum phase transitions

    International Nuclear Information System (INIS)

    Chen, Jianxin; Ji, Zhengfeng; Yu, Nengkun; Zeng, Bei; Li, Chi-Kwong; Poon, Yiu-Tung; Shen, Yi; Zhou, Duanlu

    2015-01-01

    In this paper, we discuss the connection between two genuinely quantum phenomena—the discontinuity of quantum maximum entropy inference and quantum phase transitions at zero temperature. It is shown that the discontinuity of the maximum entropy inference of local observable measurements signals the non-local type of transitions, where local density matrices of the ground state change smoothly at the transition point. We then propose to use the quantum conditional mutual information of the ground state as an indicator to detect the discontinuity and the non-local type of quantum phase transitions in the thermodynamic limit. (paper)

  19. 38 CFR 36.4204 - Loan purposes, maximum loan amounts and terms.

    Science.gov (United States)

    2010-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 2 2010-07-01 2010-07-01 false Loan purposes, maximum loan amounts and terms. 36.4204 Section 36.4204 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF... constructing a suitable pad for the manufactured home. (e) The maximum permissible loan terms shall not exceed...

  20. Design of Asymmetrical Relay Resonators for Maximum Efficiency of Wireless Power Transfer

    Directory of Open Access Journals (Sweden)

    Bo-Hee Choi

    2016-01-01

    Full Text Available This paper presents a new design method for asymmetrical relay resonators for maximum wireless power transfer. A new design method for relay resonators is needed because the maximum power transfer efficiency (PTE) is not obtained at the resonant frequency of the unit resonator; the maximum PTE for relay resonators is obtained at resonances different from that of the unit resonator. The optimum design of the asymmetrical relay is conducted through both the optimum placement and the optimum capacitance of the resonators. The optimum placement is found by scanning the positions of the relays, and the optimum capacitance is found using a genetic algorithm (GA). The PTEs are enhanced when the capacitance is optimally designed by the GA according to the positions of the relays, and maximum efficiency is obtained at the optimum placement of the relays. The capacitances of the second to nth resonators and the load resistance are determined for maximum efficiency, while the capacitance of the first resonator and the source resistance are obtained for impedance matching. The simulated and measured results are in good agreement.

  1. A practical exact maximum compatibility algorithm for reconstruction of recent evolutionary history.

    Science.gov (United States)

    Cherry, Joshua L

    2017-02-23

    Maximum compatibility is a method of phylogenetic reconstruction that is seldom applied to molecular sequences. It may be ideal for certain applications, such as reconstructing phylogenies of closely-related bacteria on the basis of whole-genome sequencing. Here I present an algorithm that rapidly computes phylogenies according to a compatibility criterion. Although based on solutions to the maximum clique problem, this algorithm deals properly with ambiguities in the data. The algorithm is applied to bacterial data sets containing up to nearly 2000 genomes with several thousand variable nucleotide sites. Run times are several seconds or less. Computational experiments show that maximum compatibility is less sensitive than maximum parsimony to the inclusion of nucleotide data that, though derived from actual sequence reads, has been identified as likely to be misleading. Maximum compatibility is a useful tool for certain phylogenetic problems, such as inferring the relationships among closely-related bacteria from whole-genome sequence data. The algorithm presented here rapidly solves fairly large problems of this type, and provides robustness against misleading characters that can pollute large-scale sequencing data.
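    The clique connection can be made concrete. For binary characters, the four-gamete test says two sites are compatible iff at most three of the four state pairs occur among the taxa; a maximum set of mutually compatible characters is then a maximum clique in the compatibility graph. A toy sketch of that reduction (omitting the ambiguity handling the paper's algorithm adds):

      from itertools import combinations
      import networkx as nx

      def compatible(a, b):
          # Four-gamete test: at most three of the four state pairs may occur.
          return len(set(zip(a, b))) <= 3

      # Rows of the alignment are taxa; each tuple is one variable site (hypothetical).
      sites = [(0, 0, 1, 1), (0, 1, 0, 1), (0, 0, 0, 1), (1, 1, 0, 0)]

      G = nx.Graph()
      G.add_nodes_from(range(len(sites)))
      for i, j in combinations(range(len(sites)), 2):
          if compatible(sites[i], sites[j]):
              G.add_edge(i, j)

      # Largest clique = maximum set of mutually compatible characters.
      print(max(nx.find_cliques(G), key=len))  # [0, 2, 3] for this toy alignment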

  2. [Identification of common medicinal snakes in medicated liquor of Guangdong by COI barcode sequence].

    Science.gov (United States)

    Liao, Jing; Chao, Zhi; Zhang, Liang

    2013-11-01

    To identify the common snakes in medicated liquor of Guangdong using the COI barcode sequence, and to test the feasibility of the method. The COI barcode sequences of the collected medicinal snakes were amplified and sequenced. The sequences, combined with data from GenBank, were analyzed for divergence and used to build a neighbor-joining (NJ) tree with MEGA 5.0. The genetic distances and the NJ tree demonstrated that there were 241 variable sites in these species, and the average (A + T) content of 56.2% was higher than the average (G + C) content of 43.7%. The maximum interspecific genetic distance was 0.2568, and the minimum was 0.1519. In the NJ tree, each species formed a monophyletic clade with bootstrap support of 100%. The DNA barcoding identification method based on the COI sequence is accurate and can be applied to identify the common medicinal snakes.

  3. ON THE MAXIMUM MASS OF STELLAR BLACK HOLES

    International Nuclear Information System (INIS)

    Belczynski, Krzysztof; Fryer, Chris L.; Bulik, Tomasz; Ruiter, Ashley; Valsecchi, Francesca; Vink, Jorick S.; Hurley, Jarrod R.

    2010-01-01

    We present the spectrum of compact object masses: neutron stars and black holes (BHs) that originate from single stars in different environments. In particular, we calculate the dependence of the maximum BH mass on metallicity and on some specific wind mass loss rates (e.g., Hurley et al. and Vink et al.). Our calculations show that the highest-mass BHs observed in the Galaxy, M_bh ~ 15 M_sun in the high-metallicity environment (Z = Z_sun = 0.02), can be explained with the stellar models and wind mass loss rates adopted here. To reach this result we had to set luminous blue variable mass loss rates at the level of ~10^-4 M_sun/yr and to employ metallicity-dependent Wolf-Rayet winds. With such winds, calibrated on Galactic BH mass measurements, the maximum BH mass obtained for moderate metallicity (Z = 0.3 Z_sun = 0.006) is M_bh,max = 30 M_sun. This is a rather striking finding, as the mass of the most massive known stellar BH is M_bh = 23-34 M_sun and, in fact, it is located in a small star-forming galaxy with moderate metallicity. We find that in a very low (globular cluster-like) metallicity environment the maximum BH mass can be as high as M_bh,max = 80 M_sun (Z = 0.01 Z_sun = 0.0002). It is interesting to note that the X-ray luminosity from Eddington-limited accretion onto an 80 M_sun BH is of the order of ~10^40 erg/s and is comparable to the luminosities of some known ultra-luminous X-ray sources. We emphasize that our results were obtained for single stars only and that binary interactions may alter these maximum BH masses (e.g., accretion from a close companion). This is strictly a proof-of-principle study which demonstrates that stellar models can naturally explain even the most massive known stellar BHs.

  4. GraMi: Generalized Frequent Pattern Mining in a Single Large Graph

    KAUST Repository

    Saeedy, Mohammed El

    2011-11-01

    Mining frequent subgraphs is an important operation on graphs. Most existing work assumes a database of many small graphs, but modern applications, such as social networks, citation graphs or protein-protein interaction networks in bioinformatics, are modeled as a single large graph. Interesting interactions in such applications may be transitive (e.g., friend of a friend). Existing methods, however, search for frequent isomorphic (i.e., exact match) subgraphs and cannot discover many useful patterns. In this paper the authors propose GRAMI, a framework that generalizes frequent subgraph mining in a single large graph. GRAMI discovers frequent patterns. A pattern is a graph where edges are generalized to distance-constrained paths. Depending on the definition of the distance function, many instantiations of the framework are possible. Both directed and undirected graphs, as well as multiple labels per vertex, are supported. The authors developed an efficient implementation of the framework that models the frequency resolution phase as a constraint satisfaction problem, in order to avoid the costly enumeration of all instances of each pattern in the graph. The authors also implemented CGRAMI, a version that supports structural and semantic constraints, and AGRAMI, an approximate version that supports very large graphs. The experiments on real data demonstrate that the authors' framework is up to 3 orders of magnitude faster and discovers more interesting patterns than existing approaches.
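
    A minimal sketch of the frequency measure involved may help. Single-graph miners in the GraMi family use minimum-image-based (MNI) support; the version below naively enumerates embeddings with networkx, which is exactly the costly step that GraMi's constraint-satisfaction formulation avoids.

```python
import networkx as nx
from networkx.algorithms import isomorphism

def mni_support(g, pattern):
    """MNI support: for each pattern node, count the distinct graph nodes
    it maps to over all embeddings, then take the minimum (anti-monotone)."""
    images = {v: set() for v in pattern.nodes}
    matcher = isomorphism.GraphMatcher(
        g, pattern,
        node_match=isomorphism.categorical_node_match("label", None))
    for mapping in matcher.subgraph_monomorphisms_iter():
        for g_node, p_node in mapping.items():
            images[p_node].add(g_node)
    return min(len(img) for img in images.values())

g = nx.path_graph(4)
nx.set_node_attributes(g, "A", "label")
pattern = nx.path_graph(2)                 # a single labeled edge
nx.set_node_attributes(pattern, "A", "label")
print(mni_support(g, pattern))             # 4: every node supports some embedding
```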

  5. A Hybrid Maximum Power Point Tracking Method for Automobile Exhaust Thermoelectric Generator

    Science.gov (United States)

    Quan, Rui; Zhou, Wei; Yang, Guangyou; Quan, Shuhai

    2017-05-01

    To make full use of the maximum output power of an automobile exhaust thermoelectric generator (AETEG) based on Bi2Te3 thermoelectric modules (TEMs), taking into account the advantages and disadvantages of existing maximum power point tracking methods, and according to the output characteristics of TEMs, a hybrid maximum power point tracking method combining the perturb and observe (P&O) algorithm, quadratic interpolation and constant voltage tracking is put forward in this paper. First, it searches for the maximum power point with the P&O algorithm and quadratic interpolation; then it forces the AETEG to work at its maximum power point with constant voltage tracking. A synchronous buck converter and controller were implemented on the electrical bus of the AETEG applied in a military sports utility vehicle, and the whole system was modeled and simulated in the MATLAB/Simulink environment. Simulation results demonstrate that the maximum output power of the AETEG based on the proposed hybrid method is increased by about 3.0% and 3.7% compared with that using only the P&O algorithm and the quadratic interpolation method, respectively. The tracking time is only 1.4 s, roughly half that of the P&O algorithm and the quadratic interpolation method. The experimental results demonstrate that the maximum power tracked with the proposed hybrid method is approximately equal to the real value; the method suppresses the voltage fluctuation of the AETEG seen with the P&O algorithm alone, and resolves the issue that the working point can barely be adjusted with constant voltage tracking alone when operating conditions change.
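
    For reference, the P&O ingredient of the hybrid tracker is a simple hill climber. The sketch below is illustrative only (the quadratic-interpolation and constant-voltage stages are omitted), and `read_power` is a hypothetical measurement callback.

```python
def perturb_and_observe(read_power, v=12.0, dv=0.1, iters=200):
    """Hill-climbing P&O: perturb the operating voltage and keep the
    direction that increases the measured output power."""
    p_prev, step = read_power(v), dv
    for _ in range(iters):
        v += step
        p = read_power(v)
        if p < p_prev:          # power fell: reverse the perturbation
            step = -step
        p_prev = p
    return v

# Toy power-voltage curve with a single maximum near 14 V.
print(perturb_and_observe(lambda v: 50.0 - (v - 14.0) ** 2))  # ends near 14
```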

  6. Soil and Water Assessment Tool model predictions of annual maximum pesticide concentrations in high vulnerability watersheds.

    Science.gov (United States)

    Winchell, Michael F; Peranginangin, Natalia; Srinivasan, Raghavan; Chen, Wenlin

    2018-05-01

    Recent national regulatory assessments of potential pesticide exposure of threatened and endangered species in aquatic habitats have led to an increased need for watershed-scale predictions of pesticide concentrations in flowing water bodies. This study was conducted to assess the ability of the uncalibrated Soil and Water Assessment Tool (SWAT) to predict annual maximum pesticide concentrations in the flowing water bodies of highly vulnerable small- to medium-sized watersheds. The SWAT was applied to 27 watersheds, largely within the Midwest corn belt of the United States, ranging from 20 to 386 km², and evaluated using consistent input data sets and an uncalibrated parameterization approach. The watersheds were selected from the Atrazine Ecological Exposure Monitoring Program and the Heidelberg Tributary Loading Program, both of which contain high temporal resolution atrazine sampling data from watersheds with exceptionally high vulnerability to atrazine exposure. The model performance was assessed based upon predictions of annual maximum atrazine concentrations in 1-d and 60-d durations, predictions critical in pesticide-threatened and endangered species risk assessments when evaluating potential acute and chronic exposure to aquatic organisms. The simulation results showed that for nearly half of the watersheds simulated, the uncalibrated SWAT model was able to predict annual maximum pesticide concentrations within a narrow range of uncertainty resulting from atrazine application timing patterns. An uncalibrated model's predictive performance is essential for the assessment of pesticide exposure in flowing water bodies, the majority of which have insufficient monitoring data for direct calibration, even in data-rich countries. In situations in which SWAT over- or underpredicted the annual maximum concentrations, the magnitude of the over- or underprediction was commonly less than a factor of 2, indicating that the model and uncalibrated parameterization

  7. 47 CFR 1.1507 - Rulemaking on maximum rates for attorney fees.

    Science.gov (United States)

    2010-10-01

    47 CFR § 1.1507 Rulemaking on maximum rates for attorney fees. (a) If warranted by an increase in the cost of ... types of proceedings), the Commission may adopt regulations providing that attorney fees may be awarded ...

  8. The maximum entropy production and maximum Shannon information entropy in enzyme kinetics

    Science.gov (United States)

    Dobovišek, Andrej; Markovič, Rene; Brumen, Milan; Fajmut, Aleš

    2018-04-01

    We demonstrate that the maximum entropy production principle (MEPP) serves as a physical selection principle for the description of the most probable non-equilibrium steady states in simple enzymatic reactions. A theoretical approach is developed which enables maximization of the density of entropy production with respect to the enzyme rate constants for an enzyme reaction in a steady state. Mass and Gibbs free energy conservation are considered as optimization constraints. The optimal enzyme rate constants computed in this way also yield the most uniform probability distribution of the enzyme states, which corresponds to the maximal Shannon information entropy. By means of stability analysis it is also demonstrated that maximal density of entropy production in the enzyme reaction requires a flexible enzyme structure, which enables rapid transitions between different enzyme states. These results are supported by an example in which the density of entropy production and the Shannon information entropy are numerically maximized for the enzyme glucose isomerase.

  9. Maximum Parsimony on Phylogenetic networks

    Science.gov (United States)

    2012-01-01

    Background Phylogenetic networks are generalizations of phylogenetic trees that are used to model evolutionary events in various contexts. Several different methods and criteria have been introduced for reconstructing phylogenetic trees. Maximum Parsimony is a character-based approach that infers a phylogenetic tree by minimizing the total number of evolutionary steps required to explain a given set of data assigned on the leaves. Exact solutions for optimizing parsimony scores on phylogenetic trees have been introduced in the past. Results In this paper, we define the parsimony score on networks as the sum of the substitution costs along all the edges of the network, and show that certain well-known algorithms that calculate the optimum parsimony score on trees, such as the Sankoff and Fitch algorithms, extend naturally to networks, barring conflicting assignments at the reticulate vertices. We provide heuristics for finding the optimum parsimony scores on networks. Our algorithms can be applied for any cost matrix that may contain unequal substitution costs for transforming between different characters along different edges of the network. We analyzed this for experimental data on 10 leaves or fewer with at most 2 reticulations and found that for almost all networks, the bounds returned by the heuristics matched the exhaustively determined optimum parsimony scores. Conclusion The parsimony score we define here does not directly reflect the cost of the best tree in the network that displays the evolution of the character. However, when searching for the most parsimonious network that describes a collection of characters, it becomes necessary to add additional cost considerations to prefer simpler structures, such as trees over networks. The parsimony score on a network that we describe here takes into account the substitution costs along the additional edges incident on each reticulate vertex, in addition to the substitution costs along the other edges which are
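
    As a concrete reference point, the classical Fitch algorithm on a tree, which the paper extends to networks, fits in a few lines. This sketch handles only binary trees with uniform costs and ignores the reticulate-vertex and cost-matrix generalizations discussed above.

```python
def fitch(tree):
    """Fitch small parsimony: a tree is either a state string (leaf) or a
    (left, right) tuple. Returns (candidate state set, substitution count)."""
    if isinstance(tree, str):
        return {tree}, 0
    (ls, lcost), (rs, rcost) = fitch(tree[0]), fitch(tree[1])
    inter = ls & rs
    if inter:                          # agreement: no extra substitution
        return inter, lcost + rcost
    return ls | rs, lcost + rcost + 1  # union step costs one substitution

states, score = fitch((("A", "C"), ("A", ("G", "A"))))
print(states, score)                   # {'A'} 2
```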

  10. Application of the maximum entropy method to profile analysis

    International Nuclear Information System (INIS)

    Armstrong, N.; Kalceff, W.; Cline, J.P.

    1999-01-01

    Full text: A maximum entropy (MaxEnt) method for analysing crystallite size- and strain-induced x-ray profile broadening is presented. This method treats the problems of determining the specimen profile, crystallite size distribution, and strain distribution in a general way by considering them as inverse problems. A common difficulty faced by many experimenters is their inability to determine a well-conditioned solution of the integral equation, which preserves the positivity of the profile or distribution. We show that the MaxEnt method overcomes this problem, while also enabling a priori information, in the form of a model, to be introduced into it. Additionally, we demonstrate that the method is fully quantitative, in that uncertainties in the solution profile or solution distribution can be determined and used in subsequent calculations, including mean particle sizes and rms strain. An outline of the MaxEnt method is presented for the specific problems of determining the specimen profile and crystallite or strain distributions for the correspondingly broadened profiles. This approach offers an alternative to standard methods such as those of Williamson-Hall and Warren-Averbach. An application of the MaxEnt method is demonstrated in the analysis of alumina size-broadened diffraction data (from NIST, Gaithersburg). It is used to determine the specimen profile and column-length distribution of the scattering domains. Finally, these results are compared with the corresponding Williamson-Hall and Warren-Averbach analyses. Copyright (1999) Australian X-ray Analytical Association Inc

  11. A Hybrid Maximum Power Point Search Method Using Temperature Measurements in Partial Shading Conditions

    Directory of Open Access Journals (Sweden)

    Mroczka Janusz

    2014-12-01

    Full Text Available Photovoltaic panels have non-linear current-voltage characteristics and produce maximum power at only one point, called the maximum power point. Under uniform illumination a single solar panel shows only one power maximum, which is also the global maximum power point. When a photovoltaic panel is irregularly illuminated, many local maxima can be observed on the power-voltage curve, and only one of them is the global maximum. The proposed algorithm detects whether a solar panel is under uniform insolation conditions. An appropriate strategy for tracking the maximum power point is then chosen using a decision algorithm. The proposed method is simulated in an environment created by the authors, which allows photovoltaic panels to be simulated under real conditions of lighting, temperature and shading.

  12. Maximum neutron flux in thermal reactors; Maksimum neutronskog fluksa kod termalnih reaktora

    Energy Technology Data Exchange (ETDEWEB)

    Strugar, P V [Institute of Nuclear Sciences Boris Kidric, Vinca, Beograd (Yugoslavia)

    1968-07-01

    The direct approach to the problem is to calculate the spatial distribution of fuel concentration in the reactor core using the condition of maximum neutron flux while complying with thermal limitations. This paper proves that the problem can be solved by applying the variational calculus, i.e. by using the maximum principle of Pontryagin. The mathematical model of the reactor core is based on two-group neutron diffusion theory, with some simplifications which make it suitable for the application of the maximum principle. The optimum distribution of fuel concentration in the reactor core is obtained in explicit analytical form. The reactor critical dimensions are roots of a system of nonlinear equations, and verification of the optimum conditions can be done only for specific examples.

  13. Discrete Model Predictive Control-Based Maximum Power Point Tracking for PV Systems: Overview and Evaluation

    DEFF Research Database (Denmark)

    Lashab, Abderezak; Sera, Dezso; Guerrero, Josep M.

    2018-01-01

    The main objective of this work is to provide an overview and evaluation of discrete model predictive control-based maximum power point tracking (MPPT) for PV systems. A large number of MPC based MPPT methods have been recently introduced in the literature with very promising performance; however, an in-depth investigation and comparison of these methods have not been carried out yet. Therefore, this paper sets out to provide an in-depth analysis and evaluation of MPC based MPPT methods applied to various common power converter topologies. The performance of MPC based MPPT is directly linked with the converter topology, and it is also affected by the accurate determination of the converter parameters; sensitivity to converter parameter variations is therefore also investigated. The static and dynamic performance of the trackers is assessed according to the EN 50530 standard, using detailed simulation models...

  14. Structural pattern matching of nonribosomal peptides

    Directory of Open Access Journals (Sweden)

    Leclère Valérie

    2009-03-01

    Full Text Available Abstract Background Nonribosomal peptides (NRPs), bioactive secondary metabolites produced by many microorganisms, show a broad range of important biological activities (e.g. antibiotics, immunosuppressants, antitumor agents). NRPs are mainly composed of amino acids, but their primary structure is not always linear and can contain cycles or branchings. Furthermore, there are several hundred different monomers that can be incorporated into NRPs. The NORINE database, the first resource entirely dedicated to NRPs, currently stores more than 700 NRPs annotated with their monomeric peptide structure encoded by undirected labeled graphs. This opens a way to a systematic analysis of structural patterns occurring in NRPs. Such studies can investigate the functional role of some monomeric chains, or analyse NRPs that have been computationally predicted from the synthetase protein sequence. A basic operation in such analyses is the search for a given structural pattern in the database. Results We developed an efficient method that allows for a quick search for a structural pattern in the NORINE database. The method identifies all peptides containing a pattern substructure of a given size. This amounts to solving a variant of the maximum common subgraph problem on pattern and peptide graphs, which is done by computing cliques in an appropriate compatibility graph. Conclusion The method has been incorporated into the NORINE database, available at http://bioinfo.lifl.fr/norine. Less than one second is needed to search for a pattern in the entire database.
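
    The clique route to the maximum common subgraph problem mentioned in the Results section can be sketched directly. The toy below is not NORINE's implementation: it builds a compatibility (modular product) graph over label-matched node pairs and extracts a maximum clique with networkx.

```python
import networkx as nx
from itertools import product, combinations

def common_substructure(g1, g2):
    """Vertices of the compatibility (modular product) graph are
    label-matched node pairs; edges join pairs that preserve adjacency.
    A maximum clique then yields a maximum common induced subgraph."""
    comp = nx.Graph()
    comp.add_nodes_from((u, v) for u, v in product(g1, g2)
                        if g1.nodes[u]["label"] == g2.nodes[v]["label"])
    for (u1, v1), (u2, v2) in combinations(comp.nodes, 2):
        if u1 != u2 and v1 != v2 and \
           g1.has_edge(u1, u2) == g2.has_edge(v1, v2):
            comp.add_edge((u1, v1), (u2, v2))
    clique, _ = nx.max_weight_clique(comp, weight=None)
    return clique  # matched (graph-1 node, graph-2 node) pairs

a = nx.Graph([(0, 1), (1, 2)])             # path: a linear peptide fragment
b = nx.Graph([(0, 1), (1, 2), (2, 0)])     # cycle: a cyclic peptide fragment
for g in (a, b):
    nx.set_node_attributes(g, "X", "label")  # one monomer type, for brevity
print(common_substructure(a, b))  # one conserved edge, e.g. [(0, 0), (1, 1)]
```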

  15. Growth and maximum size of tiger sharks (Galeocerdo cuvier) in Hawaii.

    Science.gov (United States)

    Meyer, Carl G; O'Malley, Joseph M; Papastamatiou, Yannis P; Dale, Jonathan J; Hutchinson, Melanie R; Anderson, James M; Royer, Mark A; Holland, Kim N

    2014-01-01

    Tiger sharks (Galeocerdo cuvier) are apex predators characterized by their broad diet, large size and rapid growth. Tiger shark maximum size is typically between 380 and 450 cm Total Length (TL), with a few individuals reaching 550 cm TL, but the maximum size of tiger sharks in Hawaii waters remains uncertain. A previous study suggested tiger sharks grow rather slowly in Hawaii compared to other regions, but this may have been an artifact of the method used to estimate growth (unvalidated vertebral ring counts) compounded by small sample size and narrow size range. Since 1993, the University of Hawaii has conducted a research program aimed at elucidating tiger shark biology, and to date 420 tiger sharks have been tagged and 50 recaptured. All recaptures were from Hawaii except a single shark recaptured off Isla Jacques Cousteau (24°13'17″N 109°52'14″W), in the southern Gulf of California (minimum distance between tag and recapture sites = approximately 5,000 km), after 366 days at liberty (DAL). We used these empirical mark-recapture data to estimate growth rates and maximum size for tiger sharks in Hawaii. We found that tiger sharks in Hawaii grow twice as fast as previously thought, on average reaching 340 cm TL by age 5, and attaining a maximum size of 403 cm TL. Our model indicates the fastest growing individuals attain 400 cm TL by age 5, and the largest reach a maximum size of 444 cm TL. The largest shark captured during our study was 464 cm TL, but individuals >450 cm TL were extremely rare (0.005% of sharks captured). We conclude that tiger shark growth rates and maximum sizes in Hawaii are generally consistent with those in other regions, and hypothesize that a broad diet may help them to achieve this rapid growth by maximizing prey consumption rates.

  16. An investigation of rugby scrimmaging posture and individual maximum pushing force.

    Science.gov (United States)

    Wu, Wen-Lan; Chang, Jyh-Jong; Wu, Jia-Hroung; Guo, Lan-Yuen

    2007-02-01

    Although rugby is a popular contact sport and isokinetic muscle torque assessment has recently found widespread application in the field of sports medicine, little research has directly examined the factors associated with the performance of game-specific skills using an isokinetic-type rugby scrimmaging machine. This study was designed to (a) measure and observe the differences in the maximum individual forward pushing force produced by scrimmaging in different body postures (3 body heights × 2 foot positions) with a self-developed rugby scrimmaging machine and (b) observe the variations in hip, knee, and ankle angles in different body postures and explore the relationship between these angles and the individual maximum pushing force. Ten national rugby players were invited to participate in the examination. The experimental equipment included a self-developed rugby scrimmaging machine and a 3-dimensional motion analysis system. Our results showed that the foot positions (parallel and nonparallel) do not affect the maximum pushing force; however, the maximum pushing force was significantly lower in posture I (36% body height) than in posture II (38%) and posture III (40%). The maximum forward force in posture III (40% body height) was also slightly greater than in posture II (38% body height). In addition, hip, knee, and ankle angles under parallel foot positioning were closely and negatively related to the maximum pushing force in scrimmaging. In cross-feet postures, there was a positive correlation between individual forward force and the hip angle of the rear leg. From our results, we conclude that standing in an appropriate starting position at the early stage of scrimmaging benefits forward force production.

  17. Maximum Likelihood Blind Channel Estimation for Space-Time Coding Systems

    Directory of Open Access Journals (Sweden)

    Hakan A. Çırpan

    2002-05-01

    Full Text Available Sophisticated signal processing techniques have to be developed for capacity enhancement of future wireless communication systems. In recent years, space-time coding is proposed to provide significant capacity gains over the traditional communication systems in fading wireless channels. Space-time codes are obtained by combining channel coding, modulation, transmit diversity, and optional receive diversity in order to provide diversity at the receiver and coding gain without sacrificing the bandwidth. In this paper, we consider the problem of blind estimation of space-time coded signals along with the channel parameters. Both conditional and unconditional maximum likelihood approaches are developed and iterative solutions are proposed. The conditional maximum likelihood algorithm is based on iterative least squares with projection whereas the unconditional maximum likelihood approach is developed by means of finite state Markov process modelling. The performance analysis issues of the proposed methods are studied. Finally, some simulation results are presented.

  18. Beat the Deviations in Estimating Maximum Power of Thermoelectric Modules

    DEFF Research Database (Denmark)

    Gao, Junling; Chen, Min

    2013-01-01

    Under a certain temperature difference, the maximum power of a thermoelectric module can be estimated from the open-circuit voltage and the short-circuit current. In practical measurement, there exist two switch modes, either from open to short or from short to open, but the two modes can give different estimations of the maximum power. Using TEG-127-2.8-3.5-250 and TEG-127-1.4-1.6-250 as two examples, the difference is about 10%, leading to some deviations with the temperature change. This paper analyzes such differences by means of a nonlinear numerical model of thermoelectricity, and finds that the main cause is the influence of various currents on the produced electromotive potential. A simple and effective calibration method is proposed to minimize the deviations in specifying the maximum power. Experimental results validate the method with improved estimation accuracy.

  19. MPBoot: fast phylogenetic maximum parsimony tree inference and bootstrap approximation.

    Science.gov (United States)

    Hoang, Diep Thi; Vinh, Le Sy; Flouri, Tomáš; Stamatakis, Alexandros; von Haeseler, Arndt; Minh, Bui Quang

    2018-02-02

    The nonparametric bootstrap is widely used to measure the branch support of phylogenetic trees. However, bootstrapping is computationally expensive and remains a bottleneck in phylogenetic analyses. Recently, an ultrafast bootstrap approximation (UFBoot) approach was proposed for maximum likelihood analyses. However, such an approach was still missing for maximum parsimony. To close this gap we present MPBoot, an adaptation and extension of UFBoot to compute branch supports under the maximum parsimony principle. MPBoot works for both uniform and non-uniform cost matrices. Our analyses on biological DNA and protein data showed that under uniform cost matrices, MPBoot runs on average 4.7 (DNA) to 7 times (protein data) (range: 1.2-20.7) faster than the standard parsimony bootstrap implemented in PAUP*, but 1.6 (DNA) to 4.1 times (protein data) slower than the standard bootstrap with a fast search routine in TNT (fast-TNT). However, for non-uniform cost matrices MPBoot is 5 (DNA) to 13 times (protein data) (range: 0.3-63.9) faster than fast-TNT. We note that MPBoot achieves better scores more frequently than PAUP* and fast-TNT. However, this effect is less pronounced if an intensive but slower search in TNT is invoked. Moreover, experiments on large-scale simulated data show that while both PAUP* and TNT bootstrap estimates are too conservative, MPBoot bootstrap estimates appear more unbiased. MPBoot provides an efficient alternative to the standard maximum parsimony bootstrap procedure. It shows favorable performance in terms of run time, the capability of finding a maximum parsimony tree, and high bootstrap accuracy on simulated as well as empirical data sets. MPBoot is easy-to-use, open-source and available at http://www.cibiv.at/software/mpboot .
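
    For orientation, the procedure that MPBoot approximates is the plain nonparametric bootstrap: resample alignment columns with replacement and re-infer a tree per replicate. A sketch follows, in which `infer_splits` is a hypothetical tree-search callback returning the splits of the best tree found.

```python
import numpy as np

def bootstrap_supports(alignment, infer_splits, n_reps=100, seed=0):
    """`alignment` is a (taxa x sites) array of character states;
    `infer_splits` is a hypothetical callback that runs a parsimony tree
    search on an alignment and returns the splits of the best tree found."""
    rng = np.random.default_rng(seed)
    n_sites = alignment.shape[1]
    counts = {}
    for _ in range(n_reps):
        cols = rng.integers(0, n_sites, size=n_sites)   # resample sites
        for split in infer_splits(alignment[:, cols]):  # re-infer per replicate
            counts[split] = counts.get(split, 0) + 1
    return {split: c / n_reps for split, c in counts.items()}  # support values
```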

  20. RNA graph partitioning for the discovery of RNA modularity: a novel application of graph partition algorithm to biology.

    Directory of Open Access Journals (Sweden)

    Namhee Kim

    Full Text Available Graph representations have been widely used to analyze and design various economic, social, military, political, and biological networks. In systems biology, networks of cells and organs are useful for understanding disease and medical treatments and, in structural biology, structures of molecules can be described, including RNA structures. In our RNA-As-Graphs (RAG) framework, we represent RNA structures as tree graphs by translating unpaired regions into vertices and helices into edges. Here we explore the modularity of RNA structures by applying graph partitioning, known in graph theory, to divide an RNA graph into subgraphs. To our knowledge, this is the first application of graph partitioning to biology, and the results suggest a systematic approach for modular design in general. The graph partitioning algorithms utilize mathematical properties of the Laplacian eigenvector (µ2) corresponding to the second eigenvalue (λ2) associated with the topology matrix defining the graph: λ2 describes the overall topology, and the sum of µ2's components is zero. The three types of algorithms, termed median, sign, and gap cuts, divide a graph by determining the nodes of the cut by the median, zero, and largest gap of µ2's components, respectively. We apply these algorithms to 45 graphs corresponding to all solved RNA structures up through 11 vertices (∼220 nucleotides). While we observe that the median cut divides a graph into two similar-sized subgraphs, the sign and gap cuts partition a graph into two topologically-distinct subgraphs. We find that the gap cut produces the best biologically-relevant partitioning for RNA because it divides RNAs at less stable connections while maintaining junctions intact. The iterative gap cuts suggest basic modules and assembly protocols to design large RNA structures. Our graph substructuring thus suggests a systematic approach to explore the modularity of biological networks. In our applications to RNA structures, subgraphs
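
    The three cuts are straightforward to express in terms of the Fiedler vector. A small sketch under simplifying assumptions (unweighted Laplacian, dense eigensolver) follows:

```python
import numpy as np
import networkx as nx

def fiedler_cuts(G):
    """Median, sign and gap cuts on the Fiedler vector µ2 (the eigenvector
    of the graph Laplacian for the second-smallest eigenvalue λ2)."""
    L = nx.laplacian_matrix(G).toarray().astype(float)
    vals, vecs = np.linalg.eigh(L)
    order = np.argsort(vecs[:, 1])
    s = vecs[:, 1][order]                  # sorted µ2 components (sum ≈ 0)
    nodes = np.array(list(G.nodes))[order]
    masks = {
        "median": s < np.median(s),                            # cut at the median
        "sign":   s < 0.0,                                     # cut at zero
        "gap":    np.arange(len(s)) <= np.argmax(np.diff(s)),  # cut at largest gap
    }
    return {name: (set(nodes[m]), set(nodes[~m])) for name, m in masks.items()}

G = nx.barbell_graph(5, 0)      # two 5-cliques joined by a single edge
print(fiedler_cuts(G)["sign"])  # the sign cut recovers the two cliques
```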

  1. On minimizing the maximum broadcast decoding delay for instantly decodable network coding

    KAUST Repository

    Douik, Ahmed S.; Sorour, Sameh; Alouini, Mohamed-Slim; Al-Naffouri, Tareq Y.

    2014-01-01

    In this paper, we consider the problem of minimizing the maximum broadcast decoding delay experienced by all the receivers of generalized instantly decodable network coding (IDNC). Unlike the sum decoding delay, the maximum decoding delay as a

  2. Computing the Maximum Detour of a Plane Graph in Subquadratic Time

    DEFF Research Database (Denmark)

    Wulff-Nilsen, Christian

    2008-01-01

    Let G be a plane graph where each edge is a line segment. We consider the problem of computing the maximum detour of G, defined as the maximum over all pairs of distinct points p and q of G of the ratio between the distance between p and q in G and the distance |pq|. The fastest known algorithm...

  3. On an Objective Basis for the Maximum Entropy Principle

    Directory of Open Access Journals (Sweden)

    David J. Miller

    2015-01-01

    Full Text Available In this letter, we elaborate on some of the issues raised by a recent paper by Neapolitan and Jiang concerning the maximum entropy (ME) principle and alternative principles for estimating probabilities consistent with known, measured constraint information. We argue that the ME solution for the “problematic” example introduced by Neapolitan and Jiang has a stronger objective basis, rooted in results from information theory, than their alternative proposed solution. We also raise some technical concerns about the Bayesian analysis in their work, which was used to independently support their alternative to the ME solution. The letter concludes by noting some open problems involving maximum entropy statistical inference.

  4. Maximum Safety Regenerative Power Tracking for DC Traction Power Systems

    Directory of Open Access Journals (Sweden)

    Guifu Du

    2017-02-01

    Full Text Available Direct current (DC) traction power systems are widely used in metro transport systems, with running rails usually being used as return conductors. When traction current flows through the running rails, a potential known as “rail potential” is generated between the rails and ground. Currently, abnormal rises of rail potential occur in many railway lines during the operation of railway systems. Excessively high rail potentials pose a threat to human life and to devices connected to the rails. In this paper, the effect of regenerative power distribution on rail potential is analyzed. Maximum safety regenerative power tracking is proposed for the control of the maximum absolute rail potential and energy consumption during the operation of DC traction power systems. The dwell time of multiple trains at each station and the trigger voltage of the regenerative energy absorbing device (READ) are optimized based on an improved particle swarm optimization (PSO) algorithm to manage the distribution of regenerative power. In this way, the maximum absolute rail potential and energy consumption of DC traction power systems can be reduced. The operation data of Guangzhou Metro Line 2 are used in the simulations, and the results show that the scheme can reduce the maximum absolute rail potential and energy consumption effectively and guarantee safe energy saving in DC traction power systems.
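
    The optimizer named above is particle swarm optimization; the paper uses an improved variant over train dwell times and READ trigger voltages. For reference, a textbook PSO minimizer looks like this (the objective is a placeholder, not the paper's rail-potential model):

```python
import numpy as np

def pso(f, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize f over box `bounds` with inertia w and the usual cognitive
    (c1) and social (c2) pulls toward personal and global best positions."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    x = rng.uniform(lo, hi, (n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.apply_along_axis(f, 1, x)
    gbest = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)                 # respect the box constraints
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, pbest_f.min()

best, val = pso(lambda p: float((p ** 2).sum()), [(-5.0, 5.0), (-5.0, 5.0)])
print(best, val)   # converges near the origin of the placeholder objective
```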

  5. Maximum Mass of Hybrid Stars in the Quark Bag Model

    Science.gov (United States)

    Alaverdyan, G. B.; Vartanyan, Yu. L.

    2017-12-01

    The effect of model parameters in the equation of state for quark matter on the magnitude of the maximum mass of hybrid stars is examined. Quark matter is described in terms of the extended MIT bag model including corrections for one-gluon exchange. For nucleon matter in the range of densities corresponding to the phase transition, a relativistic equation of state is used that is calculated with two-particle correlations taken into account based on using the Bonn meson-exchange potential. The Maxwell construction is used to calculate the characteristics of the first order phase transition and it is shown that for a fixed value of the strong interaction constant αs, the baryon concentrations of the coexisting phases grow monotonically as the bag constant B increases. It is shown that for a fixed value of the strong interaction constant αs, the maximum mass of a hybrid star increases as the bag constant B decreases. For a given value of the bag parameter B, the maximum mass rises as the strong interaction constant αs increases. It is shown that the configurations of hybrid stars with maximum masses equal to or exceeding the mass of the currently known most massive pulsar are possible for values of the strong interaction constant αs > 0.6 and sufficiently low values of the bag constant.

  6. Study of forecasting maximum demand of electric power

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, B.C.; Hwang, Y.J. [Korea Energy Economics Institute, Euiwang (Korea, Republic of)

    1997-08-01

    As far as the past performance of power supply and demand in Korea is concerned, one of the striking phenomena is that there have been repeated periodic surpluses and shortages of power generation facilities. Precise estimation and prediction of power demand is the basis for establishing a supply plan and carrying out the right policy, since facilities investment in the power generation industry requires a tremendous amount of capital and a long construction period. The purpose of this study is to develop a model for the inference and prediction of maximum demand more precisely against this background. The non-parametric model considered in this study pays attention to meteorological factors such as temperature and humidity, which do not have a simple proportional relationship with maximum power demand but affect it through complicated nonlinear interactions. The non-parametric inference technique introduces these meteorological effects without imposing any prior assumption on the interaction of temperature and humidity. According to the analysis results, the non-parametric model that introduces the number of tropical nights, which captures the continuity of the meteorological effect, has better predictive power than the linear model. The non-parametric model that considers both the number of tropical nights and the number of cooling days at the same time is proposed as the model for predicting maximum demand. 7 refs., 6 figs., 9 tabs.

  7. Influence of Dynamic Neuromuscular Stabilization Approach on Maximum Kayak Paddling Force

    Directory of Open Access Journals (Sweden)

    Davidek Pavel

    2018-03-01

    Full Text Available The purpose of this study was to examine the effect of Dynamic Neuromuscular Stabilization (DNS) exercise on maximum paddling force (PF) and self-reported pain perception in the shoulder girdle area in flatwater kayakers. Twenty male flatwater kayakers from a local club (age = 21.9 ± 2.4 years, body height = 185.1 ± 7.9 cm, body mass = 83.9 ± 9.1 kg) were randomly assigned to the intervention or control groups. During the 6-week study, subjects from both groups performed standard off-season training. Additionally, the intervention group engaged in a DNS-based core stabilization exercise program (quadruped exercise, side sitting exercise, sitting exercise and squat exercise) after each standard training session. Using a kayak ergometer, the maximum PF stroke was measured four times during the six weeks. All subjects completed the Disabilities of the Arm, Shoulder and Hand (DASH) questionnaire before and after the 6-week interval to evaluate subjective pain perception in the shoulder girdle area. Initially, no significant differences in maximum PF and the DASH questionnaire were identified between the two groups. Repeated measures analysis of variance indicated that the experimental group improved significantly compared to the control group on maximum PF (p = .004; Cohen's d = .85), but not on the DASH questionnaire score (p = .731), during the study. Integration of DNS with traditional flatwater kayak training may significantly increase maximum PF, but may not affect pain perception to the same extent.

  8. Determining Maximum Photovoltaic Penetration in a Distribution Grid considering Grid Operation Limits

    DEFF Research Database (Denmark)

    Kordheili, Reza Ahmadi; Bak-Jensen, Birgitte; Pillai, Jayakrishnan Radhakrishna

    2014-01-01

    High penetration of photovoltaic panels in a distribution grid can bring the grid to its operation limits. The main focus of the paper is to determine the maximum photovoltaic penetration level in the grid. Three main criteria were investigated for determining the maximum penetration level of PV panels in this grid: maximum voltage deviation of customers, cable current limits, and the transformer nominal value. Voltage deviation of different buses was investigated for different penetration levels. Load modeling is done using the Velander formula. Since PV generation is highest in the summer due to irradiation, a summer day was chosen to determine maximum penetration. The proposed model was simulated on a Danish distribution grid. Three different PV location scenarios were investigated: even distribution of PV panels, aggregation of panels at the beginning of each feeder, and aggregation of panels at the end of each feeder.

  9. Effect of current on the maximum possible reward.

    Science.gov (United States)

    Gallistel, C R; Leon, M; Waraczynski, M; Hanau, M S

    1991-12-01

    Using a 2-lever choice paradigm with concurrent variable interval schedules of reward, it was found that when pulse frequency is increased, the preference-determining rewarding effect of 0.5-s trains of brief cathodal pulses delivered to the medial forebrain bundle of the rat saturates (stops increasing) at values ranging from 200 to 631 pulses/s (pps). Raising the current lowered the saturation frequency, which confirms earlier, more extensive findings showing that the rewarding effect of short trains saturates at pulse frequencies that vary from less than 100 pps to more than 800 pps, depending on the current. It was also found that the maximum possible reward--the magnitude of the reward at or beyond the saturation pulse frequency--increases with increasing current. Thus, increasing the current reduces the saturation frequency but increases the subjective magnitude of the maximum possible reward.

  10. Rumor Identification with Maximum Entropy in MicroNet

    Directory of Open Access Journals (Sweden)

    Suisheng Yu

    2017-01-01

    Full Text Available The widely used applications of Microblog, WeChat, and other social networking platforms (that we call MicroNet) shorten the period of information dissemination and expand the range of information dissemination, which allows rumors to cause greater harm and have more influence. A hot topic in the information dissemination field is how to identify and block rumors. Based on the maximum entropy model, this paper constructs the recognition mechanism of rumor information in the micronetwork environment. First, based on information entropy theory, we obtained the characteristics of rumor information using the maximum entropy model. Next, we optimized the original classifier training set and the feature function to divide the information into rumors and nonrumors. Finally, the experimental simulation results show that the rumor identification results using this method are better than those of the original classifier and other related classification methods.
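
    Multinomial logistic regression is the standard realization of a maximum entropy classifier, so the pipeline described above can be approximated in a few lines. The texts and labels below are placeholders, not the paper's microblog corpus or its optimized feature functions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder corpus: 0 = non-rumor, 1 = rumor.
texts = ["official statement released with a named source",
         "shocking!!! they are hiding this, forward now"]
labels = [0, 1]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))  # maxent classifier
clf.fit(texts, labels)
print(clf.predict(["forward this shocking hidden truth now"]))
```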

  11. Golden Trail: Retrieving the Data History that Matters from a Comprehensive Provenance Repository

    Directory of Open Access Journals (Sweden)

    Paolo Missier

    2012-03-01

    Full Text Available Experimental science can be thought of as the exploration of a large research space, in search of a few valuable results. While it is this “Golden Data” that gets published, the history of the exploration is often as valuable to the scientists as some of its outcomes. We envision an e-research infrastructure that is capable of systematically and automatically recording such history – an assumption that holds today for a number of workflow management systems routinely used in e-science. In keeping with our gold rush metaphor, the provenance of a valuable result is a “Golden Trail”. Logically, this represents a detailed account of how the Golden Data was arrived at, and technically it is a sub-graph in the much larger graph of provenance traces that collectively tell the story of the entire research (or of some of it). In this paper we describe a model and architecture for a repository dedicated to storing provenance traces and selectively retrieving Golden Trails from it. As traces from multiple experiments over long periods of time are accommodated, the trails may be sub-graphs of one trace, or they may be the logical representation of a virtual experiment obtained by joining together traces that share common data. The project has been carried out within the Provenance Working Group of the Data Observation Network for Earth (DataONE) NSF project. Ultimately, our longer-term plan is to integrate the provenance repository into the data preservation architecture currently being developed by DataONE.

  12. Regularized maximum correntropy machine

    KAUST Repository

    Wang, Jim Jing-Yan; Wang, Yunji; Jing, Bing-Yi; Gao, Xin

    2015-01-01

    In this paper we investigate the usage of the regularized correntropy framework for learning classifiers from noisy labels. Class label predictors learned by minimizing traditional loss functions are sensitive to noisy and outlying labels in the training samples, because traditional loss functions are applied equally to all samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criterion (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. Experiments on two challenging pattern classification tasks show that it significantly outperforms machines with traditional loss functions.

  13. Regularized maximum correntropy machine

    KAUST Repository

    Wang, Jim Jing-Yan

    2015-02-12

    In this paper we investigate the usage of the regularized correntropy framework for learning classifiers from noisy labels. Class label predictors learned by minimizing traditional loss functions are sensitive to noisy and outlying labels in the training samples, because traditional loss functions are applied equally to all samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criterion (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. Experiments on two challenging pattern classification tasks show that it significantly outperforms machines with traditional loss functions.
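
    The criterion itself is easy to sketch. The following illustrative gradient-ascent version for a linear predictor (not the authors' alternating algorithm) maximizes the mean Gaussian similarity between predictions and labels minus an L2 penalty; outlying labels receive exponentially small weight, which is the source of the robustness.

```python
import numpy as np

def mcc_fit(X, y, sigma=1.0, lam=0.1, lr=0.1, epochs=500):
    """Gradient ascent on (1/N) sum_i exp(-(y_i - w.x_i)^2 / (2 sigma^2))
    minus (lam/2)||w||^2 for a linear predictor f(x) = w.x."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        r = y - X @ w                            # residuals
        k = np.exp(-r ** 2 / (2 * sigma ** 2))   # per-sample correntropy weights
        w += lr * ((k * r) @ X / (sigma ** 2 * len(y)) - lam * w)
    return w

X = np.c_[np.ones(6), np.arange(6.0)]
y = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 50.0])    # the last label is an outlier
print(mcc_fit(X, y, lam=0.01))                   # slope stays close to 1
```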

  14. Study of ecologo-biological reactions of common flax to finely dispersed metallurgical wastes

    Science.gov (United States)

    Zakharova, O.; Gusev, A.; Skripnikova, E.; Skripnikova, M.; Krutyakov, Yu; Kudrinsky, A.; Mikhailov, I.; Senatova, S.; Chuprunov, C.; Kuznetsov, D.

    2015-11-01

    A study was carried out on the influence of metallurgical industrial sludge on morphometric and biochemical indicators as well as on the productivity of common flax under laboratory and field conditions. In laboratory settings, a negative influence on seed germination ability and a positive influence on sprout biomass production in water medium were observed. In sand medium, suppression of biological productivity under the influence of sludge, together with stimulation of photosynthetic system II (FS II) activity, was registered. A biochemical study showed a decrease in peroxidase activity in the laboratory, while the activities of polyphenol oxidase, superoxide dismutase and catalase were given a mild boost under the influence of sludge. In the field trial, a positive influence of sludge on the flax photosynthetic apparatus was shown, and a positive influence on vegetation and yield indicators was observed. The analysis of heavy metal content showed an excess over the maximum allowable concentration (MAC) of copper and zinc in control plants, which may point to background soil pollution. In plants from the trial groups receiving 0.5 and 2 t/ha, heavy metal contents below the control values were registered. Application of 4 t/ha led to the maximum content of copper and zinc in the plants among the trial groups. The analysis of soils from the test plots indicated no excess over the maximum allowable concentrations of heavy metals. Thus, further study of the possibility of using metallurgical industrial sludge as a soil stimulator in flax cultivation at the application rate of 0.5 t/ha seems promising.

  15. Quality, precision and accuracy of the maximum No. 40 anemometer

    Energy Technology Data Exchange (ETDEWEB)

    Obermeir, J. [Otech Engineering, Davis, CA (United States); Blittersdorf, D. [NRG Systems Inc., Hinesburg, VT (United States)

    1996-12-31

    This paper synthesizes available calibration data for the Maximum No. 40 anemometer. Despite its long history in the wind industry, controversy surrounds the choice of transfer function for this anemometer. Many users are unaware that recent changes in default transfer functions in data loggers are producing output wind speed differences as large as 7.6%. Comparison of two calibration methods used for large samples of Maximum No. 40 anemometers shows a consistent difference of 4.6% in output speeds. This difference is significantly larger than estimated uncertainty levels. Testing, initially performed to investigate related issues, reveals that Gill and Maximum cup anemometers change their calibration transfer functions significantly when calibrated in the open atmosphere compared with calibration in a laminar wind tunnel. This indicates that atmospheric turbulence changes the calibration transfer function of cup anemometers. These results call into question the suitability of standard wind tunnel calibration testing for cup anemometers. 6 refs., 10 figs., 4 tabs.

  16. Maximum-entropy networks pattern detection, network reconstruction and graph combinatorics

    CERN Document Server

    Squartini, Tiziano

    2017-01-01

    This book is an introduction to maximum-entropy models of random graphs with given topological properties and their applications. Its original contribution is the reformulation of many seemingly different problems in the study of both real networks and graph theory within the unified framework of maximum entropy. Particular emphasis is put on the detection of structural patterns in real networks, on the reconstruction of the properties of networks from partial information, and on the enumeration and sampling of graphs with given properties.  After a first introductory chapter explaining the motivation, focus, aim and message of the book, chapter 2 introduces the formal construction of maximum-entropy ensembles of graphs with local topological constraints. Chapter 3 focuses on the problem of pattern detection in real networks and provides a powerful way to disentangle nontrivial higher-order structural features from those that can be traced back to simpler local constraints. Chapter 4 focuses on the problem o...

  17. MAXIMUM POWER POINT TRACKING SYSTEM FOR PHOTOVOLTAIC STATION: A REVIEW

    Directory of Open Access Journals (Sweden)

    I. Elzein

    2015-01-01

    Full Text Available In recent years there has been growing attention towards the use of renewable energy sources. Among them, solar energy is one of the most promising green energy resources due to its environmental sustainability and inexhaustibility. However, photovoltaic (PhV) systems suffer from high equipment cost and low efficiency. Moreover, the solar cell V-I characteristic is nonlinear and varies with irradiation and temperature. In general, there is a unique point of PhV operation, called the Maximum Power Point (MPP), at which the PV system operates with maximum efficiency and produces its maximum output power. The location of the MPP is not known in advance, but can be found either through calculation models or by search algorithms. Therefore MPPT techniques are important for maintaining the PV array's high efficiency. Many different techniques for MPPT are discussed. This review paper will hopefully serve as a convenient reference for future work in PhV power conversion.

  18. Maximum Range of a Projectile Thrown from Constant-Speed Circular Motion

    Science.gov (United States)

    Poljak, Nikola

    2016-01-01

    The problem of determining the angle θ at which a point mass launched from ground level with a given speed v[subscript 0] will reach a maximum distance is a standard exercise in mechanics. There are many possible ways of solving this problem, leading to the well-known answer of θ = π/4, producing a maximum range of D[subscript max] = v[superscript…

  19. Anti-nutrient components of guinea grass ( Panicum maximum ...

    African Journals Online (AJOL)

    Yomi

    2012-01-31

    Jan 31, 2012 ... A true measure of forage quality is animal ... The anti-nutritional contents of a pasture could be ... nutrient factors in P. maximum; (2) assess the effect of nitrogen ...

  20. Artificial Neural Network In Maximum Power Point Tracking Algorithm Of Photovoltaic Systems

    Directory of Open Access Journals (Sweden)

    Modestas Pikutis

    2014-05-01

    Full Text Available Scientists are constantly looking for ways to improve the efficiency of solar cells. The efficiency of solar cells available to the general public is up to 20%. Part of the solar energy is unused and the capacity of a solar power plant is significantly reduced if a slow controller, or a controller which cannot stay at the maximum power point of the solar modules, is used. Various maximum power point tracking algorithms have been created, but most are slow or make mistakes. In the literature, artificial neural networks (ANN) are mentioned more and more often for the maximum power point tracking process, in order to improve the performance of the controller. A self-learning artificial neural network and the IncCond algorithm were used for maximum power point tracking in the created solar power plant model. The control algorithm was created. The solar power plant model is implemented in the Matlab/Simulink environment.
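
    The IncCond algorithm named above compares incremental and instantaneous conductance to locate the maximum power point, where dP/dV = 0 implies dI/dV = -I/V. A single textbook update step, in illustrative form:

```python
def inc_cond_step(v, i, v_prev, i_prev, dv=0.1):
    """Return the next voltage reference from one IncCond comparison."""
    d_v, d_i = v - v_prev, i - i_prev
    if d_v == 0:                        # voltage unchanged: use the current change
        return v if d_i == 0 else v + (dv if d_i > 0 else -dv)
    if abs(d_i / d_v + i / v) < 1e-6:   # dI/dV == -I/V: at the maximum power point
        return v
    return v + (dv if d_i / d_v > -i / v else -dv)  # climb toward the MPP
```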

  1. Seeking the epoch of maximum luminosity for dusty quasars

    International Nuclear Information System (INIS)

    Vardanyan, Valeri; Weedman, Daniel; Sargsyan, Lusine

    2014-01-01

    Infrared luminosities νL_ν(7.8 μm) arising from dust reradiation are determined for Sloan Digital Sky Survey (SDSS) quasars with 1.4 < z < 5. The luminosities do not reach a maximum at any redshift z < 5, reaching a plateau for z ≳ 3 with maximum luminosity νL_ν(7.8 μm) ≳ 10^47 erg s^-1; luminosity functions show one quasar Gpc^-3 having νL_ν(7.8 μm) > 10^46.6 erg s^-1 for all 2 < z < 5. The epoch of maximum luminosity has not yet been identified at any redshift below 5. The most ultraviolet luminous quasars, defined by rest frame νL_ν(0.25 μm), have the largest values of the ratio νL_ν(0.25 μm)/νL_ν(7.8 μm), with a maximum ratio at z = 2.9. From these results, we conclude that the quasars most luminous in the ultraviolet have the smallest dust content and appear luminous primarily because of lessened extinction. Observed ultraviolet/infrared luminosity ratios are used to define 'obscured' quasars as those having >5 mag of ultraviolet extinction. We present a new summary of obscured quasars discovered with the Spitzer Infrared Spectrograph and determine the infrared luminosity function of these obscured quasars at z ∼ 2.1. This is compared with infrared luminosity functions of optically discovered, unobscured quasars in the SDSS and in the AGN and Galaxy Evolution Survey. The comparison indicates comparable numbers of obscured and unobscured quasars at z ∼ 2.1, with a possible excess of obscured quasars at fainter luminosities.

  2. Maximum total organic carbon limit for DWPF melter feed

    International Nuclear Information System (INIS)

    Choi, A.S.

    1995-01-01

    DWPF recently decided to control the potential flammability of melter off-gas by limiting the total carbon content in the melter feed and maintaining adequate conditions for combustion in the melter plenum. With this new strategy, all the LFL analyzers and associated interlocks and alarms were removed from both the primary and backup melter off-gas systems. Subsequently, D. Iverson of DWPF T&E requested that SRTC determine the maximum allowable total organic carbon (TOC) content in the melter feed which can be implemented as part of the Process Requirements for melter feed preparation (PR-S04). The maximum TOC limit thus determined in this study was about 24,000 ppm on an aqueous slurry basis. At TOC levels below this, the peak concentration of combustible components in the quenched off-gas will not exceed 60 percent of the LFL during off-gas surges of magnitudes up to three times nominal, provided that the melter plenum temperature and the air purge rate to the BUFC are monitored and controlled above 650 degrees C and 220 lb/hr, respectively. Appropriate interlocks should discontinue the feeding when one or both of these conditions are not met. Both the magnitude and duration of an off-gas surge have a major impact on the maximum TOC limit, since they directly affect the melter plenum temperature and combustion. Although the data obtained during recent DWPF melter startup tests showed that the peak magnitude of a surge can be greater than three times nominal, the observed duration was considerably shorter, on the order of several seconds. The long surge duration assumed in this study has a greater impact on the plenum temperature than the peak magnitude, thus making the maximum TOC estimate conservative. Two models were used to make the necessary calculations to determine the TOC limit

  3. Nonsymmetric entropy and maximum nonsymmetric entropy principle

    International Nuclear Information System (INIS)

    Liu Chengshi

    2009-01-01

    Under the frame of a statistical model, the concept of nonsymmetric entropy, which generalizes the concepts of Boltzmann's entropy and Shannon's entropy, is defined. The maximum nonsymmetric entropy principle is proved. Some important distribution laws, such as the power law, can be derived from this principle naturally. In particular, nonsymmetric entropy is more convenient than other entropies, such as Tsallis's entropy, for deriving power laws.

  4. Remarks on the maximum luminosity

    Science.gov (United States)

    Cardoso, Vitor; Ikeda, Taishi; Moore, Christopher J.; Yoo, Chul-Moon

    2018-04-01

    The quest for fundamental limitations on physical processes is old and venerable. Here, we investigate the maximum possible power, or luminosity, that any event can produce. We show, via full nonlinear simulations of Einstein's equations, that there exist initial conditions which give rise to arbitrarily large luminosities. However, the requirement that there is no past horizon in the spacetime seems to limit the luminosity to below the Planck value, L_P = c^5/G. Numerical relativity simulations of critical collapse yield the largest luminosities observed to date, ≈ 0.2 L_P. We also present an analytic solution to the Einstein equations which seems to give an unboundedly large luminosity; this will guide future numerical efforts to investigate super-Planckian luminosities.
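
    The Planck luminosity quoted above contains no ℏ and is a purely classical combination of constants, so it is a one-liner to evaluate:

```python
from scipy.constants import c, G

L_P = c ** 5 / G                 # Planck luminosity, about 3.6e52 W
print(f"L_P = {L_P:.3e} W")
```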

  5. Scintillation counter, maximum gamma aspect

    International Nuclear Information System (INIS)

    Thumim, A.D.

    1975-01-01

    A scintillation counter, particularly for counting gamma ray photons, includes a massive lead radiation shield surrounding a sample-receiving zone. The shield is disassembleable into a plurality of segments to allow facile installation and removal of a photomultiplier tube assembly, the segments being so constructed as to prevent straight-line access of external radiation through the shield into radiation-responsive areas. Provisions are made for accurately aligning the photomultiplier tube with respect to one or more sample-transmitting bores extending through the shield to the sample receiving zone. A sample elevator, used in transporting samples into the zone, is designed to provide a maximum gamma-receiving aspect to maximize the gamma detecting efficiency. (U.S.)

  6. Comparison of maximum viscosity and viscometric method for identification of irradiated sweet potato starch

    International Nuclear Information System (INIS)

    Yi, Sang Duk; Yang, Jae Seung

    2000-01-01

    A study was carried out to compare the viscosity and maximum viscosity methods for the detection of irradiated sweet potato starch. The viscosity of all samples decreased with increasing stirring speeds and irradiation doses. This trend was similar for maximum viscosity. The regression coefficients and expressions of viscosity and maximum viscosity with increasing irradiation dose were 0.9823 (y = 335.02e^(-0.3366x)) at 120 rpm and 0.9939 (y = -42.544x + 730.26), respectively. This trend in viscosity was similar for all stirring speeds. The parameter A, B and C values showed a dose-dependent relation and were better parameters for detecting irradiation treatment than maximum viscosity or the viscosity value itself. These results suggest that the detection of irradiated sweet potato starch is possible by both the viscometric and maximum viscosity methods. Therefore, the authors think that the maximum viscosity method can be proposed as one of the new methods to detect irradiation treatment of sweet potato starch
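
    Fits of the quoted dose-response forms are routine with scipy. In the sketch below the dose and viscosity values are invented to follow the 120 rpm expression above, purely to illustrate recovering the coefficients; they are not the paper's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

dose = np.array([0.0, 2.5, 5.0, 10.0])        # kGy (assumed units)
visc = np.array([335.0, 144.4, 62.2, 11.6])   # synthetic viscosity readings

expo = lambda x, a, b: a * np.exp(-b * x)     # exponential dose-response form
(a, b), _ = curve_fit(expo, dose, visc, p0=(300.0, 0.3))
print(a, b)   # recovers coefficients close to y = 335.02 e^(-0.3366 x)
```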

  7. Estimation of maximum credible atmospheric radioactivity concentrations and dose rates from nuclear tests

    International Nuclear Information System (INIS)

    Telegadas, K.

    1979-01-01

    A simple technique is presented for estimating maximum credible gross beta air concentrations from nuclear detonations in the atmosphere, based on aircraft sampling of radioactivity following each Chinese nuclear test from 1964 to 1976. The calculated concentration is a function of the total yield and fission yield, initial vertical radioactivity distribution, time after detonation, and rate of horizontal spread of the debris with time. Calculated maximum credible concentrations are compared with the highest concentrations measured during aircraft sampling. The technique provides a reasonable estimate of maximum air concentrations from 1 to 10 days after a detonation. An estimate of the whole-body external gamma dose rate corresponding to the maximum credible gross beta concentration is also given. (author)

  8. On the maximum Q in feedback controlled subignited plasmas

    International Nuclear Information System (INIS)

    Anderson, D.; Hamnen, H.; Lisak, M.

    1990-01-01

    High Q operation of a feedback-controlled subignited fusion plasma requires the operating temperature to be close to the ignition temperature. In the present work we discuss technological and physical effects which may restrict this temperature difference. The investigation is based on a simplified, but still accurate, 0-D analytical analysis of the maximum Q of a subignited system. Particular emphasis is given to sawtooth oscillations, which complicate the conversion of diagnostic neutron emission data into plasma temperatures and may imply an inherent lower bound on the temperature deviation from the ignition point. The estimated maximum Q is found to be marginal (Q = 10-20) from the point of view of a fusion reactor. (authors)

  9. Thermoelectric cooler concepts and the limit for maximum cooling

    International Nuclear Information System (INIS)

    Seifert, W; Hinsche, N F; Pluschke, V

    2014-01-01

    The conventional analysis of a Peltier cooler approximates the material properties as independent of temperature using a constant properties model (CPM). Alternative concepts have been published by Bian and Shakouri (2006 Appl. Phys. Lett. 89 212101), Bian et al (2007 Phys. Rev. B 75 245208) and Snyder et al (2012 Phys. Rev. B 86 045202). While Snyder's Thomson cooler concept results from a consideration of compatibility, the method of Bian et al focuses on the redistribution of heat. Thus, both approaches are based on different principles. In this paper we compare the new concepts to CPM and we reconsider the limit for maximum cooling. The results provide a new perspective on maximum cooling. (paper)
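
    For orientation, the CPM baseline that the newer concepts are measured against is the textbook maximum temperature difference of a single-stage Peltier cooler (a standard result, not derived in the record):

        \Delta T_{\max} = \tfrac{1}{2} Z T_c^2, \qquad Z = \frac{\alpha^2}{\rho\,\kappa},

    where α is the Seebeck coefficient, ρ the electrical resistivity, κ the thermal conductivity and T_c the cold-side temperature.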

  10. On the maximum of wave surface of sea waves

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, B

    1980-01-01

    This article treats the wave surface as a stationary normal (Gaussian) random process in order to estimate the maximum of the wave surface in a given time interval by means of results from probability theory. The results are represented by formulas (13) to (19) in this article. It is proved that, as the time interval approaches infinity, the formulas (3) and (6) for E(η_max) derived in the references (Cartwright, Longuet-Higgins) can also be obtained from the asymptotic distribution of the maximum of the wave surface provided by this article. The advantages of the results obtained from this point of view, compared with those of the references, are discussed.
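
    One standard form of the asymptotic result alluded to here, for the expected maximum of N waves on a Gaussian sea with surface-elevation standard deviation σ (quoted for orientation, not copied from the article):

        E(\eta_{\max}) \approx \sigma \left( \sqrt{2\ln N} + \frac{\gamma}{\sqrt{2\ln N}} \right), \qquad \gamma \approx 0.5772.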

  11. Multichannel Singular Spectrum Analysis in the Estimates of Common Environmental Effects Affecting GPS Observations

    Science.gov (United States)

    Gruszczynska, Marta; Rosat, Severine; Klos, Anna; Gruszczynski, Maciej; Bogusz, Janusz

    2018-03-01

    We described a spatio-temporal analysis of environmental loading models: atmospheric, continental hydrology, and non-tidal ocean changes, based on multichannel singular spectrum analysis (MSSA). We extracted the common annual signal for 16 different sections related to climate zones: equatorial, arid, warm, snow, polar and continents. We used the loading models estimated for a set of 229 ITRF2014 (International Terrestrial Reference Frame) International GNSS Service (IGS) stations and discussed the amount of variance explained by individual modes, proving that the common annual signal accounts for 16, 24 and 68% of the total variance of non-tidal ocean, atmospheric and hydrological loading models, respectively. Having removed the common environmental MSSA seasonal curve from the corresponding GPS position time series, we found that the residual station-specific annual curve modelled with the least-squares estimation has an amplitude of at most 2 mm. This means that the environmental loading models underestimate the seasonalities observed by the GPS system. The remaining signal present in the seasonal frequency band arises from the systematic errors which are not of common environmental or geophysical origin. Using common mode error (CME) estimates, we showed that the direct removal of environmental loading models from the GPS series causes an artificial loss in the CME power spectra between 10 and 80 cycles per year. When environmental effects are removed from the GPS series using the MSSA curves, the character of the CME spectra remains unaffected.
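
    The decomposition underlying MSSA can be illustrated in the single-channel case: embed the series in a trajectory matrix, take an SVD, keep the leading eigentriples, and diagonal-average back to a series. A minimal sketch with synthetic data (the window length and mode choice are illustrative, not the study's settings):

        import numpy as np

        def ssa_reconstruct(x, window, modes):
            """Reconstruct selected SSA modes of a 1-D series x."""
            n = len(x)
            k = n - window + 1
            # Trajectory (Hankel) matrix: X[i, j] = x[i + j].
            X = np.column_stack([x[i:i + k] for i in range(window)]).T
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            Xr = sum(s[m] * np.outer(U[:, m], Vt[m]) for m in modes)
            # Diagonal averaging (Hankelization) back to a time series.
            rec, counts = np.zeros(n), np.zeros(n)
            for i in range(window):
                for j in range(k):
                    rec[i + j] += Xr[i, j]
                    counts[i + j] += 1
            return rec / counts

        # Toy series: an annual-like oscillation buried in noise.
        t = np.arange(0, 10, 0.05)
        x = 2.0 * np.sin(2 * np.pi * t) + 0.5 * np.random.randn(len(t))
        annual = ssa_reconstruct(x, window=40, modes=[0, 1])  # leading oscillatory pair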

  12. Analysis of Minute Features in Speckled Imagery with Maximum Likelihood Estimation

    Directory of Open Access Journals (Sweden)

    Alejandro C. Frery

    2004-12-01

    Full Text Available This paper deals with numerical problems arising when performing maximum likelihood parameter estimation in speckled imagery using small samples. The noise that appears in images obtained with coherent illumination, as is the case of sonar, laser, ultrasound-B, and synthetic aperture radar, is called speckle, and it can neither be assumed Gaussian nor additive. The properties of speckle noise are well described by the multiplicative model, a statistical framework from which stem several important distributions. Amongst these distributions, one is regarded as the universal model for speckled data, namely, the 𝒢^0 law. This paper deals with amplitude data, so the 𝒢_A^0 distribution will be used. The literature reports that techniques for obtaining estimates (maximum likelihood, based on moments, and based on order statistics) of the parameters of the 𝒢_A^0 distribution require samples of hundreds, even thousands, of observations in order to obtain sensible values. This is verified for maximum likelihood estimation, and a proposal based on alternated optimization is made to alleviate this situation. The proposal is assessed with real and simulated data, showing that the convergence problems are no longer present. A Monte Carlo experiment is devised to estimate the quality of maximum likelihood estimators in small samples, and real data is successfully analyzed with the proposed alternated procedure. Stylized empirical influence functions are computed and used to choose a strategy for computing maximum likelihood estimates that is resistant to outliers.

  13. Erich Regener and the ionisation maximum of the atmosphere

    Science.gov (United States)

    Carlson, P.; Watson, A. A.

    2014-12-01

    In the 1930s the German physicist Erich Regener (1881-1955) did important work on the measurement of the rate of production of ionisation deep under water and in the atmosphere. Along with one of his students, Georg Pfotzer, he discovered the altitude at which the production of ionisation in the atmosphere reaches a maximum, often, but misleadingly, called the Pfotzer maximum. Regener was one of the first to estimate the energy density of cosmic rays, an estimate that was used by Baade and Zwicky to bolster their postulate that supernovae might be their source. Yet Regener's name is less recognised by present-day cosmic ray physicists than it should be, largely because in 1937 he was forced to take early retirement by the National Socialists as his wife had Jewish ancestors. In this paper we briefly review his work on cosmic rays and recommend an alternative naming of the ionisation maximum. The influence that Regener had on the field through his son, his son-in-law, his grandsons and his students, and through his links with Rutherford's group in Cambridge, is discussed in an appendix. Regener was nominated for the Nobel Prize in Physics by Schrödinger in 1938. He died in 1955 at the age of 73.

  14. PTree: pattern-based, stochastic search for maximum parsimony phylogenies

    Directory of Open Access Journals (Sweden)

    Ivan Gregor

    2013-06-01

    Full Text Available Phylogenetic reconstruction is vital to analyzing the evolutionary relationship of genes within and across populations of different species. Nowadays, with next generation sequencing technologies producing sets comprising thousands of sequences, robust identification of the tree topology, which is optimal according to standard criteria such as maximum parsimony, maximum likelihood or posterior probability, with phylogenetic inference methods is a computationally very demanding task. Here, we describe a stochastic search method for a maximum parsimony tree, implemented in a software package we named PTree. Our method is based on a new pattern-based technique that enables us to infer intermediate sequences efficiently where the incorporation of these sequences in the current tree topology yields a phylogenetic tree with a lower cost. Evaluation across multiple datasets showed that our method is comparable to the algorithms implemented in PAUP* or TNT, which are widely used by the bioinformatics community, in terms of topological accuracy and runtime. We show that our method can process large-scale datasets of 1,000–8,000 sequences. We believe that our novel pattern-based method enriches the current set of tools and methods for phylogenetic tree inference. The software is available at: http://algbio.cs.uni-duesseldorf.de/webapps/wa-download/.

  15. PTree: pattern-based, stochastic search for maximum parsimony phylogenies.

    Science.gov (United States)

    Gregor, Ivan; Steinbrück, Lars; McHardy, Alice C

    2013-01-01

    Phylogenetic reconstruction is vital to analyzing the evolutionary relationship of genes within and across populations of different species. Nowadays, with next generation sequencing technologies producing sets comprising thousands of sequences, robust identification of the tree topology, which is optimal according to standard criteria such as maximum parsimony, maximum likelihood or posterior probability, with phylogenetic inference methods is a computationally very demanding task. Here, we describe a stochastic search method for a maximum parsimony tree, implemented in a software package we named PTree. Our method is based on a new pattern-based technique that enables us to infer intermediate sequences efficiently where the incorporation of these sequences in the current tree topology yields a phylogenetic tree with a lower cost. Evaluation across multiple datasets showed that our method is comparable to the algorithms implemented in PAUP* or TNT, which are widely used by the bioinformatics community, in terms of topological accuracy and runtime. We show that our method can process large-scale datasets of 1,000-8,000 sequences. We believe that our novel pattern-based method enriches the current set of tools and methods for phylogenetic tree inference. The software is available at: http://algbio.cs.uni-duesseldorf.de/webapps/wa-download/.
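
    PTree's pattern-based search itself is more involved, but the parsimony cost it optimizes can be scored on a fixed binary topology with Fitch's classic small-parsimony algorithm; a minimal per-site sketch:

        def fitch_score(tree, leaf_states):
            """Minimum number of state changes for one site on a binary tree.
            `tree` is a nested tuple topology; leaves are name strings."""
            changes = 0

            def visit(node):
                nonlocal changes
                if isinstance(node, str):        # leaf: fixed character state
                    return {leaf_states[node]}
                left, right = node
                a, b = visit(left), visit(right)
                if a & b:                        # intersection: no change needed
                    return a & b
                changes += 1                     # disjoint sets: one substitution
                return a | b

            visit(tree)
            return changes

        # Toy example: topology ((A,B),(C,D)) with one character per taxon.
        print(fitch_score((("A", "B"), ("C", "D")),
                          {"A": "G", "B": "G", "C": "T", "D": "G"}))  # -> 1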

  16. Extending the maximum operation time of the MNSR reactor.

    Science.gov (United States)

    Dawahra, S; Khattab, K; Saba, G

    2016-09-01

    An effective modification to extend the maximum operation time of the Miniature Neutron Source Reactor (MNSR), and thereby enhance the utilization of the reactor, has been tested using the MCNP4C code. The modification consisted of manually inserting into each of the reactor's inner irradiation tubes a chain of three connected polyethylene containers filled with water, with a total chain height of 11.5 cm. Replacement of the existing cadmium absorber with a B-10 absorber was needed as well; the rest of the core structure materials and dimensions remained unchanged. A 3-D neutronic model with the new modifications was developed to compare the neutronic parameters of the old and modified cores. The excess reactivities (ρ_ex) of the old and modified cores were 3.954 and 6.241 mk, the maximum reactor operation times were 428 and 1025 min, and the safety reactivity factors were 1.654 and 1.595, respectively. A 139% increase in the maximum reactor operation time was therefore obtained for the modified core. This increase enhances the utilization of the MNSR for long irradiations of unknown samples using the NAA technique and increases the amount of radioisotope production in the reactor. Copyright © 2016 Elsevier Ltd. All rights reserved.
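
    The quoted 139% gain is simply the relative change between the two operation times:

        old_min, new_min = 428, 1025                           # maximum operation time, minutes
        print(f"{100 * (new_min - old_min) / old_min:.0f}%")   # -> 139%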

  17. Evolutionary analysis of apolipoprotein E by Maximum Likelihood and complex network methods

    Directory of Open Access Journals (Sweden)

    Leandro de Jesus Benevides

    Full Text Available Abstract Apolipoprotein E (apo E) is a human glycoprotein with 299 amino acids, and it is a major component of very low density lipoproteins (VLDL) and a group of high-density lipoproteins (HDL). Phylogenetic studies are important to clarify how various apo E proteins are related in groups of organisms and whether they evolved from a common ancestor. Here, we aimed to perform a phylogenetic study on apo E carrying organisms. We employed a classical and robust method, Maximum Likelihood (ML), and compared the results with a more recent approach based on complex networks. Thirty-two apo E amino acid sequences were downloaded from NCBI. A clear separation could be observed among three major groups: mammals, fish and amphibians. The results obtained from the ML method, as well as from the constructed networks, showed two different groups: one with mammals only (C1) and another with fish (C2), and a single node with the single sequence available for an amphibian. The agreement between the results of the different methods shows that the complex networks approach is effective in phylogenetic studies. Furthermore, our results revealed the conservation of apo E among animal groups.

  18. Neandertal admixture in Eurasia confirmed by maximum-likelihood analysis of three genomes.

    Science.gov (United States)

    Lohse, Konrad; Frantz, Laurent A F

    2014-04-01

    Although there has been much interest in estimating histories of divergence and admixture from genomic data, it has proved difficult to distinguish recent admixture from long-term structure in the ancestral population. Thus, recent genome-wide analyses based on summary statistics have sparked controversy about the possibility of interbreeding between Neandertals and modern humans in Eurasia. Here we derive the probability of full mutational configurations in nonrecombining sequence blocks under both admixture and ancestral structure scenarios. Dividing the genome into short blocks gives an efficient way to compute maximum-likelihood estimates of parameters. We apply this likelihood scheme to triplets of human and Neandertal genomes and compare the relative support for a model of admixture from Neandertals into Eurasian populations after their expansion out of Africa against a history of persistent structure in their common ancestral population in Africa. Our analysis allows us to conclusively reject a model of ancestral structure in Africa and instead reveals strong support for Neandertal admixture in Eurasia at a higher rate (3.4-7.3%) than suggested previously. Using analysis and simulations we show that our inference is more powerful than previous summary statistics and robust to realistic levels of recombination.

  19. A comparison of two gluteus maximus EMG maximum voluntary isometric contraction positions

    Directory of Open Access Journals (Sweden)

    Bret Contreras

    2015-09-01

    Full Text Available Background. The purpose of this study was to compare the peak electromyography (EMG) of the most commonly used position in the literature, the prone bent-leg (90°) hip extension against manual resistance applied to the distal thigh (PRONE), to a novel position, the standing glute squeeze (SQUEEZE). Methods. Surface EMG electrodes were placed on the upper and lower gluteus maximus of thirteen recreationally active females (age = 28.9 years; height = 164 cm; body mass = 58.2 kg) before three maximum voluntary isometric contraction (MVIC) trials for each position were obtained in a randomized, counterbalanced fashion. Results. No statistically significant (p < 0.05) differences were observed between PRONE (upper: 91.94%; lower: 94.52%) and SQUEEZE (upper: 92.04%; lower: 85.12%) for either the upper or lower gluteus maximus. Neither PRONE nor SQUEEZE was more effective across all subjects. Conclusions. In agreement with other studies, no single testing position is ideal for every participant. Therefore, it is recommended that investigators employ multiple MVIC positions, when possible, to ensure accuracy. Future research should investigate a variety of gluteus maximus MVIC positions in heterogeneous samples.

  20. An Efficient Algorithm for the Maximum Distance Problem

    Directory of Open Access Journals (Sweden)

    Gabrielle Assunta Grün

    2001-12-01

    Full Text Available Efficient algorithms for temporal reasoning are essential in knowledge-based systems. This is central in many areas of Artificial Intelligence including scheduling, planning, plan recognition, and natural language understanding. As such, scalability is a crucial consideration in temporal reasoning. While reasoning in the interval algebra is NP-complete, reasoning in the less expressive point algebra is tractable. In this paper, we explore an extension to the work of Gerevini and Schubert which is based on the point algebra. In their seminal framework, temporal relations are expressed as a directed acyclic graph partitioned into chains and supported by a metagraph data structure, where time points or events are represented by vertices, and directed edges are labelled with < or ≤. They are interested in fast algorithms for determining the strongest relation between two events. They begin by developing fast algorithms for the case where all points lie on a chain. In this paper, we are interested in a generalization of this, namely the problem of finding the maximum "distance" between two vertices in a chain; this problem arises in real-world applications such as process control and crew scheduling. We describe an O(n) time preprocessing algorithm for the maximum distance problem on chains. It allows queries for the maximum number of < edges between two vertices to be answered in O(1) time. This matches the performance of the algorithm of Gerevini and Schubert for determining the strongest relation holding between two vertices in a chain.
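
    The chain case can be illustrated with plain prefix counts: O(n) preprocessing, then O(1) queries for the number of strict (<) edges between two vertices. A sketch of that idea (not the authors' metagraph implementation):

        def preprocess(edge_labels):
            """edge_labels[i] is '<' or '<=' for the edge from vertex i to i+1.
            Returns prefix counts of strict edges: O(n) time and space."""
            prefix = [0]
            for lab in edge_labels:
                prefix.append(prefix[-1] + (1 if lab == '<' else 0))
            return prefix

        def max_distance(prefix, i, j):
            """Number of '<' edges between vertices i and j (i <= j), in O(1)."""
            return prefix[j] - prefix[i]

        # Chain v0 < v1 <= v2 < v3: two strict edges between v0 and v3.
        p = preprocess(['<', '<=', '<'])
        print(max_distance(p, 0, 3))   # -> 2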

  1. A conductance maximum observed in an inward-rectifier potassium channel

    OpenAIRE

    1994-01-01

    One prediction of a multi-ion pore is that its conductance should reach a maximum and then begin to decrease as the concentration of permeant ion is raised equally on both sides of the membrane. A conductance maximum has been observed at the single-channel level in gramicidin and in a Ca(2+)-activated K+ channel at extremely high ion concentration (> 1,000 mM) (Hladky, S. B., and D. A. Haydon. 1972. Biochimica et Biophysica Acta. 274:294-312; Eisenman, G., J. Sandblom, and E. Neher. 1977. In ...

  2. 49 CFR 192.619 - Maximum allowable operating pressure: Steel or plastic pipelines.

    Science.gov (United States)

    2010-10-01

    ... plastic pipelines. 192.619 Section 192.619 Transportation Other Regulations Relating to Transportation... Operations § 192.619 Maximum allowable operating pressure: Steel or plastic pipelines. (a) No person may operate a segment of steel or plastic pipeline at a pressure that exceeds a maximum allowable operating...

  3. Adaptive Unscented Kalman Filter using Maximum Likelihood Estimation

    DEFF Research Database (Denmark)

    Mahmoudi, Zeinab; Poulsen, Niels Kjølstad; Madsen, Henrik

    2017-01-01

    The purpose of this study is to develop an adaptive unscented Kalman filter (UKF) by tuning the measurement noise covariance. We use the maximum likelihood estimation (MLE) and the covariance matching (CM) method to estimate the noise covariance. The multi-step prediction errors generated...

  4. The constraint rule of the maximum entropy principle

    NARCIS (Netherlands)

    Uffink, J.

    1995-01-01

    The principle of maximum entropy is a method for assigning values to probability distributions on the basis of partial information. In usual formulations of this and related methods of inference one assumes that this partial information takes the form of a constraint on allowed probability distributions.

  5. Maximum Permissible Concentrations and Negligible Concentrations for pesticides

    NARCIS (Netherlands)

    Crommentuijn T; Kalf DF; Polder MD; Posthumus R; Plassche EJ van de; CSR

    1997-01-01

    Maximum Permissible Concentrations (MPCs) and Negligible Concentrations (NCs) derived for a series of pesticides are presented in this report. These MPCs and NCs are used by the Ministry of Housing, Spatial Planning and the Environment (VROM) to set Environmental Quality Objectives. For some of the

  6. Phytophthora stricta isolated from Rhododendron maximum in Pennsylvania

    Science.gov (United States)

    During a survey in October 2013, in the Michaux State Forest in Pennsylvania , necrotic Rhododendron maximum leaves were noticed on mature plants alongside a stream. Symptoms were nondescript necrotic lesions at the tips of mature leaves. Colonies resembling a Phytophthora sp. were observed from c...

  7. www.common-metrics.org: a web application to estimate scores from different patient-reported outcome measures on a common scale.

    Science.gov (United States)

    Fischer, H Felix; Rose, Matthias

    2016-10-19

    Recently, a growing number of Item-Response Theory (IRT) models has been published, which allow estimation of a common latent variable from data derived by different Patient Reported Outcomes (PROs). When using data from different PROs, direct estimation of the latent variable has some advantages over the use of sum score conversion tables, but it requires substantial proficiency in the field of psychometrics to fit such models using contemporary IRT software. We developed a web application (http://www.common-metrics.org) which makes it easier to estimate latent variable scores, using IRT models that calibrate different measures on instrument-independent scales. Currently, the application allows estimation using six different IRT models for Depression, Anxiety, and Physical Function. Based on published item parameters, users of the application can directly compute latent trait estimates using expected a posteriori (EAP) estimation for sum scores as well as for specific response patterns, Bayes modal (MAP), weighted likelihood (WLE) and maximum likelihood (ML) methods, and under three different prior distributions. The obtained estimates can be downloaded and analyzed using standard statistical software. This application enhances the usability of IRT modeling for researchers by allowing comparison of the latent trait estimates over different PROs, such as the Patient Health Questionnaire Depression (PHQ-9) and Anxiety (GAD-7) scales, the Center of Epidemiologic Studies Depression Scale (CES-D), the Beck Depression Inventory (BDI), PROMIS Anxiety and Depression Short Forms and others. Advantages of this approach include comparability of data derived with different measures and tolerance against missing values. The validity of the underlying models needs to be investigated in the future.
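
    To illustrate the kind of estimate the application returns, here is a minimal EAP computation for a two-parameter logistic (2PL) IRT model with a standard-normal prior and grid quadrature; the item parameters and response pattern are invented, not those calibrated by common-metrics.org:

        import numpy as np

        def eap_2pl(responses, discrim, difficulty, grid=np.linspace(-4, 4, 161)):
            """Expected-a-posteriori latent trait under a 2PL model."""
            # P(endorse | theta) for each item at every quadrature point.
            p = 1 / (1 + np.exp(-discrim[:, None] * (grid[None, :] - difficulty[:, None])))
            lik = np.prod(np.where(np.array(responses)[:, None] == 1, p, 1 - p), axis=0)
            post = lik * np.exp(-grid**2 / 2)   # likelihood times N(0,1) prior
            post /= post.sum()
            return float(np.sum(grid * post))   # posterior mean = EAP estimate

        # Hypothetical 3-item scale: respondent endorses items 1 and 2 only.
        theta = eap_2pl([1, 1, 0], np.array([1.2, 0.8, 1.5]), np.array([-1.0, 0.0, 1.0]))
        print(f"EAP estimate: {theta:.2f}")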

  8. PARTICLE SWARM OPTIMIZATION BASED MAXIMUM PHOTOVOLTAIC POWER TRACKING UNDER DIFFERENT CONDITIONS

    Directory of Open Access Journals (Sweden)

    Y. Labbi

    2015-08-01

    Full Text Available Photovoltaic electricity is seen as an important source of renewable energy. The photovoltaic array is an unstable source of power, since the peak power point depends on the temperature and the irradiation level; maximum peak power point tracking is therefore necessary for maximum efficiency. In this work, Particle Swarm Optimization (PSO) is proposed as a maximum power point tracker for a photovoltaic panel, used to locate the optimal MPP so that maximum solar panel power is generated under different operating conditions. A photovoltaic system including a solar panel and a PSO MPP tracker is modelled and simulated; the simulations show the effectiveness of PSO in drawing maximum energy and its fast response to changes in working conditions.
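
    A minimal PSO loop over a toy single-peak P-V curve illustrates the idea; the panel model, swarm size and coefficients below are placeholders, not the paper's simulation settings:

        import numpy as np

        def pv_power(v):
            """Toy P-V curve peaking near 17 V (placeholder, not a panel model)."""
            i = 5.0 * (1 - np.exp((v - 21.0) / 1.5))   # crude diode-like I(V)
            return np.maximum(v * i, 0.0)

        rng = np.random.default_rng(0)
        n, w, c1, c2 = 10, 0.6, 1.5, 1.5               # swarm size and PSO weights
        pos = rng.uniform(0, 21, n)                    # particle positions = voltages
        vel = np.zeros(n)
        pbest, pbest_val = pos.copy(), pv_power(pos)
        gbest = pbest[np.argmax(pbest_val)]

        for _ in range(40):
            r1, r2 = rng.random(n), rng.random(n)
            vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
            pos = np.clip(pos + vel, 0, 21)
            val = pv_power(pos)
            better = val > pbest_val
            pbest[better], pbest_val[better] = pos[better], val[better]
            gbest = pbest[np.argmax(pbest_val)]

        print(f"MPP voltage = {gbest:.2f} V, power = {pv_power(gbest):.1f} W")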

  9. 5 CFR 9901.312 - Maximum rates of base salary and adjusted salary.

    Science.gov (United States)

    2010-01-01

    ... 5 Administrative Personnel 3 2010-01-01 2010-01-01 false Maximum rates of base salary and adjusted salary. 9901.312 Section 9901.312 Administrative Personnel DEPARTMENT OF DEFENSE HUMAN RESOURCES....312 Maximum rates of base salary and adjusted salary. (a) Subject to § 9901.105, the Secretary may...

  10. The Betz-Joukowsky limit for the maximum power coefficient of wind turbines

    DEFF Research Database (Denmark)

    Okulov, Valery; van Kuik, G.A.M.

    2009-01-01

    The article addresses the history of an important scientific result in wind energy. The maximum efficiency of an ideal wind turbine rotor is well known as the ‘Betz limit’, named after the German scientist who formulated this maximum in 1920. Also Lanchester, a British scientist, is associated...
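
    The limit itself follows from one-dimensional actuator-disc theory (a standard derivation, included here for context):

        C_P(a) = 4a(1-a)^2, \qquad \frac{dC_P}{da} = 4(1-a)(1-3a) = 0 \;\Rightarrow\; a = \tfrac{1}{3}, \qquad C_{P,\max} = \tfrac{16}{27} \approx 0.593,

    where a is the axial induction factor at the rotor disc.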

  11. Multi-Channel Maximum Likelihood Pitch Estimation

    DEFF Research Database (Denmark)

    Christensen, Mads Græsbøll

    2012-01-01

    In this paper, a method for multi-channel pitch estimation is proposed. The method is a maximum likelihood estimator and is based on a parametric model where the signals in the various channels share the same fundamental frequency but can have different amplitudes, phases, and noise characteristics. This essentially means that the model allows for different conditions in the various channels, like different signal-to-noise ratios, microphone characteristics and reverberation. Moreover, the method does not assume that a certain array structure is used but rather relies on a more general model and is hence...

  12. Mercury and Lead Levels in Common Soaps from Local Markets in Mashhad, Iran

    Directory of Open Access Journals (Sweden)

    Anahita Alizadeh

    2017-06-01

    Full Text Available Background: The potential toxicity of human exposure to heavy metals from diverse sources has been investigated, but few if any studies have examined Iranian soaps. Hence, we aimed to determine the presence of lead and mercury in selected soaps commonly used in Mashhad, northeastern Iran. Methods: Different common brands of cosmetic, hygiene and contraband soaps were purchased from the retail market of Mashhad in 2016. Levels of these metals were determined using the atomic absorption spectroscopy technique. Results: All samples contained mercury and lead, but the levels did not exceed the maximum acceptable levels (1 µg/g for mercury and 20 µg/g for lead) recommended by the FDA. The mean levels of mercury were 0.02, 0.08 and 0.23 µg/g, respectively, in cosmetic, hygiene and contraband soaps; the corresponding levels for lead were 0.10, 0.19 and 0.13 µg/g. The highest mercury and lead levels were detected in the Halazoon contraband and P hygiene brands, respectively. Conclusion: The content of mercury and lead in common soaps is currently not a concern in this city. However, as the human body may be exposed to several toxic metals from different care products simultaneously, the cumulative toxic effects of these metals must be considered important.

  13. Handelman's hierarchy for the maximum stable set problem

    NARCIS (Netherlands)

    Laurent, M.; Sun, Z.

    2014-01-01

    The maximum stable set problem is a well-known NP-hard problem in combinatorial optimization, which can be formulated as the maximization of a quadratic square-free polynomial over the (Boolean) hypercube. We investigate a hierarchy of linear programming relaxations for this problem, based on a

  14. Applications of the Maximum Entropy Method in superspace

    Czech Academy of Sciences Publication Activity Database

    van Smaalen, S.; Palatinus, Lukáš

    2004-01-01

    Roč. 305, - (2004), s. 57-62 ISSN 0015-0193 Grant - others:DFG and FCI(DE) XX Institutional research plan: CEZ:AV0Z1010914 Keywords : Maximum Entropy Method * modulated structures * charge density Subject RIV: BM - Solid Matter Physics ; Magnetism Impact factor: 0.517, year: 2004

  15. 44 CFR 208.12 - Maximum Pay Rate Table.

    Science.gov (United States)

    2010-10-01

    ...) Physicians. DHS uses the latest Special Salary Rate Table Number 0290 for Medical Officers (Clinical... Personnel, in which case the Maximum Pay Rate Table would not apply. (3) Compensation for Sponsoring Agency... organizations, e.g., HMOs or medical or engineering professional associations, under the revised definition of...

  16. Maximum likelihood estimation of the attenuated ultrasound pulse

    DEFF Research Database (Denmark)

    Rasmussen, Klaus Bolding

    1994-01-01

    The attenuated ultrasound pulse is divided into two parts: a stationary basic pulse and a nonstationary attenuation pulse. A standard ARMA model is used for the basic pulse, and a nonstandard ARMA model is derived for the attenuation pulse. The maximum likelihood estimator of the attenuated...

  17. correlation between maximum dry density and cohesion of ...

    African Journals Online (AJOL)

    HOD

    investigation on sandy soils to determine the correlation between relative density and compaction test parameter. Using twenty soil samples, they were able to develop correlations between relative density, coefficient of uniformity and maximum dry density. Khafaji [5] using standard proctor compaction method carried out an ...

  18. What Causal Forces Shape Internet Connectivity at the AS-level?

    National Research Council Canada - National Science Library

    Chang, Hyunseok; Jamin, Sugih; Willinger, Walter

    2003-01-01

    ...." By focusing on the AS subgraph ASpc whose links represent provider-customer relationships, we present an empirical study that identifies three crucial causal forces at work in the design of AS connectivity: (i) AS-geography, i.e...

  19. The scaling of maximum and basal metabolic rates of mammals and birds

    Science.gov (United States)

    Barbosa, Lauro A.; Garcia, Guilherme J. M.; da Silva, Jafferson K. L.

    2006-01-01

    Allometric scaling is one of the most pervasive laws in biology. Its origin, however, is still a matter of dispute. Recent studies have established that maximum metabolic rate scales with an exponent larger than that found for basal metabolism. This unpredicted result sets a challenge that can decide which of the concurrent hypotheses is the correct theory. Here, we show that both scaling laws can be deduced from a single network model. Besides the 3/4-law for basal metabolism, the model predicts that maximum metabolic rate scales as M, maximum heart rate as M, and muscular capillary density as M, in agreement with data.

  20. Maximum field capability of energy saver superconducting magnets

    International Nuclear Information System (INIS)

    Turkot, F.; Cooper, W.E.; Hanft, R.; McInturff, A.

    1983-01-01

    At an energy of 1 TeV the superconducting cable in the Energy Saver dipole magnets will be operating at ca. 96% of its nominal short sample limit; the corresponding number in the quadrupole magnets will be 81%. All magnets for the Saver are individually tested for maximum current capability under two modes of operation; some 900 dipoles and 275 quadrupoles have now been measured. The dipole winding is composed of four individually wound coils which in general come from four different reels of cable. As part of the magnet fabrication quality control a short piece of cable from both ends of each reel has its critical current measured at 5 T and 4.3 K. In this paper the authors describe and present the statistical results of the maximum field tests (including quench and cycle) on Saver dipole and quadrupole magnets and explore the correlation of these tests with cable critical current.