WorldWideScience

Sample records for computing compression trees

  1. Tree compression with top trees

    Bille, Philip; Gørtz, Inge Li; Landau, Gad M.

    2013-01-01

    We introduce a new compression scheme for labeled trees based on top trees [3]. Our compression scheme is the first to simultaneously take advantage of internal repeats in the tree (as opposed to the classical DAG compression that only exploits rooted subtree repeats) while also supporting fast...

  2. Tree compression with top trees

    Bille, Philip; Gørtz, Inge Li; Landau, Gad M.

    2015-01-01

    We introduce a new compression scheme for labeled trees based on top trees. Our compression scheme is the first to simultaneously take advantage of internal repeats in the tree (as opposed to the classical DAG compression that only exploits rooted subtree repeats) while also supporting fast...

  3. Heterogeneous Compression of Large Collections of Evolutionary Trees.

    Matthews, Suzanne J

    2015-01-01

    Compressing heterogeneous collections of trees is an open problem in computational phylogenetics. In a heterogeneous tree collection, each tree can contain a unique set of taxa. An ideal compression method would allow for the efficient archival of large tree collections and enable scientists to identify common evolutionary relationships over disparate analyses. In this paper, we extend TreeZip to compress heterogeneous collections of trees. TreeZip is the most efficient algorithm for compressing homogeneous tree collections. To the best of our knowledge, no other domain-based compression algorithm exists that can handle large heterogeneous tree collections or enable their rapid analysis. Our experimental results indicate that TreeZip averages 89.03 percent (72.69 percent) space savings on unweighted (weighted) collections of trees when the level of heterogeneity in a collection is moderate. The organization of the TRZ file allows for efficient computations over heterogeneous data. For example, consensus trees can be computed in mere seconds. Lastly, combining the TreeZip compressed (TRZ) file with general-purpose compression yields average space savings of 97.34 percent (81.43 percent) on unweighted (weighted) collections of trees. Our results lead us to believe that TreeZip will prove invaluable in the efficient archival of tree collections, and will enable scientists to develop novel methods for relating heterogeneous collections of trees.

  4. An efficient and extensible approach for compressing phylogenetic trees

    Matthews, Suzanne J

    2011-01-01

    Background: Biologists require new algorithms to efficiently compress and store their large collections of phylogenetic trees. Our previous work showed that TreeZip is a promising approach for compressing phylogenetic trees. In this paper, we extend our TreeZip algorithm by handling trees with weighted branches. Furthermore, by using the compressed TreeZip file as input, we have designed an extensible decompressor that can extract subcollections of trees, compute majority and strict consensus trees, and merge tree collections using set operations such as union, intersection, and set difference. Results: On unweighted phylogenetic trees, TreeZip is able to compress Newick files in excess of 98%. On weighted phylogenetic trees, TreeZip is able to compress a Newick file by at least 73%. TreeZip can be combined with 7zip with little overhead, allowing space savings in excess of 99% (unweighted) and 92% (weighted). Unlike TreeZip, 7zip is not immune to branch rotations, and performs worse as the level of variability in the Newick string representation increases. Finally, since the TreeZip compressed text (TRZ) file contains all the semantic information in a collection of trees, we can easily filter and decompress a subset of trees of interest (such as the set of unique trees), or build the resulting consensus tree in a matter of seconds. We also show the ease with which set operations can be performed on TRZ files, at speeds quicker than those performed on Newick or 7zip compressed Newick files, and without loss of space savings. Conclusions: TreeZip is an efficient approach for compressing large collections of phylogenetic trees. The semantic and compact nature of the TRZ file allows it to be operated upon directly and quickly, without a need to decompress the original Newick file. We believe that TreeZip will be vital for compressing and archiving trees in the biological community. © 2011 Matthews and Williams; licensee BioMed Central Ltd.

  5. An efficient and extensible approach for compressing phylogenetic trees.

    Matthews, Suzanne J; Williams, Tiffani L

    2011-10-18

    Biologists require new algorithms to efficiently compress and store their large collections of phylogenetic trees. Our previous work showed that TreeZip is a promising approach for compressing phylogenetic trees. In this paper, we extend our TreeZip algorithm by handling trees with weighted branches. Furthermore, by using the compressed TreeZip file as input, we have designed an extensible decompressor that can extract subcollections of trees, compute majority and strict consensus trees, and merge tree collections using set operations such as union, intersection, and set difference. On unweighted phylogenetic trees, TreeZip is able to compress Newick files in excess of 98%. On weighted phylogenetic trees, TreeZip is able to compress a Newick file by at least 73%. TreeZip can be combined with 7zip with little overhead, allowing space savings in excess of 99% (unweighted) and 92% (weighted). Unlike TreeZip, 7zip is not immune to branch rotations, and performs worse as the level of variability in the Newick string representation increases. Finally, since the TreeZip compressed text (TRZ) file contains all the semantic information in a collection of trees, we can easily filter and decompress a subset of trees of interest (such as the set of unique trees), or build the resulting consensus tree in a matter of seconds. We also show the ease with which set operations can be performed on TRZ files, at speeds quicker than those performed on Newick or 7zip compressed Newick files, and without loss of space savings. TreeZip is an efficient approach for compressing large collections of phylogenetic trees. The semantic and compact nature of the TRZ file allows it to be operated upon directly and quickly, without a need to decompress the original Newick file. We believe that TreeZip will be vital for compressing and archiving trees in the biological community.

  6. Computer Tree

    Onur AĞAOĞLU

    2014-12-01

    It is crucial that gifted and talented students be supported through educational methods suited to their interests and skills. The science and arts centres (gifted centres) provide the Supportive Education Program for these students with an interdisciplinary perspective. In line with the program, an ICT lesson entitled “Computer Tree” serves to identify learner readiness levels and define the basic conceptual framework. A language teacher also contributes to the process, since the lesson calls on the creative function of basic linguistic skills. The teaching technique is applied to students aged 9-11. The lesson introduces an evaluation process covering the basic knowledge, skills, and interests of the target group. Furthermore, it includes an observation process by way of peer assessment. The lesson is considered a good sample of planning for any subject, given its unexpected convergence of visual and technical abilities with linguistic abilities.

  7. An efficient and extensible approach for compressing phylogenetic trees

    Matthews, Suzanne J; Williams, Tiffani L

    2011-01-01

    Background: Biologists require new algorithms to efficiently compress and store their large collections of phylogenetic trees. Our previous work showed that TreeZip is a promising approach for compressing phylogenetic trees. In this paper, we extend

  8. Joint compression and encryption using chaotically mutated Huffman trees

    Hermassi, Houcemeddine; Rhouma, Rhouma; Belghith, Safya

    2010-10-01

    This paper introduces a new scheme for joint compression and encryption using the Huffman codec. A basic tree is first generated for a given message and then based on a keystream generated from a chaotic map and depending from the input message, the basic tree is mutated without changing the statistical model. Hence a symbol can be coded by more than one codeword having the same length. The security of the scheme is tested against the known plaintext attack and the brute force attack. Performance analysis including encryption/decryption speed, additional computational complexity and compression ratio are given.
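
    As a rough illustration of the idea described above, the sketch below builds a Huffman tree and then mutates it by swapping the children of internal nodes according to a keystream from a logistic map. The logistic map, the thresholding rule, and the tree layout are illustrative assumptions, not the authors' exact construction; the point it demonstrates is that every symbol keeps a codeword of the same length, so the statistical model is unchanged.

```python
import heapq
from collections import Counter

def huffman_tree(message):
    """Build a Huffman tree; leaves are symbols, internal nodes are [left, right]."""
    heap = [(f, i, sym) for i, (sym, f) in enumerate(Counter(message).items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, count, [left, right]))
        count += 1
    return heap[0][2]

def logistic_bits(x, r=3.99):
    """Keystream from the logistic map x -> r*x*(1-x); bit = threshold at 0.5."""
    while True:
        x = r * x * (1 - x)
        yield 1 if x > 0.5 else 0

def mutate(node, keystream):
    """Swap the children of each internal node when the next chaotic bit is 1.
    Codeword lengths (the statistical model) are left unchanged."""
    if isinstance(node, list):
        if next(keystream):
            node[0], node[1] = node[1], node[0]
        mutate(node[0], keystream)
        mutate(node[1], keystream)

def codebook(node, prefix=""):
    if isinstance(node, list):
        return {**codebook(node[0], prefix + "0"), **codebook(node[1], prefix + "1")}
    return {node: prefix or "0"}

msg = "abracadabra"
tree = huffman_tree(msg)
plain = codebook(tree)                 # codebook before mutation
mutate(tree, logistic_bits(0.7))       # "key" = initial condition 0.7
cipher = codebook(tree)                # mutated codebook: same lengths, new codes
```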

  9. Compressed Subsequence Matching and Packed Tree Coloring

    Bille, Philip; Cording, Patrick Hagge; Gørtz, Inge Li

    2017-01-01

    We present a new algorithm for subsequence matching in grammar compressed strings. Given a grammar of size n compressing a string of size N and a pattern string of size m over an alphabet of size σ, our algorithm uses O(n + nσ/w) space and O(n + nσ/w + m log N log w · occ) or O(n + nσ/w log w + m log N · occ) time. Here w is the word size and occ is the number of minimal occurrences of the pattern. Our algorithm uses less space than previous algorithms and is also faster for occ = o(n/log N) occurrences. The algorithm uses a new data structure that allows us to efficiently find the next occurrence of a given character after a given position in a compressed string. This data structure in turn is based on a new data structure for the tree color problem, where the node colors are packed in bit strings.

  10. Wavelet compression of multichannel ECG data by enhanced set partitioning in hierarchical trees algorithm.

    Sharifahmadian, Ershad

    2006-01-01

    The set partitioning in hierarchical trees (SPIHT) algorithm is a very effective and computationally simple technique for image and signal compression. Here the author modifies the algorithm to provide even better performance: the enhanced set partitioning in hierarchical trees (ESPIHT) algorithm is faster than SPIHT. In addition, the proposed algorithm reduces the number of bits in the stored or transmitted bit stream. The author applies it to the compression of multichannel ECG data, and also presents a specific procedure, based on the modified algorithm, for more efficient compression of multichannel ECG data. The method was evaluated on selected records from the MIT-BIH arrhythmia database. According to the experiments, the proposed method attains significant results for compression of multichannel ECG data. Furthermore, for compressing a signal that is stored for a long time, the proposed multichannel compression method can be utilized efficiently.

  11. Computer calculations of compressibility of natural gas

    Abou-Kassem, J.H.; Mattar, L.; Dranchuk, P.M.

    An alternative method for the calculation of pseudo reduced compressibility of natural gas is presented. The method is incorporated into the routines by adding a single FORTRAN statement before the RETURN statement. The method is suitable for computer and hand-held calculator applications. It produces the same reduced compressibility as other available methods but is computationally superior. Tabular definitions of coefficients and comparisons of predicted pseudo reduced compressibility using different methods are presented, along with appended FORTRAN subroutines. 7 refs., 2 tabs.
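
    The quantity computed above can be sketched in a few lines: the pseudo-reduced compressibility is c_pr = 1/p_pr − (1/z)(∂z/∂p_pr) at constant pseudo-reduced temperature. The z-factor function below is an invented quadratic stand-in, not the Dranchuk-Abou-Kassem correlation, and the derivative is taken numerically rather than by the paper's single-statement analytical addition.

```python
def pseudo_reduced_compressibility(z_of_ppr, ppr, h=1e-5):
    """c_pr = 1/p_pr - (1/z) * (dz/dp_pr), with the derivative taken by a
    central finite difference (the paper instead adds one analytical statement)."""
    z = z_of_ppr(ppr)
    dz_dppr = (z_of_ppr(ppr + h) - z_of_ppr(ppr - h)) / (2 * h)
    return 1.0 / ppr - dz_dppr / z

# Invented quadratic stand-in for a z-factor correlation (NOT the DAK equation).
def z_demo(ppr):
    return 1.0 - 0.06 * ppr + 0.004 * ppr ** 2

c = pseudo_reduced_compressibility(z_demo, 2.0)
```

    For an ideal gas (z ≡ 1) the derivative term vanishes and the function reduces to c_pr = 1/p_pr, a quick sanity check on the formula.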

  12. Bi-level image compression with tree coding

    Martins, Bo; Forchhammer, Søren

    1996-01-01

    Presently, tree coders are the best bi-level image coders. The current ISO standard, JBIG, is a good example. By organising code length calculations properly a vast number of possible models (trees) can be investigated within reasonable time prior to generating code. Three general-purpose coders...... are constructed by this principle. A multi-pass free tree coding scheme produces superior compression results for all test images. A multi-pass fast free template coding scheme produces much better results than JBIG for difficult images, such as halftonings. Rissanen's algorithm `Context' is presented in a new...

  13. Interactive computer graphics applications for compressible aerodynamics

    Benson, Thomas J.

    1994-01-01

    Three computer applications have been developed to solve inviscid compressible fluids problems using interactive computer graphics. The first application is a compressible flow calculator which solves for isentropic flow, normal shocks, and oblique shocks or centered expansions produced by two dimensional ramps. The second application couples the solutions generated by the first application to a more graphical presentation of the results to produce a desk top simulator of three compressible flow problems: 1) flow past a single compression ramp; 2) flow past two ramps in series; and 3) flow past two opposed ramps. The third application extends the results of the second to produce a design tool which solves for the flow through supersonic external or mixed compression inlets. The applications were originally developed to run on SGI or IBM workstations running GL graphics. They are currently being extended to solve additional types of flow problems and modified to operate on any X-based workstation.
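
    The first application's core relations are standard gas dynamics and easy to sketch; the functions below implement the textbook isentropic total-to-static ratios and the normal-shock downstream Mach number for a calorically perfect gas. They are generic formulas, not the application's actual code.

```python
import math

def isentropic_ratios(M, gamma=1.4):
    """Total-to-static temperature and pressure ratios for isentropic flow at Mach M."""
    T0_T = 1 + (gamma - 1) / 2 * M ** 2
    p0_p = T0_T ** (gamma / (gamma - 1))
    return T0_T, p0_p

def normal_shock_mach(M1, gamma=1.4):
    """Mach number immediately downstream of a normal shock with upstream Mach M1."""
    num = 1 + (gamma - 1) / 2 * M1 ** 2
    den = gamma * M1 ** 2 - (gamma - 1) / 2
    return math.sqrt(num / den)

T0_T, p0_p = isentropic_ratios(1.0)   # sonic conditions: 1.2 and ~1.893 for air
M2 = normal_shock_mach(2.0)           # ~0.577 behind a Mach 2 normal shock
```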

  14. Stem compression reversibly reduces phloem transport in Pinus sylvestris trees.

    Henriksson, Nils; Tarvainen, Lasse; Lim, Hyungwoo; Tor-Ngern, Pantana; Palmroth, Sari; Oren, Ram; Marshall, John; Näsholm, Torgny

    2015-10-01

    Manipulating tree belowground carbon (C) transport enables investigation of the ecological and physiological roles of tree roots and their associated mycorrhizal fungi, as well as a range of other soil organisms and processes. Girdling remains the most reliable method for manipulating this flux and it has been used in numerous studies. However, girdling is destructive and irreversible. Belowground C transport is mediated by phloem tissue, pressurized through the high osmotic potential resulting from its high content of soluble sugars. We speculated that phloem transport may be reversibly blocked through the application of an external pressure on tree stems. Thus, we here introduce a technique based on compression of the phloem, which interrupts belowground flow of assimilates, but allows trees to recover when the external pressure is removed. Metal clamps were wrapped around the stems and tightened to achieve a pressure theoretically sufficient to collapse the phloem tissue, thereby aiming to block transport. The compression's performance was tested in two field experiments: a (13)C canopy labelling study conducted on small Scots pine (Pinus sylvestris L.) trees [2-3 m tall, 3-7 cm diameter at breast height (DBH)] and a larger study involving mature pines (∼15 m tall, 15-25 cm DBH) where stem respiration, phloem and root carbohydrate contents, and soil CO2 efflux were measured. The compression's effectiveness was demonstrated by the successful blockage of (13)C transport. Stem compression doubled stem respiration above treatment, reduced soil CO2 efflux by 34% and reduced phloem sucrose content by 50% compared with control trees. Stem respiration and soil CO2 efflux returned to normal within 3 weeks after pressure release, and (13)C labelling revealed recovery of phloem function the following year. Thus, we show that belowground phloem C transport can be reduced by compression, and we also demonstrate that trees recover after treatment, resuming C

  15. Using the Sadakane Compressed Suffix Tree to Solve the All-Pairs Suffix-Prefix Problem

    Maan Haj Rachid

    2014-01-01

    The all-pairs suffix-prefix matching problem is a basic problem in string processing. It has an application in the de novo genome assembly task, which is one of the major bioinformatics problems. Due to the large size of the input data, it is crucial to use fast and space-efficient solutions. In this paper, we present a space-economical solution to this problem using the generalized Sadakane compressed suffix tree. Furthermore, we present a parallel algorithm to provide more speed for shared memory computers. Our sequential and parallel algorithms are optimized by exploiting features of the Sadakane compressed index data structure. Experimental results show that our solution based on Sadakane’s compressed index consumes significantly less space than those based on noncompressed data structures such as the suffix tree and the enhanced suffix array. Our experimental results also show that our parallel algorithm is efficient and scales well with an increasing number of processors.
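
    For orientation, the problem itself (though not the compressed-suffix-tree solution) can be stated as a brute-force baseline: for every ordered pair of strings, find the longest suffix of one that is a prefix of the other. The quadratic sketch below is only the problem definition in code; the paper's contribution is solving it space-efficiently with Sadakane's index.

```python
def overlap(a, b):
    """Length of the longest suffix of a that is a prefix of b (brute force)."""
    for k in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:k]):
            return k
    return 0

def all_pairs_suffix_prefix(reads):
    """Overlap length for every ordered pair of distinct reads."""
    return {(i, j): overlap(reads[i], reads[j])
            for i in range(len(reads)) for j in range(len(reads)) if i != j}

reads = ["GATTACA", "ACAGT", "GTGAT"]   # toy reads; e.g. GATTACA -> ACAGT share "ACA"
ov = all_pairs_suffix_prefix(reads)
```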

  16. Computer aided fault tree synthesis

    Poucet, A.

    1983-01-01

    Nuclear as well as non-nuclear organisations have shown a growing interest in the field of reliability analysis during the past few years. This calls for the development of powerful, state-of-the-art methods and computer codes for performing such analysis on complex systems. In this report an interactive, computer-aided approach is discussed, based on the well known fault tree technique. The time-consuming and difficult task of manually constructing a system model (one or more fault trees) is replaced by an efficient interactive procedure in which the flexibility and the learning process inherent to the manual approach are combined with the accuracy in modelling and the speed of the fully automatic approach. The method presented is based upon the use of a library containing component models. The possibility of setting up a standard library of models of general use, and the link with a data collection system, are discussed. The method has been implemented in the CAFTS-SALP software package, which is briefly described in the report.

  17. XPath Node Selection over Grammar-Compressed Trees

    Sebastian Maneth

    2013-11-01

    XML document markup is highly repetitive and therefore well compressible using grammar-based compression. Downward, navigational XPath can be executed over grammar-compressed trees in PTIME: the query is translated into an automaton which is executed in one pass over the grammar. This result is well-known and has been mentioned before. Here we present precise bounds on the time complexity of this problem, in terms of big-O notation. For a given grammar and XPath query, we consider three different tasks: (1) to count the number of nodes selected by the query, (2) to materialize the pre-order numbers of the selected nodes, and (3) to serialize the subtrees at the selected nodes.

  18. Computing Refined Buneman Trees in Cubic Time

    Brodal, G.S.; Fagerberg, R.; Östlin, A.

    2003-01-01

    Reconstructing the evolutionary tree for a set of n species based on pairwise distances between the species is a fundamental problem in bioinformatics. Neighbor joining is a popular distance based tree reconstruction method. It always proposes fully resolved binary trees despite missing evidence...... in the underlying distance data. Distance based methods based on the theory of Buneman trees and refined Buneman trees avoid this problem by only proposing evolutionary trees whose edges satisfy a number of constraints. These trees might not be fully resolved but there is strong combinatorial evidence for each...... proposed edge. The currently best algorithm for computing the refined Buneman tree from a given distance measure has a running time of O(n^5) and a space consumption of O(n^4). In this paper, we present an algorithm with running time O(n^3) and space consumption O(n^2). The improved complexity of our...

  19. Computer aided construction of fault tree

    Kovacs, Z.

    1982-01-01

    Computer code CAT for the automatic construction of the fault tree is briefly described. Code CAT makes possible simple modelling of components using decision tables, it accelerates the fault tree construction process, constructs fault trees of different complexity, and is capable of harmonized co-operation with programs PREP and KITT 1,2 for fault tree analysis. The efficiency of program CAT, and thus the accuracy and completeness of the fault trees constructed, significantly depends on the compilation and sophistication of the decision tables. Currently, program CAT is used in co-operation with programs PREP and KITT 1,2 in reliability analyses of nuclear power plant systems. (B.S.)

  20. Matchgate circuits and compressed quantum computation

    Boyajian, W.L.

    2015-01-01

    Simulating a quantum system with a classical computer seems to be an unfeasible task due to the exponential growth of the dimension of the Hilbert space as a function of the number of considered systems. This is why the classical simulation of quantum behavior is usually restricted to a few qubits, although the numerical methods have become very powerful. However, as pointed out by [Feynman (1982)] and proven by [Lloyd (1996)], one quantum system can be used to simulate the behavior of another, the former being such that its constituents can be very precisely prepared, manipulated and measured. Many experiments realize such simulations nowadays, among them experiments utilizing ions in ion traps, NMR, or atoms in optical lattices (see for instance [Bloch et al. (2012); Lanyon et al. (2011); Houck et al. (2012)] and references therein). Here we are not concerned with this direct simulation of a quantum system. We are interested in a more economical way of simulating certain quantum behaviors. To this end, we use the fact that some classes of quantum algorithms, among them those based on matchgates, can be simulated classically efficiently. Moreover, it can be shown that matchgate circuits can also be simulated by an exponentially smaller quantum computer [Jozsa et al. (2009)]. There, the classical computation is restricted in space such that the computation has to be performed by the quantum computer and cannot be performed by the classical computer. In fact, it has been shown that the computational power of matchgate circuits running on n qubits is equivalent to that of space-bounded quantum computation with space restricted to being logarithmic in n [Jozsa et al. (2009)]. This thesis is organized as follows. In Part I, we recall some basic concepts of quantum mechanics, quantum computation and quantum simulation. Furthermore we discuss the main results of matchgate circuits and compressed quantum computation.
We also recall the XY model and its

  1. Computer simulation of fatigue under diametrical compression

    Carmona, H. A.; Kun, F.; Andrade Jr., J. S.; Herrmann, H. J.

    2006-01-01

    We study the fatigue fracture of disordered materials by means of computer simulations of a discrete element model. We extend a two-dimensional fracture model to capture the microscopic mechanisms relevant for fatigue, and we simulate the diametric compression of a disc-shaped specimen under a constant external force. The model allows us to follow the development of the fracture process on the macro- and micro-level, varying the relative influence of the mechanisms of damage accumulation over the ...

  2. Hydrolysis kinetics of tulip tree xylan in hot compressed water.

    Yoon, Junho; Lee, Hun Wook; Sim, Seungjae; Myint, Aye Aye; Park, Hee Jeong; Lee, Youn-Woo

    2016-08-01

    Lignocellulosic biomass, a promising renewable resource, can be converted into numerous valuable chemicals after enzymatic saccharification. However, the efficacy of enzymatic saccharification of lignocellulosic biomass is low; therefore, pretreatment is necessary to improve the efficiency. Here, a kinetic analysis was carried out on xylan hydrolysis, after hot compressed water pretreatment of the lignocellulosic biomass conducted at 180-220°C for 5-30 min, and on subsequent xylooligosaccharide hydrolysis. The weight ratio of fast-reacting xylan to slow-reacting xylan was 5.25 in tulip tree. Our kinetic results were applied to three different reaction systems to improve the pretreatment efficiency. We found that a semi-continuous reactor is promising. Lower reaction temperatures and shorter space times in the semi-continuous reactor are recommended for improving xylan conversion and xylooligosaccharide yield. In the theoretical calculation, 95% xylooligosaccharide yield and xylan conversion were achieved simultaneously with a high selectivity (desired product/undesired product) of 100 or more. Copyright © 2016. Published by Elsevier Ltd.

  3. Standard operating procedure for computing pangenome trees

    Snipen, L.; Ussery, David

    2010-01-01

    We present the pan-genome tree as a tool for visualizing similarities and differences between closely related microbial genomes within a species or genus. Distance between genomes is computed as a weighted relative Manhattan distance based on gene family presence/absence. The weights can be chose...
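
    The distance described above can be sketched generically: encode each genome as a 0/1 vector of gene-family presence, and take a weighted Manhattan distance normalised by the total weight. The uniform weights below are a placeholder; the paper's actual weighting scheme is not reproduced here.

```python
def weighted_relative_manhattan(x, y, w):
    """Weighted Manhattan distance on 0/1 gene-family profiles, normalised by
    the total weight so that fully disjoint gene repertoires are at distance 1."""
    num = sum(wi * abs(xi - yi) for xi, yi, wi in zip(x, y, w))
    return num / sum(w)

# Three toy genomes over five gene families (1 = family present in the genome).
gA = [1, 1, 1, 0, 0]
gB = [1, 1, 0, 1, 0]
gC = [0, 0, 1, 1, 1]
weights = [1.0] * 5          # uniform placeholder weights
d_ab = weighted_relative_manhattan(gA, gB, weights)
d_ac = weighted_relative_manhattan(gA, gC, weights)
```

    The resulting pairwise distance matrix can then be fed to any distance-based tree builder to draw the pan-genome tree.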

  4. Computer-aided Fault Tree Analysis

    Willie, R.R.

    1978-08-01

    A computer-oriented methodology for deriving minimal cut and path set families associated with arbitrary fault trees is discussed first. Then the use of the Fault Tree Analysis Program (FTAP), an extensive FORTRAN computer package that implements the methodology, is described. An input fault tree to FTAP may specify the system state as any logical function of subsystem or component state variables or complements of these variables. When fault tree logical relations involve complements of state variables, the analyst may instruct FTAP to produce a family of prime implicants, a generalization of the minimal cut set concept. FTAP can also identify certain subsystems associated with the tree as system modules and provide a collection of minimal cut set families that essentially expresses the state of the system as a function of these module state variables. Another FTAP feature allows a subfamily to be obtained when the family of minimal cut sets or prime implicants is too large to be found in its entirety; this subfamily consists only of sets that are interesting to the analyst in a special sense.

  5. Computer simulation of fatigue under diametrical compression

    Carmona, H. A.; Kun, F.; Andrade, J. S. Jr.; Herrmann, H. J.

    2007-01-01

    We study the fatigue fracture of disordered materials by means of computer simulations of a discrete element model. We extend a two-dimensional fracture model to capture the microscopic mechanisms relevant for fatigue and we simulate the diametric compression of a disc shape specimen under a constant external force. The model allows us to follow the development of the fracture process on the macrolevel and microlevel varying the relative influence of the mechanisms of damage accumulation over the load history and healing of microcracks. As a specific example we consider recent experimental results on the fatigue fracture of asphalt. Our numerical simulations show that for intermediate applied loads the lifetime of the specimen presents a power law behavior. Under the effect of healing, more prominent for small loads compared to the tensile strength of the material, the lifetime of the sample increases and a fatigue limit emerges below which no macroscopic failure occurs. The numerical results are in a good qualitative agreement with the experimental findings

  6. Prediction of the compression ratio for municipal solid waste using decision tree.

    Heshmati R, Ali Akbar; Mokhtari, Maryam; Shakiba Rad, Saeed

    2014-01-01

    The compression ratio of municipal solid waste (MSW) is an essential parameter for evaluation of waste settlement and landfill design. However, no appropriate model has been proposed to estimate the waste compression ratio so far. In this study, a decision tree method was utilized to predict the waste compression ratio (C'c). The tree was constructed using Quinlan's M5 algorithm. A reliable database retrieved from the literature was used to develop a practical model that relates C'c to waste composition and properties, including dry density, dry weight water content, and percentage of biodegradable organic waste using the decision tree method. The performance of the developed model was examined in terms of different statistical criteria, including correlation coefficient, root mean squared error, mean absolute error and mean bias error, recommended by researchers. The obtained results demonstrate that the suggested model is able to evaluate the compression ratio of MSW effectively.
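
    To make the approach concrete, the toy below fits a CART-style regression tree (not Quinlan's M5 model tree used in the paper) to invented waste-composition data, predicting a compression ratio C'c from dry density, water content and biodegradable-organic percentage. All feature values and targets are fabricated for illustration only.

```python
def split_score(rows, j, t):
    """Sum of squared errors over the two halves after splitting feature j at t."""
    def sse(ys):
        if not ys:
            return 0.0
        m = sum(ys) / len(ys)
        return sum((y - m) ** 2 for y in ys)
    left = [y for x, y in rows if x[j] <= t]
    right = [y for x, y in rows if x[j] > t]
    return sse(left) + sse(right)

def build_tree(rows, depth=2):
    """Greedy CART-style regression tree over (features, target) rows."""
    ys = [y for _, y in rows]
    leaf = sum(ys) / len(ys)
    if depth == 0 or len(rows) < 4:
        return leaf
    j, t = min(((j, x[j]) for x, _ in rows for j in range(len(x))),
               key=lambda jt: split_score(rows, *jt))
    left = [r for r in rows if r[0][j] <= t]
    right = [r for r in rows if r[0][j] > t]
    if not left or not right:
        return leaf
    return (j, t, build_tree(left, depth - 1), build_tree(right, depth - 1))

def predict(node, x):
    while isinstance(node, tuple):
        j, t, lo, hi = node
        node = lo if x[j] <= t else hi
    return node

# Fabricated rows: (dry density, water content %, organic fraction %) -> C'c
rows = [((0.50, 30, 20), 0.18), ((0.60, 40, 30), 0.22), ((0.40, 60, 50), 0.30),
        ((0.30, 70, 60), 0.34), ((0.55, 35, 25), 0.20), ((0.35, 65, 55), 0.32)]
tree = build_tree(rows)
```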

  7. Computer program for compressible flow network analysis

    Wilton, M. E.; Murtaugh, J. P.

    1973-01-01

    Program solves problem of an arbitrarily connected one dimensional compressible flow network with pumping in the channels and momentum balancing at flow junctions. Program includes pressure drop calculations for impingement flow and flow through pin fin arrangements, as currently found in many air cooled turbine bucket and vane cooling configurations.

  8. A checkpoint compression study for high-performance computing systems

    Ibtesham, Dewan [Univ. of New Mexico, Albuquerque, NM (United States). Dept. of Computer Science; Ferreira, Kurt B. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States). Scalable System Software Dept.; Arnold, Dorian [Univ. of New Mexico, Albuquerque, NM (United States). Dept. of Computer Science

    2015-02-17

    As high-performance computing systems continue to increase in size and complexity, higher failure rates and increased overheads for checkpoint/restart (CR) protocols have raised concerns about the practical viability of CR protocols for future systems. Previously, compression has proven to be a viable approach for reducing checkpoint data volumes and, thereby, reducing CR protocol overhead leading to improved application performance. In this article, we further explore compression-based CR optimization by exploring its baseline performance and scaling properties, evaluating whether improved compression algorithms might lead to even better application performance, and comparing checkpoint compression against and alongside other software- and hardware-based optimizations. The highlights of our results are: (1) compression is a very viable CR optimization; (2) generic, text-based compression algorithms appear to perform near optimally for checkpoint data compression, and faster compression algorithms will not lead to better application performance; (3) compression-based optimizations fare well against and alongside other software-based optimizations; and (4) while hardware-based optimizations outperform software-based ones, they are not as cost effective.
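
    Finding (2), that generic text-based compressors do well on checkpoint data, is easy to probe on synthetic data: compress a repetitive state buffer with zlib at several levels and compare ratios. The buffer below is an invented stand-in for real checkpoint contents, so the numbers only illustrate the measurement, not the paper's results.

```python
import zlib

# Invented stand-in for a checkpoint: a large, repetitive serialized state buffer.
checkpoint = ("step=42;" + "0.000000," * 4096).encode()

ratios = {}
for level in (1, 6, 9):
    ratios[level] = len(checkpoint) / len(zlib.compress(checkpoint, level))

# Lossless round trip, as required for checkpoint/restart.
assert zlib.decompress(zlib.compress(checkpoint, 9)) == checkpoint
```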

  9. Computer programs for optical dendrometer measurements of standing tree profiles

    Jacob R. Beard; Thomas G. Matney; Emily B. Schultz

    2015-01-01

    Tree profile equations are effective volume predictors. Diameter data for building these equations are collected from felled trees using diameter tapes and calipers or from standing trees using optical dendrometers. Developing and implementing a profile function from the collected data is a tedious and error prone task. This study created a computer program, Profile...

  10. Global tree network for computing structures enabling global processing operations

    Blumrich, Matthias A.; Chen, Dong; Coteus, Paul W.; Gara, Alan G.; Giampapa, Mark E.; Heidelberger, Philip; Hoenicke, Dirk; Steinmacher-Burow, Burkhard D.; Takken, Todd E.; Vranas, Pavlos M.

    2010-01-19

    A system and method for enabling high-speed, low-latency global tree network communications among processing nodes interconnected according to a tree network structure. The global tree network enables collective reduction operations to be performed during parallel algorithm operations executing in a computer structure having a plurality of the interconnected processing nodes. Router devices are included that interconnect the nodes of the tree via links to facilitate performance of low-latency global processing operations at nodes of the virtual tree and sub-tree structures. The global operations performed include one or more of: broadcast operations downstream from a root node to leaf nodes of a virtual tree, reduction operations upstream from leaf nodes to the root node in the virtual tree, and point-to-point message passing from any node to the root node. The global tree network is configurable to provide global barrier and interrupt functionality in asynchronous or synchronized manner, and, is physically and logically partitionable.
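
    The upstream reduction described above can be mimicked in software: repeatedly combine pairs of partial results, level by level, until a single value reaches the "root". This is a software analogy of the hardware collective, not the patented router logic.

```python
def tree_reduce(values, op):
    """Combine per-node partial results pairwise, level by level, toward a root,
    mirroring the upstream reduction of a binary tree of processing nodes.
    Precondition: values is non-empty."""
    while len(values) > 1:
        values = [op(values[i], values[i + 1]) if i + 1 < len(values) else values[i]
                  for i in range(0, len(values), 2)]
    return values[0]

# Global sum over eight "nodes", as in an allreduce: 0+1+...+7 = 28.
total = tree_reduce(list(range(8)), lambda a, b: a + b)
```

    A downstream broadcast is the mirror image: the root's result is copied back out along the same tree, one level per step.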

  11. Computational model of a whole tree combustor

    Bryden, K.M.; Ragland, K.W. [Univ. of Wisconsin, Madison, WI (United States)]

    1993-12-31

    A preliminary computational model has been developed for the whole tree combustor and compared to test results. In the simulation model presented, hardwood logs 15 cm in diameter are burned in a 4 m deep fuel bed. Solid and gas temperature, solid and gas velocity, and CO, CO2, H2O, HC and O2 profiles are calculated. This deep, fixed-bed combustor obtains high energy release rates per unit area due to the high inlet air velocity and extended reaction zone. The lowest portion of the bed is an oxidizing region, and the remainder of the bed acts as a gasification and drying region. The overfire air region completes the combustion. Approximately 40% of the energy is released in the lower oxidizing region. The wood consumption rate obtained from the computational model is 4,110 kg/m^2-hr, which matches well the consumption rate of 3,770 kg/m^2-hr observed during the peak test period of the Aurora, MN test. The predicted heat release rate is 16 MW/m^2 (5.0 x 10^6 Btu/hr-ft^2).

  12. FastTree: Computing Large Minimum-Evolution Trees with Profiles instead of a Distance Matrix

    Price, Morgan N.; Dehal, Paramvir S.; Arkin, Adam P.

    2009-07-31

    Gene families are growing rapidly, but standard methods for inferring phylogenies do not scale to alignments with over 10,000 sequences. We present FastTree, a method for constructing large phylogenies and for estimating their reliability. Instead of storing a distance matrix, FastTree stores sequence profiles of internal nodes in the tree. FastTree uses these profiles to implement neighbor-joining and uses heuristics to quickly identify candidate joins. FastTree then uses nearest-neighbor interchanges to reduce the length of the tree. For an alignment with N sequences, L sites, and a different characters, a distance matrix requires O(N^2) space and O(N^2 L) time, but FastTree requires just O( NLa + N sqrt(N) ) memory and O( N sqrt(N) log(N) L a ) time. To estimate the tree's reliability, FastTree uses local bootstrapping, which gives another 100-fold speedup over a distance matrix. For example, FastTree computed a tree and support values for 158,022 distinct 16S ribosomal RNAs in 17 hours and 2.4 gigabytes of memory. Just computing pairwise Jukes-Cantor distances and storing them, without inferring a tree or bootstrapping, would require 17 hours and 50 gigabytes of memory. In simulations, FastTree was slightly more accurate than neighbor joining, BIONJ, or FastME; on genuine alignments, FastTree's topologies had higher likelihoods. FastTree is available at http://microbesonline.org/fasttree.
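    A minimal sketch of the profile idea, assuming aligned DNA sequences and the simple uncorrected profile distance (the real FastTree adds distance corrections, weighting and join heuristics not shown here):

```python
import numpy as np

def profile(seqs, alphabet="ACGT"):
    """Frequency profile of an alignment: one row per site, one column
    per character, entries are character frequencies at that site."""
    n, L = len(seqs), len(seqs[0])
    p = np.zeros((L, len(alphabet)))
    for s in seqs:
        for i, c in enumerate(s):
            p[i, alphabet.index(c)] += 1
    return p / n

def profile_distance(p, q):
    """Expected fraction of mismatching characters between two profiles:
    1 minus the average per-site probability of drawing the same character."""
    return 1.0 - np.mean(np.sum(p * q, axis=1))

a = profile(["ACGT", "ACGT"])
b = profile(["ACGA", "ACGA"])
print(profile_distance(a, b))  # one differing site out of four -> 0.25
```

    Because an internal node can likewise be summarized as a profile, neighbor-joining can proceed without ever materializing the O(N^2) distance matrix.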

  13. Cafts: computer aided fault tree analysis

    Poucet, A.

    1985-01-01

    The fault tree technique has become a standard tool for the analysis of the safety and reliability of complex systems. In spite of the costs, which may be high for a complete and detailed analysis of a complex plant, the fault tree technique is popular and its benefits are fully recognized. Applications of automated fault tree construction codes, however, have mostly been restricted to simple academic examples and rarely concern complex, real-world systems. In this paper an interactive approach to fault tree construction is presented. The aim is not to replace the analyst, but to offer him an intelligent tool which can assist him in modeling complex systems. Using the CAFTS method, the analyst interactively constructs a fault tree in two phases: (1) in the first phase he generates an overall failure logic structure of the system, the macrofault tree; here CAFTS features an expert system approach to assist the analyst, making use of a knowledge base containing generic rules on the behavior of subsystems and components; (2) in the second phase the macrofault tree is further refined and transformed into a fully detailed and quantified fault tree, using a library of plant-specific component failure models.

  14. A MODIFIED EMBEDDED ZERO-TREE WAVELET METHOD FOR MEDICAL IMAGE COMPRESSION

    T. Celine Therese Jenny

    2010-11-01

    The Embedded Zero-tree Wavelet (EZW) method is a lossy compression method that allows for progressive transmission of a compressed image. By exploiting the natural zero-trees found in a wavelet-decomposed image, the EZW algorithm is able to encode large portions of insignificant regions of a still image with a minimal number of bits. The upshot of this encoding is an algorithm that is able to achieve relatively high peak signal-to-noise ratios (PSNR) at high compression levels. Vector Quantization (VQ) can be performed as a post-processing step to further reduce the coded file size; it reduces redundancy in the image data so that the data can be stored or transmitted in an efficient form. Experimental results demonstrate that the proposed method outperforms several well-known lossless image compression techniques for still images that contain 256 colors or less.
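    A much-simplified sketch of one EZW dominant (significance) pass, assuming the wavelet coefficients are already arranged in a parent-child quadtree given as a dictionary (real EZW also performs subordinate refinement passes and a specific scan order):

```python
def dominant_pass(coeffs, children, T, root=0):
    """Emit one EZW-style symbol per scanned node for threshold T:
    P/N = significant positive/negative, T = zerotree root (the whole
    insignificant subtree is coded with this single symbol), Z = isolated
    zero (insignificant, but with a significant descendant)."""
    def subtree_insignificant(i):
        return abs(coeffs[i]) < T and all(
            subtree_insignificant(c) for c in children.get(i, []))
    symbols, stack = {}, [root]
    while stack:
        i = stack.pop()
        if abs(coeffs[i]) >= T:
            symbols[i] = "P" if coeffs[i] > 0 else "N"
            stack.extend(children.get(i, []))
        elif subtree_insignificant(i):
            symbols[i] = "T"   # descendants are skipped entirely
        else:
            symbols[i] = "Z"
            stack.extend(children.get(i, []))
    return symbols

print(dominant_pass([34, -20, 3, 2, 1, 0],
                    {0: [1, 2], 1: [3, 4], 2: [5]}, T=16))
```

    The zerotree symbol is what lets EZW spend almost no bits on large insignificant regions.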

  15. Computer aided fault tree construction for electrical systems

    Fussell, J.B.

    1975-01-01

    A technique is presented for automated construction of the Boolean failure logic diagram, called the fault tree, for electrical systems. The method is a technique for synthesizing a fault tree from system-independent component characteristics. Terminology is defined and heuristic examples are given for all phases of the model. The computer constructed fault trees are in conventional format, use conventional symbols, and are deductively constructed from the main failure of interest to the individual component failures. The synthesis technique is generally applicable to automated fault tree construction for other types of systems

  16. Adaptive compressive ghost imaging based on wavelet trees and sparse representation.

    Yu, Wen-Kai; Li, Ming-Fei; Yao, Xu-Ri; Liu, Xue-Feng; Wu, Ling-An; Zhai, Guang-Jie

    2014-03-24

    Compressed sensing is a theory by which an image can be reconstructed almost perfectly from only a few measurements, by finding its sparsest representation. However, the computation time consumed for large images may be a few hours or more. In this work, we demonstrate, both theoretically and experimentally, a method that combines the advantages of adaptive computational ghost imaging and compressed sensing, which we call adaptive compressive ghost imaging, whereby both the reconstruction time and the number of measurements required for any image size can be significantly reduced. The technique can be used to improve the performance of all computational ghost imaging protocols, especially when measuring ultra-weak or noisy signals, and can be extended to imaging applications at any wavelength.

  17. Identifying failure in a tree network of a parallel computer

    Archer, Charles J.; Pinnow, Kurt W.; Wallenfelt, Brian P.

    2010-08-24

    Methods, parallel computers, and products are provided for identifying failure in a tree network of a parallel computer. The parallel computer includes one or more processing sets including an I/O node and a plurality of compute nodes. For each processing set embodiments include selecting a set of test compute nodes, the test compute nodes being a subset of the compute nodes of the processing set; measuring the performance of the I/O node of the processing set; measuring the performance of the selected set of test compute nodes; calculating a current test value in dependence upon the measured performance of the I/O node of the processing set, the measured performance of the set of test compute nodes, and a predetermined value for I/O node performance; and comparing the current test value with a predetermined tree performance threshold. If the current test value is below the predetermined tree performance threshold, embodiments include selecting another set of test compute nodes. If the current test value is not below the predetermined tree performance threshold, embodiments include selecting from the test compute nodes one or more potential problem nodes and testing individually potential problem nodes and links to potential problem nodes.

  18. Efficient Computation of Popular Phylogenetic Tree Measures

    Tsirogiannis, Constantinos; Sandel, Brody Steven; Cheliotis, Dimitris

    2012-01-01

    Given a phylogenetic tree $\mathcal{T}$ of n nodes and a sample R of its tips (leaf nodes), a very common problem in ecological and evolutionary research is to evaluate a distance measure for the elements in R. Two of the most common measures of this kind are the Mean Pairwise Distance (MPD) ...
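    The Mean Pairwise Distance can be sketched directly from its definition, assuming the tree is given as a weighted adjacency map (a naive O(|R|^2 n) version; the paper's point is precisely that such measures can be computed far more efficiently):

```python
from collections import deque
from itertools import combinations

def path_length(adj, u, v):
    """Sum of branch lengths on the unique u-v path in a tree (BFS)."""
    prev = {u: (None, 0.0)}
    queue = deque([u])
    while queue:
        x = queue.popleft()
        if x == v:
            break
        for y, w in adj[x]:
            if y not in prev:
                prev[y] = (x, w)
                queue.append(y)
    dist, node = 0.0, v
    while prev[node][0] is not None:   # walk back to u, summing weights
        dist += prev[node][1]
        node = prev[node][0]
    return dist

def mean_pairwise_distance(adj, tips):
    pairs = list(combinations(tips, 2))
    return sum(path_length(adj, u, v) for u, v in pairs) / len(pairs)

# Hypothetical tiny tree, in Newick roughly ((A:1,B:1):1,C:2);
adj = {
    "root": [("x", 1.0), ("C", 2.0)],
    "x": [("root", 1.0), ("A", 1.0), ("B", 1.0)],
    "A": [("x", 1.0)], "B": [("x", 1.0)], "C": [("root", 2.0)],
}
print(mean_pairwise_distance(adj, ["A", "B", "C"]))  # (2+4+4)/3
```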

  19. A compendium of computer codes in fault tree analysis

    Lydell, B.

    1981-03-01

    In the past ten years principles and methods for a unified system reliability and safety analysis have been developed. Fault tree techniques serve as a central feature of unified system analysis, and there exists a specific discipline within system reliability concerned with the theoretical aspects of fault tree evaluation. Ever since the fault tree concept was established, computer codes have been developed for qualitative and quantitative analyses. In particular the presentation of the kinetic tree theory and the PREP-KITT code package has influenced the present use of fault trees and the development of new computer codes. This report is a compilation of some of the better known fault tree codes in use in system reliability. Numerous codes are available and new codes are continuously being developed. The report is designed to address the specific characteristics of each code listed. A review of the theoretical aspects of fault tree evaluation is presented in an introductory chapter, the purpose of which is to give a framework for the validity of the different codes. (Auth.)

  20. WAMCUT, a computer code for fault tree evaluation. Final report

    Erdmann, R.C.

    1978-06-01

    WAMCUT is a code in the WAM family which produces the minimum cut sets (MCS) for a given fault tree. The MCS are useful as they provide a qualitative evaluation of a system, as well as providing a means of determining the probability distribution function for the top of the tree. The program is very efficient and will produce all the MCS in a very short computer time span. 22 figures, 4 tables

  1. Computing all hybridization networks for multiple binary phylogenetic input trees.

    Albrecht, Benjamin

    2015-07-30

    The computation of phylogenetic trees on the same set of species based on different orthologous genes can lead to incongruent trees. One possible explanation for this behavior is interspecific hybridization events recombining genes of different species. An important approach to analyzing such events is the computation of hybridization networks. This work presents the first algorithm computing the hybridization number as well as a set of representative hybridization networks for multiple binary phylogenetic input trees on the same set of taxa. To improve its practical runtime, we show how this algorithm can be parallelized. Moreover, we demonstrate the efficiency of the software Hybroscale, containing an implementation of our algorithm, by comparing it to PIRNv2.0, which is so far the best available software computing the exact hybridization number for multiple binary phylogenetic trees on the same set of taxa. The algorithm is part of the software Hybroscale, which was developed specifically for the investigation of hybridization networks, including their computation and visualization. Hybroscale is freely available and runs on all three major operating systems. Our simulation study indicates that our approach is on average 100 times faster than PIRNv2.0. Moreover, we show how Hybroscale improves the interpretation of the reported hybridization networks by adding certain features to its graphical representation.

  2. Parallel peak pruning for scalable SMP contour tree computation

    Carr, Hamish A. [Univ. of Leeds (United Kingdom)]; Weber, Gunther H. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Univ. of California, Davis, CA (United States)]; Sewell, Christopher M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)]; Ahrens, James P. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)]

    2017-03-09

    As data sets grow to exascale, automated data analysis and visualisation are increasingly important, to intermediate human understanding and to reduce demands on disk storage via in situ analysis. Trends in the architecture of high-performance computing systems necessitate analysis algorithms that make effective use of combinations of massively multicore and distributed systems. One of the principal analytic tools is the contour tree, which analyses relationships between contours to identify features of more than local importance. Unfortunately, the predominant algorithms for computing the contour tree are explicitly serial and founded on serial metaphors, which has limited the scalability of this form of analysis. While there is some work on distributed contour tree computation, and separately on hybrid GPU-CPU computation, there is no efficient algorithm with strong formal guarantees on performance allied with fast practical performance. In this paper, we report the first shared-memory SMP algorithm for fully parallel contour tree computation, with formal guarantees of O(lg n lg t) parallel steps and O(n lg n) work, and implementations with up to 10x parallel speed-up in OpenMP and up to 50x speed-up in NVIDIA Thrust.

  3. Computer-oriented approach to fault-tree construction

    Salem, S.L.; Apostolakis, G.E.; Okrent, D.

    1976-11-01

    A methodology for systematically constructing fault trees for general complex systems is developed and applied, via the Computer Automated Tree (CAT) program, to several systems. A means of representing component behavior by decision tables is presented. The method developed allows the modeling of components with various combinations of electrical, fluid and mechanical inputs and outputs. Each component can have multiple internal failure mechanisms which combine with the states of the inputs to produce the appropriate output states. The generality of this approach allows not only the modeling of hardware, but human actions and interactions as well. A procedure for constructing and editing fault trees, either manually or by computer, is described. The techniques employed result in a complete fault tree, in standard form, suitable for analysis by current computer codes. Methods of describing the system, defining boundary conditions and specifying complex TOP events are developed in order to set up the initial configuration for which the fault tree is to be constructed. The approach used allows rapid modifications of the decision tables and systems to facilitate the analysis and comparison of various refinements and changes in the system configuration and component modeling
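    A decision table of the kind described can be sketched as a plain mapping from input state and internal failure mode to output state (the relay component and its failure modes are hypothetical, chosen only to illustrate the representation):

```python
# Decision table for a hypothetical normally-open relay.
# Key: (command input, internal failure mode) -> output state.
RELAY_TABLE = {
    ("close", "none"):           "closed",
    ("close", "fails_to_close"): "open",    # stuck open despite command
    ("open",  "none"):           "open",
    ("open",  "fails_to_open"):  "closed",  # stuck closed despite command
}

def output_state(command, failure_mode="none"):
    """Resolve the component's output from its inputs and failure mode."""
    return RELAY_TABLE[(command, failure_mode)]

print(output_state("close", "fails_to_close"))  # -> open
```

    Fault tree synthesis then amounts to tracing, through such tables, which combinations of input states and failure modes produce the undesired output.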

  4. Model Checking Quantified Computation Tree Logic

    Rensink, Arend; Baier, C; Hermanns, H.

    2006-01-01

    Propositional temporal logic is not suitable for expressing properties on the evolution of dynamically allocated entities over time. In particular, it is not possible to trace such entities through computation steps, since this requires the ability to freely mix quantification and temporal operators.

  5. Fractal approach to computer-analytical modelling of tree crown

    Berezovskaya, F.S.; Karev, G.P.; Kisliuk, O.F.; Khlebopros, R.G.; Tcelniker, Yu.L.

    1993-09-01

    In this paper we discuss three approaches to the modeling of tree crown development: experimental (i.e. regressive), theoretical (i.e. analytical) and simulation (i.e. computer) modeling. The assumption common to all three is that a tree can be regarded as a fractal object, i.e. a collection of self-similar objects that combines the properties of two- and three-dimensional bodies. We show that a fractal measure of the crown can be used as the link between mathematical models of crown growth and of light propagation through the canopy. The computer approach makes it possible to visualize crown development and to calibrate the model on experimental data. In the paper the different stages of the above-mentioned approaches are described. The experimental data for spruce, a description of the computer system for modeling, and a variant of the computer model are presented. (author). 9 refs, 4 figs

  6. A computer code for fault tree calculations: PATREC

    Blin, A.; Carnino, A.; Koen, B.V.; Duchemin, B.; Lanore, J.M.; Kalli, H.

    1978-01-01

    A computer code for evaluating the reliability of complex systems by fault tree analysis is described in this paper. It uses a pattern recognition approach and programming techniques from the IBM PL/1 language. It can take account of many present-day problems: treatment of multiple dependencies, dispersion in the reliability data parameters, and the influence of common mode failures. The code has been running for two years at the Commissariat a l'Energie Atomique Saclay center and shall be used in a future extension for automatic fault tree construction.

  7. MFAULT: a computer program for analyzing fault trees

    Pelto, P.J.; Purcell, W.L.

    1977-11-01

    A description and user instructions are presented for MFAULT, a FORTRAN computer program for fault tree analysis. MFAULT identifies the cut sets of a fault tree, calculates their probabilities, and screens the cut sets on the basis of specified cut-offs on probability and/or cut set length. MFAULT is based on an efficient upward-working algorithm for cut set identification. The probability calculations are based on the assumption of small probabilities and constant hazard rates (i.e., exponential failure distributions). Cut sets consisting of repairable components (basic events) only, non-repairable components only, or mixtures of both types can be evaluated. Components can be on-line or standby. Unavailability contributions from pre-existing failures, failures on demand, and testing and maintenance down-time can be handled. MFAULT can analyze fault trees with AND gates, OR gates, inhibit gates, on switches (houses) and off switches. The code is presently capable of finding cut sets of up to ten events from a fault tree with up to 512 basic events and 400 gates. It is operational on the CONTROL DATA CYBER 74 computer. 11 figures
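    The probability screening described can be sketched as follows, assuming non-repairable components with constant hazard rates and the small-probability (rare-event) approximation for combining cut sets; the rates and mission time are illustrative, not MFAULT data:

```python
import math

def component_unavailability(lam, t):
    """Non-repairable component with constant hazard rate lam at time t:
    q(t) = 1 - exp(-lam * t), the exponential failure distribution."""
    return 1.0 - math.exp(-lam * t)

def cut_set_probability(unavailabilities):
    """A cut set fails only when all of its basic events occur."""
    p = 1.0
    for q in unavailabilities:
        p *= q
    return p

def top_event_probability(cut_sets):
    """Rare-event approximation: the sum of cut set probabilities."""
    return sum(cut_set_probability(cs) for cs in cut_sets)

q1 = component_unavailability(1e-4, 100.0)   # roughly 1e-2
q2 = component_unavailability(1e-3, 100.0)   # roughly 1e-1
print(top_event_probability([[q1, q2], [q1]]))
```

    Screening then simply discards cut sets whose probability falls below the specified cut-off.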

  8. RAFT: a computer program for fault tree risk calculations

    Seybold, G.D.

    1977-11-01

    A description and user instructions are presented for RAFT, a FORTRAN computer code for calculation of a risk measure for fault tree cut sets. RAFT calculates release quantities and a risk measure based on the product of probability and release quantity for cut sets of fault trees modeling the accidental release of radioactive material from a nuclear fuel cycle facility. Cut sets and their probabilities are supplied as input to RAFT from an external fault tree analysis code. Using the total inventory available of radioactive material, along with release fractions for each event in a cut set, the release terms are calculated for each cut set. Each release term is multiplied by the cut set probability to yield the cut set risk measure. RAFT orders the dominant cut sets by the risk measure. The total risk measure of processed cut sets and their fractional contributions are supplied as output. Input options are available to eliminate redundant cut sets, apply threshold values on cut set probability and risk, and control the total number of cut sets output. Hash addressing is used to remove redundant cut sets from the analysis. Computer hardware and software restrictions are given along with a sample problem and cross-reference table of the code. Except for the use of file management utilities, RAFT is written exclusively in FORTRAN language and is operational on a Control Data CYBER 74-18-series computer system. 4 figures
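    The risk measure described can be sketched as follows (the inventory, probabilities and release fractions are made-up illustrative values, not data from RAFT):

```python
def risk_measures(cut_sets, inventory):
    """For each cut set, given (probability, release fractions per event),
    compute its release quantity and risk = probability * release,
    then rank with the dominant cut sets first."""
    results = []
    for prob, fractions in cut_sets:
        release = inventory
        for f in fractions:
            release *= f          # each event passes on a fraction
        results.append((prob * release, release, prob))
    return sorted(results, reverse=True)

ranked = risk_measures(
    [(1e-4, [0.5, 0.1]),   # probability, hypothetical release fractions
     (1e-6, [1.0])],
    inventory=1000.0)      # total inventory, arbitrary units
print(ranked[0])           # the dominant (highest-risk) cut set
```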

  9. PERFORMANCE ANALYSIS OF SET PARTITIONING IN HIERARCHICAL TREES (SPIHT ALGORITHM FOR A FAMILY OF WAVELETS USED IN COLOR IMAGE COMPRESSION

    A. Sreenivasa Murthy

    2014-11-01

    With the spurt in the amount of data (image, video, audio, speech, and text) available on the net, there is a huge demand for memory and bandwidth savings. This must be achieved while maintaining the quality and fidelity of the data at a level acceptable to the end user. The wavelet transform is an important and practical tool for data compression. Set partitioning in hierarchical trees (SPIHT) is a widely used compression algorithm for wavelet-transformed images. Among all image compression algorithms based on wavelet transforms and zero-tree quantization, SPIHT has become the benchmark state-of-the-art algorithm because it is simple to implement and yields good results. In this paper we present a comparative study of various wavelet families for image compression with the SPIHT algorithm. We have conducted experiments with Daubechies, Coiflet, Symlet, Bi-orthogonal, Reverse Bi-orthogonal and Demeyer wavelet types. The resulting image quality is measured objectively, using peak signal-to-noise ratio (PSNR), and subjectively, using perceived image quality (human visual perception, HVP for short). The resulting reduction in image size is quantified by the compression ratio (CR).
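    The two objective figures of merit used in the study, PSNR and compression ratio, can be computed as follows (a sketch assuming 8-bit images, i.e. a peak value of 255):

```python
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB: 10*log10(peak^2 / MSE)."""
    mse = np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def compression_ratio(original_bytes, compressed_bytes):
    """CR: how many times smaller the compressed representation is."""
    return original_bytes / compressed_bytes

orig = np.full((8, 8), 100, dtype=np.uint8)
recon = orig.copy()
recon[0, 0] = 110                      # one pixel off by 10
print(f"{psnr(orig, recon):.1f} dB, CR {compression_ratio(1000, 250):.0f}:1")
```

    Higher PSNR at a given CR is what distinguishes one wavelet family from another in the comparison.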

  10. Real Time Animation of Trees Based on BBSC in Computer Games

    Xuefeng Ao

    2009-01-01

    Researchers in the field of computer games usually find it difficult to simulate the motion of actual 3D tree models because the tree model itself has a very complicated structure, and many sophisticated factors need to be considered during the simulation. Though there is some work on simulating 3D trees and their motion, little of it is used in computer games due to the high demand for real-time performance. In this paper, an approach to animating trees in computer games based on a novel tree model representation, Ball B-Spline Curves (BBSCs), is proposed. By taking advantage of the good features of the BBSC-based model, physical simulation of the motion of leafless trees in wind becomes easier and more efficient. The method can generate realistic 3D tree animation in real time, which meets the high requirement for real-time performance in computer games.

  11. Computed Quality Assessment of MPEG4-compressed DICOM Video Data.

    Frankewitsch, Thomas; Söhnlein, Sven; Müller, Marcel; Prokosch, Hans-Ulrich

    2005-01-01

    Digital Imaging and Communication in Medicine (DICOM) has become one of the most popular standards in medicine. This standard specifies the exact procedures by which digital images are exchanged between devices, either using a network or a storage medium. Sources for images vary; therefore there exist definitions for the exchange of CR, CT, NMR, angiography, sonography and so on. With its spreading, and with the increasing number of sources included, data volume is increasing, too. This affects storage and traffic. While data compression is generally not accepted for long-time storage at the moment, there are many situations where it is possible: telemedicine for educational purposes (e.g. students at home using low-speed internet connections), presentations with standard-resolution video projectors, or the supply of videos on wards together with written findings. DICOM comprises compression: for still images there is JPEG; for video, MPEG-2 is adopted. Within the last years MPEG-2 has evolved into MPEG-4, which squeezes data even better, but the risk of significant errors increases, too. The effects of compression have been analyzed for entertainment movies, but these are not comparable to videos of physical examinations (e.g. echocardiography). In medical videos an individual image plays a more important role, and erroneous single images affect total quality even more. Additionally, the effect of compression cannot be generalized from one test series to all videos; the result depends strongly on the source. Some investigations have been presented in which videos compressed with different MPEG-4 algorithms were compared and rated manually, but they describe only the results in a selected testbed. In this paper some methods derived from video rating are presented and discussed for automated quality control of the compression of medical videos, primarily stored in DICOM containers.

  12. FastTree: Computing Large Minimum Evolution Trees with Profiles instead of a Distance Matrix

    Price, Morgan N.; Dehal, Paramvir S.; Arkin, Adam P.

    2009-01-01

    Gene families are growing rapidly, but standard methods for inferring phylogenies do not scale to alignments with over 10,000 sequences. We present FastTree, a method for constructing large phylogenies and for estimating their reliability. Instead of storing a distance matrix, FastTree stores sequence profiles of internal nodes in the tree. FastTree uses these profiles to implement Neighbor-Joining and uses heuristics to quickly identify candidate joins. FastTree then uses nearest neighbor in...

  13. PL-MOD: a computer code for modular fault tree analysis and evaluation

    Olmos, J.; Wolf, L.

    1978-01-01

    The computer code PL-MOD has been developed to implement the modular methodology for fault tree analysis. In the modular approach, fault tree structures are characterized by recursively relating the top tree event to all basic event inputs through a set of equations, each defining an independent modular event for the tree. The advantages of tree modularization are that it is a more compact representation than the minimal cut-set description and that it is well suited for fault tree quantification because of its recursive form. In its present version, PL-MOD modularizes fault trees and evaluates top and intermediate event failure probabilities, as well as basic component and modular event importance measures, in a very efficient way. Thus, its modularization and quantification of a reduced fault tree for a PWR High Pressure Injection System was 25 times faster than generating the equivalent minimal cut-set description using the computer code MOCUS.
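    The recursive flavour of modular evaluation can be sketched for a toy tree of AND/OR gates over independent basic events (a simplification: PL-MOD's modular events and importance measures are not shown):

```python
def evaluate(node, basic_probs):
    """Recursively evaluate a fault tree whose subtrees are independent.
    node is either a basic event name or a pair ('AND'|'OR', children)."""
    if isinstance(node, str):
        return basic_probs[node]
    gate, children = node
    ps = [evaluate(c, basic_probs) for c in children]
    if gate == "AND":                   # all inputs must fail
        out = 1.0
        for p in ps:
            out *= p
        return out
    out = 1.0                           # OR: 1 - prod(1 - p_i)
    for p in ps:
        out *= (1.0 - p)
    return 1.0 - out

# Hypothetical system: redundant pumps in series with a valve.
tree = ("OR", [("AND", ["pump_a", "pump_b"]), "valve"])
print(evaluate(tree, {"pump_a": 0.1, "pump_b": 0.1, "valve": 0.01}))
```

    Because each module is evaluated once and reused, no minimal cut-set enumeration is needed, which is the source of the speed-up the abstract reports.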

  14. Computational simulation of breast compression based on segmented breast and fibroglandular tissues on magnetic resonance images

    Shih, Tzu-Ching [Department of Biomedical Imaging and Radiological Science, China Medical University, Taichung, 40402, Taiwan (China)]; Chen, Jeon-Hor; Nie, Ke; Lin, Muqing; Chang, Daniel; Nalcioglu, Orhan; Su, Min-Ying [Tu and Yuen Center for Functional Onco-Imaging and Radiological Sciences, University of California, Irvine, CA 92697 (United States)]; Liu, Dongxu; Sun, Lizhi, E-mail: shih@mail.cmu.edu.t [Department of Civil and Environmental Engineering, University of California, Irvine, CA 92697 (United States)]

    2010-07-21

    This study presents a finite element-based computational model to simulate the three-dimensional deformation of a breast and fibroglandular tissues under compression. The simulation was based on 3D MR images of the breast, and craniocaudal and mediolateral oblique compression, as used in mammography, was applied. The geometry of the whole breast and the segmented fibroglandular tissues within the breast were reconstructed using triangular meshes by using the Avizo (registered) 6.0 software package. Due to the large deformation in breast compression, a finite element model was used to simulate the nonlinear elastic tissue deformation under compression, using the MSC.Marc (registered) software package. The model was tested in four cases. The results showed a higher displacement along the compression direction compared to the other two directions. The compressed breast thickness in these four cases at a compression ratio of 60% was in the range of 5-7 cm, which is a typical range of thickness in mammography. The projection of the fibroglandular tissue mesh at a compression ratio of 60% was compared to the corresponding mammograms of two women, and they demonstrated spatially matched distributions. However, since the compression was based on magnetic resonance imaging (MRI), which has much coarser spatial resolution than the in-plane resolution of mammography, this method is unlikely to generate a synthetic mammogram close to the clinical quality. Whether this model may be used to understand the technical factors that may impact the variations in breast density needs further investigation. Since this method can be applied to simulate compression of the breast at different views and different compression levels, another possible application is to provide a tool for comparing breast images acquired using different imaging modalities--such as MRI, mammography, whole breast ultrasound and molecular imaging--that are performed using different body positions and under

  15. SALP-PC, a computer program for fault tree analysis on personal computers

    Contini, S.; Poucet, A.

    1987-01-01

    The paper presents the main characteristics of the SALP-PC computer code for fault tree analysis. The program has been developed in Fortran 77 on an Olivetti M24 personal computer (IBM compatible) in order to reach a high degree of portability. It is composed of six processors implementing the different phases of the analysis procedure. This particular structure presents some advantages like, for instance, the restart facility and the possibility to develop an event tree analysis code. The set of allowed logical operators, i.e. AND, OR, NOT, K/N, XOR, INH, together with the possibility to define boundary conditions, make the SALP-PC code a powerful tool for risk assessment. (orig.)

  16. A computer-oriented approach to fault-tree construction. Topical report No. 1

    Chu, B.B.

    1976-11-01

    Fault Tree Analysis is one of the major tools for the safety and reliability analysis of large systems. A methodology for systematically constructing fault trees for general complex systems is developed and applied, via the computer program CAT, to several systems. First, a means of representing component behavior by decision tables is presented. A procedure for constructing and editing fault trees using these tables, either manually or by computer, is then described. In order to verify the methodology, the computer program CAT has been developed and used to construct fault trees for two systems.

  17. FastTree: Computing Large Minimum-Evolution Trees with Profiles instead of a Distance Matrix

    N. Price, Morgan

    2009-01-01

    Gene families are growing rapidly, but standard methods for inferring phylogenies do not scale to alignments with over 10,000 sequences. We present FastTree, a method for constructing large phylogenies and for estimating their reliability. Instead of storing a distance matrix, FastTree stores sequence profiles of internal nodes in the tree. FastTree uses these profiles to implement neighbor-joining and uses heuristics to quickly identify candidate joins. FastTree then uses nearest-neighbor i...

  18. CAT: a computer code for the automated construction of fault trees

    Apostolakis, G.E.; Salem, S.L.; Wu, J.S.

    1978-03-01

    A computer code, CAT (Computer Automated Tree), is presented which applies decision table methods to model the behavior of components for the systematic construction of fault trees. The decision tables for some commonly encountered mechanical and electrical components are developed; two nuclear subsystems, a Containment Spray Recirculation System and a Consequence Limiting Control System, are analyzed to demonstrate the application of the CAT code.

  19. Characterization of cell mechanical properties by computational modeling of parallel plate compression.

    McGarry, J P

    2009-11-01

    A substantial body of work has been reported in which the mechanical properties of adherent cells were characterized using compression testing in tandem with computational modeling. However, a number of important issues remain to be addressed. In the current study, using computational analyses, the effect of cell compressibility on the force required to deform spread cells is investigated and the possibility that stiffening of the cell cytoplasm occurs during spreading is examined based on published experimental compression test data. The effect of viscoelasticity on cell compression is considered and difficulties in performing a complete characterization of the viscoelastic properties of a cell nucleus and cytoplasm by this method are highlighted. Finally, a non-linear force-deformation response is simulated using differing linear viscoelastic properties for the cell nucleus and the cell cytoplasm.

  20. An optimal algorithm for computing all subtree repeats in trees.

    Flouri, T; Kobert, K; Pissis, S P; Stamatakis, A

    2014-05-28

    Given a labelled tree T, our goal is to group repeating subtrees of T into equivalence classes with respect to their topologies and the node labels. We present an explicit, simple and time-optimal algorithm for solving this problem for unrooted unordered labelled trees and show that the running time of our method is linear with respect to the size of T. By unordered, we mean that the order of the adjacent nodes (children/neighbours) of any node of T is irrelevant. An unrooted tree T does not have a node that is designated as root and can also be referred to as an undirected tree. We show how the presented algorithm can easily be modified to operate on trees that do not satisfy some or any of the aforementioned assumptions on the tree structure; for instance, how it can be applied to rooted, ordered or unlabelled trees.
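A much simpler variant of the problem, for rooted ordered labelled trees, can be solved by grouping canonical forms bottom-up; the paper's contribution is handling the harder unrooted unordered case in optimal linear time. A minimal sketch of the simple variant:

```python
def subtree_classes(tree):
    # tree: (label, [children...]); groups subtrees by canonical form,
    # i.e. by topology plus node labels, respecting child order.
    classes = {}

    def canon(node):
        label, children = node
        key = (label, tuple(canon(c) for c in children))
        classes.setdefault(key, []).append(node)
        return key

    canon(tree)
    return classes

# The leaf labelled "b" repeats three times in this hypothetical tree
t = ("a", [("b", []), ("b", []), ("a", [("b", [])])])
repeats = {k: v for k, v in subtree_classes(t).items() if len(v) > 1}
```

Nested-tuple keys play the role of subtree fingerprints here; the paper avoids such explicit keys to reach true time optimality.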

  1. A sub-cubic time algorithm for computing the quartet distance between two general trees

    Nielsen, Jesper; Kristensen, Anders Kabell; Mailund, Thomas

    2011-01-01

    Background: When inferring phylogenetic trees, different algorithms may give different trees. To study such effects a measure for the distance between two trees is useful. Quartet distance is one such measure, and is the number of quartet topologies that differ between two trees. Results: We have derived a new algorithm for computing the quartet distance between a pair of general trees, i.e. trees where inner nodes can have any degree ≥ 3. The time and space complexity of our algorithm is sub-cubic in the number of leaves and does not depend on the degree of the inner nodes. This makes it the fastest algorithm so far for computing the quartet distance between general trees independent of the degree of the inner nodes. Conclusions: We have implemented our algorithm and two of the best competitors. Our new algorithm is significantly faster than the competition and seems to run in close...

  2. Classification and Compression of Multi-Resolution Vectors: A Tree Structured Vector Quantizer Approach

    2002-01-01

    ... their expression profile and for classification of cells into tumorous and non-tumorous classes. Then we will present a parallel tree method for ... cancerous cells. We will use the same dataset and use tree structured classifiers with multi-resolution analysis for classifying cancerous from non-cancerous ... cells. We have the expressions of 4096 genes from 98 different cell types. Of these 98, 72 are cancerous while 26 are non-cancerous. We are interested ...

  3. A Computational model for compressed sensing RNAi cellular screening

    Tan Hua

    2012-12-01

    Background: RNA interference (RNAi) is becoming an increasingly important and effective genetic tool to study the function of target genes by suppressing specific genes of interest. This system approach helps identify signaling pathways and cellular phase types by tracking intensity and/or morphological changes of cells. The traditional RNAi screening scheme, in which one siRNA is designed to knock down one specific mRNA target, needs a large library of siRNAs and turns out to be time-consuming and expensive. Results: In this paper, we propose a conceptual model, called compressed sensing RNAi (csRNAi), which employs a unique combination of groups of small interfering RNAs (siRNAs) to knock down a much larger set of genes. This strategy is based on the fact that one gene can be partially bound by several siRNAs and, conversely, one siRNA can bind to a few genes with distinct binding affinity. This model constructs a multi-to-multi correspondence between siRNAs and their targets, with far fewer siRNAs than mRNA targets, compared with the conventional scheme. Mathematically this problem involves an underdetermined system of equations (linear or nonlinear), which is ill-posed in general. However, the recently developed compressed sensing (CS) theory can solve this problem. We present a mathematical model to describe the csRNAi system based on both CS theory and biological concerns. To build this model, we first search nucleotide motifs in a target gene set. Then we propose a machine learning based method to find the effective siRNAs with novel features, such as image features and speech features, to describe an siRNA sequence. Numerical simulations show that we can reduce the siRNA library to one third of that in the conventional scheme. In addition, the features to describe siRNAs outperform the existing ones substantially. Conclusions: This csRNAi system is very promising in saving both time and cost for large-scale RNAi...

  4. Color Processing using Max-trees : A Comparison on Image Compression

    Tushabe, Florence; Wilkinson, M.H.F.

    2012-01-01

    This paper proposes a new method of processing color images using mathematical morphology techniques. It adapts the Max-tree image representation to accommodate color and other vectorial images. The proposed method introduces three new ways of transforming the color image into a gray scale image

  5. Using trees to compute approximate solutions to ordinary differential equations exactly

    Grossman, Robert

    1991-01-01

    Some recent work is reviewed which relates families of trees to symbolic algorithms for the exact computation of series which approximate solutions of ordinary differential equations. It turns out that the vector space whose basis is the set of finite, rooted trees carries a natural multiplication related to the composition of differential operators, making the space of trees an algebra. This algebraic structure can be exploited to yield a variety of algorithms for manipulating vector fields and the series and algebras they generate.
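The basis of the algebra described above is the set of finite rooted (unordered) trees, which can be enumerated with a short recursion; the counts 1, 1, 2, 4, 9, 20 for n = 1..6 are the classical numbers of rooted unlabelled trees. The sketch below only illustrates this basis and is not tied to any algorithm in the paper.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def rooted_trees(n):
    # All unordered rooted trees on n nodes, encoded canonically as the
    # sorted tuple of the root's subtrees (a single leaf is the empty tuple).
    if n == 1:
        return frozenset({()})
    result = set()
    for size in range(1, n):
        for child in rooted_trees(size):
            for rest in rooted_trees(n - size):
                # Graft `child` as one more subtree under the root of `rest`;
                # every n-node tree arises this way, and sorting deduplicates.
                result.add(tuple(sorted(rest + (child,))))
    return frozenset(result)

counts = [len(rooted_trees(n)) for n in range(1, 7)]  # 1, 1, 2, 4, 9, 20
```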

  6. Three-dimensional range data compression using computer graphics rendering pipeline.

    Zhang, Song

    2012-06-20

    This paper presents the idea of naturally encoding three-dimensional (3D) range data into regular two-dimensional (2D) images utilizing computer graphics rendering pipeline. The computer graphics pipeline provides a means to sample 3D geometry data into regular 2D images, and also to retrieve the depth information for each sampled pixel. The depth information for each pixel is further encoded into red, green, and blue color channels of regular 2D images. The 2D images can further be compressed with existing 2D image compression techniques. By this novel means, 3D geometry data obtained by 3D range scanners can be instantaneously compressed into 2D images, providing a novel way of storing 3D range data into its 2D counterparts. We will present experimental results to verify the performance of this proposed technique.
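The core encoding step, packing a quantized depth value into the red, green, and blue channels, can be sketched directly. This simplified version quantizes depth to 24 bits and splits the bits across the three channels; the paper's encoding via the graphics rendering pipeline differs in detail.

```python
def encode_depth(depth, zmin, zmax):
    # Quantize depth in [zmin, zmax] to 24 bits, split across R, G, B
    t = (depth - zmin) / (zmax - zmin)
    q = round(t * 0xFFFFFF)
    return (q >> 16) & 0xFF, (q >> 8) & 0xFF, q & 0xFF

def decode_depth(rgb, zmin, zmax):
    r, g, b = rgb
    q = (r << 16) | (g << 8) | b
    return zmin + (q / 0xFFFFFF) * (zmax - zmin)

# Round-trip some depths; the error is bounded by one quantization step
zmin, zmax = 0.0, 1000.0
depths = [0.0, 123.456, 999.999, 1000.0]
recovered = [decode_depth(encode_depth(d, zmin, zmax), zmin, zmax)
             for d in depths]
```

Note that bit-splitting like this is fragile under lossy 2D compression, which is why practical schemes (including the ones discussed in this literature) use smoother channel encodings.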

  7. A new methodology for the computer-aided construction of fault trees

    Salem, S.L.; Apostolakis, G.E.; Okrent, D.

    1977-01-01

    A methodology for systematically constructing fault trees for general complex systems is developed. A means of modeling component behaviour via decision tables is presented, and a procedure for constructing and editing fault trees, either manually or by computer, is developed. The techniques employed result in a complete fault tree in standard form. In order to demonstrate the methodology, the computer program CAT was developed and is used to construct trees for a nuclear system. By analyzing and comparing these fault trees, several conclusions are reached. First, such an approach can be used to produce fault trees that accurately describe system behaviour. Second, multiple trees can be rapidly produced by defining various TOP events, including system success. Finally, the accuracy and utility of such trees is shown to depend upon the careful development of the decision table models by the analyst, and of the overall system definition itself. Thus the method is seen to be a tool for assisting in the work of fault tree construction rather than a replacement for the careful work of the fault tree analyst. (author)
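The decision-table idea can be illustrated with a small sketch: each table row maps component states to an output, and the fault tree for an undesired output is an OR over the rows that produce it, with an AND over each row's conditions. The valve table below is a hypothetical example, not taken from CAT.

```python
# A decision table lists component states and the resulting output.
valve_table = [
    # (command, internal mode) -> flow at the outlet
    (("open",   "normal"),       "flow"),
    (("open",   "stuck_closed"), "no_flow"),
    (("closed", "normal"),       "no_flow"),
    (("closed", "stuck_open"),   "flow"),
]

def causes(table, undesired_output):
    # Fault tree for the undesired output, in OR-of-ANDs form:
    # each returned row is an AND of conditions; the list is the OR.
    return [conditions for conditions, output in table
            if output == undesired_output]

tree = causes(valve_table, "no_flow")
# -> [("open", "stuck_closed"), ("closed", "normal")]
```

Chaining such tables through a system model (the output of one component feeding the input condition of the next) is what lets a program expand a TOP event into a full tree.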

  8. ANCON: A code for the evaluation of complex fault trees in personal computers

    Napoles, J.G.; Salomon, J.; Rivero, J.

    1990-01-01

    Performing probabilistic safety analysis has been recognized worldwide as one of the more effective ways of further enhancing the safety of nuclear power plants. The evaluation of fault trees plays a fundamental role in these analyses. Existing limitations in the RAM and execution speed of personal computers (PCs) have so far restricted their use in the analysis of complex fault trees. Starting from new approaches to the data structure and other possibilities, the ANCON code can evaluate complex fault trees on a PC, allowing the user to do a more comprehensive analysis of the considered system in reduced computing time

  9. Computer model of copper resistivity will improve the efficiency of field-compression devices

    Burgess, T.J.

    1977-01-01

    By detonating a ring of high explosive around an existing magnetic field, we can, under certain conditions, compress the field and multiply its strength tremendously. In this way, we can duplicate for a fraction of a second the extreme pressures that normally exist only in the interior of stars and planets. Under such pressures, materials may exhibit behavior that will confirm or alter current notions about the fundamental structure of matter and the ongoing processes in planetary interiors. However, we cannot design an efficient field-compression device unless we can calculate the electrical resistivity of certain basic metal components, which interact with the field. To aid in the design effort, we have developed a computer code that calculates the resistivity of copper and other metals over the wide range of temperatures and pressures found in a field-compression device

  10. Trees

    Al-Khaja, Nawal

    2007-01-01

    This is a thematic lesson plan for young learners about palm trees and the importance of taking care of them. The two part lesson teaches listening, reading and speaking skills. The lesson includes parts of a tree; the modal auxiliary, can; dialogues and a role play activity.

  11. Efficient Algorithms for Computing the Triplet and Quartet Distance Between Trees of Arbitrary Degree

    Brodal, Gerth Stølting; Fagerberg, Rolf; Mailund, Thomas

    2013-01-01

    The triplet and quartet distances are distance measures to compare two rooted and two unrooted trees, respectively. The leaves of the two trees should have the same set of n labels. The distances are defined by enumerating all subsets of three labels (triplets) and four labels (quartets), respectively, and counting how often the induced topologies in the two input trees are different. In this paper we present efficient algorithms for computing these distances. We show how to compute the triplet distance in time O(n log n) and the quartet distance in time O(d n log n), where d is the maximal degree of any node in the two trees. Within the same time bounds, our framework also allows us to compute the parameterized triplet and quartet distances, where a parameter is introduced to weight resolved (binary) topologies against unresolved (non-binary) topologies. The previous best algorithm...

  12. Computer-assisted tree taxonomy by automated image recognition

    Pauwels, E.J.; Zeeuw, P.M.de; Ranguelova, E.B.

    2009-01-01

    We present an algorithm that performs image-based queries within the domain of tree taxonomy. As such, it serves as an example relevant to many other potential applications within the field of biodiversity and photo-identification. Unsupervised matching results are produced through a chain of

  13. Performance evaluation for compressible flow calculations on five parallel computers of different architectures

    Kimura, Toshiya.

    1997-03-01

    A two-dimensional explicit Euler solver has been implemented on five MIMD parallel computers of different machine architectures at the Center for Promotion of Computational Science and Engineering of the Japan Atomic Energy Research Institute. These parallel computers are the Fujitsu VPP300, NEC SX-4, CRAY T94, IBM SP2, and Hitachi SR2201. The code was parallelized by several parallelization methods, and a typical compressible flow problem has been calculated for different grid sizes and numbers of processors. The effective performance of the parallel calculations, such as calculation speed, speed-up ratio and parallel efficiency, has been investigated and evaluated. The communication time among processors has also been measured and evaluated. As a result, differences in performance and characteristics between vector-parallel and scalar-parallel computers can be identified, providing basic data for the efficient use of parallel computers and for large-scale CFD simulations on parallel computers. (author)
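The performance measures mentioned above follow the standard definitions: speed-up is serial time over parallel time, and parallel efficiency is speed-up divided by the number of processors. A minimal sketch with hypothetical timings (not taken from the paper):

```python
def speedup(t_serial, t_parallel):
    # How many times faster the parallel run is than the serial run
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, n_procs):
    # Fraction of ideal linear speed-up actually achieved
    return speedup(t_serial, t_parallel) / n_procs

# Hypothetical timings for one grid size on 8 processors
t1, t8 = 640.0, 100.0
s = speedup(t1, t8)        # 6.4
e = efficiency(t1, t8, 8)  # 0.8
```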

  14. About a method for compressing x-ray computed microtomography data

    Mancini, Lucia; Kourousias, George; Billè, Fulvio; De Carlo, Francesco; Fidler, Aleš

    2018-04-01

    The management of scientific data is of high importance, especially for experimental techniques that produce big data volumes. Such a technique is x-ray computed tomography (CT), and its community has introduced advanced data formats which allow for better management of experimental data. Rather than the organization of the data and the associated meta-data, the main topic of this work is data compression and its applicability to experimental data collected from a synchrotron-based CT beamline at the Elettra-Sincrotrone Trieste facility (Italy), studying images acquired from various types of samples. This study covers parallel beam geometry, but it could be easily extended to a cone-beam one. The reconstruction workflow used is the one currently in operation at the beamline. Contrary to standard image compression studies, this manuscript proposes a systematic framework and workflow for the critical examination of different compression techniques and does so by applying it to experimental data. Beyond the methodology framework, this study presents and examines the use of JPEG-XR in combination with HDF5 and TIFF formats, providing insights and strategies on data compression and image quality issues that can be used and implemented at other synchrotron facilities and laboratory systems. In conclusion, projection data compression using JPEG-XR appears as a promising, efficient method to reduce data file size and thus to facilitate data handling and image reconstruction.

  15. A blended pressure/density based method for the computation of incompressible and compressible flows

    Rossow, C.-C.

    2003-01-01

    An alternative method to low speed preconditioning for the computation of nearly incompressible flows with compressible methods is developed. For this approach the leading terms of the flux difference splitting (FDS) approximate Riemann solver are analyzed in the incompressible limit. In combination with the requirement of the velocity field to be divergence-free, an elliptic equation to solve for a pressure correction to enforce the divergence-free velocity field on the discrete level is derived. The pressure correction equation established is shown to be equivalent to classical methods for incompressible flows. In order to allow the computation of flows at all speeds, a blending technique for the transition from the incompressible, pressure based formulation to the compressible, density based formulation is established. It is found necessary to use preconditioning with this blending technique to account for a remaining 'compressible' contribution in the incompressible limit, and a suitable matrix directly applicable to conservative residuals is derived. Thus, a coherent framework is established to cover the discretization of both incompressible and compressible flows. Compared with standard preconditioning techniques, the blended pressure/density based approach showed improved robustness for high lift flows close to separation

  16. Autumn Algorithm-Computation of Hybridization Networks for Realistic Phylogenetic Trees.

    Huson, Daniel H; Linz, Simone

    2018-01-01

    A minimum hybridization network is a rooted phylogenetic network that displays two given rooted phylogenetic trees using a minimum number of reticulations. Previous mathematical work on their calculation has usually assumed the input trees to be bifurcating, correctly rooted, or that they both contain the same taxa. These assumptions do not hold in biological studies and "realistic" trees have multifurcations, are difficult to root, and rarely contain the same taxa. We present a new algorithm for computing minimum hybridization networks for a given pair of "realistic" rooted phylogenetic trees. We also describe how the algorithm might be used to improve the rooting of the input trees. We introduce the concept of "autumn trees", a nice framework for the formulation of algorithms based on the mathematics of "maximum acyclic agreement forests". While the main computational problem is hard, the run-time depends mainly on how different the given input trees are. In biological studies, where the trees are reasonably similar, our parallel implementation performs well in practice. The algorithm is available in our open source program Dendroscope 3, providing a platform for biologists to explore rooted phylogenetic networks. We demonstrate the utility of the algorithm using several previously studied data sets.

  17. Computing the Quartet Distance Between Evolutionary Trees in Time O(n log n)

    Brodal, Gerth Sølfting; Fagerberg, Rolf; Pedersen, Christian Nørgaard Storm

    2003-01-01

    Evolutionary trees describing the relationship for a set of species are central in evolutionary biology, and quantifying differences between evolutionary trees is therefore an important task. The quartet distance is a distance measure between trees previously proposed by Estabrook, McMorris, and Meacham. The quartet distance between two unrooted evolutionary trees is the number of quartet topology differences between the two trees, where a quartet topology is the topological subtree induced by four species. In this paper we present an algorithm for computing the quartet distance between two unrooted evolutionary trees of n species, where all internal nodes have degree three, in time O(n log n). The previous best algorithm for the problem uses time O(n 2).
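For contrast with the O(n log n) algorithm, the quartet distance can be computed by brute force in roughly O(n^4) time: with unit edge lengths, the topology of each quartet follows from the four-point condition on path distances (the true split has the strictly smallest pair-sum). A sketch for binary trees, with two small hypothetical caterpillar trees as input:

```python
from itertools import combinations
from collections import deque

def bfs_dist(adj, src):
    # Unit-length shortest-path distances from src in an adjacency dict
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def quartet_topology(d, a, b, c, e):
    # Four-point condition: the induced split minimises the pair-sum
    sums = {
        frozenset([frozenset([a, b]), frozenset([c, e])]): d[a][b] + d[c][e],
        frozenset([frozenset([a, c]), frozenset([b, e])]): d[a][c] + d[b][e],
        frozenset([frozenset([a, e]), frozenset([b, c])]): d[a][e] + d[b][c],
    }
    return min(sums, key=sums.get)

def quartet_distance(adj1, adj2, leaves):
    d1 = {x: bfs_dist(adj1, x) for x in leaves}
    d2 = {x: bfs_dist(adj2, x) for x in leaves}
    return sum(1 for q in combinations(leaves, 4)
               if quartet_topology(d1, *q) != quartet_topology(d2, *q))

# Two caterpillar trees on leaves a..e that differ by swapping b and c
leaves = ["a", "b", "c", "d", "e"]
adj1 = {"a": ["x"], "b": ["x"], "c": ["y"], "d": ["z"], "e": ["z"],
        "x": ["a", "b", "y"], "y": ["x", "c", "z"], "z": ["y", "d", "e"]}
adj2 = {"a": ["x"], "c": ["x"], "b": ["y"], "d": ["z"], "e": ["z"],
        "x": ["a", "c", "y"], "y": ["x", "b", "z"], "z": ["y", "d", "e"]}
qd = quartet_distance(adj1, adj2, leaves)  # the trees differ in 2 of 5 quartets
```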

  18. From Greeks to Today: Cipher Trees and Computer Cryptography.

    Grady, M. Tim; Brumbaugh, Doug

    1988-01-01

    Explores the use of computers for teaching mathematical models of transposition ciphers. Illustrates the ideas, includes activities and extensions, provides a mathematical model and includes computer programs to implement these topics. (MVL)
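A classic transposition-cipher model of the kind described is the columnar transposition, which is easy to implement and invert. A minimal sketch, assuming a key with distinct characters and underscore padding (both choices are illustrative, not from the article):

```python
def encrypt(text, key):
    # Write the text row-wise into len(key) columns, then read the
    # columns out in the alphabetical order of the key's characters.
    n = len(key)
    text = text + "_" * (-len(text) % n)  # pad to full rows
    columns = {key[i]: text[i::n] for i in range(n)}
    return "".join(columns[k] for k in sorted(key))

def decrypt(cipher, key):
    # Reassign equal-length column slices to their key characters,
    # then read the grid back row by row.
    n = len(key)
    rows = len(cipher) // n
    columns, pos = {}, 0
    for k in sorted(key):
        columns[k] = cipher[pos:pos + rows]
        pos += rows
    return "".join(columns[key[i]][r]
                   for r in range(rows) for i in range(n))
```

For example, `decrypt(encrypt("ATTACKATDAWN", "ZEBRA"), "ZEBRA")` recovers the padded plaintext.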

  19. Symmetric and asymmetric hybrid cryptosystem based on compressive sensing and computer generated holography

    Ma, Lihong; Jin, Weimin

    2018-01-01

    A novel symmetric and asymmetric hybrid optical cryptosystem is proposed based on compressive sensing combined with computer generated holography. In this method there are six encryption keys, among which two decryption phase masks are different from the two random phase masks used in the encryption process. Therefore, the encryption system has the features of both symmetric and asymmetric cryptography. On the other hand, because computer generated holography can flexibly digitalize the encrypted information, compressive sensing can significantly reduce the data volume and, moreover, the final encrypted image is a real-valued function obtained by phase truncation, the method favors the storage and transmission of the encrypted data. The experimental results demonstrate that the proposed encryption scheme boosts the security and has high robustness against noise and occlusion attacks.

  20. OpenCL-based vicinity computation for 3D multiresolution mesh compression

    Hachicha, Soumaya; Elkefi, Akram; Ben Amar, Chokri

    2017-03-01

    3D multiresolution mesh compression systems are still widely addressed in many domains. These systems increasingly require volumetric data to be processed in real-time, so performance is constrained by material resource usage and the need for an overall reduction in computational time. In this paper, our contribution lies entirely in computing, in real-time, the triangle neighborhood of 3D progressive meshes for a robust compression algorithm based on the scan-based wavelet transform (WT) technique. The originality of this latter algorithm is to compute the WT with minimum memory usage by processing data as they are acquired. However, with large data, this technique is considered poor in terms of computational complexity. For that reason, this work exploits the GPU to accelerate the computation, using OpenCL as a heterogeneous programming language. Experiments demonstrate that, aside from the portability across various platforms and the flexibility guaranteed by the OpenCL-based implementation, this method can achieve a speedup factor of 5 compared to the sequential CPU implementation.
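The sequential core of such a vicinity computation, finding for every triangle the triangles that share an edge with it, can be sketched as follows; the paper's contribution is accelerating this kind of computation on the GPU via OpenCL.

```python
from collections import defaultdict

def triangle_neighbours(triangles):
    # Two triangles are neighbours when they share an edge.
    # Edges are stored as frozensets so orientation does not matter.
    edge_to_tris = defaultdict(list)
    for t, (a, b, c) in enumerate(triangles):
        for edge in (frozenset((a, b)), frozenset((b, c)), frozenset((a, c))):
            edge_to_tris[edge].append(t)
    neighbours = defaultdict(set)
    for tris in edge_to_tris.values():
        for t in tris:
            neighbours[t].update(x for x in tris if x != t)
    return neighbours

# Two triangles sharing the edge (1, 2)
mesh = [(0, 1, 2), (1, 2, 3)]
nbrs = triangle_neighbours(mesh)  # nbrs[0] == {1}, nbrs[1] == {0}
```

The edge-to-triangle map is embarrassingly parallel over triangles, which is what makes an OpenCL port attractive.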

  1. Computing the stretch factor and maximum detour of paths, trees, and cycles in the normed space

    Wulff-Nilsen, Christian; Grüne, Ansgar; Klein, Rolf

    2012-01-01

    ... a lower bound of Ω(n log n) in the algebraic computation tree model, and describe a worst-case O(σn log² n) time algorithm for computing the stretch factor or maximum detour of a path embedded in the plane with a weighted fixed orientation metric defined by σ orientations ... We also obtain an optimal O(n) time algorithm for computing the maximum detour of a monotone rectilinear path in the L 1 plane.
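The stretch factor itself is simple to compute by brute force over all vertex pairs in quadratic time, versus the subquadratic bounds discussed in the record; the sketch below uses the Euclidean metric rather than a fixed orientation metric.

```python
from math import hypot
from itertools import accumulate, combinations

def stretch_factor(path):
    # Max over vertex pairs of (distance along the path) / (Euclidean distance)
    steps = [hypot(x2 - x1, y2 - y1)
             for (x1, y1), (x2, y2) in zip(path, path[1:])]
    along = [0.0] + list(accumulate(steps))  # prefix path lengths
    return max(
        (along[j] - along[i]) / hypot(path[j][0] - path[i][0],
                                      path[j][1] - path[i][1])
        for i, j in combinations(range(len(path)), 2)
    )

# An L-shaped path: the detour between its endpoints is 2 / sqrt(2)
sf = stretch_factor([(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)])
```

The maximum detour is the same quantity taken over all pairs of points on the path, not just vertices, which is what makes the general problem harder.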

  2. Nutcracker or left renal vein compression phenomenon: multidetector computed tomography findings and clinical significance

    Cuellar i Calabria, Hug; Quiroga Gomez, Sergi; Sebastia Cerqueda, Carmen; Boye de la Presa, Rosa; Miranda, Americo; Alvarez-Castells, Agusti

    2005-01-01

    The use of multidetector computed tomography (MDCT) in routine abdominal explorations has increased the detection of the nutcracker phenomenon, defined as left renal vein (LRV) compression by adjacent anatomic structures. The embryology and anatomy of the nutcracker phenomenon are relevant as a background for the nutcracker syndrome, a rare cause of hematuria as well as other symptoms. MDCT examples of collateral renal vein circulation (gonadal, ureteric, azygous, lumbar, capsular) and aortomesenteric (anterior) and retroaortic (posterior) nutcracker phenomena in patients with no urologic complaint are shown as well as studies performed on patients with gross hematuria of uncertain origin. Incidental observation of collateral veins draining the LRV in abdominal MDCT explorations of asymptomatic patients may be a sign of a compensating nutcracker phenomenon. Imbalance between LRV compression and development of collateral circulation may lead to symptomatic nutcracker syndrome. (orig.)

  4. Computation of steady and unsteady compressible quasi-axisymmetric vortex flow and breakdown

    Kandil, Osama A.; Kandil, Hamdy A.; Liu, C. H.

    1991-01-01

    The unsteady, compressible Navier-Stokes equations are used to compute and analyze compressible quasi-axisymmetric isolated vortices. The Navier-Stokes equations are solved using an implicit, upwind, flux-difference splitting finite-volume scheme. The developed three-dimensional solver has been verified by comparing its solution profiles with those of a slender, quasi-axisymmetric vortex solver for a subsonic, isolated quasi-axisymmetric vortex in an unbounded domain. The Navier-Stokes solver is then used to solve for a supersonic quasi-axisymmetric vortex flow in a configured circular duct. Steady and unsteady vortex-shock interactions and breakdown have been captured. The problem has also been calculated using the Euler solver of the same code and the results are compared with those of the Navier-Stokes solver. The effect of the initial swirl has been tentatively studied.

  5. Computation of compressible quasi-axisymmetric slender vortex flow and breakdown

    Kandil, Osama A.; Kandil, Hamdy A.

    1991-01-01

    The unsteady, compressible Navier-Stokes equations are used to compute and analyze compressible quasi-axisymmetric isolated vortices. The Navier-Stokes equations are solved using an implicit, upwind, flux difference splitting finite volume scheme. The developed three dimensional solver was verified by comparing its solution profiles with those of a slender, quasi-axisymmetric vortex solver for a subsonic, quasi-axisymmetric vortex in an unbounded domain. The Navier-Stokes solver is then used to solve for a supersonic, quasi-axisymmetric vortex flow in a configured circular duct. Steady and unsteady vortex-shock interactions and breakdown were captured. The problem was also calculated using the Euler solver of the same code; the results were compared with those of the Navier-Stokes solver. The effect of the initial swirl was investigated.

  6. A computationally efficient OMP-based compressed sensing reconstruction for dynamic MRI

    Usman, M; Prieto, C; Schaeffter, T; Batchelor, P G; Odille, F; Atkinson, D

    2011-01-01

    Compressed sensing (CS) methods in MRI are computationally intensive. Thus, designing novel CS algorithms that can perform faster reconstructions is crucial for everyday applications. We propose a computationally efficient orthogonal matching pursuit (OMP)-based reconstruction, specifically suited to cardiac MR data. According to the energy distribution of a y-f space obtained from a sliding window reconstruction, we label the y-f space as static or dynamic. For static y-f space images, a computationally efficient masked OMP reconstruction is performed, whereas for dynamic y-f space images, standard OMP reconstruction is used. The proposed method was tested on a dynamic numerical phantom and two cardiac MR datasets. Depending on the field-of-view composition of the imaging data, reconstruction speedup factors ranging from 1.5 to 2.5 are achieved compared to the standard OMP method. (note)
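Standard OMP, which the method above builds on, can be sketched in a few lines: greedily pick the dictionary column most correlated with the residual, then re-fit all selected columns by least squares. This is the textbook algorithm, not the masked y-f space variant proposed in the paper, and the small dictionary in the example is hypothetical.

```python
def solve(M, v):
    # Gauss-Jordan elimination with partial pivoting (small dense systems)
    n = len(M)
    M = [row[:] + [v[i]] for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def omp(A, y, k):
    # Orthogonal matching pursuit: k greedy column selections, each
    # followed by a least-squares re-fit via the normal equations.
    m, n = len(A), len(A[0])
    support, residual = [], y[:]
    for _ in range(k):
        corr = [abs(sum(A[i][j] * residual[i] for i in range(m)))
                for j in range(n)]
        support.append(max((j for j in range(n) if j not in support),
                           key=lambda j: corr[j]))
        cols = [[A[i][j] for i in range(m)] for j in support]
        gram = [[sum(a * b for a, b in zip(c1, c2)) for c2 in cols]
                for c1 in cols]
        rhs = [sum(c[i] * y[i] for i in range(m)) for c in cols]
        coef = solve(gram, rhs)
        approx = [sum(coef[t] * cols[t][i] for t in range(len(support)))
                  for i in range(m)]
        residual = [y[i] - approx[i] for i in range(m)]
    x = [0.0] * n
    for t, j in enumerate(support):
        x[j] = coef[t]
    return x

# Hypothetical 3x4 dictionary; y is a 2-sparse combination of columns 0 and 2
A = [[1.0, 0.0, 0.0, 0.5],
     [0.0, 1.0, 0.0, 0.5],
     [0.0, 0.0, 1.0, 0.5]]
y = [2.0, 0.0, 3.0]
x = omp(A, y, 2)
```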

  7. Computation of Universal Objects for Distributions Over Co-Trees

    Petersen, Henrik Densing; Topsøe, Flemming

    2012-01-01

    ... for the model or, equivalently, the corresponding universal code, can be determined exactly via an algorithm of low complexity. Natural relations to problems on the computation of capacity and on the determination of information projections are established. More surprisingly, a direct connection to a problem...

  8. Risk of vertebral insufficiency fractures in relation to compressive strength predicted by quantitative computed tomography

    Biggemann, M.; Hilweg, D.; Seidel, S.; Horst, M.; Brinckmann, P.

    1991-01-01

    Vertebral insufficiency fractures may result from excessive loading of normal spines and routine loading of osteoporotic spines. Fractures occur when the mechanical load exceeds the vertebral compressive strength, i.e., the maximum load a vertebra can tolerate. Vertebral compressive strength is determined by trabecular bone density and the size of the end-plate area. Both parameters can be measured non-invasively by quantitative computed tomography (QCT). In 75 patients the compressive strength (i.e., trabecular bone density and endplate area) of the vertebra L3 was determined using QCT. In addition, conventional radiographs of the spines were analysed for the prevalence of insufficiency fractures in each case. By relating fracture prevalence to strength, three fracture risk groups were found, ranging from a high-risk group with low strength values of L3 to a low-risk group with strength values above 5 kN and a fracture risk near 0 percent. Biomechanical measurements and model calculations indicate that spinal loads of 3 to 4 kN at L3/4 will be common in everyday activities. These data and the results described above suggest that spines with strength values of L3 < 3 kN are at an extremely high risk of insufficiency fractures in daily life. Advantages of fracture risk assessment by strength determination over risk estimation based on clinically used trabecular bone density measurements are discussed. (author). 18 refs.; 4 figs

  9. The GeoSteiner software package for computing Steiner trees in the plane

    Juhl, Daniel; Warme, David M.; Winter, Pawel

    The GeoSteiner software package has for more than 10 years been the fastest (publicly available) program for computing exact solutions to Steiner tree problems in the plane. The computational study by Warme, Winter and Zachariasen, published in 2000, documented the performance of the GeoSteiner approach, allowing the exact solution of Steiner tree problems with more than a thousand terminals. Since then, a number of algorithmic enhancements have improved the performance of the software package significantly. In this computational study we run the current code on the largest problem instances from the 2000-study, and on a number of larger problem instances. The computational study is performed using both the publicly available GeoSteiner 3.1 code base, and the commercial GeoSteiner 4.0 code base.
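GeoSteiner computes exact solutions combinatorially, but the geometric core of the problem, finding a point minimising total distance to a set of terminals, can be illustrated numerically with Weiszfeld's iteration; for three terminals whose triangle has all angles below 120°, this geometric median is the Steiner point. The sketch below is only an illustration and is not part of GeoSteiner.

```python
from math import hypot

def geometric_median(points, iters=200):
    # Weiszfeld iteration: repeatedly take the distance-weighted average
    # of the terminals, starting from their centroid.
    x = sum(p[0] for p in points) / len(points)
    y = sum(p[1] for p in points) / len(points)
    for _ in range(iters):
        ws = [1.0 / max(hypot(x - px, y - py), 1e-12) for px, py in points]
        x = sum(w * px for w, (px, py) in zip(ws, points)) / sum(ws)
        y = sum(w * py for w, (px, py) in zip(ws, points)) / sum(ws)
    return x, y

# For an equilateral triangle the Fermat (Steiner) point is the centroid
fx, fy = geometric_median([(0.0, 0.0), (1.0, 0.0), (0.5, 3 ** 0.5 / 2)])
```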

  10. Grow--a computer subroutine that projects the growth of trees in the Lake States' forests.

    Gary J. Brand

    1981-01-01

    A computer subroutine, Grow, has been written in 1977 Standard FORTRAN to implement a distance-independent, individual tree growth model for Lake States' forests. Grow is a small and easy-to-use version of the growth model. All the user has to do is write a calling program to read initial conditions, call Grow, and summarize the results.

  11. Apps for Angiosperms: The Usability of Mobile Computers and Printed Field Guides for UK Wild Flower and Winter Tree Identification

    Stagg, Bethan C.; Donkin, Maria E.

    2017-01-01

    We investigated the usability of mobile computers and field guide books with adult botanical novices, for the identification of wildflowers and of deciduous trees in winter. Identification accuracy was significantly higher for wildflowers using a mobile computer app than field guide books, but significantly lower for deciduous trees. User preference…

  12. Decision trees and integrated features for computer aided mammographic screening

    Kegelmeyer, W.P. Jr.; Groshong, B.; Allmen, M.; Woods, K.

    1997-02-01

    Breast cancer is a serious problem, which in the United States causes 43,000 deaths a year, eventually striking 1 in 9 women. Early detection is the only effective countermeasure, and mass mammography screening is the only reliable means for early detection. Mass screening has many shortcomings that could be addressed by a computer-aided mammographic screening system. Accordingly, we have applied the pattern recognition methods developed in earlier investigations of spiculated lesions in mammograms to the detection of microcalcifications and circumscribed masses, generating new, more rigorous and uniform methods for the detection of both those signs. We have also improved the pattern recognition methods themselves, through the development of a new approach to combinations of multiple classifiers.

  13. VNAP2: a computer program for computation of two-dimensional, time-dependent, compressible, turbulent flow

    Cline, M.C.

    1981-08-01

    VNAP2 is a computer program for calculating turbulent (as well as laminar and inviscid), steady, and unsteady flow. VNAP2 solves the two-dimensional, time-dependent, compressible Navier-Stokes equations. The turbulence is modeled with either an algebraic mixing-length model, a one-equation model, or the Jones-Launder two-equation model. The geometry may be a single- or a dual-flowing stream. The interior grid points are computed using the unsplit MacCormack scheme. Two options to speed up the calculations for high Reynolds number flows are included. The boundary grid points are computed using a reference-plane-characteristic scheme with the viscous terms treated as source functions. An explicit artificial viscosity is included for shock computations. The fluid is assumed to be a perfect gas. The flow boundaries may be arbitrary curved solid walls, inflow/outflow boundaries, or free-jet envelopes. Typical problems that can be solved concern nozzles, inlets, jet-powered afterbodies, airfoils, and free-jet expansions. The accuracy and efficiency of the program are shown by calculations of several inviscid and turbulent flows. The program and its use are described completely, and six sample cases and a code listing are included.

  14. A New Minimum Trees-Based Approach for Shape Matching with Improved Time Computing: Application to Graphical Symbols Recognition

    Franco, Patrick; Ogier, Jean-Marc; Loonis, Pierre; Mullot, Rémy

    Recently we have developed a model for shape description and matching. Based on minimum spanning tree construction and specific stages such as the mixture, it appears to have many desirable properties. Recognition invariance under shifted, rotated and noisy shapes was checked through medium-scale tests on the GREC symbol reference database. Even though extracting the topology of a shape by mapping the shortest path connecting all the pixels is powerful, the construction of the graph incurs an expensive algorithmic cost. In this article we discuss ways to reduce computation time. An alternative solution based on image compression concepts is provided and evaluated. The model no longer operates in the image space but in a compact space, namely the Discrete Cosine space. The use of the block discrete cosine transform is discussed and justified. The experimental results on the GREC2003 database show that the proposed method is characterized by good discrimination power and real robustness to noise, with acceptable computation time.
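
    As a rough illustration of the compact-space idea above, the block discrete cosine transform that maps an image patch into Discrete Cosine space can be sketched as follows. This is a naive textbook DCT-II, not code from the paper, and the `dct2` helper name is mine.

    ```python
    import math

    def dct2(block):
        """Naive 2-D DCT-II of a square block (O(n^4); fine for 8x8 patches)."""
        n = len(block)
        out = [[0.0] * n for _ in range(n)]
        for u in range(n):
            for v in range(n):
                s = 0.0
                for x in range(n):
                    for y in range(n):
                        s += (block[x][y]
                              * math.cos(math.pi * (2 * x + 1) * u / (2 * n))
                              * math.cos(math.pi * (2 * y + 1) * v / (2 * n)))
                cu = math.sqrt(1.0 / n) if u == 0 else math.sqrt(2.0 / n)
                cv = math.sqrt(1.0 / n) if v == 0 else math.sqrt(2.0 / n)
                out[u][v] = cu * cv * s
        return out

    # A flat 8x8 block concentrates all energy in the DC coefficient,
    # which is why truncating high-frequency terms yields a compact code.
    flat = [[1.0] * 8 for _ in range(8)]
    coeffs = dct2(flat)
    print(round(coeffs[0][0], 6))  # DC term = 8.0 for an all-ones block
    ```

    Keeping only the low-frequency coefficients of each block gives the compact representation in which the matching can then operate.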

  15. Scoring and ranking of metabolic trees to computationally ...

    Increasing awareness about endocrine disrupting chemicals (EDCs) in the environment has driven concern about their potential impact on human health and wildlife. Tens of thousands of natural and synthetic xenobiotics are presently in commerce with little to no toxicity data, and therefore uncertainty about their impact on estrogen receptor (ER) signaling pathways and other toxicity endpoints. As such, there is a need for strategies that make use of available data to prioritize chemicals for testing. One of the major achievements within the EPA's Endocrine Disruptor Screening Program (EDSP) was the network model combining 18 ER in vitro assays from ToxCast to predict in vivo estrogenic activity. This model overcomes the limitations of single in vitro assays at different steps of the ER pathway. However, it lacks many relevant features required to estimate safe exposure levels, and the composite assays do not consider the complex metabolic processes that might produce bioactive entities in a living system. This problem is typically addressed using in vivo assays. The aim of this work is to design a computational and in vitro approach to prioritize compounds and perform a quantitative safety assessment. To this end, we pursue a tiered approach taking into account the bioactivity and bioavailability of chemicals and their metabolites using a human uterine epithelial cell (Ishikawa)-based assay. This biologically relevant fit-for-purpose assay was designed to quantitati

  16. Investigations of Electrical Trees in the Inner Layer of XLPE Cable Insulation Using Computer-aided Image Recording Monitoring

    Xie, Ansheng; Zheng, Xiaoquan; Li, Shengtao; Chen, George

    2010-01-01

    Using a computer-aided image recording monitoring system, extensive measurements have been performed in the inner layer of 66 kV cross-linked polyethylene (XLPE) cables. It has been found that there are three kinds of electrical trees in the samples: the branch-like tree, the bush-like tree, and the mixed tree, which is a mixture of the two. When the applied voltage frequency is less than or equal to 250 Hz, only the mixed tree appears in XLPE samples; when the frequency is greater than…

  17. A practical O(n log2 n) time algorithm for computing the triplet distance on binary trees

    Sand, Andreas; Pedersen, Christian Nørgaard Storm; Mailund, Thomas

    2013-01-01

    The triplet distance is a distance measure that compares two rooted trees on the same set of leaves by enumerating all subsets of three leaves and counting how often the induced topologies of the tree are equal or different. We present an algorithm that computes the triplet distance between two rooted binary trees in time O(n log² n). The algorithm is related to an algorithm for computing the quartet distance between two unrooted binary trees in time O(n log n). While the quartet distance algorithm has a very severe overhead in the asymptotic time complexity that makes it impractical compared…
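
    For intuition, the triplet distance can be computed by brute force by examining every leaf triple directly; for each triple, the induced topology of a rooted binary tree is determined by which pair of leaves has the deepest lowest common ancestor. A small sketch (the nested-tuple tree encoding and helper names are mine, not the paper's):

    ```python
    from itertools import combinations

    def parent_map(tree, parent=None, pm=None):
        """Map every node (subtree tuple or leaf label) to its parent."""
        if pm is None:
            pm = {}
        pm[tree] = parent
        if isinstance(tree, tuple):
            for child in tree:
                parent_map(child, tree, pm)
        return pm

    def depth(node, pm):
        d = 0
        while pm[node] is not None:
            node = pm[node]
            d += 1
        return d

    def lca_depth(x, y, pm):
        """Depth of the lowest common ancestor of leaves x and y."""
        anc = set()
        n = x
        while n is not None:
            anc.add(id(n))
            n = pm[n]
        n = y
        while id(n) not in anc:
            n = pm[n]
        return depth(n, pm)

    def resolved_pair(triple, pm):
        """The pair grouped together in the induced triple topology."""
        pairs = list(combinations(triple, 2))
        depths = [lca_depth(x, y, pm) for x, y in pairs]
        return frozenset(pairs[max(range(3), key=lambda i: depths[i])])

    def triplet_distance(t1, t2):
        pm1, pm2 = parent_map(t1), parent_map(t2)
        leaves = sorted(n for n in pm1 if not isinstance(n, tuple))
        return sum(1 for triple in combinations(leaves, 3)
                   if resolved_pair(triple, pm1) != resolved_pair(triple, pm2))

    # Trees ((a,b),c),d and ((a,c),b),d disagree only on the triple {a,b,c}.
    t1 = ((('a', 'b'), 'c'), 'd')
    t2 = ((('a', 'c'), 'b'), 'd')
    print(triplet_distance(t1, t2))  # → 1
    ```

    This baseline enumerates all O(n³) triples, which is exactly the cost the paper's O(n log² n) algorithm avoids.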

  18. Bayesian model calibration of computational models in velocimetry diagnosed dynamic compression experiments.

    Brown, Justin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hund, Lauren [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-02-01

    Dynamic compression experiments are being performed on complicated materials using increasingly complex drivers. The data produced in these experiments are beginning to reach a regime where traditional analysis techniques break down, requiring the solution of an inverse problem. A common measurement in dynamic experiments is an interface velocity as a function of time, and often this functional output can be simulated using a hydrodynamics code. Bayesian model calibration is a statistical framework to estimate inputs into a computational model in the presence of multiple uncertainties, making it well suited to measurements of this type. In this article, we apply Bayesian model calibration to high pressure (250 GPa) ramp compression measurements in tantalum. We address several issues specific to this calibration, including the functional nature of the output as well as parameter and model discrepancy identifiability. Specifically, we propose scaling the likelihood function by an effective sample size rather than modeling the autocorrelation function to accommodate the functional output, and propose sensitivity analyses using the notion of 'modularization' to assess the impact of experiment-specific nuisance input parameters on estimates of material properties. We conclude that the proposed Bayesian model calibration procedure results in simple, fast, and valid inferences on the equation of state parameters for tantalum.

  19. An enhanced technique for mobile cloudlet offloading with reduced computation using compression in the cloud

    Moro, A. C.; Nadesh, R. K.

    2017-11-01

    The cloud computing paradigm has transformed the way we do business in today's world. Services on the cloud have come a long way since just providing basic storage or software on demand. One of the fastest-growing areas within it is mobile cloud computing. With the option of offloading now available, mobile users can offload entire applications onto cloudlets. Given the problems regarding the availability and limited storage capacity of these mobile cloudlets, it becomes difficult for the mobile user to decide when to use local memory and when to use the cloudlets. Hence, we consider a fast algorithm that decides, based on an offloading probability, whether the mobile user should use a cloudlet or rely on local memory. We have partially implemented the algorithm, which decides whether the task can be carried out locally or given to a cloudlet. Because performing the complete computation is a burden on the mobile device, we offload it onto a cloud in our paper. We also apply a file compression technique before sending the file to the cloud to further reduce the load.
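
    The two ingredients described above, a probabilistic offload decision followed by compression before upload, can be sketched as follows. The function names, thresholds, and probability value are illustrative assumptions, not the paper's actual algorithm.

    ```python
    import random
    import zlib

    def should_offload(task_size_kb, cloudlet_available, p_offload=0.7,
                       local_limit_kb=512, rng=None):
        """Hypothetical decision rule: offload when the task exceeds local
        memory, never offload when no cloudlet is reachable, and otherwise
        flip a coin weighted by the offloading probability."""
        rng = rng or random.Random(42)
        if task_size_kb > local_limit_kb:
            return True
        if not cloudlet_available:
            return False
        return rng.random() < p_offload

    def compress_for_cloud(payload: bytes) -> bytes:
        """Compress before upload to cut the transfer volume (the paper's
        final step)."""
        return zlib.compress(payload, 9)

    payload = b"sensor-reading,42;" * 1000   # repetitive data compresses well
    packed = compress_for_cloud(payload)
    print(len(payload), len(packed))         # compressed size is far smaller
    ```

    In practice the decision inputs (bandwidth, cloudlet load, energy) would come from measurements rather than constants.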

  20. Compression-recovery model of absorptive glass mat (AGM) separator guided by X-ray micro-computed tomography analysis

    Kameswara Rao, P. V.; Rawal, Amit; Kumar, Vijay; Rajput, Krishn Gopal

    2017-10-01

    Absorptive glass mat (AGM) separators play a key role in enhancing the cycle life of the valve regulated lead acid (VRLA) batteries by maintaining the elastic characteristics under a defined level of compression force with the plates of the electrodes. Inevitably, there are inherent challenges to maintain the required level of compression characteristics of AGM separators during the charge and discharge of the battery. Herein, we report a three-dimensional (3D) analytical model for predicting the compression-recovery behavior of AGM separators by formulating a direct relationship with the constituent fiber and structural parameters. The analytical model of compression-recovery behavior of AGM separators has successfully included the fiber slippage criterion and internal friction losses. The presented work uses, for the first time, 3D data of fiber orientation from X-ray micro-computed tomography, for predicting the compression-recovery behavior of AGM separators. A comparison has been made between the theoretical and experimental results of compression-recovery behavior of AGM samples with defined fiber orientation characteristics. In general, the theory agreed reasonably well with the experimental results of AGM samples in both dry and wet states. Through theoretical modeling, fiber volume fraction was established as one of the key structural parameters that modulates the compression hysteresis of an AGM separator.

  1. Composite self-expanding bioresorbable prototype stents with reinforced compression performance for congenital heart disease application: Computational and experimental investigation.

    Zhao, Fan; Xue, Wen; Wang, Fujun; Liu, Laijun; Shi, Haoqin; Wang, Lu

    2018-08-01

    Stents are vital devices to treat vascular stenosis in pediatric patients with congenital heart disease. Bioresorbable stents (BRSs) have been applied to reduce challenging complications caused by permanent metal stents. However, there remains an almost total lack of BRSs with satisfactory compression performance specifically for children with congenital heart disease, leading to suboptimal outcomes. In this work, composite bioresorbable prototype stents with superior compression resistance were designed by braiding and annealing technology, incorporating poly(p-dioxanone) (PPDO) monofilaments and polycaprolactone (PCL) multifilament. Stent prototype compression properties were investigated. The results revealed that the novel composite prototype stents showed superior compression force compared to the control ones, as well as recovery ability. Furthermore, deformation mechanisms were analyzed by computational simulation, which revealed that bonded interlacing points among yarns play an important role. This research has important clinical implications for bioresorbable stent manufacture and provides an innovative stent design for further study.

  2. An efficient computational method for global sensitivity analysis and its application to tree growth modelling

    Wu, Qiong-Li; Cournède, Paul-Henry; Mathieu, Amélie

    2012-01-01

    Global sensitivity analysis has a key role to play in the design and parameterisation of functional–structural plant growth models, which combine the description of plant structural development (organogenesis and geometry) and functional growth (biomass accumulation and allocation). In this study we are particularly interested in Sobol's method, which decomposes the variance of the output of interest into terms due to individual parameters but also to interactions between parameters. Such information is crucial for systems with potentially high levels of non-linearity and interactions between processes, like plant growth. However, the computation of Sobol's indices relies on Monte Carlo sampling and re-sampling, whose costs can be very high, especially when model evaluation is also expensive, as for tree models. In this paper, we thus propose a new method to compute Sobol's indices, inspired by Homma–Saltelli, which slightly improves the use of model evaluations, and we then derive, for this generic type of computational method, an estimator of the error of the sensitivity indices with respect to the sampling size. This allows detailed control of the balance between accuracy and computing time. Numerical tests on a simple non-linear model are convincing, and the method is finally applied to a functional–structural model of tree growth, GreenLab, whose particularity is the strong level of interaction between plant functioning and organogenesis. - Highlights: ► We study global sensitivity analysis in the context of functional–structural plant modelling. ► A new estimator based on the Homma–Saltelli method is proposed to compute Sobol indices, based on a more balanced re-sampling strategy. ► The estimation accuracy of sensitivity indices for a class of Sobol estimators can be controlled by error analysis. ► The proposed algorithm is implemented efficiently to compute Sobol indices for a complex tree growth model.
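
    The pick-and-freeze Monte Carlo scheme underlying such estimators can be sketched as follows. This is the basic Homma–Saltelli-style first-order estimator, not the paper's improved version, and the linear test model is my own choice because its indices are known analytically.

    ```python
    import random

    def sobol_first_order(f, dim, n=50000, seed=0):
        """Monte Carlo estimate of Sobol first-order indices using the
        pick-and-freeze (Homma-Saltelli) construction: two independent
        sample matrices A and B, plus hybrid matrices sharing one column."""
        rng = random.Random(seed)
        A = [[rng.random() for _ in range(dim)] for _ in range(n)]
        B = [[rng.random() for _ in range(dim)] for _ in range(n)]
        yA = [f(a) for a in A]
        yB = [f(b) for b in B]
        mA, mB = sum(yA) / n, sum(yB) / n
        var = sum((y - mA) ** 2 for y in yA) / n
        indices = []
        for i in range(dim):
            # B with column i taken from A: freezes input i, resamples the rest.
            yC = [f(b[:i] + [a[i]] + b[i + 1:]) for a, b in zip(A, B)]
            cov = sum(ya * yc for ya, yc in zip(yA, yC)) / n - mA * mB
            indices.append(cov / var)
        return indices

    # Linear test model Y = 2*X1 + X2 with X uniform on [0,1]:
    # analytically S1 = 4/5 and S2 = 1/5.
    s = sobol_first_order(lambda x: 2 * x[0] + x[1], dim=2)
    print([round(v, 2) for v in s])  # close to [0.8, 0.2]
    ```

    The (dim + 2) * n model evaluations this requires are exactly the cost that motivates the paper's more balanced re-sampling strategy for expensive tree models.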

  3. Computational analyses of spectral trees from electrospray multi-stage mass spectrometry to aid metabolite identification.

    Cao, Mingshu; Fraser, Karl; Rasmussen, Susanne

    2013-10-31

    Mass spectrometry coupled with chromatography has become the major technical platform in metabolomics. Aided by peak detection algorithms, the detected signals are characterized by mass-over-charge ratio (m/z) and retention time. Chemical identities often remain elusive for the majority of the signals. Multi-stage mass spectrometry based on electrospray ionization (ESI) allows collision-induced dissociation (CID) fragmentation of selected precursor ions. These fragment ions can assist in structural inference for metabolites of low molecular weight. Computational investigations of fragmentation spectra have increasingly received attention in metabolomics, and various public databases house such data. We have developed an R package, "iontree", that can capture, store and analyze MS2 and MS3 mass spectral data from high-throughput metabolomics experiments. The package includes functions for ion tree construction, an algorithm (distMS2) for MS2 spectral comparison, and tools for building platform-independent ion tree (MS2/MS3) libraries. We have demonstrated the use of the package for the systematic analysis and annotation of fragmentation spectra collected on various metabolomics platforms, including direct infusion mass spectrometry and liquid chromatography coupled with either low-resolution or high-resolution mass spectrometry. Assisted by the developed computational tools, we have demonstrated that spectral trees can provide informative evidence, complementary to retention time and accurate mass, to aid in annotating unknown peaks. These experimental spectral trees, once subjected to a quality control process, can be used for querying public MS2 databases or for de novo interpretation. The putatively annotated spectral trees can be readily incorporated into reference libraries for routine identification of metabolites.

  4. Computer aided process planning system based on workflow technology and integrated bill of material tree

    LU Chun-guang; MENG Li-li

    2006-01-01

    Procedures for process design and the management of process data over the product life cycle are extremely important in a Computer Aided Process Planning (CAPP) system, but traditional CAPP systems have many shortcomings in these respects. To address these issues, the application of workflow technology in a CAPP system based on a web-integrated Bill of Material (BOM) tree is discussed, and the concept of an integrated BOM tree is put forward. Taking the integrated BOM as the thread, the CAPP systematic technological process is analyzed. The function, system architecture, and implementation mechanism of a CAPP system based on the Browser/Server and Client/Server models are expatiated. On this basis, the key technologies of the workflow management device are analyzed. Finally, the implementation mechanism of the integrated BOM tree is analyzed from the viewpoints of material information encoding, organization node design of the integrated BOM tree, transformation from Engineering BOM (EBOM) to Process BOM (PBOM), and the programming implementation technology.

  5. High performance optical encryption based on computational ghost imaging with QR code and compressive sensing technique

    Zhao, Shengmei; Wang, Le; Liang, Wenqiang; Cheng, Weiwen; Gong, Longyan

    2015-10-01

    In this paper, we propose a high performance optical encryption (OE) scheme based on computational ghost imaging (GI) with a QR code and the compressive sensing (CS) technique, named the QR-CGI-OE scheme. N random phase screens, generated by Alice, form a secret key shared with the authorized user, Bob. The information is first encoded by Alice with a QR code, and the QR-coded image is then encrypted with the aid of a computational ghost imaging optical system. Here, measurement results from the GI optical system's bucket detector are the encrypted information and are transmitted to Bob. With the key, Bob decrypts the encrypted information to obtain the QR-coded image with GI and CS techniques, and further recovers the information by QR decoding. The experimental and numerically simulated results show that authorized users can recover the original image completely, whereas eavesdroppers cannot acquire any information about the image even when the eavesdropping ratio (ER) is up to 60% at the given number of measurements. For the proposed scheme, the number of bits sent from Alice to Bob is reduced considerably and the robustness is enhanced significantly. Meanwhile, the number of measurements in the GI system is reduced and the quality of the reconstructed QR-coded image is improved.

  6. Computing Maximum Cardinality Matchings in Parallel on Bipartite Graphs via Tree-Grafting

    Azad, Ariful; Buluç, Aydın; Pothen, Alex

    2016-01-01

    It is difficult to obtain high performance when computing matchings on parallel processors because matching algorithms explicitly or implicitly search for paths in the graph, and when these paths become long, there is little concurrency. In spite of this limitation, we present a new algorithm and its shared-memory parallelization that achieves good performance and scalability in computing maximum cardinality matchings in bipartite graphs. This algorithm searches for augmenting paths via specialized breadth-first searches (BFS) from multiple source vertices, hence creating more parallelism than single source algorithms. Algorithms that employ multiple-source searches cannot discard a search tree once no augmenting path is discovered from the tree, unlike algorithms that rely on single-source searches. We describe a novel tree-grafting method that eliminates most of the redundant edge traversals resulting from this property of multiple-source searches. We also employ the recent direction-optimizing BFS algorithm as a subroutine to discover augmenting paths faster. Our algorithm compares favorably with the current best algorithms in terms of the number of edges traversed, the average augmenting path length, and the number of iterations. Here, we provide a proof of correctness for our algorithm. Our NUMA-aware implementation is scalable to 80 threads of an Intel multiprocessor and to 240 threads on an Intel Knights Corner coprocessor. On average, our parallel algorithm runs an order of magnitude faster than the fastest algorithms available. The performance improvement is more significant on graphs with small matching number.
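
    The single-source augmenting-path baseline that the multi-source BFS and tree-grafting approach improves upon can be sketched as follows (Kuhn's algorithm; the graph data is illustrative, not from the paper):

    ```python
    def max_bipartite_matching(adj, n_right):
        """Sequential augmenting-path matching (Kuhn's algorithm).
        adj[u] lists the right-side neighbors of left vertex u."""
        match_right = [-1] * n_right  # right vertex -> matched left vertex

        def try_augment(u, seen):
            for v in adj[u]:
                if v in seen:
                    continue
                seen.add(v)
                # v is free, or its current partner can be re-matched elsewhere
                if match_right[v] == -1 or try_augment(match_right[v], seen):
                    match_right[v] = u
                    return True
            return False

        size = 0
        for u in range(len(adj)):
            if try_augment(u, set()):
                size += 1
        return size

    # Left vertices 0-2, right vertices 0-2; a perfect matching exists.
    adj = [[0, 1], [0], [1, 2]]
    print(max_bipartite_matching(adj, 3))  # → 3
    ```

    Each call to `try_augment` is a depth-first search for an augmenting path from one source; the paper's algorithm instead launches BFS from many free vertices at once and grafts surviving search trees to avoid redundant edge traversals.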

  7. A robust and accurate approach to computing compressible multiphase flow: Stratified flow model and AUSM+-up scheme

    Chang, Chih-Hao; Liou, Meng-Sing

    2007-01-01

    In this paper, we propose a new approach to compute the compressible multifluid equations. Firstly, a single-pressure compressible multifluid model based on the stratified flow model is proposed. The stratified flow model, which defines different fluids in separated regions, is shown to be amenable to the finite volume method. We can apply the conservation law to each subregion and obtain a set of balance equations. Secondly, the AUSM+ scheme, which was originally designed for compressible gas flow, is extended to solve compressible liquid flows. By introducing additional dissipation terms into the numerical flux, the new scheme, called AUSM+-up, can be applied to both liquid and gas flows. Thirdly, the contribution to the numerical flux due to interactions between different phases is taken into account and solved by the exact Riemann solver. We will show that the proposed approach yields an accurate and robust method for computing compressible multiphase flows involving discontinuities, such as shock waves and fluid interfaces. Several one-dimensional test problems are used to demonstrate the capability of our method, including Ransom's water faucet problem and the air-water shock tube problem. Finally, several two-dimensional problems will show the capability to capture enormous detail and complicated wave patterns in flows having large disparities in fluid density and velocities, such as interactions between a water shock wave and an air bubble, between an air shock wave and water column(s), and underwater explosion.

  8. A computer simulation approach to quantify the true area and true area compressibility modulus of biological membranes

    Chacón, Enrique; Tarazona, Pedro; Bresme, Fernando

    2015-01-01

    We present a new computational approach to quantify the area per lipid and the area compressibility modulus of biological membranes. Our method relies on the analysis of the membrane fluctuations using our recently introduced coupled undulatory (CU) mode [Tarazona et al., J. Chem. Phys. 139, 094902 (2013)], which provides excellent estimates of the bending modulus of model membranes. Unlike the projected area, widely used in computer simulations of membranes, the CU area is thermodynamically consistent. This new area definition makes it possible to accurately estimate the area of the undulating bilayer, and the area per lipid, by excluding any contributions related to phospholipid protrusions. We find that the area per phospholipid and the area compressibility modulus feature a negligible dependence on system size, making it possible to compute them using truly small bilayers involving a few hundred lipids. The area compressibility modulus obtained from the analysis of the CU area fluctuations is fully consistent with the Hooke's law route. Unlike existing methods, our approach relies on a single simulation, and no a priori knowledge of the bending modulus is required. We illustrate our method by analyzing 1-palmitoyl-2-oleoyl-sn-glycero-3-phosphocholine bilayers using the coarse-grained MARTINI force-field. The area per lipid and the area compressibility modulus obtained with our method and the MARTINI force-field are consistent with previous studies of these bilayers.
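
    The fluctuation route to the area compressibility modulus rests on the standard relation K_A = k_B T ⟨A⟩ / ⟨δA²⟩. A minimal sketch of that formula applied to a generic area time series in reduced units follows; the CU-mode area extraction itself is not reproduced here, and the synthetic series is mine.

    ```python
    def area_compressibility_modulus(areas, kBT=1.0):
        """Fluctuation formula K_A = kBT * <A> / var(A), applied to a
        time series of instantaneous membrane areas."""
        n = len(areas)
        mean_a = sum(areas) / n
        var_a = sum((a - mean_a) ** 2 for a in areas) / n
        return kBT * mean_a / var_a

    # Synthetic series fluctuating around <A> = 100 with unit variance.
    series = [99.0, 101.0, 99.0, 101.0]
    print(area_compressibility_modulus(series))  # → 100.0
    ```

    The point of the paper is which area enters this formula: using the CU area instead of the projected area removes the protrusion contributions and makes the estimate consistent across system sizes.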

  9. Fast and low-dose computed laminography using compressive sensing based technique

    Abbas, Sajid; Park, Miran; Cho, Seungryong

    2015-03-01

    Computed laminography (CL) is well known for inspecting microstructures in materials, weldments, and soldering defects in high-density packed components or multilayer printed circuit boards. The overload problem on the x-ray tube and gross failure of radio-sensitive electronic devices during a scan are among the important issues in CL which need to be addressed. Sparse-view CL can be one viable option to overcome such issues. In this work a numerical aluminum welding phantom was simulated to collect sparsely sampled projection data at only 40 views using a conventional CL scanning scheme, i.e., an oblique scan. A compressive-sensing inspired total-variation (TV) minimization algorithm was utilized to reconstruct the images. It is found that the images reconstructed using sparse-view data are visually comparable with the images reconstructed using the full scan data set, i.e., 360 views at regular intervals. We have quantitatively confirmed that tiny structures such as copper and tungsten slags, and copper flakes, in the reconstructed images from sparsely sampled data are comparable with the corresponding structures present in the fully sampled case. A blurring effect can be seen near the edges of a few pores at the bottom of the images reconstructed from sparsely sampled data, although the overall image quality is reasonable for fast and low-dose NDT.

  12. Graded compression ultrasonography and computed tomography in acute colonic diverticulitis: Meta-analysis of test accuracy

    Lameris, Wytze; Randen, Adrienne van; Bipat, Shandra; Stoker, Jaap; Bossuyt, Patrick M.M.; Boermeester, Marja A.

    2008-01-01

    The purpose was to investigate the diagnostic accuracy of graded compression ultrasonography (US) and computed tomography (CT) in diagnosing acute colonic diverticulitis (ACD) in suspected patients. We performed a systematic review and meta-analysis of the accuracy of CT and US in diagnosing ACD. Study quality was assessed with the QUADAS tool. Summary estimates of sensitivity and specificity were calculated using a bivariate random effects model. Six US studies evaluated 630 patients, and eight CT studies evaluated 684 patients. Overall, their quality was moderate. We did not identify meaningful sources of heterogeneity in the study results. Summary sensitivity estimates were 92% (95% CI: 80%-97%) for US versus 94% (95% CI: 87%-97%) for CT (p = 0.65). Summary specificity estimates were 90% (95% CI: 82%-95%) for US versus 99% (95% CI: 90%-100%) for CT (p = 0.07). For the identification of alternative diseases, sensitivity ranged between 33% and 78% for US and between 50% and 100% for CT. The best currently available evidence shows no statistically significant difference in the accuracy of US and CT in diagnosing ACD. Therefore, both US and CT can be used as an initial diagnostic tool until new evidence is brought forward. However, CT is more likely to identify alternative diseases. (orig.)

  13. Computing the Stretch Factor of Paths, Trees, and Cycles in Weighted Fixed Orientation Metrics

    Wulff-Nilsen, Christian

    2008-01-01

    Let G be a graph embedded in the L_1-plane. The stretch factor of G is the maximum, over all pairs of distinct vertices p and q of G, of the ratio L_1^G(p,q)/L_1(p,q), where L_1^G(p,q) is the L_1-distance in G between p and q. We show how to compute the stretch factor of an n-vertex path in O(n (log n)^2) worst-case time and O(n) space, and we mention generalizations to trees and cycles, to general weighted fixed orientation metrics, and to higher dimensions.
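
    The quantity defined above can be checked against a naive quadratic-time baseline. The following sketch handles the path case in the L_1 metric; the paper's contribution is computing the same value in O(n (log n)^2) time, which this baseline makes no attempt at.

```python
def l1(p, q):
    """L1 (Manhattan) distance between two points in the plane."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def path_stretch_factor(points):
    """Naive O(n^2) stretch factor of the path p_0 - p_1 - ... - p_{n-1}
    embedded in the L1 plane."""
    n = len(points)
    # prefix[i] = L1 length of the path from p_0 to p_i
    prefix = [0.0] * n
    for i in range(1, n):
        prefix[i] = prefix[i - 1] + l1(points[i - 1], points[i])
    best = 1.0
    for i in range(n):
        for j in range(i + 1, n):
            d = l1(points[i], points[j])
            if d > 0:
                best = max(best, (prefix[j] - prefix[i]) / d)
    return best
```

A U-shaped path, for instance, has a large stretch factor between its endpoints even though every consecutive pair is tight.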

  14. Binary Decision Trees for Preoperative Periapical Cyst Screening Using Cone-beam Computed Tomography.

    Pitcher, Brandon; Alaqla, Ali; Noujeim, Marcel; Wealleans, James A; Kotsakis, Georgios; Chrepa, Vanessa

    2017-03-01

    Cone-beam computed tomographic (CBCT) analysis allows for 3-dimensional assessment of periradicular lesions and may facilitate preoperative periapical cyst screening. The purpose of this study was to develop and assess the predictive validity of a cyst screening method based on CBCT volumetric analysis alone or combined with designated radiologic criteria. Three independent examiners evaluated 118 presurgical CBCT scans from cases that underwent apicoectomies and had an accompanying gold standard histopathological diagnosis of either a cyst or granuloma. Lesion volume, density, and specific radiologic characteristics were assessed using specialized software. Logistic regression models with histopathological diagnosis as the dependent variable were constructed for cyst prediction, and receiver operating characteristic curves were used to assess the predictive validity of the models. A conditional inference binary decision tree based on a recursive partitioning algorithm was constructed to facilitate preoperative screening. Interobserver agreement was excellent for volume and density, but it varied from poor to good for the radiologic criteria. Volume and root displacement were strong predictors for cyst screening in all analyses. The binary decision tree classifier determined that if the volume of the lesion was >247 mm³, there was an 80% probability of a cyst. If volume was cyst probability was 60% (78% accuracy). The good accuracy and high specificity of the decision tree classifier renders it a useful preoperative cyst screening tool that can aid in clinical decision making but not a substitute for definitive histopathological diagnosis after biopsy. Confirmatory studies are required to validate the present findings. Published by Elsevier Inc.

  15. Study of three-dimensional Rayleigh--Taylor instability in compressible fluids through level set method and parallel computation

    Li, X.L.

    1993-01-01

    Computation of three-dimensional (3-D) Rayleigh--Taylor instability in compressible fluids is performed on a MIMD computer. A second-order TVD scheme is applied with a fully parallelized algorithm to the 3-D Euler equations. The computational program is implemented for a 3-D study of bubble evolution in the Rayleigh--Taylor instability with varying bubble aspect ratio and for large-scale simulation of a 3-D random fluid interface. The numerical solution is compared with the experimental results by Taylor

  16. Compression of computer generated phase-shifting hologram sequence using AVC and HEVC

    Xing, Yafei; Pesquet-Popescu, Béatrice; Dufaux, Frederic

    2013-09-01

    With the capability of achieving twice the compression ratio of Advanced Video Coding (AVC) at similar reconstruction quality, High Efficiency Video Coding (HEVC) is expected to become the new leading video coding technique. In order to reduce the storage and transmission burden of digital holograms, in this paper we propose to use HEVC for compressing phase-shifting digital hologram sequences (PSDHS). By simulating phase-shifting digital holography (PSDH) interferometry, interference patterns between illuminated three-dimensional (3D) virtual objects and the stepwise phase-changed reference wave are generated as digital holograms. The hologram sequences are obtained by the movement of the virtual objects and compressed by AVC and HEVC. The experimental results show that AVC and HEVC are efficient at compressing PSDHS, with HEVC giving better performance. Good compression rate and reconstruction quality can be obtained at bitrates above 15,000 kbps.

  17. A new fast algorithm for solving the minimum spanning tree problem based on DNA molecules computation.

    Wang, Zhaocai; Huang, Dongmei; Meng, Huajun; Tang, Chengpei

    2013-10-01

    The minimum spanning tree (MST) problem is to find a minimum-weight edge subset connecting all the vertices of a given undirected graph. It is a vitally important problem in graph theory and applied mathematics, with numerous real-life applications. Moreover, in previous studies DNA molecular operations were usually used to solve head-to-tail path search problems, and rarely for problems whose solutions are multi-lateral paths, such as the minimum spanning tree problem. In this paper, we present a new fast DNA algorithm for solving the MST problem using DNA molecular operations. For an undirected graph with n vertices and m edges, we design flexible-length DNA strands representing the vertices and edges, take appropriate steps, and obtain the solutions of the MST problem in a proper length range and O(3m+n) time complexity. We extend the application of DNA molecular operations and simultaneously simplify the complexity of the computation. Results of computer simulation experiments show that the proposed method updates some of the best known values in very short time and provides better solution accuracy than existing algorithms. Copyright © 2013 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
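
    For comparison with the DNA-based procedure, the classical silicon-computing baseline that the abstract implicitly measures against is a greedy MST algorithm. A compact sketch of Kruskal's algorithm with a union-find structure (not the paper's method):

```python
def kruskal_mst(n, edges):
    """Classical MST via Kruskal's algorithm.
    n: number of vertices (labelled 0..n-1)
    edges: list of (weight, u, v) tuples for an undirected graph.
    Returns (total MST weight, list of chosen (u, v, weight) edges)."""
    parent = list(range(n))

    def find(x):
        # Union-find root lookup with path halving.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in sorted(edges):          # consider edges lightest-first
        ru, rv = find(u), find(v)
        if ru != rv:                       # keep the edge iff it joins two components
            parent[ru] = rv
            mst.append((u, v, w))
            total += w
    return total, mst
```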

  18. Realization of multi-parameter and multi-state in fault tree computer-aided building software

    Guo Xiaoli; Tong Jiejuan; Xue Dazhi

    2004-01-01

    More than one parameter, and more than one failed state per parameter, are often involved in building a fault tree, so fault tree computer-aided building software must handle multiple parameters and multiple states. The Fault Tree Expert System (FTES) aims to aid the FT-building work for hydraulic systems. This paper describes how multi-parameter and multi-state support is realized in FTES, with focus on the Knowledge Base and the Illation Engine. (author)

  19. A computational fluid dynamics (CFD) study of WEB-treated aneurysms: Can CFD predict WEB "compression" during follow-up?

    Caroff, Jildaz; Mihalea, Cristian; Da Ros, Valerio; Yagi, Takanobu; Iacobucci, Marta; Ikka, Léon; Moret, Jacques; Spelle, Laurent

    2017-07-01

    Recent reports have revealed a worsening of aneurysm occlusion between WEB treatment baseline and angiographic follow-up due to "compression" of the device. We utilized computational fluid dynamics (CFD) in order to determine whether the underlying mechanism of this worsening is flow related. We included data from all consecutive patients treated in our institution with a WEB for unruptured aneurysms located either at the middle cerebral artery or basilar tip. The CFD study was performed using pre-operative 3D rotational angiography. From digital subtraction follow-up angiographies patients were dichotomized into two groups: one with WEB "compression" and one without. We performed statistical analyses to determine a potential correlation between WEB compression and CFD inflow ratio. Between July 2012 and June 2015, a total of 22 unruptured middle cerebral artery or basilar tip aneurysms were treated with a WEB device in our department. Three patients were excluded from the analysis and the mean follow-up period was 17 months. Eleven WEBs presented "compression" during follow-up. Interestingly, device "compression" was statistically correlated to the CFD inflow ratio (P = 0.018), although not to aneurysm volume, aspect ratio or neck size. The mechanisms underlying the worsening of aneurysm occlusion in WEB-treated patients due to device compression are most likely complex as well as multifactorial. However, it is apparent from our pilot study that a high arterial inflow is, at least, partially involved. Further theoretical and animal research studies are needed to increase our understanding of this phenomenon. Copyright © 2017 Elsevier Masson SAS. All rights reserved.

  20. Measurement of spinal canal narrowing, interpedicular widening, and vertebral compression in spinal burst fractures: plain radiographs versus multidetector computed tomography

    Bensch, Frank V.; Koivikko, Mika P.; Koskinen, Seppo K.; Kiuru, Martti J.

    2009-01-01

    To assess the reliability of measurements of spinal canal narrowing, vertebral body compression, and interpedicular widening in burst fractures on radiography compared with multidetector computed tomography (MDCT). Patients with confirmed acute vertebral burst fractures over an interval of 34 months underwent both MDCT and radiography. Measurements of spinal canal narrowing, vertebral body compression, and interpedicular widening from MDCT and radiography were compared. The 108 patients (30 female, 78 male, aged 16-79 years, mean 39 years) had 121 burst fractures. Eleven patients had multiple fractures, of which seven were not contiguous. Measurements showed a strong positive correlation between radiography and MDCT (Spearman's rank sum test: spinal canal narrowing k = 0.50-0.82, vertebral compression k = 0.55-0.72, and interpedicular widening k = 0.81-0.91), except in the cervical spine (P > 0.25) and for interpedicular widening in the thoracic spine (k = 0.35, P = 0.115). The average difference in measurements between the modalities was 3 mm or fewer. Radiography demonstrates interpedicular widening, spinal canal narrowing and vertebral compression with acceptable precision, with the exception of the cervical spine. (orig.)

  1. Effects of polytetrafluoroethylene treatment and compression on gas diffusion layer microstructure using high-resolution X-ray computed tomography

    Khajeh-Hosseini-Dalasm, Navvab; Sasabe, Takashi; Tokumasu, Takashi; Pasaogullari, Ugur

    2014-11-01

    The microstructure of a TGP-H-120 Toray paper gas diffusion layer (GDL) was investigated using a high-resolution X-ray computed tomography (CT) technique, with a resolution of 1.8 μm and a field of view (FOV) of ∼1.8 × 1.8 mm. The images obtained from the tomography scans were further post-processed, and image thresholding and binarization methodologies are presented. The validity of Otsu's thresholding method was examined. Detailed information on the bulk porosity and porosity distribution of the GDL at various polytetrafluoroethylene (PTFE) treatments and uniform/non-uniform compression pressures is provided. A sample holder was designed to investigate the effects of non-uniform compression pressure, which enabled regulating the compression pressure between 0 and 3 MPa at a gas channel/current collector rib configuration. The results show the heterogeneous and anisotropic microstructure of the GDL, the non-uniform distribution of PTFE, and significant microstructural change under uniform/non-uniform compression. These findings provide useful inputs for numerical models to include the effects of microstructural changes in the study of transport phenomena within the GDL and to increase the accuracy and predictability of cell performance.
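
    Since the abstract examines the validity of Otsu's thresholding for binarizing the CT slices, a minimal histogram-based sketch of that method may help readers; 8-bit gray levels are assumed, and this is a textbook version rather than the authors' implementation.

```python
def otsu_threshold(pixels, levels=256):
    """Otsu's method: pick the gray-level threshold that maximizes the
    between-class variance of the two resulting pixel populations."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w0 = sum0 = 0                      # weight and intensity sum of class 0
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w0 += hist[t]
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        sum0 += t * hist[t]
        m0 = sum0 / w0                 # mean of pixels <= t
        m1 = (sum_all - sum0) / w1     # mean of pixels > t
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

Pixels at or below the returned value are classified as pore (or solid, depending on the material's contrast); on a cleanly bimodal histogram the threshold lands between the two modes.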

  2. Computational Prediction of Blood-Brain Barrier Permeability Using Decision Tree Induction

    Jörg Huwyler

    2012-08-01

    Predicting blood-brain barrier (BBB) permeability is essential to drug development, as a molecule cannot exhibit pharmacological activity within the brain parenchyma without first transiting this barrier. Understanding the process of permeation, however, is complicated by a combination of both limited passive diffusion and active transport. Our aim here was to establish predictive models for BBB drug permeation that include both active and passive transport. A database of 153 compounds was compiled using in vivo permeability-surface area product (logPS) values in rats as a quantitative parameter for BBB permeability. The open-source Chemistry Development Kit (CDK) was used to calculate physico-chemical properties and descriptors. Predictive computational models were implemented by machine learning paradigms (decision tree induction) on both descriptor sets. Models with a corrected classification rate (CCR) of 90% were established. Mechanistic insight into BBB transport was provided by an Ant Colony Optimization (ACO)-based binary classifier analysis to identify the most predictive chemical substructures. Decision trees revealed descriptors of lipophilicity (aLogP) and charge (polar surface area), which were also previously described in models of passive diffusion. However, measures of molecular geometry and connectivity were found to be related to an active drug transport component.
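
    As an illustration of the recursive-partitioning idea behind such decision tree models (not the authors' CDK/ACO pipeline), the core step is finding the descriptor threshold that best separates the two permeability classes, for example by minimizing Gini impurity:

```python
def gini(labels):
    """Gini impurity of a list of binary class labels (0/1)."""
    n = len(labels)
    if n == 0:
        return 0.0
    p = sum(labels) / n
    return 2 * p * (1 - p)

def best_split(xs, ys):
    """Find the threshold on a single descriptor xs that minimizes the
    weighted Gini impurity of the two child nodes; ys are 0/1 labels.
    Returns the best midpoint threshold (None if xs has one distinct value)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    best_score, best_thr = float("inf"), None
    for k in range(1, len(xs)):
        i, j = order[k - 1], order[k]
        if xs[i] == xs[j]:
            continue                     # no valid cut between equal values
        thr = (xs[i] + xs[j]) / 2
        left = [ys[m] for m in order[:k]]
        right = [ys[m] for m in order[k:]]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(xs)
        if score < best_score:
            best_score, best_thr = score, thr
    return best_thr
```

A full tree applies this search recursively, one descriptor at a time, until the leaves are (nearly) pure.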

  3. Estimation of air void and aggregate spatial distributions in concrete under uniaxial compression using computer tomography scanning

    Wong, R.C.K.; Chau, K.T.

    2005-01-01

    Normal- and high-strength concrete cylinders (designed compressive strengths of 30 and 90 MPa at 28 days) were loaded uniaxially. A computed tomography (CT) scanning technique was used to examine the evolution of air voids inside the specimens at various loading states up to 85% of the ultimate compressive strength. The normal-strength concrete showed very different changes in internal microstructure compared to the high-strength concrete: there was significant nucleation and growth of air voids in the normal-strength specimen, while the increase in air voids in the high-strength specimen was insignificant. In addition, CT images were used for mapping the aggregate spatial distributions within the specimens. No intrinsic anisotropy was detected from the fabric analysis

  4. Compression of Born ratio for fluorescence molecular tomography/x-ray computed tomography hybrid imaging: methodology and in vivo validation.

    Mohajerani, Pouyan; Ntziachristos, Vasilis

    2013-07-01

    The 360° rotation geometry of the hybrid fluorescence molecular tomography/x-ray computed tomography modality allows for acquisition of very large datasets, which pose numerical limitations on the reconstruction. We propose a compression method that takes advantage of the correlation of the Born-normalized signal among sources in spatially formed clusters to reduce the size of system model. The proposed method has been validated using an ex vivo study and an in vivo study of a nude mouse with a subcutaneous 4T1 tumor, with and without inclusion of a priori anatomical information. Compression rates of up to two orders of magnitude with minimum distortion of reconstruction have been demonstrated, resulting in large reduction in weight matrix size and reconstruction time.

  5. Computed tomography in the evaluation of the lower leg oedema treated by intermittent pneumatic compression

    Airaksinen, O.; Partanen, K.; Kolari, P.J.; Soimakallio, S.

    1990-01-01

    Intermittent pneumatic compression (IPC) therapy has been used in the post-traumatic rehabilitation of crural fractures, where it has reduced oedema as measured immediately after treatment. The purpose of the present study was to assess the amount of oedema and its distribution with CT in lower-leg fracture patients before and after IPC treatment (author). 6 refs. 2 tabs

  6. Compressed Data Structures for Range Searching

    Bille, Philip; Gørtz, Inge Li; Vind, Søren Juhl

    2015-01-01

    matrices and web graphs. Our contribution is twofold. First, we show how to compress geometric repetitions that may appear in standard range searching data structures (such as K-D trees, Quad trees, Range trees, R-trees, Priority R-trees, and K-D-B trees), and how to implement subsequent range queries on the compressed representation with only a constant factor overhead. Secondly, we present a compression scheme that efficiently identifies geometric repetitions in point sets, and produces a hierarchical clustering of the point sets, which combined with the first result leads to a compressed representation...
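
    To make the structures concrete, here is a minimal uncompressed 2-D K-D tree with axis-aligned range reporting; the paper's actual contribution, compressing the geometric repetitions inside such trees, is not reproduced here.

```python
from collections import namedtuple

Node = namedtuple("Node", "point left right axis")

def build_kdtree(points, depth=0):
    """Standard 2-D K-D tree: split on alternating axes at the median."""
    if not points:
        return None
    axis = depth % 2
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return Node(points[mid],
                build_kdtree(points[:mid], depth + 1),
                build_kdtree(points[mid + 1:], depth + 1),
                axis)

def range_query(node, lo, hi, out=None):
    """Report all points inside the axis-aligned box [lo, hi],
    pruning subtrees that cannot intersect the box."""
    if out is None:
        out = []
    if node is None:
        return out
    x = node.point
    if all(lo[d] <= x[d] <= hi[d] for d in range(2)):
        out.append(x)
    if lo[node.axis] <= x[node.axis]:   # box may reach the left half-plane
        range_query(node.left, lo, hi, out)
    if hi[node.axis] >= x[node.axis]:   # box may reach the right half-plane
        range_query(node.right, lo, hi, out)
    return out
```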

  7. Cationic agent contrast-enhanced computed tomography imaging of cartilage correlates with the compressive modulus and coefficient of friction.

    Lakin, B A; Grasso, D J; Shah, S S; Stewart, R C; Bansal, P N; Freedman, J D; Grinstaff, M W; Snyder, B D

    2013-01-01

    The aim of this study is to evaluate whether contrast-enhanced computed tomography (CECT) attenuation, using a cationic contrast agent (CA4+), correlates with the equilibrium compressive modulus (E) and coefficient of friction (μ) of ex vivo bovine articular cartilage. Correlations between CECT attenuation and E (Group 1, n = 12) and μ (Group 2, n = 10) were determined using 7 mm diameter bovine osteochondral plugs from the stifle joints of six freshly slaughtered, skeletally mature cows. The equilibrium compressive modulus was measured using a four-step, unconfined, compressive stress-relaxation test, and the coefficients of friction were determined from a torsional friction test. Following mechanical testing, samples were immersed in CA4+, imaged using μCT, rinsed, and analyzed for glycosaminoglycan (GAG) content using the 1,9-dimethylmethylene blue (DMMB) assay. The CECT attenuation was positively correlated with the GAG content of bovine cartilage (R² = 0.87, P coefficients of friction: CECT vs μ(static) (R² = 0.71, P = 0.002), CECT vs μ(static_equilibrium) (R² = 0.79, P coefficient of friction. Copyright © 2012 Osteoarthritis Research Society International. Published by Elsevier Ltd. All rights reserved.

  8. Multi-slice computed tomography assessment of bronchial compression with absent pulmonary valve

    Zhong, Yu-Min; Sun, Ai-Min; Wang, Qian; Zhu, Ming; Qiu, Hai-Sheng [Shanghai Children's Medical Center and Shanghai Jiao Tong University Medical School, Department of Radiology, Shanghai (China); Jaffe, Richard B. [Primary Children's Medical Center, Department of Medical Imaging, Salt Lake City, UT (United States); Liu, Jin-Fen [Shanghai Children's Medical Center, Department of Cardiothoracic Surgery, Shanghai (China); Gao, Wei [Shanghai Children's Medical Center and Shanghai Jiao Tong University Medical School, Department of Cardiology, Shanghai (China); Berdon, Walter E. [Children's Hospital of New York, Department of Radiology, New York, NY (United States)

    2014-07-15

    Absent pulmonary valve is a rare cardiovascular anomaly that can result in profound tracheobronchial compression. We aimed to demonstrate the advantage of multi-slice CT in diagnosing tracheobronchial compression, its severity as related to the adjacent dilated pulmonary arteries, and associated lung and cardiac lesions. We included children with absent pulmonary valve who were examined by multi-slice CT during a 17-year period. The number and locations of stenoses and lung lesions were noted, and the severity of stenosis was categorized. The diameter of the pulmonary artery was measured and associated cardiac defects were demonstrated. Thirty-one children (14 girls and 17 boys) were included. Of these, 29 had a ventricular septal defect and 2 had an intact ventricular septum. Twenty-nine children (94%) had tracheobronchial compression, judged to be mild in nine children (31%), moderate in 10 (34%) and severe in 10 (34%). Stenoses were observed at various locations (carina, main bronchi, lobar and segmental bronchi), and the number and location of lung lesions showed that the right middle lobe and the left upper and lower lobes were most often affected. The diameter of the pulmonary artery in these children was well above normal published values, and Spearman rank correlation analysis showed a correlation between the size of the pulmonary artery and the severity of the tracheobronchial stenosis. Nineteen children (61%) underwent surgery and 4 of these children had a multi-slice CT post-operative follow-up study. Absent pulmonary valve can cause significant morbidity and mortality in children. Multi-slice CT can accurately depict areas of tracheobronchial compression, associated lung lesions and cardiac defects, helping to direct the surgeon. (orig.)

  9. Mesoscopic Numerical Computation of Compressive Strength and Damage Mechanism of Rubber Concrete

    Z. H. Xie

    2015-01-01

    Evaluations of both macroscopic and mesoscopic strengths of materials have been the topic of a great deal of recent research. This paper presents the results of a study, based on the Walraven equation, of the production of a mesoscopic random aggregate structure containing various rubber contents and aggregate sizes. On a mesoscopic scale, the damage mechanism in rubber concrete and the effects of the rubber content and the aggregate-mortar interface on the compressive resistance of rubber concrete were studied. The results indicate that the random aggregate structural model very closely approximates the experimental results in terms of the fracture distribution and damage characteristics under uniaxial compression. The aggregate-mortar interface mechanical properties have a substantial impact on the test sample's strength and fracture distribution. As the rubber content increases, the compressive strength and elastic modulus of the test sample decrease proportionally. This paper presents graphics of the entire process from fracture propagation to structural failure of the test piece by means of the mesoscopic finite-element method, which provides a theoretical reference for studying the damage mechanism in rubber concrete and performing parametric calculations.

  10. Arterial anomalies of the celiac trunk and median arcuate ligament compression in dogs and cats assessed by computed tomography angiography.

    Le Pommellet, Hélène M; Scansen, Brian A; Mathys, Dimitria A; Mollenkopf, Dixie F; Reeves, Lauren; Skinas, Melissa L; Patel, Mira

    2018-02-01

    To identify abnormalities of the celiac artery (CA) and major branches in dogs and cats by computed tomography angiography (CTA). Multi-institutional retrospective case series. Two hundred fifty-four dogs and 13 cats. Abdominal CTA images from 2009 to 2017 were reviewed. Logistic regression models were used to evaluate the relationship between CA abnormalities and sex, age, size of dog, concurrent venous anomaly, or presence of gastrointestinal signs. Abnormalities in the CA were observed in 32 animals (11.9%) including 9 with abnormal branching (3.4%) and 23 with CA compression (8.6%). A celiacomesenteric trunk was observed in 8 (2.9%; 6 dogs, 2 cats). The splenic artery originated from the cranial mesenteric artery in 1 dog; the hepatic arterial branches originated from the left gastric artery in another. Four out of 32 animals (12.5%) with an arterial anomaly had another vascular abnormality. Large breed dogs were more likely to have an arterial anomaly (OR 4.3, 95% CI: 1.18-15.5, P = .02) and 12 times more likely to have CA compression (OR 12.0, 95% CI: 1.4-97.7, P = .02) compared to small breed dogs. Dogs with CA compression were more likely to present for gastrointestinal signs (OR 3.6, 95% CI: 1.2-10.3, P = .01). Anomalies of the celiac trunk are apparent on CTA and may impact surgical or image-guided intervention. Compression at the origin of the CA was apparent on imaging, similar to the median arcuate ligament syndrome in people, although the significance of this finding in dogs is unknown. © 2017 The American College of Veterinary Surgeons.

  11. A Compressive Superresolution Display

    Heide, Felix; Gregson, James; Wetzstein, Gordon; Raskar, Ramesh; Heidrich, Wolfgang

    2014-01-01

    In this paper, we introduce a new compressive display architecture for superresolution image presentation that exploits co-design of the optical device configuration and compressive computation. Our display allows for superresolution, HDR, or glasses-free 3D presentation.

  13. The entire dural sinus tree is compressed in patients with idiopathic intracranial hypertension: a longitudinal, volumetric magnetic resonance imaging study

    Rohr, Axel; Bindeballe, Jan; Riedel, Christian; Jansen, Olav [University Clinic of Schleswig-Holstein Campus Kiel, Department of Neuroradiology, Kiel (Germany); Baalen, Andreas van [University Clinic of Schleswig-Holstein Campus Kiel, Department of Neuropediatrics, Kiel (Germany); Bartsch, Thorsten [University Clinic of Schleswig-Holstein Campus Kiel, Department of Neurology, Kiel (Germany); Doerner, Lutz [University Clinic of Schleswig-Holstein Campus Kiel, Department of Neurosurgery, Kiel (Germany)

    2012-01-15

    The objective of this study was to explore the volumetric alterations of dural sinuses in patients with idiopathic intracranial hypertension (IIH). Standardized cranial magnetic resonance imaging (MRI) was used in 17 patients prior to and following treatment of IIH and in seven controls. Magnetic resonance venographies (MRV) were employed for (a) judgement of circumscript dural sinus stenoses and (b) computation of sinus volumes. Cross-sectional areas (CSA) of the superior sagittal sinuses (SSS) were measured on T2-weighted images. Results of the initial MRIs were compared to those on follow-up MRIs and to results of controls. Stenoses of the transverse sinuses (TS) resulting in cranial venous outflow obstruction (CVOO) were present in 15/17 (88%) patients, normalizing in 7/15 cases (47%) after treatment of IIH. CVOO was not detected in the control group. Segmentation of MRV revealed decreased dural sinus volumes in patients with IIH as compared to controls (P = 0.018). Sinus volumes increased significantly with normalization of intracranial pressure independent from disappearing of TS stenoses (P = 0.007). The CSA of the SSS were normal on the initial MRIs of patients with IIH and increased on follow-up after treatment (P < 0.001). However, volumetries displayed overlap in patients and controls. Patients with IIH not only exhibit bilateral stenoses of the TS as has been reported, but volume changes of their entire dural sinus system also occur. The potential etiopathological and diagnostic roles of these changes are discussed. (orig.)

  14. Soft computing methods for estimating the uniaxial compressive strength of intact rock from index tests

    Mishra, A. Deepak; Srigyan, M.; Basu, A.; Rokade, P. J.

    2015-01-01

    Vol. 80, December 2015 (2015), pp. 418-424. ISSN 1365-1609. Institutional support: RVO:68145535. Keywords: uniaxial compressive strength * rock indices * fuzzy inference system * artificial neural network * adaptive neuro-fuzzy inference system. Subject RIV: DH - Mining, incl. Coal Mining. Impact factor: 2.010, year: 2015. http://ac.els-cdn.com/S1365160915300708/1-s2.0-S1365160915300708-main.pdf?_tid=318a7cec-8929-11e5-a3b8-00000aacb35f&acdnat=1447324752_2a9d947b573773f88da353a16f850eac

  15. Modeling of the Tension and Compression Behavior of Sintered 316L Using Micro Computed Tomography

    Doroszko Michał

    2015-06-01

    This paper describes a method for the numerical modeling of the tension and compression behavior of sintered 316L. In order to account for the shape of the material mesostructure in the numerical modeling, X-ray microtomography was used. Based on the micro-CT images, three-dimensional geometrical models mapping the shapes of the porosity were generated. The finite element method was used for the numerical calculations. Based on the computed stress and strain fields, the mechanism of deformation of the materials until fracture was described. The influence of material discontinuities at the mesoscopic scale on the macromechanical properties of the porous materials was investigated.

  16. Improving reliability of state estimation programming and computing suite based on analyzing a fault tree

    Kolosok Irina

    2017-01-01

    Reliable information on current state parameters, obtained by processing measurements from the SCADA and WAMS data acquisition systems with state estimation (SE) methods, is a precondition for successfully managing an electric power system (EPS). SCADA and WAMS systems, like any technical systems, are subject to failures and faults that lead to distortion and loss of information. The SE procedure can detect erroneous measurements and therefore acts as a barrier preventing distorted information from penetrating into control problems. At the same time, the programming and computing suite (PCS) implementing the SE functions may itself produce a wrong decision due to imperfect software algorithms and errors. In this study, we propose using a fault tree to analyze the consequences of failures and faults in SCADA, WAMS, and the SE procedure itself. Based on analysis of the obtained measurement information and the SE results, we determine the fault tolerance level of the state estimation PCS, which characterizes its reliability.

  17. Computed tomography during cardiopulmonary resuscitation using automated chest compression devices - an initial study

    Wirth, Stefan; Koerner, Markus; Treitl, Marcus; Linsenmaier, Ulrich; Reiser, Maximilian F.; Leidel, Bernd A.; Jaschkowitz, Thomas; Kanz, Karl G.

    2009-01-01

    The purpose of the study was to evaluate both CT image quality in a phantom study and feasibility in an initial case series using automated chest compression (A-CC) devices for cardiopulmonary resuscitation (CPR). Multidetector CT (MDCT) of a chest/heart phantom (Thorax-CCI, QRM, Germany) was performed with identical protocols of the phantom alone (S), the phantom together with two different A-CC devices (A: AutoPulse, Zoll, Germany; L: LUCAS, Jolife, Sweden), and the phantom with a LUCAS baseplate, but without the compression unit (L-bp). Nine radiologists evaluated image noise quantitatively (n=244 regions, Student's t-test) and also rated image quality subjectively (1-excellent to 6-inadequate, Mann-Whitney U-test). Additionally, three patients during prolonged CPR underwent CT with A-CC devices. Mean image noise of S was increased by 1.21 using L-bp, by 3.62 using A, and by 5.94 using L (p<0.01 each). Image quality was identical using S and L-bp (1.64 each), slightly worse with A (1.83), and significantly worse with L (2.97, p<0.001). In all patient cases the main lesions were identified, which led to clinical key decisions. Image quality was excellent with L-bp and good with A. Under CPR conditions initial cases indicate that MDCT diagnostics supports either focused treatment or the decision to terminate efforts. (orig.)

  18. HAlign-II: efficient ultra-large multiple sequence alignment and phylogenetic tree reconstruction with distributed and parallel computing.

    Wan, Shixiang; Zou, Quan

    2017-01-01

    Multiple sequence alignment (MSA) plays a key role in biological sequence analyses, especially in phylogenetic tree construction. The extreme growth of next-generation sequencing output has created a shortage of efficient ultra-large biological sequence alignment approaches that can cope with different sequence types. Distributed and parallel computing is a crucial technique for accelerating ultra-large (e.g. files of more than 1 GB) sequence analyses. Based on HAlign and the Spark distributed computing system, we implement a highly cost-efficient and time-efficient tool, HAlign-II, to address ultra-large multiple biological sequence alignment and phylogenetic tree construction. Experiments on DNA and protein large-scale data sets of more than 1 GB showed that HAlign-II saves both time and space and outperforms current software tools. HAlign-II can efficiently carry out MSA and construct phylogenetic trees with ultra-large numbers of biological sequences; it shows extremely high memory efficiency and scales well with increases in computing resources. HAlign-II provides a user-friendly web server based on our distributed computing infrastructure. HAlign-II with open-source code and datasets is available at http://lab.malab.cn/soft/halign.

  19. Non-invasive imaging of myocardial bridge by coronary computed tomography angiography: the value of transluminal attenuation gradient to predict significant dynamic compression

    Li, Yuehua; Yu, Mengmeng; Zhang, Jiayin; Li, Minghua [Shanghai Jiao Tong University Affiliated Sixth People's Hospital, Institute of Diagnostic and Interventional Radiology, Shanghai (China); Lu, Zhigang; Wei, Meng [Shanghai Jiao Tong University Affiliated Sixth People's Hospital, Department of Cardiology, Shanghai (China)

    2017-05-15

    To study the diagnostic value of the transluminal attenuation gradient (TAG) measured by coronary computed tomography angiography (CCTA) for identifying relevant dynamic compression of a myocardial bridge (MB). Patients with confirmed MB who underwent both CCTA and invasive coronary angiography (ICA) within one month were retrospectively included. TAG was defined as the linear regression coefficient between luminal attenuation and distance. The TAG of the MB vessel and the length and depth of the MB were measured and correlated with the presence and degree of dynamic compression observed at ICA. Systolic compression ≥50 % was considered significant. 302 patients with confirmed MB lesions were included. TAG was lowest (-17.4 ± 6.7 HU/10 mm) in patients with significant dynamic compression and highest in patients without MB compression (-9.5 ± 4.3 HU/10 mm, p < 0.001). Linear correlation analysis revealed a relation between the percentage of systolic compression and TAG (Pearson correlation, r = -0.52, p < 0.001) but no significant relation between the percentage of systolic compression and MB depth or length. ROC curve analysis determined the best cut-off value of TAG as -14.8 HU/10 mm (area under curve = 0.813, 95 % confidence interval = 0.764-0.855, p < 0.001), which yielded high diagnostic accuracy (82.1 %, 248/302). The degree of ICA-assessed systolic compression of MB significantly correlates with TAG but not with MB depth or length. (orig.)
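
As a minimal illustration of the TAG definition above (the linear-regression slope of luminal attenuation against distance, reported per 10 mm), the following Python sketch fits that slope to hypothetical sampled values. The -14.8 HU/10 mm threshold is the cut-off reported in the abstract; all sampled numbers are invented for illustration.

```python
import numpy as np

def transluminal_attenuation_gradient(distance_mm, attenuation_hu):
    """Estimate TAG as the linear-regression slope of luminal attenuation (HU)
    against distance along the vessel (mm), expressed in HU per 10 mm."""
    slope_per_mm = np.polyfit(distance_mm, attenuation_hu, 1)[0]
    return slope_per_mm * 10.0  # HU / 10 mm

# Hypothetical attenuation samples along a bridged segment.
d = np.array([0.0, 5.0, 10.0, 15.0, 20.0, 25.0])       # mm
a = np.array([450.0, 441.0, 432.0, 424.0, 415.0, 406.0])  # HU
tag = transluminal_attenuation_gradient(d, a)
significant = tag < -14.8   # cut-off from the ROC analysis above
```

With these invented samples the attenuation falls roughly 1.75 HU/mm, so the TAG lands near the mean reported for significant compression.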

  20. CPU/GPU Computing for an Implicit Multi-Block Compressible Navier-Stokes Solver on a Heterogeneous Platform

    Deng, Liang; Bai, Hanli; Wang, Fang; Xu, Qingxin

    2016-06-01

    CPU/GPU computing allows scientists to tremendously accelerate their numerical codes. In this paper, we port and optimize a double-precision alternating direction implicit (ADI) solver for the three-dimensional compressible Navier-Stokes equations from our in-house Computational Fluid Dynamics (CFD) software onto a heterogeneous platform. First, we implement a full GPU version of the ADI solver to eliminate redundant data transfers between CPU and GPU, and then design two fine-grain schemes, namely “one-thread-one-point” and “one-thread-one-line”, to maximize the performance. Second, we present a dual-level parallelization scheme using the CPU/GPU collaborative model to exploit the computational resources of both multi-core CPUs and many-core GPUs within the heterogeneous platform. Finally, considering the fact that memory on a single node becomes inadequate when the simulation size grows, we present a tri-level hybrid programming pattern, MPI-OpenMP-CUDA, that merges fine-grain parallelism using OpenMP and CUDA threads with coarse-grain parallelism using MPI for inter-node communication. We also propose a strategy to overlap computation with communication using the advanced features of CUDA and MPI programming. We obtain a speedup of 6.0 for the ADI solver on one Tesla M2050 GPU compared with two Xeon X5670 CPUs. Scalability tests show that our implementation can offer significant performance improvement on heterogeneous platforms.

  1. An efficient finite differences method for the computation of compressible, subsonic, unsteady flows past airfoils and panels

    Colera, Manuel; Pérez-Saborid, Miguel

    2017-09-01

    A finite differences scheme is proposed in this work to compute in the time domain the compressible, subsonic, unsteady flow past an aerodynamic airfoil using the linearized potential theory. It improves and extends the original method proposed in this journal by Hariharan, Ping and Scott [1] by considering: (i) a non-uniform mesh, (ii) an implicit time integration algorithm, (iii) a vectorized implementation and (iv) the coupled airfoil dynamics and fluid dynamic loads. First, we have formulated the method for cases in which the airfoil motion is given. The scheme has been tested on well-known problems in unsteady aerodynamics, such as the response to a sudden change of the angle of attack and to a harmonic motion of the airfoil, and has proved to be more accurate and efficient than other finite differences and vortex-lattice methods found in the literature. Second, we have coupled our method to the equations governing the airfoil dynamics in order to numerically solve problems where the airfoil motion is unknown a priori as happens, for example, in the cases of the flutter and the divergence of a typical section of a wing or of a flexible panel. To our knowledge, this is the first self-consistent and easy-to-implement numerical analysis in the time domain of the compressible, linearized coupled dynamics of the (generally flexible) airfoil-fluid system carried out in the literature. The results for the particular case of a rigid airfoil show excellent agreement with those reported by other authors, whereas those obtained for the case of a cantilevered flexible airfoil in compressible flow seem to be original or, at least, not well-known.

  2. Computing and visualizing time-varying merge trees for high-dimensional data

    Oesterling, Patrick [Univ. of Leipzig (Germany); Heine, Christian [Univ. of Kaiserslautern (Germany); Weber, Gunther H. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Morozov, Dmitry [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Scheuermann, Gerik [Univ. of Leipzig (Germany)

    2017-06-03

    We introduce a new method that identifies and tracks features in arbitrary dimensions using the merge tree -- a structure for identifying topological features based on thresholding in scalar fields. This method analyzes the evolution of features of the function by tracking changes in the merge tree and relates features by matching subtrees between consecutive time steps. Using the time-varying merge tree, we present a structural visualization of the changing function that illustrates both features and their temporal evolution. We demonstrate the utility of our approach by applying it to temporal cluster analysis of high-dimensional point clouds.
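
The merge tree the abstract relies on can be sketched with a threshold sweep plus union-find: sweep vertices from high to low function value, let components be born at maxima, and record where they merge. This is a minimal illustration of the underlying structure, not the authors' implementation; the vertex values, graph, and function name are invented.

```python
def merge_tree(values, edges):
    """Sketch of a superlevel-set merge tree on a graph.
    values: scalar value per vertex; edges: (u, v) vertex pairs.
    Returns (maxima, merges), merges being (saddle_vertex, comp_a, comp_b)."""
    n = len(values)
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    parent = list(range(n))
    def find(x):                      # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    order = sorted(range(n), key=lambda i: -values[i])
    active, maxima, merges = set(), [], []
    for v in order:                   # sweep threshold downward
        roots = {find(u) for u in adj[v] if u in active}
        if not roots:
            maxima.append(v)          # new component born at a local maximum
        elif len(roots) > 1:
            merges.append((v, *sorted(roots)))  # components merge at a saddle
        for r in roots:
            parent[r] = v
        active.add(v)
    return maxima, merges

# Toy 1-D scalar field on a path graph: peaks at vertices 0, 2 and 4.
maxima, merges = merge_tree([3.0, 1.0, 4.0, 0.0, 5.0],
                            [(0, 1), (1, 2), (2, 3), (3, 4)])
```

Tracking features over time, as in the paper, would then match subtrees of consecutive such trees.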

  3. A compressive sensing-based computational method for the inversion of wide-band ground penetrating radar data

    Gelmini, A.; Gottardi, G.; Moriyama, T.

    2017-10-01

    This work presents an innovative computational approach for the inversion of wideband ground penetrating radar (GPR) data. The retrieval of the dielectric characteristics of sparse scatterers buried in a lossy soil is performed by combining a multi-task Bayesian compressive sensing (MT-BCS) solver and a frequency hopping (FH) strategy. The developed methodology is able to benefit from the regularization capabilities of the MT-BCS as well as to exploit the multi-chromatic informative content of GPR measurements. A set of numerical results is reported in order to assess the effectiveness of the proposed GPR inverse scattering technique, as well as to compare it to a simpler single-task implementation.

  4. Entropy Stable Summation-by-Parts Formulations for Compressible Computational Fluid Dynamics

    Carpenter, M.H.

    2016-11-09

    A systematic approach based on a diagonal-norm summation-by-parts (SBP) framework is presented for implementing entropy stable (SS) formulations of any order for the compressible Navier–Stokes equations (NSE). These SS formulations discretely conserve mass, momentum, and energy and satisfy a mathematical entropy equality for smooth problems. They are also valid for discontinuous flows provided sufficient dissipation is added at shocks and discontinuities to satisfy an entropy inequality. Admissible SBP operators include all centred diagonal-norm finite-difference (FD) operators and Legendre spectral collocation-finite element methods (LSC-FEM). Entropy stable multiblock FD and FEM operators follow immediately via nonlinear coupling operators that ensure conservation, accuracy and preserve the interior entropy estimates. Nonlinearly stable solid wall boundary conditions are also available. Existing SBP operators that lack a stability proof (e.g. weighted essentially nonoscillatory) may be combined with an entropy stable operator using a comparison technique to guarantee nonlinear stability of the pair. All capabilities extend naturally to a curvilinear form of the NSE provided that the coordinate mappings satisfy a geometric conservation law constraint. Examples are presented that demonstrate the robustness of current state-of-the-art entropy stable SBP formulations.
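
The SBP property underlying these entropy estimates can be checked concretely: for a diagonal norm H and difference operator D, the matrix Q = HD satisfies Q + Q^T = B, the boundary matrix diag(-1, 0, ..., 0, 1), which is the discrete analogue of integration by parts. A small Python sketch with the classical second-order centred operator (illustrative only, not the high-order operators of the paper):

```python
import numpy as np

n, h = 8, 1.0
# Diagonal norm (trapezoidal quadrature weights).
H = h * np.diag([0.5] + [1.0] * (n - 2) + [0.5])

# Second-order centred first-derivative operator with one-sided boundary rows.
D = np.zeros((n, n))
D[0, :2] = [-1.0, 1.0]
D[-1, -2:] = [-1.0, 1.0]
for i in range(1, n - 1):
    D[i, i - 1], D[i, i + 1] = -0.5, 0.5
D /= h

B = np.zeros((n, n))
B[0, 0], B[-1, -1] = -1.0, 1.0

Q = H @ D
assert np.allclose(Q + Q.T, B)   # SBP: D mimics integration by parts
```

The same identity is what lets the entropy flux telescope to the boundaries in the continuous-energy argument.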

  5. Film Cooling Optimization Using Numerical Computation of the Compressible Viscous Flow Equations and Simplex Algorithm

    Ahmed M. Elsayed

    2013-01-01

    Film cooling is vital to gas turbine blades to protect them from high temperatures and hence high thermal stresses. In the current work, optimization of film cooling parameters on a flat plate is investigated numerically. The effect of film cooling parameters such as inlet velocity direction, lateral and forward diffusion angles, blowing ratio, and streamwise angle on the cooling effectiveness is studied, and optimum cooling parameters are selected. The numerical simulation of the coolant flow through the flat plate hole system is carried out using the “CFDRC package” coupled with the “simplex” optimization algorithm to maximize overall film cooling effectiveness. An unstructured finite-volume technique is used to solve the steady, three-dimensional, compressible Navier-Stokes equations. The results are compared with published numerical and experimental data for a cylindrical round simple hole and show good agreement. In addition, the results indicate that the average overall film cooling effectiveness is enhanced by decreasing the streamwise angle for high blowing ratio and by increasing the lateral and forward diffusion angles. Optimum geometry of the cooling hole on a flat plate is determined. In addition, numerical simulations of film cooling on an actual turbine blade are performed using the flat plate optimal hole geometry.

  6. ESPRIT-Tree: hierarchical clustering analysis of millions of 16S rRNA pyrosequences in quasilinear computational time.

    Cai, Yunpeng; Sun, Yijun

    2011-08-01

    Taxonomy-independent analysis plays an essential role in microbial community analysis. Hierarchical clustering is one of the most widely employed approaches to finding operational taxonomic units, the basis for many downstream analyses. Most existing algorithms have quadratic space and computational complexities, and thus can be used only for small or medium-scale problems. We propose a new online learning-based algorithm that simultaneously addresses the space and computational issues of prior work. The basic idea is to partition a sequence space into a set of subspaces using a partition tree constructed using a pseudometric, then recursively refine a clustering structure in these subspaces. The technique relies on new methods for fast closest-pair searching and efficient dynamic insertion and deletion of tree nodes. To avoid exhaustive computation of pairwise distances between clusters, we represent each cluster of sequences as a probabilistic sequence, and define a set of operations to align these probabilistic sequences and compute genetic distances between them. We present analyses of space and computational complexity, and demonstrate the effectiveness of our new algorithm using a human gut microbiota data set with over one million sequences. The new algorithm exhibits a quasilinear time and space complexity comparable to greedy heuristic clustering algorithms, while achieving a similar accuracy to the standard hierarchical clustering algorithm.

  7. Reducing the Computational Complexity of Reconstruction in Compressed Sensing Nonuniform Sampling

    Grigoryan, Ruben; Jensen, Tobias Lindstrøm; Arildsen, Thomas

    2013-01-01

    Compressed sensing enables sub-Nyquist acquisition of sparse signals, but requires computationally expensive reconstruction algorithms. This can be an obstacle for real-time applications. The reduction of complexity is achieved by applying a multi-coset sampling procedure. This proposed method reduces the size of the dictionary matrix, the size...

  8. Computer-aided event tree analysis by the impact vector method

    Lima, J.E.P.

    1984-01-01

    In the development of the Probabilistic Risk Analysis of Angra 1, the 'large event tree/small fault tree' approach was adopted for the analysis of the plant behavior in an emergency situation. In this work, the event tree methodology is presented along with the adaptations which had to be made in order to attain a correct description of the safety system performances according to the selected analysis method. The problems appearing in the application of the methodology and their respective solutions are presented and discussed, with special emphasis on the impact vector technique. A description of the ETAP code ('Event Tree Analysis Program') developed for constructing and quantifying event trees is also given in this work. A preliminary version of the small-break LOCA analysis for Angra 1 is presented as an example of application of the methodology and of the code. It is shown that the use of the ETAP code significantly contributes to decreasing the time spent in event tree analyses, making practical application of the analysis approach referred to above viable. (author) [pt
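
Event-tree quantification of the kind ETAP automates can be sketched in a few lines: each accident sequence's frequency is the initiating-event frequency multiplied by the success or failure probabilities of the branches along its path, so the sequence frequencies sum back to the initiating-event frequency. The system names and numbers below are invented for illustration.

```python
from itertools import product

init_freq = 1e-3                        # initiating event, per reactor-year (invented)
systems = {"ECCS": 1e-2, "AFW": 5e-3}   # hypothetical failure probabilities

sequences = {}
for outcome in product([False, True], repeat=len(systems)):  # False = success
    p = init_freq
    label = []
    for (name, pf), failed in zip(systems.items(), outcome):
        p *= pf if failed else (1.0 - pf)   # branch probability along the path
        label.append(name + ("-fail" if failed else "-ok"))
    sequences["/".join(label)] = p

total = sum(sequences.values())         # should equal init_freq
```

The impact-vector technique described above then tells which branch probabilities a given support-system failure actually affects.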

  9. Ptosis as partial oculomotor nerve palsy due to compression by infundibular dilatation of posterior communicating artery, visualized with three-dimensional computer graphics: case report.

    Fukushima, Yuta; Imai, Hideaki; Yoshino, Masanori; Kin, Taichi; Takasago, Megumi; Saito, Kuniaki; Nakatomi, Hirofumi; Saito, Nobuhito

    2014-01-01

    Oculomotor nerve palsy (ONP) due to internal carotid-posterior communicating artery (PcomA) aneurysm generally manifests as partial nerve palsy including pupillary dysfunction. In contrast, infundibular dilatation (ID) of the PcomA has no pathogenic significance, and mechanical compression of the cranial nerve is extremely rare. We describe a 60-year-old woman who presented with progressive ptosis due to mechanical compression of the oculomotor nerve by an ID of the PcomA. Three-dimensional computer graphics (3DCG) accurately visualized the mechanical compression by the ID, and her ptosis was improved after clipping of the ID. ID of the PcomA may cause ONP by mechanical compression and is treatable surgically. 3DCG are effective for the diagnosis and preoperative simulation.

  10. Phylogenetic trees

    Baños, Hector; Bushek, Nathaniel; Davidson, Ruth; Gross, Elizabeth; Harris, Pamela E.; Krone, Robert; Long, Colby; Stewart, Allen; Walker, Robert

    2016-01-01

    We introduce the package PhylogeneticTrees for Macaulay2 which allows users to compute phylogenetic invariants for group-based tree models. We provide some background information on phylogenetic algebraic geometry and show how the package PhylogeneticTrees can be used to calculate a generating set for a phylogenetic ideal as well as a lower bound for its dimension. Finally, we show how methods within the package can be used to compute a generating set for the join of any two ideals.

  11. A computational investigation of diesel and biodiesel combustion and NOx formation in a light-duty compression ignition engine

    Wang, Zihan [Mississippi State Univ., Mississippi State, MS (United States). Dept. of Mechanical Engineering; Srinivasan, Kalyan K. [Mississippi State Univ., Mississippi State, MS (United States). Dept. of Mechanical Engineering; Krishnan, Sundar R. [Mississippi State Univ., Mississippi State, MS (United States). Dept. of Mechanical Engineering; Som, Sibendu [Argonne National Lab. (ANL), Argonne, IL (United States). Center for Transportation Research

    2012-04-24

    Diesel and biodiesel combustion in a multi-cylinder light duty diesel engine were simulated during a closed cycle (from IVC to EVO), using a commercial computational fluid dynamics (CFD) code, CONVERGE, coupled with detailed chemical kinetics. The computational domain was constructed based on engine geometry and compression ratio measurements. A skeletal n-heptane-based diesel mechanism developed by researchers at Chalmers University of Technology and a reduced biodiesel mechanism derived and validated by Luo and co-workers were applied to model the combustion chemistry. The biodiesel mechanism contains 89 species and 364 reactions and uses methyl decanoate, methyl-9-decenoate, and n-heptane as the surrogate fuel mixture. The Kelvin-Helmholtz and Rayleigh-Taylor (KH-RT) spray breakup model for diesel and biodiesel was calibrated to account for the differences in physical properties of the fuels which result in variations in atomization and spray development characteristics. The simulations were able to capture the experimentally observed pressure and apparent heat release rate trends for both the fuels over a range of engine loads (BMEPs from 2.5 to 10 bar) and fuel injection timings (from 0° BTDC to 10° BTDC), thus validating the overall modeling approach as well as the chemical kinetic models of diesel and biodiesel surrogates. Moreover, quantitative NOx predictions for diesel combustion and qualitative NOx predictions for biodiesel combustion were obtained with the CFD simulations and the in-cylinder temperature trends were correlated to the NOx trends.

  12. A compressed sensing based reconstruction algorithm for synchrotron source propagation-based X-ray phase contrast computed tomography

    Melli, Seyed Ali, E-mail: sem649@mail.usask.ca [Department of Electrical and Computer Engineering, University of Saskatchewan, Saskatoon, SK (Canada); Wahid, Khan A. [Department of Electrical and Computer Engineering, University of Saskatchewan, Saskatoon, SK (Canada); Babyn, Paul [Department of Medical Imaging, University of Saskatchewan, Saskatoon, SK (Canada); Montgomery, James [College of Medicine, University of Saskatchewan, Saskatoon, SK (Canada); Snead, Elisabeth [Western College of Veterinary Medicine, University of Saskatchewan, Saskatoon, SK (Canada); El-Gayed, Ali [College of Medicine, University of Saskatchewan, Saskatoon, SK (Canada); Pettitt, Murray; Wolkowski, Bailey [College of Agriculture and Bioresources, University of Saskatchewan, Saskatoon, SK (Canada); Wesolowski, Michal [Department of Medical Imaging, University of Saskatchewan, Saskatoon, SK (Canada)

    2016-01-11

    Synchrotron source propagation-based X-ray phase contrast computed tomography is increasingly used in pre-clinical imaging. However, it typically requires a large number of projections, and subsequently a large radiation dose, to produce high quality images. To improve the applicability of this imaging technique, reconstruction algorithms that can reduce the radiation dose and acquisition time without degrading image quality are needed. The proposed research focused on using a novel combination of Douglas–Rachford splitting and randomized Kaczmarz algorithms to solve large-scale total variation based optimization in a compressed sensing framework to reconstruct 2D images from a reduced number of projections. Visual assessment and quantitative performance evaluations of a synthetic abdomen phantom and a real reconstructed image of an ex-vivo slice of canine prostate tissue demonstrate that the proposed algorithm is competitive with other well-known reconstruction algorithms. An additional potential benefit of reducing the number of projections is a shorter acquisition window, leaving less time for motion artifacts to occur if the sample moves during image acquisition. Use of this reconstruction algorithm to reduce the required number of projections in synchrotron source propagation-based X-ray phase contrast computed tomography is an effective form of dose reduction that may pave the way for imaging of in-vivo samples.
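
One ingredient of the scheme, the randomized Kaczmarz iteration, is easy to sketch in isolation (here solving a consistent linear system alone, without the Douglas-Rachford/total-variation part). Row selection proportional to squared row norm follows Strohmer and Vershynin's standard analysis; this is illustrative, not the authors' code.

```python
import numpy as np

def randomized_kaczmarz(A, b, iters=2000, seed=0):
    """Solve a consistent A x = b by randomized row projections."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms = np.sum(A ** 2, axis=1)
    probs = row_norms / row_norms.sum()   # sample rows by squared norm
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.choice(m, p=probs)
        # Project the iterate onto the hyperplane A[i] @ x = b[i].
        x += (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 10))        # overdetermined, consistent system
x_true = rng.standard_normal(10)
b = A @ x_true
x_hat = randomized_kaczmarz(A, b)
```

In tomography, each "row" corresponds to one ray-sum measurement, which is what makes the method attractive for very large projection systems.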

  13. Calculation of the number of branches of multi-valued decision trees in computer aided importance rank of parameters

    Tiszbierek Agnieszka

    2017-01-01

    The digital computer programme elaborated to support the time-consuming process of selecting the importance rank of construction and operation parameters by stating optimum sets is based on the Quine-McCluskey algorithm for minimizing individual partial multi-valued logic functions. An example with real data, calculated by means of the programme, showed that among the obtained optimum sets there were some which had a different number of real branches when presented on the multi-valued logic decision tree. This motivated the idea of elaborating another functionality of the programme: a module calculating the number of branches of the real multi-valued logic decision trees that present the optimum sets chosen by the programme. This paper presents the idea and the method for developing such a module, which calculates the number of real branches for each of the optimum sets indicated by the programme, as well as the calculation process.

  14. A High Performance Computing Approach to Tree Cover Delineation in 1-m NAIP Imagery Using a Probabilistic Learning Framework

    Basu, Saikat; Ganguly, Sangram; Michaelis, Andrew; Votava, Petr; Roy, Anshuman; Mukhopadhyay, Supratik; Nemani, Ramakrishna

    2015-01-01

    Tree cover delineation is a useful instrument in deriving Above Ground Biomass (AGB) density estimates from Very High Resolution (VHR) airborne imagery data. Numerous algorithms have been designed to address this problem, but most of them do not scale to these datasets, which are of the order of terabytes. In this paper, we present a semi-automated probabilistic framework for the segmentation and classification of 1-m National Agriculture Imagery Program (NAIP) for tree-cover delineation for the whole of Continental United States, using a High Performance Computing Architecture. Classification is performed using a multi-layer Feedforward Backpropagation Neural Network and segmentation is performed using a Statistical Region Merging algorithm. The results from the classification and segmentation algorithms are then consolidated into a structured prediction framework using a discriminative undirected probabilistic graphical model based on Conditional Random Field, which helps in capturing the higher order contextual dependencies between neighboring pixels. Once the final probability maps are generated, the framework is updated and re-trained by relabeling misclassified image patches. This leads to a significant improvement in the true positive rates and reduction in false positive rates. The tree cover maps were generated for the whole state of California, spanning a total of 11,095 NAIP tiles covering a total geographical area of 163,696 sq. miles. The framework produced true positive rates of around 88% for fragmented forests and 74% for urban tree cover areas, with false positive rates lower than 2% for both landscapes. Comparative studies with the National Land Cover Data (NLCD) algorithm and the LiDAR canopy height model (CHM) showed the effectiveness of our framework for generating accurate high-resolution tree-cover maps.

  15. A High Performance Computing Approach to Tree Cover Delineation in 1-m NAIP Imagery using a Probabilistic Learning Framework

    Basu, S.; Ganguly, S.; Michaelis, A.; Votava, P.; Roy, A.; Mukhopadhyay, S.; Nemani, R. R.

    2015-12-01

    Tree cover delineation is a useful instrument in deriving Above Ground Biomass (AGB) density estimates from Very High Resolution (VHR) airborne imagery data. Numerous algorithms have been designed to address this problem, but most of them do not scale to these datasets which are of the order of terabytes. In this paper, we present a semi-automated probabilistic framework for the segmentation and classification of 1-m National Agriculture Imagery Program (NAIP) for tree-cover delineation for the whole of Continental United States, using a High Performance Computing Architecture. Classification is performed using a multi-layer Feedforward Backpropagation Neural Network and segmentation is performed using a Statistical Region Merging algorithm. The results from the classification and segmentation algorithms are then consolidated into a structured prediction framework using a discriminative undirected probabilistic graphical model based on Conditional Random Field, which helps in capturing the higher order contextual dependencies between neighboring pixels. Once the final probability maps are generated, the framework is updated and re-trained by relabeling misclassified image patches. This leads to a significant improvement in the true positive rates and reduction in false positive rates. The tree cover maps were generated for the whole state of California, spanning a total of 11,095 NAIP tiles covering a total geographical area of 163,696 sq. miles. The framework produced true positive rates of around 88% for fragmented forests and 74% for urban tree cover areas, with false positive rates lower than 2% for both landscapes. Comparative studies with the National Land Cover Data (NLCD) algorithm and the LiDAR canopy height model (CHM) showed the effectiveness of our framework for generating accurate high-resolution tree-cover maps.

  16. A novel hybrid approach with multidimensional-like effects for compressible flow computations

    Kalita, Paragmoni; Dass, Anoop K.

    2017-07-01

    A multidimensional scheme achieves good resolution of strong and weak shocks irrespective of whether the discontinuities are aligned with or inclined to the grid. However, these schemes are computationally expensive. This paper achieves similar effects by hybridizing two schemes, namely, AUSM and DRLLF, and coupling them through a novel shock switch that operates, unlike existing switches, on the gradient of the Mach number across the cell-interface. The schemes that are hybridized have contrasting properties. The AUSM scheme captures grid-aligned (and strong) shocks crisply but it is not so good for non-grid-aligned weaker shocks, whereas the DRLLF scheme achieves sharp resolution of non-grid-aligned weaker shocks, but is not as good for grid-aligned strong shocks. It is our experience that if conventional shock switches based on variables like density, pressure or Mach number are used to combine the schemes, the desired effect of crisp resolution of grid-aligned and non-grid-aligned discontinuities is not obtained. To circumvent this problem we design a shock switch based, for the first time, on the gradient of the cell-interface Mach number, with very impressive results. Thus the strategy of hybridizing two carefully selected schemes, together with the innovative design of the shock switch that couples them, affords a method that produces the effects of a multidimensional scheme at a lower computational cost. It is further seen that hybridization of the AUSM scheme with the recently developed DRLLFV scheme using the present shock switch gives another scheme that provides crisp resolution for both shocks and boundary layers. Merits of the scheme are established through a carefully selected set of numerical experiments.
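
The hybridization idea can be caricatured in a few lines: blend two interface fluxes through a switch driven by the jump in Mach number across the cell interface. The switch shape and sharpness constant below are placeholders, not the paper's actual switch, and the flux arguments stand in for the real AUSM and DRLLF flux evaluations.

```python
import numpy as np

def shock_switch(M_left, M_right, k=5.0):
    """Blend weight in [0, 1] that approaches 1 where the cell-interface
    Mach-number jump is large (strong, grid-aligned shock). The exponential
    form and sharpness k are illustrative assumptions."""
    return 1.0 - np.exp(-k * abs(M_right - M_left))

def hybrid_flux(flux_ausm, flux_drllf, M_left, M_right):
    w = shock_switch(M_left, M_right)
    # Large Mach jump -> lean on the flux that is crisp for strong shocks;
    # smooth regions -> lean on the other flux.
    return w * flux_ausm + (1.0 - w) * flux_drllf

# With no Mach jump the blend reduces entirely to the second flux.
f = hybrid_flux(np.array([1.0]), np.array([0.0]), 2.0, 2.0)
```

Because the switch sees the gradient rather than the value of the Mach number, it stays quiet in smooth supersonic regions where a value-based switch would trigger.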

  17. A review on compressed pattern matching

    Surya Prakash Mishra

    2016-09-01

    Compressed pattern matching (CPM) refers to the task of locating all occurrences of a pattern (or set of patterns) inside the body of compressed text. In this type of matching, the pattern itself may or may not be compressed. CPM is very useful in handling large volumes of data, especially over a network. It has many applications in computational biology, where it is useful in finding similar trends in DNA sequences, as well as in network intrusion detection, big data analytics, etc. Various solutions match the pattern directly over the uncompressed text; such solutions require a lot of space and consume a lot of time when handling big data. Many researchers have proposed efficient compression methods, but very few solutions exist for pattern matching over compressed text. Considering the trend of data sizes increasing exponentially day by day, CPM has become a desirable capability. This paper presents a critical review of recent techniques for compressed pattern matching. The techniques covered include word-based Huffman codes, word-based tagged codes, and wavelet-tree-based indexing. We present a comparative analysis of all the techniques mentioned above and highlight their advantages and disadvantages.
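
The word-based code idea surveyed above can be sketched minimally: assign each distinct word a byte code, encode the text and the pattern with the same codebook, and search the compressed stream directly. Real systems use variable-length (e.g. tagged Huffman) codes; the fixed two-byte codes here, with a marker bit on the first byte so code boundaries are self-synchronizing, only keep the sketch short.

```python
def build_codebook(words):
    # Fixed two-byte codes: first byte has the high bit set, second does not,
    # so a match can only begin on a code boundary (loosely mimicking tagged codes).
    return {w: bytes([0x80 | (i >> 7), i & 0x7F])
            for i, w in enumerate(sorted(set(words)))}

def encode(words, codebook):
    return b"".join(codebook[w] for w in words)

text = "the quick fox jumps over the lazy fox".split()
pattern = "the lazy fox".split()
book = build_codebook(text)
ctext, cpat = encode(text, book), encode(pattern, book)

hit = ctext.find(cpat)     # match located without decompressing the text
word_index = hit // 2      # fixed 2-byte codes -> word offset
```

The same search over the original text would scan more bytes; over a network, only the compressed stream needs to move.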

  18. A quadratic kernel for computing the hybridization number of multiple trees

    L.J.J. van Iersel (Leo); S. Linz

    2012-01-01

    It has recently been shown that the NP-hard problem of calculating the minimum number of hybridization events that is needed to explain a set of rooted binary phylogenetic trees by means of a hybridization network is fixed-parameter tractable if an instance of the problem consists of

  19. Faster exact algorithms for computing Steiner trees in higher dimensional Euclidean spaces

    Fonseca, Rasmus; Brazil, Marcus; Winter, Pawel

    The Euclidean Steiner tree problem asks for a network of minimum total length interconnecting a finite set of points in d-dimensional space. For d ≥ 3, only one practical algorithmic approach exists for this problem --- proposed by Smith in 1992. A number of refinements of Smith's algorithm have...

  20. "Tree Investigators": Supporting Families' Scientific Talk in an Arboretum with Mobile Computers

    Zimmerman, Heather Toomey; Land, Susan M.; McClain, Lucy R.; Mohney, Michael R.; Choi, Gi Woong; Salman, Fariha H.

    2015-01-01

    This research examines the "Tree Investigators" project to support science learning with mobile devices during family public programmes in an arboretum. Using a case study methodology, researchers analysed video records of 10 families (25 people) using mobile technologies with naturalists at an arboretum to understand how mobile devices…

  1. Computational simulation of coupled nonequilibrium discharge and compressible flow phenomena in a microplasma thruster

    Deconinck, Thomas; Mahadevan, Shankar; Raja, Laxminarayan L.

    2009-01-01

    The microplasma thruster (MPT) concept is a simple extension of a cold gas micronozzle propulsion device, where a direct-current microdischarge is used to preheat the gas stream to improve the specific impulse of the device. Here we study a prototypical MPT device using a detailed, self-consistently coupled plasma and flow computational model. The model describes the microdischarge power deposition, plasma dynamics, gas-phase chemical kinetics, coupling of the plasma phenomena with high-speed flow, and overall propulsion system performance. Compared to a cold gas micronozzle, a significant increase in specific impulse is obtained from the power deposition in the diverging section of the MPT nozzle. For a discharge voltage of 750 V, a power input of 650 mW, and an argon mass flow rate of 5 SCCM (SCCM denotes cubic centimeter per minute at STP), the specific impulse of the device is increased by a factor of ∼1.5 to about 74 s. The microdischarge remains mostly confined inside the micronozzle and operates in an abnormal glow discharge regime. Gas heating, primarily due to ion Joule heating, is found to have a strong influence on the overall discharge behavior. The study provides a validation of the MPT concept as a simple and effective approach to improve the performance of micronozzle cold gas propulsion devices.

  2. Computer-assisted detection of colonic polyps with CT colonography using neural networks and binary classification trees

    Jerebko, Anna K.; Summers, Ronald M.; Malley, James D.; Franaszek, Marek; Johnson, C. Daniel

    2003-01-01

Detection of colonic polyps in CT colonography is problematic due to complexities of polyp shape and the surface of the normal colon. Published results indicate the feasibility of computer-aided detection of polyps, but better classifiers are needed to improve specificity. In this paper we compare the classification results of two approaches: neural networks and recursive binary trees. As our starting point we collect surface geometry information from three-dimensional reconstruction of the colon, followed by a filter based on selected variables such as region density, Gaussian and average curvature, and sphericity. The filter returns sites that are candidate polyps, based on earlier work using detection thresholds, to which the neural nets or the binary trees are applied. A data set of 39 polyps from 3 to 25 mm in size was used in our investigation. For both the neural nets and the binary trees we use tenfold cross-validation to better estimate the true error rates. The backpropagation neural net with one hidden layer trained with the Levenberg-Marquardt algorithm achieved the best results: sensitivity 90% and specificity 95%, with 16 false positives per study.

  3. How do trees grow? Response from the graphical and quantitative analyses of computed tomography scanning data collected on stem sections.

    Dutilleul, Pierre; Han, Li Wen; Beaulieu, Jean

    2014-06-01

Tree growth, as measured via the width of annual rings, is used for environmental impact assessment and climate back-forecasting. This fascinating natural process has been studied at various scales in the stem (from cell and fiber within a growth ring, to ring and entire stem) in one, two, and three dimensions. A new approach is presented to study tree growth in 3D from stem sections, at a scale sufficiently small to allow the delineation of reliable limits for annual rings and large enough to capture directional variation in growth rates. The technology applied is computed tomography scanning, which provides, for one stem section, millions of data points (indirect measures of wood density) that can be mapped, together with a companion measure of dispersion and growth ring limits in filigree. Graphical and quantitative analyses are reported for white spruce trees with circular vs non-circular growth. Implications for dendroclimatological research are discussed. Copyright © 2014 Académie des sciences. Published by Elsevier SAS. All rights reserved.

  4. Non-binary decomposition trees - a method of reliability computation for systems with known minimal paths/cuts

    Malinowski, Jacek

    2004-05-01

    A coherent system with independent components and known minimal paths (cuts) is considered. In order to compute its reliability, a tree structure T is constructed whose nodes contain the modified minimal paths (cuts) and numerical values. The value of a non-leaf node is a function of its child nodes' values. The values of leaf nodes are calculated from a simple formula. The value of the root node is the system's failure probability (reliability). Subsequently, an algorithm computing the system's failure probability (reliability) is constructed. The algorithm scans all nodes of T using a stack structure for this purpose. The nodes of T are alternately put on and removed from the stack, their data being modified in the process. Once the algorithm has terminated, the stack contains only the final modification of the root node of T, and its value is equal to the system's failure probability (reliability)
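
    The record above evaluates system reliability with a decomposition-tree algorithm. For comparison, here is a minimal sketch of the baseline computation it improves on: inclusion-exclusion over the minimal path sets, assuming independent components. The example path sets and probabilities are hypothetical, not taken from the paper.

```python
from itertools import combinations

def system_reliability(minimal_paths, p):
    """Reliability of a coherent system from its minimal path sets via
    inclusion-exclusion (baseline method, not the paper's tree algorithm).
    minimal_paths: list of sets of component indices.
    p: dict mapping component index -> probability the component works
    (components assumed independent)."""
    n = len(minimal_paths)
    total = 0.0
    for k in range(1, n + 1):
        sign = (-1) ** (k + 1)
        for combo in combinations(minimal_paths, k):
            union = set().union(*combo)   # all components in this intersection of path events
            prob = 1.0
            for c in union:
                prob *= p[c]
            total += sign * prob
    return total

# Hypothetical series-parallel system: component 1 in series with (2 parallel 3)
paths = [{1, 2}, {1, 3}]
r = system_reliability(paths, {1: 0.9, 2: 0.8, 3: 0.8})
# r = 0.9*0.8 + 0.9*0.8 - 0.9*0.8*0.8 = 0.864
```

    Inclusion-exclusion is exponential in the number of minimal paths, which is exactly the cost the paper's stack-based tree traversal is designed to avoid.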

  5. Non-binary decomposition trees - a method of reliability computation for systems with known minimal paths/cuts

    Malinowski, Jacek

    2004-01-01

    A coherent system with independent components and known minimal paths (cuts) is considered. In order to compute its reliability, a tree structure T is constructed whose nodes contain the modified minimal paths (cuts) and numerical values. The value of a non-leaf node is a function of its child nodes' values. The values of leaf nodes are calculated from a simple formula. The value of the root node is the system's failure probability (reliability). Subsequently, an algorithm computing the system's failure probability (reliability) is constructed. The algorithm scans all nodes of T using a stack structure for this purpose. The nodes of T are alternately put on and removed from the stack, their data being modified in the process. Once the algorithm has terminated, the stack contains only the final modification of the root node of T, and its value is equal to the system's failure probability (reliability)

  6. Classification and regression trees

    Breiman, Leo; Olshen, Richard A; Stone, Charles J

    1984-01-01

    The methodology used to construct tree structured rules is the focus of this monograph. Unlike many other statistical procedures, which moved from pencil and paper to calculators, this text's use of trees was unthinkable before computers. Both the practical and theoretical sides have been developed in the authors' study of tree methods. Classification and Regression Trees reflects these two sides, covering the use of trees as a data analysis method, and in a more mathematical framework, proving some of their fundamental properties.

  7. Applications of the computer codes FLUX2D and PHI3D for the electromagnetic analysis of compressed magnetic field generators and power flow channels

    Hodgdon, M.L.; Oona, H.; Martinez, A.R.; Salon, S.; Wendling, P.; Krahenbuhl, L.; Nicolas, A.; Nicolas, L.

    1990-01-01

The authors present the results of three electromagnetic field problems for compressed magnetic field generators and their associated power flow channels. The first problem is the computation of the transient magnetic field in a two-dimensional model of a helical generator during loading. The second problem is the three-dimensional eddy current patterns in a section of an armature beneath a bifurcation point of a helical winding. The authors' third problem is the calculation of the three-dimensional electrostatic fields in a region known as the post-hole convolute, in which a rod connects the inner and outer walls of a system of three concentric cylinders through a hole in the middle cylinder. While analytic solutions exist for many electromagnetic field problems in cases of special and ideal geometries, the solution of these and similar problems for the proper analysis and design of compressed magnetic field generators and their related hardware requires computer simulations.

  8. SALP-3: A computer program for fault-tree analysis. Description and how-to-use. (Sensitivity analysis by list processing)

    Contini, S.; Astolfi, M.; Muysenberg, C.L. van den; Volta, G.

    1979-01-01

The main characteristics and usage of the computer program SALP-3 for the analysis of coherent systems are described. The program is written in PL/1 for the IBM/370-165. A syntactic analysis is performed on the input (fault tree and data), and appropriate messages are issued should an error occur. The significant minimal cut sets (MCS) are found using algorithms based on direct manipulation of the tree. The MCS, of any order, are supplied in output in order of importance with reference to a given probability threshold. The computer program SALP-3 represents only an intermediate result of a project whose objective is the implementation of a computer program for the analysis of both coherent and non-coherent structure functions and, finally, for automatic event tree analysis. The last part of the report describes the improvements in progress.

  9. Tree Based Decision Strategies and Auctions in Computational Multi-Agent Systems

    Šlapák, M.; Neruda, Roman

    2017-01-01

Roč. 38, č. 4 (2017), s. 335-342 ISSN 0257-4306 Institutional support: RVO:67985807 Keywords: auction systems * decision making * genetic programming * multi-agent system * task distribution Subject RIV: IN - Informatics, Computer Science OBOR OECD: Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8) http://rev-inv-ope.univ-paris1.fr/fileadmin/rev-inv-ope/files/38417/38417-04.pdf

  10. The Manifestation of Stopping Sets and Absorbing Sets as Deviations on the Computation Trees of LDPC Codes

    Eric Psota

    2010-01-01

The error mechanisms of iterative message-passing decoders for low-density parity-check codes are studied. A tutorial review is given of the various graphical structures, including trapping sets, stopping sets, and absorbing sets that are frequently used to characterize the errors observed in simulations of iterative decoding of low-density parity-check codes. The connections between trapping sets and deviations on computation trees are explored in depth using the notion of problematic trapping sets in order to bridge the experimental and analytic approaches to these error mechanisms. A new iterative algorithm for finding low-weight problematic trapping sets is presented and shown to be capable of identifying many trapping sets that are frequently observed during iterative decoding of low-density parity-check codes on the additive white Gaussian noise channel. Finally, a new method is given for characterizing the weight of deviations that result from problematic trapping sets.

  11. Pseudo-random Trees: Multiple Independent Sequence Generators for Parallel and Branching Computations

    Halton, John H.

    1989-09-01

    A class of families of linear congruential pseudo-random sequences is defined, for which it is possible to branch at any event without changing the sequence of random numbers used in the original random walk and for which the sequences in different branches show properties analogous to mutual statistical independence. This is a hitherto unavailable, and computationally desirable, tool.
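
    As an illustration of the branching idea, here is a toy sketch of a linear congruential generator whose streams can branch without consuming numbers from the parent stream. The multipliers, increment, and mixing constant below are hypothetical placeholders, not the constants analysed in Halton's paper.

```python
class BranchingLCG:
    """Toy branching linear congruential generator in the spirit of
    Halton's pseudo-random trees: branch() spawns a child stream from
    the current state, leaving the parent's future outputs unchanged.
    All constants here are illustrative only."""

    MULTIPLIERS = [1103515245, 69069, 1664525, 22695477]  # one per branch depth
    M = 2 ** 31

    def __init__(self, seed, depth=0):
        self.state = seed % self.M
        self.depth = depth

    def next(self):
        """Advance this stream and return a float in [0, 1)."""
        a = self.MULTIPLIERS[self.depth % len(self.MULTIPLIERS)]
        self.state = (a * self.state + 12345) % self.M
        return self.state / self.M

    def branch(self):
        """Start a child stream at the current event. Only the parent's
        state is read, never written, so the parent's sequence is the
        same whether or not a branch is taken."""
        return BranchingLCG((self.state ^ 0x5DEECE66D) % self.M, self.depth + 1)

walker = BranchingLCG(seed=42)
walker.next()                 # original random walk
child = walker.branch()       # branch at this event
child.next()                  # child stream, distinct multiplier
walker.next()                 # parent continues as if no branch occurred
```

    The key property, mirrored from the abstract, is that branching is a pure read of the parent state: a simulation can fork at any event and later replay the un-branched walk bit-for-bit.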

  12. Fully automated reconstruction of three-dimensional vascular tree structures from two orthogonal views using computational algorithms and production rules

    Liu, Iching; Sun, Ying

    1992-10-01

    A system for reconstructing 3-D vascular structure from two orthogonally projected images is presented. The formidable problem of matching segments between two views is solved using knowledge of the epipolar constraint and the similarity of segment geometry and connectivity. The knowledge is represented in a rule-based system, which also controls the operation of several computational algorithms for tracking segments in each image, representing 2-D segments with directed graphs, and reconstructing 3-D segments from matching 2-D segment pairs. Uncertain reasoning governs the interaction between segmentation and matching; it also provides a framework for resolving the matching ambiguities in an iterative way. The system was implemented in the C language and the C Language Integrated Production System (CLIPS) expert system shell. Using video images of a tree model, the standard deviation of reconstructed centerlines was estimated to be 0.8 mm (1.7 mm) when the view direction was parallel (perpendicular) to the epipolar plane. Feasibility of clinical use was shown using x-ray angiograms of a human chest phantom. The correspondence of vessel segments between two views was accurate. Computational time for the entire reconstruction process was under 30 s on a workstation. A fully automated system for two-view reconstruction that does not require the a priori knowledge of vascular anatomy is demonstrated.

  13. An efficient Adaptive Mesh Refinement (AMR) algorithm for the Discontinuous Galerkin method: Applications for the computation of compressible two-phase flows

    Papoutsakis, Andreas; Sazhin, Sergei S.; Begg, Steven; Danaila, Ionut; Luddens, Francky

    2018-06-01

We present an Adaptive Mesh Refinement (AMR) method suitable for hybrid unstructured meshes that allows for local refinement and de-refinement of the computational grid during the evolution of the flow. The adaptive implementation of the Discontinuous Galerkin (DG) method introduced in this work (ForestDG) is based on a topological representation of the computational mesh by a hierarchical structure consisting of oct-, quad-, and binary trees. Adaptive mesh refinement (h-refinement) enables us to increase the spatial resolution of the computational mesh in the vicinity of points of interest such as interfaces, geometrical features, or flow discontinuities. The local increase in the expansion order (p-refinement) in areas of high strain rates or vorticity magnitude results in an increase of the order of accuracy in the region of shear layers and vortices. A graph of unitarian trees, representing hexahedral, prismatic and tetrahedral elements, is used for the representation of the initial domain. The ancestral elements of the mesh can be split into self-similar elements, allowing each tree to grow branches to an arbitrary level of refinement. The connectivity of the elements, their genealogy and their partitioning are described by linked lists of pointers. An explicit calculation of these relations, presented in this paper, facilitates the on-the-fly splitting, merging and repartitioning of the computational mesh by rearranging the links of each node of the tree with minimal computational overhead. The modal basis used in the DG implementation facilitates the mapping of the fluxes across the non-conformal faces. The AMR methodology is presented and assessed using a series of inviscid and viscous test cases. Also, the AMR methodology is used for the modelling of the interaction between droplets and the carrier phase in a two-phase flow. This approach is applied to the analysis of a spray injected into a chamber of quiescent air, using the Eulerian

  14. Operational procedure for computer program for design point characteristics of a compressed-air generator with through-flow combustor for V/STOL applications

    Krebs, R. P.

    1971-01-01

    The computer program described in this report calculates the design-point characteristics of a compressed-air generator for use in V/STOL applications such as systems with a tip-turbine-driven lift fan. The program computes the dimensions and mass, as well as the thermodynamic performance of a model air generator configuration which involves a straight through-flow combustor. Physical and thermodynamic characteristics of the air generator components are also given. The program was written in FORTRAN IV language. Provision has been made so that the program will accept input values in either SI units or U.S. customary units. Each air generator design-point calculation requires about 1.5 seconds of 7094 computer time for execution.

  15. FLUST-2D - A computer code for the calculation of the two-dimensional flow of a compressible medium in coupled rectangular areas

    Enderle, G.

    1979-01-01

The computer code FLUST-2D is able to calculate the two-dimensional flow of a compressible fluid in arbitrary coupled rectangular areas. In a finite-difference scheme the program computes pressure, density, internal energy and velocity. Starting with a basic set of equations, the difference equations in a rectangular grid are developed. The computational cycle for coupled fluid areas is described. Results of test calculations are compared to analytical solutions, and the influence of time step and mesh size is investigated. The program was used to precalculate the blowdown experiments of the HDR experimental program. Downcomer, plena, internal vessel region, blowdown pipe and a containment area have been modelled two-dimensionally. The major results of the precalculations are presented. This report also contains a description of the code structure and user information. (orig.)

  16. Performance Analysis of Embedded Zero Tree and Set Partitioning in Hierarchical Tree

    Pardeep Singh; Nivedita; Dinesh Gupta; Sugandha Sharma

    2012-01-01

Compressing an image is significantly different from compressing raw binary data, so dedicated compression algorithms are used for images. The discrete wavelet transform has been widely used to compress images. Wavelet transforms are very powerful compared to other transforms because of their ability to describe any type of signal in both the time and frequency domains simultaneously. The proposed schemes investigate the performance evaluation of embedded zero tree and wavelet based compress...
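
    The wavelet coders named in this record (embedded zero-tree, SPIHT) build on the transform step. A minimal sketch of one level of the 1-D Haar wavelet transform shows why the transform helps: on smooth data most detail coefficients are near zero and therefore cheap to encode. This is a generic Haar example, not the record's algorithm.

```python
def haar_1d(signal):
    """One level of the orthonormal 1-D Haar wavelet transform.
    Approximation coefficients capture the coarse signal; detail
    coefficients capture local change (near zero in smooth regions)."""
    s = 2 ** 0.5
    approx = [(a + b) / s for a, b in zip(signal[0::2], signal[1::2])]
    detail = [(a - b) / s for a, b in zip(signal[0::2], signal[1::2])]
    return approx, detail

def haar_1d_inverse(approx, detail):
    """Exact inverse of haar_1d (up to floating-point rounding)."""
    s = 2 ** 0.5
    out = []
    for a, d in zip(approx, detail):
        out.extend([(a + d) / s, (a - d) / s])
    return out

# A piecewise-constant "scanline": two flat regions
row = [10.0, 10.0, 10.0, 10.0, 80.0, 80.0, 80.0, 80.0]
approx, detail = haar_1d(row)     # detail == [0.0, 0.0, 0.0, 0.0]
restored = haar_1d_inverse(approx, detail)
```

    Zero-tree coders such as EZW exploit exactly this sparsity, encoding whole subtrees of insignificant detail coefficients with a single symbol.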

  17. Genomicus 2018: karyotype evolutionary trees and on-the-fly synteny computing.

    Nguyen, Nga Thi Thuy; Vincens, Pierre; Roest Crollius, Hugues; Louis, Alexandra

    2018-01-04

Since 2010, the Genomicus web server is available online at http://genomicus.biologie.ens.fr/genomicus. This graphical browser provides access to comparative genomic analyses in four different phyla (Vertebrates, Plants, Fungi, and non-vertebrate Metazoans). Users can analyse genomic information from extant species, as well as ancestral gene content and gene order for vertebrates and flowering plants, in an integrated evolutionary context. New analyses and visualization tools have recently been implemented in Genomicus Vertebrate. Karyotype structures from several genomes can now be compared along an evolutionary pathway (Multi-KaryotypeView), and synteny blocks can be computed and visualized between any two genomes (PhylDiagView). © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  18. Data compression with applications to digital radiology

    Elnahas, S.E.

    1985-01-01

The structure of arithmetic codes is defined in terms of source parsing trees. The theoretical derivations of algorithms for the construction of optimal and sub-optimal structures are presented. The software simulation results demonstrate how arithmetic coding outperforms variable-length to variable-length coding. Linear predictive coding is presented for the compression of digital diagnostic images from several imaging modalities including computed tomography, nuclear medicine, ultrasound, and magnetic resonance imaging. The problem of designing optimal predictors is formulated and alternative solutions are discussed. The results indicate that noiseless compression factors between 1.7 and 7.4 can be achieved. With nonlinear predictive coding, noisy and noiseless compression techniques are combined in a novel way that may have a potential impact on picture archiving and communication systems in radiology. Adaptive fast discrete cosine transform coding systems are used as nonlinear block predictors, and optimal delta modulation systems are used as nonlinear sequential predictors. The off-line storage requirements for archiving diagnostic images are reasonably reduced by the nonlinear block predictive coding. The online performance, however, seems to be bounded by that of the linear systems. The subjective quality of imperfect image reproductions from the cosine transform coding is promising and prompts future research on the compression of diagnostic images by transform coding systems and the clinical evaluation of these systems.
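
    The simplest instance of the linear predictive coding discussed in this record is a previous-sample predictor with residual (delta) storage. A minimal sketch, with an illustrative scanline rather than real image data:

```python
def delta_encode(samples):
    """Simplest linear predictor: predict each sample as the previous
    one and store the residual. Residuals of smooth data cluster near
    zero, so an entropy coder compresses them far better than the raw
    samples. Lossless: delta_decode reverses it exactly."""
    residuals = [samples[0]]                    # first sample stored as-is
    for prev, cur in zip(samples, samples[1:]):
        residuals.append(cur - prev)
    return residuals

def delta_decode(residuals):
    out = [residuals[0]]
    for r in residuals[1:]:
        out.append(out[-1] + r)
    return out

scanline = [100, 102, 103, 103, 101, 98, 97]    # illustrative pixel row
res = delta_encode(scanline)                    # [100, 2, 1, 0, -2, -3, -1]
assert delta_decode(res) == scanline            # noiseless round trip
```

    The record's "optimal predictors" generalize this by weighting several previous samples; the residual-plus-entropy-coding pipeline is the same.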

  19. Decision-Tree Program

    Buntine, Wray

    1994-01-01

    IND computer program introduces Bayesian and Markov/maximum-likelihood (MML) methods and more-sophisticated methods of searching in growing trees. Produces more-accurate class-probability estimates important in applications like diagnosis. Provides range of features and styles with convenience for casual user, fine-tuning for advanced user or for those interested in research. Consists of four basic kinds of routines: data-manipulation, tree-generation, tree-testing, and tree-display. Written in C language.

  20. Fault tree handbook

    Haasl, D.F.; Roberts, N.H.; Vesely, W.E.; Goldberg, F.F.

    1981-01-01

    This handbook describes a methodology for reliability analysis of complex systems such as those which comprise the engineered safety features of nuclear power generating stations. After an initial overview of the available system analysis approaches, the handbook focuses on a description of the deductive method known as fault tree analysis. The following aspects of fault tree analysis are covered: basic concepts for fault tree analysis; basic elements of a fault tree; fault tree construction; probability, statistics, and Boolean algebra for the fault tree analyst; qualitative and quantitative fault tree evaluation techniques; and computer codes for fault tree evaluation. Also discussed are several example problems illustrating the basic concepts of fault tree construction and evaluation
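
    The quantitative evaluation step described in the handbook reduces, in the simplest case of independent basic events, to recursive probability bookkeeping over AND/OR gates. A minimal sketch (the tree and probabilities below are hypothetical examples, not from the handbook):

```python
def gate_probability(node, basic):
    """Top-event probability of a fault tree with AND/OR gates, assuming
    independent basic events. A node is either a basic-event name (str)
    or a tuple ('AND' | 'OR', [children])."""
    if isinstance(node, str):
        return basic[node]
    op, children = node
    probs = [gate_probability(c, basic) for c in children]
    if op == 'AND':
        out = 1.0
        for p in probs:
            out *= p                  # all inputs must occur
        return out
    out = 1.0
    for p in probs:
        out *= (1.0 - p)              # OR: complement of "no input occurs"
    return 1.0 - out

# Hypothetical top event: (A AND B) OR C
tree = ('OR', [('AND', ['A', 'B']), 'C'])
p_top = gate_probability(tree, {'A': 0.1, 'B': 0.2, 'C': 0.05})
# 1 - (1 - 0.1*0.2) * (1 - 0.05) = 0.069
```

    Note this direct recursion is only exact when no basic event feeds more than one gate; with repeated events one must work from the minimal cut sets instead, which is why the handbook devotes so much space to their computation.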

  1. DNABIT Compress - Genome compression algorithm.

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-22

Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences, based on a novel scheme of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for larger genomes. Significantly better compression results show that the "DNABIT Compress" algorithm is the best among existing compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves the running time of all previous DNA compression programs. Assigning binary bits (unique bit codes) to fragments of DNA sequence (exact repeats, reverse repeats) is also a unique concept introduced in this algorithm for the first time in DNA compression. This proposed new algorithm achieves a compression ratio as low as 1.58 bits/base, where the existing best methods could not achieve a ratio below 1.72 bits/base.
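
    For context on the bits-per-base figures quoted above, the trivial baseline for a four-letter alphabet is a fixed 2 bits per base; repeat-aware coders such as DNABIT Compress beat it by giving short codes to repeated fragments. A minimal sketch of the 2-bit baseline only (not the paper's algorithm):

```python
CODE = {'A': 0b00, 'C': 0b01, 'G': 0b10, 'T': 0b11}
BASES = 'ACGT'

def pack(seq):
    """Fixed 2-bits-per-base packing of a DNA string into bytes: the
    naive baseline that repeat-aware DNA coders improve on."""
    bits = 0
    for base in seq:
        bits = (bits << 2) | CODE[base]
    nbytes = (2 * len(seq) + 7) // 8      # round up to whole bytes
    return len(seq), bits.to_bytes(nbytes, 'big')

def unpack(length, data):
    """Exact inverse of pack."""
    bits = int.from_bytes(data, 'big')
    out = []
    for i in range(length):
        shift = 2 * (length - 1 - i)
        out.append(BASES[(bits >> shift) & 0b11])
    return ''.join(out)

n, packed = pack('ACGTACGT')      # 8 bases fit in 2 bytes instead of 8
assert unpack(n, packed) == 'ACGTACGT'
```

    Against this 2.0 bits/base floor for memoryless coding, the 1.58 bits/base reported above is the gain from exploiting exact and reverse repeats.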

  2. Follow-up after stent insertion in the tracheobronchial tree: role of helical computed tomography in comparison with fiberoptic bronchoscopy

Ferretti, G.R.; Kocier, M.; Calaque, O.; Coulomb, M. [Service Central de Radiologie et Imagerie Medicale, INSERM EMI 9924, CHU, BP 217, 38043, Grenoble Cedex 9 (France); Arbib, F.; Pison, C. [Departement de Medecine Aigue Specialisee (DMAS), CHU Grenoble, CHU, BP 217, 38043, Grenoble Cedex 9 (France); Righini, C. [Service d'Oto-Rhino-Laryngologie, CHU Grenoble, BP 217, 38043, Grenoble Cedex 9 (France)

    2003-05-01

    The aim of this study was to compare helical CT with fiberoptic bronchoscopy findings to appraise the medium-term results of proximal-airways stenting. Twenty-five patients with 28 endobronchial metallic stents inserted for local advanced malignancy (n=13) or benign diseases (n=12) underwent follow-up CT from 3 days to 50 months (mean 8 months). All studies were obtained using helical CT with subsequent multiplanar reformation and three-dimensional reconstruction including virtual bronchoscopy. The location, shape, and patency of stents and adjacent airway were assessed. The results of CT were compared with the results of fiberoptic bronchoscopy obtained with a mean delay of 2.5 days (SD 9 days) after CT scan. Twelve stents (43%) remained in their original position, patent and without deformity. Sixteen stents were associated with local complications: migration (n=6); external compression with persistent stenosis (n=4); local recurrence of malignancy (n=4); fracture (n=1); and non-congruence between the airway and the stent (n=1). The CT demonstrated all the significant abnormalities demonstrated at fiberoptic bronchoscopy except two moderate stenoses (20%) related to granulomata at the origin of the stent. Ten of 14 stents inserted for benign conditions were without complications as compared with 2 of 14 in malignant conditions (p=0.008). Computed tomography is an accurate noninvasive method for evaluating endobronchial stents. The CT is a useful technique for follow-up of patients who have undergone endobronchial stenting. (orig.)

  3. Follow-up after stent insertion in the tracheobronchial tree: role of helical computed tomography in comparison with fiberoptic bronchoscopy

    Ferretti, G.R.; Kocier, M.; Calaque, O.; Coulomb, M.; Arbib, F.; Pison, C.; Righini, C.

    2003-01-01

    The aim of this study was to compare helical CT with fiberoptic bronchoscopy findings to appraise the medium-term results of proximal-airways stenting. Twenty-five patients with 28 endobronchial metallic stents inserted for local advanced malignancy (n=13) or benign diseases (n=12) underwent follow-up CT from 3 days to 50 months (mean 8 months). All studies were obtained using helical CT with subsequent multiplanar reformation and three-dimensional reconstruction including virtual bronchoscopy. The location, shape, and patency of stents and adjacent airway were assessed. The results of CT were compared with the results of fiberoptic bronchoscopy obtained with a mean delay of 2.5 days (SD 9 days) after CT scan. Twelve stents (43%) remained in their original position, patent and without deformity. Sixteen stents were associated with local complications: migration (n=6); external compression with persistent stenosis (n=4); local recurrence of malignancy (n=4); fracture (n=1); and non-congruence between the airway and the stent (n=1). The CT demonstrated all the significant abnormalities demonstrated at fiberoptic bronchoscopy except two moderate stenoses (20%) related to granulomata at the origin of the stent. Ten of 14 stents inserted for benign conditions were without complications as compared with 2 of 14 in malignant conditions (p=0.008). Computed tomography is an accurate noninvasive method for evaluating endobronchial stents. The CT is a useful technique for follow-up of patients who have undergone endobronchial stenting. (orig.)

  4. Compression of deep convolutional neural network for computer-aided diagnosis of masses in digital breast tomosynthesis

    Samala, Ravi K.; Chan, Heang-Ping; Hadjiiski, Lubomir; Helvie, Mark A.; Richter, Caleb; Cha, Kenny

    2018-02-01

Deep-learning models are highly parameterized, causing difficulty in inference and transfer learning. We propose a layered pathway evolution method to compress a deep convolutional neural network (DCNN) for classification of masses in DBT while maintaining the classification accuracy. Two-stage transfer learning was used to adapt the ImageNet-trained DCNN to mammography and then to DBT. In the first stage, transfer learning from the ImageNet-trained DCNN was performed using mammography data. In the second stage, the mammography-trained DCNN was trained on the DBT data using feature extraction from the fully connected layer, recursive feature elimination, and random forest classification. The layered pathway evolution encapsulates the feature extraction to the classification stages to compress the DCNN. A genetic algorithm was used in an iterative approach with tournament selection driven by count-preserving crossover and mutation to identify the necessary nodes in each convolution layer while eliminating the redundant nodes. The DCNN was reduced by 99% in the number of parameters and 95% in mathematical operations in the convolutional layers. The lesion-based area under the receiver operating characteristic curve on an independent DBT test set from the original and the compressed network was 0.88 ± 0.05 and 0.90 ± 0.04, respectively. The difference did not reach statistical significance. We demonstrated a DCNN compression approach without additional fine-tuning or loss of performance for classification of masses in DBT. The approach can be extended to other DCNNs and transfer learning tasks. An ensemble of these smaller and focused DCNNs has the potential to be used in multi-target transfer learning.

  5. Compression stockings

    Call your health insurance or prescription plan: Find out if they pay for compression stockings. Ask if your durable medical equipment benefit pays for compression stockings. Get a prescription from your doctor. Find a medical equipment store where they can ...

  6. Microstructure and compression properties of 3D powder printed Ti-6Al-4V scaffolds with designed porosity: Experimental and computational analysis

Barui, Srimanta; Chatterjee, Subhomoy; Mandal, Sourav [Laboratory for Biomaterials, Materials Research Centre, Indian Institute of Science, Bangalore (India); Center of Excellence and Innovation in Biotechnology - 'Translational Centre on Biomaterials for Orthopaedic and Dental Applications', Materials Research Center, Indian Institute of Science, Bangalore (India); Kumar, Alok [Laboratory for Biomaterials, Materials Research Centre, Indian Institute of Science, Bangalore (India); Basu, Bikramjit, E-mail: bikram@mrc.iisc.ernet.in [Laboratory for Biomaterials, Materials Research Centre, Indian Institute of Science, Bangalore (India); Centre for Biosystems Science and Engineering, Indian Institute of Science, Bangalore (India); Center of Excellence and Innovation in Biotechnology - 'Translational Centre on Biomaterials for Orthopaedic and Dental Applications', Materials Research Center, Indian Institute of Science, Bangalore (India)

    2017-01-01

The osseointegration of metallic implants depends on an effective balance among designed porosity to facilitate angiogenesis, tissue in-growth and bone-mimicking elastic modulus with good strength properties. While addressing such twin requirements, the present study demonstrates a low temperature additive manufacturing based processing strategy to fabricate Ti-6Al-4V scaffolds with designed porosity using inkjet-based 3D powder printing (3DPP). A novel starch-based aqueous binder was prepared and the physico-chemical parameters such as pH, viscosity, and surface tension were optimized for drop-on-demand (DOD) based thermal inkjet printing. Micro-computed tomography (micro-CT) of sintered scaffolds revealed a 57% total porosity in the homogeneously porous scaffold and 45% in the gradient porous scaffold, with 99% interconnectivity among the micropores. Under uniaxial compression testing, the strengths of the homogeneously porous and gradient porous scaffolds were ~ 47 MPa and ~ 90 MPa, respectively. The progressive failure in the homogeneously porous scaffold was recorded. In parallel to experimental measurements, finite element (FE) analyses have been performed to study the stress distribution globally and also locally around the designed pores. Consistent with FE analyses, a higher elastic modulus was recorded with gradient porous scaffolds (~ 3 GPa) than the homogeneously porous scaffolds (~ 2 GPa). While comparing with the existing literature reports, the present work, for the first time, establishes ‘direct powder printing methodology’ of Ti-6Al-4V porous scaffolds with biomedically relevant microstructural and mechanical properties. Also, a new FE analysis approach, based on the critical understanding of the porous architecture using micro-CT results, is presented to realistically predict the compression response of porous scaffolds. - Highlights: • Binder physics and process parameters in inkjet 3D printing of Ti-6Al-4V • Phase assembly and detailed microstructure

  7. Microstructure and compression properties of 3D powder printed Ti-6Al-4V scaffolds with designed porosity: Experimental and computational analysis

    Barui, Srimanta; Chatterjee, Subhomoy; Mandal, Sourav; Kumar, Alok; Basu, Bikramjit

    2017-01-01

The osseointegration of metallic implants depends on an effective balance among designed porosity to facilitate angiogenesis, tissue in-growth and bone-mimicking elastic modulus with good strength properties. While addressing such twin requirements, the present study demonstrates a low temperature additive manufacturing based processing strategy to fabricate Ti-6Al-4V scaffolds with designed porosity using inkjet-based 3D powder printing (3DPP). A novel starch-based aqueous binder was prepared and the physico-chemical parameters such as pH, viscosity, and surface tension were optimized for drop-on-demand (DOD) based thermal inkjet printing. Micro-computed tomography (micro-CT) of sintered scaffolds revealed a 57% total porosity in the homogeneously porous scaffold and 45% in the gradient porous scaffold, with 99% interconnectivity among the micropores. Under uniaxial compression testing, the strengths of the homogeneously porous and gradient porous scaffolds were ~ 47 MPa and ~ 90 MPa, respectively. The progressive failure in the homogeneously porous scaffold was recorded. In parallel to experimental measurements, finite element (FE) analyses have been performed to study the stress distribution globally and also locally around the designed pores. Consistent with FE analyses, a higher elastic modulus was recorded with gradient porous scaffolds (~ 3 GPa) than the homogeneously porous scaffolds (~ 2 GPa). While comparing with the existing literature reports, the present work, for the first time, establishes ‘direct powder printing methodology’ of Ti-6Al-4V porous scaffolds with biomedically relevant microstructural and mechanical properties. Also, a new FE analysis approach, based on the critical understanding of the porous architecture using micro-CT results, is presented to realistically predict the compression response of porous scaffolds. - Highlights: • Binder physics and process parameters in inkjet 3D printing of Ti-6Al-4V • Phase assembly and detailed microstructure

  8. Analytical computation of thermodynamic performance parameters of actual vapour compression refrigeration system with R22, R32, R134a, R152a, R290 and R1270

    Vali Shaik Sharmas

    2018-01-01

    Full Text Available The present work focuses on the analytical computation of the thermodynamic performance of an actual vapour compression refrigeration system using six pure refrigerants, namely R22, R32, R134a, R152a, R290 and R1270. A MATLAB code is developed to compute the thermodynamic performance parameters of the actual vapour compression system, such as refrigeration effect, compressor work, COP, power per ton of refrigeration, compressor discharge temperature and volumetric refrigeration capacity, at condensing and evaporating temperatures of 54.4 °C and 7.2 °C respectively. Analytical results exhibited that the COPs of R32 and R134a are 15.95% and 11.71% higher among the six investigated refrigerants. However, R32 and R134a cannot directly replace R22, owing to the higher compressor discharge temperature of R32 and the poor volumetric capacity of R134a, respectively. The discharge temperatures of both R1270 and R290 are lower than that of R22 by 20-26 °C. The volumetric refrigeration capacity of R1270 (3197 kJ/m³) is very close to that of R22 (3251 kJ/m³). Both R1270 and R290 show good miscibility with R22 mineral oil. Overall, R1270 would be a suitable ecofriendly refrigerant to replace R22 from the standpoint of ODP, GWP, volumetric capacity, discharge temperature and miscibility with mineral oil, although its COP is lower.
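
    As a sketch of the state-point bookkeeping behind such performance computations, the following Python fragment (the paper's own code is in MATLAB) evaluates refrigeration effect, compressor work and COP for an ideal cycle; the enthalpy values are invented for illustration and are not the refrigerant property data used in the study:

```python
# Hedged sketch: ideal vapour-compression cycle performance from assumed
# state-point enthalpies (kJ/kg). The numbers are illustrative only,
# not the R22/R32/... property data used in the paper.
def cycle_performance(h1, h2, h3, m_dot=1.0):
    """h1: evaporator exit, h2: compressor exit, h3: condenser exit (= h4)."""
    refrigeration_effect = h1 - h3      # kJ/kg (h4 = h3: isenthalpic expansion)
    compressor_work = h2 - h1           # kJ/kg
    cop = refrigeration_effect / compressor_work
    capacity_kw = m_dot * refrigeration_effect  # kW for m_dot in kg/s
    return refrigeration_effect, compressor_work, cop, capacity_kw

q, w, cop, cap = cycle_performance(h1=405.0, h2=440.0, h3=260.0)
print(f"COP = {cop:.2f}")   # (405-260)/(440-405) ≈ 4.14
```

    Real property data (e.g. from REFPROP-style tables) would replace the hypothetical enthalpies h1-h3 at the stated condensing and evaporating temperatures.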

  9. In-Situ Observations of Longitudinal Compression Damage in Carbon-Epoxy Cross Ply Laminates Using Fast Synchrotron Radiation Computed Tomography

    Bergan, Andrew C.; Garcea, Serafina C.

    2017-01-01

    The role of longitudinal compressive failure mechanisms in notched cross-ply laminates is studied experimentally with in-situ synchrotron radiation based computed tomography. Carbon/epoxy specimens loaded monotonically in uniaxial compression exhibited a quasi-stable failure process, which was captured with computed tomography scans recorded continuously with a temporal resolution of 2.4 seconds and a spatial resolution of 1.1 microns per voxel. A detailed chronology of the initiation and propagation of longitudinal matrix splitting cracks, in-plane and out-of-plane kink bands, shear-driven fiber failure, delamination, and transverse matrix cracks is provided with a focus on kink bands as the dominant failure mechanism. An automatic segmentation procedure is developed to identify the boundary surfaces of a kink band. The segmentation procedure enables 3-dimensional visualization of the kink band and conveys the orientation, inclination, and spatial variation of the kink band. The kink band inclination and length are examined using the segmented data, revealing tunneling and spatial variations not apparent from studying the 2-dimensional section data.

  10. Refining discordant gene trees.

    Górecki, Pawel; Eulenstein, Oliver

    2014-01-01

    Evolutionary studies are complicated by discordance between gene trees and the species tree in which they evolved. Dealing with discordant trees often relies on comparison costs between gene and species trees, including the well-established Robinson-Foulds, gene duplication, and deep coalescence costs. While these costs have provided credible results for binary rooted gene trees, corresponding cost definitions for non-binary unrooted gene trees, which frequently occur in practice, are challenged by biological realism. We propose a natural extension of the well-established costs for comparing unrooted and non-binary gene trees with rooted binary species trees using a binary refinement model. For the duplication cost we describe an efficient algorithm that is based on a linear time reduction and also computes an optimal rooted binary refinement of the given gene tree. Finally, we show that similar reductions lead to solutions for computing the deep coalescence and the Robinson-Foulds costs. Our binary refinement of Robinson-Foulds, gene duplication, and deep coalescence costs for unrooted and non-binary gene trees, together with the linear time reductions provided here for computing these costs, significantly extends the range of trees that can be incorporated into approaches dealing with discordance.
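
    The duplication cost mentioned above is classically obtained from the LCA (lowest common ancestor) mapping. The sketch below is a hedged Python illustration of that classical cost for rooted binary trees only (not the paper's refinement algorithm for unrooted, non-binary trees): it counts a duplication whenever a gene-tree node maps to the same species-tree node as one of its children.

```python
# Hedged sketch of the classical LCA-mapping gene duplication cost.
# Tree representation and example trees are invented for illustration.
class Node:
    def __init__(self, label=None, children=()):
        self.label, self.children = label, list(children)

def leaf_set(node):
    if not node.children:
        return {node.label}
    return set().union(*(leaf_set(c) for c in node.children))

def lca(species_root, taxa):
    """Smallest species-tree clade containing all taxa."""
    node = species_root
    while True:
        inside = [c for c in node.children if taxa <= leaf_set(c)]
        if not inside:
            return node
        node = inside[0]

def duplication_cost(gene_root, species_root):
    if not gene_root.children:
        return 0
    m = lca(species_root, leaf_set(gene_root))
    # Duplication: some child maps to the same species-tree node as its parent.
    dup = any(lca(species_root, leaf_set(c)) is m for c in gene_root.children)
    return int(dup) + sum(duplication_cost(c, species_root)
                          for c in gene_root.children)

# Species tree ((A,B),C); gene tree ((A,B),(A,C)) implies one duplication at the root.
S = Node(children=[Node(children=[Node('A'), Node('B')]), Node('C')])
G = Node(children=[Node(children=[Node('A'), Node('B')]),
                   Node(children=[Node('A'), Node('C')])])
print(duplication_cost(G, S))  # 1
```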

  11. Crown traits of coniferous trees and their relation to shade tolerance can differ with leaf type: a biophysical demonstration using computed tomography scanning data

    Dutilleul, Pierre; Han, Liwen; Valladares, Fernando; Messier, Christian

    2015-01-01

    Plant light interception and shade tolerance are intrinsically related in that they involve structural, morphological and physiological adaptations to manage light capture for photosynthetic utilization, in order to sustain survival, development and reproduction. At the scale of small-size trees, crown traits related to structural geometry of branching pattern and space occupancy through phyllotaxis can be accurately evaluated in 3D, using computed tomography (CT) scanning data. We demonstrat...

  13. Lossy image compression for digital medical imaging systems

    Wilhelm, Paul S.; Haynor, David R.; Kim, Yongmin; Nelson, Alan C.; Riskin, Eve A.

    1990-07-01

    Image compression at rates of 10:1 or greater could make PACS much more responsive and economically attractive. This paper describes a protocol for subjective and objective evaluation of the fidelity of compressed/decompressed images to the originals and presents the results of its application to four representative and promising compression methods. The methods examined are predictive pruned tree-structured vector quantization, fractal compression, the discrete cosine transform with equal weighting of block bit allocation, and the discrete cosine transform with human visual system weighting of block bit allocation. Vector quantization is theoretically capable of producing the best compressed images, but has proven to be difficult to implement effectively. It has the advantage that it can reconstruct images quickly through a simple lookup table. Disadvantages are that codebook training is required, the method is computationally intensive, and achieving the optimum performance would require prohibitively long vector dimensions. Fractal compression is a relatively new compression technique, but has produced satisfactory results while being computationally simple. It is fast at both image compression and image reconstruction. Discrete cosine transform techniques reproduce images well, but have traditionally been hampered by the need for intensive computing to compress and decompress images. A protocol was developed for side-by-side observer comparison of reconstructed images with originals. Three 1024 X 1024 CR (Computed Radiography) images and two 512 X 512 X-ray CT images were viewed at six bit rates (0.2, 0.4, 0.6, 0.9, 1.2, and 1.5 bpp for CR, and 1.0, 1.3, 1.6, 1.9, 2.2, 2.5 bpp for X-ray CT) by nine radiologists at the University of Washington Medical Center. The CR images were viewed on a Pixar II Megascan (2560 X 2048) monitor and the CT images on a Sony (1280 X 1024) monitor. The radiologists' subjective evaluations of image fidelity were compared to
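
    The energy-compaction property that makes transform coding work can be seen in a few lines. The following hedged Python sketch (illustrative only, not one of the codecs evaluated in the protocol) applies an orthonormal 8-point DCT-II to a smooth image row and measures how much of the signal energy lands in the first two coefficients:

```python
import math

# Hedged illustration of transform-coding energy compaction: a smooth
# signal's energy concentrates in few DCT coefficients, which is why
# coarse quantization of high-frequency terms is tolerable.
def dct_ii(x):
    """Orthonormal DCT-II of a sequence x."""
    N = len(x)
    return [(math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N))
            * sum(x[n] * math.cos(math.pi * (n + 0.5) * k / N) for n in range(N))
            for k in range(N)]

signal = [100, 102, 104, 107, 110, 112, 113, 115]   # smooth image row (invented)
coeffs = dct_ii(signal)
energy = sum(c * c for c in coeffs)      # equals sum(x*x) by Parseval
low = sum(c * c for c in coeffs[:2])     # energy in the 2 lowest frequencies
print(f"{low / energy:.4%} of the energy in the first 2 of 8 coefficients")
```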

  14. A Semi-Automated Machine Learning Algorithm for Tree Cover Delineation from 1-m Naip Imagery Using a High Performance Computing Architecture

    Basu, S.; Ganguly, S.; Nemani, R. R.; Mukhopadhyay, S.; Milesi, C.; Votava, P.; Michaelis, A.; Zhang, G.; Cook, B. D.; Saatchi, S. S.; Boyda, E.

    2014-12-01

    Accurate tree cover delineation is a useful instrument in the derivation of Above Ground Biomass (AGB) density estimates from Very High Resolution (VHR) satellite imagery data. Numerous algorithms have been designed to perform tree cover delineation in high to coarse resolution satellite imagery, but most of them do not scale to terabytes of data, typical in these VHR datasets. In this paper, we present an automated probabilistic framework for the segmentation and classification of 1-m VHR data as obtained from the National Agriculture Imagery Program (NAIP) for deriving tree cover estimates for the whole of the continental United States, using a High Performance Computing Architecture. The results from the classification and segmentation algorithms are then consolidated into a structured prediction framework using a discriminative undirected probabilistic graphical model based on Conditional Random Field (CRF), which helps in capturing the higher order contextual dependencies between neighboring pixels. Once the final probability maps are generated, the framework is updated and re-trained by incorporating expert knowledge through the relabeling of misclassified image patches. This leads to a significant improvement in the true positive rates and reduction in false positive rates. The tree cover maps were generated for the state of California, which covers a total of 11,095 NAIP tiles and spans a total geographical area of 163,696 sq. miles. Our framework produced correct detection rates of around 85% for fragmented forests and 70% for urban tree cover areas, with false positive rates lower than 3% for both regions. Comparative studies with the National Land Cover Data (NLCD) algorithm and the LiDAR high-resolution canopy height model show the effectiveness of our algorithm in generating accurate high-resolution tree cover maps.
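
    The detection and false-positive rates quoted above are simple functions of a confusion matrix. A minimal Python sketch, with invented labels rather than the NAIP evaluation data, is:

```python
# Hedged sketch: detection (true-positive) rate and false-positive rate of
# a binary tree-cover mask against reference labels. Data are invented.
def rates(pred, truth):
    tp = sum(p and t for p, t in zip(pred, truth))        # predicted & actual tree
    fp = sum(p and not t for p, t in zip(pred, truth))    # predicted tree, actually not
    pos = sum(truth)
    neg = len(truth) - pos
    return tp / pos, fp / neg   # detection rate, false-positive rate

truth = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]   # reference pixels (1 = tree cover)
pred  = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]   # classifier output
tpr, fpr = rates(pred, truth)
print(tpr, fpr)
```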

  15. Tree Coding of Bilevel Images

    Martins, Bo; Forchhammer, Søren

    1998-01-01

    Presently, sequential tree coders are the best general purpose bilevel image coders and the best coders of halftoned images. The current ISO standard, Joint Bilevel Image Experts Group (JBIG), is a good example. A sequential tree coder encodes the data by feeding estimates of conditional… is one order of magnitude slower than JBIG, obtains excellent and highly robust compression performance. A multipass free tree coding scheme produces superior compression results for all test images. A multipass free template coding scheme produces significantly better results than JBIG for difficult… images such as halftones. By utilizing randomized subsampling in the template selection, the speed becomes acceptable for practical image coding…

  16. Analysis of Compression Algorithm in Ground Collision Avoidance Systems (Auto-GCAS)

    Schmalz, Tyler; Ryan, Jack

    2011-01-01

    Automatic Ground Collision Avoidance Systems (Auto-GCAS) utilize Digital Terrain Elevation Data (DTED) stored onboard a plane to determine potential recovery maneuvers. Because of the current limitations of computer hardware on military airplanes such as the F-22 and F-35, the DTED must be compressed through a lossy technique called binary-tree tip-tilt. The purpose of this study is to determine the accuracy of the compressed data with respect to the original DTED. This study is mainly interested in the magnitude of the error between the two as well as the overall distribution of the errors throughout the DTED. By understanding how the errors of the compression technique are affected by various factors (topography, density of sampling points, sub-sampling techniques, etc.), modifications can be made to the compression technique resulting in better accuracy. This, in turn, would minimize unnecessary activation of Auto-GCAS during flight as well as maximize its contribution to fighter safety.
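
    The error analysis described here, the magnitude and distribution of the elevation error, can be sketched as follows; the grids below are invented toy values, not DTED:

```python
# Hedged sketch of the accuracy analysis: compare an original elevation grid
# against its lossy reconstruction and summarize the absolute error.
def error_stats(original, compressed):
    errs = [abs(a - b)
            for row_o, row_c in zip(original, compressed)
            for a, b in zip(row_o, row_c)]
    n = len(errs)
    return {"max": max(errs),
            "mean": sum(errs) / n,
            "within_1m": sum(e <= 1.0 for e in errs) / n}  # fraction within 1 m

dted      = [[100.0, 101.5, 103.0], [ 99.0, 100.5, 102.0]]  # invented elevations (m)
recovered = [[100.2, 101.0, 103.9], [ 99.0, 100.4, 101.1]]  # after lossy compression
stats = error_stats(dted, recovered)
print(stats)
```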

  17. Discrete Wigner Function Reconstruction and Compressed Sensing

    Zhang, Jia-Ning; Fang, Lei; Ge, Mo-Lin

    2011-01-01

    A new reconstruction method for the Wigner function is reported for quantum tomography based on compressed sensing. By analogy with computed tomography, Wigner functions for some quantum states can be reconstructed with fewer measurements using this compressed-sensing-based method.

  18. Tree growth visualization

    L. Linsen; B.J. Karis; E.G. McPherson; B. Hamann

    2005-01-01

    In computer graphics, models describing the fractal branching structure of trees typically exploit the modularity of tree structures. The models are based on local production rules, which are applied iteratively and simultaneously to create a complex branching system. The objective is to generate three-dimensional scenes of often many realistic-looking and non-...

  19. A non-oscillatory energy-splitting method for the computation of compressible multi-fluid flows

    Lei, Xin; Li, Jiequan

    2018-04-01

    This paper proposes a new non-oscillatory energy-splitting conservative algorithm for computing multi-fluid flows in the Eulerian framework. In comparison with existing multi-fluid algorithms in the literature, it is shown that the mass fraction model with the isobaric hypothesis is a plausible choice for designing numerical methods for multi-fluid flows. Then we construct a conservative Godunov-based scheme with a high order accurate extension by using the generalized Riemann problem solver, through a detailed analysis of the kinetic energy exchange when fluids are mixed under the hypothesis of isobaric equilibrium. Numerical experiments are carried out for the shock-interface interaction and shock-bubble interaction problems, which display the excellent performance of this type of scheme and demonstrate that nonphysical oscillations are suppressed around material interfaces substantially.

  20. Compressive sensing in medical imaging.

    Graff, Christian G; Sidky, Emil Y

    2015-03-10

    The promise of compressive sensing, exploitation of compressibility to achieve high quality image reconstructions with less data, has attracted a great deal of attention in the medical imaging community. At the Compressed Sensing Incubator meeting held in April 2014 at OSA Headquarters in Washington, DC, presentations were given summarizing some of the research efforts ongoing in compressive sensing for x-ray computed tomography and magnetic resonance imaging systems. This article provides an expanded version of these presentations. Sparsity-exploiting reconstruction algorithms that have gained popularity in the medical imaging community are studied, and examples of clinical applications that could benefit from compressive sensing ideas are provided. The current and potential future impact of compressive sensing on the medical imaging field is discussed.

  1. A tree-decomposed transfer matrix for computing exact Potts model partition functions for arbitrary graphs, with applications to planar graph colourings

    Bedini, Andrea; Jacobsen, Jesper Lykke

    2010-01-01

    Combining tree decomposition and transfer matrix techniques provides a very general algorithm for computing exact partition functions of statistical models defined on arbitrary graphs. The algorithm is particularly efficient in the case of planar graphs. We illustrate it by computing the Potts model partition functions and chromatic polynomials (the number of proper vertex colourings using Q colours) for large samples of random planar graphs with up to N = 100 vertices. In the latter case, our algorithm yields a sub-exponential average running time of ∼exp(1.516√N), a substantial improvement over the exponential running time ∼exp(0.245N) provided by the hitherto best-known algorithm. We study the statistics of chromatic roots of random planar graphs in some detail, comparing the findings with results for finite pieces of a regular lattice.
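
    For contrast with the transfer-matrix approach, the chromatic polynomial evaluated at an integer Q is simply a count of proper colourings, which a brute-force sketch makes explicit (exponential in the number of vertices, hence hopeless beyond tiny graphs, which is exactly why the algorithm above matters):

```python
from itertools import product

# Hedged brute-force sketch (NOT the paper's tree-decomposed transfer
# matrix): count proper Q-colourings of a small graph by enumeration.
def chromatic_count(n_vertices, edges, q):
    return sum(all(c[u] != c[v] for u, v in edges)
               for c in product(range(q), repeat=n_vertices))

triangle = [(0, 1), (1, 2), (0, 2)]
print(chromatic_count(3, triangle, 3))  # Q(Q-1)(Q-2) = 6 for Q = 3
```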

  2. Adjustable chain trees for proteins

    Winter, Pawel; Fonseca, Rasmus

    2012-01-01

    A chain tree is a data structure for changing protein conformations. It enables very fast detection of clashes and free energy potential calculations. A modified version of chain trees that adjust themselves to the changing conformations of folding proteins is introduced. This results in much tighter bounding volume hierarchies and therefore fewer intersection checks. Computational results indicate that the efficiency of the adjustable chain trees is significantly improved compared to the traditional chain trees.
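
    The pruning idea behind chain trees, discarding a pair of subchains whenever their bounding volumes are disjoint, can be sketched with a simple bounding-sphere hierarchy. This is a hedged toy illustration, not the authors' data structure:

```python
import math

# Hedged sketch: a binary hierarchy of bounding spheres over a chain of
# atoms lets clash detection prune whole subchains when parent spheres
# do not overlap. Geometry and radii are simplified for illustration.
class SphereNode:
    def __init__(self, center, radius, left=None, right=None):
        self.center, self.radius = center, radius
        self.left, self.right = left, right

def build(points, atom_radius=1.0):
    if len(points) == 1:
        return SphereNode(points[0], atom_radius)
    mid = len(points) // 2
    left, right = build(points[:mid], atom_radius), build(points[mid:], atom_radius)
    center = tuple((a + b) / 2 for a, b in zip(left.center, right.center))
    half = math.dist(left.center, right.center) / 2
    # Conservative enclosing radius: covers both child spheres.
    return SphereNode(center, half + max(left.radius, right.radius), left, right)

def clash(a, b):
    if math.dist(a.center, b.center) > a.radius + b.radius:
        return False            # bounding spheres disjoint: prune both subtrees
    if a.left is None and b.left is None:
        return True             # two leaf atoms actually overlap
    if a.left is None:
        return clash(a, b.left) or clash(a, b.right)
    return clash(a.left, b) or clash(a.right, b)

chain1 = build([(0.0, 0.0, 0.0), (1.5, 0.0, 0.0)])
chain2 = build([(10.0, 0.0, 0.0), (11.5, 0.0, 0.0)])
print(clash(chain1, chain2))  # False: far-apart chains pruned at the root
```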

  3. Feasibility study for application of the compressed-sensing framework to interior computed tomography (ICT) for low-dose, high-accurate dental x-ray imaging

    Je, U. K.; Cho, H. M.; Cho, H. S.; Park, Y. O.; Park, C. K.; Lim, H. W.; Kim, K. S.; Kim, G. A.; Park, S. Y.; Woo, T. H.; Choi, S. I.

    2016-02-01

    In this paper, we propose a new, next-generation type of CT examination, so-called Interior Computed Tomography (ICT), which may lead to dose reduction to the patient outside the target region-of-interest (ROI) in dental x-ray imaging. Here an x-ray beam from each projection position covers only a relatively small ROI containing the diagnostic target within the examined structure, leading to imaging benefits such as reduced scatter and system cost as well as reduced imaging dose. We considered the compressed-sensing (CS) framework, rather than common filtered-backprojection (FBP)-based algorithms, for more accurate ICT reconstruction. We implemented a CS-based ICT algorithm and performed a systematic simulation to investigate the imaging characteristics. Simulation conditions with two ROI ratios of 0.28 and 0.14 between the target and the whole phantom sizes and four projection numbers of 360, 180, 90, and 45 were tested. We successfully reconstructed ICT images of high quality by using the CS framework even with few-view projection data, while still preserving sharp edges in the images.

  4. Full-field mapping of internal strain distribution in red sandstone specimen under compression using digital volumetric speckle photography and X-ray computed tomography

    Lingtao Mao

    2015-04-01

    Full Text Available It is always desirable to know the interior deformation pattern when a rock is subjected to mechanical load. Few experimental techniques exist that can represent full-field three-dimensional (3D) strain distribution inside a rock specimen, and yet this information is crucial for fully understanding the failure mechanism of rocks or other geomaterials. In this study, by using the newly developed digital volumetric speckle photography (DVSP) technique in conjunction with X-ray computed tomography (CT), and taking advantage of natural 3D speckles formed inside the rock due to material impurities and voids, we can probe the interior of a rock to map its deformation pattern under load and shed light on its failure mechanism. We apply this technique to the analysis of a red sandstone specimen under increasing uniaxial compressive load applied incrementally. The full-field 3D displacement fields are obtained in the specimen as a function of the load, from which both the volumetric and the deviatoric strain fields are calculated. Strain localization zones which lead to the eventual failure of the rock are identified. The results indicate that both shear and tension are contributing factors to the failure mechanism.
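
    At its core, speckle photography locates the displacement that best correlates a deformed trace against a reference subset. The Python sketch below is a hedged 1-D analogue with an invented speckle pattern; real DVSP correlates 3-D subvolumes of CT data:

```python
# Hedged 1-D analogue of digital volumetric speckle photography: recover a
# rigid displacement by finding the shift that maximizes the correlation of
# a deformed speckle trace with a reference subset. Data are invented.
def best_shift(reference, deformed, max_shift):
    def score(s):
        pairs = [(reference[i], deformed[i + s])
                 for i in range(len(reference)) if 0 <= i + s < len(deformed)]
        return sum(a * b for a, b in pairs)   # un-normalized cross-correlation
    return max(range(-max_shift, max_shift + 1), key=score)

speckle = [0, 3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5]   # reference intensity trace
shifted = [0, 0, 0] + speckle                    # pattern displaced by 3 samples
print(best_shift(speckle, shifted, max_shift=5))  # 3
```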

  5. Notion Of Artificial Labs Slow Global Warming And Advancing Engine Studies Perspectives On A Computational Experiment On Dual-Fuel Compression-Ignition Engine Research

    Tonye K. Jack

    2017-06-01

    Full Text Available To appreciate clean energy applications of the dual-fuel internal combustion engine (D-FICE) with pilot Diesel fuel, and to aid public policy formulation in terms of present and future benefits to modern transportation, stationary power, and the promotion of oil and gas green-drilling, the brief to an engine research team was to investigate the feasible advantages of dual-fuel compression-ignition engines, guided by the following concerns: (i) sustainable fuel and engine power delivery; (ii) the requirements for fuel flexibility; (iii) low exhaust emissions and environmental pollution; (iv) achieving low specific fuel consumption and economy for maximum power; (v) the comparative advantages over conventional Diesel engines; (vi) thermo-economic modeling and analysis for the optimal blend as a basis for a benefit/cost evaluation. The work was planned in two stages for reduced cost and fast turnaround of results: an initial preliminary stage with basic simple models, and an advanced stage with more detailed complex modeling. The paper describes a simplified MATLAB-based computational experiment predictive model for the thermodynamic combustion and engine performance analysis of dual-fuel compression-ignition engine studies operating on the theoretical limited-pressure cycle with several alternative fuel blends. Environmental implications for extreme temperature moderation are considered by finite-time thermodynamic modeling for maximum power, with predictions for pollutant formation and control by reaction-rate kinetics analysis of systematically reduced plausible coupled chemistry models through the NCN reaction pathway for the gas-phase reaction classes of interest. Controllable variables for engine-out pollutant emission reduction, and in particular NOx elimination, are identified. Verification and validation (V&V) through performance comparisons were made using a clinical approach in the selection of stroke/bore ratios greater than or equal to one (≥ 1), low-to-high engine speeds, and medium

  6. Wellhead compression

    Harrington, Joe [Sertco Industries, Inc., Okemah, OK (United States); Vazquez, Daniel [Hoerbiger Service Latin America Inc., Deerfield Beach, FL (United States); Jacobs, Denis Richard [Hoerbiger do Brasil Industria de Equipamentos, Cajamar, SP (Brazil)

    2012-07-01

    Over time, all wells experience a natural decline in oil and gas production. In gas wells, the major problems are liquid loading and low downhole differential pressures, which negatively impact total gas production. As a form of artificial lift, wellhead compressors help reduce the tubing pressure, resulting in gas velocities above the critical velocity needed to surface water, oil and condensate, regaining lost production and increasing recoverable reserves. Best results come from reservoirs with high porosity, high permeability, high initial flow rates, low decline rates and high total cumulative production. In oil wells, excessive annulus gas pressure tends to inhibit both oil and gas production. Wellhead compression packages can provide a cost effective solution to these problems by reducing the system pressure in the tubing or annulus, allowing for an immediate increase in production rates. Wells furthest from the gathering compressor typically benefit the most from wellhead compression due to system pressure drops. Downstream compressors also benefit from higher suction pressures, reducing overall compression horsepower requirements. Special care must be taken in selecting the best equipment for these applications. The successful implementation of wellhead compression from an economical standpoint hinges on the testing, installation and operation of the equipment. Key challenges, suggested equipment features designed to combat those challenges, and successful case histories throughout Latin America are discussed below. (author)

  7. Correlations between quality indexes of chest compression.

    Zhang, Feng-Ling; Yan, Li; Huang, Su-Fang; Bai, Xiang-Jun

    2013-01-01

    Cardiopulmonary resuscitation (CPR) is a kind of emergency treatment for cardiopulmonary arrest, and chest compression is the most important and necessary part of CPR. The American Heart Association published the new Guidelines for Cardiopulmonary Resuscitation and Emergency Cardiovascular Care in 2010 and demanded better performance of chest compression practice, especially in compression depth and rate. The current study aimed to explore the relationships among the quality indexes of chest compression and to identify the key points in chest compression training and practice. In total, 219 healthcare workers received chest compression training using the Laerdal ACLS advanced life support resuscitation model. The quality indexes of chest compression, including compression hands placement, compression rate, compression depth, and chest wall recoil, as well as self-reported fatigue time, were monitored by the Laerdal Computer Skills and Reporting System. The quality of chest compression was related to the gender of the compressor. The indexes in males, including self-reported fatigue time, the accuracy of compression depth, the compression rate, and the accuracy of compression rate, were higher than those in females. However, the accuracy of chest recoil was higher in females than in males. The quality indexes of chest compression were correlated with each other. The self-reported fatigue time was related to all the indexes except the compression rate. It is necessary to offer CPR training courses regularly. In clinical practice, it might be better to change the practitioner before fatigue, especially for female or weak practitioners. In training projects, more attention should be paid to the control of compression rate, in order to delay fatigue, guarantee enough compression depth and improve the quality of chest compression.
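
    The correlations reported here are ordinary product-moment correlations between paired quality indexes. A minimal Python sketch with invented measurements (not the study's data) is:

```python
import math

# Hedged sketch: Pearson correlation between two chest-compression quality
# indexes. The paired measurements below are invented for illustration.
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

fatigue_time_s = [120, 150, 180, 200, 240]          # self-reported fatigue time (s)
depth_accuracy = [0.60, 0.68, 0.75, 0.80, 0.88]     # fraction at correct depth
r = pearson(fatigue_time_s, depth_accuracy)
print(f"r = {r:.3f}")
```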

  8. Studies on improvement of diagnostic ability of computed tomography (CT) in the parenchymatous organs in the upper abdomen, 1. Study on the upper abdominal compression method

    Kawata, Ryo [Gifu Univ. (Japan). Faculty of Medicine

    1982-07-01

    1) The upper abdominal compression method was easily applicable for CT examination in practically all patients. It caused no harm and considerably improved CT diagnosis. 2) The materials used for compression were foamed polystyrene, the Mix-Dp and a water bag. When CT examination was performed to diagnose such lesions as a circumscribed tumor, compression with the Mix-Dp was most useful, and when it was performed for screening examination of upper abdominal diseases, compression with a water bag was most effective. 3) Improvement in the contour-depicting ability of CT by the compression method was most marked at the body of the pancreas, followed by the head of the pancreas and the posterior surface of the left lobe of the liver. Slight improvement was seen also at the tail of the pancreas and the left adrenal gland. 4) Improvement in the organ-depicting ability of CT by the compression method was estimated by a 4-category classification method. It was found that the improvement was most marked at the body and the head of the pancreas. Considerable improvement was observed also at the left lobe of the liver and both adrenal glands. Little improvement was obtained at the spleen. When contrast enhancement was combined with the compression method, improvement at organs liable to be enhanced, such as the liver and the adrenal glands, was promoted, while the organ-depicting ability was decreased at the pancreas. 5) By comparing the CT image under compression with that without compression, continuous infiltration of gastric cancer into the body and the tail of the pancreas in 2 cases and retroperitoneal infiltration of a pancreatic tumor in 1 case were diagnosed preoperatively.

  9. Compressing DNA sequence databases with coil

    Hendy Michael D

    2008-05-01

    Full Text Available Background: Publicly available DNA sequence databases such as GenBank are large, and are growing at an exponential rate. The sheer volume of data being dealt with presents serious storage and data communications problems. Currently, sequence data is usually kept in large "flat files," which are then compressed using standard Lempel-Ziv (gzip) compression – an approach which rarely achieves good compression ratios. While much research has been done on compressing individual DNA sequences, surprisingly little has focused on the compression of entire databases of such sequences. In this study we introduce the sequence database compression software coil. Results: We have designed and implemented a portable software package, coil, for compressing and decompressing DNA sequence databases based on the idea of edit-tree coding. coil is geared towards achieving high compression ratios at the expense of execution time and memory usage during compression – the compression time represents a "one-off investment" whose cost is quickly amortised if the resulting compressed file is transmitted many times. Decompression requires little memory and is extremely fast. We demonstrate a 5% improvement in compression ratio over state-of-the-art general-purpose compression tools for a large GenBank database file containing Expressed Sequence Tag (EST) data. Finally, coil can efficiently encode incremental additions to a sequence database. Conclusion: coil presents a compelling alternative to conventional compression of flat files for the storage and distribution of DNA sequence databases having a narrow distribution of sequence lengths, such as EST data. Increasing compression levels for databases having a wide distribution of sequence lengths is a direction for future work.
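
    The gzip baseline that coil is compared against can be reproduced in a few lines with Python's zlib module (the same DEFLATE algorithm). The toy sequence below is invented and far more repetitive than real EST data, so the ratio is only illustrative:

```python
import zlib

# Hedged sketch of the general-purpose Lempel-Ziv baseline: compress a DNA
# "flat file" with zlib and report the space saving. Sequence is invented.
sequence = ("ACGTACGTGGCTTAGC" * 400).encode("ascii")
compressed = zlib.compress(sequence, level=9)
ratio = len(compressed) / len(sequence)
print(f"{len(sequence)} -> {len(compressed)} bytes "
      f"({100 * (1 - ratio):.1f}% space saving)")
```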

  10. Speech Compression

    Jerry D. Gibson

    2016-06-01

    Full Text Available Speech compression is a key technology underlying digital cellular communications, VoIP, voicemail, and voice response systems. We trace the evolution of speech coding based on the linear prediction model, highlight the key milestones in speech coding, and outline the structures of the most important speech coding standards. Current challenges, future research directions, fundamental limits on performance, and the critical open problem of speech coding for emergency first responders are all discussed.

  11. Computer vision applied to herbarium specimens of German trees: testing the future utility of the millions of herbarium specimen images for automated identification.

    Unger, Jakob; Merhof, Dorit; Renner, Susanne

    2016-11-16

    Global Plants, a collaborative between JSTOR and some 300 herbaria, now contains about 2.48 million high-resolution images of plant specimens, a number that continues to grow, and collections that are digitizing their specimens at high resolution are allocating considerable resources to the maintenance of computer hardware (e.g., servers) and to acquiring digital storage space. We here apply machine learning, specifically the training of a Support-Vector-Machine, to classify specimen images into categories, ideally at the species level, using the 26 most common tree species in Germany as a test case. We designed an analysis pipeline and classification system consisting of segmentation, normalization, feature extraction, and classification steps and evaluated the system on two test sets, one with 26 species, the other with 17, in each case using 10 images per species of plants collected between 1820 and 1995, which simulates the empirical situation that most named species are represented in herbaria and databases, such as JSTOR, by few specimens. We achieved 73.21% accuracy of species assignments in the larger test set, and 84.88% in the smaller test set. The results of this first application of a computer vision algorithm trained on images of herbarium specimens show that, despite the problem of overlapping leaves, leaf-architectural features can be used to categorize specimens to species with good accuracy. Computer vision is poised to play a significant role in future rapid identification, at least for frequently collected genera or species in the European flora.

  12. Boolean Logic Tree of Label-Free Dual-Signal Electrochemical Aptasensor System for Biosensing, Three-State Logic Computation, and Keypad Lock Security Operation.

    Lu, Jiao Yang; Zhang, Xin Xing; Huang, Wei Tao; Zhu, Qiu Yan; Ding, Xue Zhi; Xia, Li Qiu; Luo, Hong Qun; Li, Nian Bing

    2017-09-19

    The most serious and yet unsolved problems of molecular logic computing consist in how to connect molecular events in complex systems into a usable device with specific functions and how to selectively control branchy logic processes in cascading logic systems. This report demonstrates that a Boolean logic tree is utilized to organize and connect "plug and play" chemical events (DNA, nanomaterials, organic dye, biomolecule, and denaturant) for developing a dual-signal electrochemical evolution aptasensor system with good resettability for amplification detection of thrombin, controllable and selectable three-state logic computation, and keypad lock security operation. The aptasensor system combines the merits of a DNA-functionalized nanoamplification architecture and the simple dual-signal electroactive dye brilliant cresyl blue for sensitive and selective detection of thrombin with a wide linear response range of 0.02-100 nM and a detection limit of 1.92 pM. By using these aforementioned chemical events as inputs and the differential pulse voltammetry current changes at different voltages as dual outputs, a resettable three-input biomolecular keypad lock based on sequential logic is established. Moreover, the first example of controllable and selectable three-state molecular logic computation with active-high and active-low logic functions can be implemented and allows the output ports to assume a high-impedance or nothing (Z) state in addition to the 0 and 1 logic levels, effectively controlling subsequent branchy logic computation processes. Our approach is helpful in developing advanced controllable and selectable logic computing and sensing systems in large-scale integration circuits for application in biomedical engineering, intelligent sensing, and control.
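The three-state behavior described above has a familiar software analogue: an output that can be 0, 1, or high-impedance ("Z"), with Z effectively disconnecting a driver from a shared line. The gate names and the enable convention below are illustrative, not taken from the paper.

```python
# Minimal tri-state logic: outputs may be 0, 1, or high-impedance "Z".

Z = "Z"  # high-impedance / disconnected state

def tri_state_buffer(value, enable):
    """Pass `value` through when enabled; otherwise present high impedance."""
    return value if enable else Z

def bus_resolve(outputs):
    """Resolve several tri-state outputs driving one line; at most one may drive."""
    driving = [v for v in outputs if v != Z]
    if not driving:
        return Z
    if len(driving) > 1:
        raise ValueError("bus contention: more than one active driver")
    return driving[0]

print(bus_resolve([tri_state_buffer(1, True), tri_state_buffer(0, False)]))
```

The Z state is what lets one output "effectively control subsequent branchy logic": a disabled branch contributes nothing to the line it shares with others.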

  13. Multiscale singularity trees

    Somchaipeng, Kerawit; Sporring, Jon; Johansen, Peter

    2007-01-01

    We propose MultiScale Singularity Trees (MSSTs) as a structure to represent images, and we propose an algorithm for image comparison based on comparing MSSTs. The algorithm is tested on 3 public image databases and compared to 2 state-of-the-art methods. We conclude that the computational complexity of our algorithm only allows for the comparison of small trees, and that the results of our method are comparable with the state of the art while using much fewer parameters for image representation.

  14. Flowering Trees

    Flowering Trees. Boswellia serrata Roxb. ex Colebr. (Indian Frankincense tree) of Burseraceae is a large-sized deciduous tree that is native to India. Bark is thin and greenish-ash-coloured, exfoliating into smooth papery flakes. Stem exudes pinkish resin ... Fruit is a three-valved capsule. A green gum-resin exudes from the ...

  15. Flowering Trees

    IAS Admin

    Flowering Trees. Ailanthus excelsa Roxb. (INDIAN TREE OF. HEAVEN) of Simaroubaceae is a lofty tree with large pinnately compound alternate leaves, which are ... inflorescences, unisexual and greenish-yellow. Fruits are winged, wings many-nerved. Wood is used in making match sticks. 1. Male flower; 2. Female flower.

  16. Flowering Trees

    Flowering Trees. Gyrocarpus americanus Jacq. (Helicopter Tree) of Hernandiaceae is a moderate size deciduous tree that grows to about 12 m in height with a smooth, shining, greenish-white bark. The leaves are ovate, rarely irregularly ... flowers which are unpleasant smelling. Fruit is a woody nut with two long thin wings.

  17. Flowering Trees

    More Details Fulltext PDF. Volume 8 Issue 8 August 2003 pp 112-112 Flowering Trees. Zizyphus jujuba Lam. of Rhamnaceae · More Details Fulltext PDF. Volume 8 Issue 9 September 2003 pp 97-97 Flowering Trees. Moringa oleifera · More Details Fulltext PDF. Volume 8 Issue 10 October 2003 pp 100-100 Flowering Trees.

  18. Compression of surface myoelectric signals using MP3 encoding.

    Chan, Adrian D C

    2011-01-01

    The potential of MP3 compression of surface myoelectric signals is explored in this paper. MP3 compression is a perceptual-based encoder scheme, used traditionally to compress audio signals. The ubiquity of MP3 compression (e.g., portable consumer electronics and internet applications) makes it an attractive option for remote monitoring and telemedicine applications. The effects of muscle site and contraction type are examined at different MP3 encoding bitrates. Results demonstrate that MP3 compression is sensitive to the myoelectric signal bandwidth, with larger signal distortion associated with myoelectric signals that have higher bandwidths. Compared to other myoelectric signal compression techniques reported previously (embedded zero-tree wavelet compression and adaptive differential pulse code modulation), MP3 compression demonstrates superior performance (i.e., lower percent residual differences for the same compression ratios).
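The distortion measure used to compare the schemes above, percent residual difference (PRD), is conventionally defined as the energy of the reconstruction error relative to the energy of the original signal. A minimal sketch:

```python
# Percent residual difference (PRD) between an original signal and
# its reconstruction after lossy compression.
import math

def percent_residual_difference(original, reconstructed):
    err = sum((o - r) ** 2 for o, r in zip(original, reconstructed))
    energy = sum(o ** 2 for o in original)
    return 100.0 * math.sqrt(err / energy)

sig = [1.0, -2.0, 3.0, -4.0]
rec = [1.1, -1.9, 3.2, -3.8]
print(round(percent_residual_difference(sig, rec), 2))
```

A lower PRD at the same compression ratio is the sense in which MP3 compression "demonstrates superior performance" in the abstract.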

  19. Computational program to design heat pumps by compression (ciclo 1.0); Programa computacional para diseno de bombas de calor por compresion (ciclo 1.0)

    De Alba Rosano, Mauricio [CIE, UNAM, Temixco, Morelos (Mexico)

    2000-07-01

    A new computational program has been developed for the design of single-stage compression heat pumps. This software, named CICLO 1.0, allows the design of water-water, water-air, air-water and air-air heat pumps for industrial and residential applications. CICLO 1.0 simulates three types of compressors: reciprocating, screw and scroll. It also has a database, created with the REFPROP software, that includes eleven refrigerants. The condenser and evaporator simulation includes determination of the global conductance (UA), and when one or both are of the shell-and-tube type, the software reports the even number of tube passes per shell. The software determines the best compressor and refrigerant setup taking the COP as a parameter; to obtain this, it is necessary to know the inlet/outlet conditions of the fluid to be heated, the inlet conditions of the fluid that gives up heat, and the efficiency of the electric motor that drives the compressor. The results afforded by CICLO 1.0 are the operating conditions of the compression cycle: pressures and temperatures at the inlet/outlet of every heat pump component, as well as refrigerant mass flow, COP, power required by the compressor, volumetric and isentropic efficiencies, heat exchanger global conductances and more. CICLO 1.0 has been run with data from heat pumps operating today, and the simulation results closely match the data reported from the operating facilities.

  20. Understanding search trees via statistical physics

    m-ary search tree model (where m stands for the number of branches of the search tree), an important problem for data storage in computer science, using a variety of statistical physics techniques that allow us to obtain exact asymptotic results.

  1. A bicriterion Steiner tree problem on graph

    Vujošević Mirko B.

    2003-01-01

    This paper presents a formulation of the bicriterion Steiner tree problem, stated as the task of finding a Steiner tree with maximal capacity and minimal length. It is treated as a lexicographic multicriteria problem: the bottleneck Steiner tree problem is solved first, and the next optimization problem is then stated as a classical minimum Steiner tree problem under a constraint on the capacity of the tree. The paper also presents some computational experiments with the multicriteria problem.

  2. Toward a Better Compression for DNA Sequences Using Huffman Encoding.

    Al-Okaily, Anas; Almarri, Badar; Al Yami, Sultan; Huang, Chun-Hsi

    2017-04-01

    Due to the significant amount of DNA data being generated by next-generation sequencing machines for genomes of lengths ranging from megabases to gigabases, there is an increasing need to compress such data for smaller storage and faster transmission. Different implementations of Huffman encoding incorporating the characteristics of DNA sequences prove to better compress DNA data. These implementations center on the concepts of selecting frequent repeats so as to force a skewed Huffman tree, as well as the construction of multiple Huffman trees when encoding. The implementations demonstrate improvements on the compression ratios for five genomes with lengths ranging from 5 to 50 Mbp, compared with the standard Huffman tree algorithm. The research hence suggests an improvement on all DNA sequence compression algorithms that use the conventional Huffman encoding. Accompanying software is publicly available (Al-Okaily, 2016).
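The baseline these implementations build on is a standard Huffman coder over the DNA alphabet; skewing the tree amounts to adding frequent repeats as extra symbols. A minimal sketch of the baseline (the input sequence is illustrative):

```python
# Baseline Huffman coder over DNA symbols. Frequent repeats would be
# added as extra symbols to skew the tree; here only the bases appear.
import heapq
from collections import Counter

def huffman_codes(text):
    freq = Counter(text)
    if len(freq) == 1:  # degenerate single-symbol input
        return {next(iter(freq)): "0"}
    # Each heap entry: (frequency, tie-breaker, {symbol: code-so-far})
    heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(sorted(freq.items()))]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

seq = "AACGTAACGAAA"
codes = huffman_codes(seq)
encoded = "".join(codes[b] for b in seq)
print(codes, len(encoded), "bits vs", 2 * len(seq), "for plain 2-bit coding")
```

Because A dominates this sequence, it receives a 1-bit code and the encoded length drops below the flat 2-bits-per-base baseline.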

  3. Crown traits of coniferous trees and their relation to shade tolerance can differ with leaf type: a biophysical demonstration using computed tomography scanning data.

    Dutilleul, Pierre; Han, Liwen; Valladares, Fernando; Messier, Christian

    2015-01-01

    Plant light interception and shade tolerance are intrinsically related in that they involve structural, morphological and physiological adaptations to manage light capture for photosynthetic utilization, in order to sustain survival, development and reproduction. At the scale of small-size trees, crown traits related to structural geometry of branching pattern and space occupancy through phyllotaxis can be accurately evaluated in 3D, using computed tomography (CT) scanning data. We demonstrate this by scrutinizing the crowns of 15 potted miniature conifers of different species or varieties, classified in two groups based on leaf type (10 needlelike, 5 scalelike); we also test whether mean values of crown traits measured from CT scanning data and correlations with a shade tolerance index (STI) differ between groups. Seven crown traits, including fractal dimensions (FD1: smaller scales, FD2: larger scales) and leaf areas, were evaluated for all 15 miniature conifers; an average silhouette-to-total-area ratio was also calculated for each of the 10 needlelike-leaf conifers. Between-group differences in mean values are significant (P conifers with scalelike leaves, which had lower STI and higher FD1 on average in our study; the positive correlation between STI and ĀD is significant (P < 0.05) for the scalelike-leaf group, and very moderate for the needlelike-leaf one. A contrasting physical attachment of the leaves to branches may explain part of the between-group differences. Our findings open new avenues for the understanding of fundamental plant growth processes; the information gained could be included in a multi-scale approach to tree crown modeling.

  4. Phylogenetic trees in bioinformatics

    Burr, Tom L [Los Alamos National Laboratory

    2008-01-01

    Genetic data is often used to infer evolutionary relationships among a collection of viruses, bacteria, animal or plant species, or other operational taxonomic units (OTU). A phylogenetic tree depicts such relationships and provides a visual representation of the estimated branching order of the OTUs. Tree estimation is unique for several reasons, including: the types of data used to represent each OTU; the use of probabilistic nucleotide substitution models; the inference goals involving both tree topology and branch length; and the huge number of possible trees for a given sample of a very modest number of OTUs, which implies that finding the best tree(s) to describe the genetic data for each OTU is computationally demanding. Bioinformatics is too large a field to review here. We focus on that aspect of bioinformatics that includes study of similarities in genetic data from multiple OTUs. Although research questions are diverse, a common underlying challenge is to estimate the evolutionary history of the OTUs. Therefore, this paper reviews the role of phylogenetic tree estimation in bioinformatics, available methods and software, and identifies areas for additional research and development.
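The "huge number of possible trees" grows as a double factorial: for n taxa there are (2n-5)!! distinct unrooted binary tree topologies (and (2n-3)!! rooted ones), which is the standard counting result behind the computational burden noted above.

```python
# Counting unrooted binary tree topologies on n labeled taxa: (2n-5)!!

def double_factorial(k):
    result = 1
    while k > 1:
        result *= k
        k -= 2
    return result

def unrooted_topologies(n_taxa):
    return double_factorial(2 * n_taxa - 5)

for n in (4, 10, 20):
    print(n, unrooted_topologies(n))
```

Already at 10 taxa there are over two million topologies, which is why exhaustive search over tree space is infeasible for all but the smallest samples.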

  5. Reconstructed and analyzed X-ray computed tomography data of investment-cast and additive-manufactured aluminum foam for visualizing ligament failure mechanisms and regions of contact during a compression test

    Kristoffer E. Matheson

    2018-02-01

    Three stochastic open-cell aluminum foam samples were incrementally compressed and imaged using X-ray Computed Tomography (CT). One of the samples was created using conventional investment casting methods; the other two were replicas of the same foam made using laser powder bed fusion. The reconstructed CT data were then examined in Paraview to identify and highlight the types of failure of individual ligaments. The accompanying sets of Paraview state files and STL files highlight the different ligament failure modes incrementally during compression for each foam. Ligament failure was classified as either "Fracture" (red) or "Collapse" (blue). Regions of neighboring ligaments that came into contact but were not originally touching were colored yellow. For further interpretation and discussion of the data, please refer to Matheson et al. (2017) [1].

  6. Advances in compressible turbulent mixing

    Dannevik, W.P.; Buckingham, A.C.; Leith, C.E.

    1992-01-01

    This volume includes some recent additions to original material prepared for the Princeton International Workshop on the Physics of Compressible Turbulent Mixing, held in 1988. Workshop participants were asked to emphasize the physics of the compressible mixing process rather than measurement techniques or computational methods. Actual experimental results and their meaning were given precedence over discussions of new diagnostic developments. Theoretical interpretations and understanding were stressed rather than the exposition of new analytical model developments or advances in numerical procedures. By design, compressibility influences on turbulent mixing were discussed--almost exclusively--from the perspective of supersonic flow field studies. The papers are arranged in three topical categories: Foundations, Vortical Domination, and Strongly Coupled Compressibility. The Foundations category is a collection of seminal studies that connect current study in compressible turbulent mixing with compressible, high-speed turbulent flow research that almost vanished about two decades ago. A number of contributions are included on flow instability initiation, evolution, and transition between the states of unstable flow onset through those descriptive of fully developed turbulence. The Vortical Domination category includes theoretical and experimental studies of coherent structures, vortex pairing, vortex-dynamics-influenced pressure focusing. In the Strongly Coupled Compressibility category the organizers included the high-speed turbulent flow investigations in which the interaction of shock waves could be considered an important source for production of new turbulence or for the enhancement of pre-existing turbulence. Individual papers are processed separately

  8. Parallelization of one image compression method: Wavelet Transform, Vector Quantization and Huffman Coding

    Moravie, Philippe

    1997-01-01

    Today, in the digitized satellite image domain, the need for high dimensions increases considerably. To transmit or store such images (more than 6000 by 6000 pixels), we need to reduce their data volume, and so we have to use real-time image compression techniques. The large amount of computation required by image compression algorithms prohibits the use of common sequential processors, in favor of parallel computers. The study presented here deals with the parallelization of a very efficient image compression scheme based on three techniques: Wavelet Transform (WT), Vector Quantization (VQ) and Entropic Coding (EC). First, we studied and implemented the parallelism of each algorithm, in order to determine the architectural characteristics needed for real-time image compression. Then, we defined eight parallel architectures: 3 for the Mallat algorithm (WT), 3 for Tree-Structured Vector Quantization (VQ) and 2 for Huffman Coding (EC). As our system has to be multi-purpose, we chose 3 global architectures from among the 3x3x2 systems available. Because, for technological reasons, real time is not reached in every case (for all the compression parameter combinations), we also defined and evaluated two algorithmic optimizations: fixed-point precision and merging entropic coding into vector quantization. As a result, we defined a new multi-purpose multi-SMIMD parallel machine able to compress digitized satellite images in real time. The question of the best-suited architecture for real-time image compression was answered by presenting 3 parallel machines, among which one is multi-purpose, embedded, and might be used for other applications on board. (author) [fr]

  9. Image quality (IQ) guided multispectral image compression

    Zheng, Yufeng; Chen, Genshe; Wang, Zhonghai; Blasch, Erik

    2016-05-01

    Image compression is necessary for data transportation, since it saves both transfer time and storage space. In this paper, we focus our discussion on lossy compression. There are many standard image formats and corresponding compression algorithms, for example, JPEG (DCT, discrete cosine transform), JPEG 2000 (DWT, discrete wavelet transform), BPG (better portable graphics) and TIFF (LZW, Lempel-Ziv-Welch). The image quality (IQ) of the decompressed image is measured by numerical metrics such as root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and Structural Similarity (SSIM) index. Given an image and a specified IQ, we investigate how to select a compression method and its parameters to achieve an expected compression. Our scenario consists of 3 steps. The first step is to compress a set of images of interest by varying parameters and compute their IQs for each compression method. The second step is to create several regression models per compression method after analyzing the IQ measurement versus compression parameter from a number of compressed images. The third step is to compress the given image with the specified IQ using the selected compression method (JPEG, JPEG2000, BPG, or TIFF) according to the regression models. The IQ may be specified by a compression ratio (e.g., 100), in which case we select the compression method with the highest IQ (SSIM or PSNR); or the IQ may be specified by an IQ metric (e.g., SSIM = 0.8, or PSNR = 50), in which case we select the compression method with the highest compression ratio. Our experiments on thermal (long-wave infrared) grayscale images showed very promising results.
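Two of the IQ metrics named above have simple closed forms: RMSE is the root of the mean squared error, and PSNR relates that error to the peak pixel value (255 for 8-bit images) on a log scale. A minimal sketch over flat pixel lists:

```python
# RMSE and PSNR between an original and a decompressed 8-bit image,
# both given as flat lists of pixel values.
import math

def rmse(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

def psnr(a, b, peak=255.0):
    e = rmse(a, b)
    return float("inf") if e == 0 else 20 * math.log10(peak / e)

orig = [52, 55, 61, 66, 70, 61, 64, 73]
comp = [50, 56, 60, 68, 69, 63, 64, 70]
print(round(rmse(orig, comp), 3), round(psnr(orig, comp), 2), "dB")
```

SSIM is more involved (it compares local luminance, contrast, and structure over windows) and is omitted here.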

  10. FRESCO: Referential compression of highly similar sequences.

    Wandelt, Sebastian; Leser, Ulf

    2013-01-01

    In many applications, sets of similar texts or sequences are of high importance. Prominent examples are revision histories of documents or genomic sequences. Modern high-throughput sequencing technologies are able to generate DNA sequences at an ever-increasing rate. In parallel to the decreasing experimental time and cost necessary to produce DNA sequences, computational requirements for analysis and storage of the sequences are steeply increasing. Compression is a key technology to deal with this challenge. Recently, referential compression schemes, storing only the differences between a to-be-compressed input and a known reference sequence, gained a lot of interest in this field. In this paper, we propose a general open-source framework to compress large amounts of biological sequence data called Framework for REferential Sequence COmpression (FRESCO). Our basic compression algorithm is shown to be one to two orders of magnitude faster than comparable related work, while achieving similar compression ratios. We also propose several techniques to further increase compression ratios, while still retaining the advantage in speed: 1) selecting a good reference sequence; and 2) rewriting a reference sequence to allow for better compression. In addition, we propose a new way of further boosting the compression ratios by applying referential compression to already referentially compressed files (second-order compression). This technique allows for compression ratios far beyond the state of the art, for instance, 4,000:1 and higher for human genomes. We evaluate our algorithms on a large data set from three different species (more than 1,000 genomes, more than 3 TB) and on a collection of versions of Wikipedia pages. Our results show that real-time compression of highly similar sequences at high compression ratios is possible on modern hardware.
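The core idea of referential compression, storing only differences against a known reference, can be shown with a toy encoder: the input becomes (reference-offset, match-length) pairs plus literal characters where no match is found. The greedy longest-match search below is purely illustrative; real tools such as the one above use indexed structures rather than a linear scan.

```python
# Toy referential compressor: encode `text` against `reference` as
# (offset, length) copy operations plus literal characters.

def ref_compress(reference, text):
    out, i = [], 0
    while i < len(text):
        best_pos, best_len = -1, 0
        for j in range(len(reference)):
            k = 0
            while (j + k < len(reference) and i + k < len(text)
                   and reference[j + k] == text[i + k]):
                k += 1
            if k > best_len:
                best_pos, best_len = j, k
        if best_len >= 2:
            out.append((best_pos, best_len))
            i += best_len
        else:
            out.append(text[i])
            i += 1
    return out

def ref_decompress(reference, ops):
    parts = []
    for op in ops:
        if isinstance(op, tuple):
            pos, length = op
            parts.append(reference[pos:pos + length])
        else:
            parts.append(op)
    return "".join(parts)

ref = "ACGTACGTTTGA"
seq = "ACGTACGATTGA"
ops = ref_compress(ref, seq)
print(ops)
```

For highly similar inputs, almost the whole sequence collapses into a few copy operations, which is why compression ratios in the thousands become possible.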

  11. Efficient Lossy Compression for Compressive Sensing Acquisition of Images in Compressive Sensing Imaging Systems

    Xiangwei Li

    2014-12-01

    Full Text Available Compressive Sensing Imaging (CSI is a new framework for image acquisition, which enables the simultaneous acquisition and compression of a scene. Since the characteristics of Compressive Sensing (CS acquisition are very different from traditional image acquisition, the general image compression solution may not work well. In this paper, we propose an efficient lossy compression solution for CS acquisition of images by considering the distinctive features of the CSI. First, we design an adaptive compressive sensing acquisition method for images according to the sampling rate, which could achieve better CS reconstruction quality for the acquired image. Second, we develop a universal quantization for the obtained CS measurements from CS acquisition without knowing any a priori information about the captured image. Finally, we apply these two methods in the CSI system for efficient lossy compression of CS acquisition. Simulation results demonstrate that the proposed solution improves the rate-distortion performance by 0.4~2 dB comparing with current state-of-the-art, while maintaining a low computational complexity.

  12. Parallel Recursive State Compression for Free

    Laarman, Alfons; van de Pol, Jan Cornelis; Weber, M.

    2011-01-01

    This paper focuses on reducing memory usage in enumerative model checking, while maintaining the multi-core scalability obtained in earlier work. We present a tree-based multi-core compression method, which works by leveraging sharing among sub-vectors of state vectors. An algorithmic analysis of
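The sharing among sub-vectors that the abstract leverages can be sketched with hash-consing: each state vector is folded into a binary tree of pairs, and identical sub-vectors are stored only once in an interning table. The table layout below is an illustrative assumption, not the paper's data structure.

```python
# Tree compression of state vectors by interning (hash-consing) pairs:
# identical sub-vectors across states share one table entry.

table = {}   # structural key -> node id
nodes = []   # node id -> ("leaf", value) or ("node", left_id, right_id)

def intern(key):
    """Return the id for key, creating an entry only if unseen."""
    if key not in table:
        table[key] = len(nodes)
        nodes.append(key)
    return table[key]

def compress(vector):
    """Fold a state vector (length a power of two) into one root node id."""
    layer = [intern(("leaf", v)) for v in vector]
    while len(layer) > 1:
        layer = [intern(("node", layer[i], layer[i + 1]))
                 for i in range(0, len(layer), 2)]
    return layer[0]

def expand(node_id):
    """Recover the flat vector from a root node id."""
    entry = nodes[node_id]
    if entry[0] == "leaf":
        return [entry[1]]
    return expand(entry[1]) + expand(entry[2])

# Two states sharing a 4-slot suffix share the subtree that encodes it:
a = compress((1, 2, 3, 4, 9, 9, 9, 9))
b = compress((5, 6, 3, 4, 9, 9, 9, 9))
print(len(nodes))
```

Without sharing, two 8-leaf trees would need 30 table entries; here the common suffix and repeated leaves bring the total down to 16, and the saving grows as more similar states are stored.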

  13. Tree Nut Allergies

    Learn about tree nut allergy, how ... a Tree Nut Label card. Allergic Reactions to Tree Nuts: Tree nuts can cause a severe and ...

  14. Mathematical foundations of event trees

    Papazoglou, Ioannis A.

    1998-01-01

    A mathematical foundation from first principles of event trees is presented. The main objective of this formulation is to offer a formal basis for developing automated computer assisted construction techniques for event trees. The mathematical theory of event trees is based on the correspondence between the paths of the tree and the elements of the outcome space of a joint event. The concept of a basic cylinder set is introduced to describe joint event outcomes conditional on specific outcomes of basic events or unconditional on the outcome of basic events. The concept of outcome space partition is used to describe the minimum amount of information intended to be preserved by the event tree representation. These concepts form the basis for an algorithm for systematic search for and generation of the most compact (reduced) form of an event tree consistent with the minimum amount of information the tree should preserve. This mathematical foundation allows for the development of techniques for automated generation of event trees corresponding to joint events which are formally described through other types of graphical models. Such a technique has been developed for complex systems described by functional blocks and it is reported elsewhere. On the quantification issue of event trees, a formal definition of a probability space corresponding to the event tree outcomes is provided. Finally, a short discussion is offered on the relationship of the presented mathematical theory with the more general use of event trees in reliability analysis of dynamic systems

  15. ERA: Efficient serial and parallel suffix tree construction for very long strings

    Mansour, Essam; Allam, Amin; Skiadopoulos, Spiros G.; Kalnis, Panos

    2011-01-01

    The suffix tree is a data structure for indexing strings. It is used in a variety of applications such as bioinformatics, time series analysis, clustering, text editing and data compression. However, when the string and the resulting suffix tree

  16. Colourings of (k-r,k)-trees

    M. Borowiecki

    2017-01-01

    Trees are generalized to a special kind of higher-dimensional complexes known as (j,k)-trees ([L. W. Beineke, R. E. Pippert, On the structure of (m,n)-trees, Proc. 8th S-E Conf. Combinatorics, Graph Theory and Computing, 1977, 75-80]), which are a natural extension of k-trees for j=k-1. The aim of this paper is to study (k-r,k)-trees ([H. P. Patil, Studies on k-trees and some related topics, PhD Thesis, University of Warsaw, Poland, 1984]), which are a generalization of k-trees (or usual trees when k=1). We obtain the chromatic polynomial of (k-r,k)-trees and show that any two (k-r,k)-trees of the same order are chromatically equivalent. However, if r

  17. An efficient compression scheme for bitmap indices

    Wu, Kesheng; Otoo, Ekow J.; Shoshani, Arie

    2004-04-13

    When using an out-of-core indexing method to answer a query, it is generally assumed that the I/O cost dominates the overall query response time. Because of this, most research on indexing methods concentrates on reducing the sizes of indices. For bitmap indices, compression has been used for this purpose. However, in most cases, operations on these compressed bitmaps, mostly bitwise logical operations such as AND, OR, and NOT, spend more time in CPU than in I/O. To speed up these operations, a number of specialized bitmap compression schemes have been developed, the best known of which is the byte-aligned bitmap code (BBC). They are usually faster in performing logical operations than general-purpose compression schemes, but the time spent in CPU still dominates the total query response time. To reduce the query response time, we designed a CPU-friendly scheme named the word-aligned hybrid (WAH) code. In this paper, we prove that the sizes of WAH-compressed bitmap indices are about two words per row for a large range of attributes. This size is smaller than typical sizes of commonly used indices, such as a B-tree. Therefore, WAH-compressed indices are appropriate not only for low-cardinality attributes but also for high-cardinality attributes. In the worst case, the time to operate on compressed bitmaps is proportional to the total size of the bitmaps involved. The total size of the bitmaps required to answer a query on one attribute is proportional to the number of hits. These indicate that WAH-compressed bitmap indices are optimal. To verify their effectiveness, we generated bitmap indices for four different datasets and measured the response time of many range queries. Tests confirm that the sizes of compressed bitmap indices are indeed smaller than B-tree indices, and query processing with WAH-compressed indices is much faster than with BBC-compressed indices, projection indices and B-tree indices. In addition, we also verified that the average query response time
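The word-aligned idea can be sketched as follows: the bitmap is cut into 31-bit groups; runs of all-zero or all-one groups collapse into a single "fill" word, and every other group becomes a literal word. The word layout below is a simplified illustration, not the exact WAH bit format.

```python
# Simplified WAH-style encoder: 31-bit groups become literal words;
# runs of uniform groups collapse into counted fill words.

GROUP = 31

def wah_encode(bits):
    words = []
    for i in range(0, len(bits), GROUP):
        group = bits[i:i + GROUP]
        if len(group) == GROUP and len(set(group)) == 1:
            bit = group[0]
            # extend the previous fill word if it has the same bit value
            if words and words[-1][0] == "fill" and words[-1][1] == bit:
                words[-1] = ("fill", bit, words[-1][2] + 1)
                continue
            words.append(("fill", bit, 1))
        else:
            words.append(("literal", tuple(group)))
    return words

def wah_decode(words):
    bits = []
    for w in words:
        if w[0] == "fill":
            bits.extend([w[1]] * (GROUP * w[2]))
        else:
            bits.extend(w[1])
    return bits

bitmap = [0] * 93 + [1, 0, 1] + [1] * 62
enc = wah_encode(bitmap)
print(len(bitmap), "bits ->", len(enc), "words")
```

Because logical operations can act on whole fill words at once, AND/OR/NOT over such encodings touch far fewer words than the raw bitmaps, which is the source of WAH's CPU friendliness.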

  18. DNABIT Compress – Genome compression algorithm

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-01

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences based on a novel algorithm of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our ...
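The baseline such bit-assignment schemes improve on is simple: plain DNA draws from a four-letter alphabet, so each base fits in two bits, a 4:1 reduction versus one byte per character. A minimal pack/unpack sketch (the sentinel-bit trick is an implementation convenience of this sketch, not part of DNABIT Compress):

```python
# Plain 2-bit packing of a DNA string into bytes and back.

CODE = {"A": 0, "C": 1, "G": 2, "T": 3}
BASE = {v: k for k, v in CODE.items()}

def pack(seq):
    value = 1  # leading sentinel bit preserves leading 'A' (code 0) bases
    for b in seq:
        value = (value << 2) | CODE[b]
    return value.to_bytes((value.bit_length() + 7) // 8, "big")

def unpack(data):
    value = int.from_bytes(data, "big")
    out = []
    while value > 1:
        out.append(BASE[value & 3])
        value >>= 2
    return "".join(reversed(out))

seq = "ACGTACGTTTGACC"
packed = pack(seq)
print(len(seq), "bytes as text ->", len(packed), "bytes packed")
```

Schemes like the one above go further by giving shorter bit strings to frequent segments of bases rather than a flat two bits per base.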

  19. Flowering Trees

    medium-sized handsome tree with a straight bole that branches at the top. Leaves are once pinnate, with two to three pairs of leaflets. Young parts of the tree are velvety. Inflorescence is a branched raceme borne at the branch ends. Flowers are large, white, attractive, and fragrant. Corolla is funnel-shaped. Fruit is an ...

  20. Flowering Trees

    Cassia siamia Lamk. (Siamese tree senna) of Caesalpiniaceae is a small or medium-sized handsome tree. Leaves are alternate, pinnately compound and glandular, up to 18 cm long with 8–12 pairs of leaflets. Inflorescence is axillary or terminal and branched. Flowering lasts for a long period from March to February. Fruit is ...

  1. Flowering Trees

    Flowering Trees. Cerbera manghas L. (SEA MANGO) of Apocynaceae is a medium-sized evergreen coastal tree with milky latex. The bark is grey-brown, thick and ... Fruit is large (5–10 cm long), oval, containing two flattened seeds, and resembles a mango, hence the name Mangas or Manghas. Leaves and fruits contain ...

  2. Flowering Trees

    Flowering Trees. Gliricidia sepium (Jacq.) Kunth ex Walp. (Quickstick) of Fabaceae is a small deciduous tree with pinnately compound leaves. Flowers are produced in large numbers in early summer on terminal racemes. They are attractive, pinkish-white and typically bean-like. Fruit is a few-seeded flat pod.

  3. Flowering Trees

    Flowering Trees. Acrocarpus fraxinifolius Wight & Arn. (PINK CEDAR, AUSTRALIAN ASH) of Caesalpiniaceae is a lofty unarmed deciduous native tree that attains a height of 30–60 m with buttresses. Bark is thin and light grey. Leaves are compound and bright red when young. Flowers in dense, erect, axillary racemes.

  4. Talking Trees

    Tolman, Marvin

    2005-01-01

    Students love outdoor activities and will love them even more when they build confidence in their tree identification and measurement skills. Through these activities, students will learn to identify the major characteristics of trees and discover how the pace--a nonstandard measuring unit--can be used to estimate not only distances but also the…

  5. Drawing Trees

    Halkjær From, Andreas; Schlichtkrull, Anders; Villadsen, Jørgen

    2018-01-01

    We formally prove in Isabelle/HOL two properties of an algorithm for laying out trees visually. The first property states that removing layout annotations recovers the original tree. The second property states that nodes are placed at least a unit of distance apart. We have yet to formalize three...
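A much simpler, unverified layout satisfying the second property can be sketched as follows (hypothetical code, not the Isabelle/HOL algorithm): assigning each node its preorder visit index as x and its depth as y gives every node a distinct integer x-coordinate, hence at least a unit of distance between nodes.

```python
def layout(tree):
    """Place each node at (preorder index, depth); tree = (label, children)."""
    pos = {}
    next_x = [0]  # mutable counter shared across recursive calls

    def walk(node, depth):
        label, children = node
        pos[label] = (next_x[0], depth)
        next_x[0] += 1
        for child in children:
            walk(child, depth + 1)

    walk(tree, 0)
    return pos
```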

  6. Flowering Trees

    Srimath

    Grevillea robusta A. Cunn. ex R. Br. (Silver Oak) of Proteaceae is a daintily lacy ornamental tree while young, growing into a mighty tree (45 m). Young shoots are silvery grey and the leaves are fern-like. Flowers are golden-yellow in one-sided racemes (10 cm). Fruit is a boat-shaped, woody follicle.

  7. Whole-body low-dose computed tomography in multiple myeloma staging: Superior diagnostic performance in the detection of bone lesions, vertebral compression fractures, rib fractures and extraskeletal findings compared to radiography with similar radiation exposure.

    Lambert, Lukas; Ourednicek, Petr; Meckova, Zuzana; Gavelli, Giampaolo; Straub, Jan; Spicka, Ivan

    2017-04-01

    The primary objective of the present prospective study was to compare the diagnostic performance of conventional radiography (CR) and whole-body low-dose computed tomography (WBLDCT) with a comparable radiation dose reconstructed using a hybrid iterative reconstruction technique, in terms of the detection of bone lesions, skeletal fractures, vertebral compressions and extraskeletal findings. The secondary objective was to evaluate lesion attenuation in relation to its size. A total of 74 patients underwent same-day skeletal survey by CR and WBLDCT. In CR and WBLDCT, two readers assessed the number of osteolytic lesions at each region and the stage according to the International Myeloma Working Group (IMWG) criteria. A single reader additionally assessed extraskeletal findings and their significance, the number of vertebral compressions and bone fractures. The radiation exposure was 2.7±0.9 mSv for WBLDCT and 2.5±0.9 mSv for CR (P=0.054). CR detected bone involvement in 127 out of 486 regions (26%). WBLDCT detected more rib fractures than CR (188 vs. 47) as well as additional fractures, vertebral compressions and extraskeletal findings, which resulted in up- or downstaging in 24% of patients according to the IMWG criteria. The attenuation of osteolytic lesions can be measured with avoidance of the partial volume effect.

  8. Compressed optimization of device architectures

    Frees, Adam [Univ. of Wisconsin, Madison, WI (United States). Dept. of Physics; Gamble, John King [Microsoft Research, Redmond, WA (United States). Quantum Architectures and Computation Group; Ward, Daniel Robert [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States). Center for Computing Research; Blume-Kohout, Robin J [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States). Center for Computing Research; Eriksson, M. A. [Univ. of Wisconsin, Madison, WI (United States). Dept. of Physics; Friesen, Mark [Univ. of Wisconsin, Madison, WI (United States). Dept. of Physics; Coppersmith, Susan N. [Univ. of Wisconsin, Madison, WI (United States). Dept. of Physics

    2014-09-01

    Recent advances in nanotechnology have enabled researchers to control individual quantum mechanical objects with unprecedented accuracy, opening the door for both quantum and extreme-scale conventional computation applications. As these devices become more complex, designing for facility of control becomes a daunting and computationally infeasible task. Here, motivated by ideas from compressed sensing, we introduce a protocol for the Compressed Optimization of Device Architectures (CODA). It leads naturally to a metric for benchmarking and optimizing device designs, as well as an automatic device control protocol that reduces the operational complexity required to achieve a particular output. Because this protocol is both experimentally and computationally efficient, it is readily extensible to large systems. For this paper, we demonstrate both the benchmarking and device control protocol components of CODA through examples of realistic simulations of electrostatic quantum dot devices, which are currently being developed experimentally for quantum computation.

  9. DNABIT Compress – Genome compression algorithm

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-01

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, “DNABIT Compress”, for DNA sequences based on a novel algorithm of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for larger genomes. Significantly better compression results show that the “DNABIT Compress” algorithm is the best among existing compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves the running time of all previous DNA compression programs. Assigning binary bits (unique BIT CODEs) to fragments of DNA sequence (exact repeats, reverse repeats) is also a unique concept introduced in this algorithm for the first time in DNA compression. This proposed new algorithm achieves a compression ratio as low as 1.58 bits/base, whereas the best existing methods could not achieve a ratio below 1.72 bits/base. PMID:21383923
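The paper's specific BIT CODEs are not reproduced in the abstract; as a baseline for comparison, a plain fixed-width encoding assigns 2 bits per base (the 1.58 bits/base figure above improves on this by exploiting repeats). A minimal sketch of such packing:

```python
# Illustrative fixed-width packing only; the bit codes below are NOT
# the DNABIT Compress codes, just the usual 2-bit alphabet encoding.
CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}

def pack(seq):
    """Pack a DNA string into bytes, four bases per byte."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        chunk = seq[i:i + 4]
        byte = 0
        for base in chunk:
            byte = (byte << 2) | CODE[base]
        byte <<= 2 * (4 - len(chunk))  # left-pad the final partial byte
        out.append(byte)
    return bytes(out)
```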

  10. Dual compression is not an uncommon type of iliac vein compression syndrome.

    Shi, Wan-Yin; Gu, Jian-Ping; Liu, Chang-Jian; Lou, Wen-Sheng; He, Xu

    2017-09-01

    Typical iliac vein compression syndrome (IVCS) is characterized by compression of the left common iliac vein (LCIV) by the overlying right common iliac artery (RCIA). We described an underestimated type of IVCS with dual compression by the right and left common iliac arteries (LCIA) simultaneously. Thirty-one patients with IVCS were retrospectively included. All patients received trans-catheter venography and computed tomography (CT) examinations for diagnosing and evaluating IVCS. Late venography and reconstructed CT were used for evaluating the anatomical relationship among the LCIV, RCIA and LCIA. Imaging manifestations as well as demographic data were collected and evaluated by two experienced radiologists. Sole and dual compression were found in 32.3% (n = 10) and 67.7% (n = 21) of the 31 patients, respectively. No statistical differences existed between them in terms of age, gender, LCIV diameter at the maximum compression point, pressure gradient across the stenosis, and the percentage of compression level. On CT and venography, sole compression commonly presented as a longitudinal compression at the orifice of the LCIV, while dual compression usually presented as two types: one had a lengthy stenosis along the upper side of the LCIV, and the other was manifested by a longitudinal compression near the orifice of the external iliac vein. The presence of dual compression was significantly correlated with a tortuous LCIA (p = 0.006). The left common iliac vein can thus present with dual compression. This type of compression has typical manifestations on late venography and CT.

  11. Spinal cord compression at C1-C2 level due to tophaceous gout (magnetic resonance imaging and computed tomography cisternographic findings)

    Vries, C. de; Slegte, R.G.M. de; Valk, J.

    1987-01-01

    The authors report a case of spinal cord compression at the level of the foramen magnum due to tophaceous gout in a patient with no clinical history of gout. The presence of a foramen magnum mass due to urate crystal deposition in a patient without clinical history of gout or additional bone abnormalities has, to the best of the authors' knowledge, never been described before. In the case presented here, no bone changes were encountered with CT or MRI. Neither the presence of small high-density punctuations on the CT examination nor the signal intensities of the mass on T1- and T2-weighted images led to the radiological diagnosis of tophaceous gout. The foramen magnum mass and the spinal cord compression were, however, beautifully depicted by both modalities. 14 refs.; 2 figs

  12. Crown traits of coniferous trees and their relation to shade tolerance can differ with leaf type: a biophysical demonstration using computed tomography scanning data

    Dutilleul, Pierre; Han, Liwen; Valladares, Fernando; Messier, Christian

    2015-01-01

    Plant light interception and shade tolerance are intrinsically related in that they involve structural, morphological and physiological adaptations to manage light capture for photosynthetic utilization, in order to sustain survival, development and reproduction. At the scale of small-size trees, crown traits related to structural geometry of branching pattern and space occupancy through phyllotaxis can be accurately evaluated in 3D, using computed tomography (CT) scanning data. We demonstrate this by scrutinizing the crowns of 15 potted miniature conifers of different species or varieties, classified in two groups based on leaf type (10 needlelike, 5 scalelike); we also test whether mean values of crown traits measured from CT scanning data and correlations with a shade tolerance index (STI) differ between groups. Seven crown traits, including fractal dimensions (FD1: smaller scales, FD2: larger scales) and leaf areas, were evaluated for all 15 miniature conifers; an average silhouette-to-total-area ratio was also calculated for each of the 10 needlelike-leaf conifers. Between-group differences in mean values are significant (P < 0.05) for STI, FD1, FD2, and the average leaf area displayed (ĀD). Between-group differences in sign and strength of correlations are observed. For example, the correlation between STI and FD1 is negative and significant (P < 0.10) for the needlelike-leaf group, but is positive and significant (P < 0.05) for the miniature conifers with scalelike leaves, which had lower STI and higher FD1 on average in our study; the positive correlation between STI and ĀD is significant (P < 0.05) for the scalelike-leaf group, and very moderate for the needlelike-leaf one. A contrasting physical attachment of the leaves to branches may explain part of the between-group differences. Our findings open new avenues for the understanding of fundamental plant growth processes; the information gained could be included in a multi-scale approach to tree crown

  13. Crown traits of coniferous trees and their relation to shade tolerance can differ with leaf type: A biophysical demonstration using computed tomography scanning data

    Pierre eDutilleul

    2015-03-01

    Full Text Available Plant light interception and shade tolerance are intrinsically related in that they involve structural, morphological and physiological adaptations to manage light capture for photosynthetic utilization, in order to sustain survival, development and reproduction. At the scale of small-size trees, crown traits related to structural geometry of branching pattern and space occupancy through phyllotaxis can be accurately evaluated in 3D, using computed tomography (CT) scanning data. We demonstrate this by scrutinizing the crowns of 15 potted miniature conifers of different species or varieties, classified in two groups based on leaf type (10 needlelike, 5 scalelike); we also test whether mean values of crown traits measured from CT scanning data and correlations with a shade tolerance index (STI) differ between groups. Seven crown traits, including fractal dimensions (FD1: smaller scales, FD2: larger scales) and leaf areas, were evaluated for all 15 miniature conifers; an average silhouette-to-total-area ratio was also calculated for each of the 10 needlelike-leaf conifers. Between-group differences in mean values are significant (P < 0.05) for STI, FD1, FD2, and average leaf area displayed (A_D). Between-group differences in sign and strength of correlations are observed. For example, the correlation between STI and FD1 is negative and significant (P < 0.10) for the needlelike-leaf group, but is positive and significant (P < 0.05) for the miniature conifers with scalelike leaves, which had lower STI and higher FD1 on average in our study; the positive correlation between STI and A_D is significant (P < 0.05) for the scalelike-leaf group, and very moderate for the needlelike-leaf one. A contrasting physical attachment of leaves to branches may explain part of the between-group differences. 
Our findings open new avenues for the understanding of fundamental plant growth processes; the information gained could be included in a multi-scale approach to tree crown

  14. Lossless Compression of Digital Images

    Martins, Bo

    Presently, tree coders are the best bi-level image coders. The current ISO standard, JBIG, is a good example. By organising code length calculations properly, a vast number of possible models (trees) can be investigated within reasonable time prior to generating code. A number of general-purpose coders … version that is substantially faster than its precursors and brings it close to the multi-pass coders in compression performance. Handprinted characters are of unequal complexity; recent work by Singer and Tishby demonstrates that utilizing the physiological process of writing one can synthesize cursive … The feature vector of a bitmap initially constitutes a lossy representation of the contour(s) of the bitmap. The initial feature space is usually too large but can be reduced automatically by use of a predictive code length or predictive error criterion.

  15. Radiologic image compression -- A review

    Wong, S.; Huang, H.K.; Zaremba, L.; Gooden, D.

    1995-01-01

    The objective of radiologic image compression is to reduce the data volume of and to achieve a low bit rate in the digital representation of radiologic images without perceived loss of image quality. However, the demand for transmission bandwidth and storage space in the digital radiology environment, especially picture archiving and communication systems (PACS) and teleradiology, and the proliferating use of various imaging modalities, such as magnetic resonance imaging, computed tomography, ultrasonography, nuclear medicine, computed radiography, and digital subtraction angiography, continue to outstrip the capabilities of existing technologies. The availability of lossy coding techniques for clinical diagnoses further implicates many complex legal and regulatory issues. This paper reviews the recent progress of lossless and lossy radiologic image compression and presents the legal challenges of using lossy compression of medical records. To do so, the authors first describe the fundamental concepts of radiologic imaging and digitization. Then, the authors examine current compression technology in the field of medical imaging and discuss important regulatory policies and legal questions facing the use of compression in this field. The authors conclude with a summary of future challenges and research directions. 170 refs

  16. On Tree-Based Phylogenetic Networks.

    Zhang, Louxin

    2016-07-01

    A large class of phylogenetic networks can be obtained from trees by the addition of horizontal edges between the tree edges. These networks are called tree-based networks. We present a simple necessary and sufficient condition for tree-based networks and prove that for any number of taxa there exists a universal tree-based network that contains as its base every phylogenetic tree on the same set of taxa. This answers two problems posed recently by Francis and Steel. A byproduct is a computer program for generating random binary phylogenetic networks under the uniform distribution model.
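The construction in the first sentence can be made concrete: to add a horizontal edge, subdivide two tree edges and join the new subdivision nodes. A minimal sketch (hypothetical representation; edges are parent-child pairs, and the node names h0/h1 for the subdivision points are assumptions for illustration):

```python
import random

def add_horizontal_edge(edges, rng=random):
    """Subdivide two edges of a rooted tree and join the new nodes."""
    e1, e2 = rng.sample(edges, 2)
    net = [e for e in edges if e not in (e1, e2)]
    for k, (u, v) in enumerate((e1, e2)):
        mid = f"h{k}"                 # subdivision node on edge (u, v)
        net += [(u, mid), (mid, v)]
    net.append(("h0", "h1"))          # the added horizontal edge
    return net
```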

  17. Electron Tree

    Appelt, Ane L; Rønde, Heidi S

    2013-01-01

    The photo shows a close-up of a Lichtenberg figure – popularly called an “electron tree” – produced in a cylinder of polymethyl methacrylate (PMMA). Electron trees are created by irradiating a suitable insulating material, in this case PMMA, with an intense high-energy electron beam. Upon discharge, during dielectric breakdown in the material, the electrons generate branching chains of fractures on leaving the PMMA, producing the tree pattern seen. To be able to create electron trees with a clinical linear accelerator, one needs to access the primary electron beam used for photon treatments. We appropriated a linac that was being decommissioned in our department and dismantled the head to circumvent the target and ion chambers. This is one of 24 electron trees produced before we had to stop the fun and allow the rest of the accelerator to be disassembled.

  18. Flowering Trees

    Srimath

    shaped corolla. Fruit is large, ellipsoidal, green with a hard and smooth shell containing numerous flattened seeds, which are embedded in fleshy pulp. Calabash tree is commonly grown in the tropical gardens of the world as a botanical oddity.

  19. Purely Functional Compressed Bit Vectors with Applications and Implementations

    Kaasinen, Joel

    2011-01-01

    The study of compressed data structures strives to represent information on a computer concisely — using as little space as possible. Compressed bit vectors are the simplest compressed data structure. They are used as a basis for more complex data structures with applications in, for example, computational biology. Functional programming is a programming paradigm that represents computation using functions without side-effects (such as mutation). Data structures that are representable in...
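As a flavor of why compressed bit vectors can still support queries, here is a minimal run-length sketch with a rank query (illustrative only; practical compressed bit vectors use far more refined representations than this):

```python
def compress_runs(bits):
    """Run-length encode a bit list into [bit, length] pairs."""
    runs = []
    for b in bits:
        if runs and runs[-1][0] == b:
            runs[-1][1] += 1
        else:
            runs.append([b, 1])
    return runs

def rank1(runs, i):
    """Number of 1 bits among the first i bits, scanning the runs."""
    count = 0
    for b, length in runs:
        take = min(length, i)
        count += b * take
        i -= take
        if i <= 0:
            break
    return count
```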

  20. The transposition distance for phylogenetic trees

    Rossello, Francesc; Valiente, Gabriel

    2006-01-01

    The search for similarity and dissimilarity measures on phylogenetic trees has been motivated by the computation of consensus trees, the search by similarity in phylogenetic databases, and the assessment of clustering results in bioinformatics. The transposition distance for fully resolved phylogenetic trees is a recent addition to the extensive collection of available metrics for comparing phylogenetic trees. In this paper, we generalize the transposition distance from fully resolved to arbi...

  1. Compressibility of the protein-water interface

    Persson, Filip; Halle, Bertil

    2018-06-01

    The compressibility of a protein relates to its stability, flexibility, and hydrophobic interactions, but the measurement, interpretation, and computation of this important thermodynamic parameter present technical and conceptual challenges. Here, we present a theoretical analysis of protein compressibility and apply it to molecular dynamics simulations of four globular proteins. Using additively weighted Voronoi tessellation, we decompose the solution compressibility into contributions from the protein and its hydration shells. We find that positively cross-correlated protein-water volume fluctuations account for more than half of the protein compressibility that governs the protein's pressure response, while the self correlations correspond to small (˜0.7%) fluctuations of the protein volume. The self compressibility is nearly the same as for ice, whereas the total protein compressibility, including cross correlations, is ˜45% of the bulk-water value. Taking the inhomogeneous solvent density into account, we decompose the experimentally accessible protein partial compressibility into intrinsic, hydration, and molecular exchange contributions and show how they can be computed with good statistical accuracy despite the dominant bulk-water contribution. The exchange contribution describes how the protein solution responds to an applied pressure by redistributing water molecules from lower to higher density; it is negligibly small for native proteins, but potentially important for non-native states. Because the hydration shell is an open system, the conventional closed-system compressibility definitions yield a pseudo-compressibility. We define an intrinsic shell compressibility, unaffected by occupation number fluctuations, and show that it approaches the bulk-water value exponentially with a decay "length" of one shell, less than the bulk-water compressibility correlation length. 
In the first hydration shell, the intrinsic compressibility is 25%-30% lower than in
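The volume-fluctuation decomposition described here rests on the standard statistical-mechanical fluctuation relation for the isothermal compressibility (a textbook identity, not stated in the abstract):

```latex
\kappa_T \;=\; -\frac{1}{\langle V \rangle}
  \left( \frac{\partial \langle V \rangle}{\partial p} \right)_T
\;=\; \frac{\langle \delta V^2 \rangle}{k_B T \, \langle V \rangle},
\qquad \delta V = V - \langle V \rangle .
```

Decomposing V into protein and hydration-shell parts then yields self terms of the form ⟨δV_p²⟩ and cross terms ⟨δV_p δV_w⟩, matching the self and cross correlations discussed above.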

  2. Compressibility of the protein-water interface.

    Persson, Filip; Halle, Bertil

    2018-06-07

    The compressibility of a protein relates to its stability, flexibility, and hydrophobic interactions, but the measurement, interpretation, and computation of this important thermodynamic parameter present technical and conceptual challenges. Here, we present a theoretical analysis of protein compressibility and apply it to molecular dynamics simulations of four globular proteins. Using additively weighted Voronoi tessellation, we decompose the solution compressibility into contributions from the protein and its hydration shells. We find that positively cross-correlated protein-water volume fluctuations account for more than half of the protein compressibility that governs the protein's pressure response, while the self correlations correspond to small (∼0.7%) fluctuations of the protein volume. The self compressibility is nearly the same as for ice, whereas the total protein compressibility, including cross correlations, is ∼45% of the bulk-water value. Taking the inhomogeneous solvent density into account, we decompose the experimentally accessible protein partial compressibility into intrinsic, hydration, and molecular exchange contributions and show how they can be computed with good statistical accuracy despite the dominant bulk-water contribution. The exchange contribution describes how the protein solution responds to an applied pressure by redistributing water molecules from lower to higher density; it is negligibly small for native proteins, but potentially important for non-native states. Because the hydration shell is an open system, the conventional closed-system compressibility definitions yield a pseudo-compressibility. We define an intrinsic shell compressibility, unaffected by occupation number fluctuations, and show that it approaches the bulk-water value exponentially with a decay "length" of one shell, less than the bulk-water compressibility correlation length. In the first hydration shell, the intrinsic compressibility is 25%-30% lower than

  3. Fast Compressive Tracking.

    Zhang, Kaihua; Zhang, Lei; Yang, Ming-Hsuan

    2014-10-01

    It is a challenging task to develop effective and efficient appearance models for robust object tracking due to factors such as pose variation, illumination change, occlusion, and motion blur. Existing online tracking algorithms often update models with samples from observations in recent frames. Although much success has been demonstrated, numerous issues remain to be addressed. First, while these adaptive appearance models are data-dependent, there does not exist a sufficient amount of data for online algorithms to learn from at the outset. Second, online tracking algorithms often encounter drift problems. As a result of self-taught learning, misaligned samples are likely to be added and degrade the appearance models. In this paper, we propose a simple yet effective and efficient tracking algorithm with an appearance model based on features extracted from a multiscale image feature space with a data-independent basis. The proposed appearance model employs non-adaptive random projections that preserve the structure of the image feature space of objects. A very sparse measurement matrix is constructed to efficiently extract the features for the appearance model. We compress sample images of the foreground target and the background using the same sparse measurement matrix. The tracking task is formulated as a binary classification via a naive Bayes classifier with online update in the compressed domain. A coarse-to-fine search strategy is adopted to further reduce the computational complexity in the detection procedure. The proposed compressive tracking algorithm runs in real-time and performs favorably against state-of-the-art methods on challenging sequences in terms of efficiency, accuracy and robustness.
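The core compression step, projecting high-dimensional features through a very sparse data-independent matrix, can be sketched as follows (illustrative only; the entry distribution and density here are assumptions, not the paper's exact construction):

```python
import numpy as np

def sparse_measurement_matrix(m, n, density=0.1, seed=0):
    """Random matrix with entries in {-1, 0, +1}; most entries are zero."""
    rng = np.random.default_rng(seed)
    mask = rng.random((m, n)) < density      # keep ~density of the entries
    signs = rng.choice([-1.0, 1.0], size=(m, n))
    return mask * signs

def compress(features, R):
    """Project features into the low-dimensional compressed domain."""
    return R @ features
```

Because R is fixed in advance and very sparse, the same projection can be applied cheaply to foreground and background samples alike.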

  4. Rectal perforation by compressed air.

    Park, Young Jin

    2017-07-01

    As the use of compressed air in industrial work has increased, so has the risk of associated pneumatic injury from its improper use. However, damage to the large intestine caused by compressed air is uncommon. Herein a case of pneumatic rupture of the rectum is described. The patient was admitted to the Emergency Room complaining of abdominal pain and distension. His colleague had triggered a compressed-air nozzle over his buttock. On arrival, vital signs were stable but physical examination revealed peritoneal irritation and marked distension of the abdomen. Computed tomography showed a large volume of air in the peritoneal cavity and subcutaneous emphysema at the perineum. A rectal perforation was found at laparotomy and the Hartmann procedure was performed.

  5. Steiner trees in industry

    Du, Ding-Zhu

    2001-01-01

    This book is a collection of articles studying various Steiner tree problems with applications in industries, such as the design of electronic circuits, computer networking, telecommunication, and perfect phylogeny. The Steiner tree problem was initiated in the Euclidean plane. Given a set of points in the Euclidean plane, the shortest network interconnecting the points in the set is called the Steiner minimum tree. The Steiner minimum tree may contain some vertices which are not the given points. Those vertices are called Steiner points while the given points are called terminals. The shortest network for three terminals was first studied by Fermat (1601-1665). Fermat proposed the problem of finding a point to minimize the total distance from it to three terminals in the Euclidean plane. The direct generalization is to find a point to minimize the total distance from it to n terminals, which is still called the Fermat problem today. The Steiner minimum tree problem is an indirect generalization. Sch...
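The Fermat problem mentioned above has no closed form for general terminals, but the minimizing point can be approximated numerically; a common approach, sketched here (not taken from the book), is Weiszfeld-style fixed-point iteration:

```python
def fermat_point(points, iters=200):
    """Approximate the point minimizing total distance to the terminals."""
    # Start from the centroid of the terminals.
    x = [sum(p[0] for p in points) / len(points),
         sum(p[1] for p in points) / len(points)]
    for _ in range(iters):
        num = [0.0, 0.0]
        den = 0.0
        for px, py in points:
            d = ((x[0] - px) ** 2 + (x[1] - py) ** 2) ** 0.5 or 1e-12
            num[0] += px / d
            num[1] += py / d
            den += 1.0 / d
        x = [num[0] / den, num[1] / den]  # distance-weighted average
    return x
```

For an equilateral triangle the iteration stays at the centroid, which is the Fermat point in that symmetric case.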

  6. Perceptual Image Compression in Telemedicine

    Watson, Andrew B.; Ahumada, Albert J., Jr.; Eckstein, Miguel; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    The next era of space exploration, especially the "Mission to Planet Earth" will generate immense quantities of image data. For example, the Earth Observing System (EOS) is expected to generate in excess of one terabyte/day. NASA confronts a major technical challenge in managing this great flow of imagery: in collection, pre-processing, transmission to earth, archiving, and distribution to scientists at remote locations. Expected requirements in most of these areas clearly exceed current technology. Part of the solution to this problem lies in efficient image compression techniques. For much of this imagery, the ultimate consumer is the human eye. In this case image compression should be designed to match the visual capacities of the human observer. We have developed three techniques for optimizing image compression for the human viewer. The first consists of a formula, developed jointly with IBM and based on psychophysical measurements, that computes a DCT quantization matrix for any specified combination of viewing distance, display resolution, and display brightness. This DCT quantization matrix is used in most recent standards for digital image compression (JPEG, MPEG, CCITT H.261). The second technique optimizes the DCT quantization matrix for each individual image, based on the contents of the image. This is accomplished by means of a model of visual sensitivity to compression artifacts. The third technique extends the first two techniques to the realm of wavelet compression. Together, these three techniques will allow systematic perceptual optimization of image compression in NASA imaging systems. Many of the image management challenges faced by NASA are mirrored in the field of telemedicine. Here too there are severe demands for transmission and archiving of large image databases, and the imagery is ultimately used primarily by human observers, such as radiologists. 
In this presentation I will describe some of our preliminary explorations of the applications
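The role a DCT quantization matrix plays can be illustrated with a small sketch (a generic JPEG-style step; the perceptually derived matrix itself is not reproduced here, so a placeholder Q is assumed):

```python
import numpy as np

N = 8
# Orthonormal 8x8 DCT-II basis matrix.
C = np.array([[np.sqrt((1 if k == 0 else 2) / N)
               * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
               for n in range(N)] for k in range(N)])

def quantize_block(block, Q):
    """2-D DCT of an 8x8 block, then division by a quantization matrix."""
    coeffs = C @ block @ C.T
    return np.round(coeffs / Q)   # larger Q entries discard more detail
```

A perceptually tuned Q simply assigns larger divisors to the frequencies the eye resolves poorly at the given viewing distance and display.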

  7. Flowering Trees

    deciduous tree with irregularly-shaped trunk, greyish-white scaly bark and milky latex. Leaves in opposite pairs are simple, oblong and whitish beneath. Flowers that occur in branched inflorescences are white, 2–3 cm across and fragrant. Calyx is glandular inside. Petals bear numerous linear white scales, the corollary.

  8. Flowering Trees

    Berrya cordifolia (Willd.) Burret (Syn. B. ammonilla Roxb.) – Trincomali Wood of Tiliaceae is a tall evergreen tree with straight trunk, smooth brownish-grey bark and simple broad leaves. Inflorescence is much branched with white flowers. Stamens are many with golden yellow anthers. Fruit is a capsule with six spreading ...

  9. Flowering Trees

    Canthium parviflorum Lam. of Rubiaceae is a large shrub that often grows into a small tree with conspicuous spines. Leaves are simple, in pairs at each node and are shiny. Inflorescence is an axillary few-flowered cymose fascicle. Flowers are small (less than 1 cm across), 4-merous and greenish-white. Fruit is ellipsoid ...

  10. Flowering Trees

    Hook.f. ex Brandis (Yellow Cadamba) of Rubiaceae is a large and handsome deciduous tree. Leaves are simple, large, orbicular, and drawn abruptly at the apex. Flowers are small, yellowish and aggregated into small spherical heads. The corolla is funnel-shaped with five stamens inserted at its mouth. Fruit is a capsule.

  11. Flowering Trees

    Celtis tetrandra Roxb. of Ulmaceae is a moderately large handsome deciduous tree with green branchlets and grayish-brown bark. Leaves are simple with three to four secondary veins running parallel to the mid vein. Flowers are solitary, male, female and bisexual and inconspicuous. Fruit is berry-like, small and globose ...

  12. Flowering Trees

    Aglaia elaeagnoidea (A.Juss.) Benth. of Meliaceae is a small-sized evergreen tree of both moist and dry deciduous forests. The leaves are alternate and pinnately compound, terminating in a single leaflet. Leaflets are more or less elliptic with entire margin. Flowers are small on branched inflorescence. Fruit is a globose ...

  13. Flowering Trees

    Flowers are borne on stiff bunches terminally on short shoots. They are 2-3 cm across, white, sweet-scented with light-brown hairy sepals and many stamens. Loquat fruits are round or pear-shaped, 3-5 cm long and are edible. A native of China, Loquat tree is grown in parks as an ornamental and also for its fruits.

  14. Flowering Trees

    mid-sized slow-growing evergreen tree with spreading branches that form a dense crown. The bark is smooth, thick, dark and flakes off in large shreds. Leaves are thick, oblong, leathery and bright red when young. The female flowers are drooping and are larger than male flowers. Fruit is large, red in color and velvety.

  15. Flowering Trees

    Andira inermis (Wright) DC., Dog Almond of Fabaceae is a handsome lofty evergreen tree. Leaves are alternate and pinnately compound with 4–7 pairs of leaflets. Flowers are fragrant and are borne on compact branched inflorescences. Fruit is an ellipsoidal one-seeded drupe that is peculiar to members of this family.

  16. Flowering Trees

    narrow towards base. Flowers are large and attractive, but emit an unpleasant foetid smell. They appear in small numbers on erect terminal clusters and open at night. Stamens are numerous, pink or white. Style is slender and long, terminating in a small stigma. Fruit is green, ovoid and indistinctly lobed.

  17. Flowering Trees

    Muntingia calabura L. (Singapore cherry) of Elaeocarpaceae is a medium-sized handsome evergreen tree. Leaves are simple and alternate with sticky hairs. Flowers are bisexual, bear numerous stamens, white in colour and arise in the leaf axils. Fruit is a berry, edible with several small seeds embedded in a fleshy pulp ...

  18. Flowering Trees

    Stamens are fused into a purple staminal tube that is toothed. Fruit is about 0.5 in. across, nearly globose, generally 5-seeded, green but yellow when ripe, quite smooth at first but wrinkled in drying, remaining long on the tree after ripening.

  19. Tree Mortality

    Mark J. Ambrose

    2012-01-01

    Tree mortality is a natural process in all forest ecosystems. However, extremely high mortality also can be an indicator of forest health issues. On a regional scale, high mortality levels may indicate widespread insect or disease problems. High mortality may also occur if a large proportion of the forest in a particular region is made up of older, senescent stands....

  20. Flowering Trees

    Guaiacum officinale L. (LIGNUM-VITAE) of Zygophyllaceae is a dense-crowned, squat, knobbly, rough and twisted medium-sized evergreen tree with mottled bark. The wood is very hard and resinous. Leaves are compound. The leaflets are smooth, leathery, ovate-elliptical and appear in two pairs. Flowers (about 1.5.

  1. Linking and Cutting Spanning Trees

    Luís M. S. Russo

    2018-04-01

    We consider the problem of uniformly generating a spanning tree for an undirected connected graph. This process is useful for computing statistics, namely for phylogenetic trees. We describe a Markov chain for producing these trees. For cycle graphs, we prove that this approach significantly outperforms existing algorithms. For general graphs, experimental results show that the chain converges quickly. This yields an efficient algorithm due to the use of proper fast data structures. To obtain the mixing time of the chain we describe a coupling, which we analyze for cycle graphs and simulate for other graphs.
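
    The abstract's specific Markov chain is not reproduced here, but the classical Aldous-Broder random walk gives a minimal, self-contained sketch of what "uniformly generating a spanning tree" means (the function name and adjacency-list format are illustrative choices, not from the paper):

```python
import random

def random_spanning_tree(adj, seed=None):
    """Aldous-Broder: sample a uniformly random spanning tree via a
    random walk.  adj maps each vertex to a list of its neighbours in
    an undirected connected graph; returns a set of frozenset edges."""
    rng = random.Random(seed)
    vertices = list(adj)
    current = rng.choice(vertices)
    visited = {current}
    tree = set()
    # Walk until every vertex has been entered at least once; the edge
    # used on each first entry is kept, which yields a uniform tree.
    while len(visited) < len(vertices):
        nxt = rng.choice(adj[current])
        if nxt not in visited:
            visited.add(nxt)
            tree.add(frozenset((current, nxt)))
        current = nxt
    return tree

# A 4-cycle: each of its four spanning trees (drop one edge) can appear.
cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
tree = random_spanning_tree(cycle, seed=1)
print(len(tree))  # a spanning tree of 4 vertices always has 3 edges
```

    Aldous-Broder is simple but can be slow on some graphs; the paper's contribution is precisely a chain with better behaviour, analyzed via coupling.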

  2. Microbunching and RF Compression

    Venturini, M.; Migliorati, M.; Ronsivalle, C.; Ferrario, M.; Vaccarezza, C.

    2010-01-01

    Velocity bunching (or RF compression) represents a promising technique complementary to magnetic compression to achieve the high peak current required in the linac drivers for FELs. Here we report on recent progress aimed at characterizing the RF compression from the point of view of the microbunching instability. We emphasize the development of a linear theory for the gain function of the instability and its validation against macroparticle simulations that represents a useful tool in the evaluation of the compression schemes for FEL sources.

  3. Uniaxial Compression of Cellular Materials at a 10^-1 s^-1 Strain Rate Simultaneously with Synchrotron X-ray Computed Tomographic Imaging

    Patterson, Brian M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-03-01

    The topic is presented as a series of slides. Motivation for the work included the following: X-ray tomography is a fantastic technique for characterizing a material’s starting structure as well as for non-destructive, in situ experiments to investigate material response; 3D X-ray tomography is needed to fully characterize the morphology of cellular materials; and synchrotron micro-CT can capture 3D images without pausing the experiment. Among the conclusions reached are these: high-rate radiographic and tomographic imaging (0.25 s 3D frame rate) using synchrotron CT can capture full 3D images of hyper-elastic materials at a 10^-2 strain rate; dynamic true in situ uniaxial loading can be accurately captured; the three stages of compression can be imaged: bending, buckling, and breaking; implementation of linear modeling is completed; meshes have been imported into LANL modeling codes, where testing and validation is underway, and direct comparison and validation between in situ data and modeled mechanical response is possible.

  4. COMPUTING

    M. Kasemann

    Overview In autumn the main focus was to process and handle CRAFT data and to perform the Summer08 MC production. The operational aspects were well covered by regular Computing Shifts, experts on duty and Computing Run Coordination. At the Computing Resource Board (CRB) in October a model to account for service work at Tier 2s was approved. The computing resources for 2009 were reviewed for presentation at the C-RRB. The quarterly resource monitoring is continuing. Facilities/Infrastructure operations Operations during CRAFT data taking ran fine. This proved to be a very valuable experience for T0 workflows and operations. The transfers of custodial data to most T1s went smoothly. A first round of reprocessing started at the Tier-1 centers end of November; it will take about two weeks. The Computing Shifts procedure was tested full scale during this period and proved to be very efficient: 30 Computing Shifts Persons (CSP) and 10 Computing Resources Coordinators (CRC). The shift program for the shut down w...

  5. Self-aligning and compressed autosophy video databases

    Holtz, Klaus E.

    1993-04-01

    Autosophy, an emerging new science, explains 'self-assembling structures,' such as crystals or living trees, in mathematical terms. This research provides a new mathematical theory of 'learning' and a new 'information theory' which permits the growing of self-assembling data networks in a computer memory, similar to the growing of 'data crystals' or 'data trees', without data processing or programming. Autosophy databases are educated very much like a human child to organize their own internal data storage. Input patterns, such as written questions or images, are converted to points in a mathematical omni-dimensional hyperspace. The input patterns are then associated with output patterns, such as written answers or images. Omni-dimensional information storage will result in enormous data compression because each pattern fragment is only stored once. Pattern recognition in the text or image files is greatly simplified by the peculiar omni-dimensional storage method. Video databases will absorb input images from a TV camera and associate them with textual information. The 'black box' operations are totally self-aligning where the input data will determine their own hyperspace storage locations. Self-aligning autosophy databases may lead to a new generation of brain-like devices.

  6. Mining compressing sequential problems

    Hoang, T.L.; Mörchen, F.; Fradkin, D.; Calders, T.G.K.

    2012-01-01

    Compression based pattern mining has been successfully applied to many data mining tasks. We propose an approach based on the minimum description length principle to extract sequential patterns that compress a database of sequences well. We show that mining compressing patterns is NP-Hard and
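
    As a toy illustration of the minimum description length idea (not the authors' actual encoding), a candidate pattern can be scored by how much a sequence database shrinks when its occurrences are replaced by a single dictionary symbol; the unit-cost model and names below are simplifying assumptions:

```python
def description_length(sequences, pattern=None):
    """Toy MDL cost: count symbols in the database, plus the cost of
    storing the pattern once if it is used to rewrite the data.
    Non-overlapping exact occurrences of `pattern` (a contiguous
    subsequence) are each replaced by one dictionary reference."""
    if pattern is None:
        return sum(len(s) for s in sequences)
    cost = len(pattern)  # the pattern stored once in the dictionary
    k = len(pattern)
    for s in sequences:
        i = 0
        while i < len(s):
            if s[i:i + k] == pattern:
                cost += 1      # one reference symbol
                i += k
            else:
                cost += 1      # one literal symbol
                i += 1
    return cost

db = [list("abcabcabc"), list("xabcx")]
base = description_length(db)
with_p = description_length(db, pattern=list("abc"))
print(base, with_p)  # → 14 9: the pattern "abc" compresses the database
```

    A mining algorithm in this spirit searches for the pattern set minimizing the total cost; the abstract notes that finding the optimal set is NP-hard.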

  7. ERA: Efficient serial and parallel suffix tree construction for very long strings

    Mansour, Essam

    2011-09-01

    The suffix tree is a data structure for indexing strings. It is used in a variety of applications such as bioinformatics, time series analysis, clustering, text editing and data compression. However, when the string and the resulting suffix tree are too large to fit into the main memory, most existing construction algorithms become very inefficient. This paper presents a disk-based suffix tree construction method, called Elastic Range (ERa), which works efficiently with very long strings that are much larger than the available memory. ERa partitions the tree construction process horizontally and vertically and minimizes I/Os by dynamically adjusting the horizontal partitions independently for each vertical partition, based on the evolving shape of the tree and the available memory. Where appropriate, ERa also groups vertical partitions together to amortize the I/O cost. We developed a serial version; a parallel version for shared-memory and shared-disk multi-core systems; and a parallel version for shared-nothing architectures. ERa indexes the entire human genome in 19 minutes on an ordinary desktop computer. For comparison, the fastest existing method needs 15 minutes using 1024 CPUs on an IBM BlueGene supercomputer.
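
    ERa itself is a disk-based suffix tree construction; as a much simpler in-memory illustration of suffix-based string indexing, a naive suffix array supports the same kind of substring queries (function names are illustrative, and the O(n² log n) construction is nowhere near ERa's scale):

```python
def suffix_array(s):
    """Naive suffix array: sort the starting positions of all suffixes
    lexicographically.  Fine for illustration; real indexes like ERa
    handle strings far larger than memory."""
    return sorted(range(len(s)), key=lambda i: s[i:])

def search(s, sa, pattern):
    """Binary search (lower bound) over the sorted suffixes; returns
    True iff pattern occurs somewhere in s."""
    lo, hi = 0, len(sa)
    while lo < hi:
        mid = (lo + hi) // 2
        if s[sa[mid]:sa[mid] + len(pattern)] < pattern:
            lo = mid + 1
        else:
            hi = mid
    return lo < len(sa) and s[sa[lo]:sa[lo] + len(pattern)] == pattern

text = "banana"
sa = suffix_array(text)
print(sa)                       # [5, 3, 1, 0, 4, 2]
print(search(text, sa, "nan"))  # True
```

    A suffix tree offers the same queries in O(pattern length) time at the cost of a much heavier structure, which is exactly why construction at genome scale is the hard part.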

  8. Mathematical transforms and image compression: A review

    Satish K. Singh

    2010-07-01

    It is well known that images, often used in a variety of computer and other scientific and engineering applications, are difficult to store and transmit due to their sizes. One possible solution to overcome this problem is to use an efficient digital image compression technique where an image is viewed as a matrix and then the operations are performed on the matrix. All the contemporary digital image compression systems use various mathematical transforms for compression. The compression performance is closely related to the performance by these mathematical transforms in terms of energy compaction and spatial frequency isolation by exploiting inter-pixel redundancies present in the image data. Through this paper, a comprehensive literature survey has been carried out and the pros and cons of various transform-based image compression models have also been discussed.
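
    The energy-compaction property the survey refers to can be seen directly with a 1-D DCT-II: for smooth signals, a few low-frequency coefficients carry almost all the energy. A minimal sketch, with the orthonormal DCT written out by hand rather than taken from a library:

```python
import math

def dct2(x):
    """Orthonormal DCT-II of a real sequence -- the workhorse transform
    behind JPEG-style compression."""
    n = len(x)
    out = []
    for k in range(n):
        s = sum(x[i] * math.cos(math.pi * (i + 0.5) * k / n)
                for i in range(n))
        scale = math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)
        out.append(scale * s)
    return out

# A smooth ramp: nearly all the signal energy lands in the first two
# coefficients, which is what makes transform coding effective.
x = [float(i) for i in range(8)]
c = dct2(x)
energy = sum(v * v for v in c)          # Parseval: equals sum of x[i]^2
top2 = c[0] ** 2 + c[1] ** 2
print(round(top2 / energy, 4))          # close to 1.0: strong compaction
```

    Quantizing or discarding the small high-frequency coefficients is then what actually produces the compression.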

  9. A Metric on Phylogenetic Tree Shapes.

    Colijn, C; Plazzotta, G

    2018-01-01

    The shapes of evolutionary trees are influenced by the nature of the evolutionary process but comparisons of trees from different processes are hindered by the challenge of completely describing tree shape. We present a full characterization of the shapes of rooted branching trees in a form that lends itself to natural tree comparisons. We use this characterization to define a metric, in the sense of a true distance function, on tree shapes. The metric distinguishes trees from random models known to produce different tree shapes. It separates trees derived from tropical versus USA influenza A sequences, which reflect the differing epidemiology of tropical and seasonal flu. We describe several metrics based on the same core characterization, and illustrate how to extend the metric to incorporate trees' branch lengths or other features such as overall imbalance. Our approach allows us to construct addition and multiplication on trees, and to create a convex metric on tree shapes which formally allows computation of average tree shapes. © The Author(s) 2017. Published by Oxford University Press, on behalf of the Society of Systematic Biologists.
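
    A recursive integer labelling of rooted binary tree shapes, in the spirit of the characterization described, assigns equal labels exactly to equal shapes; the nested-tuple encoding below is an illustrative choice, not the authors' implementation:

```python
# A rooted binary tree shape as nested tuples: a leaf is (), an internal
# node is (left, right).  Labels: a leaf gets 1; an internal node with
# child labels a >= b gets a*(a-1)/2 + b + 1, so two trees receive the
# same label iff they have the same (unordered) shape.

def shape_label(tree):
    if tree == ():
        return 1
    a, b = sorted((shape_label(tree[0]), shape_label(tree[1])),
                  reverse=True)
    return a * (a - 1) // 2 + b + 1

cherry = ((), ())
print(shape_label(cherry))        # 2
print(shape_label((cherry, ())))  # 3: the caterpillar on 3 leaves
print(shape_label(((), cherry)) == shape_label((cherry, ())))  # True
```

    Distances between trees can then be built on top of such labels, for example by comparing the label multisets of all subtrees; the paper's metric is more elaborate and also extends to branch lengths.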

  10. COMPUTING

    I. Fisk

    2011-01-01

    Introduction CMS distributed computing system performed well during the 2011 start-up. The events in 2011 have more pile-up and are more complex than last year; this results in longer reconstruction times and harder events to simulate. Significant increases in computing capacity were delivered in April for all computing tiers, and the utilisation and load is close to the planning predictions. All computing centre tiers performed their expected functionalities. Heavy-Ion Programme The CMS Heavy-Ion Programme had a very strong showing at the Quark Matter conference. A large number of analyses were shown. The dedicated heavy-ion reconstruction facility at the Vanderbilt Tier-2 is still involved in some commissioning activities, but is available for processing and analysis. Facilities and Infrastructure Operations Facility and Infrastructure operations have been active with operations and several important deployment tasks. Facilities participated in the testing and deployment of WMAgent and WorkQueue+Request...

  11. COMPUTING

    P. McBride

    The Computing Project is preparing for a busy year where the primary emphasis of the project moves towards steady operations. Following the very successful completion of the Computing Software and Analysis challenge, CSA06, last fall, we have reorganized and established four groups in the computing area: Commissioning, User Support, Facility/Infrastructure Operations and Data Operations. These groups work closely together with groups from the Offline Project in planning for data processing and operations. Monte Carlo production has continued since CSA06, with about 30M events produced each month to be used for HLT studies and physics validation. Monte Carlo production will continue throughout the year in the preparation of large samples for physics and detector studies, ramping to 50M events/month for CSA07. Commissioning of the full CMS computing system is a major goal for 2007. Site monitoring is an important commissioning component and work is ongoing to devise CMS-specific tests to be included in Service Availa...

  12. COMPUTING

    M. Kasemann

    Overview During the past three months activities were focused on data operations, testing and re-enforcing shift and operational procedures for data production and transfer, MC production and on user support. Planning of the computing resources in view of the new LHC calendar is ongoing. Two new task forces were created for supporting the integration work: Site Commissioning, which develops tools helping distributed sites to monitor job and data workflows, and Analysis Support, collecting the user experience and feedback during analysis activities and developing tools to increase efficiency. The development plan for DMWM for 2009/2011 was developed at the beginning of the year, based on the requirements from the Physics, Computing and Offline groups (see Offline section). The Computing management meeting at FermiLab on February 19th and 20th was an excellent opportunity to discuss the impact of, and to address issues and solutions to, the main challenges facing CMS computing. The lack of manpower is particul...

  13. Compression of magnetized target in the magneto-inertial fusion

    Kuzenov, V. V.

    2017-12-01

    This paper presents a mathematical model, numerical method and results of the computer analysis of the compression process and the energy transfer in the target plasma, used in magneto-inertial fusion. The computer simulation of the compression process of magnetized cylindrical target by high-power laser pulse is presented.

  14. Compressibility, turbulence and high speed flow

    Gatski, Thomas B

    2009-01-01

    This book introduces the reader to the field of compressible turbulence and compressible turbulent flows across a broad speed range through a unique complementary treatment of both the theoretical foundations and the measurement and analysis tools currently used. For the computation of turbulent compressible flows, current methods of averaging and filtering are presented so that the reader is exposed to a consistent development of applicable equation sets for both the mean or resolved fields as well as the transport equations for the turbulent stress field. For the measurement of turbulent compressible flows, current techniques ranging from hot-wire anemometry to PIV are evaluated and limitations assessed. Characterizing dynamic features of free shear flows, including jets, mixing layers and wakes, and wall-bounded flows, including shock-turbulence and shock boundary-layer interactions, obtained from computations, experiments and simulations are discussed. Key features: * Describes prediction methodologies in...

  15. Compressive multi-mode superresolution display

    Heide, Felix

    2014-01-01

    Compressive displays are an emerging technology exploring the co-design of new optical device configurations and compressive computation. Previously, research has shown how to improve the dynamic range of displays and facilitate high-quality light field or glasses-free 3D image synthesis. In this paper, we introduce a new multi-mode compressive display architecture that supports switching between 3D and high dynamic range (HDR) modes as well as a new super-resolution mode. The proposed hardware consists of readily-available components and is driven by a novel splitting algorithm that computes the pixel states from a target high-resolution image. In effect, the display pixels present a compressed representation of the target image that is perceived as a single, high resolution image. © 2014 Optical Society of America.

  16. Compression for radiological images

    Wilson, Dennis L.

    1992-07-01

    The viewing of radiological images has peculiarities that must be taken into account in the design of a compression technique. The images may be manipulated on a workstation to change the contrast, to change the center of the brightness levels that are viewed, and even to invert the images. Because of the possible consequences of losing information in a medical application, bit-preserving compression is used for the images used for diagnosis. However, for archiving, the images may be compressed to 10% of their original size. A compression technique based on the Discrete Cosine Transform (DCT) takes the viewing factors into account by compressing the changes in the local brightness levels. The compression technique is a variation of the CCITT JPEG compression that suppresses the blocking of the DCT except in areas of very high contrast.

  17. COMPUTING

    I. Fisk

    2013-01-01

    Computing activity had ramped down after the completion of the reprocessing of the 2012 data and parked data, but is increasing with new simulation samples for analysis and upgrade studies. Much of the Computing effort is currently involved in activities to improve the computing system in preparation for 2015. Operations Office Since the beginning of 2013, the Computing Operations team successfully re-processed the 2012 data in record time, in part by using opportunistic resources like the San Diego Supercomputer Center, which made it possible to re-process the primary datasets HTMHT and MultiJet in Run2012D much earlier than planned. The Heavy-Ion data-taking period was successfully concluded in February, collecting almost 500 T. Figure 3: Number of events per month (data) In LS1, our emphasis is to increase efficiency and flexibility of the infrastructure and operation. Computing Operations is working on separating disk and tape at the Tier-1 sites and the full implementation of the xrootd federation ...

  18. Bi-Criteria Optimization of Decision Trees with Applications to Data Analysis

    Chikalov, Igor; Hussain, Shahid; Moshkov, Mikhail

    2017-01-01

    Applications include the study of relationships among depth, average depth and number of nodes for decision trees for corner point detection (such trees are used in computer vision for object tracking), and the study of systems of decision rules derived from decision trees.

  19. Radiological Image Compression

    Lo, Shih-Chung Benedict

    The movement toward digital images in radiology presents the problem of how to conveniently and economically store, retrieve, and transmit the volume of digital images. Basic research into image data compression is necessary in order to move from a film-based department to an efficient digital-based department. Digital data compression technology consists of two types of compression technique: error-free and irreversible. Error-free image compression is desired; however, present techniques can only achieve compression ratios of 1.5:1 to 3:1, depending upon the image characteristics. Irreversible image compression can achieve a much higher compression ratio; however, the image reconstructed from the compressed data shows some difference from the original image. This dissertation studies both error-free and irreversible image compression techniques. In particular, some modified error-free techniques have been tested and the recommended strategies for various radiological images are discussed. A full-frame bit-allocation irreversible compression technique has been derived. A total of 76 images, which include CT head and body, and radiographs digitized to 2048 x 2048, 1024 x 1024, and 512 x 512, have been used to test this algorithm. The normalized mean-square-error (NMSE) on the difference image, defined as the difference between the original and the reconstructed image from a given compression ratio, is used as a global measurement of the quality of the reconstructed image. The NMSEs of a total of 380 reconstructed and 380 difference images are measured and the results tabulated. Three complex compression methods are also suggested to compress images with special characteristics. Finally, various parameters which would affect the quality of the reconstructed images are discussed. A proposed hardware compression module is given in the last chapter.
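
    The NMSE figure of merit used in the dissertation can be sketched as follows; this assumes the common normalization by the original image's energy, and flattened pixel lists for simplicity:

```python
def nmse(original, reconstructed):
    """Normalized mean-square error between an original image and its
    reconstruction: sum of squared pixel differences divided by the
    energy (sum of squares) of the original.  0.0 means lossless."""
    num = sum((o - r) ** 2 for o, r in zip(original, reconstructed))
    den = sum(o ** 2 for o in original)
    return num / den

# Flattened 2x2 "images": a perfect reconstruction scores 0.0, and a
# slightly perturbed one scores a small positive value.
orig = [100, 120, 130, 90]
recon = [101, 118, 131, 92]
print(nmse(orig, orig))    # 0.0
print(nmse(orig, recon))
```

    Because NMSE is a single global number, it can hide localized artifacts, which is why such studies also inspect the difference images directly.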

  20. Benefits and costs of street trees in Lisbon

    A.L. Soares; F.C. Rego; E.G. McPherson; J.R. Simpson; P.J. Peper; Q. Xiao

    2011-01-01

    It is well known that urban trees produce various types of benefits and costs. The computer tool i-Tree STRATUM helps quantify tree structure and function, as well as the value of some of these tree services in different municipalities. This study describes one of the first applications of STRATUM outside the U.S. Lisbon’s street trees are dominated by Celtis australis...

  1. Inferring species trees from incongruent multi-copy gene trees using the Robinson-Foulds distance

    2013-01-01

    Background Constructing species trees from multi-copy gene trees remains a challenging problem in phylogenetics. One difficulty is that the underlying genes can be incongruent due to evolutionary processes such as gene duplication and loss, deep coalescence, or lateral gene transfer. Gene tree estimation errors may further exacerbate the difficulties of species tree estimation. Results We present a new approach for inferring species trees from incongruent multi-copy gene trees that is based on a generalization of the Robinson-Foulds (RF) distance measure to multi-labeled trees (mul-trees). We prove that it is NP-hard to compute the RF distance between two mul-trees; however, it is easy to calculate this distance between a mul-tree and a singly-labeled species tree. Motivated by this, we formulate the RF problem for mul-trees (MulRF) as follows: Given a collection of multi-copy gene trees, find a singly-labeled species tree that minimizes the total RF distance from the input mul-trees. We develop and implement a fast SPR-based heuristic algorithm for the NP-hard MulRF problem. We compare the performance of the MulRF method (available at http://genome.cs.iastate.edu/CBL/MulRF/) with several gene tree parsimony approaches using gene tree simulations that incorporate gene tree error, gene duplications and losses, and/or lateral transfer. The MulRF method produces more accurate species trees than gene tree parsimony approaches. We also demonstrate that the MulRF method infers in minutes a credible plant species tree from a collection of nearly 2,000 gene trees. Conclusions Our new phylogenetic inference method, based on a generalized RF distance, makes it possible to quickly estimate species trees from large genomic data sets. Since the MulRF method, unlike gene tree parsimony, is based on a generic tree distance measure, it is appealing for analyses of genomic data sets, in which many processes such as deep coalescence, recombination, gene duplication and losses as
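
    For singly-labeled rooted trees, the "easy" Robinson-Foulds computation the abstract alludes to reduces to comparing clade sets; a minimal sketch (the unrooted bipartition-based variant and the mul-tree generalization are more involved):

```python
def clades(tree, out):
    """Collect the leaf-set of every internal node of a rooted tree
    given as nested tuples with string leaves; fills `out` with
    frozensets and returns the leaf-set of `tree`."""
    if isinstance(tree, str):
        return frozenset([tree])
    leaves = frozenset().union(*(clades(c, out) for c in tree))
    out.add(leaves)
    return leaves

def rf_distance(t1, t2):
    """Rooted Robinson-Foulds distance: size of the symmetric
    difference of the two trees' clade sets."""
    c1, c2 = set(), set()
    clades(t1, c1)
    clades(t2, c2)
    return len(c1 ^ c2)

t1 = (("a", "b"), ("c", "d"))
t2 = (("a", "c"), ("b", "d"))
print(rf_distance(t1, t1))  # 0: identical trees
print(rf_distance(t1, t2))  # 4: all non-root clades differ
```

    The MulRF problem then asks for the singly-labeled species tree minimizing the total of such distances to a collection of multi-labeled gene trees, which is what makes it NP-hard.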

  2. Distributed Merge Trees

    Morozov, Dmitriy; Weber, Gunther

    2013-01-08

    Improved simulations and sensors are producing datasets whose increasing complexity exhausts our ability to visualize and comprehend them directly. To cope with this problem, we can detect and extract significant features in the data and use them as the basis for subsequent analysis. Topological methods are valuable in this context because they provide robust and general feature definitions. As the growth of serial computational power has stalled, data analysis is becoming increasingly dependent on massively parallel machines. To satisfy the computational demand created by complex datasets, algorithms need to effectively utilize these computer architectures. The main strength of topological methods, their emphasis on global information, turns into an obstacle during parallelization. We present two approaches to alleviate this problem. We develop a distributed representation of the merge tree that avoids computing the global tree on a single processor and lets us parallelize subsequent queries. To account for the increasing number of cores per processor, we develop a new data structure that lets us take advantage of multiple shared-memory cores to parallelize the work on a single node. Finally, we present experiments that illustrate the strengths of our approach as well as help identify future challenges.
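
    The sequential core of merge tree construction can be sketched as a union-find sweep over a 1-D scalar field; the paper's distributed representation builds on this same primitive (names and the 1-D restriction are illustrative):

```python
def merge_events(values):
    """Join-tree sketch for a 1-D scalar field: sweep samples from high
    to low; each local maximum starts a component, and a merge event is
    recorded when a sample joins two existing components (union-find).
    Returns (maxima, merges) as lists of (index, value) pairs."""
    n = len(values)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    order = sorted(range(n), key=lambda i: values[i], reverse=True)
    seen = [False] * n
    maxima, merges = [], []
    for i in order:
        seen[i] = True
        roots = {find(j) for j in (i - 1, i + 1) if 0 <= j < n and seen[j]}
        if not roots:
            maxima.append((i, values[i]))   # a new peak is born
        elif len(roots) == 2:
            merges.append((i, values[i]))   # two peaks join at a saddle
        for r in roots:
            parent[r] = i
    return maxima, merges

f = [0, 3, 1, 4, 1, 5, 0]
peaks, joins = merge_events(f)
print(peaks)  # three local maxima, highest first
print(joins)  # the two saddle points where components merge
```

    The global-information obstacle the authors describe is visible even here: the sweep is inherently ordered over all values, which is what their distributed merge tree representation works around.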

  3. COMPUTING

    I. Fisk

    2010-01-01

    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping-up and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, that was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion An activity that is still in progress is computing for the heavy-ion program. The heavy-ion events are collected without zero suppression, so the event size is much larger, at roughly 11 MB per event of RAW. The central collisions are more complex and...

  4. COMPUTING

    M. Kasemann P. McBride Edited by M-C. Sawley with contributions from: P. Kreuzer D. Bonacorsi S. Belforte F. Wuerthwein L. Bauerdick K. Lassila-Perini M-C. Sawley

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the comput...

  5. Coalescent methods for estimating phylogenetic trees.

    Liu, Liang; Yu, Lili; Kubatko, Laura; Pearl, Dennis K; Edwards, Scott V

    2009-10-01

    We review recent models to estimate phylogenetic trees under the multispecies coalescent. Although the distinction between gene trees and species trees has come to the fore of phylogenetics, only recently have methods been developed that explicitly estimate species trees. Of the several factors that can cause gene tree heterogeneity and discordance with the species tree, deep coalescence due to random genetic drift in branches of the species tree has been modeled most thoroughly. Bayesian approaches to estimating species trees utilize two likelihood functions, one of which has been widely used in traditional phylogenetics and involves the model of nucleotide substitution, and the second of which is less familiar to phylogeneticists and involves the probability distribution of gene trees given a species tree. Other recent parametric and nonparametric methods for estimating species trees involve parsimony criteria, summary statistics, supertree and consensus methods. Species tree approaches are an appropriate goal for systematics, appear to work well in some cases where concatenation can be misleading, and suggest that sampling many independent loci will be paramount. Such methods can also be challenging to implement because of the complexity of the models and computational time. In addition, further elaboration of the simplest of coalescent models will be required to incorporate commonly known issues such as deviation from the molecular clock, gene flow and other genetic forces.

  6. Surface tree languages and parallel derivation trees

    Engelfriet, Joost

    1976-01-01

    The surface tree languages obtained by top-down finite state transformation of monadic trees are exactly the frontier-preserving homomorphic images of sets of derivation trees of ETOL systems. The corresponding class of tree transformation languages is therefore equal to the class of ETOL languages.

  7. Adiabatic compression of ion rings

    Larrabee, D.A.; Lovelace, R.V.

    1982-01-01

    A study has been made of the compression of collisionless ion rings in an increasing external magnetic field, B_e = ẑ B_e(t), by numerically implementing a previously developed kinetic theory of ring compression. The theory is general in that there is no limitation on the ring geometry or the compression ratio, λ ≡ B_e(final)/B_e(initial) ≥ 1. However, the motion of a single particle in an equilibrium is assumed to be completely characterized by its energy H and canonical angular momentum P_θ, with the absence of a third constant of the motion. The present computational work assumes that plasma currents are negligible, as is appropriate for a low-temperature collisional plasma. For a variety of initial ring geometries and initial distribution functions (having a single value of P_θ), it is found that the parameters for "fat", small-aspect-ratio rings follow general scaling laws over a large range of compression ratios, 1 < λ < 10^3: the ring radius varies as λ^(-1/2); the average single-particle energy as λ^0.72; the root-mean-square energy spread as λ^1.1; and the total current as λ^0.79. The field-reversal parameter is found to saturate at values typically between 2 and 3. For large compression ratios the current density is found to "hollow out". This hollowing tends to improve the interchange stability of an embedded low-β plasma. The implications of these scaling laws for fusion reactor systems are discussed
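
    The quoted scaling laws are straightforward to apply; a sketch with placeholder baseline values (all normalized to 1 at λ = 1, purely illustrative, not from the paper):

```python
def ring_parameters(lam, r0=1.0, e0=1.0, de0=1.0, i0=1.0):
    """Apply the empirical scaling laws quoted above for compression
    ratio lam = B_e(final)/B_e(initial) >= 1.  Baseline values r0, e0,
    de0, i0 are placeholders for the initial ring parameters."""
    return {
        "radius": r0 * lam ** -0.5,         # ring radius ~ lambda^(-1/2)
        "energy": e0 * lam ** 0.72,         # mean single-particle energy
        "energy_spread": de0 * lam ** 1.1,  # rms energy spread
        "current": i0 * lam ** 0.79,        # total current
    }

# A hundredfold field compression shrinks the ring radius tenfold while
# the energy spread grows faster than the mean energy.
p = ring_parameters(100.0)
print(round(p["radius"], 3))  # 0.1
print(round(p["energy"], 2))
```

    Note that the energy spread exponent (1.1) exceeding the mean-energy exponent (0.72) means the relative spread grows under compression, consistent with the hollowing of the current density reported above.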

  8. Visualizing Contour Trees within Histograms

    Kraus, Martin

    2010-01-01

Many of the topological features of the isosurfaces of a scalar volume field can be compactly represented by its contour tree. Unfortunately, the contour trees of most real-world volume data sets are too complex to be visualized by dot-and-line diagrams. Therefore, we propose a new visualization that is suitable for large contour trees and efficiently conveys the topological structure of the most important isosurface components. This visualization is integrated into a histogram of the volume data; thus, it offers strictly more information than a traditional histogram. We present algorithms to automatically compute the graph layout and to calculate appropriate approximations of the contour tree and the surface area of the relevant isosurface components. The benefits of this new visualization are demonstrated with the help of several publicly available volume data sets.

  9. [Medical image compression: a review].

    Noreña, Tatiana; Romero, Eduardo

    2013-01-01

Modern medicine is an increasingly complex, evidence-based activity; it draws on information from multiple sources: medical record text, sound recordings, and the images and videos generated by a large number of devices. Medical imaging is one of the most important of these sources, since it offers comprehensive support of medical procedures for diagnosis and follow-up. However, the amount of information generated by image-capturing devices quickly exceeds the storage available in radiology services, generating additional costs for devices with greater storage capacity. Moreover, the current trend of developing applications for cloud computing has limitations: even though virtual storage is available from anywhere, connections are made through the internet. In these scenarios the optimal use of information necessarily requires powerful compression algorithms adapted to the needs of medical practice. In this paper we present a review of compression techniques used for image storage, and a critical analysis of them from the point of view of their use in clinical settings.

  10. An unusual case: right proximal ureteral compression by the ovarian vein and distal ureteral compression by the external iliac vein

    Halil Ibrahim Serin

    2015-12-01

A 32-year-old woman presented to the emergency room of Bozok University Research Hospital with right renal colic. Multidetector computed tomography (MDCT) showed compression of the proximal ureter by the right ovarian vein and compression of the right distal ureter by the right external iliac vein. To the best of our knowledge, right proximal ureteral compression by the ovarian vein together with distal ureteral compression by the external iliac vein has not been reported in the literature. Ovarian vein and external iliac vein compression should be considered in patients presenting to the emergency room with renal colic or low back pain and a dilated collecting system.

  11. COMPUTING

    P. McBride

    It has been a very active year for the computing project with strong contributions from members of the global community. The project has focused on site preparation and Monte Carlo production. The operations group has begun processing data from P5 as part of the global data commissioning. Improvements in transfer rates and site availability have been seen as computing sites across the globe prepare for large scale production and analysis as part of CSA07. Preparations for the upcoming Computing Software and Analysis Challenge CSA07 are progressing. Ian Fisk and Neil Geddes have been appointed as coordinators for the challenge. CSA07 will include production tests of the Tier-0 production system, reprocessing at the Tier-1 sites and Monte Carlo production at the Tier-2 sites. At the same time there will be a large analysis exercise at the Tier-2 centres. Pre-production simulation of the Monte Carlo events for the challenge is beginning. Scale tests of the Tier-0 will begin in mid-July and the challenge it...

  12. COMPUTING

    M. Kasemann

    Introduction During the past six months, Computing participated in the STEP09 exercise, had a major involvement in the October exercise and has been working with CMS sites on improving open issues relevant for data taking. At the same time operations for MC production, real data reconstruction and re-reconstructions and data transfers at large scales were performed. STEP09 was successfully conducted in June as a joint exercise with ATLAS and the other experiments. It gave good indication about the readiness of the WLCG infrastructure with the two major LHC experiments stressing the reading, writing and processing of physics data. The October Exercise, in contrast, was conducted as an all-CMS exercise, where Physics, Computing and Offline worked on a common plan to exercise all steps to efficiently access and analyze data. As one of the major results, the CMS Tier-2s demonstrated to be fully capable for performing data analysis. In recent weeks, efforts were devoted to CMS Computing readiness. All th...

  13. COMPUTING

    I. Fisk

    2011-01-01

    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping-up and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, that was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion The Tier 0 infrastructure was able to repack and promptly reconstruct heavy-ion collision data. Two copies were made of the data at CERN using a large CASTOR disk pool, and the core physics sample was replicated ...

  14. COMPUTING

    I. Fisk

    2012-01-01

    Introduction Computing continued with a high level of activity over the winter in preparation for conferences and the start of the 2012 run. 2012 brings new challenges with a new energy, more complex events, and the need to make the best use of the available time before the Long Shutdown. We expect to be resource constrained on all tiers of the computing system in 2012 and are working to ensure the high-priority goals of CMS are not impacted. Heavy ions After a successful 2011 heavy-ion run, the programme is moving to analysis. During the run, the CAF resources were well used for prompt analysis. Since then in 2012 on average 200 job slots have been used continuously at Vanderbilt for analysis workflows. Operations Office As of 2012, the Computing Project emphasis has moved from commissioning to operation of the various systems. This is reflected in the new organisation structure where the Facilities and Data Operations tasks have been merged into a common Operations Office, which now covers everything ...

  15. COMPUTING

    M. Kasemann

    CCRC’08 challenges and CSA08 During the February campaign of the Common Computing readiness challenges (CCRC’08), the CMS computing team had achieved very good results. The link between the detector site and the Tier0 was tested by gradually increasing the number of parallel transfer streams well beyond the target. Tests covered the global robustness at the Tier0, processing a massive number of very large files and with a high writing speed to tapes.  Other tests covered the links between the different Tiers of the distributed infrastructure and the pre-staging and reprocessing capacity of the Tier1’s: response time, data transfer rate and success rate for Tape to Buffer staging of files kept exclusively on Tape were measured. In all cases, coordination with the sites was efficient and no serious problem was found. These successful preparations prepared the ground for the second phase of the CCRC’08 campaign, in May. The Computing Software and Analysis challen...

  16. COMPUTING

    I. Fisk

    2010-01-01

    Introduction The first data taking period of November produced a first scientific paper, and this is a very satisfactory step for Computing. It also gave the invaluable opportunity to learn and debrief from this first, intense period, and make the necessary adaptations. The alarm procedures between different groups (DAQ, Physics, T0 processing, Alignment/calibration, T1 and T2 communications) have been reinforced. A major effort has also been invested into remodeling and optimizing operator tasks in all activities in Computing, in parallel with the recruitment of new Cat A operators. The teams are being completed and by mid year the new tasks will have been assigned. CRB (Computing Resource Board) The Board met twice since last CMS week. In December it reviewed the experience of the November data-taking period and could measure the positive improvements made for the site readiness. It also reviewed the policy under which Tier-2 are associated with Physics Groups. Such associations are decided twice per ye...

  17. COMPUTING

    M. Kasemann

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the co...

  18. Trees are good, but…

    E.G. McPherson; F. Ferrini

    2010-01-01

    We know that “trees are good,” and most people believe this to be true. But if this is so, why are so many trees neglected, and so many tree wells empty? An individual’s attitude toward trees may result from their firsthand encounters with specific trees. Understanding how attitudes about trees are shaped, particularly aversion to trees, is critical to the business of...

  19. Compressed sensing & sparse filtering

    Carmi, Avishy Y; Godsill, Simon J

    2013-01-01

This book is aimed at presenting concepts, methods and algorithms able to cope with undersampled and limited data. One such trend that recently gained popularity, and to some extent revolutionised signal processing, is compressed sensing. Compressed sensing builds upon the observation that many signals in nature are nearly sparse (or compressible, as they are normally referred to) in some domain, and consequently they can be reconstructed to within high accuracy from far fewer observations than traditionally held to be necessary. Apart from compressed sensing this book contains other related app...

  20. Tokamak plasma variations under rapid compression

    Holmes, J.A.; Peng, Y.K.M.; Lynch, S.J.

    1980-04-01

Changes in plasmas undergoing large, rapid compressions are examined numerically over the following range of aspect ratios A: 3 ≥ A ≥ 1.5 for major-radius compressions of circular, elliptical, and D-shaped cross sections; and 3 ≤ A ≤ 6 for minor-radius compressions of circular and D-shaped cross sections. The numerical approach combines the computation of fixed-boundary MHD equilibria with single-fluid, flux-surface-averaged energy-balance, particle-balance, and magnetic-flux-diffusion equations. It is found that the dependences of the plasma current I_p and the poloidal beta β̄_p on the compression ratio C differ significantly in major-radius compressions from those proposed by Furth and Yoshikawa. The present interpretation is that compression to small A dramatically increases the plasma current, which lowers β̄_p and makes the plasma more paramagnetic. Despite large values of toroidal beta β̄_T (≥ 30% with q_axis ≈ 1, q_edge ≈ 3), this tends to concentrate more toroidal flux near the magnetic axis, which means that a reduced minor radius is required to preserve the continuity of the toroidal flux function F at the plasma edge. Minor-radius compressions to large aspect ratio agree well with the Furth-Yoshikawa scaling laws.

  1. Cloud Optimized Image Format and Compression

    Becker, P.; Plesea, L.; Maurer, T.

    2015-04-01

Cloud-based image storage and processing requires a re-evaluation of formats and processing methods. For the true value of the massive volumes of earth observation data to be realized, the image data needs to be accessible from the cloud. Traditional file formats such as TIF and NITF were developed in the heyday of the desktop and assumed fast, low-latency file access. Other formats such as JPEG2000 provide streaming protocols for pixel data, but still require a server to have file access. These concepts no longer truly hold in cloud-based elastic storage and computation environments. This paper will provide details of a newly evolving image storage format (MRF) and compression that is optimized for cloud environments. Although the cost of storage continues to fall for large data volumes, there is still significant value in compression. For imagery data to be used in analysis and to exploit the extended dynamic range of the new sensors, lossless or controlled lossy compression is of high value. Compression decreases the data volume stored and reduces the data transferred, but the reduced data size must be balanced against the CPU required to decompress. The paper also outlines a new compression algorithm (LERC) for imagery and elevation data that optimizes this balance. Advantages of the compression include its simple-to-implement algorithm, which enables it to be accessed efficiently using JavaScript. Combining this new cloud-based image storage format and compression will help resolve some of the challenges of big image data on the internet.

  2. Parallel family trees for transfer matrices in the Potts model

    Navarro, Cristobal A.; Canfora, Fabrizio; Hitschfeld, Nancy; Navarro, Gonzalo

    2015-02-01

The computational cost of transfer matrix methods for the Potts model is related to the question: in how many ways can two layers of a lattice be connected? Answering the question leads to the generation of a combinatorial set of lattice configurations. This set defines the configuration space of the problem, and the smaller it is, the faster the transfer matrix can be computed. The configuration space of generic (q, v) transfer matrix methods for strips is on the order of the Catalan numbers, which grow asymptotically as O(4^m), where m is the width of the strip. Other transfer matrix methods with a smaller configuration space do exist, but they make assumptions on the temperature or the number of spin states, or restrict the structure of the lattice. In this paper we propose a parallel algorithm that uses a sub-Catalan configuration space of O(3^m) to build the generic (q, v) transfer matrix in a compressed form. The improvement is achieved by grouping the original set of Catalan configurations into a forest of family trees, in such a way that the solution to the problem is now computed by solving the root node of each family. As a result, the algorithm becomes exponentially faster than the Catalan approach while remaining highly parallel. The resulting matrix is stored in a compressed form using O(3^m × 4^m) space, making numerical evaluation and decompression faster than evaluating the matrix in its O(4^m × 4^m) uncompressed form. Experimental results for different sizes of strip lattices show that the parallel family trees (PFT) strategy indeed runs exponentially faster than the Catalan Parallel Method (CPM), especially when dealing with dense transfer matrices. In terms of parallel performance, we report strong-scaling speedups of up to 5.7× when running on an 8-core shared-memory machine and 28× for a 32-core cluster. The best balance of speedup and efficiency for the multi-core machine was achieved when using p = 4 processors, while for the cluster...
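The growth rates quoted above are easy to check numerically: the m-th Catalan number C_m = C(2m, m)/(m+1) counts the generic configuration space, and it grows as ~4^m/(m^(3/2)√π), so for large strip widths it eventually outgrows the 3^m family-tree space. A small sketch for intuition (not the paper's code):

```python
from math import comb

def catalan(m: int) -> int:
    # C_m = (2m choose m) / (m + 1): size of the generic (q, v)
    # configuration space for a strip of width m
    return comb(2 * m, m) // (m + 1)

# (width, Catalan space, 3^m family-tree space); the Catalan count
# overtakes 3^m only at larger widths because of the m^(3/2) factor
sizes = [(m, catalan(m), 3 ** m) for m in (4, 12, 20)]
```

For example, C_20 = 6,564,120,420 already exceeds 3^20 = 3,486,784,401, and the gap widens exponentially with m.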

  3. COMPUTING

    2010-01-01

    Introduction Just two months after the “LHC First Physics” event of 30th March, the analysis of the O(200) million 7 TeV collision events in CMS accumulated during the first 60 days is well under way. The consistency of the CMS computing model has been confirmed during these first weeks of data taking. This model is based on a hierarchy of use-cases deployed between the different tiers and, in particular, the distribution of RECO data to T1s, who then serve data on request to T2s, along a topology known as “fat tree”. Indeed, during this period this model was further extended by almost full “mesh” commissioning, meaning that RECO data were shipped to T2s whenever possible, enabling additional physics analyses compared with the “fat tree” model. Computing activities at the CMS Analysis Facility (CAF) have been marked by a good time response for a load almost evenly shared between ALCA (Alignment and Calibration tasks - highest p...

  4. COMPUTING

    Contributions from I. Fisk

    2012-01-01

    Introduction The start of the 2012 run has been busy for Computing. We have reconstructed, archived, and served a larger sample of new data than in 2011, and we are in the process of producing an even larger new sample of simulations at 8 TeV. The running conditions and system performance are largely what was anticipated in the plan, thanks to the hard work and preparation of many people. Heavy ions Heavy Ions has been actively analysing data and preparing for conferences.  Operations Office Figure 6: Transfers from all sites in the last 90 days For ICHEP and the Upgrade efforts, we needed to produce and process record amounts of MC samples while supporting the very successful data-taking. This was a large burden, especially on the team members. Nevertheless the last three months were very successful and the total output was phenomenal, thanks to our dedicated site admins who keep the sites operational and the computing project members who spend countless hours nursing the...

  5. COMPUTING

    M. Kasemann

Introduction A large fraction of the effort was focused during the last period into the preparation and monitoring of the February tests of Common VO Computing Readiness Challenge 08. CCRC08 is being run by the WLCG collaboration in two phases, between the centres and all experiments. The February test is dedicated to functionality tests, while the May challenge will consist of running at all centres and with full workflows. For this first period, a number of functionality checks of the computing power, data repositories and archives as well as network links are planned. This will help assess the reliability of the systems under a variety of loads, and identifying possible bottlenecks. Many tests are scheduled together with other VOs, allowing the full scale stress test. The data rates (writing, accessing and transferring) are being checked under a variety of loads and operating conditions, as well as the reliability and transfer rates of the links between Tier-0 and Tier-1s. In addition, the capa...

  6. COMPUTING

    Matthias Kasemann

    Overview The main focus during the summer was to handle data coming from the detector and to perform Monte Carlo production. The lessons learned during the CCRC and CSA08 challenges in May were addressed by dedicated PADA campaigns lead by the Integration team. Big improvements were achieved in the stability and reliability of the CMS Tier1 and Tier2 centres by regular and systematic follow-up of faults and errors with the help of the Savannah bug tracking system. In preparation for data taking the roles of a Computing Run Coordinator and regular computing shifts monitoring the services and infrastructure as well as interfacing to the data operations tasks are being defined. The shift plan until the end of 2008 is being put together. User support worked on documentation and organized several training sessions. The ECoM task force delivered the report on “Use Cases for Start-up of pp Data-Taking” with recommendations and a set of tests to be performed for trigger rates much higher than the ...

  7. COMPUTING

    P. MacBride

The Computing Software and Analysis Challenge CSA07 has been the main focus of the Computing Project for the past few months. Activities began over the summer with the preparation of the Monte Carlo data sets for the challenge and tests of the new production system at the Tier-0 at CERN. The pre-challenge Monte Carlo production was done in several steps: physics generation, detector simulation, digitization, conversion to RAW format and the samples were run through the High Level Trigger (HLT). The data was then merged into three "Soups": Chowder (ALPGEN), Stew (Filtered Pythia) and Gumbo (Pythia). The challenge officially started when the first Chowder events were reconstructed on the Tier-0 on October 3rd. The data operations teams were very busy during the challenge period. The MC production teams continued with signal production and processing while the Tier-0 and Tier-1 teams worked on splitting the Soups into Primary Data Sets (PDS), reconstruction and skimming. The storage sys...

  8. COMPUTING

    I. Fisk

    2013-01-01

    Computing operation has been lower as the Run 1 samples are completing and smaller samples for upgrades and preparations are ramping up. Much of the computing activity is focusing on preparations for Run 2 and improvements in data access and flexibility of using resources. Operations Office Data processing was slow in the second half of 2013 with only the legacy re-reconstruction pass of 2011 data being processed at the sites.   Figure 1: MC production and processing was more in demand with a peak of over 750 Million GEN-SIM events in a single month.   Figure 2: The transfer system worked reliably and efficiently and transferred on average close to 520 TB per week with peaks at close to 1.2 PB.   Figure 3: The volume of data moved between CMS sites in the last six months   The tape utilisation was a focus for the operation teams with frequent deletion campaigns from deprecated 7 TeV MC GEN-SIM samples to INVALID datasets, which could be cleaned up...

  9. COMPUTING

    I. Fisk

    2012-01-01

      Introduction Computing activity has been running at a sustained, high rate as we collect data at high luminosity, process simulation, and begin to process the parked data. The system is functional, though a number of improvements are planned during LS1. Many of the changes will impact users, we hope only in positive ways. We are trying to improve the distributed analysis tools as well as the ability to access more data samples more transparently.  Operations Office Figure 2: Number of events per month, for 2012 Since the June CMS Week, Computing Operations teams successfully completed data re-reconstruction passes and finished the CMSSW_53X MC campaign with over three billion events available in AOD format. Recorded data was successfully processed in parallel, exceeding 1.2 billion raw physics events per month for the first time in October 2012 due to the increase in data-parking rate. In parallel, large efforts were dedicated to WMAgent development and integrati...

  10. Modular tree automata

    Bahr, Patrick

    2012-01-01

    Tree automata are traditionally used to study properties of tree languages and tree transformations. In this paper, we consider tree automata as the basis for modular and extensible recursion schemes. We show, using well-known techniques, how to derive from standard tree automata highly modular...

  11. Simple street tree sampling

    David J. Nowak; Jeffrey T. Walton; James Baldwin; Jerry. Bond

    2015-01-01

    Information on street trees is critical for management of this important resource. Sampling of street tree populations provides an efficient means to obtain street tree population information. Long-term repeat measures of street tree samples supply additional information on street tree changes and can be used to report damages from catastrophic events. Analyses of...

  12. Compression and archiving of digital images

    Huang, H.K.

    1988-01-01

This paper describes the application of a full-frame bit-allocation image compression technique to a hierarchical digital image archiving system consisting of magnetic disks, optical disks and an optical disk library. The digital archiving system without the compression has been in clinical operation in Pediatric Radiology for more than half a year. The database in the system covers all pediatric inpatients, including all images from computed radiography, digitized x-ray films, CT, MR, and US. The rate of image accumulation is approximately 1,900 megabytes per week. The hardware design of the compression module is based on a Motorola 68020 microprocessor, a VME bus, a 16-megabyte image buffer memory board, and three Motorola 56001 digital signal processing chips on a VME board for performing the two-dimensional cosine transform and the quantization. The clinical evaluation of the compression module with the image archiving system is expected to be in February 1988

  13. A Distributed Spanning Tree Algorithm

    Johansen, Karl Erik; Jørgensen, Ulla Lundin; Nielsen, Sven Hauge

    We present a distributed algorithm for constructing a spanning tree for connected undirected graphs. Nodes correspond to processors and edges correspond to two-way channels. Each processor has initially a distinct identity and all processors perform the same algorithm. Computation as well...

  14. A distributed spanning tree algorithm

    Johansen, Karl Erik; Jørgensen, Ulla Lundin; Nielsen, Svend Hauge

    1988-01-01

    We present a distributed algorithm for constructing a spanning tree for connected undirected graphs. Nodes correspond to processors and edges correspond to two way channels. Each processor has initially a distinct identity and all processors perform the same algorithm. Computation as well as comm...
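The algorithm in these two papers is fully distributed over two-way channels; as a loose, centralized stand-in for what it computes (the processor with the smallest identity ends up as root and every other processor records a parent channel), one can sketch:

```python
from collections import deque

def spanning_tree(edges, node_ids):
    """Centralized sketch, not the authors' message-passing algorithm:
    grow a tree outward from the minimum-identity node by exploring
    channels (edges) breadth-first. Returns a parent map; the pairs
    (v, parent[v]) for v != root are the spanning-tree edges."""
    adj = {u: [] for u in node_ids}
    for u, v in edges:           # each edge is a two-way channel
        adj[u].append(v)
        adj[v].append(u)
    root = min(node_ids)         # distinct identities -> a unique root
    parent = {root: None}
    queue = deque([root])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in parent:  # first channel to reach v becomes its tree edge
                parent[v] = u
                queue.append(v)
    return parent

tree = spanning_tree([(3, 1), (1, 4), (4, 2), (2, 3), (4, 5)], {1, 2, 3, 4, 5})
```

On a connected graph with n nodes this always yields exactly n - 1 tree edges, each taken from the original channel set; the distributed version reaches the same kind of structure by exchanging messages instead of central bookkeeping.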

  15. Application of content-based image compression to telepathology

    Varga, Margaret J.; Ducksbury, Paul G.; Callagy, Grace

    2002-05-01

    Telepathology is a means of practicing pathology at a distance, viewing images on a computer display rather than directly through a microscope. Without compression, images take too long to transmit to a remote location and are very expensive to store for future examination. However, to date the use of compressed images in pathology remains controversial. This is because commercial image compression algorithms such as JPEG achieve data compression without knowledge of the diagnostic content. Often images are lossily compressed at the expense of corrupting informative content. None of the currently available lossy compression techniques are concerned with what information has been preserved and what data has been discarded. Their sole objective is to compress and transmit the images as fast as possible. By contrast, this paper presents a novel image compression technique, which exploits knowledge of the slide diagnostic content. This 'content based' approach combines visually lossless and lossy compression techniques, judiciously applying each in the appropriate context across an image so as to maintain 'diagnostic' information while still maximising the possible compression. Standard compression algorithms, e.g. wavelets, can still be used, but their use in a context sensitive manner can offer high compression ratios and preservation of diagnostically important information. When compared with lossless compression the novel content-based approach can potentially provide the same degree of information with a smaller amount of data. When compared with lossy compression it can provide more information for a given amount of compression. The precise gain in the compression performance depends on the application (e.g. database archive or second opinion consultation) and the diagnostic content of the images.

  16. Anisotropic Concrete Compressive Strength

    Gustenhoff Hansen, Søren; Jørgensen, Henrik Brøner; Hoang, Linh Cao

    2017-01-01

    When the load carrying capacity of existing concrete structures is (re-)assessed it is often based on compressive strength of cores drilled out from the structure. Existing studies show that the core compressive strength is anisotropic; i.e. it depends on whether the cores are drilled parallel...

  17. Experiments with automata compression

    Daciuk, J.; Yu, S; Daley, M; Eramian, M G

    2001-01-01

    Several compression methods of finite-state automata are presented and evaluated. Most compression methods used here are already described in the literature. However, their impact on the size of automata has not been described yet. We fill that gap, presenting results of experiments carried out on

  18. COMPUTING

    I. Fisk

    2011-01-01

Introduction The Computing Team successfully completed the storage, initial processing, and distribution for analysis of proton-proton data in 2011. There are still a variety of activities ongoing to support winter conference activities and preparations for 2012. Heavy ions The heavy-ion run for 2011 started in early November and has already demonstrated good machine performance and success of some of the more advanced workflows planned for 2011. Data collection will continue until early December. Facilities and Infrastructure Operations Operational and deployment support for WMAgent and WorkQueue+Request Manager components, routinely used in production by Data Operations, are provided. The GlideInWMS and components installation are now deployed at CERN, which is added to the GlideInWMS factory placed in the US. There has been new operational collaboration between the CERN team and the UCSD GlideIn factory operators, covering each other's time zones by monitoring/debugging pilot jobs sent from the facto...

  19. Bitshuffle: Filter for improving compression of typed binary data

    Masui, Kiyoshi

    2017-12-01

Bitshuffle rearranges typed, binary data for improving compression; the algorithm is implemented in a Python/C package within the NumPy framework. The library can be used alongside HDF5 to compress and decompress datasets and is integrated through the dynamically loaded filters framework. Algorithmically, Bitshuffle is closely related to HDF5's Shuffle filter, except it operates at the bit level instead of the byte level. Arranging a typed data array into a matrix with the elements as the rows and the bits within the elements as the columns, Bitshuffle "transposes" the matrix, such that all the least-significant bits are in a row, etc. This transposition is performed within blocks of data roughly 8 kB long; it does not in itself compress the data, but rearranges it for more efficient compression. A compression library is necessary to perform the actual compression. This scheme has been used for compression of radio data in high-performance computing.
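The bit-level transpose described above can be sketched in NumPy for a single block; this is an illustration of the idea, not the library's optimized C implementation, and the block and element sizes are illustrative:

```python
import numpy as np

def bitshuffle_block(data: np.ndarray) -> np.ndarray:
    """Bit-transpose one block of fixed-width integers: view the block
    as a bit matrix (one row per element, one column per bit position)
    and transpose it, so all least-significant bits become contiguous.
    This only rearranges bits; a general-purpose compressor is applied
    afterwards to the returned byte stream."""
    n = len(data)
    bits = np.unpackbits(data.view(np.uint8).reshape(n, -1),
                         axis=1, bitorder="little")       # shape (n, nbits)
    return np.packbits(bits.T, axis=1, bitorder="little").ravel()

def bitunshuffle_block(shuf: np.ndarray, n: int, dtype) -> np.ndarray:
    """Inverse transform: transpose the bit matrix back and repack."""
    nbits = np.dtype(dtype).itemsize * 8
    bits = np.unpackbits(shuf.reshape(nbits, n // 8),
                         axis=1, bitorder="little")       # shape (nbits, n)
    return np.packbits(bits.T, axis=1, bitorder="little").ravel().view(dtype)

# round trip on a toy block (n is a multiple of 8 in this sketch)
block = (np.arange(64, dtype=np.uint16) * 37) % 1024
restored = bitunshuffle_block(bitshuffle_block(block), len(block), np.uint16)
```

Because slowly varying data tends to have near-constant high bits, grouping same-position bits together produces long runs that downstream compressors handle far better than the original byte order.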

  20. City of Pittsburgh Trees

    Allegheny County / City of Pittsburgh / Western PA Regional Data Center — Trees cared for and managed by the City of Pittsburgh Department of Public Works Forestry Division. Tree Benefits are calculated using the National Tree Benefit...

  1. Development and assessment of compression technique for medical images using neural network. I. Assessment of lossless compression

    Fukatsu, Hiroshi

    2007-01-01

    This paper describes an assessment of the lossless compression of a new efficient compression technique (the JIS system) using a neural network that the author and co-workers have recently developed. First, the theory for encoding and decoding the data is explained. The assessment covers 55 images each of chest digital roentgenography, digital mammography, 64-row multi-slice CT, 1.5 Tesla MRI, positron emission tomography (PET) and digital subtraction angiography, which are lossless-compressed by the present JIS system to measure the compression rate and loss. For comparison, the same data are also JPEG lossless-compressed. The personal computer (PC) used is an Apple MacBook Pro configured with Boot Camp for a Windows environment. The JIS system is found to compress file volume to 1/11 on average, more than 4 times the efficiency of the usual compression methods, and is thus an important response to the ever-increasing volume of medical imaging data. (R.T.)

  2. Compressibility of air in fibrous materials

    Tarnow, Viggo

    1996-01-01

    The dynamic compressibility of air in fibrous materials has been computed for two assumed configurations of fibers which are close to the geometry of real fiber materials. Models with parallel cylinders placed in a regular square lattice and placed randomly are treated. For these models...... the compressibility is computed approximately from the diameter and mean distances between cylinders. This requires calculation of the air temperature, which is calculated for cylinders in a regular lattice by the Wigner-Seitz cell approximation. In the case of random placement, the calculation is done by a summation...... over thermal waves from all fibers, and by a self-consistent procedure. Figures of the compressibility in the frequency range 10-100 000 Hz are given for a cylinder diameter of 6.8 µm and mean distances between cylinders from 50 to 110 µm, which corresponds to glass wool with a density of 40 to 16...

  3. COMPUTING

    M. Kasemann

    CMS relies on a well-functioning, distributed computing infrastructure. The Site Availability Monitoring (SAM) and the Job Robot submission have been very instrumental for site commissioning, increasing the number of sites available to participate in CSA07 and ready to be used for analysis. The commissioning process has been further developed, including "lessons learned" documentation via the CMS twiki. Recently the visualization, presentation and summarizing of SAM tests for sites has been redesigned; it is now developed by the central ARDA project of WLCG. Work to test the new gLite Workload Management System was performed; a 4 times increase in throughput with respect to the LCG Resource Broker is observed. CMS has designed and launched a new-generation traffic load generator called "LoadTest" to commission and keep exercised all data transfer routes in the CMS PhEDEx topology. Since mid-February, a transfer volume of about 12 P...

  4. Million Trees Los Angeles: Carbon dioxide sink or source?

    E.G. McPherson; A. Kendall; S. Albers

    2015-01-01

    This study seeks to answer the question, 'Will the Million Trees LA (MTLA) programme be a CO2 sink or source?' Using surveys, interviews, field sampling and computer simulation of tree growth and survival over a 40-year period, we developed the first process-based life cycle inventory of CO2 for a large tree...

  5. Dissimilarity between two skeletal trees in a context

    Baseski, Emre; Erdem, Aykut; Tari, Sibel

    2009-01-01

    Skeletal trees are commonly used to express geometric properties of a shape. Accordingly, tree-edit distance is used to compute a dissimilarity between two given shapes. We present a new tree-edit based shape matching method which uses a recent coarse skeleton representation. The coars...
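
    As a concrete reference point, the tree-edit distance mentioned above can be computed for small ordered trees with the textbook forest recursion. This is a hedged sketch of the generic unit-cost distance, not the paper's coarse-skeleton matcher:

```python
from functools import lru_cache

def tree_edit_distance(t1, t2):
    """Ordered-tree edit distance with unit insert/delete/relabel costs.
    Trees are (label, (child, child, ...)) tuples. Memoised textbook
    recursion over forests; exponential-state in the worst case, fine
    for small skeletal trees."""
    def size(t):
        return 1 + sum(size(c) for c in t[1])

    @lru_cache(maxsize=None)
    def fed(f1, f2):                          # distance between forests
        if not f1 and not f2:
            return 0
        if not f1:
            return sum(size(t) for t in f2)   # insert everything
        if not f2:
            return sum(size(t) for t in f1)   # delete everything
        (l1, c1), r1 = f1[0], f1[1:]
        (l2, c2), r2 = f2[0], f2[1:]
        return min(
            1 + fed(c1 + r1, f2),                    # delete root of f1[0]
            1 + fed(f1, c2 + r2),                    # insert root of f2[0]
            (l1 != l2) + fed(c1, c2) + fed(r1, r2),  # match the two roots
        )

    return fed((t1,), (t2,))
```

    With labels taken from skeleton branch attributes, the returned value plays the role of the raw (context-free) dissimilarity that the paper then refines.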

  6. The decision tree approach to classification

    Wu, C.; Landgrebe, D. A.; Swain, P. H.

    1975-01-01

    A class of multistage decision tree classifiers is proposed and studied relative to the classification of multispectral remotely sensed data. The decision tree classifiers are shown to have the potential for improving both the classification accuracy and the computation efficiency. Dimensionality in pattern recognition is discussed and two theorems on the lower bound of logic computation for multiclass classification are derived. The automatic or optimization approach is emphasized. Experimental results on real data are reported, which clearly demonstrate the usefulness of decision tree classifiers.

  7. Video Coding Technique using MPEG Compression Standards

    Akorede

    The two-dimensional discrete cosine transform (2-D DCT) is an integral part of video and image compression ... a solution for the optimum trade-off by applying rate-distortion theory has been ...
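
    For reference, the 2-D DCT at the heart of such codecs can be written directly from its definition. This is an O(N⁴) pure-Python version for clarity; production encoders use fast factorizations:

```python
import math

def dct2(block):
    """Direct 2-D DCT-II of an N x N block with orthonormal scaling.
    Reference implementation only: O(N^4) arithmetic."""
    n = len(block)
    def alpha(k):
        return math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = 0.0
            for x in range(n):
                for y in range(n):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * n)))
            out[u][v] = alpha(u) * alpha(v) * s
    return out

# A constant block concentrates all energy in the DC coefficient (0, 0),
# which is why smooth image regions compress so well after the DCT.
flat = [[10.0] * 8 for _ in range(8)]
coeffs = dct2(flat)
```

    The energy-compaction behaviour shown here is what rate-distortion-driven quantization then exploits: most AC coefficients can be coarsely quantized or dropped.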

  8. Compressive multi-mode superresolution display

    Heide, Felix; Gregson, James; Wetzstein, Gordon; Raskar, Ramesh D.; Heidrich, Wolfgang

    2014-01-01

    consists of readily-available components and is driven by a novel splitting algorithm that computes the pixel states from a target high-resolution image. In effect, the display pixels present a compressed representation of the target image that is perceived

  9. Deterministic Differential Properties of the Compression Function of BMW

    Guo, Jian; Thomsen, Søren Steffen

    2011-01-01

    In this paper, we give some deterministic differential properties for the compression function of SHA-3 candidate Blue Midnight Wish (tweaked version for round 2). The computational complexity is about 20 compression function calls. This applies to security parameters 0/16, 1/15, and 2/14. The eff...

  10. Extreme compression for extreme conditions: pilot study to identify optimal compression of CT images using MPEG-4 video compression.

    Peterson, P Gabriel; Pak, Sung K; Nguyen, Binh; Jacobs, Genevieve; Folio, Les

    2012-12-01

    This study aims to evaluate the utility of compressed computed tomography (CT) studies (to expedite transmission) using Motion Pictures Experts Group, Layer 4 (MPEG-4) movie formatting in combat hospitals when guiding major treatment regimens. This retrospective analysis was approved by the Walter Reed Army Medical Center institutional review board with a waiver for the informed consent requirement. Twenty-five CT chest, abdomen, and pelvis exams were converted from Digital Imaging and Communications in Medicine to MPEG-4 movie format at various compression ratios. Three board-certified radiologists reviewed various levels of compression on emergent CT findings on 25 combat casualties and compared them with the interpretation of the original series. A Universal Trauma Window was selected at -200 HU level and 1,500 HU width, then compressed at three lossy levels. Sensitivities and specificities for each reviewer were calculated along with 95 % confidence intervals using the method of general estimating equations. The compression ratios compared were 171:1, 86:1, and 41:1 with combined sensitivities of 90 % (95 % confidence interval, 79-95), 94 % (87-97), and 100 % (93-100), respectively. Combined specificities were 100 % (85-100), 100 % (85-100), and 96 % (78-99), respectively. The introduction of CT in combat hospitals with increasing detectors and image data in recent military operations has increased the need for effective teleradiology, mandating compression technology. Image compression is currently used to transmit images from combat hospitals to tertiary care centers with subspecialists, and our study demonstrates MPEG-4 technology as a reasonable means of achieving such compression.

  11. Image splitting and remapping method for radiological image compression

    Lo, Shih-Chung B.; Shen, Ellen L.; Mun, Seong K.

    1990-07-01

    A new decomposition method using image splitting and gray-level remapping has been proposed for image compression, particularly for images with high contrast resolution. The effects of this method are especially evident in our radiological image compression study. In our experiments, we tested the impact of this decomposition method on image compression by employing it with two coding techniques on a set of clinically used CT images and several laser film digitized chest radiographs. One of the compression techniques used was full-frame bit-allocation in the discrete cosine transform domain, which has been proven to be an effective technique for radiological image compression. The other compression technique used was vector quantization with pruned tree-structured encoding, which through recent research has also been found to produce a low mean-square-error and a high compression ratio. The parameters we used in this study were mean-square-error and the bit rate required for the compressed file. In addition to these parameters, the difference between the original and reconstructed images will be presented so that the specific artifacts generated by both techniques can be discerned by visual perception.

  12. Cooperative Media Streaming Using Adaptive Network Compression

    Møller, Janus Heide; Sørensen, Jesper Hemming; Krigslund, Rasmus

    2008-01-01

    as an adaptive hybrid between LC and MDC. In order to facilitate the use of MDC-CC, a new overlay network approach is proposed, using a tree of meshes. A control system for managing description distribution and compression in a small mesh is implemented in the discrete event simulator NS-2. The two traditional...... approaches, MDC and LC, are used as references for the performance evaluation of the proposed scheme. The system is simulated in a heterogeneous network environment, where packet errors are introduced. Moreover, tests are performed at different network loads. Performance gains are shown over both LC and MDC....

  13. Compressive laser ranging.

    Babbitt, Wm Randall; Barber, Zeb W; Renner, Christoffer

    2011-12-15

    Compressive sampling has been previously proposed as a technique for sampling radar returns and determining sparse range profiles with a reduced number of measurements compared to conventional techniques. By employing modulation on both transmission and reception, compressive sensing in ranging is extended to the direct measurement of range profiles without intermediate measurement of the return waveform. This compressive ranging approach enables the use of pseudorandom binary transmit waveforms and return modulation, along with low-bandwidth optical detectors to yield high-resolution ranging information. A proof-of-concept experiment is presented. With currently available compact, off-the-shelf electronics and photonics, such as high data rate binary pattern generators and high-bandwidth digital optical modulators, compressive laser ranging can readily achieve subcentimeter resolution in a compact, lightweight package.

  14. The compressed word problem for groups

    Lohrey, Markus

    2014-01-01

    The Compressed Word Problem for Groups provides a detailed exposition of known results on the compressed word problem, emphasizing efficient algorithms for the compressed word problem in various groups. The author presents the necessary background along with the most recent results on the compressed word problem to create a cohesive self-contained book accessible to computer scientists as well as mathematicians. Readers will quickly reach the frontier of current research which makes the book especially appealing for students looking for a currently active research topic at the intersection of group theory and computer science. The word problem introduced in 1910 by Max Dehn is one of the most important decision problems in group theory. For many groups, highly efficient algorithms for the word problem exist. In recent years, a new technique based on data compression for providing more efficient algorithms for word problems, has been developed, by representing long words over group generators in a compres...

  15. Random Access to Grammar-Compressed Strings and Trees

    Bille, Philip; Landau, Gad M.; Raman, Rajeev

    2015-01-01

    representations of S achieving O(log N) random access time, and either O(n · αk(n)) construction time and space on the pointer machine model, or O(n) construction time and space on the RAM. Here, αk(n) is the inverse of the kth row of Ackermann's function. Our representations also efficiently support...

  16. NRGC: a novel referential genome compression algorithm.

    Saha, Subrata; Rajasekaran, Sanguthevar

    2016-11-15

    Next-generation sequencing techniques produce millions to billions of short reads. The procedure is not only very cost-effective but can also be done in a laboratory environment. The state-of-the-art sequence assemblers then construct the whole genomic sequence from these reads. Current cutting-edge computing technology makes it possible to build genomic sequences from the billions of reads with minimal cost and time. As a consequence, we see an explosion of biological sequences in recent times. In turn, the cost of storing the sequences in physical memory or transmitting them over the internet is becoming a major bottleneck for research and future medical applications. Data compression techniques are one of the most important remedies in this context. We are in need of suitable data compression algorithms that can exploit the inherent structure of biological sequences. Although standard data compression algorithms are prevalent, they are not suitable for compressing biological sequencing data effectively. In this article, we propose a novel referential genome compression algorithm (NRGC) to effectively and efficiently compress genomic sequences. We have done rigorous experiments to evaluate NRGC by taking a set of real human genomes. The simulation results show that our algorithm is indeed an effective genome compression algorithm that performs better than the best-known algorithms in most cases. Compression and decompression times are also very impressive. The implementations are freely available for non-commercial purposes. They can be downloaded from: http://www.engr.uconn.edu/~rajasek/NRGC.zip CONTACT: rajasek@engr.uconn.edu. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
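
    The core idea of referential compression can be illustrated with a toy encoder that replaces substrings shared with a reference genome by copy operations. This is a greedy, quadratic sketch with invented names; NRGC's actual algorithm and statistical model are far more sophisticated:

```python
def ref_encode(target, reference, min_match=4):
    """Toy referential encoder: emit ('C', pos, length) for substrings
    copied from the reference and ('L', chars) for literals. Brute-force
    longest-match search; real tools use indexes over the reference."""
    ops, i, lit = [], 0, []
    while i < len(target):
        best_pos, best_len = -1, 0
        for j in range(len(reference)):
            k = 0
            while (i + k < len(target) and j + k < len(reference)
                   and target[i + k] == reference[j + k]):
                k += 1
            if k > best_len:
                best_pos, best_len = j, k
        if best_len >= min_match:
            if lit:
                ops.append(('L', ''.join(lit)))
                lit = []
            ops.append(('C', best_pos, best_len))
            i += best_len
        else:
            lit.append(target[i])
            i += 1
    if lit:
        ops.append(('L', ''.join(lit)))
    return ops

def ref_decode(ops, reference):
    """Reconstruct the target from copy/literal operations."""
    out = []
    for op in ops:
        if op[0] == 'C':
            out.append(reference[op[1]:op[1] + op[2]])
        else:
            out.append(op[1])
    return ''.join(out)
```

    Because two human genomes are overwhelmingly identical, almost the whole target collapses into a handful of long copy operations, which is where the dramatic space savings come from.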

  17. Categorizing ideas about trees: a tree of trees.

    Fisler, Marie; Lecointre, Guillaume

    2013-01-01

    The aim of this study is to explore whether matrices and MP trees used to produce systematic categories of organisms could be useful to produce categories of ideas in history of science. We study the history of the use of trees in systematics to represent the diversity of life from 1766 to 1991. We apply to those ideas a method inspired from coding homologous parts of organisms. We discretize conceptual parts of ideas, writings and drawings about trees contained in 41 main writings; we detect shared parts among authors and code them into a 91-characters matrix and use a tree representation to show who shares what with whom. In other words, we propose a hierarchical representation of the shared ideas about trees among authors: this produces a "tree of trees." Then, we categorize schools of tree-representations. Classical schools like "cladists" and "pheneticists" are recovered but others are not: "gradists" are separated into two blocks, one of them being called here "grade theoreticians." We propose new interesting categories like the "buffonian school," the "metaphoricians," and those using "strictly genealogical classifications." We consider that networks are not useful to represent shared ideas at the present step of the study. A cladogram is made for showing who is sharing what with whom, but also heterobathmy and homoplasy of characters. The present cladogram is not modelling processes of transmission of ideas about trees, and here it is mostly used to test for proximity of ideas of the same age and for categorization.

  18. Total Correlation Function Integrals and Isothermal Compressibilities from Molecular Simulations

    Wedberg, Rasmus; Peters, Günther H.j.; Abildskov, Jens

    2008-01-01

    Generation of thermodynamic data, here compressed liquid density and isothermal compressibility data, using molecular dynamics simulations is investigated. Five normal alkane systems are simulated at three different state points. We compare two main approaches to isothermal compressibilities: (1...... in approximately the same amount of time. This suggests that computation of total correlation function integrals is a route to isothermal compressibility, as accurate and fast as well-established benchmark techniques. A crucial step is the integration of the radial distribution function. To obtain sensible results...
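
    The route from total correlation function integrals to the isothermal compressibility referred to above is the standard compressibility equation of liquid-state theory, stated here from the general literature rather than transcribed from the paper:

```latex
% Compressibility equation: h(r) = g(r) - 1 is the total correlation
% function; its integral over all space links structure to kappa_T.
\rho \, k_{\mathrm{B}} T \, \kappa_T \;=\; 1 + 4\pi\rho \int_0^{\infty} \bigl( g(r) - 1 \bigr)\, r^2 \, \mathrm{d}r
```

    The "crucial step" the abstract mentions is the numerical integration on the right-hand side, since the slowly decaying tail of g(r) - 1 from a finite simulation box dominates the error.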

  19. Compressibility and thermal expansion of cubic silicon nitride

    Jiang, Jianzhong; Lindelov, H.; Gerward, Leif

    2002-01-01

    The compressibility and thermal expansion of the cubic silicon nitride (c-Si3N4) phase have been investigated by performing in situ x-ray powder-diffraction measurements using synchrotron radiation, complemented with computer simulations by means of first-principles calculations. The bulk...... compressibility of the c-Si3N4 phase originates from the average of both Si-N tetrahedral and octahedral compressibilities where the octahedral polyhedra are less compressible than the tetrahedral ones. The origin of the unit cell expansion is revealed to be due to the increase of the octahedral Si-N and N-N bond...

  20. Urban tree growth modeling

    E. Gregory McPherson; Paula J. Peper

    2012-01-01

    This paper describes three long-term tree growth studies conducted to evaluate tree performance because repeated measurements of the same trees produce critical data for growth model calibration and validation. Several empirical and process-based approaches to modeling tree growth are reviewed. Modeling is more advanced in the fields of forestry and...

  1. Keeping trees as assets

    Kevin T. Smith

    2009-01-01

    Landscape trees have real value and contribute to making livable communities. Making the most of that value requires providing trees with the proper care and attention. As potentially large and long-lived organisms, trees benefit from commitment to regular care that respects the natural tree system. This system captures, transforms, and uses energy to survive, grow,...

  2. Multiband and Lossless Compression of Hyperspectral Images

    Raffaele Pizzolante

    2016-02-01

    Hyperspectral images are widely used in several real-life applications. In this paper, we investigate the compression of hyperspectral images by considering different aspects, including the optimization of the computational complexity in order to allow implementations on limited hardware (i.e., hyperspectral sensors, etc.). We present an approach that relies on a three-dimensional predictive structure. Our predictive structure, 3D-MBLP, uses one or more previous bands as references to exploit the redundancies along the third dimension. The achieved results are comparable with, and often better than, other state-of-the-art lossless compression techniques for hyperspectral images.

  3. Automated Generation of Attack Trees

    Vigo, Roberto; Nielson, Flemming; Nielson, Hanne Riis

    2014-01-01

    Attack trees are widely used to represent threat scenarios in a succinct and intuitive manner, suitable for conveying security information to non-experts. The manual construction of such objects relies on the creativity and experience of specialists, and therefore it is error-prone and impracticable for large systems. Nonetheless, the automated generation of attack trees has only been explored in connection with computer networks, leveraging rich models whose analysis typically leads to an exponential blow-up of the state space. We propose a static analysis approach where attack trees are automatically inferred from a process algebraic specification in a syntax-directed fashion, encompassing a great many application domains and avoiding a systematic exponential explosion. Moreover, we show how the standard propositional denotation of an attack tree can be used to phrase...
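
    The "standard propositional denotation" mentioned at the end can be made concrete: an attack tree denotes a Boolean formula over basic actions, with conjunction and disjunction at internal nodes. The scenario and node names below are hypothetical:

```python
def denote(tree):
    """Propositional denotation of an attack tree: leaves are atomic
    attack steps, internal nodes are ('AND', ...) or ('OR', ...).
    Returns a predicate over the set of performed basic actions."""
    if isinstance(tree, str):                        # basic action
        return lambda done, t=tree: t in done
    op, *subs = tree
    fs = [denote(s) for s in subs]
    if op == 'AND':
        return lambda done: all(f(done) for f in fs)
    return lambda done: any(f(done) for f in fs)     # OR node

# Hypothetical goal: steal data by (phish AND exfiltrate) OR insider.
goal = ('OR', ('AND', 'phish', 'exfiltrate'), 'insider')
attack_succeeds = denote(goal)
```

    Phrasing the tree as a formula in this way is what lets off-the-shelf SAT-style reasoning answer questions such as "which minimal sets of basic actions realize the threat?".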

  4. Modular representation and analysis of fault trees

    Olmos, J; Wolf, L [Massachusetts Inst. of Tech., Cambridge (USA). Dept. of Nuclear Engineering

    1978-08-01

    An analytical method to describe fault tree diagrams in terms of their modular compositions is developed. Fault tree structures are characterized by recursively relating the top tree event to all its basic component inputs through a set of equations defining each of the modules of the fault tree. It is shown that such a modular description is an extremely valuable tool for making a quantitative analysis of fault trees. The modularization methodology has been implemented in the PL-MOD computer code, written in the PL/1 language, which is capable of modularizing fault trees containing replicated components and replicated modular gates. PL-MOD can in addition handle mutually exclusive inputs and explicit higher-order symmetric (k-out-of-n) gates. The step-by-step modularization of fault trees performed by PL-MOD is demonstrated, and it is shown how this procedure is only made possible through an extensive use of the list processing tools available in PL/1. A number of nuclear reactor safety system fault trees were analyzed. PL-MOD performed the modularization and evaluation of the modular occurrence probabilities and Vesely-Fussell importance measures for these systems very efficiently. In particular, its execution time for the modularization of a PWR High Pressure Injection System reduced fault tree was 25 times faster than that necessary to generate its equivalent minimal cut-set description using MOCUS, a code considered fast by present standards.
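
    The modular evaluation of occurrence probabilities can be sketched for coherent trees with statistically independent basic events: each module reduces to a simple AND/OR probability combination, which is exactly why modules (whose inputs appear nowhere else) can be quantified bottom-up. A minimal illustration, not PL-MOD's algorithm; the event names and numbers are invented:

```python
def gate_prob(node, p):
    """Top-event probability of a coherent fault tree over independent
    basic events: AND multiplies probabilities; OR combines as
    1 - prod(1 - q). Nodes are event names or ('AND'/'OR', inputs...)."""
    if isinstance(node, str):
        return p[node]
    op, *inputs = node
    qs = [gate_prob(i, p) for i in inputs]
    acc = 1.0
    if op == 'AND':
        for q in qs:
            acc *= q
        return acc
    for q in qs:                # OR gate
        acc *= (1.0 - q)
    return 1.0 - acc

# Hypothetical module: injection fails if both pumps fail or power is lost.
tree = ('OR', ('AND', 'pump_a', 'pump_b'), 'power')
top = gate_prob(tree, {'pump_a': 0.1, 'pump_b': 0.1, 'power': 0.01})
```

    Replicated components break the independence assumption, which is precisely why PL-MOD's handling of replicated inputs and gates requires the more elaborate modular machinery described above.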

  5. MeshTree: A Delay optimised Overlay Multicast Tree Building Protocol

    Tan, Su-Wei; Waters, A. Gill; Crawford, John

    2005-01-01

    We study decentralised low delay degree-constrained overlay multicast tree construction for single source real-time applications. This optimisation problem is NP-hard even if computed centrally. We identify two problems in traditional distributed solutions, namely the greedy problem and delay-cost trade-off. By offering solutions to these problems, we propose a new self-organising distributed tree building protocol called MeshTree. The main idea is to embed the delivery tree in a degree-bound...

  6. The mathematical theory of signal processing and compression-designs

    Feria, Erlan H.

    2006-05-01

    The mathematical theory of signal processing, named processor coding, will be shown to arise inherently as the computational-time dual of Shannon's mathematical theory of communication, which is also known as source coding. Source coding is concerned with signal source memory-space compression, while processor coding deals with signal processor computational-time compression. Their combination is named compression-designs, referred to as Conde for short. A compelling and pedagogically appealing diagram will be discussed, highlighting Conde's remarkably successful application to real-world knowledge-aided (KA) airborne moving target indicator (AMTI) radar.

  7. Optical pulse compression

    Glass, A.J.

    1975-01-01

    The interest in using large lasers to achieve a very short and intense pulse for generating fusion plasma has provided a strong impetus to reexamine the possibilities of optical pulse compression at high energy. Pulse compression allows one to generate pulses of long duration (minimizing damage problems) and subsequently compress optical pulses to achieve the short pulse duration required for specific applications. The ideal device for carrying out this program has not been developed. Of the two approaches considered, the Gires-Tournois approach is limited by the fact that the bandwidth and compression are intimately related, so that the group delay dispersion times the square of the bandwidth is about unity for all simple Gires-Tournois interferometers. The Treacy grating pair does not suffer from this limitation, but is inefficient because diffraction generally occurs in several orders and is limited by the problem of optical damage to the grating surfaces themselves. Nonlinear and parametric processes were explored. Some pulse compression was achieved by these techniques; however, they are generally difficult to control and are not very efficient. (U.S.)

  8. Music analysis and point-set compression

    Meredith, David

    2015-01-01

    COSIATEC, SIATECCompress and Forth’s algorithm are point-set compression algorithms developed for discovering repeated patterns in music, such as themes and motives that would be of interest to a music analyst. To investigate their effectiveness and versatility, these algorithms were evaluated...... on three analytical tasks that depend on the discovery of repeated patterns: classifying folk song melodies into tune families, discovering themes and sections in polyphonic music, and discovering subject and countersubject entries in fugues. Each algorithm computes a compressed encoding of a point......-set representation of a musical object in the form of a list of compact patterns, each pattern being given with a set of vectors indicating its occurrences. However, the algorithms adopt different strategies in their attempts to discover encodings that maximize compression. The best-performing algorithm on the folk...
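
    The "compact patterns with occurrence vectors" that these algorithms encode come out of the SIA family's maximal translatable patterns. A minimal sketch for 2-D point sets, illustrative only and not any of the three algorithms in full:

```python
from collections import defaultdict

def mtps(points):
    """Maximal translatable patterns (the SIA step underlying COSIATEC
    and SIATECCompress): for each translation vector v, MTP(v) is the
    set of points p such that p + v is also in the point set."""
    pts = sorted(points)
    out = defaultdict(list)
    for i, p in enumerate(pts):
        for q in pts[i + 1:]:                 # q > p, so each pair gives one v
            v = (q[0] - p[0], q[1] - p[1])
            out[v].append(p)
    return dict(out)

# Points as (onset, pitch) pairs; a two-note motive repeated 4 units later.
notes = [(0, 60), (1, 62), (4, 60), (5, 62)]
patterns = mtps(notes)
```

    A compressed encoding then keeps one copy of a large MTP plus its occurrence vectors instead of all the individual points; the three algorithms differ in how they select which patterns to keep.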

  9. Isentropic Compression of Argon

    Oona, H.; Solem, J.C.; Veeser, L.R.; Ekdahl, C.A.; Rodriquez, P.J.; Younger, S.M.; Lewis, W.; Turley, W.D.

    1997-01-01

    We are studying the transition of argon from an insulator to a conductor by compressing the frozen gas isentropically to pressures at which neighboring atomic orbitals overlap sufficiently to allow some electron motion between atoms. Argon and the other rare gases have closed electron shells and therefore remain monatomic, even when they solidify. Their simple structure makes it likely that any measured change in conductivity is due to changes in the atomic structure, not in molecular configuration. As the crystal is compressed the band gap closes, allowing increased conductivity. We have begun research to determine the conductivity at high pressures, and it is our intention to determine the compression at which the crystal becomes a metal

  10. Pulsed Compression Reactor

    Roestenberg, T. [University of Twente, Enschede (Netherlands)

    2012-06-07

    The advantages of the Pulsed Compression Reactor (PCR) over internal combustion engine-type chemical reactors are briefly discussed. Over the last four years a project concerning the fundamentals of the PCR technology has been performed by the University of Twente, Enschede, Netherlands. In order to assess the feasibility of applying the PCR principle to the conversion of methane to syngas, several fundamental questions needed to be answered. Two important questions that relate to the applicability of the PCR for any process are: how large is the heat transfer rate from a rapidly compressed and expanded volume of gas, and how does this heat transfer rate compare to the energy contained in the compressed gas? And: can stable operation with a completely free piston, as intended for the PCR, be achieved?

  11. Medullary compression syndrome

    Barriga T, L.; Echegaray, A.; Zaharia, M.; Pinillos A, L.; Moscol, A.; Barriga T, O.; Heredia Z, A.

    1994-01-01

    The authors made a retrospective study of 105 patients treated in the Radiotherapy Department of the National Institute of Neoplastic Diseases from 1973 to 1992. The objective of this evaluation was to determine the influence of radiotherapy on patients with medullary compression syndrome, in aspects concerning pain palliation and improvement of functional impairment. Treatment sheets of patients with medullary compression were reviewed: 32 out of 39 patients (82%) came to hospital by their own means and continued walking after treatment; 8 out of 66 patients (12%) who came in a wheelchair or were bedridden could mobilize on their own after treatment; 41 patients (64%) had partial alleviation of pain after treatment. In those who came by their own means and did not change their characteristics, functional improvement was observed. It is concluded that radiotherapy offers palliative benefit in patients with medullary compression syndrome. (authors). 20 refs., 5 figs., 6 tabs

  12. There's Life in Hazard Trees

    Mary Torsello; Toni McLellan

    The goals of hazard tree management programs are to maximize public safety and maintain a healthy sustainable tree resource. Although hazard tree management frequently targets removal of trees or parts of trees that attract wildlife, it can take into account a diversity of tree values. With just a little extra planning, hazard tree management can be highly beneficial...

  13. Representing Boolean Functions by Decision Trees

    Chikalov, Igor

    2011-01-01

    A Boolean or discrete function can be represented by a decision tree. A compact form of decision tree named binary decision diagram or branching program is widely known in logic design [2, 40]. This representation is equivalent to other forms, and in some cases it is more compact than a table of values or even a formula [44]. Representing a function in the form of a decision tree allows applying graph algorithms for various transformations [10]. Decision trees and branching programs are used for effective hardware [15] and software [5] implementation of functions. For the implementation to be effective, the function representation should have minimal time and space complexity. The average depth of a decision tree characterizes the expected computing time, and the number of nodes in a branching program characterizes the number of functional elements required for implementation. Often these two criteria are incompatible, i.e. there is no solution that is optimal with respect to both time and space complexity. © Springer-Verlag Berlin Heidelberg 2011.
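
    Building a decision tree for a Boolean function, with the obvious reduction when both branches agree, can be sketched via Shannon expansion. This is a minimal illustration of the representation itself, not of any particular minimization algorithm from the book:

```python
def build(f, n, prefix=()):
    """Decision tree for an n-variable Boolean function f (mapping a 0/1
    tuple to 0 or 1), built by Shannon expansion in variable order.
    A node is (var_index, low, high); a leaf is 0 or 1. When both
    branches are identical the test is redundant and is dropped."""
    i = len(prefix)
    if i == n:
        return f(prefix)
    lo = build(f, n, prefix + (0,))
    hi = build(f, n, prefix + (1,))
    return lo if lo == hi else (i, lo, hi)

def evaluate(tree, assignment):
    """Follow the tree for one input assignment."""
    while isinstance(tree, tuple):
        var, lo, hi = tree
        tree = hi if assignment[var] else lo
    return tree

def count_nodes(tree):
    """Space-complexity proxy: number of internal (test) nodes."""
    if not isinstance(tree, tuple):
        return 0
    return 1 + count_nodes(tree[1]) + count_nodes(tree[2])

xor3 = build(lambda a: a[0] ^ a[1] ^ a[2], 3)   # parity needs the full tree
const = build(lambda a: 1, 3)                   # collapses to a single leaf
```

    The contrast between `xor3` (7 internal nodes) and `const` (none) is the time/space trade-off in miniature: node count stands in for hardware cost, while path length stands in for computing time.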

  14. Design of data structures for mergeable trees

    Georgiadis, Loukas; Tarjan, Robert Endre; Werneck, Renato Fonseca F.

    2006-01-01

    Motivated by an application in computational topology, we consider a novel variant of the problem of efficiently maintaining dynamic rooted trees. This variant allows an operation that merges two tree paths. In contrast to the standard problem, in which only one tree arc at a time changes, a single merge operation can change many arcs. In spite of this, we develop a data structure that supports merges and all other standard tree operations in O(log² n) amortized time on an n-node forest. For the special case that occurs in the motivating application, in which arbitrary arc deletions are not allowed, we give a data structure with an O(log n) amortized time bound per operation, which is asymptotically optimal. The analysis of both algorithms is not straightforward and requires ideas not previously used in the study of dynamic trees. We explore the design space of algorithms for the problem...

  15. Spanning Tree Based Attribute Clustering

    Zeng, Yifeng; Jorge, Cordero Hernandez

    2009-01-01

    Attribute clustering has been previously employed to detect statistical dependence between subsets of variables. We propose a novel attribute clustering algorithm motivated by research on complex networks, called the Star Discovery algorithm. The algorithm partitions and indirectly discards inconsistent edges from a maximum spanning tree by starting from appropriate initial modes, thereby generating stable clusters. It discovers sound clusters through simple graph operations and achieves significant computational savings. We compare the Star Discovery algorithm against earlier attribute clustering...

  16. Spatial compression algorithm for the analysis of very large multivariate images

    Keenan, Michael R [Albuquerque, NM

    2008-07-15

    A method for spatially compressing data sets enables the efficient analysis of very large multivariate images. The spatial compression algorithms use a wavelet transformation to map an image into a compressed image containing a smaller number of pixels that retain the original image's information content. Image analysis can then be performed on a compressed data matrix consisting of a reduced number of significant wavelet coefficients. Furthermore, a block algorithm can be used for performing common operations more efficiently. The spatial compression algorithms can be combined with spectral compression algorithms to provide further computational efficiencies.
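As a minimal illustration of the wavelet idea (not the patented algorithm itself, which applies 2-D transforms to multivariate images), a single Haar step halves the number of values while preserving the structure of a smooth signal; the near-zero detail coefficients are what a compressor can discard or code cheaply.

```python
# One-level 1-D Haar transform: pairwise averages carry the signal's
# information content in half as many values; differences capture detail.
# This toy is only illustrative of why wavelets enable spatial compression.
import math

def haar_step(signal):
    avg = [(signal[2*i] + signal[2*i+1]) / math.sqrt(2)
           for i in range(len(signal) // 2)]
    det = [(signal[2*i] - signal[2*i+1]) / math.sqrt(2)
           for i in range(len(signal) // 2)]
    return avg, det

row = [10.0, 10.0, 10.0, 10.0, 80.0, 80.0, 80.0, 80.0]
approx, detail = haar_step(row)
print(approx)   # half as many values, retaining the row's structure
print(detail)   # all zero here: smooth regions compress well
```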

  17. Efficient fault tree handling - the Asea-Atom approach

    Ericsson, G.; Knochenhauer, M.; Mills, R.

    1985-01-01

    In recent years there has been a trend in Swedish Probabilistic Safety Analysis (PSA) work towards coordination of the tools and methods used, in order to facilitate exchange of information and review. Thus, standardized methods for fault tree drawing and basic event coding have been developed as well as a number of computer codes for fault tree handling. The computer code used by Asea-Atom is called SUPER-TREE. As indicated by the name, the key feature is the concept of one super tree containing all the information necessary in the fault tree analysis, i.e. system fault trees, sequence fault trees and component data base. The code has proved to allow great flexibility in the choice of level of detail in the analysis

  18. Graph Compression by BFS

    Alberto Apostolico

    2009-08-01

    The Web Graph is a large-scale graph that does not fit in main memory, so lossless compression methods have been proposed for it. This paper introduces a compression scheme that combines efficient storage with fast retrieval of the information in a node. The scheme exploits the properties of the Web Graph without assuming an ordering of the URLs, so it may be applied to more general graphs. Tests on several datasets achieve space savings of about 10% over existing methods.
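The benefit of a traversal-based ordering can be sketched as follows: after relabeling vertices in BFS order, neighbor ids in each adjacency list tend to be close together, so gap encoding yields small integers that code cheaply. The graph and helper names below are illustrative assumptions; the paper's actual encoding is more elaborate.

```python
# Sketch: relabel vertices in BFS order, then store each adjacency list
# as its smallest id followed by gaps between consecutive sorted ids.
from collections import deque

def bfs_order(adj, root=0):
    order, seen, q = [], {root}, deque([root])
    while q:
        u = q.popleft()
        order.append(u)
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                q.append(v)
    return {old: new for new, old in enumerate(order)}

def gap_encode(neighbors):
    ns = sorted(neighbors)
    return [ns[0]] + [b - a for a, b in zip(ns, ns[1:])]

adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}  # a 4-cycle, for illustration
relabel = bfs_order(adj)
compressed = {relabel[u]: gap_encode([relabel[v] for v in vs])
              for u, vs in adj.items()}
print(compressed)
```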

  19. Covering tree with stars

    Baumbach, Jan; Guo, Jiong; Ibragimov, Rashid

    2015-01-01

    We study the tree edit distance problem with edge deletions and edge insertions as edit operations. We reformulate a special case of this problem as Covering Tree with Stars (CTS): given a tree T and a set of stars, can we connect the stars by adding edges between them such that the resulting tree is isomorphic to T? We prove that in the general setting, CTS is NP-complete, which implies that the tree edit distance considered here is also NP-hard, even when both input trees have diameters bounded by 10. We also show that, when the number of distinct stars is bounded by a constant k, CTS...

  20. Compressed Air/Vacuum Transportation Techniques

    Guha, Shyamal

    2011-03-01

    General theory of compressed air/vacuum transportation will be presented. In this transportation, a vehicle (such as an automobile or a rail car) is powered either by compressed air or by air at near-vacuum pressure. Four versions of such transportation are feasible. In all versions, a ``c-shaped'' plastic or ceramic pipe lies buried a few inches under the ground surface. This pipe carries compressed air or air at near-vacuum pressure. In type I transportation, a vehicle draws compressed air (or vacuum) from this buried pipe. Using a turbine or a reciprocating air cylinder, mechanical power is generated from the compressed air (or from the vacuum). This mechanical power, transferred to the wheels of an automobile (or a rail car), drives the vehicle. In type II-IV transportation techniques, a horizontal force is generated inside the plastic (or ceramic) pipe. A set of vertical and horizontal steel bars is used to transmit this force to the automobile on the road (or to a rail car on a rail track). The proposed transportation system has the following merits: virtually accident free; highly energy efficient; pollution free; and it will not contribute to carbon dioxide emissions. Some developmental work on this transportation will be needed before it can be used by the traveling public. The entire transportation system could be computer controlled.

  1. Lagrangian statistics in compressible isotropic homogeneous turbulence

    Yang, Yantao; Wang, Jianchun; Shi, Yipeng; Chen, Shiyi

    2011-11-01

    In this work we conducted a Direct Numerical Simulation (DNS) of forced compressible isotropic homogeneous turbulence and investigated the flow statistics from the Lagrangian point of view, namely, the statistics are computed following the trajectories of passive tracers. The numerical method combined the Eulerian field solver developed by Wang et al. (2010, J. Comp. Phys., 229, 5257-5279) with a Lagrangian module for tracking the tracers and recording the data. Lagrangian probability density functions (p.d.f.'s) were then calculated for both kinetic and thermodynamic quantities. In order to isolate the shearing part of the flow from the compressing part, we employed the Helmholtz decomposition to decompose the flow field (mainly the velocity field) into solenoidal and compressive parts. The solenoidal part was compared with the incompressible case, while the compressibility effect showed up in the compressive part. The Lagrangian structure functions and cross-correlations between various quantities will also be discussed. This work was supported in part by China's Turbulence Program under Grant No. 2009CB724101.

  2. Modelado y experimentación computacional de la etapa de compresión en motores de pistones libres//Modeling and computer experiments of the compression stage in free piston engines

    Genovevo Morejón-Vizcaino

    2012-09-01

    In this article, an analytical prototype of the compression stage of a free-piston engine was obtained in order to carry out virtual experiments, with the purpose of arriving at the new knowledge needed to develop a multi-cylinder free-piston engine with a volumetric pump performing the function of the hydraulic accumulator that current designs use for the compression stroke, with the aim of improving power density and reducing the demands on the dynamic behavior of the components. The method employed is the "Development of new mechatronic products". The mathematical model for the compression stage was derived and, by applying the method of dichromatic graphs, an algorithm and the analytical prototype were obtained. The results of the virtual experiments show various restrictions on the geometry and the materials to be used, as well as trends in the behavior of the different hydraulic parameters. Keywords: free-piston engines, computational experiments, analytical prototypes, mathematical models, oil hydraulics.

  3. Live phylogeny with polytomies: Finding the most compact parsimonious trees.

    Papamichail, D; Huang, A; Kennedy, E; Ott, J-L; Miller, A; Papamichail, G

    2017-08-01

    Construction of phylogenetic trees has traditionally focused on binary trees where all species appear on leaves, a problem for which numerous efficient solutions have been developed. Certain application domains, though, such as viral evolution and transmission, paleontology, linguistics, and phylogenetic stemmatics, often require phylogeny inference that places input species on ancestral (internal) tree nodes (live phylogeny) and allows polytomies. These requirements, despite their prevalence, lead to computationally harder algorithmic solutions and have been sparsely examined in the literature to date. In this article we prove some unique properties of most parsimonious live phylogenetic trees with polytomies, and their mapping to traditional binary phylogenetic trees. We show that our problem reduces to finding the most compact parsimonious tree for n species, and describe a novel efficient algorithm to find such trees without resorting to exhaustive enumeration of all possible tree topologies. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Efficient chaining of seeds in ordered trees

    Allali, Julien; Chauve, Cédric; Ferraro, Pascal; Gaillard, Anne-Laure

    2010-01-01

    We consider here the problem of chaining seeds in ordered trees. Seeds are mappings between two trees Q and T, and a chain is a subset of non-overlapping seeds that is consistent with respect to postfix order and ancestrality. This problem is a natural extension of a similar problem for sequences, and has applications in computational biology, such as mining a database of RNA secondary structures. For the chaining problem with a set of m constant-size seeds, we describe...

  5. Stratified B-trees and versioning dictionaries

    Twigg, Andy; Byde, Andrew; Milos, Grzegorz; Moreton, Tim; Wilkes, John; Wilkie, Tom

    2011-01-01

    A classic versioned data structure in storage and computer science is the copy-on-write (CoW) B-tree -- it underlies many of today's file systems and databases, including WAFL, ZFS, Btrfs and more. Unfortunately, it doesn't inherit the B-tree's optimality properties; it has poor space utilization, cannot offer fast updates, and relies on random IO to scale. Yet, nothing better has been developed since. We describe the `stratified B-tree', which beats all known semi-external memory versioned B...

  6. Compressible generalized Newtonian fluids

    Málek, Josef; Rajagopal, K.R.

    2010-01-01

    Roč. 61, č. 6 (2010), s. 1097-1110 ISSN 0044-2275 Institutional research plan: CEZ:AV0Z20760514 Keywords : power law fluid * uniform temperature * compressible fluid Subject RIV: BJ - Thermodynamics Impact factor: 1.290, year: 2010

  7. Temporal compressive sensing systems

    Reed, Bryan W.

    2017-12-12

    Methods and systems for temporal compressive sensing are disclosed, where within each of one or more sensor array data acquisition periods, one or more sensor array measurement datasets comprising distinct linear combinations of time slice data are acquired, and where mathematical reconstruction allows for calculation of accurate representations of the individual time slice datasets.

  8. Compression of Infrared images

    Mantel, Claire; Forchhammer, Søren

    2017-01-01

    best for bits-per-pixel rates below 1.4 bpp, while HEVC obtains best performance in the range 1.4 to 6.5 bpp. The compression performance is also evaluated based on maximum errors. These results also show that HEVC can achieve a precision of 1°C with an average of 1.3 bpp....

  9. Gas compression infrared generator

    Hug, W.F.

    1980-01-01

    A molecular gas is compressed in a quasi-adiabatic manner to produce pulsed radiation during each compressor cycle when the pressure and temperature are sufficiently high, and part of the energy is recovered during the expansion phase, as defined in U.S. Pat. No. 3,751,666; characterized by use of a cylinder with a reciprocating piston as a compressor

  10. Trees and highway safety.

    2011-03-01

    To minimize the severity of run-off-road collisions of vehicles with trees, departments of transportation (DOTs) commonly establish clear zones for trees and other fixed objects. Caltrans' clear zone on freeways is 30 feet minimum (40 feet pref...

  11. Density ratios in compressions driven by radiation pressure

    Lee, S.

    1988-01-01

    It has been suggested that in the cannonball scheme of laser compression the pellet may be considered to be compressed by the 'brute force' of the radiation pressure. For such a radiation-driven compression, an energy balance method is applied to give an equation fixing the radius compression ratio K, which is a key parameter for such intense compressions. A shock model is used to yield specific results. For a square-pulse driving power compressing a spherical pellet with a specific heat ratio of 5/3, a density compression ratio Γ of 27 is computed. Double (stepped) pulsing with linearly rising power enhances Γ to 1750. The value of Γ does not depend on the absolute magnitude of the piston power, as long as this is large enough. Further enhancement of compression by multiple (stepped) pulsing becomes obvious. The enhanced compression increases the energy gain factor G for a 100 μm DT pellet driven by a radiation power of 10^16 W from 6, for a square pulse with 0.5 MJ absorbed energy, to 90 for a double (stepped) linearly rising pulse with an absorbed energy of 0.4 MJ, assuming perfect coupling efficiency. (author)
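As a check on the quoted numbers, note that for a uniformly compressed spherical pellet (the uniformity assumption is ours, not stated in the abstract), mass conservation ties the density compression ratio Γ to the radius compression ratio K:

```latex
\rho_f \,\tfrac{4}{3}\pi r_f^{3} = \rho_0 \,\tfrac{4}{3}\pi r_0^{3}
\quad\Longrightarrow\quad
\Gamma = \frac{\rho_f}{\rho_0} = \left(\frac{r_0}{r_f}\right)^{3} = K^{3},
```

so the computed Γ = 27 for a square pulse corresponds to a radius compression ratio K = 3.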

  12. Fixed-Rate Compressed Floating-Point Arrays.

    Lindstrom, Peter

    2014-12-01

    Current compression schemes for floating-point data commonly take fixed-precision values and compress them to a variable-length bit stream, complicating memory management and random access. We present a fixed-rate, near-lossless compression scheme that maps small blocks of 4^d values in d dimensions to a fixed, user-specified number of bits per block, thereby allowing read and write random access to compressed floating-point data at block granularity. Our approach is inspired by fixed-rate texture compression methods widely adopted in graphics hardware, but has been tailored to the high dynamic range and precision demands of scientific applications. Our compressor is based on a new, lifted, orthogonal block transform and embedded coding, allowing each per-block bit stream to be truncated at any point if desired, thus facilitating bit rate selection using a single compression scheme. To avoid compression or decompression upon every data access, we employ a software write-back cache of uncompressed blocks. Our compressor has been designed with computational simplicity and speed in mind to allow for the possibility of a hardware implementation, and uses only a small number of fixed-point arithmetic operations per compressed value. We demonstrate the viability and benefits of lossy compression in several applications, including visualization, quantitative data analysis, and numerical simulation.
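A toy version of the fixed-rate property can be sketched as follows (the real codec uses a lifted orthogonal block transform and embedded coding; here each block merely stores a scale plus fixed-width integers, and BITS is an arbitrary choice of ours). Because every block occupies the same number of bits, any block can be located and decoded independently, which is what enables random access:

```python
# Toy fixed-rate block coder: every block of 4 values gets the same bit
# budget, so block k can be decoded without touching the other blocks.

BITS = 6  # fixed bits per value within a block (an arbitrary choice)

def encode_block(block):
    scale = max(abs(v) for v in block) or 1.0
    q = [round(v / scale * (2**(BITS - 1) - 1)) for v in block]
    return scale, q  # fixed-size record regardless of the data

def decode_block(scale, q):
    return [x / (2**(BITS - 1) - 1) * scale for x in q]

data = [0.10, 0.11, 0.12, 0.13, 5.0, 5.1, 5.2, 5.3]
blocks = [encode_block(data[i:i + 4]) for i in range(0, len(data), 4)]
# Random access: decode only the second block.
print([round(v, 2) for v in decode_block(*blocks[1])])
```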

  13. High Bit-Depth Medical Image Compression With HEVC.

    Parikh, Saurin S; Ruiz, Damian; Kalva, Hari; Fernandez-Escribano, Gerardo; Adzic, Velibor

    2018-03-01

    Efficient storing and retrieval of medical images has direct impact on reducing costs and improving access in cloud-based health care services. JPEG 2000 is currently the commonly used compression format for medical images shared using the DICOM standard. However, new formats such as high efficiency video coding (HEVC) can provide better compression efficiency compared to JPEG 2000. Furthermore, JPEG 2000 is not suitable for efficiently storing image series and 3-D imagery. Using HEVC, a single format can support all forms of medical images. This paper presents the use of HEVC for diagnostically acceptable medical image compression, focusing on compression efficiency compared to JPEG 2000. Diagnostically acceptable lossy compression and complexity of high bit-depth medical image compression are studied. Based on an established medically acceptable compression range for JPEG 2000, this paper establishes acceptable HEVC compression range for medical imaging applications. Experimental results show that using HEVC can increase the compression performance, compared to JPEG 2000, by over 54%. Along with this, a new method for reducing computational complexity of HEVC encoding for medical images is proposed. Results show that HEVC intra encoding complexity can be reduced by over 55% with negligible increase in file size.

  14. On Chinese and Western Family Trees: Mechanism and Performance

    Elton S SIQUEIRA

    2016-10-01

    A family tree is an efficient data structure for storing the kinship information in a family. There are basically two kinds of trees: the Western Family Tree (WFT) and Oriental family trees such as the Chinese Family Tree (CFT). To gain insight into their efficiency in the context of family kinship representation and information extraction, in this paper we develop WFT and CFT representation models and search algorithms, comparing their search performance and underlying mechanisms. The study reveals that the computational cost is higher in the CFT model, but it provides a greater gain in information retrieval and produces more details of the kinship between individuals in the family.

  15. Detection, diagnosis, and treatment of accident conditions using response trees

    Nelson, W.R.

    1980-01-01

    Response Trees were developed at the LOFT facility in 1978 and included in the Plant Operating Manual (POM) to assist reactor operators in selecting emergency procedures. In an emergency situation the operator would manually gather data and evaluate the trees to select the appropriate procedures. As a portion of the LOFT Augmented Operator Capability (AOC) Program, the response tree methodology has been extended so that a computer can be used to evaluate the trees and recommend an appropriate response for an accident. Techniques for diagnosing failures within a cooling mode have also been investigated. This paper summarizes these additions to the response tree methodology

  16. First megafossil evidence of Cyatheaceous tree fern from the Indian

    A part of the compressed tree fern axis with leaf and adventitious root scars in unusual arrangement from Plio–Pleistocene sediments of Arunachal Pradesh, India is described as Cyathea siwalika sp. nov. This record suggests that Cyathea was an important component of tropical evergreen forest in the area during the ...

  17. Investigation of the physical and mechanical properties of Shea Tree ...

    Investigation of the physical and mechanical properties of Shea Tree timber ( Vitellaria paradoxa ) used for structural applications in Kwara State, Nigeria. ... strength parallel to grain of 24.7 (N/mm2), compressive strength perpendicular to grain of 8.99 (N/mm2), shear strength of 2.01 (N/mm2), and tensile strength parallel to ...

  18. Minnesota's Forest Trees. Revised.

    Miles, William R.; Fuller, Bruce L.

    This bulletin describes 46 of the more common trees found in Minnesota's forests and windbreaks. The bulletin contains two tree keys, a summer key and a winter key, to help the reader identify these trees. Besides the two keys, the bulletin includes an introduction, instructions for key use, illustrations of leaf characteristics and twig…

  19. D2-tree

    Brodal, Gerth Stølting; Sioutas, Spyros; Pantazos, Kostas

    2015-01-01

    We present a new overlay, called the Deterministic Decentralized tree (D2-tree). The D2-tree compares favorably to other overlays for the following reasons: (a) it provides matching and better complexities, which are deterministic for the supported operations; (b) the management of nodes (peers...

  20. Covering tree with stars

    Baumbach, Jan; Guo, Jian-Ying; Ibragimov, Rashid

    2013-01-01

    We study the tree edit distance problem with edge deletions and edge insertions as edit operations. We reformulate a special case of this problem as Covering Tree with Stars (CTS): given a tree T and a set of stars, can we connect the stars by adding edges between them such that the resulting ...

  1. Winter Birch Trees

    Sweeney, Debra; Rounds, Judy

    2011-01-01

    Trees are great inspiration for artists. Many art teachers find themselves inspired and maybe somewhat obsessed with the natural beauty and elegance of the lofty tree, and how it changes through the seasons. One such tree that grows in several regions and always looks magnificent, regardless of the time of year, is the birch. In this article, the…

  2. Total well dominated trees

    Finbow, Arthur; Frendrup, Allan; Vestergaard, Preben D.

    cardinality then G is a total well dominated graph. In this paper we study composition and decomposition of total well dominated trees. By a reversible process we prove that any total well dominated tree can both be reduced to and constructed from a family of three small trees....

  3. Speeding up Online POMDP planning - unification of observation branches by belief-state compression via expected feature values

    Rens, G

    2015-01-01

    A novel algorithm to speed up online planning in partially observable Markov decision processes (POMDPs) is introduced. I propose a method for compressing nodes in belief-decision-trees while planning occurs. Whereas belief-decision-trees branch...

  4. Compressible Fluid Suspension Performance Testing

    Hoogterp, Francis

    2003-01-01

    ... compressible fluid suspension system that was designed and installed on the vehicle by DTI. The purpose of the tests was to evaluate the possible performance benefits of the compressible fluid suspension system...

  5. TreePics: visualizing trees with pictures

    Nicolas Puillandre

    2017-09-01

    While many programs are available to edit phylogenetic trees, associating pictures with branch tips in an efficient and automatic way is not an available option. Here, we present TreePics, a standalone software tool that uses a web browser to visualize phylogenetic trees in Newick format and associates pictures (typically, pictures of the voucher specimens) with the tip of each branch. Pictures are visualized as thumbnails and can be enlarged by a mouse rollover. Further, several pictures can be selected and displayed in a separate window for visual comparison. TreePics works either online or in a full standalone version, where it can display trees with several thousand pictures (depending on the memory available). We argue that TreePics can be particularly useful in a preliminary stage of research, for example to quickly detect conflicts between a DNA-based phylogenetic tree and morphological variation, which may be due to contamination that needs to be removed prior to final analyses, or to the presence of species complexes.

  6. Cartesian anisotropic mesh adaptation for compressible flow

    Keats, W.A.; Lien, F.-S.

    2004-01-01

    Simulating transient compressible flows involving shock waves presents challenges to the CFD practitioner in terms of the mesh quality required to resolve discontinuities and prevent smearing. This paper discusses a novel two-dimensional Cartesian anisotropic mesh adaptation technique implemented for compressible flow. This technique, developed for laminar flow by Ham, Lien and Strong, is efficient because it refines and coarsens cells using criteria that consider the solution in each of the cardinal directions separately. In this paper the method is applied to compressible flow. The procedure shows promise in its ability to deliver good quality solutions while achieving computational savings. The convection scheme used is the Advection Upstream Splitting Method (AUSM+), and the refinement/coarsening criteria are based on work done by Ham et al. Transient shock wave diffraction over a backward step and shock reflection over a forward step are considered as test cases because they demonstrate that the quality of the solution can be maintained as the mesh is refined and coarsened in time. The data structure is explained in relation to the computational mesh, and the object-oriented design and implementation of the code is presented. Refinement and coarsening algorithms are outlined. Computational savings over uniform and isotropic mesh approaches are shown to be significant. (author)

  7. LZ-Compressed String Dictionaries

    Arz, Julian; Fischer, Johannes

    2013-01-01

    We show how to compress string dictionaries using the Lempel-Ziv (LZ78) data compression algorithm. Our approach is validated experimentally on dictionaries of up to 1.5 GB of uncompressed text. We achieve compression ratios often outperforming the existing alternatives, especially on dictionaries containing many repeated substrings. Our query times remain competitive.
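For reference, the LZ78 parsing the paper builds on can be sketched in a few lines (this is the textbook algorithm, not the paper's dictionary layout): each phrase is encoded as a pair (id of the longest previously seen phrase, next character), so repeated substrings collapse onto earlier dictionary entries, which is why repetitive inputs compress well.

```python
# Minimal LZ78 compressor: parses the input into phrases, each emitted as
# (index of longest previously seen phrase, appended character).

def lz78_compress(text):
    dictionary = {"": 0}
    phrase, output = "", []
    for ch in text:
        if phrase + ch in dictionary:
            phrase += ch          # extend the current match
        else:
            output.append((dictionary[phrase], ch))
            dictionary[phrase + ch] = len(dictionary)
            phrase = ""
    if phrase:                    # leftover phrase at end of input
        output.append((dictionary[phrase[:-1]], phrase[-1]))
    return output

print(lz78_compress("ababababab"))
# → [(0, 'a'), (0, 'b'), (1, 'b'), (3, 'a'), (2, 'a'), (0, 'b')]
```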

  8. HVS-based medical image compression

    Kai Xie [Institute of Image Processing and Pattern Recognition, Shanghai Jiaotong University, 200030 Shanghai (China)]. E-mail: xie_kai2001@sjtu.edu.cn; Jie Yang [Institute of Image Processing and Pattern Recognition, Shanghai Jiaotong University, 200030 Shanghai (China); Min Zhuyue [CREATIS-CNRS Research Unit 5515 and INSERM Unit 630, 69621 Villeurbanne (France); Liang Lixiao [Institute of Image Processing and Pattern Recognition, Shanghai Jiaotong University, 200030 Shanghai (China)

    2005-07-01

    Introduction: With the promotion and application of digital imaging technology in the medical domain, the volume of medical images has grown rapidly. However, commonly used compression methods cannot achieve satisfactory results. Methods: In this paper, based on existing experimental results and conclusions, the lifting scheme is used for wavelet decomposition. The physical and anatomical structure of human vision is taken into account, and the contrast sensitivity function (CSF) is introduced as the main research issue in the human vision system (HVS); the main design points of the HVS model are then presented. On the basis of the multi-resolution analysis of the wavelet transform, the paper applies the HVS, including its CSF characteristics, to the inner correlation-removing transform and quantization of the image, and proposes a new HVS-based medical image compression model. Results: The experiments are done on medical images including computed tomography (CT) and magnetic resonance imaging (MRI). At the same bit rate, the performance of SPIHT with respect to the PSNR metric is significantly higher than that of our algorithm, but the visual quality of the SPIHT-compressed image is roughly the same as that of the image compressed with our approach. Our algorithm obtains the same visual quality at lower bit rates, and the coding/decoding time is less than that of SPIHT. Conclusions: The results show that under common objective conditions, our compression algorithm can achieve better subjective visual quality, and performs better than SPIHT in terms of compression ratio and coding/decoding time.

  10. Role of Compressibility on Tsunami Propagation

    Abdolali, Ali; Kirby, James T.

    2017-12-01

    In the present paper, we aim to reduce the discrepancies between tsunami arrival times evaluated from tsunami models and real measurements by considering the role of ocean compressibility. We perform qualitative studies to reveal the phase speed reduction rate via a modified version of the Mild Slope Equation for Weakly Compressible fluid (MSEWC) proposed by Sammarco et al. (2013). The model is validated against a 3-D computational model. Physical properties of surface gravity waves are studied and compared with those for waves evaluated from an incompressible flow solver over realistic geometry for the 2011 Tohoku-oki event, revealing a reduction in phase speed. Plain Language Summary: Submarine earthquakes and submarine mass failures (SMFs) can generate long gravitational waves (or tsunamis) that propagate at the free surface. Tsunami waves can travel long distances and are known for their dramatic effects on coastal areas. Nowadays, numerical models are used to reconstruct tsunamigenic events for many scientific and socioeconomic purposes, e.g. tsunami early warning systems, inundation mapping, and risk and hazard analysis. A number of typically neglected parameters in these models cause discrepancies between model outputs and observations. Most tsunami models predict tsunami arrival times at distant stations slightly earlier than observed. In this study, we show how ocean compressibility affects the tsunami propagation speed. In this framework, an efficient two-dimensional model equation for the weakly compressible ocean has been developed, validated and tested for simplified and real cases against three-dimensional and incompressible solvers. Taking compressibility into account, the phase speed of surface gravity waves is reduced compared to that in an incompressible fluid. We then used the model for the devastating 2011 Tohoku-oki tsunami event, improving the model accuracy. This study sheds light on future model development.

  11. Algorithms and data structures for grammar-compressed strings

    Cording, Patrick Hagge

    Textual databases for e.g. biological or web data are growing rapidly, and it is often only feasible to store the data in compressed form. However, compressing the data comes at a price: traditional algorithms for e.g. pattern matching require all data to be decompressed - a computationally demanding task. In this thesis we design data structures for accessing and searching compressed data efficiently. Our results can be divided into two categories. In the first category we study problems related to pattern matching. In particular, we present new algorithms for counting and comparing substrings, and a new algorithm for finding all occurrences of a pattern in which we may insert gaps. In the other category we deal with accessing and decompressing parts of the compressed string. We show how to quickly access a single character of the compressed string, and present a data structure...
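The "access a single character" problem can be illustrated on an LZ78-style parse, a simple special case of a grammar (the thesis treats general grammar compression). Phrase lengths locate the phrase containing position i, and parent pointers are then walked until the wanted character is the one appended by the current phrase, so the cost is proportional to phrase depth rather than string length. The parse below, for the string "ababababab", is an assumption for illustration:

```python
# Random access into a compressed string without decompressing it.
import bisect

# LZ78-style parse of "ababababab": (parent phrase id, appended char); id 0
# is the empty phrase, so phrase k decodes to phrase(parent) + char.
parse = [(0, 'a'), (0, 'b'), (1, 'b'), (3, 'a'), (2, 'a'), (0, 'b')]

lengths = [0]                     # lengths[k] = length of phrase k
for parent, _ in parse:
    lengths.append(lengths[parent] + 1)

ends, total = [], 0               # cumulative end position of each phrase
for k in range(1, len(lengths)):
    total += lengths[k]
    ends.append(total)

def access(i):
    k = bisect.bisect_right(ends, i) + 1          # phrase containing position i
    offset = i - (ends[k - 2] if k >= 2 else 0)   # position within that phrase
    while offset < lengths[k] - 1:                # not the appended char yet:
        k = parse[k - 1][0]                       # walk to the parent phrase
    return parse[k - 1][1]

print(''.join(access(i) for i in range(10)))  # reconstructs "ababababab"
```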

  12. 2D-RBUC for efficient parallel compression of residuals

    Đurđević, Đorđe M.; Tartalja, Igor I.

    2018-02-01

In this paper, we present a method for lossless compression of residuals with efficient SIMD-parallel decompression. The residuals originate from lossy or near-lossless compression of height fields, which are commonly used to represent models of terrains. The algorithm is founded on the existing RBUC method for compression of non-uniform data sources. We have adapted the method to capture the 2D spatial locality of height fields, and developed a data decompression algorithm for modern GPU architectures, already present even in home computers. In combination with the point-level SIMD-parallel lossless/lossy height field compression method HFPaC, characterized by fast progressive decompression and a seamlessly reconstructed surface, the newly proposed method trades a small efficiency degradation for a non-negligible compression ratio benefit (measured up to 91%).
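The core principle behind RBUC-style coders, storing each small block of residuals with just enough bits for its largest value, can be sketched in a few lines. This toy ignores the recursive block hierarchy, 2D locality, and GPU decompression of the actual 2D-RBUC method; block size and the width-field size are illustrative assumptions:

```python
# Toy per-block variable-bit-width compression of residuals, in the spirit
# of RBUC: small residuals clustered in a block cost few bits each.

def bits_needed(v):
    return max(1, v.bit_length())

def compress(residuals, block=4):
    out = []
    for i in range(0, len(residuals), block):
        chunk = residuals[i:i + block]
        w = max(bits_needed(abs(r) << 1) for r in chunk)  # 1 extra bit for sign
        out.append((w, chunk))          # a real coder would bit-pack here
    return out

def compressed_bits(blocks, width_field=5):
    # each block stores a 5-bit width plus w bits per residual
    return sum(width_field + w * len(chunk) for w, chunk in blocks)

residuals = [0, 1, -1, 2, 120, 5, -3, 1]   # small values cluster together
blocks = compress(residuals)
raw_bits = 16 * len(residuals)             # e.g. int16 storage
print(compressed_bits(blocks), "bits vs", raw_bits, "raw")
```

One large residual only inflates its own block, which is why capturing spatial locality (as the paper does in 2D) pays off.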

  13. Spectra of chemical trees

    Balasubramanian, K.

    1982-01-01

A method is developed for obtaining the spectra of trees of NMR and chemical interest. The characteristic polynomials of branched trees can be obtained in terms of the characteristic polynomials of unbranched trees and branches by pruning the tree at the joints. The unbranched trees can also be broken down further until a tree containing just two vertices is obtained. This effectively reduces the order of the secular determinant of the original tree to determinants of orders at most equal to the number of vertices in the branch containing the largest number of vertices. An illustrative example of an NMR graph is given for which the 22 x 22 secular determinant is reduced to determinants of orders at most 4 x 4 in just the second step of the algorithm. The tree pruning algorithm can be applied even to trees with no symmetry elements, and such a factoring can still be achieved. The methods developed here can be elegantly used to determine whether two trees are cospectral and to construct cospectral trees.
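The pruning idea rests on a standard recurrence: if v is a leaf attached to u, the characteristic polynomial satisfies phi(G, x) = x·phi(G−v, x) − phi(G−v−u, x). A small sketch of that recurrence (not the paper's full joint-pruning algorithm, and on a made-up tree rather than an NMR graph):

```python
# Leaf-pruning recurrence for the characteristic polynomial of a tree or
# forest: phi(G) = x*phi(G - v) - phi(G - v - u) for a leaf v adjacent to u.
# Polynomials are coefficient lists, lowest degree first.

def poly_sub(p, q):
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p)); q = q + [0] * (n - len(q))
    return [a - b for a, b in zip(p, q)]

def poly_shift(p):                     # multiply by x
    return [0] + p

def char_poly(adj):
    """adj: {vertex: set(neighbors)} describing a forest."""
    if not adj:
        return [1]
    for v, nbrs in adj.items():        # an isolated vertex contributes x
        if not nbrs:
            rest = {w: n - {v} for w, n in adj.items() if w != v}
            return poly_shift(char_poly(rest))
    # otherwise prune some leaf v with unique neighbor u
    v = next(w for w, n in adj.items() if len(n) == 1)
    u = next(iter(adj[v]))
    minus_v = {w: n - {v} for w, n in adj.items() if w != v}
    minus_vu = {w: n - {u} for w, n in minus_v.items() if w != u}
    return poly_sub(poly_shift(char_poly(minus_v)), char_poly(minus_vu))

# Path on 3 vertices: phi = x^3 - 2x  (eigenvalues 0, +/- sqrt(2))
path3 = {0: {1}, 1: {0, 2}, 2: {1}}
print(char_poly(path3))  # [0, -2, 0, 1]
```

Repeated pruning never builds a determinant larger than the pieces being merged, which is the effect the abstract describes.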

  14. Digital cinema video compression

    Husak, Walter

    2003-05-01

    The Motion Picture Industry began a transition from film based distribution and projection to digital distribution and projection several years ago. Digital delivery and presentation offers the prospect to increase the quality of the theatrical experience for the audience, reduce distribution costs to the distributors, and create new business opportunities for the theater owners and the studios. Digital Cinema also presents an opportunity to provide increased flexibility and security of the movies for the content owners and the theater operators. Distribution of content via electronic means to theaters is unlike any of the traditional applications for video compression. The transition from film-based media to electronic media represents a paradigm shift in video compression techniques and applications that will be discussed in this paper.

  15. Fingerprints in compressed strings

    Bille, Philip; Gørtz, Inge Li; Cording, Patrick Hagge

    2017-01-01

In this paper we show how to construct a data structure for a string S of size N compressed into a context-free grammar of size n that supports efficient Karp–Rabin fingerprint queries to any substring of S. That is, given indices i and j, the answer to a query is the fingerprint of the substring S[i,j]. We present the first O(n) space data structures that answer fingerprint queries without decompressing any characters. For Straight Line Programs (SLP) we get O(log N) query time, and for Linear SLPs (an SLP derivative that captures LZ78 compression and its variations) we get O(log log N) query time...
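The algebraic property that makes such queries possible is that Karp–Rabin fingerprints compose: with polynomial fingerprints, any substring fingerprint follows from two prefix fingerprints and a power of the base. A plain uncompressed sketch of that property (the base and modulus are arbitrary illustrative choices, and this is not the grammar-based structure itself):

```python
# Karp-Rabin prefix fingerprints: fp[i] is the fingerprint of s[0:i], so the
# fingerprint of s[i:j] is (fp[j] - fp[i] * b^(j-i)) mod p -- an O(1) query
# after linear preprocessing. Equal substrings get equal fingerprints.

p = (1 << 61) - 1   # a Mersenne prime
b = 31              # base

def prefix_fingerprints(s):
    fp = [0]
    for ch in s:
        fp.append((fp[-1] * b + ord(ch)) % p)
    return fp

def substring_fp(fp, i, j, pow_b):
    """Fingerprint of s[i:j] from prefix fingerprints, in O(1)."""
    return (fp[j] - fp[i] * pow_b[j - i]) % p

s = "abracadabra"
fp = prefix_fingerprints(s)
pow_b = [pow(b, k, p) for k in range(len(s) + 1)]

# "abra" occurs at positions 0 and 7, so the two fingerprints agree
print(substring_fp(fp, 0, 4, pow_b) == substring_fp(fp, 7, 11, pow_b))  # True
```

The paper's contribution is obtaining the same answers when S is only available as a grammar, without materializing the prefixes.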

  16. WSNs Microseismic Signal Subsection Compression Algorithm Based on Compressed Sensing

    Zhouzhou Liu

    2015-01-01

For wireless-network microseismic monitoring, which suffers from low compression ratios and high communication energy consumption, this paper proposes a segmented compression algorithm based on the characteristics of microseismic signals and compressed sensing (CS) theory, applied during transmission. The algorithm segments the collected data according to the number of nonzero elements; reducing the number of combinations of nonzero elements within a segment improves the accuracy of signal reconstruction, while the properties of compressed sensing yield a high compression ratio. Experimental results show that, with the quantum chaos immune clone reconstruction (Q-CSDR) algorithm used for reconstruction and a signal sparsity above 40, compressing the signal at a compression ratio above 0.4 gives a mean square error below 0.01 and prolongs the network lifetime twofold.

  17. Compressed sensing electron tomography

    Leary, Rowan; Saghi, Zineb; Midgley, Paul A.; Holland, Daniel J.

    2013-01-01

The recent mathematical concept of compressed sensing (CS) asserts that a small number of well-chosen measurements can suffice to reconstruct signals that are amenable to sparse or compressible representation. In addition to powerful theoretical results, the principles of CS are being exploited increasingly across a range of experiments to yield substantial performance gains relative to conventional approaches. In this work we describe the application of CS to electron tomography (ET) reconstruction and demonstrate the efficacy of CS–ET with several example studies. Artefacts present in conventional ET reconstructions such as streaking, blurring of object boundaries and elongation are markedly reduced, and robust reconstruction is shown to be possible from far fewer projections than are normally used. The CS–ET approach enables more reliable quantitative analysis of the reconstructions as well as novel 3D studies from extremely limited data.
    Highlights:
    • Compressed sensing (CS) theory and its application to electron tomography (ET) is described.
    • The practical implementation of CS–ET is outlined and its efficacy demonstrated with examples.
    • High fidelity tomographic reconstruction is possible from a small number of images.
    • The CS–ET reconstructions can be more reliably segmented and analysed quantitatively.
    • CS–ET is applicable to different image content by choice of an appropriate sparsifying transform

  18. Compressive Sensing for Spread Spectrum Receivers

    Fyhn, Karsten; Jensen, Tobias Lindstrøm; Larsen, Torben

    2013-01-01

    With the advent of ubiquitous computing there are two design parameters of wireless communication devices that become very important: power efficiency and production cost. Compressive sensing enables the receiver in such devices to sample below the Shannon-Nyquist sampling rate, which may lead...... the bit error rate performance is degraded by the subsampling in the CS-enabled receivers, this may be remedied by including quantization in the receiver model.We also study the computational complexity of the proposed receiver design under different sparsity and measurement ratios. Our work shows...

  19. Compressive Detection Using Sub-Nyquist Radars for Sparse Signals

    Ying Sun

    2016-01-01

This paper investigates the compressive detection problem using sub-Nyquist radars, which are well suited to scenarios with high bandwidths and real-time processing because they significantly reduce the computational burden and save power consumption and computation time. A compressive generalized likelihood ratio test (GLRT) detector for sparse signals is proposed for sub-Nyquist radars that never reconstructs the signal involved. The performance of the compressive GLRT detector is analyzed and theoretical bounds are presented. The compressive GLRT detection performance of sub-Nyquist radars is also compared to the traditional GLRT detection performance of conventional radars, which employ traditional analog-to-digital conversion (ADC) at Nyquist sampling rates. Simulation results demonstrate that the former can perform almost as well as the latter with a very small fraction of the number of measurements required by traditional detection in relatively high signal-to-noise ratio (SNR) cases.
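The idea of deciding on the compressed measurements directly, rather than reconstructing first, can be illustrated with a toy statistic: correlate y = A·x against the columns of the sensing matrix and take the largest magnitude. This pure-Python stand-in is not the paper's sub-Nyquist radar model; the dimensions, sparsity, and noise level are invented:

```python
# Toy compressive detection: a sparse signal is sensed with M << N random
# measurements, and a GLRT-like statistic max_k |<y, A[:,k]>| decides
# presence without reconstructing x.

import random
random.seed(1)

N, M = 64, 16                     # signal length, number of measurements
A = [[random.gauss(0, 1 / M ** 0.5) for _ in range(N)] for _ in range(M)]

def sense(x):
    return [sum(A[m][k] * x[k] for k in range(N)) for m in range(M)]

def glrt_stat(y):
    # largest correlation with any column of the sensing matrix
    return max(abs(sum(y[m] * A[m][k] for m in range(M))) for k in range(N))

x = [0.0] * N
x[37] = 5.0                        # a 1-sparse target is present
noise = [random.gauss(0, 0.05) for _ in range(M)]

y_h1 = [a + n for a, n in zip(sense(x), noise)]   # target present
y_h0 = noise                                       # noise only
print(glrt_stat(y_h1) > glrt_stat(y_h0))           # detection succeeds here
```

Comparing the statistic against a threshold calibrated for a target false-alarm rate would complete the detector; the paper derives those bounds analytically.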

  20. Efficient two-dimensional compressive sensing in MIMO radar

    Shahbazi, Nafiseh; Abbasfar, Aliazam; Jabbarian-Jahromi, Mohammad

    2017-12-01

Compressive sensing (CS) has been a way to lower sampling rate leading to data reduction for processing in multiple-input multiple-output (MIMO) radar systems. In this paper, we further reduce the computational complexity of a pulse-Doppler collocated MIMO radar by introducing a two-dimensional (2D) compressive sensing. To do so, we first introduce a new 2D formulation for the compressed received signals and then we propose a new measurement matrix design for our 2D compressive sensing model that is based on minimizing the coherence of the sensing matrix using a gradient descent algorithm. The simulation results show that our proposed 2D measurement matrix design using gradient descent algorithm (2D-MMDGD) has much lower computational complexity compared to one-dimensional (1D) methods while having better performance in comparison with conventional methods such as a Gaussian random measurement matrix.
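The objective the paper minimizes, the coherence of the sensing matrix, is simply the largest normalized inner product between distinct columns; lower coherence generally improves CS recovery guarantees. A small sketch of computing it (the example matrix is made up, and this shows the objective only, not the gradient-descent design):

```python
# Mutual coherence of a matrix: max over column pairs of
# |<a_i, a_j>| / (||a_i|| * ||a_j||).

import math

def coherence(A):
    cols = list(zip(*A))                      # columns of A
    norms = [math.sqrt(sum(v * v for v in c)) for c in cols]
    best = 0.0
    for i in range(len(cols)):
        for j in range(i + 1, len(cols)):
            dot = sum(a * b for a, b in zip(cols[i], cols[j]))
            best = max(best, abs(dot) / (norms[i] * norms[j]))
    return best

A = [
    [1, 0, 1],
    [0, 1, 1],
]
print(round(coherence(A), 4))   # third column overlaps the first two
```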

  1. Consequences of Common Topological Rearrangements for Partition Trees in Phylogenomic Inference.

    Chernomor, Olga; Minh, Bui Quang; von Haeseler, Arndt

    2015-12-01

In phylogenomic analysis the collection of trees with identical score (maximum likelihood or parsimony score) may hamper tree search algorithms. Such collections are coined phylogenetic terraces. For sparse supermatrices with a lot of missing data, the number of terraces and the number of trees on the terraces can be very large. If terraces are not taken into account, a lot of computation time might be unnecessarily spent to evaluate many trees that in fact have identical score. To save computation time during the tree search, it is worthwhile to quickly identify such cases. The score of a species tree is the sum of scores for all the so-called induced partition trees. Therefore, if the topological rearrangement applied to a species tree does not change the induced partition trees, the score of these partition trees is unchanged. Here, we provide the conditions under which the three most widely used topological rearrangements (nearest neighbor interchange, subtree pruning and regrafting, and tree bisection and reconnection) change the topologies of induced partition trees. During the tree search, these conditions allow us to quickly identify whether we can save computation time on the evaluation of newly encountered trees. We also introduce the concept of partial terraces and demonstrate that they occur more frequently than the original "full" terrace. Hence, partial terraces are the more important factor for saving time compared to full terraces. Therefore, taking into account the above conditions and the partial terrace concept will help to speed up the tree search in phylogenomic inference.
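An induced partition tree is just the species tree restricted to the taxa that a partition actually contains. That restriction can be sketched on a nested-tuple tree, suppressing nodes left with a single child; the example tree and taxon names are invented:

```python
# Restrict a (rooted, nested-tuple) tree to a taxon subset. A rearrangement
# that only moves taxa absent from a partition leaves this induced tree --
# and hence the partition's score -- unchanged, which is the source of the
# timesaving the paper exploits.

def restrict(tree, taxa):
    """Return the subtree induced by `taxa`, or None if no taxa survive."""
    if isinstance(tree, str):                       # a leaf
        return tree if tree in taxa else None
    kept = [r for r in (restrict(c, taxa) for c in tree) if r is not None]
    if not kept:
        return None
    if len(kept) == 1:                              # suppress degree-2 node
        return kept[0]
    return tuple(kept)

species_tree = ((("A", "B"), "C"), ("D", "E"))
print(restrict(species_tree, {"A", "C", "D"}))      # (('A', 'C'), 'D')
```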

  2. Compressive failure with interacting cracks

    Yang Guoping; Liu Xila

    1993-01-01

The failure processes in concrete and other brittle materials are the results of the propagation, coalescence and interaction of many preexisting microcracks or voids. To understand the real behaviour of brittle materials, it is necessary to bridge the gap from the relatively mature understanding of single-crack behaviour to stochastically distributed imperfections, that is, to connect the microscopic mechanisms of crack propagation and interaction with the macroscopic parameters of brittle materials. Brittle failure in compression was studied theoretically by Horii and Nemat-Nasser (1986), who obtained a closed-form solution for a preexisting flaw or certain special regular flaws. Zaitsev and Wittmann (1981) published a paper on crack propagation in compression in so-called numerical concrete, but they did not take account of the interaction among the microcracks. Many studies have also been reported on modelling the influence of crack interaction on fracture parameters. When crack interaction is characterized by the ratios of SIFs computed with and without the interaction influences, amplifying or shielding effects appear, depending on the relative positions of the microcracks. The present paper attempts to simulate the whole failure process of a brittle specimen in compression, including the complicated coupling effects between the interaction and propagation of randomly distributed or other typical microcrack configurations, step by step. The lengths, orientations and positions of microcracks are all taken as random variables. The crack interaction among many preexisting random microcracks is evaluated with the help of a simple interaction matrix (Yang and Liu, 1991). For the subcritically stable propagation of microcracks in mixed mode fracture, the well-known maximum hoop stress criterion is adopted to compute branching lengths and directions at each crack tip.

  3. Halftoning processing on a JPEG-compressed image

    Sibade, Cedric; Barizien, Stephane; Akil, Mohamed; Perroton, Laurent

    2003-12-01

Digital image processing algorithms are usually designed for the raw format, that is, for an uncompressed representation of the image. Therefore, prior to transforming or processing a compressed format, decompression is applied; the result of the processing application is then re-compressed for further transfer or storage. The change of data representation is resource-consuming in terms of computation, time and memory usage. In the wide format printing industry, this problem becomes an important issue: e.g. a 1 m2 input color image, scanned at 600 dpi, exceeds 1.6 GB in its raw representation. However, some image processing algorithms can be performed in the compressed domain, by applying an equivalent operation on the compressed format. This paper presents an innovative application of the halftoning processing operation by screening, to be applied on a JPEG-compressed image. This compressed-domain transform is performed by computing the threshold operation of the screening algorithm in the DCT domain. The algorithm is illustrated by examples for different halftone masks. A pre-sharpening operation, applied on a JPEG-compressed low quality image, is also described; it de-noises the image and enhances its contours.

  4. Adiabatic compression of elongated field-reversed configurations

    Spencer, R.L.; Tuszewski, M.; Linford, R.K.

    1983-06-01

    The adiabatic compression of an elongated field-reversed configuration (FRC) is computed by using a one-dimensional approximation. The one-dimensional results are checked against a two-dimensional equilibrium code. For ratios of FRC separatrix length to separatrix radius greater than about ten, the one-dimensional results are accurate within 10%. To this accuracy, the adiabatic compression of FRC's can be described by simple analytic formulas.

  6. Robust steganographic method utilizing properties of MJPEG compression standard

    Jakub Oravec

    2015-06-01

This article presents the design of a steganographic method which uses a video container as cover data. The video track was recorded by a webcam and further encoded with the MJPEG compression standard. The proposed method also takes into account the effects of lossy compression. The embedding process is realized by switching the places of transform coefficients, which are computed by the Discrete Cosine Transform. The article discusses the possibilities, techniques used, and advantages and drawbacks of the chosen solution. The results are presented at the end of the article.
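The embedding step described above, hiding a bit by switching the places of two transform coefficients, can be sketched as below. A plain list of quantized coefficients stands in for a real MJPEG pipeline, and the chosen coefficient pair is an illustrative assumption:

```python
# Encode one bit per block in the relative order of two mid-band DCT
# coefficients: swap them when the current order disagrees with the bit.

C1, C2 = 4, 5   # indices (in zigzag order, say) of the swappable pair

def embed_bit(block, bit):
    blk = list(block)
    if (blk[C1] > blk[C2]) != bit:      # order encodes the bit
        blk[C1], blk[C2] = blk[C2], blk[C1]
    return blk

def extract_bit(block):
    return block[C1] > block[C2]

block = [90, -3, 7, 0, 2, 6, 0, 1]      # toy quantized coefficients
for bit in (True, False):
    assert extract_bit(embed_bit(block, bit)) == bit
print("round trip ok")
```

A practical scheme must also skip blocks where the two coefficients are equal and account for requantization during re-encoding, which is why the paper considers the effects of lossy compression.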

  7. Unstable oscillation of tubular cantilevered beams conveying a compressible fluid

    Johnson, R.O.; Stoneking, J.E.; Carley, T.G.

    1986-01-01

This paper is concerned with establishing the conditions of stability of a cantilevered tube conveying a compressible fluid. Solutions to Niordson's eigenvalue problem associated with the equations of motion are computed using Muller's method. The effects of compressibility on the critical velocity, accommodated by specifying the tube aspect ratio and the fluid sonic velocity, are studied parametrically. Aspect ratio is found to have a more pronounced effect on critical velocity than sonic velocity over the parameter range considered. (orig.)
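Muller's method, the root finder used above, fits a parabola through three iterates and steps to the parabola's root nearest the last point; because it works in complex arithmetic, it can locate complex eigenvalues from real starting guesses. A generic sketch, demonstrated on a cubic rather than Niordson's problem:

```python
# Muller's method for f(x) = 0. At each step, interpolate a quadratic
# through (x0, x1, x2) and move to its nearer root.

import cmath

def muller(f, x0, x1, x2, tol=1e-12, max_iter=100):
    for _ in range(max_iter):
        f0, f1, f2 = f(x0), f(x1), f(x2)
        h1, h2 = x1 - x0, x2 - x1
        d1, d2 = (f1 - f0) / h1, (f2 - f1) / h2
        a = (d2 - d1) / (h2 + h1)
        b = a * h2 + d2
        c = f2
        disc = cmath.sqrt(b * b - 4 * a * c)
        den = b + disc if abs(b + disc) > abs(b - disc) else b - disc
        dx = -2 * c / den              # root of the quadratic nearest x2
        x0, x1, x2 = x1, x2, x2 + dx
        if abs(dx) < tol:
            return x2
    return x2

root = muller(lambda x: x**3 - x - 1, 0.0, 1.0, 2.0)
print(root.real)   # ~1.3247 (the real root of x^3 - x - 1)
```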

  8. Dynamic mode decomposition for compressive system identification

    Bai, Zhe; Kaiser, Eurika; Proctor, Joshua L.; Kutz, J. Nathan; Brunton, Steven L.

    2017-11-01

    Dynamic mode decomposition has emerged as a leading technique to identify spatiotemporal coherent structures from high-dimensional data. In this work, we integrate and unify two recent innovations that extend DMD to systems with actuation and systems with heavily subsampled measurements. When combined, these methods yield a novel framework for compressive system identification, where it is possible to identify a low-order model from limited input-output data and reconstruct the associated full-state dynamic modes with compressed sensing, providing interpretability of the state of the reduced-order model. When full-state data is available, it is possible to dramatically accelerate downstream computations by first compressing the data. We demonstrate this unified framework on simulated data of fluid flow past a pitching airfoil, investigating the effects of sensor noise, different types of measurements (e.g., point sensors, Gaussian random projections, etc.), compression ratios, and different choices of actuation (e.g., localized, broadband, etc.). This example provides a challenging and realistic test-case for the proposed method, and results indicate that the dominant coherent structures and dynamics are well characterized even with heavily subsampled data.

  9. Accurate phylogenetic tree reconstruction from quartets: a heuristic approach.

    Reaz, Rezwana; Bayzid, Md Shamsuzzoha; Rahman, M Sohel

    2014-01-01

Supertree methods construct trees on a set of taxa (species) by combining many smaller trees on overlapping subsets of the entire set of taxa. A 'quartet' is an unrooted tree over 4 taxa, hence quartet-based supertree methods combine many 4-taxon unrooted trees into a single coherent tree over the complete set of taxa. Quartet-based phylogeny reconstruction methods have been receiving considerable attention in recent years. An accurate and efficient quartet-based method might be competitive with the current best phylogenetic tree reconstruction methods (such as maximum likelihood or Bayesian MCMC analyses), without being as computationally intensive. In this paper, we present a novel and highly accurate quartet-based phylogenetic tree reconstruction method. We performed an extensive experimental study to evaluate the accuracy and scalability of our approach on both simulated and biological datasets.

  10. Decision Tree Technique for Particle Identification

    Quiller, Ryan

    2003-01-01

Particle identification based on measurements such as the Cerenkov angle, momentum, and the rate of energy loss per unit distance (-dE/dx) is fundamental to the BaBar detector for particle physics experiments. It is particularly important to separate the charged forms of kaons and pions. Currently, the Neural Net, an algorithm based on mapping input variables to an output variable using hidden variables as intermediaries, is one of the primary tools used for identification. In this study, a decision tree classification technique implemented in the computer program CART was investigated and compared to the Neural Net over the range of momenta from 0.25 GeV/c to 5.0 GeV/c. For a given subinterval of momentum, three decision trees were made using different sets of input variables. The sensitivity and specificity were calculated for varying kaon acceptance thresholds. These data were used to plot Receiver Operating Characteristic curves (ROC curves) to compare the performance of the classification methods. Also, the input variables used in constructing the decision trees were analyzed. It was found that the Neural Net was a significant contributor to decision trees using dE/dx and the Cerenkov angle as inputs. Furthermore, the Neural Net had poorer performance than the decision tree technique, but tended to improve decision tree performance when used as an input variable. These results suggest that the decision tree technique using Neural Net input may increase the accuracy of particle identification in BaBar.
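The ROC comparison described above reduces to sweeping an acceptance threshold over classifier scores and recording (false positive rate, sensitivity) pairs. A minimal sketch with fabricated scores and labels, not BaBar data:

```python
# ROC points from classifier scores: at each threshold t, accept all
# candidates with score >= t and record (FPR, TPR) = (fp/neg, tp/pos).

def roc_points(scores, labels, thresholds):
    pts = []
    pos = sum(labels)
    neg = len(labels) - pos
    for t in thresholds:
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and not y)
        pts.append((fp / neg, tp / pos))   # (false positive rate, sensitivity)
    return pts

scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1]   # e.g. a kaon likelihood
labels = [1,   1,   0,   1,   0,   1,   0,   0]      # 1 = true kaon
print(roc_points(scores, labels, [0.5, 0.25]))
```

Plotting these pairs over a fine threshold grid yields the ROC curve; the closer it hugs the top-left corner, the better the classifier.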

  11. Topological and categorical properties of binary trees

    H. Pajoohesh

    2008-04-01

Binary trees are very useful tools in computer science for estimating the running time of so-called comparison-based algorithms, algorithms in which every action is ultimately based on a prior comparison between two elements. For two given algorithms A and B where the decision tree of A is more balanced than that of B, it is known that the average and worst-case times of A will be better than those of B, i.e., T_A(n) ≤ T_B(n) in the average case and T^W_A(n) ≤ T^W_B(n) in the worst case. Thus the most balanced and the most imbalanced binary trees play a main role. Here we consider them as semilattices and characterize the most balanced and the most imbalanced binary trees by topological and categorical properties. Also we define the composition of binary trees as a commutative binary operation, *, such that for binary trees A and B, A * B is the binary tree obtained by attaching a copy of B to every leaf of A. We show that (T, *) is a commutative po-monoid and investigate its properties.
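The gap between the two extreme shapes is easy to quantify: with n leaves, a fully balanced decision tree has average leaf depth about log2(n), while a path-like tree has average depth about n/2. A sketch comparing the two (tree shapes built ad hoc, with nested tuples for internal nodes and integers for leaves):

```python
# Average leaf depth of the most balanced vs. the most imbalanced binary
# tree on n leaves -- the quantity that bounds average comparison counts.

def leaf_depths(tree, d=0):
    if not isinstance(tree, tuple):
        return [d]
    return [x for child in tree for x in leaf_depths(child, d + 1)]

def balanced(n):
    if n == 1:
        return 0
    return (balanced(n // 2), balanced(n - n // 2))

def path_like(n):
    tree = 0
    for i in range(1, n):
        tree = (tree, i)
    return tree

n = 16
bal = leaf_depths(balanced(n))
imb = leaf_depths(path_like(n))
print(sum(bal) / n, sum(imb) / n)   # balanced average beats imbalanced
```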

  12. Coalescent-based species tree inference from gene tree topologies under incomplete lineage sorting by maximum likelihood.

    Wu, Yufeng

    2012-03-01

    Incomplete lineage sorting can cause incongruence between the phylogenetic history of genes (the gene tree) and that of the species (the species tree), which can complicate the inference of phylogenies. In this article, I present a new coalescent-based algorithm for species tree inference with maximum likelihood. I first describe an improved method for computing the probability of a gene tree topology given a species tree, which is much faster than an existing algorithm by Degnan and Salter (2005). Based on this method, I develop a practical algorithm that takes a set of gene tree topologies and infers species trees with maximum likelihood. This algorithm searches for the best species tree by starting from initial species trees and performing heuristic search to obtain better trees with higher likelihood. This algorithm, called STELLS (which stands for Species Tree InfErence with Likelihood for Lineage Sorting), has been implemented in a program that is downloadable from the author's web page. The simulation results show that the STELLS algorithm is more accurate than an existing maximum likelihood method for many datasets, especially when there is noise in gene trees. I also show that the STELLS algorithm is efficient and can be applied to real biological datasets. © 2011 The Author. Evolution© 2011 The Society for the Study of Evolution.

  13. Tree-root control of shallow landslides

    Cohen, Denis; Schwarz, Massimiliano

    2017-08-01

    Tree roots have long been recognized to increase slope stability by reinforcing the strength of soils. Slope stability models usually include the effects of roots by adding an apparent cohesion to the soil to simulate root strength. No model includes the combined effects of root distribution heterogeneity, stress-strain behavior of root reinforcement, or root strength in compression. Recent field observations, however, indicate that shallow landslide triggering mechanisms are characterized by differential deformation that indicates localized activation of zones in tension, compression, and shear in the soil. Here we describe a new model for slope stability that specifically considers these effects. The model is a strain-step discrete element model that reproduces the self-organized redistribution of forces on a slope during rainfall-triggered shallow landslides. We use a conceptual sigmoidal-shaped hillslope with a clearing in its center to explore the effects of tree size, spacing, weak zones, maximum root-size diameter, and different root strength configurations. Simulation results indicate that tree roots can stabilize slopes that would otherwise fail without them and, in general, higher root density with higher root reinforcement results in a more stable slope. The variation in root stiffness with diameter can, in some cases, invert this relationship. Root tension provides more resistance to failure than root compression but roots with both tension and compression offer the best resistance to failure. Lateral (slope-parallel) tension can be important in cases when the magnitude of this force is comparable to the slope-perpendicular tensile force. In this case, lateral forces can bring to failure tree-covered areas with high root reinforcement. Slope failure occurs when downslope soil compression reaches the soil maximum strength. When this occurs depends on the amount of root tension upslope in both the slope-perpendicular and slope-parallel directions. 

  15. Steiner trees for fixed orientation metrics

    Brazil, Marcus; Zachariasen, Martin

    2009-01-01

We consider the problem of constructing Steiner minimum trees for a metric defined by a polygonal unit circle (corresponding to s ≥ 2 weighted legal orientations in the plane). A linear-time algorithm to enumerate all angle configurations for degree three Steiner points is given. We provide...... a simple proof that the angle configuration for a Steiner point extends to all Steiner points in a full Steiner minimum tree, such that at most six orientations suffice for edges in a full Steiner minimum tree. We show that the concept of canonical forms originally introduced for the uniform orientation...... metric generalises to the fixed orientation metric. Finally, we give an O(s n) time algorithm to compute a Steiner minimum tree for a given full Steiner topology with n terminal leaves....

  16. Fault tree analysis of a research reactor

    Hall, J.A.; O'Dacre, D.F.; Chenier, R.J.; Arbique, G.M.

    1986-08-01

Fault Tree Analysis Techniques have been used to assess the safety system of the ZED-2 Research Reactor at the Chalk River Nuclear Laboratories. This turned out to be a strong test of the techniques involved. The resulting fault tree was large and, because of inter-links in the system structure, the tree was not modularized. In addition, comprehensive documentation was required. After a brief overview of the reactor and the analysis, this paper concentrates on the computer tools that made the job work. Two types of tools were needed: text editing and forms-management capability for large volumes of component and system data, and the fault tree codes themselves. The solutions (and failures) are discussed, along with the tools we are already developing for the next analysis.
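The numerical core of a fault tree code is small: gates combine basic-event probabilities, with P(AND) the product of its inputs and P(OR) equal to 1 minus the product of the complements (assuming independent events). A minimal evaluator on an invented tree, not the ZED-2 safety system:

```python
# Evaluate the top-event probability of a simple AND/OR fault tree under
# the independence assumption. Event names and probabilities are made up.

def prob(node, basic):
    if isinstance(node, str):                 # a basic event
        return basic[node]
    gate, children = node
    ps = [prob(c, basic) for c in children]
    if gate == "AND":
        out = 1.0
        for p in ps:
            out *= p
        return out
    if gate == "OR":
        out = 1.0
        for p in ps:
            out *= (1.0 - p)
        return 1.0 - out
    raise ValueError(gate)

basic = {"pump_fails": 0.01, "valve_stuck": 0.02, "sensor_dead": 0.05}
top = ("OR", [("AND", ["pump_fails", "valve_stuck"]), "sensor_dead"])
print(prob(top, basic))   # dominated by the single sensor failure
```

Production codes of the kind the paper discusses go much further (minimal cut sets, common-cause handling, large inter-linked trees), but the gate arithmetic is this.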

  17. PROGRAM HTVOL: The Determination of Tree Crown Volume by Layers

    Joseph C. Mawson; Jack Ward Thomas; Richard M. DeGraaf

    1976-01-01

A FORTRAN IV computer program calculates, from a few field measurements, the crown volume of trees or large shrubs in layers of a specified thickness. Each tree is assigned one of 15 solid forms, formed by using one of five side shapes (a circle, an ellipse, a neiloid, a triangle, or a parabola-like shape) and one of three bottom shapes (a...

  18. Introduction to parallel algorithms and architectures arrays, trees, hypercubes

    Leighton, F Thomson

    1991-01-01

Introduction to Parallel Algorithms and Architectures: Arrays, Trees, Hypercubes provides an introduction to the expanding field of parallel algorithms and architectures. This book focuses on parallel computation involving the most popular network architectures, namely, arrays, trees, hypercubes, and some closely related networks. Organized into three chapters, this book begins with an overview of the simplest architectures of arrays and trees. This text then presents the structures and relationships between the dominant network architectures, as well as the most efficient parallel algorithms for

  19. Fast Image Texture Classification Using Decision Trees

    Thompson, David R.

    2011-01-01

    Texture analysis would permit improved autonomous, onboard science data interpretation for adaptive navigation, sampling, and downlink decisions. These analyses would assist with terrain analysis and instrument placement in both macroscopic and microscopic image data products. Unfortunately, most state-of-the-art texture analysis demands computationally expensive convolutions of filters involving many floating-point operations. This makes them infeasible for radiation- hardened computers and spaceflight hardware. A new method approximates traditional texture classification of each image pixel with a fast decision-tree classifier. The classifier uses image features derived from simple filtering operations involving integer arithmetic. The texture analysis method is therefore amenable to implementation on FPGA (field-programmable gate array) hardware. Image features based on the "integral image" transform produce descriptive and efficient texture descriptors. Training the decision tree on a set of training data yields a classification scheme that produces reasonable approximations of optimal "texton" analysis at a fraction of the computational cost. A decision-tree learning algorithm employing the traditional k-means criterion of inter-cluster variance is used to learn tree structure from training data. The result is an efficient and accurate summary of surface morphology in images. This work is an evolutionary advance that unites several previous algorithms (k-means clustering, integral images, decision trees) and applies them to a new problem domain (morphology analysis for autonomous science during remote exploration). Advantages include order-of-magnitude improvements in runtime, feasibility for FPGA hardware, and significant improvements in texture classification accuracy.
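The "integral image" transform the classifier's features rely on is a summed-area table: after one linear pass, the sum over any rectangular region comes from four table lookups, which is what makes per-pixel features cheap enough for radiation-hardened or FPGA targets. A minimal sketch on a made-up image:

```python
# Integral image (summed-area table): ii[y][x] holds the sum of all pixels
# above and to the left, with a zero-padded border row and column so that
# any rectangle sum is four lookups and integer arithmetic only.

def integral_image(img):
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        for x in range(w):
            ii[y + 1][x + 1] = (img[y][x] + ii[y][x + 1]
                                + ii[y + 1][x] - ii[y][x])
    return ii

def rect_sum(ii, x0, y0, x1, y1):
    """Sum of img[y0:y1][x0:x1] from four table lookups."""
    return ii[y1][x1] - ii[y0][x1] - ii[y1][x0] + ii[y0][x0]

img = [
    [1, 2, 3],
    [4, 5, 6],
    [7, 8, 9],
]
ii = integral_image(img)
print(rect_sum(ii, 1, 1, 3, 3))   # 5 + 6 + 8 + 9 = 28
```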

  20. Algorithmically specialized parallel computers

    Snyder, Lawrence; Gannon, Dennis B

    1985-01-01

    Algorithmically Specialized Parallel Computers focuses on the concept and characteristics of an algorithmically specialized computer. This book discusses algorithmically specialized computers, algorithmic specialization using VLSI, and innovative architectures. The architectures and algorithms for digital signal, speech, and image processing and specialized architectures for numerical computations are also elaborated. Other topics include the model for analyzing generalized inter-processor, pipelined architecture for search tree maintenance, and specialized computer organization for raster

  1. Parallel compression of data chunks of a shared data object using a log-structured file system

    Bent, John M.; Faibish, Sorin; Grider, Gary

    2016-10-25

    Techniques are provided for parallel compression of data chunks being written to a shared object. A client executing on a compute node or a burst buffer node in a parallel computing system stores a data chunk generated by the parallel computing system to a shared data object on a storage node by compressing the data chunk; and providing the compressed data chunk to the storage node that stores the shared object. The client and storage node may employ Log-Structured File techniques. The compressed data chunk can be de-compressed by the client when the data chunk is read. A storage node stores a data chunk as part of a shared object by receiving a compressed version of the data chunk from a compute node; and storing the compressed version of the data chunk to the shared data object on the storage node.
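
The chunked scheme can be sketched with ordinary zlib compression; this is a hedged illustration, not the patented Log-Structured File implementation (zlib and the thread pool are stand-ins for whatever codec and parallelism the burst buffer nodes actually use):

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

def compress_chunk(chunk: bytes) -> bytes:
    # Each chunk is compressed independently, so chunks can be written
    # to -- and later read back from -- the shared object in parallel.
    return zlib.compress(chunk, 6)

def parallel_compress(data: bytes, chunk_size: int = 1 << 16):
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with ThreadPoolExecutor() as pool:   # zlib releases the GIL while compressing
        return list(pool.map(compress_chunk, chunks))

data = b"sensor reading 42\n" * 50_000
compressed = parallel_compress(data)
restored = b"".join(zlib.decompress(c) for c in compressed)
assert restored == data
```

Because each compressed chunk is self-contained, a reader can decompress only the chunks it needs, mirroring the per-chunk read path described in the abstract.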

  2. Compressive Transient Imaging

    Sun, Qilin

    2017-04-01

    High resolution transient/3D imaging technology is of high interest in both scientific research and commercial application. Nowadays, all of the transient imaging methods suffer from low resolution or time-consuming mechanical scanning. We proposed a new method based on TCSPC and Compressive Sensing to achieve high resolution transient imaging with a capture process of only several seconds. A picosecond laser sends a series of equal-interval pulses while the synchronized SPAD camera's detection gate window has a precise phase delay at each cycle. After capturing enough points, we are able to assemble a whole signal. By inserting a DMD device into the system, we are able to modulate all the frames of data using binary random patterns to later reconstruct a super-resolution transient/3D image. Because the low fill factor of the SPAD sensor makes a compressive sensing scenario ill-conditioned, we designed and fabricated a diffractive microlens array. We proposed a new CS reconstruction algorithm which is also able to denoise measurements suffering from Poisson noise. Instead of a single SPAD sensor, we chose a SPAD array because it drastically reduces the required number of measurements and the reconstruction time. Furthermore, it is not easy to reconstruct a high resolution image with only one single sensor, while an array needs only to reconstruct small patches from a few measurements. In this thesis, we evaluated the reconstruction methods using both clean measurements and versions corrupted by Poisson noise. The results show how the integration over the layers influences the image quality, and our algorithm works well when the measurements suffer from non-trivial Poisson noise. It is a breakthrough in the areas of both transient imaging and compressive sensing.

  3. SeqCompress: an algorithm for biological sequence compression.

    Sardaraz, Muhammad; Tahir, Muhammad; Ikram, Ataul Aziz; Bajwa, Hassan

    2014-10-01

    The growth of Next Generation Sequencing technologies presents significant research challenges, specifically to design bioinformatics tools that handle massive amounts of data efficiently. The cost of storing biological sequence data has become a noticeable proportion of the total cost of its generation and analysis. In particular, the increase in the DNA sequencing rate is significantly outstripping the rate of increase in disk storage capacity, and may go beyond the limit of storage capacity. It is essential to develop algorithms that handle large data sets via better memory management. This article presents a DNA sequence compression algorithm, SeqCompress, that copes with the space complexity of biological sequences. The algorithm is based on lossless data compression and uses a statistical model as well as arithmetic coding to compress DNA sequences. The proposed algorithm is compared with recent specialized compression tools for biological sequences. Experimental results show that the proposed algorithm has better compression gain than other existing algorithms. Copyright © 2014 Elsevier Inc. All rights reserved.
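
SeqCompress itself combines a statistical model with arithmetic coding; as a much simpler baseline for comparison, plain 2-bit packing already achieves 4:1 on the ACGT alphabet (an illustrative sketch, not the paper's algorithm):

```python
CODE = {"A": 0, "C": 1, "G": 2, "T": 3}
BASE = "ACGT"

def pack(seq: str) -> bytes:
    # Pack 4 bases per byte (2 bits each); the caller keeps the length.
    out = bytearray()
    for i in range(0, len(seq), 4):
        group = seq[i:i + 4]
        b = 0
        for ch in group:
            b = (b << 2) | CODE[ch]
        b <<= 2 * (4 - len(group))   # left-align a final partial group
        out.append(b)
    return bytes(out)

def unpack(data: bytes, n: int) -> str:
    bases = []
    for b in data:
        for shift in (6, 4, 2, 0):
            bases.append(BASE[(b >> shift) & 3])
    return "".join(bases[:n])

s = "ACGTTGCA" * 3
assert unpack(pack(s), len(s)) == s
```

A statistical model plus arithmetic coder improves on this fixed 2 bits/base by spending fewer bits on the more predictable symbols.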

  4. Comparative data compression techniques and multi-compression results

    Hasan, M R; Ibrahimy, M I; Motakabber, S M A; Ferdaus, M M; Khan, M N H

    2013-01-01

    Data compression is very necessary in business data processing because of the cost savings it offers and the large volume of data manipulated in many business applications. It is a method or system for transmitting a digital image (i.e., an array of pixels) from a digital data source to a digital data receiver. The smaller the data, the better the transmission speed and the more time saved. In communication, we always want to transmit data efficiently and free of noise. This paper provides some compression techniques for lossless text-type data compression and comparative results of multiple and single compression, which will help to find better compression output and to develop compression algorithms
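
The single- versus multiple-compression comparison can be reproduced with the standard lossless codecs shipped with Python (a generic illustration; the paper's own technique set may differ):

```python
import bz2, lzma, zlib

text = ("the quick brown fox jumps over the lazy dog. " * 200).encode()

for name, fn in [("zlib", zlib.compress), ("bz2", bz2.compress), ("lzma", lzma.compress)]:
    once = fn(text)    # single compression
    twice = fn(once)   # compressing the already-compressed output again
    print(f"{name}: {len(text)} -> {len(once)} -> {len(twice)} bytes")
```

A second pass over already-compressed output typically yields little or no further gain, since the first pass has removed most of the redundancy.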

  5. TREDRA, Minimal Cut Sets Fault Tree Plot Program

    Fussell, J.B.

    1983-01-01

    1 - Description of problem or function: TREDRA is a computer program for drafting report-quality fault trees. The input to TREDRA is similar to input for standard computer programs that find minimal cut sets from fault trees. Output includes fault tree plots containing all standard fault tree logic and event symbols, gate and event labels, and an output description for each event in the fault tree. TREDRA contains the following features: a variety of program options that allow flexibility in the program output; capability for automatic pagination of the output fault tree, when necessary; input groups which allow labeling of gates, events, and their output descriptions; a symbol library which includes standard fault tree symbols plus several less frequently used symbols; user control of character size and overall plot size; and extensive input error checking and diagnostic-oriented output. 2 - Method of solution: Fault trees are generated by user-supplied control parameters and a coded description of the fault tree structure consisting of the name of each gate, the gate type, the number of inputs to the gate, and the names of these inputs. 3 - Restrictions on the complexity of the problem: TREDRA can produce fault trees with a minimum of 3 and a maximum of 56 levels. The width of each level may range from 3 to 37. A total of 50 transfers is allowed during pagination.
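
TREDRA's coded tree description (gate name, gate type, inputs) is the same input from which minimal-cut-set programs work; a toy MOCUS-style expansion over such a description might look like this (illustrative only — TREDRA itself drafts the tree, it does not compute cut sets):

```python
# Gate description: name -> (type, [inputs]); names absent from GATES are
# basic events. The gate names and events here are invented for illustration.
GATES = {
    "TOP": ("OR",  ["G1", "E3"]),
    "G1":  ("AND", ["E1", "E2"]),
}

def cut_sets(name):
    if name not in GATES:                      # basic event
        return [frozenset([name])]
    gtype, inputs = GATES[name]
    if gtype == "OR":                          # union of the inputs' cut sets
        result = [cs for inp in inputs for cs in cut_sets(inp)]
    else:                                      # AND: cross-product of cut sets
        result = [frozenset()]
        for inp in inputs:
            result = [a | b for a in result for b in cut_sets(inp)]
    # keep only minimal sets (no other set strictly contained in them)
    return [c for c in result if not any(o < c for o in result)]

print(sorted(map(sorted, cut_sets("TOP"))))   # -> [['E1', 'E2'], ['E3']]
```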

  6. The valuative tree

    Favre, Charles

    2004-01-01

    This volume is devoted to a beautiful object, called the valuative tree and designed as a powerful tool for the study of singularities in two complex dimensions. Its intricate yet manageable structure can be analyzed by both algebraic and geometric means. Many types of singularities, including those of curves, ideals, and plurisubharmonic functions, can be encoded in terms of positive measures on the valuative tree. The construction of these measures uses a natural tree Laplace operator of independent interest.

  7. Analysis by compression

    Meredith, David

    MEL is a geometric music encoding language designed to allow for musical objects to be encoded parsimoniously as sets of points in pitch-time space, generated by performing geometric transformations on component patterns. MEL has been implemented in Java and coupled with the SIATEC pattern...... discovery algorithm to allow for compact encodings to be generated automatically from in extenso note lists. The MEL-SIATEC system is founded on the belief that music analysis and music perception can be modelled as the compression of in extenso descriptions of musical objects....

  8. Compressive Fatigue in Wood

    Clorius, Christian Odin; Pedersen, Martin Bo Uhre; Hoffmeyer, Preben

    1999-01-01

    An investigation of fatigue failure in wood subjected to load cycles in compression parallel to grain is presented. Small clear specimens of spruce are taken to failure in square wave formed fatigue loading at a stress excitation level corresponding to 80% of the short term strength. Four...... frequencies ranging from 0.01 Hz to 10 Hz are used. The number of cycles to failure is found to be a poor measure of the fatigue performance of wood. Creep, maximum strain, stiffness and work are monitored throughout the fatigue tests. Accumulated creep is suggested identified with damage and a correlation...

  9. Compressive full waveform lidar

    Yang, Weiyi; Ke, Jun

    2017-05-01

    To avoid a high-bandwidth detector, a fast A/D converter, and a large memory disk, a compressive full waveform LIDAR system, which uses a temporally modulated laser instead of a pulsed laser, is studied in this paper. Full waveform data from NEON (National Ecological Observatory Network) are used. Random binary patterns are used to modulate the source. To achieve 0.15 m ranging resolution, a 100 MSPS A/D converter is assumed to make measurements. The SPIRAL algorithm with a canonical basis is employed when Poisson noise is considered in the low-illumination condition.

  10. Metal Hydride Compression

    Johnson, Terry A. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Bowman, Robert [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Smith, Barton [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Anovitz, Lawrence [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Jensen, Craig [Hawaii Hydrogen Carriers LLC, Honolulu, HI (United States)

    2017-07-01

    Conventional hydrogen compressors often contribute over half of the cost of hydrogen stations, have poor reliability, and have insufficient flow rates for a mature FCEV market. Fatigue associated with their moving parts, including cracking of diaphragms and failure of seals, leads to failure in conventional compressors, which is exacerbated by the repeated starts and stops expected at fueling stations. Furthermore, the conventional lubrication of these compressors with oil is generally unacceptable at fueling stations due to potential fuel contamination. Metal hydride (MH) technology offers a very good alternative to both conventional (mechanical) and newly developed (electrochemical, ionic liquid piston) methods of hydrogen compression. Advantages of MH compression include simplicity in design and operation, absence of moving parts, compactness, safety and reliability, and the possibility of utilizing waste industrial heat to power the compressor. Beyond conventional H2 supplies from pipelines or tanker trucks, another attractive scenario is on-site generation, pressurization, and delivery of pure H2 at pressure (≥ 875 bar) for refueling vehicles at electrolysis, wind, or solar production facilities in distributed locations that are too remote or widely dispersed for cost-effective bulk transport. MH hydrogen compression utilizes a reversible heat-driven interaction of a hydride-forming metal alloy with hydrogen gas to form the MH phase and is a promising process for hydrogen energy applications [1,2]. To deliver hydrogen continuously, each stage of the compressor must consist of multiple MH beds with synchronized hydrogenation and dehydrogenation cycles. Multistage pressurization allows achievement of greater compression ratios using reduced temperature swings compared to single-stage compressors. The objectives of this project are to investigate and demonstrate on a laboratory scale a two-stage MH hydrogen (H2) gas compressor with a

  11. Coded Splitting Tree Protocols

    Sørensen, Jesper Hemming; Stefanovic, Cedomir; Popovski, Petar

    2013-01-01

    This paper presents a novel approach to multiple access control called coded splitting tree protocol. The approach builds on the known tree splitting protocols, code structure and successive interference cancellation (SIC). Several instances of the tree splitting protocol are initiated, each...... instance is terminated prematurely and subsequently iterated. The combined set of leaves from all the tree instances can then be viewed as a graph code, which is decodable using belief propagation. The main design problem is determining the order of splitting, which enables successful decoding as early...

  12. Morocco - Fruit Tree Productivity

    Millennium Challenge Corporation — Date Tree Irrigation Project: The specific objectives of this evaluation are threefold: - Performance evaluation of project activities, like the mid-term evaluation,...

  13. On Tree Pattern Matching by Pushdown Automata

    T. Flouri

    2009-01-01

    Tree pattern matching is an important operation in Computer Science on which a number of tasks, such as mechanical theorem proving, term rewriting, symbolic computation and non-procedural programming languages, are based. Work has begun on a systematic approach to the construction of tree pattern matchers by deterministic pushdown automata which read subject trees in prefix notation. The method is analogous to the construction of string pattern matchers: for given patterns, a non-deterministic pushdown automaton is created and then it is determinised. In this first paper, we present the proposed non-deterministic pushdown automaton which will serve as a basis for the determinisation process, and prove its correctness.
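
The prefix-notation setting can be illustrated without the automaton machinery: with known symbol arities, one pass over the prefix string can test for pattern occurrences, a wildcard consuming exactly one subtree (a naive sketch of what the determinised pushdown automaton achieves in a single scan; the symbols, arities, and wildcard convention below are made up for illustration):

```python
ARITY = {"f": 2, "g": 1, "a": 0, "b": 0}   # ranked alphabet; "S" = subtree wildcard

def subtree_end(t, i):
    # Index one past the subtree starting at position i of a prefix string.
    need = 1
    while need:
        need += ARITY[t[i]] - 1
        i += 1
    return i

def matches_at(subject, pattern, i):
    j = 0
    while j < len(pattern):
        if i >= len(subject):
            return False
        if pattern[j] == "S":
            i = subtree_end(subject, i)    # wildcard consumes one whole subtree
        elif subject[i] == pattern[j]:
            i += 1
        else:
            return False
        j += 1
    return True

subject = list("fgafab")    # f(g(a), f(a, b)) in prefix notation
pattern = ["f", "S", "b"]   # f(<any subtree>, b)
print([i for i in range(len(subject)) if matches_at(subject, pattern, i)])  # -> [3]
```

The pushdown store in the paper's automaton plays the role of the `need` counter here, tracking how many subtrees remain to be read.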

  14. Selecting reference cities for i-Tree Streets

    E.G. McPherson

    2010-01-01

    The i-Tree Streets (formerly STRATUM) computer program quantifies municipal forest structure, function, and value using tree growth and geographic data from sixteen U.S. reference cities, one for each of sixteen climate zones. Selecting the reference city that best matches a subject city is problematic when the subject city is outside the U.S., lies on the border...

  15. Optimal tree-stem bucking of northeastern species of China

    Jingxin Wang; Chris B. LeDoux; Joseph McNeel

    2004-01-01

    An application of optimal tree-stem bucking to the northeastern tree species of China is reported. The bucking procedures used in this region are summarized, which are the basic guidelines for the optimal bucking design. The directed graph approach was adopted to generate the bucking patterns by using the network analysis labeling algorithm. A computer-based bucking...
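
The directed-graph formulation of bucking reduces to a best-path computation over cut positions; a dynamic-programming sketch with a hypothetical price table and a 1 m cut grid (both invented for illustration, not the paper's data):

```python
# Stem of length 10 m, cuts allowed at 1 m marks; log_value(i, j) is the
# value of a log cut from position i to j (hypothetical prices favour 4 m logs).
LENGTH = 10

def log_value(i, j):
    return {2: 10, 3: 18, 4: 30}.get(j - i, 0)

# best[i] = best total value of bucking the stem from position i to the end;
# this is a longest path in the DAG whose nodes are cut positions.
best = [0.0] * (LENGTH + 1)
cut = [None] * (LENGTH + 1)
for i in range(LENGTH - 1, -1, -1):
    for j in range(i + 1, LENGTH + 1):
        v = log_value(i, j) + best[j]
        if v > best[i]:
            best[i], cut[i] = v, j

# recover the optimal cut positions
pos, cuts = 0, [0]
while pos < LENGTH and cut[pos] is not None:
    pos = cut[pos]
    cuts.append(pos)
print(best[0], cuts)   # -> 70.0 [0, 2, 6, 10]
```

The network labeling algorithm in the paper solves the same shortest/longest-path problem, generalized to real bucking constraints.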

  16. Local search for Steiner tree problems in graphs

    Verhoeven, M.G.A.; Severens, M.E.M.; Aarts, E.H.L.; Rayward-Smith, V.J.; Reeves, C.R.; Smith, G.D.

    1996-01-01

    We present a local search algorithm for the Steiner tree problem in graphs, which uses a neighbourhood in which paths in a Steiner tree are exchanged. The exchange function of this neighbourhood is based on a multiple-source shortest-path algorithm. We present computational results for a known

  17. MrEnt: an editor for publication-quality phylogenetic tree illustrations.

    Zuccon, Alessandro; Zuccon, Dario

    2014-09-01

    We developed MrEnt, a Windows-based, user-friendly software that allows the production of complex, high-resolution, publication-quality phylogenetic trees in a few steps, directly from the analysis output. The program recognizes the standard Nexus tree format and the annotated tree files produced by BEAST and MrBayes. MrEnt combines in a single software a large suite of tree manipulation functions (e.g. handling of multiple trees, tree rotation, character mapping, node collapsing, compression of large clades, handling of time scale and error bars for chronograms) with drawing tools typical of standard graphic editors, including handling of graphic elements and images. The tree illustration can be printed or exported in several standard formats suitable for journal publication, PowerPoint presentation or Web publication. © 2014 John Wiley & Sons Ltd.

  18. Algorithm for Compressing Time-Series Data

    Hawkins, S. Edward, III; Darlington, Edward Hugo

    2012-01-01

    An algorithm based on Chebyshev polynomials effects lossy compression of time-series data or other one-dimensional data streams (e.g., spectral data) that are arranged in blocks for sequential transmission. The algorithm was developed for use in transmitting data from spacecraft scientific instruments to Earth stations. In spite of its lossy nature, the algorithm preserves the information needed for scientific analysis. The algorithm is computationally simple, yet compresses data streams by factors much greater than two. The algorithm is not restricted to spacecraft or scientific uses: it is applicable to time-series data in general. The algorithm can also be applied to general multidimensional data that have been converted to time-series data, a typical example being image data acquired by raster scanning. However, unlike most prior image-data-compression algorithms, this algorithm neither depends on nor exploits the two-dimensional spatial correlations that are generally present in images. In order to understand the essence of this compression algorithm, it is necessary to understand that the net effect of this algorithm and the associated decompression algorithm is to approximate the original stream of data as a sequence of finite series of Chebyshev polynomials. For the purpose of this algorithm, a block of data or interval of time for which a Chebyshev polynomial series is fitted to the original data is denoted a fitting interval. Chebyshev approximation has two properties that make it particularly effective for compressing serial data streams with minimal loss of scientific information: The errors associated with a Chebyshev approximation are nearly uniformly distributed over the fitting interval (this is known in the art as the "equal error property"); and the maximum deviations of the fitted Chebyshev polynomial from the original data have the smallest possible values (this is known in the art as the "min-max property").
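
The fitting-interval idea can be demonstrated with NumPy's Chebyshev utilities: fit each block with a short Chebyshev series, transmit only the coefficients, and evaluate the series on the ground (a sketch, not the spacecraft implementation; the block content and degree are arbitrary):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# One "fitting interval": 256 samples approximated by 8 Chebyshev coefficients.
t = np.linspace(-1.0, 1.0, 256)
block = np.sin(3 * t) + 0.1 * t**2      # stand-in for one block of telemetry

coeffs = C.chebfit(t, block, deg=7)     # 8 coefficients: a 32:1 reduction
recon = C.chebval(t, coeffs)            # decompression on the ground

print("max abs error:", np.max(np.abs(block - recon)))
```

The near-uniform error distribution over the interval is the "equal error property" the abstract describes; raising the degree trades compression ratio for accuracy.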

  19. Biomedical sensor design using analog compressed sensing

    Balouchestani, Mohammadreza; Krishnan, Sridhar

    2015-05-01

    The main drawback of current healthcare systems is the location-specific nature of the system due to the use of fixed/wired biomedical sensors. Since biomedical sensors are usually driven by a battery, power consumption is the most important factor determining the life of a biomedical sensor. They are also restricted by size, cost, and transmission capacity. Therefore, it is important to reduce the load of sampling by merging the sampling and compression steps to reduce the storage usage, transmission times, and power consumption in order to expand the current healthcare systems to Wireless Healthcare Systems (WHSs). In this work, we present an implementation of a low-power biomedical sensor using an analog Compressed Sensing (CS) framework for sparse biomedical signals that addresses both the energy and telemetry bandwidth constraints of wearable and wireless Body-Area Networks (BANs). This architecture enables continuous data acquisition and compression of biomedical signals that are suitable for a variety of diagnostic and treatment purposes. At the transmitter side, an analog-CS framework is applied at the sensing step before the Analog to Digital Converter (ADC) in order to generate the compressed version of the input analog bio-signal. At the receiver side, a reconstruction algorithm based on the Restricted Isometry Property (RIP) condition is applied in order to reconstruct the original bio-signals from the compressed bio-signals with high probability and enough accuracy. We examine the proposed algorithm with healthy and neuropathy surface Electromyography (sEMG) signals. The proposed algorithm achieves a good Average Recognition Rate (ARR) of 93% and reconstruction accuracy of 98.9%. In addition, the proposed architecture reduces total computation time from 32 to 11.5 seconds at a sampling rate of 29% of the Nyquist rate, with Percentage Residual Difference (PRD) = 26% and Root Mean Squared Error (RMSE) = 3%.

  20. Schwarz-based algorithms for compressible flows

    Tidriri, M.D. [ICASE, Hampton, VA (United States)

    1996-12-31

    To compute steady compressible flows one often uses an implicit discretization approach which leads to a large sparse linear system that must be solved at each time step. In the derivation of this system one often uses a defect-correction procedure, in which the left-hand side of the system is discretized with a lower order approximation than that used for the right-hand side. This is due to storage considerations and computational complexity, and also to the fact that the resulting lower order matrix is better conditioned than the higher order matrix. The resulting schemes are only moderately implicit. In the case of structured, body-fitted grids, the linear system can easily be solved using approximate factorization (AF), which is among the most widely used methods for such grids. However, for unstructured grids, such techniques are no longer valid, and the system is solved using direct or iterative techniques. Because of the prohibitive computational costs and large memory requirements for the solution of compressible flows, iterative methods are preferred. In these defect-correction methods, which are implemented in most CFD computer codes, the mismatch in the right and left hand side operators, together with explicit treatment of the boundary conditions, lead to a severely limited CFL number, which results in a slow convergence to steady state aerodynamic solutions. Many authors have tried to replace explicit boundary conditions with implicit ones. Although they clearly demonstrate that high CFL numbers are possible, the reduction in CPU time is not clear cut.
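
The defect-correction idea (discretize the left-hand side at lower order than the right) can be caricatured on a plain linear system, where only the better-conditioned low-order operator is ever inverted; the matrices below are random stand-ins, not CFD operators:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
A_low = np.diag(np.arange(1.0, n + 1))               # low-order, well-conditioned
A_high = A_low + 0.01 * rng.standard_normal((n, n))  # higher-order operator
b = rng.standard_normal(n)

# Defect correction: iterate x += A_low^{-1} (b - A_high x), so the
# higher-order operator appears only through the residual on the right.
x = np.zeros(n)
for _ in range(100):
    x += np.linalg.solve(A_low, b - A_high @ x)

print("residual norm:", np.linalg.norm(b - A_high @ x))
```

The iteration converges when the low-order operator is a good enough preconditioner for the high-order one; the mismatch between the two is what limits the usable CFL number in the schemes discussed above.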

  1. Free compression tube. Applications

    Rusu, Ioan

    2012-11-01

    During flight, a vehicle's propulsion energy must overcome gravity, displace air masses along the vehicle trajectory, and cover both the energy lost to friction between the solid surface and the air and the kinetic energy of air masses reflected by the impact with the flying vehicle. Flight optimization by increasing speed and reducing fuel consumption has directed research in the aerodynamics field. Flying vehicle shapes obtained through wind tunnel studies optimize the impact with the air masses and the airflow along the vehicle. Through energy balance studies for vehicles in flight, the author Ioan Rusu directed his research towards reducing the energy lost at vehicle impact with air masses. In this respect, as compared to classical solutions of building flight vehicle aerodynamic surfaces that reduce the impact and friction with air masses, Ioan Rusu invented a device which he named the free compression tube for rockets, registered with the State Office for Inventions and Trademarks of Romania, OSIM, deposit f 2011 0352. Mounted in front of flight vehicles, it significantly reduces the impact and friction of air masses with the vehicle solid. The air masses come into contact with the air inside the free compression tube, and air-solid friction is eliminated and replaced by air-to-air friction.

  2. Photon compression in cylinders

    Ensley, D.L.

    1977-01-01

    It has been shown theoretically that intense microwave radiation is absorbed non-classically by a newly enunciated mechanism when interacting with hydrogen plasma. Fields > 1 MG, λ > 1 mm are within this regime. The predicted absorption, approximately P_rf·v_θ^e, has not yet been experimentally confirmed. The applications of such a coupling are many. If microwave bursts of approximately > 5 × 10^14 watts, 5 ns can be generated, the net generation of power from pellet fusion as well as various military applications becomes feasible. The purpose, then, for considering gas-gun photon compression is to obtain the above experimental capability by converting the gas kinetic energy directly into microwave form. Energies of > 10^5 joules cm^-2 and powers of > 10^13 watts cm^-2 are potentially available for photon interaction experiments using presently available technology. The following topics are discussed: microwave modes in a finite cylinder, injection, compression, switchout operation, and system performance parameter scaling

  3. Fingerprints in Compressed Strings

    Bille, Philip; Cording, Patrick Hagge; Gørtz, Inge Li

    2013-01-01

    The Karp-Rabin fingerprint of a string is a type of hash value that due to its strong properties has been used in many string algorithms. In this paper we show how to construct a data structure for a string S of size N compressed by a context-free grammar of size n that answers fingerprint queries. That is, given indices i and j, the answer to a query is the fingerprint of the substring S[i,j]. We present the first O(n) space data structures that answer fingerprint queries without decompressing any characters. For Straight Line Programs (SLP) we get O(logN) query time, and for Linear SLPs (an SLP derivative that captures LZ78 compression and its variations) we get O(loglogN) query time. Hence, our data structures have the same time and space complexity as for random access in SLPs. We utilize the fingerprint data structures to solve the longest common extension problem in query time O(logNlogℓ) and O...
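
For the uncompressed case, the fingerprint the paper builds on is the classical Karp-Rabin hash; prefix fingerprints give O(1) substring queries after O(N) preprocessing (the paper's contribution is answering the same queries on grammar-compressed strings without this linear-space table):

```python
# Karp-Rabin fingerprint: phi(s[i:j]) = sum of s[k] * R^(j-1-k) mod P.
# Prefix fingerprints let us fingerprint any substring in O(1).
P, R = (1 << 61) - 1, 256

def prefix_fingerprints(s: bytes):
    pre, pw = [0], [1]
    for ch in s:
        pre.append((pre[-1] * R + ch) % P)   # fingerprint of each prefix
        pw.append(pw[-1] * R % P)            # powers of R mod P
    return pre, pw

def fingerprint(pre, pw, i, j):
    # Fingerprint of s[i:j], from two prefix values and one power of R.
    return (pre[j] - pre[i] * pw[j - i]) % P

s = b"abracadabra"
pre, pw = prefix_fingerprints(s)
# equal substrings yield equal fingerprints: "abra" occurs at 0 and 7
assert fingerprint(pre, pw, 0, 4) == fingerprint(pre, pw, 7, 11)
```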

  4. Space-Efficient Re-Pair Compression

    Bille, Philip; Gørtz, Inge Li; Prezza, Nicola

    2017-01-01

    Re-Pair [5] is an effective grammar-based compression scheme achieving strong compression rates in practice. Let n, σ, and d be the text length, alphabet size, and dictionary size of the final grammar, respectively. In their original paper, the authors show how to compute the Re-Pair grammar...... in expected linear time and 5n + 4σ² + 4d + √n words of working space on top of the text. In this work, we propose two algorithms improving on the space of their original solution. Our model assumes a memory word of ⌈log₂ n⌉ bits and a re-writable input text composed of n such words. Our first algorithm runs...
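
A space-inefficient but readable version of the Re-Pair scheme itself (the paper's contribution is performing this computation in roughly n words of working space; the sketch below uses far more):

```python
from collections import Counter

def re_pair(seq):
    """Greedy Re-Pair: repeatedly replace the most frequent adjacent pair
    with a fresh nonterminal; returns the final sequence and the rules."""
    seq = list(seq)
    rules, next_sym = {}, 0
    while True:
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        pair, count = pairs.most_common(1)[0]
        if count < 2:
            break
        new = ("N", next_sym)        # fresh nonterminal symbol
        next_sym += 1
        rules[new] = pair
        out, i = [], 0
        while i < len(seq):          # left-to-right, non-overlapping replacement
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(new)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
    return seq, rules

def expand(sym, rules):
    if sym in rules:
        a, b = rules[sym]
        return expand(a, rules) + expand(b, rules)
    return [sym]

seq, rules = re_pair("abababab")
assert [c for s in seq for c in expand(s, rules)] == list("abababab")
```

Here "abababab" compresses to two nonterminals and two rules; decompression is a straightforward expansion of the grammar.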

  5. Compressive sensing scalp EEG signals: implementations and practical performance.

    Abdulghani, Amir M; Casson, Alexander J; Rodriguez-Villegas, Esther

    2012-11-01

    Highly miniaturised, wearable computing and communication systems allow unobtrusive, convenient and long term monitoring of a range of physiological parameters. For long term operation from the physically smallest batteries, the average power consumption of a wearable device must be very low. It is well known that the overall power consumption of these devices can be reduced by the inclusion of low power consumption, real-time compression of the raw physiological data in the wearable device itself. Compressive sensing is a new paradigm for providing data compression: it has shown significant promise in fields such as MRI, and is potentially suitable for use in wearable computing systems as the compression process required in the wearable device has a low computational complexity. However, the practical performance very much depends on the characteristics of the signal being sensed. As such, the utility of the technique cannot be extrapolated from one application to another. Long term electroencephalography (EEG) is a fundamental tool for the investigation of neurological disorders and is increasingly used in many non-medical applications, such as brain-computer interfaces. This article investigates in detail the practical performance of different implementations of the compressive sensing theory when applied to scalp EEG signals.
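
The compressive sensing pipeline the article evaluates can be sketched end to end: cheap random linear measurements at the sensor, sparse reconstruction at the receiver. The sketch below uses a synthetic sparse signal and orthogonal matching pursuit rather than real EEG or the article's solvers:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 128, 40, 3                     # signal length, measurements, sparsity

x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = [1.5, -2.0, 1.0]   # k-sparse stand-in signal
Phi = rng.standard_normal((m, n)) / np.sqrt(m)               # random sensing matrix
y = Phi @ x                                                  # m << n measurements

# Orthogonal Matching Pursuit: greedily pick the column most correlated
# with the residual, then least-squares refit on the selected support.
support, r = [], y.copy()
for _ in range(k):
    support.append(int(np.argmax(np.abs(Phi.T @ r))))
    coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
    r = y - Phi[:, support] @ coef

x_hat = np.zeros(n)
x_hat[support] = coef
print("recovery error:", np.linalg.norm(x - x_hat))
```

The sensor-side work is a single matrix-vector product (or its analog equivalent); all the expensive computation sits at the receiver, which is exactly the asymmetry that makes CS attractive for wearables.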

  6. Generalized massive optimal data compression

    Alsing, Justin; Wandelt, Benjamin

    2018-05-01

    In this paper, we provide a general procedure for optimally compressing N data down to n summary statistics, where n is equal to the number of parameters of interest. We show that compression to the score function - the gradient of the log-likelihood with respect to the parameters - yields n compressed statistics that are optimal in the sense that they preserve the Fisher information content of the data. Our method generalizes earlier work on linear Karhunen-Loève compression for Gaussian data whilst recovering both lossless linear compression and quadratic estimation as special cases when they are optimal. We give a unified treatment that also includes the general non-Gaussian case as long as mild regularity conditions are satisfied, producing optimal non-linear summary statistics when appropriate. As a worked example, we derive explicitly the n optimal compressed statistics for Gaussian data in the general case where both the mean and covariance depend on the parameters.
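
As a concrete instance of the score-compression recipe, for i.i.d. Gaussian data with unknown mean and variance the N data points compress to the two components of the score evaluated at fiducial parameters (a minimal numerical sketch in the spirit of the paper's worked example; the fiducial values and sample are invented):

```python
import numpy as np

# Score compression for x_i ~ N(mu, var): N data points -> 2 summaries,
# the gradient of the log-likelihood at fiducial parameters (mu0, var0).
rng = np.random.default_rng(2)
mu0, var0 = 0.0, 1.0
x = rng.normal(0.3, 1.0, size=10_000)    # sample drawn with true mu = 0.3

t_mu = np.sum(x - mu0) / var0
t_var = -len(x) / (2 * var0) + np.sum((x - mu0) ** 2) / (2 * var0 ** 2)

# For the mean, the score is proportional to the sufficient statistic:
print("score-based mean estimate:", mu0 + var0 * t_mu / len(x))
```

The two numbers (t_mu, t_var) carry the same Fisher information about (mu, var) as the full sample, which is the sense of optimality claimed above.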

  7. Are trees long-lived?

    Kevin T. Smith

    2009-01-01

    Trees and tree care can capture the best of people's motivations and intentions. Trees are living memorials that help communities heal at sites of national tragedy, such as Oklahoma City and the World Trade Center. We mark the places of important historical events by the trees that grew nearby even if the original tree, such as the Charter Oak in Connecticut or...

  8. MEDICAL IMAGE COMPRESSION USING HYBRID CODER WITH FUZZY EDGE DETECTION

    K. Vidhya

    2011-02-01

    Medical imaging techniques produce prohibitive amounts of digitized clinical data. Compression of medical images is a must due to the large memory space required for transmission and storage. This paper presents an effective algorithm to compress and to reconstruct medical images. The proposed algorithm first extracts edge information of medical images by using a fuzzy edge detector. The images are decomposed using the Cohen-Daubechies-Feauveau (CDF) wavelet. The hybrid technique utilizes efficient wavelet-based compression algorithms such as JPEG2000 and Set Partitioning In Hierarchical Trees (SPIHT). The wavelet coefficients in the approximation sub-band are encoded using the tier 1 part of JPEG2000. The wavelet coefficients in the detailed sub-bands are encoded using SPIHT. Consistent quality images are produced by this method at a lower bit rate compared to other standard compression algorithms. The two main approaches to assessing image quality are objective testing and subjective testing. Here the image quality is evaluated by objective quality measures, which correlate well with the perceived image quality for the proposed compression algorithm.

  9. Introduction to compressible fluid flow

    Oosthuizen, Patrick H

    2013-01-01

    Contents: Introduction; The Equations of Steady One-Dimensional Compressible Flow; Some Fundamental Aspects of Compressible Flow; One-Dimensional Isentropic Flow; Normal Shock Waves; Oblique Shock Waves; Expansion Waves - Prandtl-Meyer Flow; Variable Area Flows; Adiabatic Flow with Friction; Flow with Heat Transfer; Linearized Analysis of Two-Dimensional Compressible Flows; Hypersonic and High-Temperature Flows; High-Temperature Gas Effects; Low-Density Flows; Bibliography; Appendices

  10. Mammographic compression in Asian women.

    Lau, Susie; Abdul Aziz, Yang Faridah; Ng, Kwan Hoong

    2017-01-01

    To investigate: (1) the variability of mammographic compression parameters amongst Asian women; and (2) the effects of reducing compression force on image quality and mean glandular dose (MGD) in Asian women based on a phantom study. We retrospectively collected 15818 raw digital mammograms from 3772 Asian women aged 35-80 years who underwent screening or diagnostic mammography between Jan 2012 and Dec 2014 at our center. The mammograms were processed using a volumetric breast density (VBD) measurement software (Volpara) to assess compression force, compression pressure, compressed breast thickness (CBT), breast volume, VBD and MGD against breast contact area. The effects of reducing compression force on image quality and MGD were also evaluated based on measurements obtained from 105 Asian women, as well as using the RMI156 Mammographic Accreditation Phantom and polymethyl methacrylate (PMMA) slabs. Compression force, compression pressure, CBT, breast volume, VBD and MGD correlated significantly with breast contact area in Asian women. The median compression force should be about 8.1 daN compared to the current 12.0 daN. Decreasing compression force from 12.0 daN to 9.0 daN increased CBT by 3.3±1.4 mm, increased MGD by 6.2-11.0%, and caused no significant effects on image quality (p>0.05). The force-standardized protocol led to widely variable compression parameters in Asian women. Based on the phantom study, it is feasible to reduce compression force by up to 32.5% with minimal effects on image quality and MGD.

  11. TreeBASIS Feature Descriptor and Its Hardware Implementation

    Spencer Fowers

    2014-01-01

    This paper presents a novel feature descriptor called TreeBASIS that provides improvements in descriptor size, computation time, matching speed, and accuracy. The descriptor uses a binary vocabulary tree computed from basis dictionary images and a test set of feature region images. To facilitate real-time implementation, a feature region image is binary quantized, and the resulting quantized vector is passed into the BASIS vocabulary tree. A Hamming distance is then computed between the feature region image and the basis dictionary image at each node to determine the branch taken, and the path the feature region image takes is saved as a descriptor. The TreeBASIS feature descriptor is an excellent candidate for hardware implementation because of its reduced descriptor size and because descriptors can be created and features matched without floating-point operations. The TreeBASIS descriptor is more computationally and space efficient than other descriptors such as BASIS, SIFT, and SURF. Moreover, it can be computed entirely in hardware without the support of a CPU for additional software-based computations. Experimental results and a hardware implementation show that the TreeBASIS descriptor compares well with other descriptors for frame-to-frame homography computation while requiring fewer hardware resources.
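
    The routing step described above can be sketched as follows. This is a hedged illustration, not the authors' implementation: the helper names (`binarize`, `describe`), the toy tree, and the split threshold are all invented for the example.

```python
# Hypothetical sketch of a TreeBASIS-style descriptor: a feature patch is
# binary-quantized, then routed down a vocabulary tree by comparing Hamming
# distance to the basis image stored at each node; the path taken is the
# descriptor. All names and thresholds here are illustrative assumptions.
from dataclasses import dataclass
from typing import List, Optional

def binarize(patch: List[float], threshold: float = 0.5) -> List[int]:
    """Binary-quantize a normalized feature patch."""
    return [1 if v > threshold else 0 for v in patch]

def hamming(a: List[int], b: List[int]) -> int:
    return sum(x != y for x, y in zip(a, b))

@dataclass
class Node:
    basis: List[int]                  # binarized basis dictionary image
    left: Optional["Node"] = None     # taken when Hamming distance is small
    right: Optional["Node"] = None    # taken otherwise

def describe(patch: List[float], root: Node, split: int) -> List[int]:
    """Return the root-to-leaf path (one bit per level) as the descriptor."""
    q = binarize(patch)
    path, node = [], root
    while node is not None and (node.left or node.right):
        bit = 0 if hamming(q, node.basis) <= split else 1
        path.append(bit)
        node = node.left if bit == 0 else node.right
    return path
```

    Because descriptors are bit paths, matching reduces to comparing short binary strings, which is why no floating-point hardware is needed.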

  12. Fragmentation of random trees

    Kalay, Z; Ben-Naim, E

    2015-01-01

    We study fragmentation of a random recursive tree into a forest by repeated removal of nodes. The initial tree consists of N nodes and it is generated by sequential addition of nodes with each new node attaching to a randomly-selected existing node. As nodes are removed from the tree, one at a time, the tree dissolves into an ensemble of separate trees, namely, a forest. We study statistical properties of trees and nodes in this heterogeneous forest, and find that the fraction of remaining nodes m characterizes the system in the limit N→∞. We obtain analytically the size density ϕ_s of trees of size s. The size density has a power-law tail ϕ_s ∼ s^(−α) with exponent α = 1 + (1/m). Therefore, the tail becomes steeper as further nodes are removed, and the fragmentation process is unusual in that exponent α increases continuously with time. We also extend our analysis to the case where nodes are added as well as removed, and obtain the asymptotic size density for growing trees. (paper)
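
    The fragmentation process lends itself to direct simulation. The sketch below is invented for illustration (it is not the paper's analytical treatment): it grows a random recursive tree and computes the surviving tree sizes after a set of nodes is removed.

```python
# Simulation sketch: grow a random recursive tree of n nodes, delete some
# nodes (and their incident edges), and measure the resulting forest.
import random

def random_recursive_tree(n, rng):
    """parent[i] is the attachment point of node i; node 0 is the root."""
    parent = {0: None}
    for i in range(1, n):
        parent[i] = rng.randrange(i)   # attach to a uniformly random earlier node
    return parent

def forest_sizes(parent, removed):
    """Sizes of the trees left after deleting the nodes in `removed`."""
    alive = [v for v in parent if v not in removed]
    root = {v: v for v in alive}       # union-find over surviving parent links
    def find(v):
        while root[v] != v:
            root[v] = root[root[v]]    # path halving
            v = root[v]
        return v
    for v in alive:
        p = parent[v]
        if p is not None and p not in removed:
            root[find(v)] = find(p)
    sizes = {}
    for v in alive:
        r = find(v)
        sizes[r] = sizes.get(r, 0) + 1
    return sorted(sizes.values(), reverse=True)
```

    Repeating this for large n and tabulating the size histogram should reproduce the predicted heavy tail, which steepens as more nodes are removed.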

  13. The tree BVOC index

    J.R. Simpson; E.G. McPherson

    2011-01-01

    Urban trees can produce a number of benefits, among them improved air quality. Biogenic volatile organic compounds (BVOCs) emitted by some species are ozone precursors. Modifying future tree planting to favor lower-emitting species can reduce these emissions and aid air management districts in meeting federally mandated emissions reductions for these compounds. Changes...

  14. Flowering Trees

    Adansonia digitata L. ( The Baobab Tree) of Bombacaceae is a tree with swollen trunk that attains a dia. of 10m. Leaves are digitately compound with leaflets up to 18cm. long. Flowers are large, solitary, waxy white, and open at dusk. They open in 30 seconds and are bat pollinated. Stamens are many. Fruit is about 30 cm ...

  15. Fault tree graphics

    Bass, L.; Wynholds, H.W.; Porterfield, W.R.

    1975-01-01

    Described is an operational system that enables the user, through an intelligent graphics terminal, to construct, modify, analyze, and store fault trees. With this system, complex engineering designs can be analyzed. This paper discusses the system and its capabilities. Included is a brief discussion of fault tree analysis, which represents an aspect of reliability and safety modeling

  16. Tree biology and dendrochemistry

    Kevin T. Smith; Walter C. Shortle

    1996-01-01

    Dendrochemistry, the interpretation of elemental analysis of dated tree rings, can provide a temporal record of environmental change. Using the dendrochemical record requires an understanding of tree biology. In this review, we pose four questions concerning assumptions that underlie recent dendrochemical research: 1) Does the chemical composition of the wood directly...

  17. Individual tree control

    Harvey A. Holt

    1989-01-01

    Controlling individual unwanted trees in forest stands is a readily accepted method for improving the value of future harvests. The practice is especially important in mixed hardwood forests where species differ considerably in value and within species individual trees differ in quality. Individual stem control is a mechanical or chemical weeding operation that...

  18. Trees and Climate Change

    Dettenmaier, Megan; Kuhns, Michael; Unger, Bethany; McAvoy, Darren

    2017-01-01

    This fact sheet describes the complex relationship between forests and climate change based on current research. It explains ways that trees can mitigate some of the risks associated with climate change. It details the impacts that forests are having on the changing climate and discusses specific ways that trees can be used to reduce or counter carbon emissions directly and indirectly.

  19. Structural Equation Model Trees

    Brandmaier, Andreas M.; von Oertzen, Timo; McArdle, John J.; Lindenberger, Ulman

    2013-01-01

    In the behavioral and social sciences, structural equation models (SEMs) have become widely accepted as a modeling tool for the relation between latent and observed variables. SEMs can be seen as a unification of several multivariate analysis techniques. SEM Trees combine the strengths of SEMs and the decision tree paradigm by building tree…

  20. Matching Subsequences in Trees

    Bille, Philip; Gørtz, Inge Li

    2009-01-01

    Given two rooted, labeled trees P and T the tree path subsequence problem is to determine which paths in P are subsequences of which paths in T. Here a path begins at the root and ends at a leaf. In this paper we propose this problem as a useful query primitive for XML data, and provide new...

  1. Shock compression profiles in ceramics

    Grady, D.E.; Moody, R.L.

    1996-03-01

    An investigation of the shock compression properties of high-strength ceramics has been performed using controlled planar impact techniques. In a typical experimental configuration, a ceramic target disc is held stationary, and it is struck by plates of either a similar ceramic or by plates of a well-characterized metal. All tests were performed using either a single-stage propellant gun or a two-stage light-gas gun. Particle velocity histories were measured with laser velocity interferometry (VISAR) at the interface between the back of the target ceramic and a calibrated VISAR window material. Peak impact stresses achieved in these experiments range from about 3 to 70 GPa. Ceramics tested under shock impact loading include: Al₂O₃, AlN, B₄C, SiC, Si₃N₄, TiB₂, WC and ZrO₂. This report compiles the VISAR wave profiles and experimental impact parameters within a database useful for response model development, computational model validation studies, and independent assessment of the physics of dynamic deformation in high-strength, brittle solids.

  2. IND - THE IND DECISION TREE PACKAGE

    Buntine, W.

    1994-01-01

    A common approach to supervised classification and prediction in artificial intelligence and statistical pattern recognition is the use of decision trees. A tree is "grown" from data using a recursive partitioning algorithm to create a tree which has good prediction of classes on new data. Standard algorithms are CART (by Breiman, Friedman, Olshen and Stone) and ID3 and its successor C4 (by Quinlan). As well as reimplementing parts of these algorithms and offering experimental control suites, IND also introduces Bayesian and MML methods and more sophisticated search in growing trees. These produce more accurate class probability estimates that are important in applications like diagnosis. IND is applicable to most data sets consisting of independent instances, each described by a fixed-length vector of attribute values. An attribute value may be a number, one of a set of attribute-specific symbols, or it may be omitted. One of the attributes is designated the "target" and IND grows trees to predict the target. Prediction can then be done on new data or the decision tree printed out for inspection. IND provides a range of features and styles with convenience for the casual user as well as fine-tuning for the advanced user or those interested in research. IND can be operated in a CART-like mode (but without regression trees, surrogate splits or multivariate splits), and in a mode like the early version of C4. Advanced features allow more extensive search, interactive control and display of tree growing, and Bayesian and MML algorithms for tree pruning and smoothing. These often produce more accurate class probability estimates at the leaves. IND also comes with a comprehensive experimental control suite. IND consists of four basic kinds of routines: data manipulation routines, tree generation routines, tree testing routines, and tree display routines. The data manipulation routines are used to partition a single large data set into smaller training and test sets.
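
    The recursive-partitioning loop that IND and its relatives share can be illustrated with a minimal ID3-style sketch. This is a generic illustration, not IND's code; the function names and the toy data are invented, and real packages add pruning, smoothing, and missing-value handling.

```python
# Minimal recursive partitioning in the spirit of ID3/CART: repeatedly split
# on the attribute with the largest information gain until leaves are pure.
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def grow(rows, labels, attrs):
    """rows: list of dicts. Returns a nested (attr, {value: subtree}) tree."""
    if len(set(labels)) == 1:
        return labels[0]                       # pure leaf
    if not attrs:
        return Counter(labels).most_common(1)[0][0]   # majority leaf
    def gain(a):
        rem = 0.0
        for v in set(r[a] for r in rows):
            sub = [l for r, l in zip(rows, labels) if r[a] == v]
            rem += len(sub) / len(labels) * entropy(sub)
        return entropy(labels) - rem
    best = max(attrs, key=gain)
    branches = {}
    for v in set(r[best] for r in rows):
        sub_rows = [r for r in rows if r[best] == v]
        sub_labels = [l for r, l in zip(rows, labels) if r[best] == v]
        branches[v] = grow(sub_rows, sub_labels, [a for a in attrs if a != best])
    return (best, branches)

def predict(tree, row):
    while isinstance(tree, tuple):
        attr, branches = tree
        tree = branches[row[attr]]
    return tree
```

    Bayesian and MML variants differ mainly in the split score and in how leaf class probabilities are smoothed, not in this overall recursion.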

  3. Algorithmic fault tree construction by component-based system modeling

    Majdara, Aref; Wakabayashi, Toshio

    2008-01-01

    Computer-aided fault tree generation can be easier, faster and less vulnerable to errors than conventional manual fault tree construction. In this paper, a new approach for algorithmic fault tree generation is presented. The method mainly consists of a component-based system modeling procedure and a trace-back algorithm for fault tree synthesis. Components, as the building blocks of systems, are modeled using function tables and state transition tables. The proposed method can be used for a wide range of systems with various kinds of components, if an inclusive component database is developed. (author)
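
    The trace-back idea can be sketched with a deliberately simplified component model. This is an invented illustration: real function tables and state transition tables carry far more detail than the single "who feeds whom" relation used here.

```python
# Toy trace-back fault tree synthesis: each component's loss-of-output event
# is an OR of its own internal failure and loss of output from each of the
# components feeding it. Component names and the model format are invented.
def build_fault_tree(component, models):
    """Return a nested OR/basic-event tree for 'no output from component'."""
    inputs = models[component]            # upstream components feeding this one
    event = {"basic": f"{component} fails"}
    if not inputs:
        return event
    return {"OR": [event] + [build_fault_tree(c, models) for c in inputs]}

# A toy water-supply chain: the pump draws from a tank, the valve is fed
# by the pump.
models = {"tank": [], "pump": ["tank"], "valve": ["pump"]}
tree = build_fault_tree("valve", models)
```

    A full implementation would also consult component state tables to distinguish failure modes (stuck open, stuck closed, etc.) instead of a single basic event per component.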

  4. Adiabatic compression and radiative compression of magnetic fields

    Woods, C.H.

    1980-01-01

    Flux is conserved during mechanical compression of magnetic fields for both nonrelativistic and relativistic compressors. However, the relativistic compressor generates radiation, which can carry up to twice the energy content of the magnetic field compressed adiabatically. The radiation may be either confined or allowed to escape

  5. Environmental tritium in trees

    Brown, R.M.

    1979-01-01

    The distribution of environmental tritium in the free water and organically bound hydrogen of trees growing in the vicinity of the Chalk River Nuclear Laboratories (CRNL) has been studied. The regional dispersal of HTO in the atmosphere has been observed by surveying the tritium content of leaf moisture. Measurement of the distribution of organically bound tritium in the wood of tree ring sequences has given information on past concentrations of HTO taken up by trees growing in the CRNL Liquid Waste Disposal Area. For samples at background environmental levels, cellulose separation and analysis was done. The pattern of bomb tritium in precipitation of 1955-68 was observed to be preserved in the organically bound tritium of a tree ring sequence. Reactor tritium was discernible in a tree growing at a distance of 10 km from CRNL. These techniques provide convenient means of monitoring dispersal of HTO from nuclear facilities. (author)

  6. Generalising tree traversals and tree transformations to DAGs

    Bahr, Patrick; Axelsson, Emil

    2017-01-01

    We present a recursion scheme based on attribute grammars that can be transparently applied to trees and acyclic graphs. Our recursion scheme allows the programmer to implement a tree traversal or a tree transformation and then apply it to compact graph representations of trees instead ... as the complementing theory with a number of examples.
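
    The payoff of running a tree traversal over a shared graph representation can be shown with a small sketch. This is not the paper's attribute-grammar machinery; it only illustrates, with invented names, why a bottom-up traversal over a DAG visits each unique node once.

```python
# Hash-cons a tree into a DAG (structurally equal subtrees become the same
# object), then run a bottom-up fold that memoizes on node identity, so the
# work is proportional to the DAG size, not the unfolded tree size.
class Node:
    _pool = {}
    def __new__(cls, label, *children):
        key = (label,) + tuple(id(c) for c in children)
        if key not in cls._pool:              # hash-consing: share equal subtrees
            self = super().__new__(cls)
            self.label, self.children = label, children
            cls._pool[key] = self
        return cls._pool[key]

def fold(node, f, memo=None):
    """Bottom-up tree traversal computing each shared node exactly once."""
    memo = {} if memo is None else memo
    if id(node) not in memo:
        memo[id(node)] = f(node.label, [fold(c, f, memo) for c in node.children])
    return memo[id(node)]

leaf = Node("x")
small = Node("+", leaf, leaf)
big = Node("+", small, small)   # 7 logical tree nodes, only 3 unique ones
size = fold(big, lambda _, kids: 1 + sum(kids))
```

    The fold still *answers* questions about the logical tree (here, `size` is 7), while touching only the 3 unique nodes.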

  7. Acceptable levels of digital image compression in chest radiology

    Smith, I.

    2000-01-01

    The introduction of picture archiving and communication systems (PACS) and teleradiology has prompted an examination of techniques that optimize the storage capacity and speed of digital storage and distribution networks. The general acceptance of the move to replace conventional screen-film capture with computed radiography (CR) is an indication that clinicians within the radiology community are willing to accept images that have been 'compressed'. The question to be answered, therefore, is what level of compression is acceptable. The purpose of the present study is to provide an assessment of the ability of a group of imaging professionals to determine whether an image has been compressed. To undertake this study a single mobile chest image, selected for the presence of some subtle pathology in the form of a number of septal lines in both costophrenic angles, was compressed to levels of 10:1, 20:1 and 30:1. These images were randomly ordered and shown to the observers for interpretation. Analysis of the responses indicates that in general it was not possible to distinguish the original image from its compressed counterparts. Furthermore, a preference appeared to be shown for images that have undergone low levels of compression. This preference can most likely be attributed to the 'de-noising' effect of the compression algorithm at low levels. Copyright (1999) Blackwell Science Pty. Ltd

  8. Vacancy behavior in a compressed fcc Lennard-Jones crystal

    Beeler, J.R. Jr.

    1981-12-01

    This computer experiment study concerns the determination of the stable vacancy configuration in a compressed fcc Lennard-Jones crystal and the migration of this defect in a compressed crystal. Isotropic and uniaxial compression stress conditions were studied. The isotropic and uniaxial compression magnitudes employed were 0.94 ≤ η ≤ 1.5 and 1.0 ≤ η ≤ 1.5, respectively. The site-centered vacancy (SCV) was the stable vacancy configuration whenever cubic symmetry was present. This includes all of the isotropic compression cases and the particular uniaxial compression case (η = √2) that gives a bcc structure. In addition, the SCV was the stable configuration for uniaxial compression η < 1.29. For η > 1.20, the SV-OP is an extended defect and, therefore, a saddle point for SV-OP migration could not be determined. The mechanism for the transformation from the SCV to the SV-OP as the stable form at η = 1.29 appears to be an alternating-sign [101] and/or [011] shear process

  9. Parallel Algorithm for Wireless Data Compression and Encryption

    Qin Jiancheng

    2017-01-01

    As the wireless network has limited bandwidth and insecure shared media, data compression and encryption are very useful for the broadcast transportation of big data in IoT (Internet of Things). However, traditional techniques of compression and encryption are neither competent nor efficient. In order to solve this problem, this paper presents a combined parallel algorithm named "CZ algorithm" which can compress and encrypt big data efficiently. The CZ algorithm uses a parallel pipeline, mixes the coding of compression and encryption, and supports a data window up to 1 TB (or larger). Moreover, the CZ algorithm can encrypt big data as a chaotic cryptosystem without decreasing the compression speed. Meanwhile, a shareware named "ComZip" has been developed based on the CZ algorithm. The experiment results show that ComZip on a 64-bit system can achieve a better compression ratio than WinRAR and 7-zip, and it can be faster than 7-zip in big data compression. In addition, ComZip encrypts big data without extra consumption of computing resources.
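
    As a rough illustration of combining the two stages, the sketch below simply chains compression with a toy XOR keystream. This is not the CZ algorithm (which mixes the coding of the two stages) and is not secure cryptography; it only shows why compressing before encrypting preserves the compression ratio.

```python
# Toy compress-then-encrypt pipeline: zlib compression followed by XOR with
# a SHA-256-based counter keystream. Illustrative only, not production crypto.
import hashlib
import zlib

def keystream(key: bytes, n: int) -> bytes:
    out = bytearray()
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:n])

def compress_encrypt(data: bytes, key: bytes) -> bytes:
    packed = zlib.compress(data, level=9)          # compress first...
    ks = keystream(key, len(packed))
    return bytes(a ^ b for a, b in zip(packed, ks))  # ...then encrypt

def decrypt_decompress(blob: bytes, key: bytes) -> bytes:
    ks = keystream(key, len(blob))
    packed = bytes(a ^ b for a, b in zip(blob, ks))
    return zlib.decompress(packed)
```

    Encrypting first would destroy the redundancy the compressor needs, which is why combined schemes always place (or interleave) compression ahead of encryption.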

  10. Efficient traveltime compression for 3D prestack Kirchhoff migration

    Alkhalifah, Tariq

    2010-12-13

    Kirchhoff 3D prestack migration, as part of its execution, usually requires repeated access to a large traveltime table database. Access to this database implies either a memory-intensive or an I/O-bound solution to the storage problem. Proper compression of the traveltime table allows efficient 3D prestack migration without relying on the usually slow access to the computer hard drive. Such compression also allows for faster access to desirable parts of the traveltime table. Compression is applied to the traveltime field for each source location on the surface on a regular grid using 3D Chebyshev polynomial or cosine transforms of the traveltime field represented in the spherical coordinates or the Celerity domain. We obtain practical compression levels up to and exceeding 20 to 1. In fact, because of the smaller-size traveltime table, we obtain exceptional traveltime extraction speed during migration that exceeds conventional methods. Additional features of the compression include better interpolation of traveltime tables and more stable estimates of amplitudes from traveltime curvatures. Further compression is achieved using bit encoding, by representing compression parameter values with fewer bits. © 2010 European Association of Geoscientists & Engineers.
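
    A minimal cosine-transform compression of a smooth 2-D field can be sketched as follows. This is illustrative only: the paper's scheme works on 3-D traveltime fields in spherical or celerity coordinates and adds bit encoding, while this toy keeps only the largest 2-D DCT coefficients.

```python
# Transform-domain compression sketch: take an orthonormal 2-D DCT of a
# smooth field, keep the `keep` largest-magnitude coefficients, invert.
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix (rows = frequencies)."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    C = np.cos(np.pi * (m + 0.5) * k / n) * np.sqrt(2.0 / n)
    C[0] /= np.sqrt(2.0)
    return C

def compress(field, keep):
    """Zero all but the `keep` largest-magnitude 2-D DCT coefficients."""
    Cr = dct_matrix(field.shape[0])
    Cc = dct_matrix(field.shape[1])
    coeff = Cr @ field @ Cc.T
    flat = coeff.ravel()
    flat[np.argsort(np.abs(flat))[:-keep]] = 0.0
    return coeff

def decompress(coeff):
    Cr = dct_matrix(coeff.shape[0])
    Cc = dct_matrix(coeff.shape[1])
    return Cr.T @ coeff @ Cc       # orthonormal, so the transpose inverts
```

    Because traveltime fields are smooth, their transform coefficients decay quickly, which is what makes 20:1 compression levels plausible.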

  11. Waves and compressible flow

    Ockendon, Hilary

    2016-01-01

    Now in its second edition, this book continues to give readers a broad mathematical basis for modelling and understanding the wide range of wave phenomena encountered in modern applications.  New and expanded material includes topics such as elastoplastic waves and waves in plasmas, as well as new exercises.  Comprehensive collections of models are used to illustrate the underpinning mathematical methodologies, which include the basic ideas of the relevant partial differential equations, characteristics, ray theory, asymptotic analysis, dispersion, shock waves, and weak solutions. Although the main focus is on compressible fluid flow, the authors show how intimately gasdynamic waves are related to wave phenomena in many other areas of physical science.   Special emphasis is placed on the development of physical intuition to supplement and reinforce analytical thinking. Each chapter includes a complete set of carefully prepared exercises, making this a suitable textbook for students in applied mathematics, ...

  12. Tree-Based Unrooted Phylogenetic Networks.

    Francis, A; Huber, K T; Moulton, V

    2018-02-01

    Phylogenetic networks are a generalization of phylogenetic trees that are used to represent non-tree-like evolutionary histories that arise in organisms such as plants and bacteria, or uncertainty in evolutionary histories. An unrooted phylogenetic network on a non-empty, finite set X of taxa, or network, is a connected, simple graph in which every vertex has degree 1 or 3 and whose leaf set is X. It is called a phylogenetic tree if the underlying graph is a tree. In this paper we consider properties of tree-based networks, that is, networks that can be constructed by adding edges into a phylogenetic tree. We show that although they have some properties in common with their rooted analogues which have recently drawn much attention in the literature, they have some striking differences in terms of both their structural and computational properties. We expect that our results could eventually have applications to, for example, detecting horizontal gene transfer or hybridization which are important factors in the evolution of many organisms.

  13. Irreversible JPEG 2000 compression of abdominal CT for primary interpretation: assessment of visually lossless threshold

    Lee, Kyoung Ho; Kim, Young Hoon; Kim, Bo Hyoung; Kim, Kil Joong; Kim, Tae Jung; Kim, Hyuk Jung; Hahn, Seokyung

    2007-01-01

    To estimate the visually lossless threshold for Joint Photographic Experts Group (JPEG) 2000 compression of contrast-enhanced abdominal computed tomography (CT) images, 100 images were compressed to four different levels: reversible (as a negative control) and irreversible 5:1, 10:1, and 15:1. By alternately displaying the original and the compressed image on the same monitor, six radiologists independently determined if the compressed image was distinguishable from the original image. For each reader, we compared the proportion of the compressed images being rated distinguishable from the original images between the reversible compression and each of the three irreversible compressions using the exact test for paired proportions. For each reader, the proportion was not significantly different between the reversible (0-1%, 0/100 to 1/100) and irreversible 5:1 compression (0-3%). However, the proportion significantly increased with the irreversible 10:1 (95-99%) and 15:1 compressions (100%) versus reversible compression in all readers (P < 0.001); 100 and 95% of the 5:1 compressed images were rated indistinguishable from the original images by at least five of the six readers and all readers, respectively. Irreversibly 5:1 compressed abdominal CT images are visually lossless and, therefore, potentially acceptable for primary interpretation. (orig.)

  14. Investigation on wind energy-compressed air power system.

    Jia, Guang-Zheng; Wang, Xuan-Yin; Wu, Gen-Mao

    2004-03-01

    Wind energy is a pollution-free and renewable resource widely distributed over China. Aimed at protecting the environment and enlarging the application of wind energy, a new approach to applying wind energy, using compressed air power to some extent instead of electricity, is put forward. This includes: explaining the working principles and characteristics of the wind energy-compressed air power system; discussing the compatibility of wind energy and compressor capacity; and presenting the theoretical model and computational simulation of the system. The obtained compressor capacity vs wind power relationship within a certain wind velocity range can be helpful in designing the wind power-compressed air system. Results of investigations on the application of high-pressure compressed air for pressure reduction led to the conclusion that pressure reduction with an expander is better than a throttle regulator in terms of energy saving.
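
    The compressor-capacity matching can be roughed out with standard textbook formulas. This is a back-of-envelope sketch, not the paper's simulation model: the air density, power coefficient, and ideal isothermal compression are typical assumptions.

```python
# Back-of-envelope matching of a wind rotor to a compressor:
#   rotor power       P = 0.5 * rho * A * v^3 * Cp
#   isothermal work   w = R * T * ln(p2 / p1)  (J per kg of air, ideal case)
import math

def rotor_power(v, radius, rho=1.225, cp=0.4):
    """Mechanical power (W) at wind speed v (m/s) for a rotor of given radius (m)."""
    area = math.pi * radius ** 2
    return 0.5 * rho * area * v ** 3 * cp

def isothermal_air_flow(power, p_ratio, T=293.15, R=287.0):
    """Air mass flow (kg/s) an ideal isothermal compressor delivers at that power."""
    work_per_kg = R * T * math.log(p_ratio)
    return power / work_per_kg
```

    For example, a 5 m rotor in an 8 m/s wind yields roughly 10 kW, which could ideally drive about 0.06 kg/s of air to an 8:1 pressure ratio; real compressor efficiencies would reduce this.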

  15. Ghost-tree: creating hybrid-gene phylogenetic trees for diversity analyses.

    Fouquier, Jennifer; Rideout, Jai Ram; Bolyen, Evan; Chase, John; Shiffer, Arron; McDonald, Daniel; Knight, Rob; Caporaso, J Gregory; Kelley, Scott T

    2016-02-24

    Fungi play critical roles in many ecosystems, cause serious diseases in plants and animals, and pose significant threats to human health and structural integrity problems in built environments. While most fungal diversity remains unknown, the development of PCR primers for the internal transcribed spacer (ITS) combined with next-generation sequencing has substantially improved our ability to profile fungal microbial diversity. Although the high sequence variability in the ITS region facilitates more accurate species identification, it also makes multiple sequence alignment and phylogenetic analysis unreliable across evolutionarily distant fungi because the sequences are hard to align accurately. To address this issue, we created ghost-tree, a bioinformatics tool that integrates sequence data from two genetic markers into a single phylogenetic tree that can be used for diversity analyses. Our approach starts with a "foundation" phylogeny based on one genetic marker whose sequences can be aligned across organisms spanning divergent taxonomic groups (e.g., fungal families). Then, "extension" phylogenies are built for more closely related organisms (e.g., fungal species or strains) using a second more rapidly evolving genetic marker. These smaller phylogenies are then grafted onto the foundation tree by mapping taxonomic names such that each corresponding foundation-tree tip would branch into its new "extension tree" child. We applied ghost-tree to graft fungal extension phylogenies derived from ITS sequences onto a foundation phylogeny derived from fungal 18S sequences. Our analysis of simulated and real fungal ITS data sets found that phylogenetic distances between fungal communities computed using ghost-tree phylogenies explained significantly more variance than non-phylogenetic distances. The phylogenetic metrics also improved our ability to distinguish small differences (effect sizes) between microbial communities, though results were similar to non-phylogenetic metrics.
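
    The grafting step can be sketched with nested-dict trees. This is a conceptual toy, not ghost-tree's implementation; the taxa names and the dict-based tree format are invented for the example.

```python
# Toy grafting: replace each tip of a "foundation" tree (slow marker, e.g.
# 18S, resolved to families) with the "extension" subtree for that family
# (fast marker, e.g. ITS, resolved to species).
def graft(foundation, extensions):
    """foundation: nested dict keyed by clade names, tips are strings.
    extensions: maps a foundation tip name to its replacement subtree."""
    if not isinstance(foundation, dict):               # a tip label
        return extensions.get(foundation, foundation)  # graft if we have one
    return {name: graft(child, extensions) for name, child in foundation.items()}

foundation = {"Fungi": {"FamilyA": "FamilyA", "FamilyB": "FamilyB"}}
extensions = {
    "FamilyA": {"sp1": "sp1", "sp2": "sp2"},
    "FamilyB": {"sp3": "sp3"},
}
grafted = graft(foundation, extensions)
```

    Real phylogenies also carry branch lengths, which the grafted hybrid tree must preserve for downstream phylogenetic diversity metrics.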

  16. Statistical Compression for Climate Model Output

    Hammerling, D.; Guinness, J.; Soh, Y. J.

    2017-12-01

    Numerical climate model simulations run at high spatial and temporal resolutions generate massive quantities of data. As our computing capabilities continue to increase, storing all of the data is not sustainable, and thus it is important to develop methods for representing full datasets by smaller compressed versions. We propose a statistical compression and decompression algorithm based on storing a set of summary statistics as well as a statistical model describing the conditional distribution of the full dataset given the summary statistics. We decompress the data by computing conditional expectations and conditional simulations from the model given the summary statistics. Conditional expectations represent our best estimate of the original data but are subject to oversmoothing in space and time. Conditional simulations introduce realistic small-scale noise so that the decompressed fields are neither too smooth nor too rough compared with the original data. Considerable attention is paid to accurately modeling the original dataset (one year of daily mean temperature data), particularly with regard to the inherent spatial nonstationarity in global fields, and to determining the statistics to be stored, so that the variation in the original data can be closely captured, while allowing for fast decompression and conditional emulation on modest computers.
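
    A drastically simplified version of the compress/decompress idea can be sketched as follows. Here per-location means and standard deviations stand in for the paper's full space-time statistical model; the function names and data shapes are invented.

```python
# Toy statistical compression: store two summary statistics per location,
# then decompress either as the conditional expectation (the mean field) or
# as a conditional simulation (mean plus noise at the stored amplitude).
import numpy as np

def compress(data):
    """data: array of shape (days, locations). Keep 2 numbers per location."""
    return data.mean(axis=0), data.std(axis=0)

def decompress(mean, std, days, rng=None):
    """rng=None gives the conditional expectation (oversmoothed);
    passing a Generator adds realistic small-scale noise instead."""
    field = np.tile(mean, (days, 1))
    if rng is not None:
        field = field + rng.standard_normal((days, mean.size)) * std
    return field
```

    The conditional expectation reproduces the stored statistics exactly but is too smooth; the simulation restores realistic day-to-day variability at the cost of not matching the original values pointwise.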

  17. Application specific compression : final report.

    Melgaard, David Kennett; Byrne, Raymond Harry; Myers, Daniel S.; Harrison, Carol D.; Lee, David S.; Lewis, Phillip J.; Carlson, Jeffrey J.

    2008-12-01

    With the continuing development of more capable data gathering sensors comes an increased demand on the bandwidth for transmitting larger quantities of data. To help counteract that trend, a study was undertaken to determine appropriate lossy data compression strategies for minimizing their impact on target detection and characterization. The survey of current compression techniques led us to the conclusion that wavelet compression was well suited for this purpose. Wavelet analysis essentially applies a low-pass and a high-pass filter to the data, converting the data into related coefficients that maintain spatial information as well as frequency information. Wavelet compression is achieved by zeroing the coefficients that pertain to the noise in the signal, i.e. the high-frequency, low-amplitude portion. This approach is well suited for our goal because it reduces the noise in the signal with only minimal impact on the larger, lower-frequency target signatures. The resulting coefficients can then be encoded using lossless techniques with higher compression levels because of the lower entropy and significant number of zeros. No significant signal degradation or difficulties in target characterization or detection were observed or measured when wavelet compression was applied to simulated and real data, even when over 80% of the coefficients were zeroed. While the exact level of compression will be data-set dependent, for the data sets we studied, compression factors over 10 were found to be satisfactory where conventional lossless techniques achieved levels of less than 3.
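
    The zero-the-noise-coefficients idea can be sketched with a one-level Haar transform. This is illustrative only; the study's choice of wavelet family and thresholding rule is not specified here, and the helper names are invented.

```python
# One-level Haar wavelet compression sketch: split the signal into low-pass
# averages and high-pass details, zero the small (noise-level) details, and
# reconstruct. The transform is orthonormal, so inversion is exact.
import numpy as np

def haar_forward(x):
    """Return (low-pass averages, high-pass details) for even-length x."""
    x = np.asarray(x, dtype=float)
    avg = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    det = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return avg, det

def haar_inverse(avg, det):
    x = np.empty(2 * avg.size)
    x[0::2] = (avg + det) / np.sqrt(2.0)
    x[1::2] = (avg - det) / np.sqrt(2.0)
    return x

def denoise_compress(x, threshold):
    avg, det = haar_forward(x)
    det = np.where(np.abs(det) < threshold, 0.0, det)  # zero noise-level details
    return avg, det
```

    The zeroed detail band is exactly what makes the subsequent lossless entropy coding effective: long runs of zeros have very low entropy.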

  18. Compressed Baryonic Matter of Astrophysics

    Guo, Yanjun; Xu, Renxin

    2013-01-01

    Baryonic matter in the core of a massive and evolved star is compressed significantly to form a supra-nuclear object, and compressed baryonic matter (CBM) is then produced after the supernova. The state of cold matter at a few times the nuclear density is pedagogically reviewed, with significant attention paid to a possible quark-cluster state conjectured from an astrophysical point of view.

  19. Streaming Compression of Hexahedral Meshes

    Isenburg, M; Courbet, C

    2010-02-03

    We describe a method for streaming compression of hexahedral meshes. Given an interleaved stream of vertices and hexahedra, our coder incrementally compresses the mesh in the presented order. Our coder is extremely memory efficient when the input stream documents when vertices are referenced for the last time (i.e. when it contains topological finalization tags). Our coder then continuously releases and reuses data structures that no longer contribute to compressing the remainder of the stream. This means in practice that our coder has only a small fraction of the whole mesh in memory at any time. We can therefore compress very large meshes - even meshes that do not fit in memory. Compared to traditional, non-streaming approaches that load the entire mesh and globally reorder it during compression, our algorithm trades a less compact compressed representation for significant gains in speed, memory, and I/O efficiency. For example, on the 456k-hexahedra 'blade' mesh, our coder is twice as fast and uses 88 times less memory (only 3.1 MB), with the compressed file increasing about 3% in size. We also present the first scheme for predictive compression of properties associated with hexahedral cells.
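
    The memory behavior enabled by finalization tags can be illustrated with a toy stream processor. The event format below is invented and no actual encoding is performed; the point is only that resident memory tracks the active front of the stream, not the whole mesh.

```python
# Toy streaming consumer: vertices are held only until the stream's
# finalization tag says they will never be referenced again.
def stream_process(events):
    """events: ('v', id, data) | ('hex', ids) | ('fin', id).
    Returns (number of cells seen, peak number of resident vertices)."""
    resident = {}
    peak = cells = 0
    for ev in events:
        if ev[0] == "v":                 # new vertex arrives
            resident[ev[1]] = ev[2]
            peak = max(peak, len(resident))
        elif ev[0] == "hex":             # a real coder would encode the cell here
            cells += 1
        elif ev[0] == "fin":             # finalization tag: safe to release
            del resident[ev[1]]
    return cells, peak
```

    Without the `fin` events, `resident` would grow to the full vertex count, which is exactly the difference between streaming and load-everything compression.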

  20. Data Compression with Linear Algebra

    Etler, David

    2015-01-01

    A presentation on the applications of linear algebra to image compression. Covers entropy, the discrete cosine transform, thresholding, quantization, and examples of images compressed with DCT. Given in Spring 2015 at Ocean County College as part of the honors program.