WorldWideScience

Sample records for compressed target nouveaux

  1. Magnetic compression/magnetized target fusion (MAGO/MTF)

    International Nuclear Information System (INIS)

    Kirkpatrick, R.C.; Lindemuth, I.R.

    1997-03-01

    Magnetized Target Fusion (MTF) was reported in two papers at the First Symposium on Current Trends in International Fusion Research. MTF is intermediate between two very different mainline approaches to fusion: Inertial Confinement Fusion (ICF) and magnetic confinement fusion (MCF). The only US MTF experiments in which a target plasma was compressed were the Sandia National Laboratory 'Phi target' experiments. Despite the very interesting results from that series of experiments, the research was not pursued, and other embodiments of the MTF concept, such as the Fast Liner, were unable to attract the financial support needed for a firm proof of principle. A mapping of the parameter space for MTF showed the significant features of this approach. The All-Russian Scientific Research Institute of Experimental Physics (VNIIEF) has an ongoing interest in this approach to thermonuclear fusion, and Los Alamos National Laboratory (LANL) and VNIIEF have done joint target plasma generation experiments relevant to MTF, referred to as MAGO (transliteration of the Russian acronym for magnetic compression). The MAGO II experiment appears to have achieved temperatures on the order of 200 eV and fields over 100 kG, so that adiabatic compression with a relatively small convergence could bring the plasma to fusion temperatures. In addition, other experiments are being pursued for target plasma generation and proof of principle. This paper summarizes the previous reports on MTF and MAGO and presents the progress made over the past three years in creating a target plasma that is suitable for compression to provide a scientific proof-of-principle experiment for MAGO/MTF.

  2. Performance of target detection algorithm in compressive sensing miniature ultraspectral imaging compressed sensing system

    Science.gov (United States)

    Gedalin, Daniel; Oiknine, Yaniv; August, Isaac; Blumberg, Dan G.; Rotman, Stanley R.; Stern, Adrian

    2017-04-01

    Compressive sensing theory was proposed to deal with the high quantity of measurements demanded by traditional hyperspectral systems. Recently, a compressive spectral imaging technique dubbed compressive sensing miniature ultraspectral imaging (CS-MUSI) was presented. This system uses a voltage-controlled liquid crystal device to create multiplexed hyperspectral cubes. We evaluate the utility of the data captured using the CS-MUSI system for the task of target detection. Specifically, we compare the performance of the matched filter target detection algorithm on traditional hyperspectral systems and on CS-MUSI multiplexed hyperspectral cubes. We found that the target detection algorithm performs similarly in both cases, despite the fact that the CS-MUSI data volume is up to an order of magnitude smaller than that of conventional hyperspectral cubes. Moreover, the target detection is approximately an order of magnitude faster on CS-MUSI data.
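
    The matched-filter comparison described above can be illustrated with a generic spectral matched filter. The sketch below is not the CS-MUSI processing chain; it is a minimal numpy implementation of the standard matched-filter statistic, and the data cube, target signature, and regularization constant are all assumed for illustration. In a CS-MUSI-style comparison, the same statistic would be evaluated once on a conventional cube and once on the multiplexed measurements, with the target signature mapped through the same sensing operator.

    ```python
    import numpy as np

    def spectral_matched_filter(cube, target_spectrum):
        """Generic spectral matched-filter detector (illustrative sketch).

        cube: (rows, cols, bands) hyperspectral or multiplexed data cube.
        target_spectrum: (bands,) reference signature in the same spectral space.
        Returns a (rows, cols) detection-score map.
        """
        rows, cols, bands = cube.shape
        pixels = cube.reshape(-1, bands).astype(float)

        mu = pixels.mean(axis=0)                    # background mean
        centered = pixels - mu
        cov = centered.T @ centered / len(pixels)   # background covariance
        cov += 1e-6 * np.eye(bands)                 # regularize before inversion

        d = target_spectrum - mu
        w = np.linalg.solve(cov, d)                 # matched-filter weights
        scores = centered @ w / (d @ w)             # normalized MF statistic
        return scores.reshape(rows, cols)
    ```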

  3. Compression of magnetized target in the magneto-inertial fusion

    Science.gov (United States)

    Kuzenov, V. V.

    2017-12-01

    This paper presents a mathematical model, numerical method and results of the computer analysis of the compression process and the energy transfer in the target plasma, used in magneto-inertial fusion. The computer simulation of the compression process of magnetized cylindrical target by high-power laser pulse is presented.

  4. Fast Detection of Compressively Sensed IR Targets Using Stochastically Trained Least Squares and Compressed Quadratic Correlation Filters

    KAUST Repository

    Millikan, Brian; Dutta, Aritra; Sun, Qiyu; Foroosh, Hassan

    2017-01-01

    Target detection of potential threats at night typically relies on a costly, high-resolution infrared focal plane array. Because infrared image patches are compressible, the resolution requirement can be relaxed while preserving target detection capability. For this reason, a compressive midwave infrared (MWIR) imager with a low-resolution focal plane array has been developed. Since the most probable coefficient indices of the support set of the infrared image patches can be learned from training data, we develop stochastically trained least squares (STLS) for MWIR image reconstruction. Quadratic correlation filters (QCF) have been shown to be effective for target detection, and there are several methods for designing such a filter. Using the same measurement matrix as in STLS, we construct a compressed quadratic correlation filter (CQCF) employing filter designs for compressed infrared target detection. We apply the CQCF to the U.S. Army Night Vision and Electronic Sensors Directorate dataset. Numerical simulations show that the recognition performance of our algorithm matches that of standard full-reconstruction methods, but at a fraction of the execution time.
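
    The abstract above couples a quadratic correlation filter to the same measurement matrix used by STLS. The sketch below is not the authors' STLS-coupled CQCF design; it shows one common eigen-based way to build a quadratic filter directly in the compressed domain, and the training patches, measurement matrix phi, and rank are all assumed for illustration.

    ```python
    import numpy as np

    def train_cqcf(target_patches, clutter_patches, phi, rank=8):
        """Illustrative compressed quadratic correlation filter design.

        target_patches, clutter_patches: (n_samples, n_pixels) training patches.
        phi: (m, n_pixels) compressive measurement matrix (the one used for sensing).
        Returns an (m, m) symmetric matrix Q that scores compressed measurements.
        """
        zt = target_patches @ phi.T                # compressed target samples
        zc = clutter_patches @ phi.T               # compressed clutter samples
        rt = zt.T @ zt / len(zt)                   # target correlation matrix
        rc = zc.T @ zc / len(zc)                   # clutter correlation matrix
        vals, vecs = np.linalg.eigh(rt - rc)       # directions favoring target energy
        top = vecs[:, np.argsort(vals)[::-1][:rank]]
        return top @ top.T

    def cqcf_score(y, q):
        """Quadratic detection statistic s = y^T Q y for a compressed measurement y."""
        return float(y @ q @ y)
    ```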

  5. Fast Detection of Compressively Sensed IR Targets Using Stochastically Trained Least Squares and Compressed Quadratic Correlation Filters

    KAUST Repository

    Millikan, Brian

    2017-05-02

    Target detection of potential threats at night typically relies on a costly, high-resolution infrared focal plane array. Because infrared image patches are compressible, the resolution requirement can be relaxed while preserving target detection capability. For this reason, a compressive midwave infrared (MWIR) imager with a low-resolution focal plane array has been developed. Since the most probable coefficient indices of the support set of the infrared image patches can be learned from training data, we develop stochastically trained least squares (STLS) for MWIR image reconstruction. Quadratic correlation filters (QCF) have been shown to be effective for target detection, and there are several methods for designing such a filter. Using the same measurement matrix as in STLS, we construct a compressed quadratic correlation filter (CQCF) employing filter designs for compressed infrared target detection. We apply the CQCF to the U.S. Army Night Vision and Electronic Sensors Directorate dataset. Numerical simulations show that the recognition performance of our algorithm matches that of standard full-reconstruction methods, but at a fraction of the execution time.

  6. Les nouveaux acteurs de la coopération en Afrique

    Directory of Open Access Journals (Sweden)

    Philippe Hugon

    2010-03-01

    Full Text Available In the context of globalization and, today, of the global financial crisis, new actors in cooperation are emerging in Africa. These partners loosen financial constraints and conditionalities, increase room for maneuver, and boost commodity markets, but they also increase the risks of renewed indebtedness and of weak coordination of aid policies. Do these partnerships call into question the new cooperation practices of the OECD countries? Do they justify a return to realpolitik, or do they reproduce the old mistakes of the industrial powers? Can these mistakes be corrected? The question also arises whether the global crisis, which is deeply affecting Africa, will lead to a withdrawal of the new emerging powers or to their taking over. This chapter first identifies Africa's new geopolitical stakes in a multipolar world and then the new actors in cooperation in Africa, before exploring the prospects opening up for cooperation in Africa, particularly in view of the global crisis.

  7. De nouveaux vaccins pour animaux pourraient aider davantage d ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    9 Jul. 2013 ... Marie-Danielle Smith. Canadian and African scientists are developing new vaccines to fight animal diseases and limit economic losses in sub-Saharan Africa. Some of these vaccines could also help fight similar diseases in Canada.

  8. Gestion decentralisee de l'ecole Au benin. Quand de nouveaux ...

    African Journals Online (AJOL)

    When new actors intervene on the educational scene. ... other instruments organizing decentralized educational governance in the Republic of Benin, observations and ...

  9. Novel diagnostics for warm dense matter: application to shock compressed target; Nouveaux diagnostics pour l'etude de la matiere dense et chaude: application aux cibles comprimees par choc laser

    Energy Technology Data Exchange (ETDEWEB)

    Ravasio, A

    2007-03-15

    In this work, we present three novel diagnostics for warm dense matter (WDM) investigations: hard X-ray radiography, proton radiography, and X-ray Thomson scattering. Each of these techniques is applied in shock compression experiments. The main objective is to access a new parameter, in addition to the shock and particle velocities, for EOS (equation of state) measurements. In the first chapter we give a deep description of WDM states as strongly coupled and Fermi-degenerate states. Then, we introduce how a WDM state is generated in our experiments: the shock wave. In particular, we illustrate its formation in the classical laser-matter interaction regime. In the second chapter the principles of standard probing techniques are presented. We see that energetic probe sources are necessary to investigate high-Z dense plasmas. The third chapter is dedicated to the X-ray radiography results. We report a first direct density measurement of a shock-compressed high-Z target using Kα hard X-ray radiation. These results are of great interest as they allow an in-situ characterization of high-Z material, impossible with standard techniques. We show that probing a well-known material such as Al allows the comparison between our data and the results from already validated simulations. In the fourth chapter, we present the results obtained from proton radiography of a low-density carbon foam. The data analysis required the development of a specific Monte Carlo code to simulate proton propagation through the shocked target. The comparison of the simulations with the experimental data shows a weak dependence on density. The fifth chapter is devoted to the X-ray Thomson scattering results. For the first time, we have performed a collective X-ray Thomson scattering measurement on a shock-compressed target, giving access to the electron density and temperature. The obtained results are compared with simulated X-ray scattered spectra. The novel technique is then used in the

  10. comportement des nouveaux riz africains face a la pyriculariose

    African Journals Online (AJOL)

    AISA

    Key words: Rice, NERICA variety, blast, inoculation, resistance, Côte d'Ivoire. INTRODUCTION. The new African rices, or NERICA (New Rice for Africa), are interspecific hybrids (Oryza glaberrima × O. sativa) developed by ADRAO (now known as the Africa Rice Center) in the 1990s.

  11. Colon Targeted Guar Gum Compression Coated Tablets of Flurbiprofen: Formulation, Development, and Pharmacokinetics

    Directory of Open Access Journals (Sweden)

    Sateesh Kumar Vemula

    2013-01-01

    Full Text Available The rationale of the present study is to formulate flurbiprofen colon targeted compression coated tablets using guar gum to improve the therapeutic efficacy by increasing drug levels in the colon, and also to reduce the side effects in the upper gastrointestinal tract. The direct compression method was used to prepare flurbiprofen core tablets, and they were compression coated with guar gum. The tablets were then optimized with the support of in vitro dissolution studies, and this was further confirmed by pharmacokinetic studies. The optimized formulation (F4) showed almost complete drug release in the colon (99.86%) within 24 h without drug loss in the initial lag period of 5 h (only 6.84% drug release was observed during this period). The pharmacokinetic estimations proved the capability of guar gum compression coated tablets to achieve colon targeting. The Cmax of colon targeted tablets was 11956.15 ng/mL at a Tmax of 10 h, whereas it was 15677.52 ng/mL at 3 h in the case of immediate release tablets. The area under the curve for the immediate release and compression coated tablets was 40385.78 and 78214.50 ng·h/mL and the mean residence time was 3.49 and 10.78 h, respectively. In conclusion, the formulation of guar gum compression coated tablets was appropriate for colon targeting of flurbiprofen.

  12. Colon Targeted Guar Gum Compression Coated Tablets of Flurbiprofen: Formulation, Development, and Pharmacokinetics

    Science.gov (United States)

    Bontha, Vijaya Kumar

    2013-01-01

    The rationale of the present study is to formulate flurbiprofen colon targeted compression coated tablets using guar gum to improve the therapeutic efficacy by increasing drug levels in the colon, and also to reduce the side effects in the upper gastrointestinal tract. The direct compression method was used to prepare flurbiprofen core tablets, and they were compression coated with guar gum. The tablets were then optimized with the support of in vitro dissolution studies, and this was further confirmed by pharmacokinetic studies. The optimized formulation (F4) showed almost complete drug release in the colon (99.86%) within 24 h without drug loss in the initial lag period of 5 h (only 6.84% drug release was observed during this period). The pharmacokinetic estimations proved the capability of guar gum compression coated tablets to achieve colon targeting. The Cmax of colon targeted tablets was 11956.15 ng/mL at a Tmax of 10 h, whereas it was 15677.52 ng/mL at 3 h in the case of immediate release tablets. The area under the curve for the immediate release and compression coated tablets was 40385.78 and 78214.50 ng·h/mL and the mean residence time was 3.49 and 10.78 h, respectively. In conclusion, the formulation of guar gum compression coated tablets was appropriate for colon targeting of flurbiprofen. PMID:24260738
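
    The abstract quotes AUC and mean residence time values but not their ratios; a minimal check of the implied exposure gain, using only numbers copied from the text:

    ```python
    # Quick check of the exposure figures quoted in the abstract (values copied from the text).
    auc_immediate = 40385.78   # ng·h/mL, immediate-release tablets
    auc_colon     = 78214.50   # ng·h/mL, guar gum compression-coated tablets
    print(f"AUC ratio (coated / immediate) ≈ {auc_colon / auc_immediate:.2f}")   # ≈ 1.94

    mrt_immediate, mrt_colon = 3.49, 10.78   # h
    print(f"Mean residence time ratio ≈ {mrt_colon / mrt_immediate:.2f}")        # ≈ 3.09
    ```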

  13. Symmetric compression of 'laser greenhouse' targets by a few laser beams

    International Nuclear Information System (INIS)

    Gus'kov, Sergei Yu; Demchenko, N N; Rozanov, Vladislav B; Stepanov, R V; Zmitrenko, N V; Caruso, A; Strangio, C

    2003-01-01

    The possibility of efficient and symmetric compression of a target with a low-density structured absorber by a few laser beams is considered. An equation of state is proposed for a porous medium, which takes into account the special features of the absorption of high-power nanosecond laser pulses. The open version of this target is shown to allow the use of ordinary Gaussian beams, requiring no special profiling of the absorber surface. The conditions are defined under which such targets can be compressed efficiently by only two laser beams (or beam clusters). Simulations show that for a 2.1-MJ laser pulse, a seven-fold gain for the target under study is achieved. (Special issue devoted to the 80th anniversary of Academician N.G. Basov's birth.)

  14. New thermodynamical systems. Alternative of compression-absorption; Nouveaux systemes thermodynamiques. Alternative de la compression-absorption

    Energy Technology Data Exchange (ETDEWEB)

    Feidt, M.; Brunin, O.; Lottin, O.; Vidal, J.F. [Universite Henri Poincare Nancy, 54 - Vandoeuvre-les-Nancy (France); Hivet, B. [Electricite de France, 77 - Moret sur Loing (France)

    1996-12-31

    This paper describes five years of joint research carried out by Electricite de France (EdF) and the ESPE group of the LEMTA on compression-absorption heat pumps. It shows how a thermodynamic model of the machinery, completed with precise exchanger-reactor models, makes it possible to simulate, size, and eventually optimize the system. A low-power prototype has been tested, and the first results are analyzed with the help of the models. A full-scale experiment at industrial sites is planned for the future. (J.S.) 20 refs.

  15. Generation and compression of a target plasma for magnetized target fusion

    International Nuclear Information System (INIS)

    Kirkpatrick, R.C.; Lindemuth, I.R.; Sheehey, P.T.

    1998-01-01

    This is the final report of a three-year, Laboratory Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). Magnetized target fusion (MTF) is intermediate between the two very different approaches to fusion: inertial and magnetic confinement fusion (ICF and MCF). Results from collaboration with a Russian MTF team on their MAGO experiments suggest they have a target plasma suitable for compression to provide an MTF proof of principle. This LDRD project had two main objectives: first, to provide a computational basis for experimental investigation of an alternative MTF plasma, and second, to explore the physics and computational needs for a continuing program. Secondary objectives included analytic and computational support for MTF experiments. The first objective was fulfilled. The second main objective has several facets, which are described in the body of this report. Finally, the authors have developed tools for analyzing data collected on the MAGO and LDRD experiments and have tested them on limited MAGO data.

  16. De nouveaux mécanismes de résilience : rapport interactif au sujet ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    13 Nov. 2012 ... It contains numerous links to resources related to the program and its projects, as well as avenues for exploring the research findings in greater depth. Read the report De nouveaux mécanismes de résilience (PDF, 2.98 MB).

  17. Issues with Strong Compression of Plasma Target by Stabilized Imploding Liner

    Science.gov (United States)

    Turchi, Peter; Frese, Sherry; Frese, Michael

    2017-10-01

    Strong compression (10:1 in radius) of an FRC by imploding liquid-metal liners stabilized against Rayleigh-Taylor modes, modeled using different loss scalings based on Bohm vs. 100X classical diffusion rates, is predicted to yield useful compressions with implosion times of half the initial energy lifetime. The elongation (length-to-diameter ratio) near peak compression needed to satisfy the empirical stability criterion and also retain alpha particles is about ten. The present paper extends these considerations to issues of the initial FRC, including stability conditions (S*/E) and allowable angular speeds. Furthermore, efficient recovery of the implosion energy and alpha-particle work, in order to reduce the nuclear gain necessary for an economical power reactor, is seen as an important element of the stabilized liner implosion concept for fusion. We describe recent progress in the design and construction of the high-energy-density prototype of a Stabilized Liner Compressor (SLC), leading to repetitive laboratory experiments to develop the plasma target. Supported by the ARPA-E ALPHA Program.

  18. O. Godard, C. Henry, P. Lagadec, E. Michel-Kerjan, 2002, Traité des nouveaux risques, éditions Gallimard, collection folio-actuel.

    Directory of Open Access Journals (Sweden)

    Bertrand Zuindeau

    2003-01-01

    Full Text Available While risk is inherent in the human condition itself, recent decades have nevertheless seen the emergence of types of risk unknown until then. The risks resulting from global warming, the supposed risks associated with GMOs, and possible new vectors of disease transmission (mad cow disease) are only a few examples of these new risks, whose first obvious characteristic is their anthropogenic origin, but which above all have as their ...

  19. Effect of compressibility on the hypervelocity penetration

    Science.gov (United States)

    Song, W. J.; Chen, X. W.; Chen, P.

    2018-02-01

    We further consider the effect of rod strength by employing the compressible penetration model to study the effect of compressibility on hypervelocity penetration. Meanwhile, we define different instances of penetration efficiency in various modified models and compare these penetration efficiencies to identify the effects of different factors in the compressible model. To systematically discuss the effect of compressibility in different metallic rod-target combinations, we construct three cases, i.e., the penetrations by the more compressible rod into the less compressible target, rod into the analogously compressible target, and the less compressible rod into the more compressible target. The effects of volumetric strain, internal energy, and strength on the penetration efficiency are analyzed simultaneously. It indicates that the compressibility of the rod and target increases the pressure at the rod/target interface. The more compressible rod/target has larger volumetric strain and higher internal energy. Both the larger volumetric strain and higher strength enhance the penetration or anti-penetration ability. On the other hand, the higher internal energy weakens the penetration or anti-penetration ability. The two trends conflict, but the volumetric strain dominates in the variation of the penetration efficiency, which would not approach the hydrodynamic limit if the rod and target are not analogously compressible. However, if the compressibility of the rod and target is analogous, it has little effect on the penetration efficiency.
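
    The hydrodynamic limit referred to above is the standard incompressible result obtained from a Bernoulli pressure balance at the rod/target interface; it is quoted here as background and is not taken from the paper. With v the rod velocity, u the penetration velocity, and ρ_p, ρ_t the rod and target densities, steady erosion gives

    ```latex
    \frac{1}{2}\,\rho_p\,(v-u)^2 = \frac{1}{2}\,\rho_t\,u^2
    \quad\Longrightarrow\quad
    \frac{P}{L} = \frac{u}{v-u} = \sqrt{\frac{\rho_p}{\rho_t}} ,
    ```

    which is the limit the compressible model approaches only when rod and target are similarly compressible, consistent with the conclusion stated above.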

  20. Multi-source feature extraction and target recognition in wireless sensor networks based on adaptive distributed wavelet compression algorithms

    Science.gov (United States)

    Hortos, William S.

    2008-04-01

    Proposed distributed wavelet-based algorithms are a means to compress sensor data received at the nodes forming a wireless sensor network (WSN) by exchanging information between neighboring sensor nodes. Local collaboration among nodes compacts the measurements, yielding a reduced fused set with equivalent information at far fewer nodes. Nodes may be equipped with multiple sensor types, each capable of sensing distinct phenomena: thermal, humidity, chemical, voltage, or image signals with low or no frequency content as well as audio, seismic or video signals within defined frequency ranges. Compression of the multi-source data through wavelet-based methods, distributed at active nodes, reduces downstream processing and storage requirements along the paths to sink nodes; it also enables noise suppression and more energy-efficient query routing within the WSN. Targets are first detected by the multiple sensors; then wavelet compression and data fusion are applied to the target returns, followed by feature extraction from the reduced data; feature data are input to target recognition/classification routines; targets are tracked during their sojourns through the area monitored by the WSN. Algorithms to perform these tasks are implemented in a distributed manner, based on a partition of the WSN into clusters of nodes. In this work, a scheme of collaborative processing is applied for hierarchical data aggregation and decorrelation, based on the sensor data itself and any redundant information, enabled by a distributed, in-cluster wavelet transform with lifting that allows multiple levels of resolution. The wavelet-based compression algorithm significantly decreases RF bandwidth and other resource use in target processing tasks. Following wavelet compression, features are extracted. The objective of feature extraction is to maximize the probabilities of correct target classification based on multi-source sensor measurements, while minimizing the resource expenditures at
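
    A minimal single-node example of the compression step described above: a one-level Haar wavelet transform written as lifting (predict/update) steps, followed by thresholding of the detail coefficients. This is an illustrative sketch, not the paper's distributed, multi-resolution in-cluster transform; the signal and threshold are assumed.

    ```python
    import numpy as np

    def haar_lifting_forward(x):
        """One level of a Haar wavelet transform via lifting (illustrative).
        x must have even length; returns (approximation, detail)."""
        even, odd = x[0::2].astype(float), x[1::2].astype(float)
        detail = odd - even               # predict: odd samples predicted by even neighbors
        approx = even + 0.5 * detail      # update: preserves the running mean
        return approx, detail

    def haar_lifting_inverse(approx, detail):
        even = approx - 0.5 * detail
        odd = detail + even
        x = np.empty(even.size + odd.size)
        x[0::2], x[1::2] = even, odd
        return x

    def compress(x, threshold):
        """Zero small detail coefficients -- the lossy step described in the abstract."""
        approx, detail = haar_lifting_forward(np.asarray(x))
        detail[np.abs(detail) < threshold] = 0.0
        return approx, detail
    ```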

  1. Coherent structures in ablatively compressed ICF targets and Rayleigh-Taylor instability

    International Nuclear Information System (INIS)

    Pant, H.C.; Desai, T.

    1996-01-01

    One of the major issues in laser-induced inertial confinement fusion (ICF) is a stable ablative compression of spherical fusion pellets. The main impediment to achieving this objective is the Rayleigh-Taylor instability at the pellet's ablation front. Under sufficiently high acceleration this instability can grow out of noise. However, it can also arise either from non-uniform laser intensity distribution over the pellet surface or from pellet-wall areal mass irregularity. Coherent structures in the dense target behind the ablation front can be effectively utilised for stabilisation of the Rayleigh-Taylor phenomenon. Such coherent structures, in the form of a superlattice, can be created by doping the pellet pusher with high atomic number (Z) microparticles. A compressed, cool pusher under laser irradiation behaves like a strongly correlated nonideal plasma when compressed to sufficiently high density such that the nonideality parameter exceeds unity. Moreover, the nonideality parameter for high-Z microinclusions may exceed a critical value of 180, and as a consequence they remain in the form of intact clusters, maintaining the superlattice during ablative acceleration. Micro-heterogeneity and its superlattice play an important role in the stabilization of the Rayleigh-Taylor instability through a variety of mechanisms. (orig.)

  2. Target design for the cylindrical compression of matter driven by heavy ion beams

    Energy Technology Data Exchange (ETDEWEB)

    Piriz, A.R. [E. T. S. I. Industriales, Universidad de Castilla-La Mancha, 13071 Ciudad Real (Spain)]. E-mail: roberto.piriz@uclm.es; Temporal, M. [E. T. S. I. Industriales, Universidad de Castilla-La Mancha, 13071 Ciudad Real (Spain); Lopez Cela, J.J. [E. T. S. I. Industriales, Universidad de Castilla-La Mancha, 13071 Ciudad Real (Spain); Grandjouan, N. [LULI, UMR 7605, Ecole Polytechnique-CNRS-CEA-Universite Paris VI, Palaiseau (France); Tahir, N.A. [GSI Darmstadt, Plankstrasse 1, 64291 Darmstadt (Germany); Serna Moreno, M.C. [E. T. S. I. Industriales, Universidad de Castilla-La Mancha, 13071 Ciudad Real (Spain); Portugues, R.F. [E. T. S. I. Industriales, Universidad de Castilla-La Mancha, 13071 Ciudad Real (Spain); Hoffmann, D.H.H. [GSI Darmstadt, Plankstrasse 1, 64291 Darmstadt (Germany)

    2005-05-21

    The compression of a cylindrical sample of hydrogen contained in a hollow shell of Pb or Au has been analyzed in the framework of the experiments to be performed in the heavy ion synchrotron SIS100 to be constructed at the Gesellschaft fuer Schwerionenforschung (GSI) Darmstadt. The target implosion is driven by an intense beam of heavy ions with a ring-shaped focal spot. We report the results of a parametric study of the final state of the compressed hydrogen in terms of the target and beam parameters. We consider the generation of the annular heated region by means of a radio-frequency wobbler that rotates the beam at extremely high frequencies in order to accommodate symmetry constraints. We have also studied the hydrogen conditions that can be achieved with a non-rotating beam with Gaussian focal spot and the possibility to use a beam stopper as an alternative way to avoid the direct heating of the sample. Finally, we report the analysis of the hydrodynamic instabilities that affect the implosion and the mitigating effects of the elastoplastic properties of the shell.

  3. Target design for the cylindrical compression of matter driven by heavy ion beams

    International Nuclear Information System (INIS)

    Piriz, A.R.; Temporal, M.; Lopez Cela, J.J.; Grandjouan, N.; Tahir, N.A.; Serna Moreno, M.C.; Portugues, R.F.; Hoffmann, D.H.H.

    2005-01-01

    The compression of a cylindrical sample of hydrogen contained in a hollow shell of Pb or Au has been analyzed in the framework of the experiments to be performed in the heavy ion synchrotron SIS100 to be constructed at the Gesellschaft fuer Schwerionenforschung (GSI) Darmstadt. The target implosion is driven by an intense beam of heavy ions with a ring-shaped focal spot. We report the results of a parametric study of the final state of the compressed hydrogen in terms of the target and beam parameters. We consider the generation of the annular heated region by means of a radio-frequency wobbler that rotates the beam at extremely high frequencies in order to accommodate symmetry constraints. We have also studied the hydrogen conditions that can be achieved with a non-rotating beam with Gaussian focal spot and the possibility to use a beam stopper as an alternative way to avoid the direct heating of the sample. Finally, we report the analysis of the hydrodynamic instabilities that affect the implosion and the mitigating effects of the elastoplastic properties of the shell

  4. Soviet paper on laser target heating, symmetry of irradiation, and two-dimensional effects on compression

    International Nuclear Information System (INIS)

    Sahlin, H.L.

    1976-01-01

    Included is a paper presented at the Annual Meeting of the Plasma Physics Division of the American Physical Society in San Francisco on November 19, 1976. The paper discusses some theoretical problems of laser target irradiation and compression investigated at the Laboratory of Quantum Radiophysics of the Lebedev Physical Institute. Of significant interest were the absorption and reflection of laser radiation in the corona plasma of a laser target.

  5. Effect of spatial nonuniformity of heating on compression and burning of a thermonuclear target under direct multibeam irradiation by a megajoule laser pulse

    Energy Technology Data Exchange (ETDEWEB)

    Bel’kov, S. A.; Bondarenko, S. V. [Russian Federal Nuclear Center, All-Russia Research Institute of Experimental Physics (Russian Federation); Vergunova, G. A. [Russian Academy of Sciences, Lebedev Physical Institute (Russian Federation); Garanin, S. G. [Russian Federal Nuclear Center, All-Russia Research Institute of Experimental Physics (Russian Federation); Gus’kov, S. Yu.; Demchenko, N. N.; Doskoch, I. Ya. [Russian Academy of Sciences, Lebedev Physical Institute (Russian Federation); Zmitrenko, N. V. [Russian Academy of Sciences, Keldysh Institute of Applied Mathematics (Russian Federation); Kuchugov, P. A., E-mail: pkuchugov@gmail.com; Rozanov, V. B.; Stepanov, R. V.; Yakhin, R. A. [Russian Academy of Sciences, Lebedev Physical Institute (Russian Federation)

    2017-02-15

    Direct-drive fusion targets are considered at present as an alternative to indirect-compression targets at a laser energy level of about 2 MJ. In this approach, the symmetry of compression and ignition of the thermonuclear fuel plays the major role. We report the results of a theoretical investigation of the compression and burning of spherical direct-drive targets under conditions of spatially nonuniform heating associated with a shift of the target from the beam focusing center and a possible laser energy imbalance among the beams. The investigation involves numerous calculations based on a complex of 1D and 2D codes: RAPID and SEND (for determining the target illumination and the dynamics of absorption) and DIANA and NUT (1D and multidimensional hydrodynamics of compression and burning of targets). The target under investigation had the form of a two-layer shell (ablator made of inertial material CH and DT ice) filled with DT gas. We have determined the range of admissible variation of the compression and combustion parameters of the target depending on the spatial nonuniformity of its heating by a multibeam laser system. It has been shown that low-mode (long-wavelength) perturbations deteriorate the characteristics of the central region owing to less effective conversion of the kinetic energy of the target shell into the internal energy of the center. Local initiation of burning is also observed in off-center regions of the target in the case of substantial irradiation asymmetry. In this case, burning as a rule does not spread over the entire volume of the DT fuel, which considerably reduces the thermonuclear yield compared with the case of spherical symmetry and central ignition.

  6. Simple model of the indirect compression of targets under conditions close to the national ignition facility at an energy of 1.5 MJ

    Energy Technology Data Exchange (ETDEWEB)

    Rozanov, V. B., E-mail: rozanov@sci.lebedev.ru; Vergunova, G. A., E-mail: verg@sci.lebedev.ru [Russian Academy of Sciences, Lebedev Physical Institute (Russian Federation)

    2015-11-15

    The possibility of analyzing and interpreting the reported experiments at the megajoule National Ignition Facility (NIF) laser on the compression of capsules in indirect-irradiation targets by means of the one-dimensional RADIAN program in spherical geometry has been studied. The problem of the energy balance in a target and the determination of the laser energy that should be used in the spherical model of the target has been considered. The results of the action of pulses differing in energy and time profile (“low-foot” and “high-foot” regimes) have been analyzed. The compression parameters of targets with a high-density carbon ablator have been obtained. The results of the simulations are in satisfactory agreement with the measurements and correspond to the range of the observed parameters. The set of compared results can be expanded, in particular, for a more detailed determination of the parameters of a target near the maximum compression of the capsule. The physical justification for using a one-dimensional description is that the final stage of the capsule compression must be close to a one-dimensional process. One-dimensional simulation of the capsule compression can be useful in establishing the boundary beyond which two-dimensional and three-dimensional simulations should be used.

  7. The capability of professional- and lay-rescuers to estimate the chest compression-depth target: a short, randomized experiment.

    Science.gov (United States)

    van Tulder, Raphael; Laggner, Roberta; Kienbacher, Calvin; Schmid, Bernhard; Zajicek, Andreas; Haidvogel, Jochen; Sebald, Dieter; Laggner, Anton N; Herkner, Harald; Sterz, Fritz; Eisenburger, Philip

    2015-04-01

    In CPR, sufficient compression depth is essential. The American Heart Association ("at least 5 cm", AHA-R) and the European Resuscitation Council ("at least 5 cm, but not to exceed 6 cm", ERC-R) recommendations differ, and both are rarely achieved in practice. This study aims to investigate the effects of differing target-depth instructions on the compression-depth performance of professional and lay rescuers. 110 professional rescuers and 110 lay rescuers were randomized (1:1, 4 groups) to estimate the AHA-R or ERC-R depth on a paper sheet (with a given horizontal axis) using a pencil and to perform chest compressions according to the AHA-R or ERC-R on a manikin. Distance estimation and compression depth were the outcome variables. Professional rescuers estimated the distance correctly according to the AHA-R in 19/55 (34.5%) and to the ERC-R in 20/55 (36.4%) cases (p=0.84). Professional rescuers achieved correct compression depth according to the AHA-R in 39/55 (70.9%) and to the ERC-R in 36/55 (65.4%) cases (p=0.97). Lay rescuers estimated the distance correctly according to the AHA-R in 18/55 (32.7%) and to the ERC-R in 20/55 (36.4%) cases (p=0.59). Lay rescuers achieved correct compression depth according to the AHA-R in 39/55 (70.9%) and to the ERC-R in 26/55 (47.3%) cases (p=0.02). Professional and lay rescuers have severe difficulties in correctly estimating distance on a sheet of paper. Professional rescuers are able to meet the AHA-R and ERC-R targets equally well. In lay rescuers, the AHA-R was associated with significantly higher success rates. The inability to estimate distance could explain the failure to perform chest compressions appropriately. For teaching lay rescuers, the AHA-R with no upper limit of compression depth might be preferable. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
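
    The abstract does not state which statistical test produced the quoted p-values. As a plausibility check only, a chi-square test with continuity correction on the lay-rescuer counts quoted above yields a p-value close to the reported 0.02:

    ```python
    from scipy.stats import chi2_contingency

    # Lay-rescuer compression-depth success counts quoted in the abstract:
    # AHA-R group: 39 of 55 correct; ERC-R group: 26 of 55 correct.
    table = [[39, 55 - 39],
             [26, 55 - 26]]

    chi2, p, dof, expected = chi2_contingency(table)   # Yates continuity correction by default
    print(f"chi2 = {chi2:.2f}, p = {p:.3f}")           # p comes out near the reported 0.02
    ```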

  8. Gaseous laser targets and optical diagnostics for studying compressible hydrodynamic instabilities

    International Nuclear Information System (INIS)

    Edwards, J M; Robey, H; Mackinnon, A

    2001-01-01

    The goal is to explore the combination of optical diagnostics and gaseous targets to obtain important information about compressible turbulent flows that cannot be derived from traditional laser experiments, for the purposes of verification and validation (V&V) of hydrodynamics models and understanding scaling. First-year objectives: develop and characterize the blast wave-gas jet test bed; perform single-pulse shadowgraphy of the blast wave interaction with a turbulent gas jet as a function of blast wave Mach number; explore double-pulse shadowgraphy and image correlation for extracting velocity spectra in the shock-turbulent flow interaction; and explore the use/adaptation of advanced diagnostics.

  9. Time resolved x-ray pinhole photography of compressed laser fusion targets

    International Nuclear Information System (INIS)

    Attwood, D.T.

    1976-01-01

    Use of the Livermore x-ray streak camera to temporally record x-ray pinhole images of laser-compressed targets is described. Use is made of specially fabricated composite x-ray pinholes which are near diffraction limited for 6 Å x-rays, but easily aligned with a He-Ne laser at 6328 Å wavelength. With a 6 μm x-ray pinhole, the overall system can be aligned to 5 μm accuracy and provides implosion characteristics with space-time resolutions of approximately 6 μm and 15 psec. Acceptable criteria for pinhole alignment, requisite x-ray flux, and filter characteristics are discussed. Implosion characteristics are presented from our present experiments with 68 μm diameter glass microshell targets and 0.45 terawatt, 70 psec Nd laser pulses. Final implosion velocities in excess of 3 × 10⁷ cm/sec are evident.

  10. Method for Multiple Targets Tracking in Cognitive Radar Based on Compressed Sensing

    Directory of Open Access Journals (Sweden)

    Yang Jun

    2016-02-01

    Full Text Available A multiple-target tracking method for cognitive radar based on Compressed Sensing (CS) is proposed. In this method, CS theory is introduced into the cognitive radar tracking process in a multiple-target scenario. The echo signal is expressed sparsely; the sparsifying matrix and the measurement matrix are designed on this basis, and the measurement signal is then reconstructed under the down-sampling condition. On the receiving end, considering that the traditional particle filter suffers from degeneracy and requires a large number of particles, a particle swarm optimization particle filter is used to track the targets. On the transmitting end, the Posterior Cramér-Rao Bound (PCRB) on the tracking accuracy is derived, and the radar waveform parameters are then cognitively designed using the PCRB. Simulation results show that the proposed method not only reduces the data quantity but also provides better tracking performance than the traditional method.
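
    For readers unfamiliar with the degeneracy problem mentioned above, the sketch below is a generic bootstrap particle filter for a one-dimensional constant-velocity target, with systematic resampling as the usual remedy. It is not the paper's particle swarm optimization particle filter, and the motion model, noise levels, and toy data are all assumed for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def bootstrap_pf(measurements, n_particles=500, dt=1.0, q=0.05, r=1.0):
        """Generic bootstrap particle filter for a 1-D constant-velocity target.

        State is (position, velocity); measurements are noisy positions.
        Systematic resampling counters the weight degeneracy noted in the abstract.
        """
        particles = rng.normal(0.0, 1.0, size=(n_particles, 2))
        estimates = []
        for z in measurements:
            # propagate: constant-velocity motion with process noise
            particles[:, 0] += particles[:, 1] * dt + rng.normal(0, q, n_particles)
            particles[:, 1] += rng.normal(0, q, n_particles)
            # weight by the measurement likelihood (Gaussian position sensor)
            w = np.exp(-0.5 * ((z - particles[:, 0]) / r) ** 2) + 1e-300
            w /= w.sum()
            estimates.append(w @ particles[:, 0])
            # systematic resampling
            edges = np.cumsum(w)
            edges[-1] = 1.0
            u = (rng.random() + np.arange(n_particles)) / n_particles
            particles = particles[np.searchsorted(edges, u)]
        return np.array(estimates)

    # toy run: a target moving at unit velocity, observed in unit-variance noise
    truth = np.arange(50, dtype=float)
    obs = truth + rng.normal(0, 1.0, truth.size)
    print(np.abs(bootstrap_pf(obs) - truth).mean())   # mean absolute tracking error
    ```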

  11. Advances in image compression and automatic target recognition; Proceedings of the Meeting, Orlando, FL, Mar. 30, 31, 1989

    Science.gov (United States)

    Tescher, Andrew G. (Editor)

    1989-01-01

    Various papers on image compression and automatic target recognition are presented. Individual topics addressed include: target cluster detection in cluttered SAR imagery, model-based target recognition using laser radar imagery, Smart Sensor front-end processor for feature extraction of images, object attitude estimation and tracking from a single video sensor, symmetry detection in human vision, analysis of high resolution aerial images for object detection, obscured object recognition for an ATR application, neural networks for adaptive shape tracking, statistical mechanics and pattern recognition, detection of cylinders in aerial range images, moving object tracking using local windows, new transform method for image data compression, quad-tree product vector quantization of images, predictive trellis encoding of imagery, reduced generalized chain code for contour description, compact architecture for a real-time vision system, use of human visibility functions in segmentation coding, color texture analysis and synthesis using Gibbs random fields.

  12. Miniature proportional counter for compression measurements of laser-fusion targets

    International Nuclear Information System (INIS)

    Lane, S.M.; Dellis, J.H.; Bennett, C.K.; Campbell, E.M.

    1981-10-01

    Direct-drive laser fusion targets consisting of DT gas encapsulated in glass microshells produce 14.1 MeV neutrons that can interact with silicon-28 nuclei in the glass to produce a 2.2-minute aluminum-28 activity. From the number of ²⁸Al nuclei created and the neutron yield, the compressed glass areal density can be found. To determine the number of activated atoms created, we collect approximately one-half of the target debris on a thin metal foil which is transferred to our beta-gamma coincidence detector. This detector consists of a 25 cm × 25 cm NaI(Tl) crystal having a 5 cm × 15 cm well. We have recently built a miniature proportional counter that fits into this well and is used to detect beta particles. It is constructed of 0.025 cm thick copper and has nine separate chambers through which methane flows. The coincidence background is 0.14 cpm and the measured beta efficiency is 45%. We are now building a 0.0125 cm thick counter made of aluminum having a predicted efficiency of > 90%.

  13. Numerical investigation on target implosions driven by radiation ablation and shock compression in dynamic hohlraums

    Energy Technology Data Exchange (ETDEWEB)

    Xiao, Delong; Sun, Shunkai; Zhao, Yingkui; Ding, Ning; Wu, Jiming; Dai, Zihuan; Yin, Li; Zhang, Yang; Xue, Chuang [Institute of Applied Physics and Computational Mathematics, Beijing 100088 (China)

    2015-05-15

    In a dynamic hohlraum driven inertial confinement fusion (ICF) configuration, the target may experience two different kinds of implosion. One is driven by hohlraum radiation ablation, which is approximately symmetric at the equator and poles. The second is caused by the radiating shock produced in Z-pinch dynamic hohlraums and takes place only at the equator. Obtaining a symmetric target implosion driven by radiation ablation while avoiding asymmetric shock compression is a crucial issue in driving ICF with dynamic hohlraums. It is known that when the target is heated by hohlraum radiation, the ablated plasma expands outward. The pressure in the shocked converter plasma varies approximately linearly with the material temperature, whereas the ablation pressure in the ablated plasma varies as the 3.5 power of the hohlraum radiation temperature. Therefore, as the hohlraum temperature increases, the ablation pressure will eventually exceed the shock pressure, and the expansion of the ablated plasma will markedly weaken the shock propagation and decrease its velocity after it propagates into the ablator plasma. Consequently, a longer time window is available for the symmetric target implosion driven by radiation ablation. In this paper these processes are numerically investigated by changing the drive currents or varying the load parameters. The simulation results show that a critical hohlraum radiation temperature is needed to provide a high enough ablation pressure to decelerate the shock, thus providing a long enough time window for the symmetric fuel compression driven by radiation ablation.
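
    The scaling argument above can be written compactly; the exponents are those quoted in the abstract, while the proportionality constants are not given there, so only the existence of a crossover temperature follows. With T_m the material temperature of the shocked converter plasma and T_r the hohlraum radiation temperature,

    ```latex
    P_{\mathrm{shock}} \propto T_m , \qquad
    P_{\mathrm{abl}} \propto T_r^{3.5} ,
    ```

    so above some critical hohlraum radiation temperature the ablation pressure exceeds the shock pressure and the expanding ablated plasma decelerates the radiating shock, lengthening the window for symmetric radiation-driven compression.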

  14. Nouveaux pétroles : quel avenir ? Partie 2 New Oil: What's in the Future? Part Two

    Directory of Open Access Journals (Sweden)

    Boy De la Tour X.

    2006-11-01

    Full Text Available The increase in oil prices in 1973 made an entire range of expensive oil accessible, but this new oil still raises considerable technical problems, and its economic competitiveness has been greatly affected by the downturn in the oil market. What is the state of the art of existing technologies? What remains of the ambitious projects conceived in the 1970s? Looking toward 2000/2010, what will the impact of this new oil be, in terms of quantities and from the strategic standpoint? These are the questions that the present study attempts to answer.

  15. Application specific compression : final report.

    Energy Technology Data Exchange (ETDEWEB)

    Melgaard, David Kennett; Byrne, Raymond Harry; Myers, Daniel S.; Harrison, Carol D.; Lee, David S.; Lewis, Phillip J.; Carlson, Jeffrey J.

    2008-12-01

    With the continuing development of more capable data-gathering sensors comes an increased demand on the bandwidth for transmitting larger quantities of data. To help counteract that trend, a study was undertaken to determine appropriate lossy data compression strategies that minimize their impact on target detection and characterization. The survey of current compression techniques led us to the conclusion that wavelet compression was well suited for this purpose. Wavelet analysis essentially applies a low-pass and a high-pass filter to the data, converting the data into related coefficients that maintain spatial information as well as frequency information. Wavelet compression is achieved by zeroing the coefficients that pertain to the noise in the signal, i.e., the high-frequency, low-amplitude portion. This approach is well suited for our goal because it reduces the noise in the signal with only minimal impact on the larger, lower-frequency target signatures. The resulting coefficients can then be encoded using lossless techniques with higher compression levels because of the lower entropy and significant number of zeros. No significant signal degradation or difficulties in target characterization or detection were observed or measured when wavelet compression was applied to simulated and real data, even when over 80% of the coefficients were zeroed. While the exact level of compression will be data-set dependent, for the data sets we studied, compression factors over 10 were found to be satisfactory where conventional lossless techniques achieved levels of less than 3.

  16. Compression of a Deep Competitive Network Based on Mutual Information for Underwater Acoustic Targets Recognition

    Directory of Open Access Journals (Sweden)

    Sheng Shen

    2018-04-01

    Full Text Available The accuracy of underwater acoustic target recognition from limited ship-radiated noise can be improved by a deep neural network trained with a large number of unlabeled samples. However, redundant features learned by a deep neural network have negative effects on recognition accuracy and efficiency. A compressed deep competitive network is proposed to learn and extract features from ship-radiated noise. The core ideas of the algorithm are: (1) competitive learning: by integrating competitive learning into the restricted Boltzmann machine learning algorithm, the hidden units share the weights within each predefined group; (2) network pruning: pruning based on mutual information is deployed to remove redundant parameters and further compress the network. Experiments based on real ship-radiated noise show that the network can increase recognition accuracy with fewer informative features. The compressed deep competitive network achieves a classification accuracy of 89.1%, which is 5.3% higher than the deep competitive network and 13.1% higher than state-of-the-art signal-processing feature extraction methods.
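
    The abstract says the pruning step is based on mutual information but does not specify between which quantities it is computed. The sketch below shows one common choice, ranking hidden units by a histogram estimate of the mutual information between each unit's activation and the class label; the activations, labels, bin count, and keep ratio are all assumed for illustration.

    ```python
    import numpy as np

    def mutual_information(x, y, bins=16):
        """Histogram estimate of mutual information I(X;Y) in nats (illustrative)."""
        joint, _, _ = np.histogram2d(x, y, bins=bins)
        pxy = joint / joint.sum()
        px = pxy.sum(axis=1, keepdims=True)
        py = pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0
        return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

    def prune_hidden_units(activations, labels, keep_ratio=0.5):
        """Rank hidden units by MI between their activation and the class label,
        and keep only the most informative fraction (a generic pruning criterion)."""
        labels = np.asarray(labels, dtype=float)
        scores = np.array([mutual_information(a, labels) for a in activations.T])
        n_keep = max(1, int(keep_ratio * activations.shape[1]))
        return np.argsort(scores)[::-1][:n_keep]    # indices of units to retain
    ```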

  17. Investigation of the compression of high-aspect targets irradiated with a laser pulse of the second harmonic of the Iskra-4 iodine laser

    International Nuclear Information System (INIS)

    Bel'kov, S.A.; Bessarab, A.V.; Voronich, I.N.; Garanin, S.G.; Dolgoleva, G.V.; Zaretskii, A.I.; Izgorodin, V.M.; Ilyushechkin, B.N.; Kochemasov, G.G.; Kunin, A.V.; Martynenko, S.P.; Merkulov, S.G.; Rukavishnikov, N.N.; Ryadov, A.V.; Suslov, N.A.; Sukharev, S.A.

    1992-01-01

    Theoretical modeling of experiments on the compression of targets under exploding-pusher conditions, carried out at the Iskra-4 facility with the iodine laser operating at its fundamental frequency (λ = 1.315 μm), showed that the discrepancy between the calculated and experimental neutron yields grows as the aspect ratio of the target shell used in the experiment increases. After conversion of the Iskra-4 facility to generate the second harmonic and improvement of the beam uniformity in the region of the target, a series of experiments was carried out on the compression of high-aspect-ratio targets with A_s > 300. In this series a record neutron yield for this installation, N = 6 × 10⁷, was obtained in experiments with glass-shell targets.

  18. High-speed photographic methods for compression dynamics investigation of laser irradiated shell target

    International Nuclear Information System (INIS)

    Basov, N.G.; Kologrivov, A.A.; Krokhin, O.N.; Rupasov, A.A.; Shikanov, A.S.

    1979-01-01

    Three methods are described for high-speed diagnostics of the compression dynamics of shell targets spherically heated by laser light at the 'Kal'mar' installation. The first method is based on direct investigation of the space-time evolution of the critical-density region for Nd-laser emission (N_e ≈ 10²¹ cm⁻³) by means of streak photography of the plasma image in second-harmonic light. The second method involves investigation of the time evolution of the second-harmonic spectral distribution by means of a spectrograph coupled with a streak camera. The basis of the third method is the use of a special laser pulse with two time-separated intensity maxima for the irradiation of shell targets and the analysis of the resulting X-ray pinhole pictures. (author)

  19. Compressive multi-mode superresolution display

    KAUST Repository

    Heide, Felix

    2014-01-01

    Compressive displays are an emerging technology exploring the co-design of new optical device configurations and compressive computation. Previously, research has shown how to improve the dynamic range of displays and facilitate high-quality light field or glasses-free 3D image synthesis. In this paper, we introduce a new multi-mode compressive display architecture that supports switching between 3D and high dynamic range (HDR) modes as well as a new super-resolution mode. The proposed hardware consists of readily-available components and is driven by a novel splitting algorithm that computes the pixel states from a target high-resolution image. In effect, the display pixels present a compressed representation of the target image that is perceived as a single, high resolution image. © 2014 Optical Society of America.
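
    As a rough illustration of what a splitting algorithm for a layered display does, the sketch below uses the classic dual-modulation heuristic: factor a target image into a low-resolution modulator and a full-resolution panel whose pointwise product approximates the target. This is not the paper's multi-mode splitting algorithm; the block size, square-root split, and clipping are assumptions of this simple heuristic.

    ```python
    import numpy as np

    def dual_layer_split(target, block=8):
        """Classic dual-modulation splitting heuristic (not the paper's algorithm).

        target: 2-D array of linear-light values in [0, 1], with dimensions
        divisible by `block`. Returns (backlight, panel): a low-resolution
        modulator and a full-resolution panel whose product approximates target.
        """
        h, w = target.shape
        low = target.reshape(h // block, block, w // block, block).mean(axis=(1, 3))
        backlight = np.sqrt(np.clip(low, 1e-4, 1.0))        # split dynamic range evenly
        up = np.kron(backlight, np.ones((block, block)))    # crude upsampling of the backlight
        panel = np.clip(target / up, 0.0, 1.0)              # compensate on the full-res layer
        return backlight, panel
    ```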

  20. Compression of a spherically symmetric deuterium-tritium plasma liner onto a magnetized deuterium-tritium target

    International Nuclear Information System (INIS)

    Santarius, J. F.

    2012-01-01

    Converging plasma jets may be able to reach the regime of high energy density plasmas (HEDP). The successful application of plasma jets to magneto-inertial fusion (MIF) would heat the plasma by fusion products and should increase the plasma energy density. This paper reports the results of using the University of Wisconsin’s 1-D Lagrangian radiation-hydrodynamics fusion code BUCKY to investigate two MIF converging plasma jet test cases originally analyzed by Samulyak et al. [Physics of Plasmas 17, 092702 (2010)]. In these cases, 15 cm or 5 cm radially thick deuterium-tritium (DT) plasma jets merge at 60 cm from the origin and converge radially onto a DT target magnetized to 2 T and of radius 5 cm. The BUCKY calculations reported here model these cases, starting from the time of initial contact of the jets and target. Compared to the one-temperature Samulyak et al. calculations, the one-temperature BUCKY results show similar behavior, except that the plasma radius remains about twice as large near maximum compression. One-temperature and two-temperature BUCKY results differ, reflecting the sensitivity of the calculations to timing and plasma parameter details, with the two-temperature case giving a more sustained compression.

  1. Direct Observation of Strong Ion Coupling in Laser-Driven Shock-Compressed Targets

    International Nuclear Information System (INIS)

    Ravasio, A.; Benuzzi-Mounaix, A.; Loupias, B.; Ozaki, N.; Rabec le Gloahec, M.; Koenig, M.; Gregori, G.; Daligault, J.; Delserieys, A.; Riley, D.; Faenov, A. Ya.; Pikuz, T. A.

    2007-01-01

    In this Letter we report on a near-collective x-ray scattering experiment on shock-compressed targets. A highly coupled Al plasma was generated and probed by spectrally resolving an x-ray source forward-scattered by the sample. A significant reduction in the intensity of the elastic scattering was observed, which we attribute to the formation of incipient long-range order. This interpretation is confirmed by x-ray scattering calculations accounting for both electron degeneracy and strong coupling effects. Measurements from rear-side visible diagnostics are consistent with the plasma parameters inferred from the x-ray scattering data. These results provide experimental evidence of strongly coupled ionic dynamics in dense plasmas.

  2. Quasi-isentropic compression using compressed water flow generated by underwater electrical explosion of a wire array

    Science.gov (United States)

    Gurovich, V.; Virozub, A.; Rososhek, A.; Bland, S.; Spielman, R. B.; Krasik, Ya. E.

    2018-05-01

    A major area of experimental research in material equations of state today involves the use of off-Hugoniot measurements rather than shock experiments that give only Hugoniot data. There is a wide range of applications of quasi-isentropic compression of matter, including the direct measurement of the complete isentrope of materials in a single experiment and minimizing the heating of flyer plates for high-velocity shock measurements. We propose a novel approach to generating quasi-isentropic compression of matter. Using analytical modeling and hydrodynamic simulations, we show that a working fluid composed of compressed water, generated by an underwater electrical explosion of a planar wire array, might be used to efficiently drive the quasi-isentropic compression of a copper target to pressures of ∼2 × 10¹¹ Pa without any complex target designs.

  3. Beam dynamics of the Neutralized Drift Compression Experiment-II (NDCX-II),a novel pulse-compressing ion accelerator

    International Nuclear Information System (INIS)

    Friedman, A.; Barnard, J.J.; Cohen, R.H.; Grote, D.P.; Lund, S.M.; Sharp, W.M.; Faltens, A.; Henestroza, E.; Jung, J.-Y.; Kwan, J.W.; Lee, E.P.; Leitner, M.A.; Logan, B.G.; Vay, J.-L.; Waldron, W.L.; Davidson, R.C.; Dorf, M.; Gilson, E.P.; Kaganovich, I.D.

    2009-01-01

    Intense beams of heavy ions are well suited for heating matter to regimes of emerging interest. A new facility, NDCX-II, will enable studies of warm dense matter at ∼1 eV and near-solid density, and of heavy-ion inertial fusion target physics relevant to electric power production. For these applications the beam must deposit its energy rapidly, before the target can expand significantly. To form such pulses, ion beams are temporally compressed in neutralizing plasma; current amplification factors of ∼50-100 are routinely obtained on the Neutralized Drift Compression Experiment (NDCX) at LBNL. In the NDCX-II physics design, an initial non-neutralized compression renders the pulse short enough that existing high-voltage pulsed power can be employed. This compression is first halted and then reversed by the beam's longitudinal space-charge field. Downstream induction cells provide acceleration and impose the head-to-tail velocity gradient that leads to the final neutralized compression onto the target. This paper describes the discrete-particle simulation models (1-D, 2-D, and 3-D) employed and the space-charge-dominated beam dynamics being realized.
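
    The head-to-tail velocity tilt described above compresses the bunch ballistically as it drifts; the neutralizing plasma's role is to cancel space charge so this simple picture holds. The sketch below illustrates only the kinematics, with an illustrative (not NDCX-II) velocity, bunch length, and tilt, and with space charge ignored.

    ```python
    import numpy as np

    # Ballistic picture of neutralized drift compression: a linear head-to-tail
    # velocity tilt makes the beam slices converge longitudinally.
    n = 2000
    z0 = np.linspace(-0.1, 0.1, n)            # initial slice positions, m (0.2 m bunch)
    v0 = 1.0e6                                # mean ion velocity, m/s (illustrative)
    tilt = 0.05 * v0                          # head-to-tail velocity spread
    v = v0 - (z0 / z0.max()) * tilt / 2       # tail (z < 0) moves faster than head (z > 0)

    t_focus = (z0.max() - z0.min()) / tilt    # time at which the slices overlap
    for t in np.linspace(0.0, t_focus, 5):
        z = z0 + (v - v0) * t                 # slice positions in the beam frame
        print(f"t = {t*1e6:5.2f} us   bunch length = {(z.max() - z.min())*100:6.2f} cm")
    ```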

  4. Possible version of the compression degradation of the thermonuclear indirect-irradiation targets at the national ignition facility and a reason for the failure of ignition

    Energy Technology Data Exchange (ETDEWEB)

    Rozanov, V. B., E-mail: rozanov@sci.lebedev.ru; Vergunova, G. A., E-mail: verg@sci.lebedev.ru [Russian Academy of Sciences, Lebedev Physical Institute (Russian Federation)

    2017-01-15

    The main parameters of target compression, and the trends as the irradiation conditions are changed, are determined by analyzing the published results of experiments at the megajoule National Ignition Facility (NIF) on the compression of capsules in indirect-irradiation targets by means of the one-dimensional RADIAN program in spherical geometry. A possible version of the “failure of ignition” of an indirect-irradiation target under the NIF conditions is attributed to radiation transfer. The application of the one-dimensional model to analyze the National Ignition Campaign (NIC) experiments allows identifying conditions corresponding to a future ignition regime and distinguishing them from conditions under which ignition does not occur.

  5. Targeted retrograde transfection of adenovirus vector carrying brain-derived neurotrophic factor gene prevents loss of mouse (twy/twy) anterior horn neurons in vivo sustaining mechanical compression.

    Science.gov (United States)

    Xu, Kan; Uchida, Kenzo; Nakajima, Hideaki; Kobayashi, Shigeru; Baba, Hisatoshi

    2006-08-01

    Immunohistochemical analysis after adenovirus (AdV)-mediated BDNF gene transfer in and around the area of mechanical compression in the cervical spinal cord of the hyperostotic mouse (twy/twy). To investigate the neuroprotective effect of targeted AdV-BDNF gene transfection in the twy mouse with spontaneous chronic compression of the spinal cord motoneurons. Several studies have reported the neuroprotective effects of neurotrophins on the injured spinal cord. However, no report has described the effect of targeted retrograde neurotrophic gene delivery on motoneuron survival in chronic compression lesions of the cervical spinal cord resembling lesions of myelopathy. The LacZ marker gene carried by an adenoviral vector (AdV-LacZ) was used to evaluate retrograde delivery from the sternomastoid muscle in adult (16-week-old) twy mice and in ICR control mice. Four weeks after the AdV-LacZ or AdV-BDNF injection, the compressed cervical spinal cord was removed en bloc for immunohistologic investigation of β-galactosidase activity and immunoreactivity and immunoblot analyses of BDNF. The number of anterior horn neurons was counted using Nissl, ChAT and AChE staining. Spinal accessory motoneurons between the C1 and C3 segments were successfully transfected by AdV-LacZ in both twy and ICR mice after targeted intramuscular injection. Immunoreactivity to BDNF was significantly stronger in AdV-BDNF-transfected twy mice than in AdV-LacZ-transfected mice. At the cord level showing the maximum compression in AdV-BDNF-transfected twy mice, the number of anterior horn neurons was significantly higher in the topographic neuronal cell counting of Nissl-, ChAT-, and AChE-stained samples than in AdV-LacZ-injected twy mice. Targeted AdV-BDNF gene delivery significantly increased the number of Nissl-stained anterior horn neurons and enhanced cholinergic enzyme activities in the twy mouse. Our results suggest that targeted retrograde AdV-BDNF gene delivery in vivo may enhance neuronal survival even under chronic mechanical compression.

  6. CEPRAM: Compression for Endurance in PCM RAM

    OpenAIRE

    González Alberquilla, Rodrigo; Castro Rodríguez, Fernando; Piñuel Moreno, Luis; Tirado Fernández, Francisco

    2017-01-01

    We deal with the endurance problem of Phase Change Memories (PCM) by proposing Compression for Endurance in PCM RAM (CEPRAM), a technique to extend the lifespan of PCM-based main memory through compression. We introduce a total of three compression schemes based on already existing schemes, but targeting compression for PCM-based systems. We perform a two-level evaluation. First, we quantify the performance of the compression, in terms of compressed size, bit-flips and how they are affected by e...

  7. Traité des nouveaux risques précaution, crise, assurance

    CERN Document Server

    Godard, Olivier; Lagadec, Patrick; Michel-Kerjan, Erwann

    2002-01-01

    Going beyond denunciations of Promethean technology, and building on the findings of economic research and other social-science disciplines, this pioneering work assembles the pieces of a scattered puzzle. Its purpose is clear: to identify the main lines of a governance of new risks. This governance rests on three pillars, which organize the panorama offered: precaution, from risk theory to the theory of political regimes in a world that is both non-probabilistic and contested; crisis prevention and management, whose salient features are shown through three exemplary cases (the criminal contamination of a pharmaceutical product, the destruction of the Quebec electrical grid in 1998, and the mad-cow epidemic in the United Kingdom); and the insurance of large-scale risks (natural disasters, technological catastrophes and mass terrorism) which, whether realized or potential, are upending the economics of insurance. Why a Treatise? The reason is...

  8. Compressive multi-mode superresolution display

    KAUST Repository

    Heide, Felix; Gregson, James; Wetzstein, Gordon; Raskar, Ramesh D.; Heidrich, Wolfgang

    2014-01-01

    consists of readily-available components and is driven by a novel splitting algorithm that computes the pixel states from a target high-resolution image. In effect, the display pixels present a compressed representation of the target image that is perceived

  9. Analysis of target volume motion followed by induced abdominal compression in tomotherapy for prostate cancer

    International Nuclear Information System (INIS)

    Oh, Jeong Hun; Jung, Geon A; Jung, Won Seok; Jo, Jung Young; Kim, Gi Chul; Choi, Tae Kyu

    2014-01-01

    To evaluate the changes in abdominal motion between interfraction and intrafraction imaging when abdominal compression is used to reduce abdominal motion, 60 MVCT images were obtained before and after tomotherapy from 10 prostate cancer patients over the whole radiotherapy period. Shift values (X, lateral; Y, longitudinal; Z, vertical; and Roll) were measured, and from these the correlation between interfraction setup change and intrafraction target motion under abdominal compression was analyzed. The interfraction motion changes were, on average, X 0.65±2.32 mm, Y 1.41±4.83 mm, Z 0.73±0.52 mm and Roll 0.96±0.21 mm. The intrafraction motion changes were, on average, X 0.15±0.44 mm, Y 0.13±0.44 mm, Z 0.24±0.64 mm and Roll 0.1±0.9 mm. The average PTV maximum dose difference was smallest for the 10% phase and largest for the 70% phase. The average spinal cord maximum dose difference was smallest for the 0% phase and largest for the 50% phase. The average differences in V20, V10 and V5 of the lung showed no clear trend. Abdominal compression can minimize the motion of internal organs and of the patient, so it should allow a more favorable dose distribution to be obtained, without damage to normal structures, by keeping the margin used in producing the PTV small.

  10. Enhanced compressed sensing for visual target tracking in wireless visual sensor networks

    Science.gov (United States)

    Qiang, Guo

    2017-11-01

    Moving object tracking in wireless sensor networks (WSNs) has been widely applied in various fields. Designing low-power WSNs under the limited resources of the sensor nodes, such as energy and bandwidth constraints, is a high priority. However, most existing works focus on only a single optimization criterion. An efficient compressive sensing technique based on a customized memory gradient pursuit algorithm with early termination in WSNs is presented, which strikes compelling trade-offs among energy dissipation for wireless transmission, bandwidth, and storage. The proposed approach then adopts an unscented particle filter to predict the location of the target. The experimental results, together with a theoretical analysis, demonstrate the substantially superior effectiveness of the proposed model and framework in terms of energy and speed under the resource limitations of a visual sensor node.

  11. The impact of chest compression rates on quality of chest compressions - a manikin study.

    Science.gov (United States)

    Field, Richard A; Soar, Jasmeet; Davies, Robin P; Akhtar, Naheed; Perkins, Gavin D

    2012-03-01

    Chest compressions are often performed at a variable rate during cardiopulmonary resuscitation (CPR). The effect of compression rate on other chest compression quality variables (compression depth, duty-cycle, leaning, performance decay over time) is unknown. This randomised controlled cross-over manikin study examined the effect of different compression rates on the other chest compression quality variables. Twenty healthcare professionals performed 2 min of continuous compressions on an instrumented manikin at rates of 80, 100, 120, 140 and 160 min^-1 in random order. An electronic metronome was used to guide the compression rate. Compression data were analysed by repeated measures ANOVA and are presented as mean (SD). Non-parametric data were analysed by the Friedman test. At faster compression rates there were significant improvements in the number of compressions delivered (160 (2) at 80 min^-1 vs. 312 (13) compressions at 160 min^-1, P<0.001) and in compression duty-cycle (43 (6)% at 80 min^-1 vs. 50 (7)% at 160 min^-1, P<0.001). This came at the cost of a significant reduction in compression depth (39.5 (10) mm at 80 min^-1 vs. 34.5 (11) mm at 160 min^-1, P<0.001) and earlier decay in compression quality (median decay point 120 s at 80 min^-1 vs. 40 s at 160 min^-1, P<0.001). Additionally, not all participants achieved the target rate (100% at 80 min^-1 vs. 70% at 160 min^-1). Rates above 120 min^-1 had the greatest impact on reducing chest compression quality. For rescuers trained to Guidelines 2005, a chest compression rate of 100-120 min^-1 for 2 min is feasible whilst maintaining adequate chest compression quality in terms of depth, duty-cycle, leaning, and decay in compression performance. Further studies are needed to assess the impact of the Guidelines 2010 recommendation for deeper and faster chest compressions.

  12. Effect of feedback on delaying deterioration in quality of compressions during 2 minutes of continuous chest compressions

    DEFF Research Database (Denmark)

    Lyngeraa, Tobias S; Hjortrup, Peter Buhl; Wulff, Nille B

    2012-01-01

    delays deterioration of quality of compressions. METHODS: Participants attending a national one-day conference on cardiac arrest and CPR in Denmark were randomized to perform single-rescuer BLS with (n = 26) or without verbal and visual feedback (n = 28) on a manikin using a ZOLL AED plus. Data were...... analyzed using Rescuenet Code Review. Blinding of participants was not possible, but allocation concealment was performed. Primary outcome was the proportion of delivered compressions within target depth compared over a 2-minute period within the groups and between the groups. Secondary outcome...... was the proportion of delivered compressions within target rate compared over a 2-minute period within the groups and between the groups. Performance variables for 30-second intervals were analyzed and compared. RESULTS: 24 (92%) and 23 (82%) had CPR experience in the group with and without feedback respectively. 14...

  13. Evaluation experimentale et theorique du comportement a la flexion de nouveaux poteaux en materiaux composites

    Science.gov (United States)

    Metiche, Slimane

    The growing demand for poles for electricity and telecommunications networks has made it necessary to use innovative, environmentally friendly materials. Most existing utility poles in Canada and around the world are made from traditional materials such as wood, concrete or steel. Industry and researchers have several motivations to consider other solutions, among them the limited length of wooden poles and the vulnerability of concrete or steel poles to weathering. New composite-material poles are good candidates for this purpose; however, their structural behaviour is not known, and thorough theoretical and experimental studies are required before large-scale deployment. An intensive research program comprising several experimental, analytical and numerical projects is under way at the Université de Sherbrooke to evaluate the short- and long-term behaviour of these new fibre-reinforced polymer (FRP) poles. This thesis is part of that effort; our research aims to evaluate the flexural behaviour of new tapered tubular poles manufactured from composite materials by filament winding, through a theoretical study and a series of full-scale bending tests, in order to understand the structural behaviour of these poles, optimize their design and propose a design procedure for users. The fibre-reinforced polymer (FRP) poles studied in this thesis are made of an epoxy resin reinforced with E-glass fibres. Each pole type consists mainly of three zones in which the geometric properties (thickness, diameter) and the mechanical properties differ from one zone to another. The difference

  14. Nouveaux supraconducteurs à haute température critique à base de mercure

    Science.gov (United States)

    Michel, C.; Hervieu, M.; Martin, C.; Maignan, A.; Pelloquin, D.; Goutenoire, F.; Huvé, M.; Raveau, B.

    1994-11-01

    Structural and superconducting properties of new cation-substituted mercury-based oxides are described. They are mainly characterized by [Hg_{1-x}M_xO_δ]_∞ monolayers; however, a compound with a doubled [Hg_{1-x}M_xO_δ]_∞ layer is described for the first time. Critical temperatures vary over a large range, 0 ≤ T_c ≤ 130 K, but they are lower than those of the related pure-mercury oxides. The influence of annealing in various atmospheres upon T_c is discussed.

  15. Seeding magnetic fields for laser-driven flux compression in high-energy-density plasmas.

    Science.gov (United States)

    Gotchev, O V; Knauer, J P; Chang, P Y; Jang, N W; Shoup, M J; Meyerhofer, D D; Betti, R

    2009-04-01

    A compact, self-contained magnetic-seed-field generator (5 to 16 T) is the enabling technology for a novel flux-compression scheme in laser-driven targets. A magnetized target is directly irradiated by a kilojoule or megajoule laser to compress the preseeded magnetic field to thousands of teslas. A fast (300 ns), 80 kA current pulse delivered by a portable pulsed-power system is discharged into a low-mass coil that surrounds the laser target. A >15 T target field has been demonstrated. Compressing such a seed field can suppress heat-conduction losses from the hot spot of a compressed target, which can lead to the ignition of massive shells imploded with low velocity - a way of reaching higher gains than is possible with conventional ICF.
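
    As a rough consistency check, not taken from the record itself, conservation of magnetic flux through the imploding column (B·πr² ≈ const) ties the quoted seed and compressed fields to the required radial convergence:

      \[ \frac{B_f}{B_0} \approx \left(\frac{r_0}{r_f}\right)^{2} \quad\Longrightarrow\quad \frac{r_0}{r_f} \approx \sqrt{\frac{B_f}{B_0}} \approx \sqrt{\frac{3000\ \mathrm{T}}{15\ \mathrm{T}}} \approx 14, \]

    so compressing a 15 T seed field to "thousands of teslas" corresponds to a radial convergence of order ten, comparable to typical ICF implosion convergence ratios.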

  16. Magnetized Target Fusion At General Fusion: An Overview

    Science.gov (United States)

    Laberge, Michel; O'Shea, Peter; Donaldson, Mike; Delage, Michael; Fusion Team, General

    2017-10-01

    Magnetized Target Fusion (MTF) involves compressing an initial magnetically confined plasma on a timescale faster than the thermal confinement time of the plasma. If near adiabatic compression is achieved, volumetric compression of 350X or more of a 500 eV target plasma would achieve a final plasma temperature exceeding 10 keV. Interesting fusion gains could be achieved provided the compressed plasma has sufficient density and dwell time. General Fusion (GF) is developing a compression system using pneumatic pistons to collapse a cavity formed in liquid metal containing a magnetized plasma target. Low cost driver, straightforward heat extraction, good tritium breeding ratio and excellent neutron protection could lead to a practical power plant. GF (65 employees) has an active plasma R&D program including both full scale and reduced scale plasma experiments and simulation of both. Although pneumatic driven compression of full scale plasmas is the end goal, present compression studies use reduced scale plasmas and chemically accelerated aluminum liners. We will review results from our plasma target development, motivate and review the results of dynamic compression field tests and briefly describe the work to date on the pneumatic driver front.
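
    The quoted figures can be cross-checked against the ideal adiabatic scaling for a monatomic plasma, T V^(γ-1) = const with γ = 5/3; this idealization neglects all losses, so it is indicative only:

      \[ T_f \;=\; T_0\left(\frac{V_0}{V_f}\right)^{\gamma-1} \;\approx\; 500\ \mathrm{eV} \times 350^{2/3} \;\approx\; 25\ \mathrm{keV}, \]

    which is comfortably above the 10 keV cited in the abstract and leaves margin for non-adiabatic losses during the ~350X volumetric compression.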

  17. DNABIT Compress - Genome compression algorithm.

    Science.gov (United States)

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-22

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences based on a novel scheme of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for larger genomes. Significantly better compression results show that "DNABIT Compress" is the best among the remaining compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves on the running time of all previous DNA compression programs. Assigning binary bits (a unique bit code) to fragments of a DNA sequence (exact repeats, reverse repeats) is also a unique concept introduced in this algorithm for the first time in DNA compression. The proposed new algorithm achieves a compression ratio as low as 1.58 bits/base, whereas the existing best methods could not achieve a ratio below 1.72 bits/base.
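
    To make the bit-assignment idea concrete, the following is a minimal sketch of fixed two-bit packing of the four DNA bases. It illustrates bit-level packing only; it is not the variable bit-code scheme of DNABIT Compress itself, and the function names and the handling of the final partial byte are assumptions made here.

      # Minimal sketch: pack a DNA string into bytes at 2 bits per base (4 bases per byte).
      # Illustration of bit-level DNA packing only -- not the DNABIT Compress scheme.
      CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
      BASE = {v: k for k, v in CODE.items()}

      def pack(seq: str) -> bytes:
          out, buf, nbits = bytearray(), 0, 0
          for b in seq.upper():
              buf = (buf << 2) | CODE[b]       # append 2 bits for this base
              nbits += 2
              if nbits == 8:
                  out.append(buf)
                  buf, nbits = 0, 0
          if nbits:                            # left-justify and store the final partial byte
              out.append(buf << (8 - nbits))
          return bytes(out)

      def unpack(data: bytes, n_bases: int) -> str:
          bases = []
          for byte in data:
              for shift in (6, 4, 2, 0):       # four 2-bit fields per byte
                  bases.append(BASE[(byte >> shift) & 0b11])
          return "".join(bases[:n_bases])

      seq = "ACGTACGTTGCA"
      packed = pack(seq)
      assert unpack(packed, len(seq)) == seq
      print(len(seq), "bases ->", len(packed), "bytes")   # 12 bases -> 3 bytes

    Fixed packing already reaches 2 bits/base; the 1.58 bits/base claimed in the record comes from additionally exploiting repeats, which this sketch does not attempt.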

  18. Joint synthetic aperture radar plus ground moving target indicator from single-channel radar using compressive sensing

    Science.gov (United States)

    Thompson, Douglas; Hallquist, Aaron; Anderson, Hyrum

    2017-10-17

    The various embodiments presented herein relate to utilizing an operational single-channel radar to collect and process synthetic aperture radar (SAR) and ground moving target indicator (GMTI) imagery from a same set of radar returns. In an embodiment, data is collected by randomly staggering a slow-time pulse repetition interval (PRI) over a SAR aperture such that a number of transmitted pulses in the SAR aperture is preserved with respect to standard SAR, but many of the pulses are spaced very closely enabling movers (e.g., targets) to be resolved, wherein a relative velocity of the movers places them outside of the SAR ground patch. The various embodiments of image reconstruction can be based on compressed sensing inversion from undersampled data, which can be solved efficiently using such techniques as Bregman iteration. The various embodiments enable high-quality SAR reconstruction, and high-quality GMTI reconstruction from the same set of radar returns.

  19. Variations of target volume definition and daily target volume localization in stereotactic body radiotherapy for early-stage non–small cell lung cancer patients under abdominal compression

    Energy Technology Data Exchange (ETDEWEB)

    Han, Chunhui, E-mail: chan@coh.org; Sampath, Sagus; Schultheisss, Timothy E.; Wong, Jeffrey Y.C.

    2017-07-01

    We aimed to compare gross tumor volumes (GTV) in 3-dimensional computed tomography (3DCT) simulation and daily cone beam CT (CBCT) with the internal target volume (ITV) in 4-dimensional CT (4DCT) simulation in stereotactic body radiotherapy (SBRT) treatment of patients with early-stage non–small cell lung cancer (NSCLC) under abdominal compression. We retrospectively selected 10 patients with NSCLC who received image-guided SBRT treatments under abdominal compression with daily CBCT imaging. GTVs were contoured as visible gross tumor on the planning 3DCT and daily CBCT, and ITVs were contoured using maximum intensity projection (MIP) images of the planning 4DCT. Daily CBCTs were registered with 3DCT and MIP images by matching of bony landmarks in the thoracic region to evaluate interfractional GTV position variations. Relative to MIP-based ITVs, the average 3DCT-based GTV volume was 66.3 ± 17.1% (range: 37.5% to 92.0%) (p < 0.01 in paired t-test), and the average CBCT-based GTV volume was 90.0 ± 6.7% (daily range: 75.7% to 107.1%) (p = 0.02). Based on bony anatomy matching, the center-of-mass coordinates for CBCT-based GTVs had maximum absolute shift of 2.4 mm (left-right), 7.0 mm (anterior-posterior [AP]), and 5.2 mm (superior-inferior [SI]) relative to the MIP-based ITV. CBCT-based GTVs had average overlapping ratio of 81.3 ± 11.2% (range: 45.1% to 98.9%) with the MIP-based ITV, and 57.7 ± 13.7% (range: 35.1% to 83.2%) with the 3DCT-based GTV. Even with abdominal compression, both 3DCT simulations and daily CBCT scans significantly underestimated the full range of tumor motion. In daily image-guided patient setup corrections, automatic bony anatomy-based image registration could lead to target misalignment. Soft tissue-based image registration should be performed for accurate treatment delivery.

  20. Approaching maximal performance of longitudinal beam compression in induction accelerator drivers

    International Nuclear Information System (INIS)

    Mark, J.W.K.; Ho, D.D.M.; Brandon, S.T.; Chang, C.L.; Drobot, A.T.; Faltens, A.; Lee, E.P.; Krafft, G.A.

    1986-01-01

    Longitudinal beam compression occurs before final focus and fusion-chamber beam transport, and it is a key process determining the initial conditions for the final-focus hardware. Determining the limits of maximal performance of key accelerator components is an essential element of the effort to reduce driver costs. Studies directed toward defining the limits of final beam compression are given, including considerations such as the maximum available compression, the effects of longitudinal dispersion and beam emittance, and the combination of pulse shaping with beam compression to reduce the total number of beam manipulators. Several possible techniques are illustrated for utilizing the beam compression process to provide the pulse shapes required by a number of targets. Without such pulse-shaping capabilities, an additional factor of roughly two in beam energy would be required by the targets.

  1. Target parameter estimation for spatial and temporal formulations in MIMO radars using compressive sensing

    KAUST Repository

    Ali, Hussain; Ahmed, Sajid; Al-Naffouri, Tareq Y.; Sharawi, Mohammad S.; Alouini, Mohamed-Slim

    2017-01-01

    Conventional algorithms used for parameter estimation in colocated multiple-input-multiple-output (MIMO) radars require the inversion of the covariance matrix of the received spatial samples. In these algorithms, the number of received snapshots should be at least equal to the size of the covariance matrix. For large MIMO antenna arrays, the inversion of the covariance matrix becomes computationally very expensive. Compressive sensing (CS) algorithms, which do not require the inversion of the complete covariance matrix, can be used for parameter estimation with a smaller number of received snapshots. In this work, it is shown that the spatial formulation is best suited for large MIMO arrays when CS algorithms are used. A temporal formulation is proposed which fits the CS framework, especially for small MIMO arrays. A recently proposed low-complexity CS algorithm named support agnostic Bayesian matching pursuit (SABMP) is used to estimate target parameters for both spatial and temporal formulations for an unknown number of targets. The simulation results show the advantage of the SABMP algorithm in utilizing a low number of snapshots and achieving better parameter estimation for both small and large numbers of antenna elements. Moreover, it is shown by simulations that SABMP is more effective than other existing algorithms at high signal-to-noise ratio.
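
    As a rough illustration of the compressive-sensing formulation described above (not the SABMP algorithm of the record), the sketch below estimates target angles from a single snapshot of a small colocated array by orthogonal matching pursuit over a grid of steering vectors; the array size, angle grid, target amplitudes, and noise level are assumptions made here.

      # Sketch: grid-based sparse recovery of target angles from one snapshot of a
      # 10-element half-wavelength-spaced uniform linear array, via OMP.
      # Illustrative only -- not the SABMP algorithm referenced in the record.
      import numpy as np

      def steering(n_ant, angles_rad):
          k = np.arange(n_ant)[:, None]                     # element index
          return np.exp(1j * np.pi * k * np.sin(angles_rad)[None, :])

      def omp(A, y, n_targets):
          residual, support, x_s = y.copy(), [], None
          for _ in range(n_targets):
              support.append(int(np.argmax(np.abs(A.conj().T @ residual))))
              As = A[:, support]
              x_s, *_ = np.linalg.lstsq(As, y, rcond=None)  # least squares on current support
              residual = y - As @ x_s
          return support, x_s

      rng = np.random.default_rng(0)
      grid = np.deg2rad(np.arange(-60, 61, 1.0))            # 1-degree angle grid
      A = steering(10, grid)
      true_idx = [40, 75]                                   # targets at -20 and +15 degrees
      y = A[:, true_idx] @ np.array([1.0, 0.8]) \
          + 0.05 * (rng.standard_normal(10) + 1j * rng.standard_normal(10))
      support, amps = omp(A, y, n_targets=2)
      print("estimated angles (deg):", sorted(np.rad2deg(grid[support]).round(1)))

    With many more grid points than array elements, the recovery relies on the sparsity of the target scene rather than on inverting a full spatial covariance matrix, which is the point made in the abstract.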

  2. Target parameter estimation for spatial and temporal formulations in MIMO radars using compressive sensing

    KAUST Repository

    Ali, Hussain

    2017-01-09

    Conventional algorithms used for parameter estimation in colocated multiple-input-multiple-output (MIMO) radars require the inversion of the covariance matrix of the received spatial samples. In these algorithms, the number of received snapshots should be at least equal to the size of the covariance matrix. For large MIMO antenna arrays, the inversion of the covariance matrix becomes computationally very expensive. Compressive sensing (CS) algorithms, which do not require the inversion of the complete covariance matrix, can be used for parameter estimation with a smaller number of received snapshots. In this work, it is shown that the spatial formulation is best suited for large MIMO arrays when CS algorithms are used. A temporal formulation is proposed which fits the CS framework, especially for small MIMO arrays. A recently proposed low-complexity CS algorithm named support agnostic Bayesian matching pursuit (SABMP) is used to estimate target parameters for both spatial and temporal formulations for an unknown number of targets. The simulation results show the advantage of the SABMP algorithm in utilizing a low number of snapshots and achieving better parameter estimation for both small and large numbers of antenna elements. Moreover, it is shown by simulations that SABMP is more effective than other existing algorithms at high signal-to-noise ratio.

  3. DNABIT Compress – Genome compression algorithm

    Science.gov (United States)

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-01

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, “DNABIT Compress”, for DNA sequences based on a novel scheme of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for larger genomes. Significantly better compression results show that “DNABIT Compress” is the best among the remaining compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves on the running time of all previous DNA compression programs. Assigning binary bits (a unique bit code) to fragments of a DNA sequence (exact repeats, reverse repeats) is also a unique concept introduced in this algorithm for the first time in DNA compression. The proposed new algorithm achieves a compression ratio as low as 1.58 bits/base, whereas the existing best methods could not achieve a ratio below 1.72 bits/base. PMID:21383923

  4. Estimates of post-acceleration longitudinal bunch compression

    International Nuclear Information System (INIS)

    Judd, D.L.

    1977-01-01

    A simple analytic method is developed, based on physical approximations, for treating transient implosive longitudinal compression of bunches of heavy ions in an accelerator system for ignition of inertial-confinement fusion pellet targets. Parametric dependences of attainable compressions and of beam path lengths and times during compression are indicated for ramped pulsed-gap lines, rf systems in storage and accumulator rings, and composite systems, including sections of free drift. It appears that for high-confidence pellets in a plant producing 1000 MW of electric power the needed pulse lengths cannot be obtained with rings alone unless an unreasonably large number of them are used, independent of choice of rf harmonic number. In contrast, pulsed-gap lines alone can meet this need. The effects of an initial inward compressive drift and of longitudinal emittance are included

  5. A Coded Aperture Compressive Imaging Array and Its Visual Detection and Tracking Algorithms for Surveillance Systems

    Directory of Open Access Journals (Sweden)

    Hanxiao Wu

    2012-10-01

    In this paper, we propose an application of a compressive imaging system to the problem of wide-area video surveillance. A parallel coded-aperture compressive imaging system is proposed to reduce the required resolution of the coded mask and to facilitate storage of the projection matrix. Random Gaussian, Toeplitz and binary phase-coded masks are utilized to obtain the compressive sensing images. The corresponding moving-target detection and tracking algorithms, operating directly on the compressively sampled images, are developed. A Gaussian mixture model is applied in the compressive image space to model the background image for foreground detection. Each moving target in the compressive sampling domain is sparsely represented over a compressive feature dictionary spanned by target templates and noise templates, and an l1 optimization algorithm is used to solve for the sparse template coefficients. Experimental results demonstrate that a low-dimensional compressed imaging representation is sufficient to determine the spatial locations of moving targets. Compared with the random Gaussian and Toeplitz phase masks, motion detection using a random binary phase mask yields better detection results, whereas the random Gaussian and Toeplitz phase masks achieve higher-resolution reconstructed images. Our tracking algorithm achieves a real-time speed that is up to 10 times faster than that of the l1 tracker without any optimization.
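
    As a rough illustration of the sparse-representation step described above (the record's tracker, dictionary construction, and solver are not reproduced), the sketch below codes a candidate measurement over a dictionary of target and noise templates with ISTA, a standard solver for l1-regularized least squares; the template sizes, regularization weight, and data are assumptions made here.

      # Sketch: sparse coding of a candidate vector over target + noise templates via ISTA,
      # i.e. minimize 0.5*||D x - y||^2 + lam*||x||_1. Illustrative only.
      import numpy as np

      def ista(D, y, lam=0.05, n_iter=300):
          L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant of the gradient
          x = np.zeros(D.shape[1])
          for _ in range(n_iter):
              z = x - D.T @ (D @ x - y) / L        # gradient step
              x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
          return x

      rng = np.random.default_rng(1)
      target_templates = rng.standard_normal((64, 8))   # 8 vectorized target patches (toy data)
      noise_templates = np.eye(64)                      # trivial templates absorbing clutter
      D = np.hstack([target_templates, noise_templates])
      D /= np.linalg.norm(D, axis=0)                    # unit-norm dictionary columns

      y = D[:, 2] + 0.02 * rng.standard_normal(64)      # candidate resembling target template 2
      x = ista(D, y)
      print("coefficient mass on target vs. noise templates:",
            round(float(np.abs(x[:8]).sum()), 3), round(float(np.abs(x[8:]).sum()), 3))

    A candidate whose coefficient mass concentrates on the target templates rather than the noise templates is scored as the tracked object; that is the gist of the l1 tracking formulation the abstract compares against.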

  6. Effects of dynamic range compression on spatial selective auditory attention in normal-hearing listeners.

    Science.gov (United States)

    Schwartz, Andrew H; Shinn-Cunningham, Barbara G

    2013-04-01

    Many hearing aids introduce compressive gain to accommodate the reduced dynamic range that often accompanies hearing loss. However, natural sounds produce complicated temporal dynamics in hearing aid compression, as gain is driven by whichever source dominates at a given moment. Moreover, independent compression at the two ears can introduce fluctuations in interaural level differences (ILDs) important for spatial perception. While independent compression can interfere with spatial perception of sound, it does not always interfere with localization accuracy or speech identification. Here, normal-hearing listeners reported a target message played simultaneously with two spatially separated masker messages. We measured the amount of spatial separation required between the target and maskers for subjects to perform at threshold in this task. Fast, syllabic compression that was independent at the two ears increased the required spatial separation, but linking the compressors to provide identical gain to both ears (preserving ILDs) restored much of the deficit caused by fast, independent compression. Effects were less clear for slower compression. Percent-correct performance was lower with independent compression, but only for small spatial separations. These results may help explain differences in previous reports of the effect of compression on spatial perception of sound.
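
    To make the independent-versus-linked distinction concrete, the sketch below applies a simple static broadband compressor to a two-channel signal either independently per ear or with one shared gain driven by the louder channel, which preserves the interaural level difference (ILD); the threshold, ratio, and test signals are illustrative assumptions, not the hearing-aid processing used in the study.

      # Sketch: static two-channel dynamic range compression, independent vs. linked gains.
      # Linked compression applies one common gain (from the louder ear), preserving ILDs.
      import numpy as np

      def compressor_gain_db(level_db, threshold_db=-30.0, ratio=3.0):
          """Static compressor gain in dB: above threshold, output rises 1/ratio as fast."""
          over = max(level_db - threshold_db, 0.0)
          return -over * (1.0 - 1.0 / ratio)

      def rms_db(x):
          return 20.0 * np.log10(np.sqrt(np.mean(x ** 2)) + 1e-12)

      t = np.arange(0, 0.05, 1.0 / 16000.0)
      left = 0.5 * np.sin(2 * np.pi * 500 * t)       # louder at the left ear
      right = 0.25 * np.sin(2 * np.pi * 500 * t)     # about 6 dB quieter at the right ear
      lvl_l, lvl_r = rms_db(left), rms_db(right)

      # Independent compression: each ear gets its own gain, so the ILD shrinks.
      ild_indep = (lvl_l + compressor_gain_db(lvl_l)) - (lvl_r + compressor_gain_db(lvl_r))
      # Linked compression: both ears get the gain of the louder ear, so the ILD is preserved.
      g_link = compressor_gain_db(max(lvl_l, lvl_r))
      ild_linked = (lvl_l + g_link) - (lvl_r + g_link)

      print("input ILD (dB):", round(lvl_l - lvl_r, 2))
      print("independent:", round(ild_indep, 2), " linked:", round(ild_linked, 2))

    With these toy numbers the 6 dB input ILD shrinks to about 2 dB under independent 3:1 compression but is left intact when the gains are linked, mirroring the manipulation whose perceptual effect the study measures.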

  7. The mathematical theory of signal processing and compression-designs

    Science.gov (United States)

    Feria, Erlan H.

    2006-05-01

    The mathematical theory of signal processing, named processor coding, will be shown to arise inherently as the computational-time dual of Shannon's mathematical theory of communication, which is also known as source coding. Source coding is concerned with compressing the memory space of a signal source, while processor coding deals with compressing the computational time of a signal processor. Their combination is named compression-designs, referred to as Conde for short. A compelling and pedagogically appealing diagram will be discussed, highlighting Conde's remarkably successful application to real-world knowledge-aided (KA) airborne moving target indicator (AMTI) radar.

  8. Theoretical models for describing longitudinal bunch compression in the neutralized drift compression experiment

    Directory of Open Access Journals (Sweden)

    Adam B. Sefkow

    2006-09-01

    Heavy ion drivers for warm dense matter and heavy ion fusion applications use intense charge bunches which must undergo transverse and longitudinal compression in order to meet the requisite high current densities and short pulse durations desired at the target. The neutralized drift compression experiment (NDCX) at the Lawrence Berkeley National Laboratory is used to study the longitudinal neutralized drift compression of a space-charge-dominated ion beam, which occurs due to an imposed longitudinal velocity tilt and subsequent neutralization of the beam's space charge by background plasma. Reduced theoretical models have been used in order to describe the realistic propagation of an intense charge bunch through the NDCX device. A warm-fluid model is presented as a tractable computational tool for investigating the nonideal effects associated with the experimental acceleration-gap geometry and voltage waveform of the induction module, which acts as a means to pulse shape both the velocity and line density profiles. Self-similar drift compression solutions can be realized in order to transversely focus the entire charge bunch to the same focal plane in upcoming simultaneous transverse and longitudinal focusing experiments. A kinetic formalism based on the Vlasov equation has been employed in order to show that the peaks in the experimental current profiles are a result of the fact that only the central portion of the beam contributes effectively to the main compressed pulse. Significant portions of the charge bunch reside in the nonlinearly compressing part of the ion beam because of deviations between the experimental and ideal velocity tilts. Those regions form a pedestal of current around the central peak, thereby decreasing the amount of achievable longitudinal compression and increasing the pulse durations achieved at the focal plane. A hybrid fluid-Vlasov model which retains the advantages of both the fluid and kinetic approaches has been

  9. Etudes optiques de nouveaux materiaux laser: Des orthosilicates dopes a l'ytterbium: Le yttrium (lutetium,scandium) pentoxide de silicium

    Science.gov (United States)

    Denoyer, Aurelie

    The discovery and development of new solid-state laser materials attract much interest in the scientific community. Lasers in the micron wavelength range in particular lead to many applications: telecommunications, medicine, the military domain, metal cutting (high-power lasers), and nonlinear optics (frequency doubling, optical bistability). The most commonly used laser in this family today is Nd:YAG, but higher-performance replacements are continually being sought. Yb3+-based lasers have many advantages over Nd3+ lasers owing to their simple electronic structure and their slower degradation. Among the crystalline hosts that can accommodate ytterbium, the orthosilicates Yb:Y2SiO5, Yb:Lu2SiO5 and Yb:Sc2SiO5 are very well placed, because of their good thermal conductivity and the large crystal-field splitting required for quasi-three-level lasers. Moreover, the fine and systematic study of the microscopic properties of new materials is always very interesting from the standpoint of fundamental research; it is in this way that new models are devised (for example for the crystal field) or that unusual new properties are discovered, leading to new applications. Other ytterbium-doped materials are known for their electron-phonon coupling, magnetic coupling, cooperative emission or optical bistability, but these properties have never been demonstrated in Yb:Y2SiO5, Yb:Lu2SiO5 and Yb:Sc2SiO5. The aim of this thesis is therefore the study of the optical properties and microscopic interactions in Yb:Y2SiO5, Yb:Lu2SiO5 and Yb:Sc2SiO5. We mainly use IR absorption and Raman spectroscopy to determine the crystal-field excitations and the vibrational modes in the material

  10. Drift Compression and Final Focus Options for Heavy Ion Fusion

    International Nuclear Information System (INIS)

    Hong Qin; Davidson, Ronald C.; Barnard, John J.; Lee, Edward P.

    2005-01-01

    A drift compression and final focus lattice for heavy ion beams should focus the entire beam pulse onto the same focal spot on the target. We show that this requirement implies that the drift compression design needs to satisfy a self-similar symmetry condition. For un-neutralized beams, Lie symmetry group analysis is applied to the warm-fluid model to systematically derive the self-similar drift compression solutions. For neutralized beams, the 1-D Vlasov equation is solved explicitly, and families of self-similar drift compression solutions are constructed. To compensate for the deviation from the self-similar symmetry condition due to the transverse emittance, four time-dependent magnets are introduced upstream of the drift compression section so that the entire beam pulse can be focused onto the same focal spot.

  11. Polarimetric and Indoor Imaging Fusion Based on Compressive Sensing

    Science.gov (United States)

    2013-04-01


  12. Effects of errors in velocity tilt on maximum longitudinal compression during neutralized drift compression of intense beam pulses: I. general description

    Energy Technology Data Exchange (ETDEWEB)

    Kaganovich, Igor D.; Massidda, Scottt; Startsev, Edward A.; Davidson, Ronald C.; Vay, Jean-Luc; Friedman, Alex

    2012-06-21

    Neutralized drift compression offers an effective means for particle beam pulse compression and current amplification. In neutralized drift compression, a linear longitudinal velocity tilt (head-to-tail gradient) is applied to the non-relativistic beam pulse, so that the beam pulse compresses as it drifts in the focusing section. The beam current can increase by more than a factor of 100 in the longitudinal direction. We have performed an analytical study of how errors in the velocity tilt acquired by the beam in the induction bunching module limit the maximum longitudinal compression. It is found that the compression ratio is determined by the relative errors in the velocity tilt. That is, one-percent errors may limit the compression to a factor of one hundred. However, a part of the beam pulse where the errors are small may compress to much higher values, which are determined by the initial thermal spread of the beam pulse. It is also shown that sharp jumps in the compressed current density profile can be produced due to overlaying of different parts of the pulse near the focal plane. Examples of slowly varying and rapidly varying errors compared to the beam pulse duration are studied. For beam velocity errors given by a cubic function, the compression ratio can be described analytically. In this limit, a significant portion of the beam pulse is located in the broad wings of the pulse and is poorly compressed. The central part of the compressed pulse is determined by the thermal spread. The scaling law for maximum compression ratio is derived. In addition to a smooth variation in the velocity tilt, fast-changing errors during the pulse may appear in the induction bunching module if the voltage pulse is formed by several pulsed elements. Different parts of the pulse compress nearly simultaneously at the target and the compressed profile may have many peaks. The maximum compression is a function of both thermal spread and the velocity errors. The effects of the
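
    A back-of-the-envelope reading of the scaling stated above (the symbols are chosen here, not taken from the paper): a beam slice whose tilt velocity is off by δv arrives at the nominal focal plane displaced by roughly (δv/Δv) of the initial bunch length L0, so

      \[ L_{\min} \;\sim\; \frac{\delta v}{\Delta v}\, L_{0}, \qquad C_{\max} \;\sim\; \frac{L_{0}}{L_{\min}} \;\sim\; \frac{\Delta v}{\delta v}, \]

    and a one-percent relative tilt error caps the compression ratio near one hundred, as stated in the abstract, with the thermal velocity spread setting the limit for the well-corrected core of the pulse.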

  13. DNABIT Compress – Genome compression algorithm

    OpenAIRE

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-01

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, “DNABIT Compress”, for DNA sequences based on a novel scheme of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our ...

  14. L’Art pour l’Art or L’Art pour Tous? The Tension between Artistic Autonomy and Social Engagement in Les Temps Nouveaux, 1896-1903

    Directory of Open Access Journals (Sweden)

    Laura Prins

    2016-12-01

    Between 1896 and 1903, Jean Grave, editor of the anarchist journal Les Temps Nouveaux, published an artistic album of original prints, with the collaboration of (avant-garde) artists and illustrators. While anarchist theorists, including Grave, summoned artists to create social art, which had to be didactic and accessible to the working classes, artists wished to emphasize their autonomous position instead. Even though Grave requested 'absolutely artistic' prints in the case of this album, artists had difficulties with creating something for him, trying to combine their social engagement with their artistic autonomy. The artistic album appears to have become a compromise of the debate between the anarchist theorists and artists with anarchist sympathies.

  15. Simulations and experiments of intense ion beam compression in space and time

    International Nuclear Information System (INIS)

    Yu, S.S.; Seidl, P.A.; Roy, P.K.; Lidia, S.M.; Coleman, J.E.; Kaganovich, I.D.; Gilson, E.P.; Welch, Dale Robert; Sefkow, Adam B.; Davidson, R.C.

    2008-01-01

    The Heavy Ion Fusion Science Virtual National Laboratory has achieved 60-fold longitudinal pulse compression of ion beams on the Neutralized Drift Compression Experiment (NDCX) (P. K. Roy et al., Phys. Rev. Lett. 95, 234801 (2005)). To focus a space-charge-dominated charge bunch to sufficiently high intensities for ion-beam-heated warm dense matter and inertial fusion energy studies, simultaneous transverse and longitudinal compression to a coincident focal plane is required. Optimizing the compression under the appropriate constraints can deliver higher intensity per unit length of accelerator to the target, thereby facilitating the creation of more compact and cost-effective ion beam drivers. The experiments utilized a drift region filled with high-density plasma in order to neutralize the space charge and current of an ∼300 keV K+ beam and have separately achieved transverse and longitudinal focusing to a small radius; these results support the development of a higher-energy (>2 MeV) ion beam user facility for warm dense matter and inertial fusion energy-relevant target physics experiments.

  16. De nouveaux acteurs de la régulation du travail dans la gestion par projets

    Directory of Open Access Journals (Sweden)

    Marie-Josée Legault

    2009-03-01

    In the context of a major transformation of the global economy, the inadequacy of labour-relations theories dating back to Fordism is increasingly apparent, notably the systemic analysis model (Dunlop, 1958) and the strategic model (Kochan, Katz, & McKersie, 1986); in both models, only three actors share the stage: unions, employers and the State, and their interactions unfold essentially within the framework of the nation-state. Yet new modes of regulation are emerging which, in turn, illustrate the need to integrate new actors and new boundaries into theoretical models of the industrial relations system. A survey of 88 information technology professionals in business-to-business technology services firms, drawn from a population composed equally of men and women, revealed regulatory practices that challenge not only the traditional terms of Fordist regulation, which until recently dominated the professional bureaucracies employing IT professionals, but also the traditional boundaries of the industrial relations system in two respects: that of the three principal actors (employers, workers and the State), through the addition of the client and of work teams; and that of the separation between the contexts of the industrial relations system and the system itself.

  17. Commissioning Results of the Upgraded Neutralized Drift Compression Experiment

    International Nuclear Information System (INIS)

    Lidia, S.M.; Roy, P.K.; Seidl, P.A.; Waldron, W.L.; Gilson, E.P.

    2009-01-01

    Recent changes to the NDCX beamline offer the promise of higher-charge compressed bunches (>15 nC), with correspondingly large intensities (>500 kW/cm^2), delivered to the target plane for ion-beam-driven warm dense matter experiments. We report on commissioning results of the upgraded NDCX beamline that includes a new induction bunching module with approximately twice the volt-seconds and greater tuning flexibility, combined with a longer neutralized drift compression channel.

  18. New Light Alloys (Les Nouveaux Alliages Legers)

    Science.gov (United States)

    1990-09-01

    ... composites produced with TIMET's 15-3-3-3 (15-3) alloy, a metastable beta alloy that is easily rolled and available in strip form ... due in part to the fabrication process - the materials having to be available either as strip or as pre-alloyed powders - and in part ...
    [Fig. 9: Compressive and tensile yield stress versus time (seconds) in a δ-Al2O3-reinforced Al-Cu alloy.]

  19. Recognizable or Not: Towards Image Semantic Quality Assessment for Compression

    Science.gov (United States)

    Liu, Dong; Wang, Dandan; Li, Houqiang

    2017-12-01

    Traditionally, image compression was optimized for the pixel-wise fidelity or the perceptual quality of the compressed images given a bit-rate budget. But recently, compressed images are more and more utilized for automatic semantic analysis tasks such as recognition and retrieval. For these tasks, we argue that the optimization target of compression is no longer perceptual quality, but the utility of the compressed images in the given automatic semantic analysis task. Accordingly, we propose to evaluate the quality of the compressed images neither at pixel level nor at perceptual level, but at semantic level. In this paper, we make preliminary efforts towards image semantic quality assessment (ISQA), focusing on the task of optical character recognition (OCR) from compressed images. We propose a full-reference ISQA measure by comparing the features extracted from text regions of original and compressed images. We then propose to integrate the ISQA measure into an image compression scheme. Experimental results show that our proposed ISQA measure is much better than PSNR and SSIM in evaluating the semantic quality of compressed images; accordingly, adopting our ISQA measure to optimize compression for OCR leads to significant bit-rate saving compared to using PSNR or SSIM. Moreover, we perform subjective test about text recognition from compressed images, and observe that our ISQA measure has high consistency with subjective recognizability. Our work explores new dimensions in image quality assessment, and demonstrates promising direction to achieve higher compression ratio for specific semantic analysis tasks.

  20. Word aligned bitmap compression method, data structure, and apparatus

    Science.gov (United States)

    Wu, Kesheng; Shoshani, Arie; Otoo, Ekow

    2004-12-14

    The Word-Aligned Hybrid (WAH) bitmap compression method and data structure is a relatively efficient method for searching and performing logical, counting, and pattern location operations upon large datasets. The technique is comprised of a data structure and methods that are optimized for computational efficiency by using the WAH compression method, which typically takes advantage of the target computing system's native word length. WAH is particularly apropos to infrequently varying databases, including those found in the on-line analytical processing (OLAP) industry, due to the increased computational efficiency of the WAH compressed bitmap index. Some commercial database products already include some version of a bitmap index, which could possibly be replaced by the WAH bitmap compression techniques for potentially increased operation speed, as well as increased efficiencies in constructing compressed bitmaps. Combined together, this technique may be particularly useful for real-time business intelligence. Additional WAH applications may include scientific modeling, such as climate and combustion simulations, to minimize search time for analysis and subsequent data visualization.
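
    The sketch below shows word-aligned run-length coding of a bitmap in the spirit of the method described above; it is not the exact WAH word layout, and the 31-bit grouping and flag conventions used here are illustrative assumptions.

      # Sketch: word-aligned run-length compression of a bitmap, WAH-style.
      # Bits are grouped into 31-bit chunks; runs of all-0 or all-1 chunks collapse into one
      # "fill" word (MSB set, next bit = fill value, low 30 bits = run length in chunks),
      # while other chunks are stored verbatim as "literal" words (MSB clear).
      CHUNK = 31
      ALL_ONES = (1 << CHUNK) - 1

      def compress(bits):
          bits = bits + [0] * (-len(bits) % CHUNK)          # pad to a whole number of chunks
          chunks = [int("".join(map(str, bits[i:i + CHUNK])), 2)
                    for i in range(0, len(bits), CHUNK)]
          words, i = [], 0
          while i < len(chunks):
              if chunks[i] in (0, ALL_ONES):                # start of a fill run
                  fill, run = (chunks[i] != 0), 1
                  while i + run < len(chunks) and chunks[i + run] == chunks[i]:
                      run += 1
                  words.append((1 << 31) | (int(fill) << 30) | run)
                  i += run
              else:                                         # literal word
                  words.append(chunks[i])
                  i += 1
          return words

      bitmap = [0] * 310 + [1, 0, 1, 1] + [0] * 27 + [1] * 62
      words = compress(bitmap)
      print(len(bitmap), "bits ->", len(words), "words:", [hex(w) for w in words])

    Because fills and literals are whole machine words, bitwise AND/OR of two compressed bitmaps can be carried out word by word without unpacking, which is what makes the representation attractive for the counting and pattern-location queries mentioned above.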

  1. Results of subscale MTF compression experiments

    Science.gov (United States)

    Howard, Stephen; Mossman, A.; Donaldson, M.; Fusion Team, General

    2016-10-01

    In magnetized target fusion (MTF) a magnetized plasma torus is compressed on a timescale shorter than its own energy confinement time, thereby heating it to fusion conditions. Understanding plasma behavior and scaling laws is needed to advance toward a reactor-scale demonstration. General Fusion is conducting a sequence of subscale experiments on compact toroid (CT) plasmas compressed by the chemically driven implosion of an aluminum liner, providing data on several key questions. CT plasmas are formed by a coaxial Marshall gun, with magnetic fields supported by internal plasma currents and eddy currents in the wall. Configurations that have been compressed so far include decaying and sustained spheromaks and an ST that is formed into a pre-existing toroidal field. Diagnostics measure B, ne, visible and x-ray emission, Ti and Te. Before compression the CT has an energy of 10 kJ magnetic and 1 kJ thermal, with Te of 100-200 eV and ne of 5x10^20 m^-3. The plasma was stable during a compression factor R0/R > 3 on the best shots. A reactor-scale demonstration would require 10x higher initial B and ne but similar Te. Liner improvements have minimized ripple, tearing and ejection of micro-debris. Plasma-facing surfaces have included plasma-sprayed tungsten, bare Cu and Al, and gettering with Ti and Li.

  2. Neutralized drift compression experiments with a high-intensity ion beam

    International Nuclear Information System (INIS)

    Roy, P.K.; Yu, S.S.; Waldron, W.L.; Anders, A.; Baca, D.; Barnard, J.J.; Bieniosek, F.M.; Coleman, J.; Davidson, R.C.; Efthimion, P.C.; Eylon, S.; Friedman, A.; Gilson, E.P.; Greenway, W.G.; Henestroza, E.; Kaganovich, I.; Leitner, M.; Logan, B.G.; Sefkow, A.B.; Seidl, P.A.; Sharp, W.M.; Thoma, C.; Welch, D.R.

    2007-01-01

    To create high-energy density matter and fusion conditions, high-power drivers, such as lasers, ion beams, and X-ray drivers, may be employed to heat targets with short pulses compared to hydro-motion. Both high-energy density physics and ion-driven inertial fusion require the simultaneous transverse and longitudinal compression of an ion beam to achieve high intensities. We have previously studied the effects of plasma neutralization for transverse beam compression. The scaled experiment, the Neutralized Transport Experiment (NTX), demonstrated that an initially un-neutralized beam can be compressed transversely to ∼1 mm radius when charge neutralization by background plasma electrons is provided. Here, we report longitudinal compression of a velocity-tailored, intense, neutralized 25 mA K + beam at 300 keV. The compression takes place in a 1-2 m drift section filled with plasma to provide space-charge neutralization. An induction cell produces a head-to-tail velocity ramp that longitudinally compresses the neutralized beam, enhances the beam peak current by a factor of 50 and produces a pulse duration of about 3 ns. The physics of longitudinal compression, experimental procedure, and the results of the compression experiments are presented

  3. Configuring and Characterizing X-Rays for Laser-Driven Compression Experiments at the Dynamic Compression Sector

    Science.gov (United States)

    Li, Y.; Capatina, D.; D'Amico, K.; Eng, P.; Hawreliak, J.; Graber, T.; Rickerson, D.; Klug, J.; Rigg, P. A.; Gupta, Y. M.

    2017-06-01

    Coupling laser-driven compression experiments to the x-ray beam at the Dynamic Compression Sector (DCS) at the Advanced Photon Source (APS) of Argonne National Laboratory requires state-of-the-art x-ray focusing, pulse isolation, and diagnostics capabilities. The 100 J UV pulsed laser system can be fired once every 20 minutes, so precise alignment and focusing of the x-rays on each new sample must be fast and reproducible. Multiple Kirkpatrick-Baez (KB) mirrors are used to achieve a focal spot size as small as 50 μm at the target, while the strategic placement of scintillating screens, cameras, and detectors allows for fast diagnosis of the beam shape, intensity, and alignment of the sample to the x-ray beam. In addition, a series of x-ray choppers and shutters is used to ensure that the sample is exposed to only a single x-ray pulse (∼80 ps) during the dynamic compression event; this requires highly precise synchronization. Details of the technical requirements, layout, and performance of these instruments will be presented. Work supported by DOE/NNSA.

  4. Experimental scheme and restoration algorithm of block compression sensing

    Science.gov (United States)

    Zhang, Linxia; Zhou, Qun; Ke, Jun

    2018-01-01

    Compressed Sensing (CS) can use the sparseness of a target to obtain its image with much less data than that required by the Nyquist sampling theorem. In this paper, we study the hardware implementation of a block compressive sensing system and its reconstruction algorithms. Different block sizes are used. Two algorithms, orthogonal matching pursuit (OMP) and total variation (TV) minimization, are used to obtain good reconstructions. The influence of block size on reconstruction is also discussed.

  5. Effect of feedback on delaying deterioration in quality of compressions during 2 minutes of continuous chest compressions: a randomized manikin study investigating performance with and without feedback

    Directory of Open Access Journals (Sweden)

    Lyngeraa Tobias

    2012-02-01

    Full Text Available Abstract Background Good quality basic life support (BLS) improves outcome following cardiac arrest. As BLS performance deteriorates over time, we performed a parallel-group superiority study to investigate the effect of feedback on the quality of chest compressions, with the hypothesis that feedback delays deterioration of compression quality. Methods Participants attending a national one-day conference on cardiac arrest and CPR in Denmark were randomized to perform single-rescuer BLS with (n = 26) or without (n = 28) verbal and visual feedback on a manikin using a ZOLL AED plus. Data were analyzed using Rescuenet Code Review. Blinding of participants was not possible, but allocation concealment was performed. The primary outcome was the proportion of delivered compressions within target depth, compared over a 2-minute period within the groups and between the groups. The secondary outcome was the proportion of delivered compressions within target rate, compared over a 2-minute period within the groups and between the groups. Performance variables for 30-second intervals were analyzed and compared. Results 24 (92%) and 23 (82%) had CPR experience in the group with and without feedback, respectively. 14 (54%) were CPR instructors in the feedback group and 18 (64%) in the group without feedback. Data from 26 and 28 participants were analyzed, respectively. Although median values for the proportion of delivered compressions within target depth were higher in the feedback group (0-30 s: 54.0%; 30-60 s: 88.0%; 60-90 s: 72.6%; 90-120 s: 87.0%), no significant difference was found when compared to the group without feedback (0-30 s: 19.6%; 30-60 s: 33.1%; 60-90 s: 44.5%; 90-120 s: 32.7%), and no significant deteriorations over time were found within the groups. In the feedback group a significant improvement was found in the proportion of delivered compressions below target depth when the subsequent intervals were compared to the first 30 seconds (0-30 s: 3.9%; 30-60 s: 0.0%; 60-90 s: 0

  6. Spinning targets for laser fusion

    International Nuclear Information System (INIS)

    Baldwin, D.E.; Ryutov, D.D.

    1995-09-01

    Several techniques for spinning the ICF targets up prior to or in the course of their compression are suggested. Interference of the rotational shear flow with Rayleigh-Taylor instability is briefly discussed and possible consequences for the target performance are pointed out

  7. Experimental study of hot electrons propagation and energy deposition in solid or laser-shock compressed targets: applications to fast igniter

    International Nuclear Information System (INIS)

    Pisani, F.

    2000-02-01

    In the fast igniter scheme, a recent approach proposed for inertial confinement fusion, the idea is to dissociate the fuel ignition phase from its compression. The ignition phase would then be achieved by means of an external energy source: a fast electron beam generated by the interaction with an ultra-intense laser. The main goal of this work is to study the mechanisms of hot electron energy transfer to the compressed fuel. We intend in particular to study the role of the electric and collisional effects involved in hot electron propagation in a medium with properties similar to the compressed fuel. We carried out two experiments, one at the Vulcan laser facility (England) and the second at the new LULI 100 TW laser (France). During the first experiment, we obtained the first results on hot electron propagation in a dense and hot plasma. The innovative aspect of this work was in particular the use of the laser-shock technique to generate high pressures, allowing a strongly correlated and degenerate plasma to be created. The role of the electric and magnetic effects due to the space charge associated with the fast electron beam was investigated in the second experiment. Here we studied the propagation in materials with different electrical characteristics: an insulator and a conductor. The analysis of the results showed that only by simultaneously taking into account the two propagation mechanisms (collisions and electric effects) can the energy deposition be treated correctly. We also showed the importance of taking into account the modifications induced by the electron beam crossing the target, especially the induced heating. (author)

  8. Comparative data compression techniques and multi-compression results

    International Nuclear Information System (INIS)

    Hasan, M R; Ibrahimy, M I; Motakabber, S M A; Ferdaus, M M; Khan, M N H

    2013-01-01

    Data compression is very necessary in business data processing because of the cost savings it offers and the large volume of data manipulated in many business applications. It is a method or system for transmitting a digital image (i.e., an array of pixels) from a digital data source to a digital data receiver. The smaller the data size, the higher the transmission speed and the greater the time saved. In data communication, we always want to transmit data efficiently and free of noise. This paper provides some compression techniques for lossless text-type data compression, together with comparative results for multiple and single compression, which will help in finding better compression output and in developing compression algorithms.
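
    As a concrete illustration of single versus multiple compression of text, the following sketch compares three standard lossless codecs from the Python standard library on arbitrary sample data; it is not the experiment reported in the paper.

```python
# Illustrative comparison of lossless text compression and of single versus
# repeated ("multi") compression. The input text is arbitrary sample data.
import bz2, lzma, zlib

text = ("business data processing generates large volumes of repetitive "
        "records; lossless compression reduces storage and transmission cost. ") * 200
data = text.encode("utf-8")

codecs = {"zlib": zlib.compress, "bz2": bz2.compress, "lzma": lzma.compress}
for name, compress in codecs.items():
    once = compress(data)
    twice = compress(once)          # compressing already-compressed data
    print(f"{name:5s} original={len(data)} single={len(once)} double={len(twice)}")
# Typically the second pass gains little or even expands the data, because the
# first pass has already removed most of the statistical redundancy.
```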

  9. High-speed and high-ratio referential genome compression.

    Science.gov (United States)

    Liu, Yuansheng; Peng, Hui; Wong, Limsoon; Li, Jinyan

    2017-11-01

    The rapidly increasing number of genomes generated by high-throughput sequencing platforms and assembly algorithms is accompanied by problems in data storage, compression and communication. Traditional compression algorithms are unable to meet the demand for high compression ratios due to the intrinsically challenging features of DNA sequences such as small alphabet size, frequent repeats and palindromes. Reference-based lossless compression, by which only the differences between two similar genomes are stored, is a promising approach with a high compression ratio. We present a high-performance referential genome compression algorithm named HiRGC. It is based on a 2-bit encoding scheme and an advanced greedy-matching search on a hash table. We compare the performance of HiRGC with four state-of-the-art compression methods on a benchmark dataset of eight human genomes. HiRGC compresses the roughly 21 gigabytes of each of the seven target genomes into 96-260 megabytes, achieving compression ratios of 82 to 217 times. This performance is at least 1.9 times better than the best competing algorithm on its best case. Our compression speed is also at least 2.9 times faster. HiRGC is stable and robust in dealing with different reference genomes. In contrast, the competing methods' performance varies widely with different reference genomes. More experiments on 100 human genomes from the 1000 Genomes Project and on genomes of several other species again demonstrate that HiRGC's performance is consistently excellent. The C++ and Java source code of our algorithm is freely available for academic and non-commercial use. It can be downloaded from https://github.com/yuansliu/HiRGC. jinyan.li@uts.edu.au. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
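
    The 2-bit encoding that referential compressors such as HiRGC build on can be sketched as follows. This snippet only illustrates base packing; it ignores the hash-table greedy matching against the reference, handling of non-ACGT symbols, and the later entropy-coding stages.

```python
# Minimal sketch of 2-bit nucleotide packing (illustrative only; not HiRGC's code).
CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
BASE = "ACGT"

def pack(seq):
    """Pack an ACGT string into bytes, 4 bases per byte."""
    out = bytearray()
    acc, nbits = 0, 0
    for ch in seq:
        acc = (acc << 2) | CODE[ch]
        nbits += 2
        if nbits == 8:
            out.append(acc)
            acc, nbits = 0, 0
    if nbits:
        out.append(acc << (8 - nbits))   # pad the last byte
    return bytes(out), len(seq)

def unpack(packed, length):
    """Recover the original base string from the packed bytes."""
    seq = []
    for byte in packed:
        for shift in (6, 4, 2, 0):
            if len(seq) == length:
                break
            seq.append(BASE[(byte >> shift) & 0b11])
    return "".join(seq)

packed, n = pack("ACGTACGTTGCA")
assert unpack(packed, n) == "ACGTACGTTGCA"
print(f"{n} bases stored in {len(packed)} bytes")
```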

  10. Word aligned bitmap compression method, data structure, and apparatus

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Kesheng; Shoshani, Arie; Otoo, Ekow

    2004-12-14

    The Word-Aligned Hybrid (WAH) bitmap compression method and data structure is a relatively efficient method for searching and performing logical, counting, and pattern location operations upon large datasets. The technique is comprised of a data structure and methods that are optimized for computational efficiency by using the WAH compression method, which typically takes advantage of the target computing system's native word length. WAH is particularly apropos to infrequently varying databases, including those found in the on-line analytical processing (OLAP) industry, due to the increased computational efficiency of the WAH compressed bitmap index. Some commercial database products already include some version of a bitmap index, which could possibly be replaced by the WAH bitmap compression techniques for potentially increased operation speed, as well as increased efficiencies in constructing compressed bitmaps. Combined together, this technique may be particularly useful for real-time business intelligence. Additional WAH applications may include scientific modeling, such as climate and combustion simulations, to minimize search time for analysis and subsequent data visualization.
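
    The core idea of word-aligned bitmap compression — grouping the bitmap into word-sized chunks and run-length encoding chunks that are all zeros or all ones — can be sketched as below. This is a simplified illustration, not the WAH format itself: real WAH packs literal and fill words into 32-bit machine words and performs logical operations directly on the compressed form.

```python
# Simplified word-aligned bitmap compression in the spirit of WAH (illustrative).
GROUP = 31  # payload bits per 32-bit word

def wah_encode(bits):
    """Encode a list of 0/1 values into (kind, value, count) words."""
    words = []
    for i in range(0, len(bits), GROUP):
        group = bits[i:i + GROUP]
        if len(group) == GROUP and len(set(group)) == 1:
            fill_bit = group[0]
            if words and words[-1][0] == "fill" and words[-1][1] == fill_bit:
                words[-1] = ("fill", fill_bit, words[-1][2] + 1)  # extend the run
            else:
                words.append(("fill", fill_bit, 1))
        else:
            value = int("".join(map(str, group)), 2)
            words.append(("literal", value, len(group)))
    return words

bitmap = [0] * 310 + [1, 0, 1] + [1] * 62
encoded = wah_encode(bitmap)
print(f"{len(bitmap)} bits -> {len(encoded)} words:", encoded)
```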

  11. Design and analysis of compressed sensing radar detectors

    NARCIS (Netherlands)

    Anitori, L.; Maleki, A.; Otten, M.P.G.; Baraniuk, R.G.; Hoogeboom, P.

    2013-01-01

    We consider the problem of target detection from a set of Compressed Sensing (CS) radar measurements corrupted by additive white Gaussian noise. We propose two novel architectures and compare their performance by means of Receiver Operating Characteristic (ROC) curves. Using asymptotic arguments and

  12. A Fast Faraday Cup for the Neutralized Drift Compression Experiment

    CERN Document Server

    Sefkow, Adam; Coleman, Joshua E; Davidson, Ronald C; Efthimion, Philip; Eylon, Shmuel; Gilson, Erik P; Greenway, Wayne; Henestroza, Enrique; Kwan, Joe W; Roy, Prabir K; Vanecek, David; Waldron, William; Welch, Dale; Yu, Simon

    2005-01-01

    Heavy ion drivers for high energy density physics applications and inertial fusion energy use space-charge-dominated beams which require longitudinal bunch compression in order to achieve sufficiently high beam intensity at the target. The Neutralized Drift Compression Experiment-1A (NDCX-1A) at Lawrence Berkeley National Laboratory (LBNL) is used to determine the effective limits of neutralized drift compression. NDCX-1A investigates the physics of longitudinal drift compression of an intense ion beam, achieved by imposing an initial velocity tilt on the drifting beam and neutralizing the beam's space-charge with background plasma. Accurately measuring the longitudinal compression of the beam pulse with high resolution is critical for NDCX-1A, and an understanding of the accessible parameter space is modeled using the LSP particle-in-cell (PIC) code. The design and preliminary experimental results for an ion beam probe which measures the total beam current at the focal plane as a function of time are summari...

  13. SBS pulse compression for excimer inertial fusion energy drivers

    International Nuclear Information System (INIS)

    Linford, G.J.

    1994-01-01

    A key requirement for the development of commercial fusion power plants utilizing inertial confinement fusion (ICF) as a source of thermonuclear power is the availability of reliable, efficient laser drivers. These laser drivers must be capable of delivering UV optical pulses having energies of the order of 5 MJ to cryogenic deuterium-tritium (D/T) ICF targets. The current requirements for laser ICF target irradiation specify the laser wavelength, λ ca. 250 nm, pulse duration, τ_p ca. 6 ns, bandwidth, Δλ ca. 0.1 nm, polarization state, etc. Excimer lasers are a leading candidate to fill these demanding ICF driver requirements. However, since excimer lasers are not storage lasers, the excimer laser pulse duration, τ_pp, is determined primarily by the length of the excitation pulse delivered to the excimer laser amplifier. Pulsed power associated with efficiently generating excimer laser pulses has a time constant, τ_pp, which falls in the range 30 τ_p < τ_pp < 100 τ_p. As a consequence, pulse compression is needed to convert the long excimer laser pulses to pulses of duration τ_p. These main ICF driver pulses require, in addition, longer, lower power precursor pulses delivered to the ICF target before the arrival of the main pulse. Although both linear and non-linear optical (NLO) pulse compression techniques have been developed, computer simulations have shown that a "chirped," self-seeded, stimulated Brillouin scattering (SBS) pulse compressor cell using SF6 at a density, ρ ca. 1 amagat, can efficiently compress krypton fluoride (KrF) laser pulses at λ = 248 nm. In order to avoid the generation of output pulses substantially shorter than τ_p, the optical power in the chirped input SBS "seed" beams was ramped. Compressed pulse conversion efficiencies of up to 68% were calculated for output pulse durations of τ_p ca. 6 ns.

  14. Efficient Lossy Compression for Compressive Sensing Acquisition of Images in Compressive Sensing Imaging Systems

    Directory of Open Access Journals (Sweden)

    Xiangwei Li

    2014-12-01

    Full Text Available Compressive Sensing Imaging (CSI) is a new framework for image acquisition, which enables the simultaneous acquisition and compression of a scene. Since the characteristics of Compressive Sensing (CS) acquisition are very different from traditional image acquisition, general image compression solutions may not work well. In this paper, we propose an efficient lossy compression solution for CS acquisition of images by considering the distinctive features of CSI. First, we design an adaptive compressive sensing acquisition method for images according to the sampling rate, which can achieve better CS reconstruction quality for the acquired image. Second, we develop a universal quantization for the CS measurements obtained from CS acquisition without any a priori information about the captured image. Finally, we apply these two methods in the CSI system for efficient lossy compression of CS acquisition. Simulation results demonstrate that the proposed solution improves the rate-distortion performance by 0.4~2 dB compared with the current state of the art, while maintaining low computational complexity.
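
    A plain uniform quantizer of the kind a universal CS-measurement codec might start from is sketched below; the step size is an arbitrary assumption, and the paper's scheme additionally entropy-codes the resulting indices.

```python
# Illustrative uniform (mid-tread) quantization of compressive measurements.
# The step size is an assumed parameter, not derived from the paper.
import numpy as np

def quantize(y, step):
    """Map measurements to integer indices."""
    return np.round(y / step).astype(np.int32)

def dequantize(indices, step):
    """Reconstruct measurement values from indices."""
    return indices.astype(np.float64) * step

rng = np.random.default_rng(1)
y = rng.standard_normal(1000)          # stand-in for CS measurements
step = 0.05
idx = quantize(y, step)
y_hat = dequantize(idx, step)
print("max quantization error:", np.max(np.abs(y - y_hat)))  # bounded by step/2
```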

  15. 30 CFR 75.1730 - Compressed air; general; compressed air systems.

    Science.gov (United States)

    2010-07-01

    Compressed air; general; compressed air systems. (a) All pressure vessels shall be constructed, installed... Safety and Health district office. (b) Compressors and compressed-air receivers shall be equipped with...

  16. Object characterization simulator for estimating compressed breast during mammography

    International Nuclear Information System (INIS)

    Pinheiro, Luciana de J.S.; Rio, Margarita Chevalier del

    2011-01-01

    Measuring the thickness of the compressed breast during a mammography examination is necessary in order to calculate the glandular dose, as part of a risk/benefit analysis, given that the target organ in these procedures is highly sensitive to ionising radiation; at the same time, mammography is a test of utmost importance in diagnosis. In principle, the thickness of the compressed breast can be calculated from measurements of the focus-object distance using projections of radio-opaque objects fixed to the compression tray. The facilities of the Laboratory of Applied Radioprotection to Mammography - LARAM, together with breast simulators of well defined thickness, were used to set up the techniques for measuring the thickness of the compressed breast. The results showed that it is possible to determine this thickness through calculations and simulators, and that the method is likely to be adequate for dosimetry. (author)

  17. Plans for longitudinal and transverse neutralized beam compression experiments, and initial results from solenoid transport experiments

    International Nuclear Information System (INIS)

    Seidl, P.A.; Armijo, J.; Baca, D.; Bieniosek, F.M.; Coleman, J.; Davidson, R.C.; Efthimion, P.C.; Friedman, A.; Gilson, E.P.; Grote, D.; Haber, I.; Henestroza, E.; Kaganovich, I.; Leitner, M.; Logan, B.G.; Molvik, A.W.; Rose, D.V.; Roy, P.K.; Sefkow, A.B.; Sharp, W.M.; Vay, J.L.; Waldron, W.L.; Welch, D.R.; Yu, S.S.

    2007-01-01

    This paper presents plans for neutralized drift compression experiments, precursors to future target heating experiments. The target-physics objective is to study warm dense matter (WDM) using short-duration (∼1 ns) ion beams that enter the targets at energies just above that at which dE/dx is maximal. High intensity on target is to be achieved by a combination of longitudinal compression and transverse focusing. This work will build upon recent success in longitudinal compression, where the ion beam was compressed lengthwise by a factor of more than 50 by first applying a linear head-to-tail velocity tilt to the beam, and then allowing the beam to drift through a dense, neutralizing background plasma. Studies on a novel pulse line ion accelerator were also carried out. It is planned to demonstrate simultaneous transverse focusing and longitudinal compression in a series of future experiments, thereby achieving conditions suitable for future WDM target experiments. Future experiments may use solenoids for transverse focusing of un-neutralized ion beams during acceleration. Recent results are reported in the transport of a high-perveance heavy ion beam in a solenoid transport channel. The principal objectives of this solenoid transport experiment are to match and transport a space-charge-dominated ion beam, and to study associated electron-cloud and gas effects that may limit the beam quality in a solenoid transport system. Ideally, the beam will establish a Brillouin-flow condition (rotation at one-half the cyclotron frequency). Other mechanisms that potentially degrade beam quality are being studied, such as focusing-field aberrations, beam halo, and separation of lattice focusing elements

  18. An investigation of the ability to produce a defined 'target pressure' using the PressCise compression bandage.

    Science.gov (United States)

    Wiklander, Kerstin; Andersson, Annette Erichsen; Källman, Ulrika

    2016-12-01

    Compression therapy is the cornerstone in the prevention and treatment of leg ulcers related to chronic venous insufficiency. The application of optimal high pressure is essential for a successful outcome, but the literature has reported difficulty applying the intended pressure, even among highly skilled nurses. The PressCise bandage has a novel design, with both longitudinal and horizontal reference points for correct application. In the current experimental study, the results for the general linear model, where the data set is treated optimally, showed that all 95% confidence intervals of the expected values for pressure were, at most, 5 mmHg from the target value of 50 mmHg, independent of the position on the leg and the state of activity. Moreover, even nurses with limited experience were consistently able to reach the targeted pressure goal. Future studies are needed to determine how well the bandage works on legs of different shapes, the optimal way of using the bandage (day only or both day and night) and whether the bandage should be combined with an outer bandage layer. In addition, special attention should be paid to subjective patient experiences in relation to the treatment as pain, discomfort and bulk are factors that can compromise patients' willingness to adhere to the treatment protocol and thereby prolong the healing process. © 2015 Medicalhelplines.com Inc and John Wiley & Sons Ltd.

  19. Fast Compressive Tracking.

    Science.gov (United States)

    Zhang, Kaihua; Zhang, Lei; Yang, Ming-Hsuan

    2014-10-01

    It is a challenging task to develop effective and efficient appearance models for robust object tracking due to factors such as pose variation, illumination change, occlusion, and motion blur. Existing online tracking algorithms often update models with samples from observations in recent frames. Although much success has been demonstrated, numerous issues remain to be addressed. First, while these adaptive appearance models are data-dependent, there does not exist a sufficient amount of data for online algorithms to learn from at the outset. Second, online tracking algorithms often encounter the drift problem. As a result of self-taught learning, misaligned samples are likely to be added and to degrade the appearance models. In this paper, we propose a simple yet effective and efficient tracking algorithm with an appearance model based on features extracted from a multiscale image feature space with a data-independent basis. The proposed appearance model employs non-adaptive random projections that preserve the structure of the image feature space of objects. A very sparse measurement matrix is constructed to efficiently extract the features for the appearance model. We compress sample images of the foreground target and the background using the same sparse measurement matrix. The tracking task is formulated as a binary classification via a naive Bayes classifier with online update in the compressed domain. A coarse-to-fine search strategy is adopted to further reduce the computational complexity in the detection procedure. The proposed compressive tracking algorithm runs in real time and performs favorably against state-of-the-art methods on challenging sequences in terms of efficiency, accuracy and robustness.
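
    The data-independent compressive feature extraction described above can be illustrated with a very sparse random measurement matrix applied identically to foreground and background patches, as in the sketch below (illustrative only; the full tracker also builds multiscale rectangle features and a naive Bayes classifier with online update).

```python
# Sketch of compressive feature extraction with a very sparse random projection
# matrix (scaled entries drawn from {-1, 0, +1}); the density is an assumed value.
import numpy as np

def sparse_measurement_matrix(n_features, n_dims, density=0.1, seed=0):
    rng = np.random.default_rng(seed)
    mask = rng.random((n_features, n_dims)) < density
    signs = rng.choice([-1.0, 1.0], size=(n_features, n_dims))
    return mask * signs / np.sqrt(density * n_dims)

def compress_patch(patch, R):
    """Project a flattened image patch into the low-dimensional feature space."""
    return R @ patch.ravel()

R = sparse_measurement_matrix(n_features=50, n_dims=32 * 32)
foreground = np.random.default_rng(1).random((32, 32))
background = np.random.default_rng(2).random((32, 32))
v_fg = compress_patch(foreground, R)
v_bg = compress_patch(background, R)
print("compressed feature vectors:", v_fg.shape, v_bg.shape)  # (50,) each
```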

  20. Stabilization and target delivery of Nattokinase using compression coating.

    Science.gov (United States)

    Law, D; Zhang, Z

    2007-05-01

    The aim of this work is to develop a new formulation in order to stabilize the nutraceutical enzyme Nattokinase (NKCP) in powder form and to control its release rate as it passes through the human gastrointestinal tract. NKCP powders were first compacted into a tablet, which was then coated with a mixture of the enteric material Eudragit L100-55 (EL100-55) and hydroxypropylcellulose (HPC) by direct compression. The activity of the enzyme was determined using an amidolytic assay, and its release rates in artificial gastric juice and an intestinal fluid were quantified using a bicinchoninic acid assay. Results have shown that the activity of NKCP was pressure independent and that, in in vitro experiments, the coated tablets protected NKCP from being denatured in the gastric juice and enabled its controlled release to the intestine.

  1. MPD model for radar echo signal of hypersonic targets

    Directory of Open Access Journals (Sweden)

    Xu Xuefei

    2014-08-01

    Full Text Available The stop-and-go (SAG) model is typically used for the echo signal received by a radar using linear frequency modulation pulse compression. In this study, the authors demonstrate that this model is not applicable to hypersonic targets. Instead of the SAG model, they present a more realistic echo signal model for hypersonic targets, the moving-in-pulse-duration (MPD) model. Following that, they evaluate the performance of pulse compression under the SAG and MPD models by theoretical analysis and simulations. They found that the pulse compression gain increases by 3 dB when the MPD model is used instead of the SAG model in typical cases.
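
    Linear-frequency-modulation pulse compression itself — the operation whose gain the SAG and MPD models predict differently — can be sketched with a matched filter as below. The chirp parameters are arbitrary illustrative values, and the sketch does not include the intra-pulse target motion that the MPD model accounts for.

```python
# Generic LFM (chirp) pulse compression by matched filtering.
# Bandwidth, pulse width and sampling rate are arbitrary illustrative values.
import numpy as np

fs = 100e6          # sampling rate (Hz)
T = 10e-6           # pulse width (s)
B = 20e6            # chirp bandwidth (Hz)
t = np.arange(0, T, 1 / fs)
chirp = np.exp(1j * np.pi * (B / T) * t**2)       # baseband LFM pulse

# Echo from a stationary point target with delay and additive noise.
delay_samples = 400
echo = np.concatenate([np.zeros(delay_samples, complex), chirp,
                       np.zeros(200, complex)])
echo += 0.1 * (np.random.default_rng(0).standard_normal(echo.size)
               + 1j * np.random.default_rng(1).standard_normal(echo.size))

# Matched filter: correlate the echo with the transmitted pulse.
matched = np.conj(chirp[::-1])
compressed = np.convolve(echo, matched, mode="same")
peak = int(np.argmax(np.abs(compressed)))
print("peak at sample", peak, "(expected near", delay_samples + len(chirp) // 2, ")")
```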

  2. Quasi-isentropic Compression of Iron and Magnesium Oxide to 3 Mbar at the Omega Laser Facility

    Science.gov (United States)

    Wang, J.; Smith, R. F.; Coppari, F.; Eggert, J. H.; Boehly, T.; Collins, G.; Duffy, T. S.

    2011-12-01

    Developing a high-pressure, modest temperature ramp compression drive permits exploration of new regions of thermodynamic space, inaccessible through traditional methods of shock or static compression, and of particular relevance to material conditions found in planetary interiors both within and outside our solar system. Ramp compression is a developing technique that allows materials to be compressed along a quasi-isentropic path and provides the ability to study materials in the solid state to higher pressures than can be achieved with diamond anvil cell or shock wave methods. Iron and magnesium oxide are geologically important materials, each representative of one of the two major interior regions (core and mantle) of terrestrial planets. An experimental platform for ramp loading of iron (Fe) and magnesium oxide (MgO) has been established and tested in experiments at the Omega Laser Facility, University of Rochester. Omega is a 60-beam ultraviolet (352 nm) neodymium glass laser that is capable of delivering kilojoules of energy in ~10 ns pulses onto targets of a few mm in dimension. In the current experiments, we used a composite ramped laser pulse involving typically 15 beams with total energy of 2.6-3.3 kJ. The laser beams were used to launch spatially planar ramp compression waves into Fe and MgO targets. Each target had four steps that were approximately 5-7 μm thick. Detection of the ramp wave arrival and its velocity at the free surface of each step was made using a VISAR velocity interferometer. Through the use of Lagrangian analysis on the measured wave profiles, stress-density states in iron and magnesium oxide have been determined to pressures of 291 GPa and 260 GPa respectively. For Fe, the α-ɛ transition of iron is overdriven by an initial shock pulse of ~90.1 GPa followed by ramp compression to the peak pressure. The results will be compared with shock compression and diamond anvil cell data for both materials. We acknowledge the Omega staff at

  3. SeqCompress: an algorithm for biological sequence compression.

    Science.gov (United States)

    Sardaraz, Muhammad; Tahir, Muhammad; Ikram, Ataul Aziz; Bajwa, Hassan

    2014-10-01

    The growth of Next Generation Sequencing technologies presents significant research challenges, specifically to design bioinformatics tools that handle massive amounts of data efficiently. Biological sequence data storage cost has become a noticeable proportion of the total cost of data generation and analysis. In particular, the increase in DNA sequencing rate is significantly outstripping the rate of increase in disk storage capacity, and data volumes may go beyond the limits of storage capacity. It is essential to develop algorithms that handle large data sets via better memory management. This article presents a DNA sequence compression algorithm, SeqCompress, that copes with the space complexity of biological sequences. The algorithm is based on lossless data compression and uses a statistical model as well as arithmetic coding to compress DNA sequences. The proposed algorithm is compared with recent specialized compression tools for biological sequences. Experimental results show that the proposed algorithm has better compression gain than other existing algorithms. Copyright © 2014 Elsevier Inc. All rights reserved.

  4. Preparation and chemical crystallographic study of new hydrides and hydro-fluorides of ionic character; Preparation et etude cristallochimique de nouveaux hydrures et fluorohydrures a caractere ionique

    Energy Technology Data Exchange (ETDEWEB)

    Park, Hyung-Ho

    1988-07-22

    Within the context of a growing interest in the study of reversible hydrides with the perspective of their application in hydrogen storage, this research thesis more particularly addressed the case of ternary hydrides and fluorides, and of hydro-fluorides. The author reports the development of a method of preparation of alkaline hydrides, of alkaline-earth hydrides and of europium hydride, and then the elaboration of ternary hydrides. He addresses the preparation of caesium fluorides and of calcium or nickel fluorides, of europium fluorides, and of ternary fluorides. Then, he addresses the preparation of hydro-fluorides (caesium, calcium, europium fluorides, and caesium and nickel fluorides). The author presents the various experimental techniques: chemical analysis, radio-crystallographic analysis, volumetric mass density measurement, magnetic measurements, ionic conductivity measurements, Moessbauer spectroscopy, and nuclear magnetic resonance. He reports the crystallographic study of some ternary alkaline and alkaline-earth hydrides (KH-MgH2, RbH-CaH2, CsH-CaH2, RbH-MgH2 and CsH-MgH2) and of some hydro-fluorides (CsCaF2H, EuF2H, CsNiF2H). [Translated from French] In a first part, new ternary hydrides were prepared and characterized. The systems studied are AH-MH2 (A = K, Rb, Cs and M = Mg, Ca). In the AH-MgH2 systems, the structural evolution was discussed as a function of the iono-covalent character of the magnesium-hydrogen bond. In a second part, several new hydro-fluorides were identified. The effect of the substitution of hydrogen for fluorine in these phases was studied using NMR, Moessbauer spectroscopy, ionic conductivity and magnetic measurements.

  5. X-ray spectroscopy of laser imploded targets

    International Nuclear Information System (INIS)

    Yaakobi, B.; Skupsky, S.; McCrory, R.L.; Hooper, C.F.; Deckman, H.; Bourke, P.; Soures, J.M.

    1981-01-01

    X-ray spectroscopy provides a variety of means for studying the interaction of lasers with plasmas, in particular the interaction with imploding targets in inertial confinement fusion. A typical fusion target is composed of materials other than the thermonuclear fuel which play a variety of roles (tamping, shielding, thermal isolation, etc.). These structural elements emit characteristic X-ray lines and continua, and through their spectral and spatial distributions can yield very valuable information on the interaction and implosion dynamics. Examples are the study of heat conductivity, the mixing of different target layers, and the determination of temperature and density at the compressed target core. Results will be shown for electron densities N_e ≈ 10^24 cm^-3 and temperatures T ≈ 1 keV measured during compression of argon-filled targets with a six-beam laser of peak power 2 TW. (author)

  6. [Effects of real-time audiovisual feedback on secondary-school students' performance of chest compressions].

    Science.gov (United States)

    Abelairas-Gómez, Cristian; Rodríguez-Núñez, Antonio; Vilas-Pintos, Elisardo; Prieto Saborit, José Antonio; Barcala-Furelos, Roberto

    2015-06-01

    To describe the quality of chest compressions performed by secondary-school students trained with a real-time audiovisual feedback system. The learners were 167 students aged 12 to 15 years who had no prior experience with cardiopulmonary resuscitation (CPR). They received an hour of instruction in CPR theory and practice and then took a 2-minute test, performing hands-only CPR on a child manikin (Prestan Professional Child Manikin). Lights built into the manikin gave learners feedback about how many compressions they had achieved, and clicking sounds told them when compressions were deep enough. All the learners were able to maintain a steady enough rhythm of compressions and reached at least 80% of the targeted compression depth. Fewer correct compressions were done in the second minute than in the first (P=.016). Real-time audiovisual feedback helps schoolchildren aged 12 to 15 years to achieve quality chest compressions on a manikin.

  7. Approaches to radiotherapy in metastatic spinal cord compression.

    Science.gov (United States)

    Suppl, Morten Hiul

    2018-04-01

    Metastatic spinal cord compression is caused by the progression of metastatic lesions within the vicinity of the spinal cord. The consequences are very severe, with loss of neurological function and severe pain. The standard treatment is surgical intervention followed by radiotherapy, or radiotherapy alone. However, the majority of patients are treated with radiotherapy only, due to contraindications to surgery and technical inoperability. Stereotactic body radiotherapy is a technology to deliver a higher radiation dose to the radiotherapy target with the use of spatial coordinates. This modality has shown positive results in treating lesions in the brain and lungs. Hence, it could prove beneficial in metastatic spinal cord compression. We designed and planned a trial to investigate this method in patients with metastatic spinal cord compression. The method was usable, but the trial was stopped prematurely due to low accrual that made comparison with surgery impossible. Low accrual is a known problem for trials evaluating new approaches in radiotherapy. The radiotherapy target in metastatic spinal cord compression is defined by patient history, examination and imaging. Functional imaging could provide information to guide target definition with the sparing of normal tissue, e.g. the spinal cord and the hematopoietic tissue of the bone marrow. In future trials this may be used for dose escalation of spinal metastases. The trial showed that PET/MRI was feasible in this group of patients but did not change the radiotherapy target in the included patients. Neurological outcome is similar irrespective of course length, and therefore single-fraction radiotherapy is recommended for the majority of patients. In-field recurrence is a risk factor of both short and long fractionation schemes, and re-irradiation carries a potential risk of radiation-induced myelopathy. In a retrospective study of re-irradiation, we investigated the incidence of radiation-induced myelopathy. In our study

  8. Progress In Magnetized Target Fusion Driven by Plasma Liners

    Science.gov (United States)

    Thio, Francis Y. C.; Kirkpatrick, Ronald C.; Knapp, Charles E.; Cassibry, Jason; Eskridge, Richard; Lee, Michael; Smith, James; Martin, Adam; Wu, S. T.; Schmidt, George

    2001-01-01

    Magnetized target fusion (MTF) attempts to combine the favorable attributes of magnetic confinement fusion (MCF) for energy confinement with the attributes of inertial confinement fusion (ICF) for efficient compression heating and wall-free containment of the fusing plasma. It uses a material liner to compress and contain a magnetized plasma. For practical applications, standoff drivers to deliver the imploding momentum flux to the target plasma remotely are required. Spherically converging plasma jets have been proposed as standoff drivers for this purpose. The concept involves the dynamic formation of a spherical plasma liner by the merging of plasma jets, and the use of the liner so formed to compress a spheromak or a field reversed configuration (FRC).

  9. Implementation of a compressive sampling scheme for wireless sensors to achieve energy efficiency in a structural health monitoring system

    Science.gov (United States)

    O'Connor, Sean M.; Lynch, Jerome P.; Gilbert, Anna C.

    2013-04-01

    Wireless sensors have emerged to offer low-cost sensors with impressive functionality (e.g., data acquisition, computing, and communication) and modular installations. Such advantages enable higher nodal densities than tethered systems, resulting in increased spatial resolution of the monitoring system. However, high nodal density comes at a cost, as huge amounts of data are generated, weighing heavily on power sources, transmission bandwidth, and data management requirements, often making data compression necessary. The traditional compression paradigm consists of high-rate (>Nyquist) uniform sampling and storage of the entire target signal followed by some desired compression scheme prior to transmission. The recently proposed compressed sensing (CS) framework combines the acquisition and compression stages, thus removing the need for storage and processing of the full target signal prior to transmission. The effectiveness of the CS approach hinges on the presence of a sparse representation of the target signal in a known basis, similarly exploited by several traditional compressive sensing applications today (e.g., imaging, MRI). Field implementations of CS schemes in wireless SHM systems have been challenging due to the lack of commercially available sensing units capable of sampling methods (e.g., random) consistent with the compressed sensing framework, often moving evaluation of CS techniques to simulation and post-processing. The research presented here describes the implementation of a CS sampling scheme on the Narada wireless sensing node and the energy efficiencies observed in the deployed sensors. Of interest in this study is the compressibility of acceleration response signals collected from a multi-girder steel-concrete composite bridge. The study shows the benefit of CS in reducing data requirements while ensuring that data analysis on compressed data remains accurate.
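
    The compressibility that the CS approach relies on can be checked quickly in a transform basis, as in the sketch below, where a synthetic multi-tone record stands in for bridge acceleration data (illustrative only; the deployed system performs random sampling on the Narada node rather than this post-hoc analysis).

```python
# Check compressibility of a synthetic "acceleration" signal: most of its
# energy should be captured by a small number of Fourier coefficients.
import numpy as np

fs, n = 200.0, 2048                       # sampling rate (Hz), record length
t = np.arange(n) / fs
# Synthetic response: a few structural modes plus broadband noise.
signal = (1.0 * np.sin(2 * np.pi * 2.3 * t)
          + 0.6 * np.sin(2 * np.pi * 5.1 * t)
          + 0.3 * np.sin(2 * np.pi * 11.7 * t)
          + 0.05 * np.random.default_rng(0).standard_normal(n))

coeffs = np.fft.rfft(signal)
energy = np.abs(coeffs) ** 2
k = 20                                     # keep the 20 largest coefficients
kept = np.sort(energy)[::-1][:k].sum()
print(f"fraction of energy in {k} of {energy.size} coefficients: "
      f"{kept / energy.sum():.3f}")
```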

  10. Radiological Image Compression

    Science.gov (United States)

    Lo, Shih-Chung Benedict

    The movement toward digital images in radiology presents the problem of how to conveniently and economically store, retrieve, and transmit the volume of digital images. Basic research into image data compression is necessary in order to move from a film-based department to an efficient digital-based department. Digital data compression technology consists of two types of compression technique: error-free and irreversible. Error-free image compression is desired; however, present techniques can only achieve compression ratios of 1.5:1 to 3:1, depending upon the image characteristics. Irreversible image compression can achieve a much higher compression ratio; however, the image reconstructed from the compressed data shows some difference from the original image. This dissertation studies both error-free and irreversible image compression techniques. In particular, some modified error-free techniques have been tested, and the recommended strategies for various radiological images are discussed. A full-frame bit-allocation irreversible compression technique has been derived. A total of 76 images, which include CT head and body, and radiographs digitized to 2048 x 2048, 1024 x 1024, and 512 x 512, have been used to test this algorithm. The normalized mean-square-error (NMSE) on the difference image, defined as the difference between the original and the reconstructed image from a given compression ratio, is used as a global measurement of the quality of the reconstructed image. The NMSEs of a total of 380 reconstructed and 380 difference images are measured and the results tabulated. Three complex compression methods are also suggested to compress images with special characteristics. Finally, various parameters which would affect the quality of the reconstructed images are discussed. A proposed hardware compression module is given in the last chapter.
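
    The global quality measure used in the study, the normalized mean-square error between original and reconstructed images, can be computed as in the short sketch below; the normalization by the energy of the original image is an assumed convention, and the dissertation's exact definition may differ.

```python
# Normalized mean-square error (NMSE) of a reconstructed image, normalized
# here by the energy of the original image (assumed convention).
import numpy as np

def nmse(original, reconstructed):
    original = original.astype(np.float64)
    reconstructed = reconstructed.astype(np.float64)
    return np.sum((original - reconstructed) ** 2) / np.sum(original ** 2)

rng = np.random.default_rng(0)
image = rng.integers(0, 4096, size=(512, 512))          # 12-bit test image
degraded = image + rng.normal(0, 20, size=image.shape)  # stand-in for coding error
print(f"NMSE = {nmse(image, degraded):.6f}")
```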

  11. Neutron penumbral imaging of laser-fusion targets

    International Nuclear Information System (INIS)

    Lerche, R.A.; Ress, D.B.

    1988-01-01

    Using a new technique, penumbral coded-aperture imaging, the first neutron images of laser-driven, inertial-confinement fusion targets were obtained. With these images the deuterium-tritium burn region within a compressed target can be measured directly. 4 references, 11 figures

  12. Analysis of tractable distortion metrics for EEG compression applications

    International Nuclear Information System (INIS)

    Bazán-Prieto, Carlos; Blanco-Velasco, Manuel; Cruz-Roldán, Fernando; Cárdenas-Barrera, Julián

    2012-01-01

    Coding distortion in lossy electroencephalographic (EEG) signal compression methods is evaluated through tractable objective criteria. The percentage root-mean-square difference, which is a global and relative indicator of the quality held by reconstructed waveforms, is the most widely used criterion. However, this parameter does not ensure compliance with clinical standard guidelines that specify limits to allowable noise in EEG recordings. As a result, expert clinicians may have difficulties interpreting the resulting distortion of the EEG for a given value of this parameter. Conversely, the root-mean-square error is an alternative criterion that quantifies distortion in understandable units. In this paper, we demonstrate that the root-mean-square error is better suited to control and to assess the distortion introduced by compression methods. The experiments conducted in this paper show that the use of the root-mean-square error as target parameter in EEG compression allows both clinicians and scientists to infer whether coding error is clinically acceptable or not at no cost for the compression ratio. (paper)
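
    Both criteria are simple to compute; the sketch below gives one common definition of each (the PRD is sometimes defined with the signal mean removed, so the exact formula used in the paper may differ).

```python
# Percentage root-mean-square difference (PRD) and root-mean-square error
# (RMSE) between an original and a reconstructed EEG segment.
import numpy as np

def prd(x, x_rec):
    """PRD in percent (this variant does not subtract the signal mean)."""
    return 100.0 * np.sqrt(np.sum((x - x_rec) ** 2) / np.sum(x ** 2))

def rmse(x, x_rec):
    """RMSE in the same units as the signal (e.g., microvolts)."""
    return np.sqrt(np.mean((x - x_rec) ** 2))

rng = np.random.default_rng(0)
x = rng.standard_normal(2048) * 50.0          # synthetic EEG-like segment, in uV
x_rec = x + rng.standard_normal(2048) * 2.0   # stand-in for coding distortion
print(f"PRD = {prd(x, x_rec):.2f} %, RMSE = {rmse(x, x_rec):.2f} uV")
```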

  13. SBS pulse compression for excimer inertial fusion energy drivers

    Energy Technology Data Exchange (ETDEWEB)

    Linford, G.J. [TRW Space and Electronics Group, Redondo Beach, CA (United States). Space and Technology Div.

    1994-12-31

    A key requirement for the development of commercial fusion power plants utilizing inertial confinement fusion (ICF) as a source of thermonuclear power is the availability of reliable, efficient laser drivers. These laser drivers must be capable of delivering UV optical pulses having energies of the order of 5 MJ to cryogenic deuterium-tritium (D/T) ICF targets. The current requirements for laser ICF target irradiation specify the laser wavelength, λ ca. 250 nm, pulse duration, τ_p ca. 6 ns, bandwidth, Δλ ca. 0.1 nm, polarization state, etc. Excimer lasers are a leading candidate to fill these demanding ICF driver requirements. However, since excimer lasers are not storage lasers, the excimer laser pulse duration, τ_pp, is determined primarily by the length of the excitation pulse delivered to the excimer laser amplifier. Pulsed power associated with efficiently generating excimer laser pulses has a time constant, τ_pp, which falls in the range 30 τ_p < τ_pp < 100 τ_p. As a consequence, pulse compression is needed to convert the long excimer laser pulses to pulses of duration τ_p. These main ICF driver pulses require, in addition, longer, lower power precursor pulses delivered to the ICF target before the arrival of the main pulse. Although both linear and non-linear optical (NLO) pulse compression techniques have been developed, computer simulations have shown that a "chirped," self-seeded, stimulated Brillouin scattering (SBS) pulse compressor cell using SF6 at a density, ρ ca. 1 amagat, can efficiently compress krypton fluoride (KrF) laser pulses at λ = 248 nm. In order to avoid the generation of output pulses substantially shorter than τ_p, the optical power in the chirped input SBS "seed" beams was ramped. Compressed pulse conversion efficiencies of up to 68% were calculated for output pulse durations of τ_p ca. 6 ns.

  14. Cerebral magnetic resonance imaging of compressed air divers in diving accidents.

    Science.gov (United States)

    Gao, G K; Wu, D; Yang, Y; Yu, T; Xue, J; Wang, X; Jiang, Y P

    2009-01-01

    To investigate the characteristics of cerebral magnetic resonance imaging (MRI) of compressed air divers involved in diving accidents, we conducted an observational case series study. Brain MRI was examined and analysed in seven compressed air divers with cerebral arterial gas embolism (CAGE). The cerebral injuries showed several characteristics: (1) multiple lesions; (2) larger size; (3) the parietal and frontal lobes are most susceptible; (4) both cortical grey matter and subcortical white matter can be affected; (5) the cerebellum is also a target of air embolism. Brain MRI is a sensitive method for detecting cerebral lesions in compressed air divers involved in diving accidents. The MRI should be completed within 5 days of the diving accident.

  15. Image data compression in diagnostic imaging. International literature review and workflow recommendation

    International Nuclear Information System (INIS)

    Braunschweig, R.; Kaden, Ingmar; Schwarzer, J.; Sprengel, C.; Klose, K.

    2009-01-01

    Purpose: Today, healthcare policy is based on effectiveness. Diagnostic imaging has become a "pace-setter" due to amazing technical developments (e.g. multislice CT), extensive data volumes, and especially the well defined workflow-orientated scenarios on a local and (inter)national level. To make centralized networks efficient, image data compression has been regarded as the key to a simple and secure solution. In February 2008 specialized working groups of the DRG held a consensus conference at which they designed recommended data compression techniques and ratios. Materials and methods: The purpose of our paper is to provide an international review of the literature on compression technologies, different imaging procedures (e.g. DR, CT), and targets (abdomen, etc.), and to combine recommendations for compression ratios and techniques with different workflows. The studies were assigned to 4 different levels (0-3) according to the evidence; 51 studies were assigned to the highest level, 3. Results: We recommend a compression factor of 1:8 (1:5 for cranial scans). For workflow reasons data compression should be based on the modalities (CT, etc.). PACS-based compression is currently possible but fails to maximize workflow benefits. Only the modality-based scenarios achieve all benefits. (orig.)

  16. Image data compression in diagnostic imaging. International literature review and workflow recommendation

    Energy Technology Data Exchange (ETDEWEB)

    Braunschweig, R.; Kaden, Ingmar [Klinik fuer Bildgebende Diagnostik und Interventionsradiologie, BG-Kliniken Bergmannstrost Halle (Germany); Schwarzer, J.; Sprengel, C. [Dept. of Management Information System and Operations Research, Martin-Luther-Univ. Halle Wittenberg (Germany); Klose, K. [Medizinisches Zentrum fuer Radiologie, Philips-Univ. Marburg (Germany)

    2009-07-15

    Purpose: Today, healthcare policy is based on effectiveness. Diagnostic imaging has become a "pace-setter" due to amazing technical developments (e.g. multislice CT), extensive data volumes, and especially the well defined workflow-orientated scenarios on a local and (inter)national level. To make centralized networks efficient, image data compression has been regarded as the key to a simple and secure solution. In February 2008 specialized working groups of the DRG held a consensus conference at which they designed recommended data compression techniques and ratios. Materials and methods: The purpose of our paper is to provide an international review of the literature on compression technologies, different imaging procedures (e.g. DR, CT), and targets (abdomen, etc.), and to combine recommendations for compression ratios and techniques with different workflows. The studies were assigned to 4 different levels (0-3) according to the evidence; 51 studies were assigned to the highest level, 3. Results: We recommend a compression factor of 1:8 (1:5 for cranial scans). For workflow reasons data compression should be based on the modalities (CT, etc.). PACS-based compression is currently possible but fails to maximize workflow benefits. Only the modality-based scenarios achieve all benefits. (orig.)

  17. WSNs Microseismic Signal Subsection Compression Algorithm Based on Compressed Sensing

    Directory of Open Access Journals (Sweden)

    Zhouzhou Liu

    2015-01-01

    Full Text Available For wireless-network microseismic monitoring, and to address the problems of low compression ratio and high communication energy consumption, this paper proposes a segmented compression algorithm based on the characteristics of microseismic signals and on compressed sensing (CS) theory, applied in the transmission process. The algorithm segments the collected data according to the number of nonzero elements; by reducing the number of combinations of nonzero elements within each segment it improves the accuracy of signal reconstruction, while exploiting the properties of compressive sensing theory to achieve a high compression ratio for the signal. Experimental results show that, with the quantum chaos immune clone refactoring (Q-CSDR) algorithm used for reconstruction, when the signal sparsity is higher than 40 and the compression ratio is above 0.4, the mean square error is less than 0.01, prolonging the network life by a factor of 2.

  18. Laser driven compression and neutron generation with spherical shell targets

    International Nuclear Information System (INIS)

    Campbell, P.M.; Hammerling, P.; Johnson, R.R.; Kubis, J.J.; Mayer, F.J.

    1977-01-01

    Laser-driven implosion experiments using DT-gas-filled spherical glass-shell targets are described. Neutron yields up to 5 x 10^7 are produced from implosions of small (∼55 μm diameter) targets spherically illuminated with an on-target laser power of 0.4 terawatt. Nuclear reaction product diagnostics, X-ray pinhole photographs, fast-ion spectra and X-ray measurements are used in conjunction with hydrodynamic computer code simulations to investigate the implosion phenomenology as well as the target corona evolution. Simulations using completely classical effects are not able to describe the full range of experimental data. Electron or radiation preheating may be required to explain some implosion measurements. (auth.)

  19. Method for mounting laser fusion targets for irradiation

    Science.gov (United States)

    Fries, R. Jay; Farnum, Eugene H.; McCall, Gene H.

    1977-07-26

    Methods for preparing laser fusion targets of the ball-and-disk type are disclosed. Such targets are suitable for irradiation with one or two laser beams to produce the requisite uniform compression of the fuel material.

  20. Acoustically Driven Magnetized Target Fusion At General Fusion: An Overview

    Science.gov (United States)

    O'Shea, Peter; Laberge, M.; Donaldson, M.; Delage, M.; the Fusion Team, General

    2016-10-01

    Magnetized Target Fusion (MTF) involves compressing an initial magnetically confined plasma of about 10^23 m^-3, 100 eV, 7 Tesla, 20 cm radius and >100 μs lifetime with a 1000x volume compression in 100 microseconds. If near-adiabatic compression is achieved, the final plasma of 10^26 m^-3, 10 keV, 700 Tesla, 2 cm radius, confined for 10 μs, would produce interesting fusion energy gain. General Fusion (GF) is developing an acoustic compression system using pneumatic pistons to focus a shock wave on the CT plasma in the center of a 3 m diameter sphere filled with liquid lead-lithium. A low-cost driver, straightforward heat extraction, a good tritium breeding ratio and excellent neutron protection could lead to a practical power plant. GF (65 employees) has an active plasma R&D program including both full-scale and reduced-scale plasma experiments and simulation of both. Although acoustically driven compression of full-scale plasmas is the end goal, present compression studies use reduced-scale plasmas and chemically accelerated aluminum liners. We will review results from our plasma target development, motivate and review the results of dynamic compression field tests, and briefly describe the work to date on the acoustic driver front.

  1. Structured cylindrical targets

    International Nuclear Information System (INIS)

    Arnold, R.

    1986-01-01

    A variety of experimental concepts using high-energy heavy-ion beams in cylindrical targets have been studied through numerical simulation. With an accelerator planned for GSI, plasma temperatures of 100 eV can be reached by cylindrical compression, using inhomogeneous hollow-shell targets. Magnetic insulation, using external fields, has been explored as an aid in reaching high core temperatures. Experiments on collision-pumped x-ray laser physics are also discussed. (ii) Two-dimensional PIC code simulations of homogeneous solid targets show hydrodynamic effects not found in previous 1-D calculations. (iii) Preliminary ideas for an experiment on non-equilibrium heavy-ion charge states using an existing accelerator and a pre-formed plasma target are outlined. (author)

  2. Structured cylindrical targets

    International Nuclear Information System (INIS)

    Arnold, R.; Lackner-Russo, D.; Meyer-ter-Vehn, J.; Hoffmann, I.

    1986-01-01

    A variety of experimental concepts using high-energy heavy-ion beams in cylindrical targets have been studied through numerical simulation. With an accelerator planned for GSI, plasma temperatures of 100 eV can be reached by cylindrical compression, using inhomogeneous hollow-shell targets. Magnetic insulation, using external fields, has been explored as an aid in reaching high core temperatures. Experiments on collision-pumped x-ray laser physics are also discussed. (ii) Two-dimensional PIC code simulations of homogeneous solid targets show hydrodynamic effects not found in previous 1-D calculations. (iii) Preliminary ideas for an experiment on non-equilibrium heavy-ion charge states using an existing accelerator and a pre-formed plasma target are outlined. (author)

  3. Variable Frame Rate and Length Analysis for Data Compression in Distributed Speech Recognition

    DEFF Research Database (Denmark)

    Kraljevski, Ivan; Tan, Zheng-Hua

    2014-01-01

    This paper addresses the issue of data compression in distributed speech recognition on the basis of a variable frame rate and length analysis method. The method first conducts frame selection by using a posteriori signal-to-noise ratio weighted energy distance to find the right time resolution...... length for steady regions. The method is applied to scalable source coding in distributed speech recognition where the target bitrate is met by adjusting the frame rate. Speech recognition results show that the proposed approach outperforms other compression methods in terms of recognition accuracy...... for noisy speech while achieving higher compression rates....
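
    The frame-selection idea — retaining a frame only when an SNR-weighted energy distance to the last retained frame exceeds a threshold — can be sketched as follows. This is an illustrative reading of the method with assumed framing, SNR estimation and threshold choices, not the authors' implementation.

```python
# Sketch of variable-frame-rate selection: a frame is retained when its
# a posteriori SNR-weighted energy distance from the last retained frame
# exceeds a threshold. Frame length, SNR estimate and threshold are assumptions.
import numpy as np

def select_frames(frames, noise_energy, threshold=0.5):
    """frames: (n_frames, frame_len) array; returns indices of retained frames."""
    log_energy = np.log(np.sum(frames ** 2, axis=1) + 1e-12)
    # A posteriori SNR per frame (frame energy over an assumed noise energy).
    snr = np.maximum(np.exp(log_energy) / noise_energy, 1.0)
    weight = np.log10(snr)
    kept = [0]
    for i in range(1, len(frames)):
        distance = weight[i] * abs(log_energy[i] - log_energy[kept[-1]])
        if distance > threshold:
            kept.append(i)
    return kept

rng = np.random.default_rng(0)
speech = rng.standard_normal((100, 200)) * np.linspace(0.2, 2.0, 100)[:, None]
kept = select_frames(speech, noise_energy=0.2 * 200, threshold=0.5)
print(f"kept {len(kept)} of {speech.shape[0]} frames")
```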

  4. Dual compression is not an uncommon type of iliac vein compression syndrome.

    Science.gov (United States)

    Shi, Wan-Yin; Gu, Jian-Ping; Liu, Chang-Jian; Lou, Wen-Sheng; He, Xu

    2017-09-01

    Typical iliac vein compression syndrome (IVCS) is characterized by compression of the left common iliac vein (LCIV) by the overlying right common iliac artery (RCIA). We describe an underestimated type of IVCS with dual compression by the right and left common iliac arteries (LCIA) simultaneously. Thirty-one patients with IVCS were retrospectively included. All patients received trans-catheter venography and computed tomography (CT) examinations for diagnosing and evaluating IVCS. Late venography and reconstructed CT were used for evaluating the anatomical relationship among the LCIV, RCIA and LCIA. Imaging manifestations as well as demographic data were collected and evaluated by two experienced radiologists. Sole and dual compression were found in 32.3% (n = 10) and 67.7% (n = 21) of the 31 patients, respectively. No statistical differences existed between them in terms of age, gender, LCIV diameter at the maximum compression point, pressure gradient across the stenosis, or percentage of compression level. On CT and venography, sole compression typically presented as a longitudinal compression at the orifice of the LCIV, while dual compression usually presented in one of two forms: a lengthy stenosis along the upper side of the LCIV, or a longitudinal compression near the orifice of the external iliac vein. The presence of dual compression appeared to be significantly correlated with a tortuous LCIA (p = 0.006). The left common iliac vein can therefore present with dual compression. This type of compression has typical manifestations on late venography and CT.

  5. Super liquid density target designs

    International Nuclear Information System (INIS)

    Pan, Y.L.; Bailey, D.S.

    1976-01-01

    The success of laser fusion depends on obtaining near isentropic compression of fuel to very high densities and igniting this fuel. To date, the results of laser fusion experiments have been based mainly on the exploding pusher implosion of fusion capsules consisting of thin glass microballoons (wall thickness of less than 1 micron) filled with low density DT gas (initial density of a few mg/cc). Maximum DT densities of a few tenths of g/cc and temperatures of a few keV have been achieved in these experiments. We will discuss the results of LASNEX target design calculations for targets which: (a) can compress fuel to much higher densities using the capabilities of existing Nd-glass systems at LLL; (b) allow experimental measurement of the peak fuel density achieved

  6. Metronome improves compression and ventilation rates during CPR on a manikin in a randomized trial.

    Science.gov (United States)

    Kern, Karl B; Stickney, Ronald E; Gallison, Leanne; Smith, Robert E

    2010-02-01

    We hypothesized that a unique tock-and-voice metronome could prevent both suboptimal chest compression rates and hyperventilation. A prospective, randomized, parallel-design study involving 34 pairs of paid firefighter/emergency medical technicians (EMTs) performing two-rescuer CPR using a Laerdal SkillReporter Resusci Anne manikin with and without metronome guidance was performed. Each CPR session consisted of 2 min of 30:2 CPR with an unsecured airway, then 4 min of CPR with a secured airway (continuous compressions at 100 min(-1) with 8-10 ventilations/min), repeated after the rescuers switched roles. The metronome provided "tock" prompts for compressions, transition prompts between compressions and ventilations, and a spoken "ventilate" prompt. During CPR with a bag/valve/mask, the target compression rate of 90-110 min(-1) was achieved in 5/34 CPR sessions (15%) for the control group and 34/34 sessions (100%) for the metronome group (p [...]) [...] metronome or control group during CPR with a bag/valve/mask. During CPR with a bag/endotracheal tube, the target of both a compression rate of 90-110 min(-1) and a ventilation rate of 8-11 min(-1) was achieved in 3/34 CPR sessions (9%) for the control group and 33/34 sessions (97%) for the metronome group (p [...]). Metronome use in the secured airway scenario significantly decreased the incidence of over-ventilation (11/34 EMT pairs vs. 0/34 EMT pairs; p [...]). The metronome was effective at directing correct chest compression and ventilation rates both before and after intubation. Copyright 2009 Elsevier Ireland Ltd. All rights reserved.

  7. Effect of radiation losses on the compression of hydrogen by imploding solid liners

    International Nuclear Information System (INIS)

    Hussey, T.W.; Kiuttu, G.F.; Degnan, J.H.; Peterkin, R.E.; Smith, G.A.; Turchi, P.J.

    1996-01-01

    Quasispherical solid liner implosions with little or no instability growth have been achieved experimentally. Applications for such implosions include the uniform, shock-free compression of some sort of on-axis target. One proposed means of obtaining such compression is to inject a 1 eV hydrogen plasma working fluid between the liner and the target, and to implode the liner around it. The high initial temperature ensures that the sound speed within the liner is always greater than the implosion velocity of the liner's inner surface, and the initial density is chosen so that the volume of the working fluid at peak compression is large enough that perfectly spherical convergence of the liner is not required. One concern with such an approach is that energy losses associated with ionization and radiation will degrade the effective gamma of the compression. To isolate and therefore understand these effects, the authors have developed a simple zero-dimensional model for the liner implosion that accurately accounts for the shape and thickness of the liner as it implodes and compresses the working fluid. Based on simple considerations they make a crude estimate of the range of initial densities of interest for this technique. They then observe that within this density range, for the temperatures of interest, the lines are strongly self-absorbed, so that the transport of radiation is dominated by bound-free and free-free processes.

  8. Development of real time abdominal compression force monitoring and visual biofeedback system

    Science.gov (United States)

    Kim, Tae-Ho; Kim, Siyong; Kim, Dong-Su; Kang, Seong-Hee; Cho, Min-Seok; Kim, Kyeong-Hyeon; Shin, Dong-Seok; Suh, Tae-Suk

    2018-03-01

    In this study, we developed and evaluated a system that could monitor abdominal compression force (ACF) in real time and provide a surrogating signal, even under abdominal compression. The system could also provide visual-biofeedback (VBF). The real-time ACF monitoring system developed consists of an abdominal compression device, an ACF monitoring unit and a control system including an in-house ACF management program. We anticipated that ACF variation information caused by respiratory abdominal motion could be used as a respiratory surrogate signal. Four volunteers participated in this test to obtain correlation coefficients between ACF variation and tidal volumes. A simulation study with another group of six volunteers was performed to evaluate the feasibility of the proposed system. In the simulation, we investigated the reproducibility of the compression setup and proposed a further enhanced shallow breathing (ESB) technique using VBF by intentionally reducing the amplitude of the breathing range under abdominal compression. The correlation coefficient between the ACF variation caused by the respiratory abdominal motion and the tidal volume signal for each volunteer was evaluated and R^2 values ranged from 0.79 to 0.84. The ACF variation was similar to a respiratory pattern and slight variations of ACF ranges were observed among sessions. About 73-77% average ACF control rate (i.e. compliance) over five trials was observed in all volunteer subjects except one (64%) when there was no VBF. The targeted ACF range was intentionally reduced to achieve ESB for VBF simulation. With VBF, in spite of the reduced target range, overall ACF control rate improved by about 20% in all volunteers except one (4%), demonstrating the effectiveness of VBF. The developed monitoring system could help reduce the inter-fraction ACF set up error and the intra fraction ACF variation. With the capability of providing a real time surrogating signal and VBF under compression, it could

  9. Magnetic Compression Experiment at General Fusion with Simulation Results

    Science.gov (United States)

    Dunlea, Carl; Khalzov, Ivan; Hirose, Akira; Xiao, Chijin; Fusion Team, General

    2017-10-01

    The magnetic compression experiment at GF was a repetitive non-destructive test to study plasma physics applicable to Magnetic Target Fusion compression. A spheromak compact torus (CT) is formed with a co-axial gun into a containment region with an hour-glass shaped inner flux conserver, and an insulating outer wall. External coil currents keep the CT off the outer wall (levitation) and then rapidly compress it inwards. The optimal external coil configuration greatly improved both the levitated CT lifetime and the rate of shots with good compressional flux conservation. As confirmed by spectrometer data, the improved levitation field profile reduced plasma impurity levels by suppressing the interaction between plasma and the insulating outer wall during the formation process. We developed an energy and toroidal flux conserving finite element axisymmetric MHD code to study CT formation and compression. The Braginskii MHD equations with anisotropic heat conduction were implemented. To simulate plasma / insulating wall interaction, we couple the vacuum field solution in the insulating region to the full MHD solution in the remainder of the domain. We see good agreement between simulation and experiment results. Partly funded by NSERC and MITACS Accelerate.

  10. Compressive Strength of Volcanic Ash/Ordinary Portland Cement Laterized Concrete

    Directory of Open Access Journals (Sweden)

    Olusola K. O.

    2010-01-01

    This study investigates the effect of partial replacement of cement with volcanic ash (VA) on the compressive strength of laterized concrete. A total of 192 cubes of 150 mm dimensions were cast and cured in water for 7, 14, 21, and 28 days of hydration, with cement replacement by VA and sand replacement by laterite both ranging from 0 to 30%, while a control mix of 28-day target strength of 25 N/mm2 was adopted. The results show that the density and compressive strength of the concrete decreased with increasing volcanic ash content. The 28-day density dropped from 2390 kg/m3 to 2285 kg/m3 (i.e. a 4.4% loss) and the compressive strength from 25.08 N/mm2 to 17.98 N/mm2 (i.e. a 28% loss) for 0-30% variation of VA content with no laterite introduced. The compressive strength also decreased with increasing laterite content; the strength of the laterized concrete, however, increased as the curing age progressed.
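
    The quoted percentage losses can be checked directly from the reported values: (2390 - 2285) / 2390 ≈ 0.044, i.e. about a 4.4% drop in density, and (25.08 - 17.98) / 25.08 ≈ 0.28, i.e. about a 28% drop in compressive strength.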

  11. Reactor potential for magnetized target fusion

    International Nuclear Information System (INIS)

    Dahlin, J.E.

    2001-06-01

    Magnetized Target Fusion (MTF) is a possible pathway to thermonuclear fusion, different from both magnetic confinement fusion and inertial confinement fusion. An imploding cylindrical metal liner compresses a preheated and magnetized plasma configuration until thermonuclear conditions are achieved. In this report the Magnetized Target Fusion concept is evaluated and a zero-dimensional computer model of the plasma, liner and circuit as a connected system is designed. Running this code shows that thermonuclear conditions are indeed achieved, but only for a very short time. At peak compression the pressure from the compressed plasma and magnetic field is so large that it reverses the liner implosion into an explosion. The time period of liner motion reversal is termed the dwell time and is crucial to the performance of the fusion system. Parameters such as liner thickness and plasma density are certainly of significant importance to the dwell time, but it appears that a reactor based on the MTF principle can hardly become economical unless innovative solutions are introduced. Two such solutions are also presented in the report.

  12. Reactor potential for magnetized target fusion

    Energy Technology Data Exchange (ETDEWEB)

    Dahlin, J.E

    2001-06-01

    Magnetized Target Fusion (MTF) is a possible pathway to thermonuclear fusion, different from both magnetic confinement fusion and inertial confinement fusion. An imploding cylindrical metal liner compresses a preheated and magnetized plasma configuration until thermonuclear conditions are achieved. In this report the Magnetized Target Fusion concept is evaluated and a zero-dimensional computer model of the plasma, liner and circuit as a connected system is designed. Running this code shows that thermonuclear conditions are indeed achieved, but only for a very short time. At peak compression the pressure from the compressed plasma and magnetic field is so large that it reverses the liner implosion into an explosion. The time period of liner motion reversal is termed the dwell time and is crucial to the performance of the fusion system. Parameters such as liner thickness and plasma density are certainly of significant importance to the dwell time, but it appears that a reactor based on the MTF principle can hardly become economical unless innovative solutions are introduced. Two such solutions are also presented in the report.

  13. The search for new radioisotopes; La recherche de nouveaux noyaux et de nouveaux elements

    Energy Technology Data Exchange (ETDEWEB)

    Bernas, M [Institut de Physique Nucleaire, IN2P3/CNRS, 91 - Orsay (France); Armbruster, P [GSI, Max-Planck-Str., Darmstadt (Germany)

    2000-07-01

    Phosphorus-30, produced by F. and I. Joliot-Curie in 1934, was the first artificial radioisotope; since then 2460 new nuclei have been discovered. This document reviews the known radioisotopes and the methods used to separate them. The authors describe the discovery of new radioisotopes such as Nickel-78, produced in the fission of high-energy uranium ions impinging on a lead target (IPN-GSI collaboration), and the discovery of Nickel-48 by a CENBG-GANIL team. All this experience is useful for the processing of nuclear waste by transmutation. (A.C.)

  14. Final report on the Magnetized Target Fusion Collaboration

    Energy Technology Data Exchange (ETDEWEB)

    John Slough

    2009-09-08

    Nuclear fusion has the potential to satisfy the prodigious power that the world will demand in the future, but it has yet to be harnessed as a practical energy source. The entry of fusion as a viable, competitive source of power has been stymied by the challenge of finding an economical way to provide for the confinement and heating of the plasma fuel. It is the contention here that a simpler path to fusion can be achieved by creating fusion conditions in a different regime at small scale (~ a few cm). One such program now under study, referred to as Magnetized Target Fusion (MTF), is directed at obtaining fusion in this high energy density regime by rapidly compressing a compact toroidal plasmoid commonly referred to as a Field Reversed Configuration (FRC). To make fusion practical at this smaller scale, an efficient method for compressing the FRC to fusion gain conditions is required. In one variant of MTF a conducting metal shell is imploded electrically. This radially compresses and heats the FRC plasmoid to fusion conditions. The closed magnetic field in the target plasmoid suppresses the thermal transport to the confining shell, thus lowering the imploding power needed to compress the target. The undertaking to be described in this proposal is to provide a suitable target FRC, as well as a simple and robust method for inserting and stopping the FRC within the imploding liner. The timescale for testing and development can be rapidly accelerated by taking advantage of a new facility funded by the Department of Energy. At this facility, two inductive plasma accelerators (IPA) were constructed and tested. Recent experiments with these IPAs have demonstrated the ability to rapidly form, accelerate and merge two hypervelocity FRCs into a compression chamber. The resultant FRC that was formed was hot (T_ion ~ 400 eV), stationary, and stable with a configuration lifetime several times that necessary for the MTF liner experiments. The accelerator length was less than

  15. New horizons in medieval ecclesiology: ecclesiology and heresiology (Middle Ages, early modern period); Les nouveaux horizons de l’ecclésiologie médiévale. Ecclésiologie et hérésiologie (Moyen Âge, Temps modernes)

    Directory of Open Access Journals (Sweden)

    Dominique Iogna-Prat

    2010-10-01

    Father Yves Congar, to whom one of the days of the «New Horizons of Ecclesiology» cycle was devoted last year, largely grounded his history of ecclesiology in the ecclesiology/heresiology pairing, noting the well-established fact that the great ecclesiological turning points of the Middle Ages most often coincided with grave internal crises – the Great Schism, for example – or with challenges coming from «dissident» circles, or circles denounced and prosecuted as such. The ref...

  16. Spectral Distortion in Lossy Compression of Hyperspectral Data

    Directory of Open Access Journals (Sweden)

    Bruno Aiazzi

    2012-01-01

    Distortion allocation varying with wavelength in lossy compression of hyperspectral imagery is investigated, with the aim of minimizing the spectral distortion between original and decompressed data. The absolute angular error, or spectral angle mapper (SAM), is used to quantify spectral distortion, while radiometric distortions are measured by maximum absolute deviation (MAD) for near-lossless methods, for example, differential pulse code modulation (DPCM), or mean-squared error (MSE) for lossy methods, for example, spectral decorrelation followed by JPEG 2000. Two strategies of interband distortion allocation are compared: given a target average bit rate, distortion may either be set constant with wavelength, or be allocated proportionally to the noise level of each band, according to the virtually lossless protocol. Comparisons with the uncompressed originals show that the average SAM of radiance spectra is minimized by constant distortion allocation to radiance data. However, variable distortion allocation according to the virtually lossless protocol yields significantly lower SAM for reflectance spectra obtained from compressed radiance data, when compared with constant distortion allocation at the same compression ratio.
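
    For reference, the spectral angle between an original spectrum x and its decompressed counterpart y is the arccosine of their normalized inner product; a minimal sketch (the function name and interface are illustrative, not from the paper):

        import numpy as np

        def spectral_angle(x, y):
            # SAM between two spectra, returned in radians
            x = np.asarray(x, dtype=float)
            y = np.asarray(y, dtype=float)
            c = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
            return np.arccos(np.clip(c, -1.0, 1.0))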

  17. Influence of Curing Age and Mix Composition on Compressive Strength of Volcanic Ash Blended Cement Laterized Concrete

    Directory of Open Access Journals (Sweden)

    Babafemi A.J.

    2012-01-01

    This study investigates the influence of curing age and mix proportions on the compressive strength of volcanic ash (VA) blended cement laterized concrete. A total of 288 cubes of 100 mm dimensions were cast and cured in water for 3, 7, 28, 56, 90 and 120 days of hydration, with cement replacement by VA and sand replacement by laterite both ranging from 0 to 30%, while a control mix of 28-day target strength of 25 N/mm2 (using the British method) was adopted. The results show that the compressive strength of the VA-blended cement laterized concrete increased with curing age but decreased as the VA and laterite (LAT) contents increased. The optimum replacement level was 20% LAT/20% VA. At this level the compressive strength increased with curing age at a decreasing rate beyond 28 days. The target compressive strength of 25 N/mm2 was achieved for this mixture at 90 days of curing. VA content and curing age were noted to have a significant effect (α ≤ 0.5) on the compressive strength of the VA-blended cement laterized concrete.

  18. Optimization of Error-Bounded Lossy Compression for Hard-to-Compress HPC Data

    Energy Technology Data Exchange (ETDEWEB)

    Di, Sheng; Cappello, Franck

    2018-01-01

    Since today’s scientific applications are producing vast amounts of data, compressing them before storage/transmission is critical. Results of existing compressors show two types of HPC data sets: highly compressible and hard to compress. In this work, we carefully design and optimize the error-bounded lossy compression for hard-to-compress scientific data. We propose an optimized algorithm that can adaptively partition the HPC data into best-fit consecutive segments each having mutually close data values, such that the compression condition can be optimized. Another significant contribution is the optimization of shifting offset such that the XOR-leading-zero length between two consecutive unpredictable data points can be maximized. We finally devise an adaptive method to select the best-fit compressor at runtime for maximizing the compression factor. We evaluate our solution using 13 benchmarks based on real-world scientific problems, and we compare it with 9 other state-of-the-art compressors. Experiments show that our compressor can always guarantee the compression errors within the user-specified error bounds. Most importantly, our optimization can improve the compression factor effectively, by up to 49% for hard-to-compress data sets with similar compression/decompression time cost.
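
    The XOR-leading-zero idea can be sketched in a few lines (an illustration of the concept, not the authors' optimized implementation): two nearby floating-point values share their most significant bits, so the XOR of their IEEE-754 encodings begins with a run of zeros that is cheap to encode, and shifting one operand can lengthen that run.

        import struct

        def xor_leading_zeros(a, b):
            # Leading zero bits in the XOR of two IEEE-754 double encodings (0..64).
            ia = struct.unpack('<Q', struct.pack('<d', a))[0]
            ib = struct.unpack('<Q', struct.pack('<d', b))[0]
            x = ia ^ ib
            return 64 if x == 0 else 64 - x.bit_length()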

  19. Reconfigurable Hardware for Compressing Hyperspectral Image Data

    Science.gov (United States)

    Aranki, Nazeeh; Namkung, Jeffrey; Villapando, Carlos; Kiely, Aaron; Klimesh, Matthew; Xie, Hua

    2010-01-01

    High-speed, low-power, reconfigurable electronic hardware has been developed to implement ICER-3D, an algorithm for compressing hyperspectral-image data. The algorithm and parts thereof have been the topics of several NASA Tech Briefs articles, including Context Modeler for Wavelet Compression of Hyperspectral Images (NPO-43239) and ICER-3D Hyperspectral Image Compression Software (NPO-43238), which appear elsewhere in this issue of NASA Tech Briefs. As described in more detail in those articles, the algorithm includes three main subalgorithms: one for computing wavelet transforms, one for context modeling, and one for entropy encoding. For the purpose of designing the hardware, these subalgorithms are treated as modules to be implemented efficiently in field-programmable gate arrays (FPGAs). The design takes advantage of industry-standard, commercially available FPGAs. The implementation targets the Xilinx Virtex-II Pro architecture, which has embedded PowerPC processor cores with flexible on-chip bus architecture. It incorporates an efficient parallel and pipelined architecture to compress the three-dimensional image data. The design provides for internal buffering to minimize intensive input/output operations while making efficient use of off-chip memory. The design is scalable in that the subalgorithms are implemented as independent hardware modules that can be combined in parallel to increase throughput. The on-chip processor manages the overall operation of the compression system, including execution of the top-level control functions as well as scheduling, initiating, and monitoring processes. The design prototype has been demonstrated to be capable of compressing hyperspectral data at a rate of 4.5 megasamples per second at a conservative clock frequency of 50 MHz, with a potential for substantially greater throughput at a higher clock frequency. The power consumption of the prototype is less than 6.5 W. The reconfigurability (by means of reprogramming) of

  20. Laser-driven shock-wave propagation in pure and layered targets

    International Nuclear Information System (INIS)

    Salzmann, D.; Eliezer, S.; Krumbein, A.D.; Gitter, L.

    1983-01-01

    The propagation properties of laser-driven shock waves in pure and layered polyethylene and aluminum slab targets are studied for a set of laser intensities and pulse widths. The laser-plasma simulations were carried out by means of our one-dimensional Lagrangian hydrodynamic code. It is shown that the various parts of a laser-driven compression wave undergo different thermodynamic trajectories: The shock front portion is on the Hugoniot curve whereas the rear part is closer to an adiabat. It is found that the shock front is accelerated into the cold material until t ≈ 0.8τ (where τ is the laser pulse width) and only later is a constant velocity propagation attained. The scaling laws obtained for the pressure and temperature of the compression wave in pure targets are in good agreement with those published in other works. In layered targets, high compression and pressure were found to occur at the interface of CH2 on Al targets due to impedance mismatch but were not found when the layers were reversed. The persistence time of the high pressure on the interface in the CH2 on Al case is long enough relative to the characteristic times of the plasma to have an appreciable influence on the shock-wave propagation into the aluminum layer. This high pressure and compression on the interface can be optimized by adjusting the CH2 layer thickness.

  1. Excessive chest compression rate is associated with insufficient compression depth in prehospital cardiac arrest.

    Science.gov (United States)

    Monsieurs, Koenraad G; De Regge, Melissa; Vansteelandt, Kristof; De Smet, Jeroen; Annaert, Emmanuel; Lemoyne, Sabine; Kalmar, Alain F; Calle, Paul A

    2012-11-01

    BACKGROUND AND GOAL OF STUDY: The relationship between chest compression rate and compression depth is unknown. In order to characterise this relationship, we performed an observational study in prehospital cardiac arrest patients. We hypothesised that faster compressions are associated with decreased depth. In patients undergoing prehospital cardiopulmonary resuscitation by health care professionals, chest compression rate and depth were recorded using an accelerometer (E-series monitor-defibrillator, Zoll, U.S.A.). Compression depth was compared for rates [...] 120/min. A difference in compression depth ≥0.5 cm was considered clinically significant. Mixed models with repeated measurements of chest compression depth and rate (level 1) nested within patients (level 2) were used with compression rate as a continuous and as a categorical predictor of depth. Results are reported as means and standard error (SE). One hundred and thirty-three consecutive patients were analysed (213,409 compressions). Of all compressions 2% were [...] 120/min, 36% were [...] 5 cm. In 77 out of 133 (58%) patients a statistically significant lower depth was observed for rates >120/min compared to rates 80-120/min; in 40 out of 133 (30%) this difference was also clinically significant. The mixed models predicted that the deepest compression (4.5 cm) occurred at a rate of 86/min, with progressively lower compression depths at higher rates. Rates >145/min would result in a depth [...]. The compression depth for rates 80-120/min was on average 4.5 cm (SE 0.06) compared to 4.1 cm (SE 0.06) for compressions >120/min (mean difference 0.4 cm, P [...]). There was an association between higher compression rates and lower compression depths. Avoiding excessive compression rates may lead to more compressions of sufficient depth. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  2. Principles of non-Liouvillean pulse compression by photoionization for heavy ion fusion drivers

    International Nuclear Information System (INIS)

    Hofmann, I.

    1990-05-01

    Photoionization of single charged heavy ions has been proposed recently by Rubbia as a non-Liouvillean injection scheme from the linac into the storage rings of a driver accelerator for inertial confinement fusion (ICF). The main idea of this scheme is the accumulation of high currents of heavy ions without the usually inevitable increase of phase space. Here we suggest to use the photoionization idea in an alternative scheme: if it is applied at the final stage of pulse compression (replacing the conventional bunch compression by an rf voltage, which always increases the momentum spread) there is a significant advantage in the performance of the accelerator. We show, in particular, that this new compression scheme has the potential to relax the tough stability limitations, which were identified in the heavy ion fusion reactor study HIBALL. Moreover, it is promising for achieving the higher beam power, which is suitable for indirectly driven fusion targets (10^16 Watts/gram in contrast with the 10^14 for the directly driven targets in HIBALL). The idea of non-Liouvillean bunch compression is to stack a large number of bunches (typically 50-100) in the same phase space volume during a change of charge state of the ion. A particular feature of this scheme with regard to beam dynamics is its transient nature, since the time required is one revolution per bunch. After the stacking the intense bunch is ejected and directly guided to the target. The present study is a first step to explore the possibly limiting effect of space charge under the conditions of parameters of a full-size driver accelerator. Preliminary results indicate that there is a limit to the effective stacking number (non-Liouvillean 'compression-factor'), which is, however, not prohibitive. Requirements to the power of the photon beam from a free electron laser are also discussed. It is seen that resonant cross sections of the order of 10^-15 cm^2 lead to photon beam powers of a few Megawatt. (orig.)

  3. Influence of chest compression rate guidance on the quality of cardiopulmonary resuscitation performed on manikins.

    Science.gov (United States)

    Jäntti, H; Silfvast, T; Turpeinen, A; Kiviniemi, V; Uusaro, A

    2009-04-01

    An adequate chest compression rate during CPR is associated with improved haemodynamics and primary survival. To explore whether the use of a metronome would also affect chest compression depth besides the rate, we evaluated CPR quality using a metronome in a simulated CPR scenario. Forty-four experienced intensive care unit nurses participated in two-rescuer basic life support given to manikins in 10 min scenarios. The target chest compression to ventilation ratio was 30:2, performed with bag and mask ventilation. The rescuer performing the compressions was changed every 2 min. CPR was performed first without and then with a metronome that beeped 100 times per minute. The quality of CPR was analysed with manikin software. The effect of rescuer fatigue on CPR quality was analysed separately. The mean compression rate between ventilation pauses was 137+/-18 compressions per minute (cpm) without and 98+/-2 cpm with metronome guidance (p [...]) metronome (p [...]) metronome guidance (p=0.09). The total number of chest compressions performed was 1022 without metronome guidance, 42% at the correct depth, and 780 with metronome guidance, 61% at the correct depth (p=0.09 for the difference in the percentage of compressions at the correct depth). Metronome guidance corrected chest compression rates for each compression cycle to within guideline recommendations, but did not affect chest compression quality or rescuer fatigue.

  4. Adiabatic compression and radiative compression of magnetic fields

    International Nuclear Information System (INIS)

    Woods, C.H.

    1980-01-01

    Flux is conserved during mechanical compression of magnetic fields for both nonrelativistic and relativistic compressors. However, the relativistic compressor generates radiation, which can carry up to twice the energy content of the magnetic field compressed adiabatically. The radiation may be either confined or allowed to escape

  5. GPU Lossless Hyperspectral Data Compression System for Space Applications

    Science.gov (United States)

    Keymeulen, Didier; Aranki, Nazeeh; Hopson, Ben; Kiely, Aaron; Klimesh, Matthew; Benkrid, Khaled

    2012-01-01

    On-board lossless hyperspectral data compression reduces data volume in order to meet NASA and DoD limited downlink capabilities. At JPL, a novel, adaptive and predictive technique for lossless compression of hyperspectral data, named the Fast Lossless (FL) algorithm, was recently developed. This technique uses an adaptive filtering method and achieves state-of-the-art performance in both compression effectiveness and low complexity. Because of its outstanding performance and suitability for real-time onboard hardware implementation, the FL compressor is being formalized as the emerging CCSDS Standard for Lossless Multispectral & Hyperspectral image compression. The FL compressor is well-suited for parallel hardware implementation. A GPU hardware implementation was developed for FL targeting the current state-of-the-art GPUs from NVIDIA. The GPU implementation on an NVIDIA GeForce GTX 580 achieves a throughput performance of 583.08 Mbits/sec (44.85 MSamples/sec) and an acceleration of at least 6 times a software implementation running on a 3.47 GHz single core Intel Xeon processor. This paper describes the design and implementation of the FL algorithm on the GPU. The massively parallel implementation will provide in the future a fast and practical real-time solution for airborne and space applications.
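
    As a back-of-the-envelope check (not stated in the record), the two throughput figures are mutually consistent at roughly 583.08 Mbit/s ÷ 44.85 Msamples/s ≈ 13 bits per sample.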

  6. Excavation and drying of compressed peat; Tiivistetyn turpeen nosto ja kuivaus

    Energy Technology Data Exchange (ETDEWEB)

    Erkkilae, A.; Frilander, P.; Hillebrand, K.; Nurmi, H.

    1996-12-31

    The target of this three-year (1993-1995) project was to improve peat production efficiency by developing an energy-economical excavation method for compressed peat, with which it is possible to obtain the best possible degree of compression and load from the DS-production point of view. It is possible to improve the degree of utilization of solar radiation in drying from 30% to 40%. The main research areas were drying of the compressed peat and peat compression. The third sub-task for 1995 was demonstration of the main parts of the method at laboratory scale. Experimental compressed-peat (Compeat) drying models were made for Carex peat H7, Carex peat H5 and Carex-Sphagnum peat H7. Under the best circumstances Compeat dried, without turning, in a 34% shorter time than a milled layer made of the same peat turned twice, the initial moisture content being 4 kg H2O/kg DS. In the tests carried out in 1995 with Carex peat, compression had no corresponding effect on intensifying the drying of the peat. Compression of Carex-Sphagnum peat H7 increased the drying speed by about 10% compared with the drying time of an uncompressed milled layer. In the sprinkling test about 30-50% of the sprinkled water was absorbed into the compressed peat layer, while about 70% of the rain is absorbed into the corresponding uncompressed milled layer. Use of vibration decreased the energy consumption of the steel-surfaced nozzles by up to about 20%, but the effect depended on the rotation speed of the macerator and the vibration power. In the new Compeat method (a production method for compressed peat) developed in the research, the peat is loosened from the field surface by milling a 3-5 cm thick layer of peat at a moisture content of 75-80%

  7. Resource efficient data compression algorithms for demanding, WSN based biomedical applications.

    Science.gov (United States)

    Antonopoulos, Christos P; Voros, Nikolaos S

    2016-02-01

    During the last few years, medical research areas of critical importance, such as epilepsy monitoring and study, have increasingly utilized wireless sensor network (WSN) technologies in order to achieve better understanding and significant breakthroughs. However, the limited memory and communication bandwidth offered by WSN platforms is a significant shortcoming for such demanding application scenarios. Although data compression can mitigate such deficiencies, there is a lack of objective and comprehensive evaluation of the relevant approaches, and even more so of specialized approaches targeting specific demanding applications. The research work presented in this paper focuses on implementing and offering an in-depth experimental study of prominent existing as well as newly proposed compression algorithms. All algorithms have been implemented in a common Matlab framework. A major contribution of this paper, which differentiates it from similar research efforts, is the employment of real-world electroencephalography (EEG) and electrocardiography (ECG) datasets comprising the two most demanding epilepsy modalities. Emphasis is put on WSN applications, so the respective metrics focus on compression rate and execution latency for the selected datasets. The evaluation results reveal significant performance and behavioral characteristics of the algorithms related to their complexity and the negative effect on compression latency that accompanies an increased compression rate. It is noted that the proposed schemes managed to offer a considerable advantage, especially in achieving the optimum tradeoff between compression rate and latency. Specifically, the proposed algorithm managed to combine a highly competitive level of compression while ensuring minimum latency, thus exhibiting real-time capabilities. Additionally, one of the proposed schemes is compared against state-of-the-art general-purpose compression algorithms, also exhibiting considerable advantages as far as the

  8. The impact of chest compression rates on quality of chest compressions : a manikin study

    OpenAIRE

    Field, Richard A.; Soar, Jasmeet; Davies, Robin P.; Akhtar, Naheed; Perkins, Gavin D.

    2012-01-01

    Purpose\\ud Chest compressions are often performed at a variable rate during cardiopulmonary resuscitation (CPR). The effect of compression rate on other chest compression quality variables (compression depth, duty-cycle, leaning, performance decay over time) is unknown. This randomised controlled cross-over manikin study examined the effect of different compression rates on the other chest compression quality variables.\\ud Methods\\ud Twenty healthcare professionals performed two minutes of co...

  9. Distribution to the Astronomy Community of the Compressed Digitized Sky Survey

    Science.gov (United States)

    Postman, Marc

    1996-03-01

    The Space Telescope Science Institute has compressed an all-sky collection of ground-based images and has printed the data on a two volume, 102 CD-ROM disc set. The first part of the survey (containing images of the southern sky) was published in May 1994. The second volume (containing images of the northern sky) was published in January 1995. Software which manages the image retrieval is included with each volume. The Astronomical Society of the Pacific (ASP) is handling the distribution of the 10x compressed data and has sold 310 sets as of October 1996. ASP is also handling the distribution of the recently published 100x version of the northern sky survey which is publicly available at a low cost. The target markets for the 100x compressed data set are the amateur astronomy community, educational institutions, and the general public. During the next year, we plan to publish the first version of a photometric calibration database which will allow users of the compressed sky survey to determine the brightness of stars in the images.

  10. Stability analysis and numerical simulation of a hard-core diffuse z pinch during compression with Atlas facility liner parameters

    Science.gov (United States)

    Siemon, R. E.; Atchison, W. L.; Awe, T.; Bauer, B. S.; Buyko, A. M.; Chernyshev, V. K.; Cowan, T. E.; Degnan, J. H.; Faehl, R. J.; Fuelling, S.; Garanin, S. F.; Goodrich, T.; Ivanovsky, A. V.; Lindemuth, I. R.; Makhin, V.; Mokhov, V. N.; Reinovsky, R. E.; Ryutov, D. D.; Scudder, D. W.; Taylor, T.; Yakubov, V. B.

    2005-09-01

    In the 'metal liner' approach to magnetized target fusion (MTF), a preheated magnetized plasma target is compressed to thermonuclear temperature and high density by externally driving the implosion of a flux conserving metal enclosure, or liner, which contains the plasma target. As in inertial confinement fusion, the principal fusion fuel heating mechanism is pdV work by the imploding enclosure, called a pusher in ICF. One possible MTF target, the hard-core diffuse z pinch, has been studied in MAGO experiments at VNIIEF and is one possible target being considered for experiments on the Atlas pulsed power facility. Numerical MHD simulations show two intriguing and helpful features of the diffuse z pinch with respect to compressional heating. First, in two-dimensional simulations the m = 0 interchange modes, arising from an unstable pressure profile, result in turbulent motions and self-organization into a stable pressure profile. The turbulence also gives rise to convective thermal transport, but the level of turbulence saturates at a finite level, and simulations show substantial heating during liner compression despite the turbulence. The second helpful feature is that pressure profile evolution during compression tends towards improved stability rather than instability when analysed according to the Kadomtsev criteria. A liner experiment is planned for Atlas to study compression of magnetic flux without plasma, as a first step. The Atlas geometry is compatible with a diffuse z pinch, and simulations of possible future experiments show that kiloelectronvolt temperatures and useful neutron production for diagnostic purposes should be possible if a suitable plasma injector is added to the Atlas facility.

  11. Compression stockings

    Science.gov (United States)

    Call your health insurance or prescription plan: Find out if they pay for compression stockings. Ask if your durable medical equipment benefit pays for compression stockings. Get a prescription from your doctor. Find a medical equipment store where they can ...

  12. Compression for radiological images

    Science.gov (United States)

    Wilson, Dennis L.

    1992-07-01

    The viewing of radiological images has peculiarities that must be taken into account in the design of a compression technique. The images may be manipulated on a workstation to change the contrast, to change the center of the brightness levels that are viewed, and even to invert the images. Because of the possible consequences of losing information in a medical application, bit-preserving compression is used for the images used for diagnosis. However, for archiving, the images may be compressed to 10% of their original size. A compression technique based on the Discrete Cosine Transform (DCT) takes the viewing factors into account by compressing the changes in the local brightness levels. The compression technique is a variation of the CCITT JPEG compression that suppresses the blocking of the DCT except in areas of very high contrast.
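
    A minimal sketch of DCT-based block coding conveys the flavour of such a scheme (a generic illustration, not the JPEG variant described above): transform a block, discard the smallest-magnitude coefficients, and invert.

        import numpy as np
        from scipy.fft import dctn, idctn

        def compress_block(block, keep=10):
            # Keep only the `keep` largest-magnitude DCT coefficients of a 2-D block.
            coeffs = dctn(np.asarray(block, dtype=float), norm='ortho')
            cutoff = np.sort(np.abs(coeffs).ravel())[-keep]
            coeffs[np.abs(coeffs) < cutoff] = 0.0
            return idctn(coeffs, norm='ortho')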

  13. Forms of multiprofessional grouping in primary care: a typology of the group practices, health care networks and health centres participating in the Experiments with new modes of remuneration (ENMR); Formes du regroupement pluriprofessionnel en soins de premiers recours. Une typologie des maisons, pôles et centres de santé participant aux Expérimentations des nouveaux modes de rémunération (ENMR)

    OpenAIRE

    Anissa Afrite; Julien Mousques

    2014-01-01

    As part of a broader research programme on the link between multiprofessional grouping in primary care at the sites participating in the Experiments with new modes of remuneration (ENMR) and the performance of general practitioners in terms of activity, productivity, effectiveness and efficiency of their practices, this study aims to analyse the structure, organisation and functioning of the group practices (maisons de santé), networks (pôles de santé) and health centres participating...

  14. Inertial-confinement-fusion targets

    International Nuclear Information System (INIS)

    Hendricks, C.D.

    1982-01-01

    Much of the research in laser fusion has been done using simple ball on-stalk targets filled with a deuterium-tritium mixture. The targets operated in the exploding pusher mode in which the laser energy was delivered in a very short time (approx. 100 ps or less) and was absorbed by the glass wall of the target. The high energy density in the glass literally exploded the shell with the inward moving glass compressing the DT fuel to high temperatures and moderate densities. Temperatures achieved were high enough to produce DT reactions and accompanying thermonuclear neutrons and alpha particles. The primary criteria imposed on the target builders were: (1) wall thickness, (2) sphere diameter, and (3) fuel in the sphere

  15. Influence of curing regimes on compressive strength of ultra high

    Indian Academy of Sciences (India)

    The present paper is aimed to identify an efficient curing regime for ultra high performance concrete (UHPC), to achieve a target compressive strength more than 150 MPa, using indigenous materials. The thermal regime plays a vital role due to the limited fineness of ingredients and low water/binder ratio. By activation of the ...

  16. Extreme compression for extreme conditions: pilot study to identify optimal compression of CT images using MPEG-4 video compression.

    Science.gov (United States)

    Peterson, P Gabriel; Pak, Sung K; Nguyen, Binh; Jacobs, Genevieve; Folio, Les

    2012-12-01

    This study aims to evaluate the utility of compressed computed tomography (CT) studies (to expedite transmission) using Motion Pictures Experts Group, Layer 4 (MPEG-4) movie formatting in combat hospitals when guiding major treatment regimens. This retrospective analysis was approved by Walter Reed Army Medical Center institutional review board with a waiver for the informed consent requirement. Twenty-five CT chest, abdomen, and pelvis exams were converted from Digital Imaging and Communications in Medicine to MPEG-4 movie format at various compression ratios. Three board-certified radiologists reviewed various levels of compression on emergent CT findings on 25 combat casualties and compared with the interpretation of the original series. A Universal Trauma Window was selected at -200 HU level and 1,500 HU width, then compressed at three lossy levels. Sensitivities and specificities for each reviewer were calculated along with 95 % confidence intervals using the method of general estimating equations. The compression ratios compared were 171:1, 86:1, and 41:1 with combined sensitivities of 90 % (95 % confidence interval, 79-95), 94 % (87-97), and 100 % (93-100), respectively. Combined specificities were 100 % (85-100), 100 % (85-100), and 96 % (78-99), respectively. The introduction of CT in combat hospitals with increasing detectors and image data in recent military operations has increased the need for effective teleradiology; mandating compression technology. Image compression is currently used to transmit images from combat hospital to tertiary care centers with subspecialists and our study demonstrates MPEG-4 technology as a reasonable means of achieving such compression.

  17. Two-dimensional simulations of thermonuclear burn in ignition-scale inertial confinement fusion targets under compressed axial magnetic fields

    International Nuclear Information System (INIS)

    Perkins, L. J.; Logan, B. G.; Zimmerman, G. B.; Werner, C. J.

    2013-01-01

    We report for the first time on full 2-D radiation-hydrodynamic implosion simulations that explore the impact of highly compressed imposed magnetic fields on the ignition and burn of perturbed spherical implosions of ignition-scale cryogenic capsules. Using perturbations that highly convolute the cold fuel boundary of the hotspot and prevent ignition without applied fields, we impose initial axial seed fields of 20–100 T (potentially attainable using present experimental methods) that compress to greater than 4 × 10^4 T (400 MG) under implosion, thereby relaxing hotspot areal densities and pressures required for ignition and propagating burn by ∼50%. The compressed field is high enough to suppress transverse electron heat conduction, and to allow alphas to couple energy into the hotspot even when highly deformed by large low-mode amplitudes. This might permit the recovery of ignition, or at least significant alpha particle heating, in submarginal capsules that would otherwise fail because of adverse hydrodynamic instabilities
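
    The quoted field amplification is roughly what ideal flux conservation predicts: for an axial field frozen into a cylindrically converging hot spot, B_final ≈ B_seed (R_0/R_f)^2, so a mid-range 50 T seed compressed by an assumed radial convergence of about 30 gives 50 × 30^2 ≈ 4.5 × 10^4 T (the convergence ratio here is an illustrative number, not taken from the record).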

  18. Method for data compression by associating complex numbers with files of data values

    Science.gov (United States)

    Feo, John Thomas; Hanks, David Carlton; Kraay, Thomas Arthur

    1998-02-10

    A method for compressing data for storage or transmission. Given a complex polynomial and a value assigned to each root, a root generated data file (RGDF) is created, one entry at a time. Each entry is mapped to a point in a complex plane. An iterative root finding technique is used to map the coordinates of the point to the coordinates of one of the roots of the polynomial. The value associated with that root is assigned to the entry. An equational data compression (EDC) method reverses this procedure. Given a target data file, the EDC method uses a search algorithm to calculate a set of m complex numbers and a value map that will generate the target data file. The error between a simple target data file and generated data file is typically less than 10%. Data files can be transmitted or stored without loss by transmitting the m complex numbers, their associated values, and an error file whose size is at most one-tenth of the size of the input data file.
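
    The decompression side of the scheme can be sketched as follows (an illustrative reading of the patented method, with hypothetical names; the polynomial is assumed monic and specified by its roots): each sample point is iterated with Newton's method until it lands near a root, and the value attached to that root becomes the reconstructed entry.

        import numpy as np

        def expand_rgdf(roots, values, points, iters=50):
            roots = np.asarray(roots, dtype=complex)
            out = []
            for z in map(complex, points):
                for _ in range(iters):
                    diffs = z - roots
                    if np.any(diffs == 0):
                        break                      # already exactly on a root
                    p = np.prod(diffs)             # p(z) for the monic polynomial
                    dp = p * np.sum(1.0 / diffs)   # p'(z) via the logarithmic derivative
                    if dp == 0:
                        break
                    z -= p / dp                    # Newton step
                out.append(values[int(np.argmin(np.abs(z - roots)))])
            return out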

  19. Search for compressed SUSY scenarios with the ATLAS detector

    CERN Document Server

    Maurer, Julien; The ATLAS collaboration

    2017-01-01

    Scenarios where multiple SUSY states are nearly degenerate in mass produce soft decay products, and they represent an experimental challenge for ATLAS. This contribution presented recent results of analyses explicitly targeting such "compressed" scenarios with a variety of experimental techniques. All results made use of proton-proton collisions collected at a centre-of-mass energy of 13 TeV with the ATLAS detector at the LHC.

  20. Impact of external pneumatic compression target inflation pressure on transcriptome-wide RNA expression in skeletal muscle.

    Science.gov (United States)

    Martin, Jeffrey S; Kephart, Wesley C; Haun, Cody T; McCloskey, Anna E; Shake, Joshua J; Mobley, Christopher B; Goodlett, Michael D; Kavazis, Andreas; Pascoe, David D; Zhang, Lee; Roberts, Michael D

    2016-11-01

    Next-generation RNA sequencing was employed to determine the acute and subchronic impact of peristaltic pulse external pneumatic compression (PEPC) of different target inflation pressures on global gene expression in human vastus lateralis skeletal muscle biopsy samples. Eighteen (N = 18) male participants were randomly assigned to one of three groups: (1) sham (n = 6), (2) EPC at 30-40 mmHg (LP-EPC; n = 6), and (3) EPC at 70-80 mmHg (MP-EPC; n = 6). One-hour sham/EPC treatments were applied on seven consecutive days. Vastus lateralis skeletal muscle biopsies were performed at baseline (before the first treatment; PRE), 1 h following the first treatment (POST1), and 24 h following the last (7th) treatment (POST2). Changes from PRE in gene expression were analyzed via paired comparisons within each group. Genes were filtered to include only those that had an RPKM ≥ 1.0, a fold-change of ≥1.5 and a paired t-test p-value of <0.01. For the sham condition, two genes at POST1 and one gene at POST2 were significantly altered. For the LP-EPC condition, nine genes were up-regulated and 0 genes were down-regulated at POST1, while 39 genes were up-regulated and one gene down-regulated at POST2. For the MP-EPC condition, two genes were significantly up-regulated and 21 genes were down-regulated at POST1, and 0 genes were altered at POST2. Both LP-EPC and MP-EPC acutely alter skeletal muscle gene expression, though only LP-EPC appeared to affect gene expression with subchronic application. Moreover, the transcriptome response to EPC demonstrated marked heterogeneity (i.e., genes and directionality) with different target inflation pressures. © 2016 The Authors. Physiological Reports published by Wiley Periodicals, Inc. on behalf of The Physiological Society and the American Physiological Society.
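
    The stated filtering criteria translate into a few lines of code (a sketch only; the column names, and the symmetric treatment of down-regulated fold changes, are assumptions rather than details from the study):

        import pandas as pd

        def filter_genes(df):
            # Keep genes with RPKM >= 1.0, a fold-change of at least 1.5 in either
            # direction, and a paired t-test p-value below 0.01.
            changed = (df['fold_change'] >= 1.5) | (df['fold_change'] <= 1.0 / 1.5)
            return df[(df['rpkm'] >= 1.0) & changed & (df['p_paired'] < 0.01)]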

  1. Mammographic compression in Asian women.

    Science.gov (United States)

    Lau, Susie; Abdul Aziz, Yang Faridah; Ng, Kwan Hoong

    2017-01-01

    To investigate: (1) the variability of mammographic compression parameters amongst Asian women; and (2) the effects of reducing compression force on image quality and mean glandular dose (MGD) in Asian women based on a phantom study. We retrospectively collected 15818 raw digital mammograms from 3772 Asian women aged 35-80 years who underwent screening or diagnostic mammography between Jan 2012 and Dec 2014 at our center. The mammograms were processed using volumetric breast density (VBD) measurement software (Volpara) to assess compression force, compression pressure, compressed breast thickness (CBT), breast volume, VBD and MGD against breast contact area. The effects of reducing compression force on image quality and MGD were also evaluated based on measurements obtained from 105 Asian women, as well as using the RMI156 Mammographic Accreditation Phantom and polymethyl methacrylate (PMMA) slabs. Compression force, compression pressure, CBT, breast volume, VBD and MGD correlated significantly with breast contact area (p [...]) [...] Asian women. The median compression force should be about 8.1 daN compared to the current 12.0 daN. Decreasing compression force from 12.0 daN to 9.0 daN increased CBT by 3.3±1.4 mm, MGD by 6.2-11.0%, and caused no significant effects on image quality (p>0.05). The force-standardized protocol led to widely variable compression parameters in Asian women. Based on the phantom study, it is feasible to reduce compression force by up to 32.5% with minimal effects on image quality and MGD.
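
    The link between the force- and pressure-based quantities is simply pressure = force / contact area; for example, a 12.0 daN (120 N) compression applied over a hypothetical 100 cm^2 (0.01 m^2) contact area corresponds to 12 kPa, which is why, at a fixed force, a smaller contact area implies a higher compression pressure.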

  2. Thermofluidic compression effects to achieve combustion in a low-compression scramjet engine

    Science.gov (United States)

    Moura, A. F.; Wheatley, V.; Jahn, I.

    2017-12-01

    The compression provided by a scramjet inlet is an important parameter in its design. It must be low enough to limit thermal and structural loads and stagnation pressure losses, but high enough to provide the conditions favourable for combustion. Inlets are typically designed to achieve sufficient compression without accounting for the fluidic, and subsequently thermal, compression provided by the fuel injection, which can enable robust combustion in a low-compression engine. This is investigated using Reynolds-averaged Navier-Stokes numerical simulations of a simplified scramjet engine designed to have insufficient compression to auto-ignite fuel in the absence of thermofluidic compression. The engine was designed with a wide rectangular combustor and a single centrally located injector, in order to reduce three-dimensional effects of the walls on the fuel plume. By varying the injected mass flow rate of hydrogen fuel (equivalence ratios of 0.22, 0.17, and 0.13), it is demonstrated that higher equivalence ratios lead to earlier ignition and more rapid combustion, even though mean conditions in the combustor change by no more than 5% for pressure and 3% for temperature with higher equivalence ratio. By supplementing the lower equivalence ratio with helium to achieve a higher mass flow rate, it is confirmed that these benefits are primarily due to the local compression provided by the extra injected mass. Investigation of the conditions around the fuel plume indicated two connected mechanisms. The higher mass flow rate for higher equivalence ratios generated a stronger injector bow shock that compresses the free-stream gas, increasing OH radical production and promoting ignition. This was observed both in the higher equivalence ratio case and in the case with helium. This earlier ignition led to increased temperature and pressure downstream and, consequently, stronger combustion. The heat release from combustion provided thermal compression in the combustor, further

  3. Quasi-spherical compression of a spark-channel plasma

    International Nuclear Information System (INIS)

    Panarella, E.

    1980-01-01

    An axial spark channel in deuterium has been used as a target for implosive shock waves created with a conventional cylindrical theta-pinch device. The compression of the channel by the implosive waves raised the plasma electron temperature to approximately 120 eV for approximately 6 kJ of condenser bank energy and 1 Torr initial gas pressure. In order to improve the efficiency of compression of the channel plasma and to reduce the end losses inherent in the cylindrical configuration, the theta-pinch geometry was then converted from cylindrical into spherical. Under identical conditions of gas pressure and condenser bank energy, the electron temperature now peaked at approximately 400 eV. When the bank energy was increased to approximately 10 kJ, neutron production was observed. The total neutron output per shot ranged from 10^5 to 10^6 and increased inversely with the pinch discharge volume

  4. Implosion of the small cavity and large cavity cannonball targets

    International Nuclear Information System (INIS)

    Nishihara, Katsunobu; Yamanaka, Chiyoe.

    1984-01-01

    Recent results of cannonball target implosion research are briefly reviewed with theoretical predictions for GEKKO XII experiments. The cannonball targets are classified into two types according to the cavity size: small cavity and large cavity. The compression mechanisms of the two types are discussed. (author)

  5. Compressive laser ranging.

    Science.gov (United States)

    Babbitt, Wm Randall; Barber, Zeb W; Renner, Christoffer

    2011-12-15

    Compressive sampling has been previously proposed as a technique for sampling radar returns and determining sparse range profiles with a reduced number of measurements compared to conventional techniques. By employing modulation on both transmission and reception, compressive sensing in ranging is extended to the direct measurement of range profiles without intermediate measurement of the return waveform. This compressive ranging approach enables the use of pseudorandom binary transmit waveforms and return modulation, along with low-bandwidth optical detectors to yield high-resolution ranging information. A proof-of-concept experiment is presented. With currently available compact, off-the-shelf electronics and photonics, such as high data rate binary pattern generators and high-bandwidth digital optical modulators, compressive laser ranging can readily achieve subcentimeter resolution in a compact, lightweight package.
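
    A toy numerical sketch of the underlying compressive-sampling idea (random binary projections of a sparse range profile recovered by greedy pursuit; all sizes and the recovery algorithm are illustrative, not the authors' optical implementation):

        import numpy as np

        rng = np.random.default_rng(0)
        n, m, k = 256, 64, 3                           # range bins, measurements, reflectors

        profile = np.zeros(n)                          # sparse range profile
        profile[rng.choice(n, k, replace=False)] = rng.uniform(0.5, 1.0, k)

        A = rng.integers(0, 2, (m, n)).astype(float)   # pseudorandom binary sensing matrix
        y = A @ profile                                # compressive measurements

        # Orthogonal matching pursuit: greedily recover the k strongest reflectors.
        residual, support = y.copy(), []
        for _ in range(k):
            corr = np.abs(A.T @ residual)
            corr[support] = 0.0                        # do not reselect chosen bins
            support.append(int(np.argmax(corr)))
            x_ls, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ x_ls

        recovered = np.zeros(n)
        recovered[support] = x_ls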

  6. Compression measurement in laser driven implosion experiments

    International Nuclear Information System (INIS)

    Attwood, D.T.; Cambell, E.M.; Ceglio, N.M.; Lane, S.L.; Larsen, J.T.; Matthews, D.M.

    1981-01-01

    This paper discusses the measurement of compression in the context of the Inertial Confinement Fusion Program's transition from thin-walled exploding pusher targets to thicker-walled targets, which are designed to lead the way towards ablative-type implosions that will result in higher fuel density and ρR at burn time. These experiments promote desirable reactor conditions but pose diagnostic problems because of reduced multi-kilovolt x-ray and reaction-product emissions, as well as increasingly difficult transport problems for these emissions as they pass through the thicker, higher-ρR pusher conditions. Solutions to these problems, pointing the way toward higher-energy two-dimensional x-ray images, new reaction-product imaging ideas, and the use of seed gases for both x-ray spectroscopic and nuclear activation techniques, are identified.

  7. Mining compressing sequential problems

    NARCIS (Netherlands)

    Hoang, T.L.; Mörchen, F.; Fradkin, D.; Calders, T.G.K.

    2012-01-01

    Compression based pattern mining has been successfully applied to many data mining tasks. We propose an approach based on the minimum description length principle to extract sequential patterns that compress a database of sequences well. We show that mining compressing patterns is NP-Hard and

  8. Studies of implosion dynamics of D3He gas-filled plastic targets using nuclear diagnostics at OMEGA

    International Nuclear Information System (INIS)

    Falk, Magnus

    2004-09-01

    Information about target-implosion dynamics is essential for understanding how assembly occurs. Without carefully tailored assembly of the fuel, hot-spot ignition on National Ignition Facility (NIF) will fail. Hot spot ignition relies on shock convergence to 'ignite' the hot spot (shock burn), followed by propagation of the burn into the compressed shell material (compressive burn). The relationship between these events must be understood to ensure the success of Inertial Confinement Fusion (ICF) ignition. To further improve our knowledge about the timing of these events, temporal evolution of areal density (density times radius, normally referred to as ρR) and burn of direct-drive, D³He gas-filled plastic target implosions have been studied using dd neutrons and d³He protons. The proton temporal diagnostic (PTD) code was developed for this purpose. ρR asymmetries were observed at shock-bang time (time of peak burn during shock phase) and grew approximately twice as fast as the average ρR, without any phase changes. Furthermore, it was observed that the shock-bang and compression-bang time occur earlier, and that the time difference between these events decreases for higher laser energy on target, which indicates that the compression-bang time is more sensitive to the variation of laser energy on target. It was also observed that the duration of shock and compression phase might decrease for higher laser energy on target

  9. Microbunching and RF Compression

    International Nuclear Information System (INIS)

    Venturini, M.; Migliorati, M.; Ronsivalle, C.; Ferrario, M.; Vaccarezza, C.

    2010-01-01

    Velocity bunching (or RF compression) represents a promising technique complementary to magnetic compression to achieve the high peak current required in the linac drivers for FELs. Here we report on recent progress aimed at characterizing the RF compression from the point of view of the microbunching instability. We emphasize the development of a linear theory for the gain function of the instability and its validation against macroparticle simulations that represents a useful tool in the evaluation of the compression schemes for FEL sources.

  10. Optical pulse compression

    International Nuclear Information System (INIS)

    Glass, A.J.

    1975-01-01

    The interest in using large lasers to achieve a very short and intense pulse for generating fusion plasma has provided a strong impetus to reexamine the possibilities of optical pulse compression at high energy. Pulse compression allows one to generate pulses of long duration (minimizing damage problems) and subsequently compress optical pulses to achieve the short pulse duration required for specific applications. The ideal device for carrying out this program has not been developed. Of the two approaches considered, the Gires--Tournois approach is limited by the fact that the bandwidth and compression are intimately related, so that the group delay dispersion times the square of the bandwidth is about unity for all simple Gires--Tournois interferometers. The Treacy grating pair does not suffer from this limitation, but is inefficient because diffraction generally occurs in several orders and is limited by the problem of optical damage to the grating surfaces themselves. Nonlinear and parametric processes were explored. Some pulse compression was achieved by these techniques; however, they are generally difficult to control and are not very efficient. (U.S.)

  11. The plasma formation stage in magnetic compression/magnetized target fusion (MAGO/MTF)

    International Nuclear Information System (INIS)

    Lindemuth, I.R.; Reinovsky, R.E.; Chrien, R.E.

    1996-01-01

    In early 1992, emerging governmental policy in the US and Russia began to encourage ''lab-to-lab'' interactions between the All-Russian Scientific Research Institute of Experimental Physics (VNIIEF) and the Los Alamos National Laboratory (LANL). As nuclear weapons stockpiles and design activities were being reduced, highly qualified scientists became available for fundamental scientific research of interest to both nations. VNIIEF and LANL found a common interest in the technology and applications of magnetic flux compression, the technique for converting the chemical energy released by high explosives into intense electrical pulses and intensely concentrated magnetic energy. Motivated originally to evaluate any possible defense applications of flux compression technology, the two teams worked independently for many years, essentially unaware of each other's accomplishments. However, an early US publication stimulated Soviet work, and the Soviets followed with a report of the achievement of 25 MG. During the cold war, a series of conferences on Megagauss Magnetic Field Generation and Related Topics became a forum for scientific exchange of ideas and accomplishments. Because of relationships established at the Megagauss conferences, VNIIEF and LANL were able to respond quickly to the initiatives of their respective governments. In late 1992, following the Megagauss VI conference, the two institutions agreed to combine resources to perform a series of experiments that essentially could not be performed by either institution independently. Since September 1993, the two institutions have performed eleven joint experimental campaigns, either at VNIIEF or at LANL. Megagauss-VII has become the first of the series to include papers with joint US and Russian authorship. In this paper, we review the joint LANL/VNIIEF experimental work that has relevance to a relatively unexplored approach to controlled thermonuclear fusion

  12. Exact partial solution to the compressible flow problems of jet formation and penetration in plane, steady flow

    International Nuclear Information System (INIS)

    Karpp, R.R.

    1984-01-01

    An exact partial solution to the problem of the symmetric impact of two compressible fluid streams is derived. The plane, two-dimensional flow is assumed to be steady, and the inviscid compressible fluid is of the Chaplygin (tangent gas) type. The equations governing this flow are transformed to the hodograph plane, where an exact, closed-form solution for the stream function is obtained. The distribution of fluid properties along the plane of symmetry and the shape of the free-surface streamlines are determined by transformation back to the physical plane. The problem of a compressible fluid jet penetrating an infinite target of similar material is also solved by considering a limiting case of this solution. Differences between compressible and incompressible flows of the type considered are illustrated

  13. CERN : Nouveaux records lors d'une période d'exploitation au PS/SPS ; Un nouveau rôle pour les ISR ; Deuxième région d'intersection pour le système pp du SPS ; Détermination de la durée de vie du charme ; Calcul en Sicile et au CERN

    CERN Multimedia

    1979-01-01

    CERN: New records during a PS/SPS running period; A new role for the ISR; A second intersection region for the SPS pp system; Determination of the charm lifetime; Computing in Sicily and at CERN

  14. Compressed sensing & sparse filtering

    CERN Document Server

    Carmi, Avishy Y; Godsill, Simon J

    2013-01-01

    This book is aimed at presenting concepts, methods and algorithms able to cope with undersampled and limited data. One such trend that recently gained popularity and to some extent revolutionised signal processing is compressed sensing. Compressed sensing builds upon the observation that many signals in nature are nearly sparse (or compressible, as they are normally referred to) in some domain, and consequently they can be reconstructed to within high accuracy from far fewer observations than traditionally held to be necessary. Apart from compressed sensing this book contains other related app

  15. LZ-Compressed String Dictionaries

    OpenAIRE

    Arz, Julian; Fischer, Johannes

    2013-01-01

    We show how to compress string dictionaries using the Lempel-Ziv (LZ78) data compression algorithm. Our approach is validated experimentally on dictionaries of up to 1.5 GB of uncompressed text. We achieve compression ratios often outperforming the existing alternatives, especially on dictionaries containing many repeated substrings. Our query times remain competitive.
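
    For readers unfamiliar with the LZ78 scheme the record builds on, the following is a minimal sketch of the parsing step (in Python, on an invented toy input); it is only an illustration of the algorithm family, not the authors' compressed dictionary data structure. Each phrase is emitted as a (dictionary index, next character) pair, so inputs with many repeated substrings collapse into few pairs.

```python
def lz78_compress(text):
    """LZ78 parsing: emit (dictionary index, next char) pairs; index 0 is the empty prefix."""
    dictionary = {}                      # phrase -> index (1-based)
    output, phrase = [], ""
    for ch in text:
        candidate = phrase + ch
        if candidate in dictionary:
            phrase = candidate           # keep extending the current phrase
        else:
            output.append((dictionary.get(phrase, 0), ch))
            dictionary[candidate] = len(dictionary) + 1
            phrase = ""
    if phrase:                           # flush a trailing phrase already in the dictionary
        output.append((dictionary[phrase], ""))
    return output

# Hypothetical dictionary-like input: repeated substrings shrink to few pairs.
words = "compress compression compressed compressor compressing "
pairs = lz78_compress(words * 4)
print(len(words * 4), "characters ->", len(pairs), "(index, char) pairs")
```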

  16. PLANS FOR WARM DENSE MATTER EXPERIMENTS AND IFE TARGET EXPERIMENTS ON NDCX-II

    International Nuclear Information System (INIS)

    Waldron, W.L.; Barnard, J.J.; Bieniosek, F.M.; Friedman, A.; Henestroza, E.; Leitner, M.; Logan, B.G.; Ni, P.A.; Roy, P.K.; Seidl, P.A.; Sharp, W.M.

    2008-01-01

    The Heavy Ion Fusion Science Virtual National Laboratory (HIFS-VNL) is currently developing design concepts for NDCX-II, the second phase of the Neutralized Drift Compression Experiment, which will use ion beams to explore Warm Dense Matter (WDM) and Inertial Fusion Energy (IFE) target hydrodynamics. The ion induction accelerator will consist of a new short pulse injector and induction cells from the decommissioned Advanced Test Accelerator (ATA) at Lawrence Livermore National Laboratory (LLNL). To fit within an existing building and to meet the energy and temporal requirements of various target experiments, an aggressive beam compression and acceleration schedule is planned. WDM physics and ion-driven direct drive hydrodynamics will initially be explored with 30 nC of lithium ions in experiments involving ion deposition, ablation, acceleration and stability of planar targets. Other ion sources which may deliver higher charge per bunch will be explored. A test stand has been built at Lawrence Berkeley National Laboratory (LBNL) to test refurbished ATA induction cells and pulsed power hardware for voltage holding and ability to produce various compression and acceleration waveforms. Another test stand is being used to develop and characterize lithium-doped aluminosilicate ion sources. The first experiments will include heating metallic targets to 10,000 K and hydrodynamics studies with cryogenic hydrogen targets

  17. Studies of implosion dynamics of D{sup 3}He gas-filled plastic targets using nuclear diagnostics at OMEGA

    Energy Technology Data Exchange (ETDEWEB)

    Falk, Magnus

    2004-09-01

    Information about target-implosion dynamics is essential for understanding how assembly occurs. Without carefully tailored assembly of the fuel, hot-spot ignition on National Ignition Facility (NIF) will fail. Hot spot ignition relies on shock convergence to 'ignite' the hot spot (shock burn), followed by propagation of the burn into the compressed shell material (compressive burn). The relationship between these events must be understood to ensure the success of Inertial Confinement Fusion (ICF) ignition. To further improve our knowledge about the timing of these events, temporal evolution of areal density (density times radius, normally referred to as {rho}R) and burn of direct-drive, D{sup 3}He gas-filled plastic target implosions have been studied using dd neutrons and d{sup 3}He protons. The proton temporal diagnostic (PTD) code was developed for this purpose. {rho}R asymmetries were observed at shock-bang time (time of peak burn during shock phase) and grew approximately twice as fast as the average {rho}R, without any phase changes. Furthermore, it was observed that the shock-bang and compression-bang time occur earlier, and that the time difference between these events decreases for higher laser energy on target, which indicates that the compression-bang time is more sensitive to the variation of laser energy on target. It was also observed that the duration of shock and compression phase might decrease for higher laser energy on target.

  18. Structure and Properties of Silica Glass Densified in Cold Compression and Hot Compression

    Science.gov (United States)

    Guerette, Michael; Ackerson, Michael R.; Thomas, Jay; Yuan, Fenglin; Bruce Watson, E.; Walker, David; Huang, Liping

    2015-10-01

    Silica glass has been shown in numerous studies to possess significant capacity for permanent densification under pressure at different temperatures to form high density amorphous (HDA) silica. However, it is unknown to what extent the processes leading to irreversible densification of silica glass in cold-compression at room temperature and in hot-compression (e.g., near glass transition temperature) are common in nature. In this work, a hot-compression technique was used to quench silica glass from high temperature (1100 °C) and high pressure (up to 8 GPa) conditions, which leads to a density increase of ~25% and a Young’s modulus increase of ~71% relative to that of pristine silica glass at ambient conditions. Our experiments and molecular dynamics (MD) simulations provide solid evidence that the intermediate-range order of the hot-compressed HDA silica is distinct from that of the counterpart cold-compressed at room temperature. This explains the much higher thermal and mechanical stability of the former compared with the latter upon heating and compression, as revealed in our in-situ Brillouin light scattering (BLS) experiments. Our studies demonstrate the limitation of the resulting density as a structural indicator of polyamorphism, and point out the importance of temperature during compression in order to fundamentally understand HDA silica.

  19. Les nouveaux défis pour l’anthropologie de la santé New challenges for anthropology of health

    Directory of Open Access Journals (Sweden)

    Raymond Massé

    2010-11-01

    The anthropology of health has undergone a spectacular evolution over the last few decades. Today, however, it must contend with the contributions of several other social science and humanities disciplines to the study of the social, political and cultural dimensions of health and illness. To preserve its credibility as a "social science" and to face the emergence of new research objects, it will have to meet several challenges, in terms of its conceptual tools and, above all, its command of qualitative and quantitative methodologies, without setting aside its deep concern for critical reflection on health policies and for a phenomenology of suffering. Medical anthropology has become one of the major subdisciplines of anthropology over the past decades. However, it now has to deal with the interest of other social science disciplines in the social, cultural and political dimensions of disease, health and care. In order to strengthen its leadership and to increase its credibility in dealing with newly emerging research objects, medical anthropologists will have to accept many challenges: conceptual, theoretical, but especially methodological. This paper discusses some of these challenges.

  20. Research on compressive sensing reconstruction algorithm based on total variation model

    Science.gov (United States)

    Gao, Yu-xuan; Sun, Huayan; Zhang, Tinghua; Du, Lin

    2017-12-01

    Compressed sensing breaks the constraint of the Nyquist sampling theorem and provides a strong theoretical basis for carrying out compressive sampling of image signals at acquisition time. In imaging procedures built on compressed sensing theory, it not only reduces the required storage space but also greatly reduces the demand on detector resolution. By exploiting the sparsity of the image signal and solving a mathematical model of inverse reconstruction, super-resolution imaging can be realized. The reconstruction algorithm is the most critical part of compressive sensing and to a large extent determines the accuracy of the reconstructed image. Reconstruction based on the total variation (TV) model is well suited to compressive reconstruction of two-dimensional images and preserves edge information better. To verify the performance and stability of the algorithm, the reconstruction results of the TV-based algorithm are simulated and analyzed under different coding (measurement) modes, and typical reconstruction algorithms are compared under the same coding mode. Building on the minimum total variation formulation, an augmented Lagrangian term is added and the optimal value is found by the alternating direction method. Experimental results show that, compared with traditional classical TV-based algorithms, the proposed reconstruction algorithm has clear advantages and can quickly and accurately recover the target image at low measurement rates.
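
    As a rough illustration of TV-regularized recovery from compressive measurements, the sketch below runs plain gradient descent on a least-squares data term plus a smoothed (Charbonnier) total-variation penalty. It is a simplified stand-in for the augmented-Lagrangian/alternating-direction solver described in the record, not that algorithm itself; the toy image, sampling rate, and all parameter values are assumptions.

```python
import numpy as np

def smoothed_tv_grad(x, eps=0.05):
    """Gradient of a smoothed (Charbonnier) total-variation penalty on a 2-D image."""
    dx = np.diff(x, axis=1, append=x[:, -1:])   # forward differences, replicated edge
    dy = np.diff(x, axis=0, append=x[-1:, :])
    mag = np.sqrt(dx**2 + dy**2 + eps**2)
    px, py = dx / mag, dy / mag
    div = (np.diff(px, axis=1, prepend=px[:, :1])
           + np.diff(py, axis=0, prepend=py[:1, :]))
    return -div                                  # gradient of sum(mag) is minus the divergence

def tv_reconstruct(A, y, shape, lam=0.1, step=0.05, iters=1000):
    """Gradient descent on 0.5*||A x - y||^2 + lam * TV_smooth(x)."""
    x = np.zeros(shape)
    for _ in range(iters):
        resid = A @ x.ravel() - y
        grad = (A.T @ resid).reshape(shape) + lam * smoothed_tv_grad(x)
        x -= step * grad
    return x

# Toy problem: a 32x32 piecewise-constant image measured at a 40% sampling rate.
rng = np.random.default_rng(0)
truth = np.zeros((32, 32)); truth[8:24, 8:24] = 1.0
A = rng.standard_normal((int(0.4 * 32 * 32), 32 * 32)) / 32.0
y = A @ truth.ravel()
rec = tv_reconstruct(A, y, truth.shape)
print("relative reconstruction error:", np.linalg.norm(rec - truth) / np.linalg.norm(truth))
```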

  1. Shock compression profiles in ceramics

    Energy Technology Data Exchange (ETDEWEB)

    Grady, D.E.; Moody, R.L.

    1996-03-01

    An investigation of the shock compression properties of high-strength ceramics has been performed using controlled planar impact techniques. In a typical experimental configuration, a ceramic target disc is held stationary, and it is struck by plates of either a similar ceramic or by plates of a well-characterized metal. All tests were performed using either a single-stage propellant gun or a two-stage light-gas gun. Particle velocity histories were measured with laser velocity interferometry (VISAR) at the interface between the back of the target ceramic and a calibrated VISAR window material. Peak impact stresses achieved in these experiments range from about 3 to 70 GPa. Ceramics tested under shock impact loading include: Al{sub 2}O{sub 3}, AlN, B{sub 4}C, SiC, Si{sub 3}N{sub 4}, TiB{sub 2}, WC and ZrO{sub 2}. This report compiles the VISAR wave profiles and experimental impact parameters within a database useful for response model development, computational model validation studies, and independent assessment of the physics of dynamic deformation of high-strength, brittle solids.

  2. Compressing DNA sequence databases with coil

    Directory of Open Access Journals (Sweden)

    Hendy Michael D

    2008-05-01

    Background: Publicly available DNA sequence databases such as GenBank are large, and are growing at an exponential rate. The sheer volume of data being dealt with presents serious storage and data communications problems. Currently, sequence data is usually kept in large "flat files," which are then compressed using standard Lempel-Ziv (gzip) compression – an approach which rarely achieves good compression ratios. While much research has been done on compressing individual DNA sequences, surprisingly little has focused on the compression of entire databases of such sequences. In this study we introduce the sequence database compression software coil. Results: We have designed and implemented a portable software package, coil, for compressing and decompressing DNA sequence databases based on the idea of edit-tree coding. coil is geared towards achieving high compression ratios at the expense of execution time and memory usage during compression – the compression time represents a "one-off investment" whose cost is quickly amortised if the resulting compressed file is transmitted many times. Decompression requires little memory and is extremely fast. We demonstrate a 5% improvement in compression ratio over state-of-the-art general-purpose compression tools for a large GenBank database file containing Expressed Sequence Tag (EST) data. Finally, coil can efficiently encode incremental additions to a sequence database. Conclusion: coil presents a compelling alternative to conventional compression of flat files for the storage and distribution of DNA sequence databases having a narrow distribution of sequence lengths, such as EST data. Increasing compression levels for databases having a wide distribution of sequence lengths is a direction for future work.

  3. On use of image quality metrics for perceptual blur modeling: image/video compression case

    Science.gov (United States)

    Cha, Jae H.; Olson, Jeffrey T.; Preece, Bradley L.; Espinola, Richard L.; Abbott, A. Lynn

    2018-02-01

    Linear system theory is employed to make target acquisition performance predictions for electro-optical/infrared imaging systems where the modulation transfer function (MTF) may be imposed by a nonlinear degradation process. Previous research relying on image quality metrics (IQM) methods, which heuristically estimate perceived MTF, has supported that an average perceived MTF can be used to model some types of degradation such as image compression. Here, we discuss the validity of the IQM approach by mathematically analyzing the associated heuristics from the perspective of reliability, robustness, and tractability. Experiments with standard images compressed by x264 encoding suggest that the compression degradation can be estimated by a perceived MTF within boundaries defined by well-behaved curves with marginal error. Our results confirm that the IQM linearizer methodology provides a credible tool for sensor performance modeling.

  4. Excessive chest compression rate is associated with insufficient compression depth in prehospital cardiac arrest

    NARCIS (Netherlands)

    Monsieurs, Koenraad G.; De Regge, Melissa; Vansteelandt, Kristof; De Smet, Jeroen; Annaert, Emmanuel; Lemoyne, Sabine; Kalmar, Alain F.; Calle, Paul A.

    2012-01-01

    Background and goal of study: The relationship between chest compression rate and compression depth is unknown. In order to characterise this relationship, we performed an observational study in prehospital cardiac arrest patients. We hypothesised that faster compressions are associated with

  5. Compressive sensing in medical imaging.

    Science.gov (United States)

    Graff, Christian G; Sidky, Emil Y

    2015-03-10

    The promise of compressive sensing, exploitation of compressibility to achieve high quality image reconstructions with less data, has attracted a great deal of attention in the medical imaging community. At the Compressed Sensing Incubator meeting held in April 2014 at OSA Headquarters in Washington, DC, presentations were given summarizing some of the research efforts ongoing in compressive sensing for x-ray computed tomography and magnetic resonance imaging systems. This article provides an expanded version of these presentations. Sparsity-exploiting reconstruction algorithms that have gained popularity in the medical imaging community are studied, and examples of clinical applications that could benefit from compressive sensing ideas are provided. The current and potential future impact of compressive sensing on the medical imaging field is discussed.

  6. Mammography image compression using Wavelet

    International Nuclear Information System (INIS)

    Azuhar Ripin; Md Saion Salikin; Wan Hazlinda Ismail; Asmaliza Hashim; Norriza Md Isa

    2004-01-01

    Image compression plays an important role in many applications such as medical imaging, televideo conferencing, remote sensing, and document and facsimile transmission, which depend on the efficient manipulation, storage, and transmission of binary, gray-scale, or color images. In medical imaging applications such as Picture Archiving and Communication Systems (PACS), the image size or image stream size is very large and requires a large amount of storage space or high bandwidth for communication. Image compression techniques are divided into two categories, namely lossy and lossless compression. The wavelet method used in this project is a lossless compression method, so the exact original mammography image data can be recovered. In this project, mammography images are digitized using a Vider Sierra Plus digitizer. The digitized images are then compressed using this wavelet image compression technique. Interactive Data Language (IDL) numerical and visualization software is used to perform all of the calculations and to generate and display all of the compressed images. Results of this project are presented in this paper. (Author)

  7. [Effects of a voice metronome on compression rate and depth in telephone assisted, bystander cardiopulmonary resuscitation: an investigator-blinded, 3-armed, randomized, simulation trial].

    Science.gov (United States)

    van Tulder, Raphael; Roth, Dominik; Krammel, Mario; Laggner, Roberta; Schriefl, Christoph; Kienbacher, Calvin; Lorenzo Hartmann, Alexander; Novosad, Heinz; Constantin Chwojka, Christof; Havel, Christoph; Schreiber, Wolfgang; Herkner, Harald

    2015-01-01

    We investigated the effect on compression rate and depth of a conventional metronome and a voice metronome in simulated telephone-assisted, protocol-driven bystander cardiopulmonary resuscitation (CPR) compared to standard instruction. Thirty-six lay volunteers performed 10 minutes of compression-only CPR in a prospective, investigator-blinded, 3-arm study on a manikin. Participants were randomized either to standard instruction ("push down firmly, 5 cm"), a regular metronome pacing 110 beats per minute (bpm), or a voice metronome continuously prompting "deep-deep-deep-deeper" at 110 bpm. The primary outcome was deviation from the ideal chest compression target range (50 mm compression depth x 100 compressions per minute x 10 minutes = 50 m). Secondary outcomes were CPR quality measures (compression and leaning depth, rate, no-flow times) and participants' related physiological response (heart rate, blood pressure, nine-hole peg test and Borg scale scores). We used a linear regression model to calculate effects. The mean (SD) deviation from the ideal target range (50 m) was -11 (9) m in the standard group, -20 (11) m in the conventional metronome group (adjusted difference [95% CI] 9.0 [1.2-17.5] m, P=.03), and -18 (9) m in the voice metronome group (adjusted difference 7.2 [-0.9-15.3] m, P=.08). Secondary outcomes (CPR quality measures and physiological response of participants to CPR performance) showed no significant differences. Compared to standard instruction, the conventional metronome showed a significant negative effect on the chest compression target range. The voice metronome showed a non-significant negative effect and therefore cannot be recommended for regular use in telephone-assisted CPR.

  8. Streaming Compression of Hexahedral Meshes

    Energy Technology Data Exchange (ETDEWEB)

    Isenburg, M; Courbet, C

    2010-02-03

    We describe a method for streaming compression of hexahedral meshes. Given an interleaved stream of vertices and hexahedra, our coder incrementally compresses the mesh in the presented order. Our coder is extremely memory efficient when the input stream documents when vertices are referenced for the last time (i.e. when it contains topological finalization tags). Our coder then continuously releases and reuses data structures that no longer contribute to compressing the remainder of the stream. This means in practice that our coder has only a small fraction of the whole mesh in memory at any time. We can therefore compress very large meshes - even meshes that do not fit in memory. Compared to traditional, non-streaming approaches that load the entire mesh and globally reorder it during compression, our algorithm trades a less compact compressed representation for significant gains in speed, memory, and I/O efficiency. For example, on the 456k hexahedra 'blade' mesh, our coder is twice as fast and uses 88 times less memory (only 3.1 MB) with the compressed file increasing about 3% in size. We also present the first scheme for predictive compression of properties associated with hexahedral cells.

  9. A Computational model for compressed sensing RNAi cellular screening

    Directory of Open Access Journals (Sweden)

    Tan Hua

    2012-12-01

    Background: RNA interference (RNAi) is becoming an increasingly important and effective genetic tool to study the function of target genes by suppressing specific genes of interest. This systems approach helps identify signaling pathways and cellular phase types by tracking intensity and/or morphological changes of cells. The traditional RNAi screening scheme, in which one siRNA is designed to knock down one specific mRNA target, needs a large library of siRNAs and turns out to be time-consuming and expensive. Results: In this paper, we propose a conceptual model, called compressed sensing RNAi (csRNAi), which employs a unique combination of groups of small interfering RNAs (siRNAs) to knock down a much larger set of genes. This strategy is based on the fact that one gene can be partially bound with several small interfering RNAs (siRNAs) and, conversely, one siRNA can bind to a few genes with distinct binding affinity. This model constructs a multi-to-multi correspondence between siRNAs and their targets, with siRNAs much fewer than mRNA targets, compared with the conventional scheme. Mathematically this problem involves an underdetermined system of equations (linear or nonlinear), which is ill-posed in general. However, the recently developed compressed sensing (CS) theory can solve this problem. We present a mathematical model to describe the csRNAi system based on both CS theory and biological concerns. To build this model, we first search nucleotide motifs in a target gene set. Then we propose a machine learning based method to find the effective siRNAs with novel features, such as image features and speech features, to describe an siRNA sequence. Numerical simulations show that we can reduce the siRNA library to one third of that in the conventional scheme. In addition, the features to describe siRNAs outperform the existing ones substantially. Conclusions: This csRNAi system is very promising in saving both time and cost for large-scale RNAi
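
    The underdetermined multi-to-multi setting described above is the standard compressed-sensing problem y = A x with a sparse x. Purely as a generic illustration of that recovery step (not the authors' csRNAi model), the sketch below recovers a sparse "gene activity" vector from fewer pooled measurements than unknowns using orthogonal matching pursuit; the pooling matrix, sizes, and sparsity level are invented.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedy k-sparse solution of y ≈ A @ x."""
    n = A.shape[1]
    residual, support, x = y.copy(), [], np.zeros(n)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # column most correlated with residual
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x[:] = 0.0
        x[support] = coef
        residual = y - A @ x
    return x

# Toy setting: 40 pooled measurements of 120 "genes", only 5 of which are truly active.
rng = np.random.default_rng(1)
A = rng.standard_normal((40, 120)) / np.sqrt(40)     # hypothetical pooling/affinity matrix
x_true = np.zeros(120)
x_true[rng.choice(120, size=5, replace=False)] = rng.uniform(1.0, 3.0, size=5)
y = A @ x_true
x_hat = omp(A, y, k=5)
print("recovered support:", sorted(np.nonzero(x_hat)[0].tolist()))
```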

  10. Exact partial solution to the steady-state, compressible fluid flow problems of jet formation and jet penetration

    International Nuclear Information System (INIS)

    Karpp, R.R.

    1980-10-01

    This report treats analytically the problem of the symmetric impact of two compressible fluid streams. The flow is assumed to be steady, plane, inviscid, and subsonic, and the compressible fluid is assumed to be of the Chaplygin (tangent gas) type. In the analysis, the governing equations are first transformed to the hodograph plane, where an exact, closed-form solution is obtained by standard techniques. The distributions of fluid properties along the plane of symmetry as well as the shapes of the boundary streamlines are exactly determined by transforming the solution back to the physical plane. The problem of a compressible fluid jet penetrating into an infinite target of similar material is also exactly solved by considering a limiting case of this solution. This new compressible flow solution reduces to the classical result of incompressible flow theory when the sound speed of the fluid is allowed to approach infinity. Several illustrations of the differences between compressible and incompressible flows of the type considered are presented
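
    For reference, the Chaplygin ("tangent gas") approximation invoked in both Karpp records above replaces the isentrope by a straight line in the (p, 1/ρ) plane, which is what makes a closed-form hodograph solution possible. A standard statement of the model (with generic constants A and B, not values taken from the records) is:

```latex
% Tangent (Chaplygin) gas approximation: pressure linear in specific volume.
p \;=\; A - \frac{B}{\rho},
\qquad
c^{2} \;=\; \frac{dp}{d\rho} \;=\; \frac{B}{\rho^{2}}
\;\;\Longrightarrow\;\;
\rho\, c \;=\; \sqrt{B} \;=\; \text{const} .
```

    The incompressible limit quoted at the end of the record corresponds to letting B, and hence the sound speed, tend to infinity while the pressure at the reference density is held fixed.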

  11. Evaluation of mammogram compression efficiency

    International Nuclear Information System (INIS)

    Przelaskowski, A.; Surowski, P.; Kukula, A.

    2005-01-01

    Lossy image coding significantly improves performance over lossless methods, but a reliable control of diagnostic accuracy regarding compressed images is necessary. The acceptable range of compression ratios must be safe with respect to as many objective criteria as possible. This study evaluates the compression efficiency of digital mammograms in both a numerically lossless (reversible) and a lossy (irreversible) manner. Effective compression methods and concepts were examined to increase archiving and telediagnosis performance. Lossless compression as a primary applicable tool for medical applications was verified on a set of 131 mammograms. Moreover, nine radiologists participated in the evaluation of lossy compression of mammograms. Subjective rating of diagnostically important features brought a set of mean rates given for each test image. The lesion detection test resulted in binary decision data analyzed statistically. The radiologists rated and interpreted malignant and benign lesions, representative pathology symptoms, and other structures susceptible to compression distortions contained in 22 original and 62 reconstructed mammograms. Test mammograms were collected in two radiology centers for three years and then selected according to diagnostic content suitable for an evaluation of compression effects. Lossless compression efficiency of the tested coders varied, but CALIC, JPEG-LS, and SPIHT performed the best. The evaluation of lossy compression effects affecting detection ability was based on ROC-like analysis. Assuming a two-sided significance level of p=0.05, the null hypothesis that lower bit rate reconstructions are as useful for diagnosis as the originals was false in sensitivity tests with 0.04 bpp mammograms. However, verification of the same hypothesis with 0.1 bpp reconstructions suggested their acceptance. Moreover, the 1 bpp reconstructions were rated very similarly to the original mammograms in the diagnostic quality evaluation test, but the

  12. Watermark Compression in Medical Image Watermarking Using Lempel-Ziv-Welch (LZW) Lossless Compression Technique.

    Science.gov (United States)

    Badshah, Gran; Liew, Siau-Chuin; Zain, Jasni Mohd; Ali, Mushtaq

    2016-04-01

    In teleradiology, image contents may be altered due to noisy communication channels and hacker manipulation. Medical image data is very sensitive and cannot tolerate any illegal change. Analysis based on illegally changed images could result in wrong medical decisions. Digital watermarking techniques can be used to authenticate images and to detect as well as recover illegal changes made to teleradiology images. Watermarking of medical images with heavy-payload watermarks causes perceptual image degradation, which directly affects medical diagnosis. To maintain standard perceptual and diagnostic image quality during watermarking, the watermark should be losslessly compressed. This paper focuses on watermarking of ultrasound medical images with Lempel-Ziv-Welch (LZW) lossless-compressed watermarks. Lossless compression of the watermark reduces the watermark payload without data loss. In this research work, the watermark is the combination of a defined region of interest (ROI) and an image watermarking secret key. The performance of the LZW compression technique was compared with other conventional compression methods based on compression ratio. LZW was found to perform better and was used for lossless watermark compression in ultrasound medical image watermarking. Tabulated results show the reduction in watermark bits and image watermarking with effective tamper detection and lossless recovery.
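
    As a small, generic illustration of the LZW step the record relies on (not the authors' watermarking pipeline), the sketch below compresses and losslessly decompresses a hypothetical repetitive watermark payload; the payload string and variable names are made up.

```python
def lzw_compress(data: bytes):
    """LZW: start from all single bytes (codes 0-255) and grow the dictionary greedily."""
    dictionary = {bytes([i]): i for i in range(256)}
    w, out = b"", []
    for byte in data:
        wc = w + bytes([byte])
        if wc in dictionary:
            w = wc
        else:
            out.append(dictionary[w])
            dictionary[wc] = len(dictionary)
            w = bytes([byte])
    if w:
        out.append(dictionary[w])
    return out

def lzw_decompress(codes):
    """Rebuild the same dictionary the compressor built and invert the mapping."""
    dictionary = {i: bytes([i]) for i in range(256)}
    w = dictionary[codes[0]]
    out = [w]
    for code in codes[1:]:
        entry = dictionary[code] if code in dictionary else w + w[:1]
        out.append(entry)
        dictionary[len(dictionary)] = w + entry[:1]
        w = entry
    return b"".join(out)

payload = b"ROI:region-42;key=4f2a;" * 20           # hypothetical repetitive watermark payload
codes = lzw_compress(payload)
assert lzw_decompress(codes) == payload             # lossless round trip
print(len(payload), "bytes ->", len(codes), "codes")
```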

  13. Comparison of the effectiveness of compression stockings and layer compression systems in venous ulceration treatment

    Science.gov (United States)

    Jawień, Arkadiusz; Cierzniakowska, Katarzyna; Cwajda-Białasik, Justyna; Mościcka, Paulina

    2010-01-01

    Introduction: The aim of the research was to compare the dynamics of venous ulcer healing when treated with the use of compression stockings as well as original two- and four-layer bandage systems. Material and methods: A group of 46 patients suffering from venous ulcers was studied. This group consisted of 36 (78.3%) women and 10 (21.70%) men aged between 41 and 88 years (the average age was 66.6 years and the median was 67). Patients were randomized into three groups, for treatment with the ProGuide two-layer system, Profore four-layer compression, and with the use of compression stockings class II. In the case of multi-layer compression, compression ensuring 40 mmHg blood pressure at ankle level was used. Results: In all patients, independently of the type of compression therapy, a few significant statistical changes of ulceration area in time were observed (Student’s t test for matched pairs, p < 0.05). The largest decrease in ulceration area in each of the successive measurements was observed in patients treated with the four-layer system – on average 0.63 cm² per week. The smallest loss of ulceration area was observed in patients using compression stockings – on average 0.44 cm² per week. However, the observed differences were not statistically significant (Kruskal-Wallis test H = 4.45, p > 0.05). Conclusions: A systematic compression therapy, applied with preliminary blood pressure of 40 mmHg, is an effective method of conservative treatment of venous ulcers. Compression stockings and prepared systems of multi-layer compression were characterized by similar clinical effectiveness. PMID:22419941

  14. Correlations between quality indexes of chest compression.

    Science.gov (United States)

    Zhang, Feng-Ling; Yan, Li; Huang, Su-Fang; Bai, Xiang-Jun

    2013-01-01

    Cardiopulmonary resuscitation (CPR) is a kind of emergency treatment for cardiopulmonary arrest, and chest compression is the most important and necessary part of CPR. The American Heart Association published the new Guidelines for Cardiopulmonary Resuscitation and Emergency Cardiovascular Care in 2010 and called for better performance of chest compression practice, especially in compression depth and rate. The current study aimed to explore the relationship of quality indexes of chest compression and to identify the key points in chest compression training and practice. A total of 219 healthcare workers received chest compression training using the Laerdal ACLS advanced life support resuscitation model. The quality indexes of chest compression, including compression hand placement, compression rate, compression depth, and chest wall recoil, as well as self-reported fatigue time, were monitored by the Laerdal Computer Skills and Reporting System. The quality of chest compression was related to the gender of the compressor. The indexes in males, including self-reported fatigue time, the accuracy of compression depth, the compression rate, and the accuracy of compression rate, were higher than those in females. However, the accuracy of chest recoil was higher in females than in males. The quality indexes of chest compression were correlated with each other. The self-reported fatigue time was related to all the indexes except the compression rate. It is necessary to offer CPR training courses regularly. In clinical practice, it might be better to change the practitioner before fatigue, especially for females or weak practitioners. In training projects, more attention should be paid to the control of compression rate, in order to delay fatigue, guarantee sufficient compression depth and improve the quality of chest compression.

  15. Does the quality of chest compressions deteriorate when the chest compression rate is above 120/min?

    Science.gov (United States)

    Lee, Soo Hoon; Kim, Kyuseok; Lee, Jae Hyuk; Kim, Taeyun; Kang, Changwoo; Park, Chanjong; Kim, Joonghee; Jo, You Hwan; Rhee, Joong Eui; Kim, Dong Hoon

    2014-08-01

    The quality of chest compressions along with defibrillation is the cornerstone of cardiopulmonary resuscitation (CPR), which is known to improve the outcome of cardiac arrest. We aimed to investigate the relationship between the compression rate and other CPR quality parameters including compression depth and recoil. A conventional CPR training for lay rescuers was performed 2 weeks before the 'CPR contest'. CPR Anytime training kits were distributed to respective participants for self-training on their own in their own time. The participants were tested for two-person CPR in pairs. The quantitative and qualitative data regarding the quality of CPR were collected from a standardised check list and SkillReporter, and compared by the compression rate. A total of 161 teams consisting of 322 students, including 116 men and 206 women, participated in the CPR contest. The mean depth and rate for chest compression were 49.0±8.2 mm and 110.2±10.2/min. Significantly deeper chest compression depths were noted at rates over 120/min than those at any other rates (47.0±7.4, 48.8±8.4, 52.3±6.7, p=0.008). Chest compression depth was proportional to chest compression rate (r=0.206, p<0.05). We also assessed the quality of chest compression, including chest compression depth and chest recoil, by chest compression rate. Further evaluation regarding the upper limit of the chest compression rate is needed to ensure complete full chest wall recoil while maintaining an adequate chest compression depth. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  16. Subjective evaluation of compressed image quality

    Science.gov (United States)

    Lee, Heesub; Rowberg, Alan H.; Frank, Mark S.; Choi, Hyung-Sik; Kim, Yongmin

    1992-05-01

    Lossy data compression generates distortion or error in the reconstructed image, and the distortion becomes visible as the compression ratio increases. Even at the same compression ratio, the distortion appears differently depending on the compression method used. Because of the nonlinearity of the human visual system and lossy data compression methods, we have evaluated subjectively the quality of medical images compressed with two different methods, an intraframe and an interframe coding algorithm. The evaluated raw data were analyzed statistically to measure interrater reliability and the reliability of an individual reader. Also, the analysis of variance was used to identify which compression method is better statistically, and from what compression ratio the quality of a compressed image is evaluated as poorer than that of the original. Nine x-ray CT head images from three patients were used as test cases. Six radiologists participated in reading the 99 images (some were duplicates) compressed at four different compression ratios: original, 5:1, 10:1, and 15:1. The six readers agreed more than by chance alone and their agreement was statistically significant, but there were large variations among readers as well as within a reader. The displacement-estimated interframe coding algorithm is significantly better in quality than the 2-D block DCT at significance level 0.05. Also, 10:1 compressed images with the interframe coding algorithm do not show any significant differences from the original at level 0.05.

  17. Macron Formed Liner Compression as a Practical Method for Enabling Magneto-Inertial Fusion

    Energy Technology Data Exchange (ETDEWEB)

    Slough, John

    2011-12-10

    The entry of fusion as a viable, competitive source of power has been stymied by the challenge of finding an economical way to provide for the confinement and heating of the plasma fuel. The main impediment for current nuclear fusion concepts is the complexity and large mass associated with the confinement systems. To take advantage of the smaller scale, higher density regime of magnetic fusion, an efficient method for achieving the compressional heating required to reach fusion gain conditions must be found. The very compact, high energy density plasmoid commonly referred to as a Field Reversed Configuration (FRC) provides for an ideal target for this purpose. To make fusion with the FRC practical, an efficient method for repetitively compressing the FRC to fusion gain conditions is required. A novel approach to be explored in this endeavor is to remotely launch a converging array of small macro-particles (macrons) that merge and form a more massive liner inside the reactor which then radially compresses and heats the FRC plasmoid to fusion conditions. The closed magnetic field in the target FRC plasmoid suppresses the thermal transport to the confining liner significantly lowering the imploding power needed to compress the target. With the momentum flux being delivered by an assemblage of low mass, but high velocity macrons, many of the difficulties encountered with the liner implosion power technology are eliminated. The undertaking to be described in this proposal is to evaluate the feasibility achieving fusion conditions from this simple and low cost approach to fusion. During phase I the design and testing of the key components for the creation of the macron formed liner have been successfully carried out. Detailed numerical calculations of the merging, formation and radial implosion of the Macron Formed Liner (MFL) were also performed. The phase II effort will focus on an experimental demonstration of the macron launcher at full power, and the demonstration

  18. Process and application of shock compression by nanosecond pulses of frequency-doubled Nd:YAG laser

    Science.gov (United States)

    Sano, Yuji; Kimura, Motohiko; Mukai, Naruhiko; Yoda, Masaki; Obata, Minoru; Ogisu, Tatsuki

    2000-02-01

    The authors have developed a new process of laser-induced shock compression to introduce a residual compressive stress on the material surface, which is effective for prevention of stress corrosion cracking (SCC) and enhancement of fatigue strength of metal materials. The process developed is unique and beneficial. It requires no pre-conditioning for the surface, whereas the conventional process requires that the so-called sacrificial layer is made to protect the surface from damage. The new process can be freely applied to water-immersed components, since it uses water-penetrable green light of a frequency-doubled Nd:YAG laser. The process developed has the potential to open up new high-power laser applications in manufacturing and maintenance technologies. The laser-induced shock compression process (LSP) can be used to improve a residual stress field from tensile to compressive. In order to understand the physics and optimize the process, the propagation of a shock wave generated by the impulse of laser irradiation and the dynamic response of the material were analyzed by time-dependent elasto-plastic calculations with a finite element program using laser-induced plasma pressure as an external load. The analysis shows that a permanent strain and a residual compressive stress remain after the passage of the shock wave with amplitude exceeding the yield strength of the material. A practical system materializing the LSP was designed, manufactured, and tested to confirm the applicability to core components of light water reactors (LWRs). The system accesses the target component and remotely irradiates laser pulses to the heat-affected zone (HAZ) along weld lines. Various functional tests were conducted using a full-scale mockup facility, in which remote maintenance work in a reactor vessel could be simulated. The results showed that the system remotely accessed the target weld lines and successfully introduced a residual compressive stress. After sufficient training

  19. Neutralized Drift Compression Experiment (NDCX) - II Quarterly Report

    International Nuclear Information System (INIS)

    Kwan, J.W.

    2009-01-01

    LBNL has received American Recovery and Reinvestment Act (ARRA) funding to construct a new accelerator at Lawrence Berkeley National Laboratory (LBNL) to significantly increase the energy on target, which will allow both the Heavy Ion Fusion (HIF) and Warm Dense Matter (WDM) research communities to explore scientific conditions that have not been available in any other device. For NDCX-II, a new induction linear accelerator (linac) will be constructed at Lawrence Berkeley National Laboratory (LBNL). NDCX-II will produce nanosecond-long ion beam bunches to hit thin foil targets. The final kinetic energy of the ions arriving at the target varies according to the ion mass. For an atomic mass of 6 or 7 (lithium ions), useful kinetic energies range from 1.5 to 5 MeV or more. The expected beam charge in the 1 ns (or shorter) pulse is about 20 nanoCoulombs. The pulse repetition rate will be about once or twice per minute (of course, target considerations will often reduce this rate). Our approach to building the NDCX-II ion accelerator is to make use of the available induction modules and 200 kV pulsers from the retired ATA electron linac at LLNL. Reusing this hardware will maximize the ion energy on target at a minimum cost. Some modifications of the cells (e.g., reducing the bore diameter and installing higher-field pulsed solenoids) are needed in order to meet the requirements of this project. The NDCX-II project will include the following tasks: (1) Physics design to determine the required ion current density at the ion source, the injector beam optics, the layout of accelerator cells along the beam line, the voltage waveforms for beam acceleration and compression, the solenoid focusing, the neutralized drift compression and the final focus on target; (2) Engineering design and fabrication of the accelerator components, pulsed power system, diagnostic system, and control and data acquisition system; (3) Conventional facilities; and (4) Installation and integration

  20. Comparison of chest compression quality between the modified chest compression method with the use of smartphone application and the standardized traditional chest compression method during CPR.

    Science.gov (United States)

    Park, Sang-Sub

    2014-01-01

    The purpose of this study is to grasp difference in quality of chest compression accuracy between the modified chest compression method with the use of smartphone application and the standardized traditional chest compression method. Participants were progressed 64 people except 6 absentees among 70 people who agreed to participation with completing the CPR curriculum. In the classification of group in participants, the modified chest compression method was called as smartphone group (33 people). The standardized chest compression method was called as traditional group (31 people). The common equipments in both groups were used Manikin for practice and Manikin for evaluation. In the meantime, the smartphone group for application was utilized Android and iOS Operating System (OS) of 2 smartphone products (G, i). The measurement period was conducted from September 25th to 26th, 2012. Data analysis was used SPSS WIN 12.0 program. As a result of research, the proper compression depth (mm) was shown the proper compression depth (p< 0.01) in traditional group (53.77 mm) compared to smartphone group (48.35 mm). Even the proper chest compression (%) was formed suitably (p< 0.05) in traditional group (73.96%) more than smartphone group (60.51%). As for the awareness of chest compression accuracy, the traditional group (3.83 points) had the higher awareness of chest compression accuracy (p< 0.001) than the smartphone group (2.32 points). In the questionnaire that was additionally carried out 1 question only in smartphone group, the modified chest compression method with the use of smartphone had the high negative reason in rescuer for occurrence of hand back pain (48.5%) and unstable posture (21.2%).

  1. The role of Z-pinches and related configurations in magnetized target fusion

    International Nuclear Information System (INIS)

    Lindemuth, I.R.

    1997-01-01

    The use of a magnetic field within a fusion target is now known as Magnetized Target Fusion in the US and as MAGO (Magnitnoye Obzhatiye, or magnetic compression) in Russia. In contrast to direct, hydrodynamic compression of initially ambient-temperature fuel (e.g., ICF), MTF involves two steps: (a) formation of a warm, magnetized, wall-confined plasma of intermediate density within a fusion target prior to implosion; (b) subsequent quasi-adiabatic compression and heating of the plasma by imploding the confining wall, or pusher. In many ways, MTF can be considered a marriage between the more mature MFE and ICF approaches, and this marriage potentially eliminates some of the hurdles encountered in the other approaches. When compared to ICF, MTF requires lower implosion velocity, lower initial density, significantly lower radial convergence, and larger targets, all of which lead to substantially reduced driver intensity, power, and symmetry requirements. When compared to MFE, MTF does not require a vacuum separating the plasma from the wall, and, in fact, complete magnetic confinement, even if possible, may not be desirable. The higher density of MTF and much shorter confinement times should make magnetized plasma formation a much less difficult step than in MFE. The substantially lower driver requirements and implosion velocity of MTF make z-pinch magnetically driven liners, magnetically imploded by existing modern pulsed power electrical current sources, a leading candidate for the target pusher of an MTF system

  2. Wellhead compression

    Energy Technology Data Exchange (ETDEWEB)

    Harrington, Joe [Sertco Industries, Inc., Okemah, OK (United States); Vazquez, Daniel [Hoerbiger Service Latin America Inc., Deerfield Beach, FL (United States); Jacobs, Denis Richard [Hoerbiger do Brasil Industria de Equipamentos, Cajamar, SP (Brazil)

    2012-07-01

    Over time, all wells experience a natural decline in oil and gas production. In gas wells, the major problems are liquid loading and low downhole differential pressures which negatively impact total gas production. As a form of artificial lift, wellhead compressors help reduce the tubing pressure resulting in gas velocities above the critical velocity needed to surface water, oil and condensate regaining lost production and increasing recoverable reserves. Best results come from reservoirs with high porosity, high permeability, high initial flow rates, low decline rates and high total cumulative production. In oil wells, excessive annulus gas pressure tends to inhibit both oil and gas production. Wellhead compression packages can provide a cost effective solution to these problems by reducing the system pressure in the tubing or annulus, allowing for an immediate increase in production rates. Wells furthest from the gathering compressor typically benefit the most from wellhead compression due to system pressure drops. Downstream compressors also benefit from higher suction pressures reducing overall compression horsepower requirements. Special care must be taken in selecting the best equipment for these applications. The successful implementation of wellhead compression from an economical standpoint hinges on the testing, installation and operation of the equipment. Key challenges and suggested equipment features designed to combat those challenges and successful case histories throughout Latin America are discussed below.(author)

  3. Hardware Implementation of Lossless Adaptive and Scalable Hyperspectral Data Compression for Space

    Science.gov (United States)

    Aranki, Nazeeh; Keymeulen, Didier; Bakhshi, Alireza; Klimesh, Matthew

    2009-01-01

    On-board lossless hyperspectral data compression reduces data volume in order to meet NASA and DoD limited downlink capabilities. The technique also improves signature extraction, object recognition and feature classification capabilities by providing exact reconstructed data on constrained downlink resources. At JPL a novel, adaptive and predictive technique for lossless compression of hyperspectral data was recently developed. This technique uses an adaptive filtering method and achieves a combination of low complexity and compression effectiveness that far exceeds state-of-the-art techniques currently in use. The JPL-developed 'Fast Lossless' algorithm requires no training data or other specific information about the nature of the spectral bands for a fixed instrument dynamic range. It is of low computational complexity and thus well-suited for implementation in hardware. A modified form of the algorithm that is better suited for data from pushbroom instruments is generally appropriate for flight implementation. A scalable field programmable gate array (FPGA) hardware implementation was developed. The FPGA implementation achieves a throughput performance of 58 Msamples/sec, which can be increased to over 100 Msamples/sec in a parallel implementation that uses twice the hardware resources. This paper describes the hardware implementation of the 'Modified Fast Lossless' compression algorithm on an FPGA. The FPGA implementation targets the current state-of-the-art FPGAs (Xilinx Virtex IV and V families) and compresses one sample every clock cycle to provide a fast and practical real-time solution for space applications.
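
    The record describes the algorithm only at a high level (adaptive, predictive, low complexity). Purely to illustrate that general idea, and emphatically not the JPL 'Fast Lossless' algorithm itself, the toy sketch below predicts each sample from the co-located sample in the previous spectral band using an adaptively updated gain and keeps integer residuals that an entropy coder could then compress; the data cube and all parameters are invented.

```python
import numpy as np

def adaptive_prediction_residuals(cube, mu=0.05):
    """Toy predictive stage for a (bands, rows, cols) integer cube: predict each sample
    from the previous band with a single normalised-LMS gain and keep integer residuals.
    This is an illustration of adaptive prediction, not the JPL 'Fast Lossless' coder."""
    bands, rows, cols = cube.shape
    residuals = np.empty(cube.shape, dtype=np.int64)
    residuals[0] = cube[0]                       # first band is stored as-is
    w = 1.0                                      # adaptive inter-band gain
    for b in range(1, bands):
        prev, cur, res = cube[b - 1].ravel(), cube[b].ravel(), residuals[b].ravel()
        for i in range(prev.size):
            pred = int(round(w * prev[i]))
            err = int(cur[i]) - pred
            res[i] = err                         # decoder can repeat the same updates
            w += mu * err * prev[i] / (1.0 + prev[i] * prev[i])   # normalised LMS step
    return residuals

# Hypothetical smooth cube: neighbouring bands are highly correlated, so residuals are small.
rng = np.random.default_rng(0)
base = rng.integers(100, 200, size=(16, 16))
cube = np.array([base + b + rng.integers(-2, 3, size=(16, 16)) for b in range(8)])
res = adaptive_prediction_residuals(cube)
print("raw std:", cube[1:].std().round(2), " residual std:", res[1:].std().round(2))
```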

  4. Tree compression with top trees

    DEFF Research Database (Denmark)

    Bille, Philip; Gørtz, Inge Li; Landau, Gad M.

    2013-01-01

    We introduce a new compression scheme for labeled trees based on top trees [3]. Our compression scheme is the first to simultaneously take advantage of internal repeats in the tree (as opposed to the classical DAG compression that only exploits rooted subtree repeats) while also supporting fast...

  5. Tree compression with top trees

    DEFF Research Database (Denmark)

    Bille, Philip; Gørtz, Inge Li; Landau, Gad M.

    2015-01-01

    We introduce a new compression scheme for labeled trees based on top trees. Our compression scheme is the first to simultaneously take advantage of internal repeats in the tree (as opposed to the classical DAG compression that only exploits rooted subtree repeats) while also supporting fast...

  6. Generalized massive optimal data compression

    Science.gov (United States)

    Alsing, Justin; Wandelt, Benjamin

    2018-05-01

    In this paper, we provide a general procedure for optimally compressing N data down to n summary statistics, where n is equal to the number of parameters of interest. We show that compression to the score function - the gradient of the log-likelihood with respect to the parameters - yields n compressed statistics that are optimal in the sense that they preserve the Fisher information content of the data. Our method generalizes earlier work on linear Karhunen-Loève compression for Gaussian data whilst recovering both lossless linear compression and quadratic estimation as special cases when they are optimal. We give a unified treatment that also includes the general non-Gaussian case as long as mild regularity conditions are satisfied, producing optimal non-linear summary statistics when appropriate. As a worked example, we derive explicitly the n optimal compressed statistics for Gaussian data in the general case where both the mean and covariance depend on the parameters.
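
    As a sketch in generic notation (introduced here, not quoted from the paper), the compressed statistics are the components of the score evaluated at a fiducial point \theta_*; for Gaussian data with mean \mu(\theta) and covariance C(\theta) this gives

        $$ t_\alpha = \left.\frac{\partial \ln\mathcal{L}}{\partial\theta_\alpha}\right|_{\theta_*}
           = \nabla_\alpha\mu^{\mathsf T} C^{-1}(d-\mu)
           \;-\; \tfrac{1}{2}\,\mathrm{tr}\!\left[C^{-1}\nabla_\alpha C\right]
           \;+\; \tfrac{1}{2}\,(d-\mu)^{\mathsf T} C^{-1}\,\nabla_\alpha C\,C^{-1}(d-\mu), $$

    with everything on the right evaluated at \theta_*. When C is parameter-independent, only the first (MOPED-like linear) term survives, and the Fisher information of the n statistics t_\alpha equals that of the full data set, which is the optimality property stated above.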

  7. Compression and heating of a cladding target by a partially profiled laser pulse

    International Nuclear Information System (INIS)

    Andreev, A.A.; Samsonov, A.G.; Solov'ev, N.A.

    1990-01-01

    The CLADDING-T semiempirical model and numerical calculations in accordance with the SPHERE program have been employed to show that the action of a partially profiled pulse on a simple cladding target raises the fuel compression degree and reduces the fuel temperature as compared to the action of a rectangular pulse (or a polynomial-shaped pulse) with the same energy. From the standpoint of the flash criterion, the system composed of the profiled pulse and the simple (cladding) target is shown to be equivalent to that composed of a simple pulse and a dual-cascade (profiled) target. An analysis of the system composed of the laser and the simple target shows that the use of the partially profiled pulse and the simple target makes it possible to reduce the energy requirements of laser systems.

  8. 29 CFR 1917.154 - Compressed air.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 7 2010-07-01 2010-07-01 false Compressed air. 1917.154 Section 1917.154 Labor Regulations...) MARINE TERMINALS Related Terminal Operations and Equipment § 1917.154 Compressed air. Employees shall be... this part during cleaning with compressed air. Compressed air used for cleaning shall not exceed a...

  9. Compressive sampling by artificial neural networks for video

    Science.gov (United States)

    Szu, Harold; Hsu, Charles; Jenkins, Jeffrey; Reinhardt, Kitt

    2011-06-01

    We describe a smart surveillance strategy for handling novelty changes. Current sensors seem to keep everything, redundant or not. The Human Visual System's Hubel-Wiesel (wavelet) edge detection mechanism pays attention to changes in movement, which naturally produce organized sparseness because a stagnant edge is not reported to the brain's visual cortex by retinal neurons. Sparseness is defined as an ordered set of ones (movement or not) relative to zeros that could be pseudo-orthogonal among themselves, and is thus suited for fault-tolerant storage and retrieval by means of Associative Memory (AM). The firing is sparse at the change locations. Unlike the purely random sparse masks adopted in medical Compressive Sensing, these organized ones have the additional benefit of using the image changes to make retrievable graphical indexes. We coined this organized sparseness Compressive Sampling: sensing but skipping over redundancy without altering the original image. Thus, we illustrate with video the survival tactics that animals roaming the Earth use daily: they acquire nothing but the space-time changes that are important to satisfy specific prey-predator relationships. We have noticed a similarity between mathematical Compressive Sensing and this biological mechanism used for survival. We have designed a hardware implementation of the Human Visual System's Compressive Sampling scheme. To speed up further, our mixed-signal circuit design of frame differencing is built into on-chip processing hardware. A CMOS trans-conductance amplifier is designed here to generate a linear current output using a pair of differential input voltages from two photon detectors for change detection, one for the previous value and the other for the subsequent value ("write" synaptic weights by Hebbian outer products; "read" by inner product and a pointwise nonlinear threshold), to localize and track threat targets.
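
    As an illustration of the frame-differencing idea behind this "organized sparseness", the toy sketch below (synthetic frames and an assumed fixed threshold, not the authors' circuit) shows how few pixels actually change between consecutive frames of a moving object:

        # Frame differencing: only changed pixels are kept, giving a sparse change mask.
        import numpy as np

        rng = np.random.default_rng(7)
        frames = np.zeros((10, 64, 64), dtype=np.uint8)
        obj = rng.integers(50, 255, size=(8, 8), dtype=np.uint8)
        for t in range(10):
            frames[t, 20:28, 5 + 5 * t:13 + 5 * t] = obj      # a small bright object drifting right

        threshold = 10
        for t in range(1, 10):
            diff = np.abs(frames[t].astype(int) - frames[t - 1].astype(int))
            mask = diff > threshold
            # only the changed pixel positions and values would be stored or transmitted
            print(f"frame {t}: {int(mask.sum()):4d} changed pixels ({mask.mean():.1%} of the image)")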

  10. Image quality (IQ) guided multispectral image compression

    Science.gov (United States)

    Zheng, Yufeng; Chen, Genshe; Wang, Zhonghai; Blasch, Erik

    2016-05-01

    Image compression is necessary for data transportation, as it saves both transfer time and storage space. In this paper, we focus our discussion on lossy compression. There are many standard image formats and corresponding compression algorithms, for example JPEG (DCT -- discrete cosine transform), JPEG 2000 (DWT -- discrete wavelet transform), BPG (better portable graphics) and TIFF (LZW -- Lempel-Ziv-Welch). The image quality (IQ) of the decompressed image is measured by numerical metrics such as root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and the structural similarity (SSIM) index. Given an image and a specified IQ, we investigate how to select a compression method and its parameters to achieve the expected compression. Our scenario consists of 3 steps. The first step is to compress a set of images of interest with varying parameters and compute their IQs for each compression method. The second step is to create several regression models per compression method after analyzing the IQ measurement versus the compression parameter over a number of compressed images. The third step is to compress the given image to the specified IQ using the compression method (JPEG, JPEG2000, BPG, or TIFF) selected according to the regression models. The IQ may be specified by a compression ratio (e.g., 100), in which case we select the compression method with the highest IQ (SSIM or PSNR); or the IQ may be specified by an IQ metric (e.g., SSIM = 0.8, or PSNR = 50), in which case we select the compression method with the highest compression ratio. Our experiments on thermal (long-wave infrared) images (in gray scale) showed very promising results.
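
    A minimal sketch of the three-step scenario, restricted to JPEG and PSNR only (the synthetic test image, quality grid and quadratic fit are assumptions of this sketch, not the paper's setup; Pillow is assumed to be installed):

        import io
        import numpy as np
        from PIL import Image

        def psnr(a, b):
            mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
            return 99.0 if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

        def sweep(img, qualities=range(20, 96, 5)):
            # step 1: compress at several parameter settings and record (quality, PSNR)
            ref = np.asarray(img)
            pts = []
            for q in qualities:
                buf = io.BytesIO()
                img.save(buf, format="JPEG", quality=q)
                rec = np.asarray(Image.open(buf).convert("L"))
                pts.append((q, psnr(ref, rec)))
            return np.array(pts)

        def quality_for_target_psnr(pts, target):
            # step 2: fit a simple regression of PSNR against quality; step 3: invert it
            coeffs = np.polyfit(pts[:, 0], pts[:, 1], deg=2)
            grid = np.arange(1, 96)
            return int(grid[np.argmin(np.abs(np.polyval(coeffs, grid) - target))])

        x, y = np.meshgrid(np.arange(128), np.arange(128))
        img = Image.fromarray((128 + 100 * np.sin(x / 9.0) * np.cos(y / 7.0)).clip(0, 255).astype(np.uint8))
        pts = sweep(img)
        print("JPEG quality needed for PSNR of about 40 dB:", quality_for_target_psnr(pts, 40.0))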

  11. Compressibility of the protein-water interface

    Science.gov (United States)

    Persson, Filip; Halle, Bertil

    2018-06-01

    The compressibility of a protein relates to its stability, flexibility, and hydrophobic interactions, but the measurement, interpretation, and computation of this important thermodynamic parameter present technical and conceptual challenges. Here, we present a theoretical analysis of protein compressibility and apply it to molecular dynamics simulations of four globular proteins. Using additively weighted Voronoi tessellation, we decompose the solution compressibility into contributions from the protein and its hydration shells. We find that positively cross-correlated protein-water volume fluctuations account for more than half of the protein compressibility that governs the protein's pressure response, while the self correlations correspond to small (˜0.7%) fluctuations of the protein volume. The self compressibility is nearly the same as for ice, whereas the total protein compressibility, including cross correlations, is ˜45% of the bulk-water value. Taking the inhomogeneous solvent density into account, we decompose the experimentally accessible protein partial compressibility into intrinsic, hydration, and molecular exchange contributions and show how they can be computed with good statistical accuracy despite the dominant bulk-water contribution. The exchange contribution describes how the protein solution responds to an applied pressure by redistributing water molecules from lower to higher density; it is negligibly small for native proteins, but potentially important for non-native states. Because the hydration shell is an open system, the conventional closed-system compressibility definitions yield a pseudo-compressibility. We define an intrinsic shell compressibility, unaffected by occupation number fluctuations, and show that it approaches the bulk-water value exponentially with a decay "length" of one shell, less than the bulk-water compressibility correlation length. In the first hydration shell, the intrinsic compressibility is 25%-30% lower than in
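
    In generic notation (a textbook relation, not an equation quoted from the paper), the decomposition rests on the fluctuation formula for the isothermal compressibility; splitting the volume into protein and hydration-shell parts separates self and cross terms:

        $$ \kappa_T = \frac{\langle \delta V^2\rangle}{k_{\mathrm B} T\,\langle V\rangle}, \qquad
           V = V_{\mathrm P} + V_{\mathrm W} \;\Rightarrow\;
           \langle \delta V^2\rangle = \langle \delta V_{\mathrm P}^2\rangle
           + 2\langle \delta V_{\mathrm P}\,\delta V_{\mathrm W}\rangle
           + \langle \delta V_{\mathrm W}^2\rangle . $$

    The positively correlated cross term is the contribution that the study finds accounts for more than half of the protein's pressure response.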

  12. Compressibility of the protein-water interface.

    Science.gov (United States)

    Persson, Filip; Halle, Bertil

    2018-06-07

    The compressibility of a protein relates to its stability, flexibility, and hydrophobic interactions, but the measurement, interpretation, and computation of this important thermodynamic parameter present technical and conceptual challenges. Here, we present a theoretical analysis of protein compressibility and apply it to molecular dynamics simulations of four globular proteins. Using additively weighted Voronoi tessellation, we decompose the solution compressibility into contributions from the protein and its hydration shells. We find that positively cross-correlated protein-water volume fluctuations account for more than half of the protein compressibility that governs the protein's pressure response, while the self correlations correspond to small (∼0.7%) fluctuations of the protein volume. The self compressibility is nearly the same as for ice, whereas the total protein compressibility, including cross correlations, is ∼45% of the bulk-water value. Taking the inhomogeneous solvent density into account, we decompose the experimentally accessible protein partial compressibility into intrinsic, hydration, and molecular exchange contributions and show how they can be computed with good statistical accuracy despite the dominant bulk-water contribution. The exchange contribution describes how the protein solution responds to an applied pressure by redistributing water molecules from lower to higher density; it is negligibly small for native proteins, but potentially important for non-native states. Because the hydration shell is an open system, the conventional closed-system compressibility definitions yield a pseudo-compressibility. We define an intrinsic shell compressibility, unaffected by occupation number fluctuations, and show that it approaches the bulk-water value exponentially with a decay "length" of one shell, less than the bulk-water compressibility correlation length. In the first hydration shell, the intrinsic compressibility is 25%-30% lower than

  13. Cosmological Particle Data Compression in Practice

    Science.gov (United States)

    Zeyen, M.; Ahrens, J.; Hagen, H.; Heitmann, K.; Habib, S.

    2017-12-01

    In cosmological simulations trillions of particles are handled and several terabytes of unstructured particle data are generated in each time step. Transferring this data directly from memory to disk in an uncompressed way results in a massive load on I/O and storage systems. Hence, one goal of domain scientists is to compress the data before storing it to disk while minimizing the loss of information. To prevent reading back uncompressed data from disk, this can be done in an in-situ process. Since the simulation continuously generates data, the available time for the compression of one time step is limited. Therefore, the evaluation of compression techniques has shifted from only focusing on compression rates to include run-times and scalability. In recent years several compression techniques for cosmological data have become available. These techniques can be either lossy or lossless, depending on the technique. For both cases, this study aims to evaluate and compare the state of the art compression techniques for unstructured particle data. This study focuses on the techniques available in the Blosc framework with its multi-threading support, the XZ Utils toolkit with the LZMA algorithm that achieves high compression rates, and the widespread FPZIP and ZFP methods for lossy compressions. For the investigated compression techniques, quantitative performance indicators such as compression rates, run-time/throughput, and reconstruction errors are measured. Based on these factors, this study offers a comprehensive analysis of the individual techniques and discusses their applicability for in-situ compression. In addition, domain specific measures are evaluated on the reconstructed data sets, and the relative error rates and statistical properties are analyzed and compared. Based on this study future challenges and directions in the compression of unstructured cosmological particle data were identified.
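
    The flavour of such a ratio/throughput comparison can be reproduced with Python's standard-library codecs standing in for Blosc, FPZIP and ZFP (the particle block below is synthetic single-precision data; a real in-situ study would use the packages named in the abstract):

        import bz2, lzma, time, zlib
        import numpy as np

        particles = np.random.default_rng(2).normal(size=(200_000, 3)).astype(np.float32)
        raw = particles.tobytes()

        for name, fn in [("zlib", zlib.compress), ("bz2", bz2.compress), ("lzma", lzma.compress)]:
            t0 = time.perf_counter()
            out = fn(raw)
            dt = time.perf_counter() - t0
            print(f"{name:4s}  ratio {len(raw) / len(out):5.2f}x  throughput {len(raw) / dt / 1e6:7.1f} MB/s")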

  14. EFFECTIVENESS OF ADJUVANT USE OF POSTERIOR MANUAL COMPRESSION WITH GRADED COMPRESSION IN THE SONOGRAPHIC DIAGNOSIS OF ACUTE APPENDICITIS

    Directory of Open Access Journals (Sweden)

    Senthilnathan V

    2018-01-01

    BACKGROUND: Diagnosing appendicitis by graded compression ultrasonography is a difficult task because of limiting factors such as operator-dependent technique, retrocaecal location of the appendix and patient obesity. The posterior manual compression technique visualizes the appendix better on grey-scale ultrasonography. The aim of this study is to determine the accuracy of ultrasound in detecting or excluding acute appendicitis and to evaluate the usefulness of the adjuvant use of the posterior manual compression technique in visualization of the appendix and in the diagnosis of acute appendicitis. MATERIALS AND METHODS: This prospective study involved a total of 240 patients of all age groups and both sexes. All these patients underwent USG for suspected appendicitis. Ultrasonography was performed with transverse and longitudinal graded compression sonography. If the appendix was not visualized on graded compression sonography, the posterior manual compression technique was used to further improve detection of the appendix. RESULTS: The vermiform appendix was visualized in 185 patients (77.1%) out of 240 with graded compression alone. The 55 out of 240 patients whose appendix could not be visualized by graded compression alone were subjected to graded compression followed by the posterior manual compression technique; the appendix was then visualized in 43 of these patients (78.2% of cases), while it could not be visualized in the remaining 12 patients (21.8%) out of 55. CONCLUSION: The combined method of graded compression with the posterior manual compression technique is better than the graded compression technique alone in diagnostic accuracy and detection rate of the vermiform appendix.

  15. Le point : Mondialisation, Agro-industries, Globalisation des Modes alimentaires et Nouveaux Enjeux Sanitaires : L’Agriculture Biologique, peut-elle être, une solution du problème [Globalization, Agri-industries, food globalization and sanitary new challenges : Organic agriculture, can it be a solution]

    OpenAIRE

    Hassini TSAKI

    2016-01-01

    Nutritionists, epidemiologists, biology researchers, physicians, agrobiologists and other related specialists still need today, in order to define and diagnose these nutrition- and public-health-related phenomena more accurately and to better integrate the available information, to favour the establishment of a collegial synthesis on the various aspects of the impact of our new eating habits in relation to the num...

  16. A statistical–mechanical view on source coding: physical compression and data compression

    International Nuclear Information System (INIS)

    Merhav, Neri

    2011-01-01

    We draw a certain analogy between the classical information-theoretic problem of lossy data compression (source coding) of memoryless information sources and the statistical–mechanical behavior of a certain model of a chain of connected particles (e.g. a polymer) that is subjected to a contracting force. The free energy difference pertaining to such a contraction turns out to be proportional to the rate-distortion function in the analogous data compression model, and the contracting force is proportional to the derivative of this function. Beyond the fact that this analogy may be interesting in its own right, it may provide a physical perspective on the behavior of optimum schemes for lossy data compression (and perhaps also an information-theoretic perspective on certain physical system models). Moreover, it triggers the derivation of lossy compression performance for systems with memory, using analysis tools and insights from statistical mechanics

  17. Nonlinear viscoelasticity of pre-compressed layered polymeric composite under oscillatory compression

    KAUST Repository

    Xu, Yangguang

    2018-05-03

    Describing the nonlinear viscoelastic properties of polymeric composites subjected to dynamic loading is essential for the development of practical applications of such materials. An efficient and easy method to analyze nonlinear viscoelasticity remains elusive because the dynamic moduli (storage modulus and loss modulus) are not very convenient once the material falls into the nonlinear viscoelastic range. In this study, we utilize two methods, Fourier transform and geometrical nonlinear analysis, to quantitatively characterize the nonlinear viscoelasticity of a pre-compressed layered polymeric composite under oscillatory compression. We discuss the influences of pre-compression, dynamic loading, and the inner structure of the polymeric composite on the nonlinear viscoelasticity. Furthermore, we reveal the nonlinear viscoelastic mechanism by combining these results with other experimental results from quasi-static compressive tests and microstructural analysis. From a methodology standpoint, it is proved that both Fourier transform and geometrical nonlinear analysis are efficient tools for analyzing the nonlinear viscoelasticity of a layered polymeric composite. From a material standpoint, we consequently posit that the dynamic nonlinear viscoelasticity of polymeric composites with complicated inner structures can also be well characterized using these methods.
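
    A minimal sketch of the Fourier-transform (FT-rheology style) side of such an analysis, on an assumed toy stress response rather than measured data: the relative intensity of the third harmonic, I3/I1, is a standard scalar measure of nonlinearity.

        import numpy as np

        f0, cycles, fs = 1.0, 20, 2000                     # excitation frequency, duration, sampling rate
        t = np.arange(0, cycles / f0, 1 / fs)
        strain = 0.2 * np.sin(2 * np.pi * f0 * t)
        # toy nonlinear response: a linear term plus a cubic term that generates a 3rd harmonic
        stress = 1e3 * strain + 4e3 * strain ** 3

        spec = np.abs(np.fft.rfft(stress))
        freqs = np.fft.rfftfreq(len(t), 1 / fs)
        I1 = spec[np.argmin(np.abs(freqs - f0))]
        I3 = spec[np.argmin(np.abs(freqs - 3 * f0))]
        print(f"I3/I1 = {I3 / I1:.3f}   (0 for a purely linear viscoelastic response)")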

  18. On-Chip Neural Data Compression Based On Compressed Sensing With Sparse Sensing Matrices.

    Science.gov (United States)

    Zhao, Wenfeng; Sun, Biao; Wu, Tong; Yang, Zhi

    2018-02-01

    On-chip neural data compression is an enabling technique for wireless neural interfaces that suffer from insufficient bandwidth and power budgets to transmit the raw data. The data compression algorithm and its implementation should be power and area efficient and functionally reliable over different datasets. Compressed sensing is an emerging technique that has been applied to compress various neurophysiological data. However, the state-of-the-art compressed sensing (CS) encoders leverage random but dense binary measurement matrices, which incur substantial implementation costs in both power and area that could offset the benefits from the reduced wireless data rate. In this paper, we propose two CS encoder designs based on sparse measurement matrices that could lead to efficient hardware implementation. Specifically, two different approaches for the construction of sparse measurement matrices, i.e., the deterministic quasi-cyclic array code (QCAC) matrix and the sparse random binary matrix (SRBM), are exploited. We demonstrate that the proposed CS encoders lead to comparable recovery performance. Efficient VLSI architecture designs are proposed for the QCAC-CS and SRBM encoders with reduced area and total power consumption.
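
    A generic sketch of CS encoding with a sparse binary measurement matrix (a fixed small number of ones per column, so each input sample costs only a few additions) plus a minimal orthogonal-matching-pursuit decoder. This illustrates the family of matrices discussed, not the QCAC construction itself; all sizes below are assumptions.

        import numpy as np

        rng = np.random.default_rng(3)
        n, m, d, k = 256, 64, 4, 5            # signal length, measurements, ones per column, sparsity

        # Phi: m x n binary matrix with d ones per column
        Phi = np.zeros((m, n))
        for j in range(n):
            Phi[rng.choice(m, size=d, replace=False), j] = 1.0

        x = np.zeros(n)
        x[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)   # k-sparse "neural" signal
        y = Phi @ x                                                    # cheap on-chip encoding

        def omp(Phi, y, k):
            # greedy orthogonal matching pursuit: pick k columns best matching the residual
            support, residual = [], y.copy()
            for _ in range(k):
                support.append(int(np.argmax(np.abs(Phi.T @ residual))))
                coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
                residual = y - Phi[:, support] @ coef
            x_hat = np.zeros(Phi.shape[1])
            x_hat[support] = coef
            return x_hat

        x_hat = omp(Phi, y, k)
        print("relative recovery error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))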

  19. FRESCO: Referential compression of highly similar sequences.

    Science.gov (United States)

    Wandelt, Sebastian; Leser, Ulf

    2013-01-01

    In many applications, sets of similar texts or sequences are of high importance. Prominent examples are revision histories of documents or genomic sequences. Modern high-throughput sequencing technologies are able to generate DNA sequences at an ever-increasing rate. In parallel to the decreasing experimental time and cost necessary to produce DNA sequences, computational requirements for analysis and storage of the sequences are steeply increasing. Compression is a key technology to deal with this challenge. Recently, referential compression schemes, storing only the differences between a to-be-compressed input and a known reference sequence, gained a lot of interest in this field. In this paper, we propose a general open-source framework to compress large amounts of biological sequence data called Framework for REferential Sequence COmpression (FRESCO). Our basic compression algorithm is shown to be one to two orders of magnitude faster than comparable related work, while achieving similar compression ratios. We also propose several techniques to further increase compression ratios, while still retaining the advantage in speed: 1) selecting a good reference sequence; and 2) rewriting a reference sequence to allow for better compression. In addition, we propose a new way of further boosting the compression ratios by applying referential compression to already referentially compressed files (second-order compression). This technique allows for compression ratios far beyond the state of the art, for instance, 4,000:1 and higher for human genomes. We evaluate our algorithms on a large data set from three different species (more than 1,000 genomes, more than 3 TB) and on a collection of versions of Wikipedia pages. Our results show that real-time compression of highly similar sequences at high compression ratios is possible on modern hardware.
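
    A toy referential encoder illustrates the principle of storing only differences against a reference; Python's difflib stands in here for the specialized index structures a tool such as FRESCO uses, and the sequences are synthetic.

        import difflib, random, zlib

        random.seed(0)
        reference = "".join(random.choice("ACGT") for _ in range(2000))
        target = reference[:800] + "GGGG" + reference[800:1600] + "TT" + reference[1605:]

        matcher = difflib.SequenceMatcher(None, reference, target, autojunk=False)
        ops = [(i1, i2, target[j1:j2])                   # (ref start, ref end, replacement text)
               for tag, i1, i2, j1, j2 in matcher.get_opcodes() if tag != "equal"]

        def decode(reference, ops):
            out, cursor = [], 0
            for i1, i2, repl in ops:                     # copy from the reference up to each edit
                out.append(reference[cursor:i1])
                out.append(repl)
                cursor = i2
            out.append(reference[cursor:])
            return "".join(out)

        assert decode(reference, ops) == target
        print("target size          :", len(target))
        print("referential encoding :", len(repr(ops).encode()))
        print("plain zlib, no ref   :", len(zlib.compress(target.encode())))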

  20. Comparing biological networks via graph compression

    Directory of Open Access Journals (Sweden)

    Hayashida Morihiro

    2010-09-01

    Background: Comparison of various kinds of biological data is one of the main problems in bioinformatics and systems biology. Data compression methods have been applied to comparison of large sequence data and protein structure data. Since it is still difficult to compare global structures of large biological networks, it is reasonable to try to apply data compression methods to comparison of biological networks. In existing compression methods, the uniqueness of compression results is not guaranteed because there is some ambiguity in selection of overlapping edges. Results: This paper proposes novel efficient methods, CompressEdge and CompressVertices, for comparing large biological networks. In the proposed methods, an original network structure is compressed by iteratively contracting identical edges and sets of connected edges. Then, the similarity of two networks is measured by a compression ratio of the concatenated networks. The proposed methods are applied to comparison of metabolic networks of several organisms, H. sapiens, M. musculus, A. thaliana, D. melanogaster, C. elegans, E. coli, S. cerevisiae, and B. subtilis, and are compared with an existing method. These results suggest that our methods can efficiently measure the similarities between metabolic networks. Conclusions: Our proposed algorithms, which compress node-labeled networks, are useful for measuring the similarity of large biological networks.

  1. Fixed-Rate Compressed Floating-Point Arrays.

    Science.gov (United States)

    Lindstrom, Peter

    2014-12-01

    Current compression schemes for floating-point data commonly take fixed-precision values and compress them to a variable-length bit stream, complicating memory management and random access. We present a fixed-rate, near-lossless compression scheme that maps small blocks of 4^d values in d dimensions to a fixed, user-specified number of bits per block, thereby allowing read and write random access to compressed floating-point data at block granularity. Our approach is inspired by fixed-rate texture compression methods widely adopted in graphics hardware, but has been tailored to the high dynamic range and precision demands of scientific applications. Our compressor is based on a new, lifted, orthogonal block transform and embedded coding, allowing each per-block bit stream to be truncated at any point if desired, thus facilitating bit rate selection using a single compression scheme. To avoid compression or decompression upon every data access, we employ a software write-back cache of uncompressed blocks. Our compressor has been designed with computational simplicity and speed in mind to allow for the possibility of a hardware implementation, and uses only a small number of fixed-point arithmetic operations per compressed value. We demonstrate the viability and benefits of lossy compression in several applications, including visualization, quantitative data analysis, and numerical simulation.
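
    A much-simplified fixed-rate sketch of the idea: every 4x4 block is transformed with an orthonormal DCT and only a fixed set of low-frequency coefficients per block is kept, so every block costs the same and stays randomly accessible. The real scheme uses a lifted transform plus embedded bit-plane coding, which this sketch does not attempt; the test field below is synthetic.

        import numpy as np

        def dct_matrix(n=4):
            k = np.arange(n)
            M = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
            M[0] *= np.sqrt(1 / n); M[1:] *= np.sqrt(2 / n)
            return M                                    # orthonormal DCT-II basis

        D = dct_matrix()
        MASK = np.add.outer(np.arange(4), np.arange(4)) <= 2   # keep 6 low-frequency slots per block

        def encode_block(block):
            return (D @ block @ D.T)[MASK]              # fixed 6 coefficients per block

        def decode_block(kept):
            coef = np.zeros((4, 4)); coef[MASK] = kept
            return D.T @ coef @ D

        rng = np.random.default_rng(4)
        x = np.cumsum(rng.normal(size=(64, 64)), axis=1)          # smooth-ish test field
        blocks = x.reshape(16, 4, 16, 4).transpose(0, 2, 1, 3)
        rec = np.empty_like(blocks)
        for i in range(16):
            for j in range(16):
                rec[i, j] = decode_block(encode_block(blocks[i, j]))
        rec = rec.transpose(0, 2, 1, 3).reshape(64, 64)
        # 16 values -> 6 stored values per block (element count only, ignoring bit-level coding)
        print("ratio %.2f:1, rms error %.3f" % (16 / 6, np.sqrt(np.mean((x - rec) ** 2))))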

  2. Evaluation of a wavelet-based compression algorithm applied to the silicon drift detectors data of the ALICE experiment at CERN

    International Nuclear Information System (INIS)

    Falchieri, Davide; Gandolfi, Enzo; Masotti, Matteo

    2004-01-01

    This paper evaluates the performance of a wavelet-based compression algorithm applied to the data produced by the silicon drift detectors of the ALICE experiment at CERN. This compression algorithm is a general-purpose lossy technique; in other words, its application could prove useful for a wide range of other data-reduction problems. The design targets relevant for our wavelet-based compression algorithm are the following: a high compression coefficient, a reconstruction error as small as possible, and a very limited execution time. Interestingly, the results obtained are quite close to the ones achieved by the algorithm implemented in the first prototype of the chip CARLOS, the chip that will be used in the silicon drift detector readout chain.
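
    The general wavelet threshold-compression recipe (the same family of technique, not the specific ALICE/CARLOS algorithm) can be sketched with the PyWavelets package, which is assumed to be installed; the detector-like signal below is synthetic.

        import numpy as np
        import pywt

        signal2d = np.cumsum(np.random.default_rng(5).normal(size=(64, 64)), axis=0)

        coeffs = pywt.wavedec2(signal2d, "db2", level=3)
        arr, slices = pywt.coeffs_to_array(coeffs)

        thresh = np.percentile(np.abs(arr), 90)            # keep only the largest ~10% of coefficients
        arr_c = pywt.threshold(arr, thresh, mode="hard")

        rec = pywt.waverec2(pywt.array_to_coeffs(arr_c, slices, output_format="wavedec2"), "db2")
        rec = rec[:64, :64]                                # reconstruction may be padded by a row/column

        ratio = arr.size / np.count_nonzero(arr_c)
        rmse = np.sqrt(np.mean((signal2d - rec) ** 2))
        print(f"kept coefficients -> compression ~{ratio:.1f}:1, reconstruction RMSE {rmse:.3f}")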

  3. Windowless gas target with gas-dynamical focussing of an ultrasonic neutral gas flow

    International Nuclear Information System (INIS)

    Tietsch, W.; Bethge, K.; Feist, H.; Schopper, E.

    1975-11-01

    The construction of a gas jet target for heavy ion reactions is reported. The spatial compression shockwaves in a supersonic flow behind a Laval nozzle are used as a target. The target thickness can be varied by the choice of the nozzle pressure and the static pressure in the expansion chamber. All gases can be used. (WL) [de

  4. JPEG and wavelet compression of ophthalmic images

    Science.gov (United States)

    Eikelboom, Robert H.; Yogesan, Kanagasingam; Constable, Ian J.; Barry, Christopher J.

    1999-05-01

    This study was designed to determine the degree and methods of digital image compression that produce ophthalmic images of sufficient quality for transmission and diagnosis. The photographs of 15 subjects, which included eyes with normal, subtle and distinct pathologies, were digitized to produce 1.54 MB images and compressed to five different levels using JPEG and wavelet methods. Image quality was assessed in three ways: (i) objectively, by calculating the RMS error between the uncompressed and compressed images; (ii) semi-subjectively, by assessing the visibility of blood vessels; and (iii) subjectively, by asking a number of experienced observers to assess the images for quality and clinical interpretation. Results showed that, as a function of compressed image size, wavelet-compressed images produced less RMS error than JPEG-compressed images. Blood vessel branching could be observed to a greater extent after wavelet compression than after JPEG compression, and wavelet compression produced better images than JPEG compression for a given image size. Overall, it was shown that images had to be compressed to below 2.5 percent for JPEG and 1.7 percent for wavelet compression before fine detail was lost, or before image quality was too poor to make a reliable diagnosis.

  5. Double-compression method for biomedical images

    Science.gov (United States)

    Antonenko, Yevhenii A.; Mustetsov, Timofey N.; Hamdi, Rami R.; Małecka-Massalska, Teresa; Orshubekov, Nurbek; DzierŻak, RóŻa; Uvaysova, Svetlana

    2017-08-01

    This paper describes a double compression method (DCM) for biomedical images. A comparison of image compression factors for JPEG, PNG and the developed DCM was carried out. The main purpose of the DCM is compression of medical images while maintaining the key points that carry diagnostic information. To estimate the minimum compression factor, an analysis of the coding of a random-noise image is presented.

  6. Drift Compression and Final Focus for Intense Heavy Ion Beams with Non-periodic, Time-dependent Lattice

    International Nuclear Information System (INIS)

    Hong Qin; Davidson, Ronald C.; Barnard, John J.; Lee, Edward P.

    2005-01-01

    In the currently envisioned configurations for heavy ion fusion, it is necessary to longitudinally compress the beam bunches by a large factor after the acceleration phase. Because the space-charge force increases as the beam is compressed, the beam size in the transverse direction will increase in a periodic quadrupole lattice. If active control of the beam size is desired, a larger focusing force is needed to confine the beam in the transverse direction, and a non-periodic quadrupole lattice along the beam path is necessary. In this paper, we describe the design of such a focusing lattice using the transverse envelope equations. A drift compression and final focus lattice should focus the entire beam pulse onto the same focal spot on the target. This is difficult with a fixed lattice, because different slices of the beam may have different perveance and emittance. Four time-dependent magnets are introduced upstream of the drift compression section to focus the entire pulse onto the same focal spot. Drift compression and final focusing schemes are developed for a typical heavy ion fusion driver and for the Integrated Beam Experiment (IBX) being designed by the Heavy Ion Fusion Virtual National Laboratory.
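
    For reference, one standard form of the transverse (KV-type) envelope equations used for this kind of lattice design is sketched below in generic notation (the paper's exact conventions may differ):

        $$ a'' + \kappa_q(s)\,a - \frac{2K}{a+b} - \frac{\varepsilon_x^2}{a^3} = 0, \qquad
           b'' - \kappa_q(s)\,b - \frac{2K}{a+b} - \frac{\varepsilon_y^2}{b^3} = 0, $$

    where a(s) and b(s) are the transverse envelope radii, \kappa_q(s) is the quadrupole focusing function, K is the space-charge perveance and \varepsilon_{x,y} are the transverse emittances. As the bunch is compressed longitudinally, K grows, which is why a fixed periodic \kappa_q(s) can no longer hold the envelopes constant and a non-periodic, tailored lattice is needed.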

  7. A Compressive Superresolution Display

    KAUST Repository

    Heide, Felix; Gregson, James; Wetzstein, Gordon; Raskar, Ramesh; Heidrich, Wolfgang

    2014-01-01

    In this paper, we introduce a new compressive display architecture for superresolution image presentation that exploits co-design of the optical device configuration and compressive computation. Our display allows for superresolution, HDR, or glasses-free 3D presentation.

  8. A Compressive Superresolution Display

    KAUST Repository

    Heide, Felix

    2014-06-22

    In this paper, we introduce a new compressive display architecture for superresolution image presentation that exploits co-design of the optical device configuration and compressive computation. Our display allows for superresolution, HDR, or glasses-free 3D presentation.

  9. Compression evaluation of surgery video recordings retaining diagnostic credibility (compression evaluation of surgery video)

    Science.gov (United States)

    Duplaga, M.; Leszczuk, M. I.; Papir, Z.; Przelaskowski, A.

    2008-12-01

    Wider dissemination of medical digital video libraries is affected by two correlated factors: resource-effective content compression and the direct influence of that compression on diagnostic credibility. It has been proved that it is possible to meet these contradictory requirements halfway for long-lasting and low-motion surgery recordings at compression ratios close to 100 (bronchoscopic procedures were the case study investigated). As the main supporting assumption, it has been accepted that the content can be compressed as far as clinicians are not able to sense a loss of video diagnostic fidelity (a visually lossless compression). Different market codecs were inspected by means of combined subjective and objective tests toward their usability in medical video libraries. Subjective tests involved a panel of clinicians who had to classify compressed bronchoscopic video content according to its quality under the bubble sort algorithm. For objective tests, two metrics (a hybrid vector measure and Hosaka plots) were calculated frame by frame and averaged over a whole sequence.

  10. Compression experiments on the TOSKA tokamak

    International Nuclear Information System (INIS)

    Cima, G.; McGuire, K.M.; Robinson, D.C.; Wootton, A.J.

    1980-10-01

    Results from minor radius compression experiments on a tokamak plasma in TOSCA are reported. The compression is achieved by increasing the toroidal field up to twice its initial value in 200 μs. Measurements show that particles and magnetic flux are conserved. When the initial energy confinement time is comparable with the compression time, energy gains are greater than for an adiabatic change of state. The total beta value increases. Central beta values of approximately 3% are measured when a small major radius compression is superimposed on a minor radius compression. Magnetic field fluctuations are affected: both the amplitude and period decrease. Starting from low energy confinement times of approximately 200 μs, increases in confinement times up to approximately 1 ms are measured. The increase in plasma energy results from a large reduction in the power losses during the compression. When the initial energy confinement time is much longer than the compression time, the parameter changes are those expected for an adiabatic change of state. (author)

  11. Context-Aware Image Compression.

    Directory of Open Access Journals (Sweden)

    Jacky C K Chan

    We describe a physics-based data compression method inspired by the photonic time stretch, wherein information-rich portions of the data are dilated in a process that emulates the effect of group velocity dispersion on temporal signals. With this coding operation, the data can be downsampled at a lower rate than without it. In contrast to previous implementations of warped stretch compression, here the decoding can be performed without the need for phase recovery. We present a rate-distortion analysis and show improvement in PSNR compared to compression via uniform downsampling.

  12. Compressive Sensing in Communication Systems

    DEFF Research Database (Denmark)

    Fyhn, Karsten

    2013-01-01

    The need for cheaper, smarter and more energy efficient wireless devices is greater now than ever. This thesis addresses this problem and concerns the application of the recently developed sampling theory of compressive sensing in communication systems. Compressive sensing is the merging of signal acquisition and compression. It allows for sampling a signal with a rate below the bound dictated by the celebrated Shannon-Nyquist sampling theorem. In some communication systems this necessary minimum sample rate, dictated by the Shannon-Nyquist sampling theorem, is so high it is at the limit of what ... with using compressive sensing in communication systems. The main contribution of this thesis is two-fold: 1) a new compressive sensing hardware structure for spread spectrum signals, which is simpler than the current state-of-the-art, and 2) a range of algorithms for parameter estimation for the class...

  13. Building indifferentiable compression functions from the PGV compression functions

    DEFF Research Database (Denmark)

    Gauravaram, P.; Bagheri, Nasour; Knudsen, Lars Ramkilde

    2016-01-01

    Preneel, Govaerts and Vandewalle (PGV) analysed the security of single-block-length block cipher based compression functions assuming that the underlying block cipher has no weaknesses. They showed that 12 out of 64 possible compression functions are collision and (second) preimage resistant. Black, Rogaway and Shrimpton formally proved this result in the ideal cipher model. However, in the indifferentiability security framework introduced by Maurer, Renner and Holenstein, all these 12 schemes are easily differentiable from a fixed input-length random oracle (FIL-RO) even when their underlying block...
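
    For concreteness, one of the 12 secure PGV constructions is Davies-Meyer, H_i = E_{m_i}(H_{i-1}) xor H_{i-1}, where the message block keys the cipher. A short sketch with AES standing in for the ideal block cipher (requires a recent version of the third-party 'cryptography' package; the zero padding and zero IV are assumptions made purely for illustration):

        from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

        def davies_meyer(message: bytes, iv: bytes = b"\x00" * 16) -> bytes:
            # pad to whole 16-byte blocks (simple zero padding, for illustration only)
            message += b"\x00" * (-len(message) % 16)
            h = iv
            for i in range(0, len(message), 16):
                block = message[i:i + 16]                       # the message block acts as the cipher key
                enc = Cipher(algorithms.AES(block), modes.ECB()).encryptor()
                h = bytes(a ^ b for a, b in zip(enc.update(h), h))   # E_m(h) xor h
            return h

        print(davies_meyer(b"compression function demo").hex())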

  14. Evaluation of a new image compression technique

    International Nuclear Information System (INIS)

    Algra, P.R.; Kroon, H.M.; Noordveld, R.B.; DeValk, J.P.J.; Seeley, G.W.; Westerink, P.H.

    1988-01-01

    The authors present the evaluation of a new image compression technique, subband coding using vector quantization, on 44 CT examinations of the upper abdomen. Three independent radiologists reviewed the original images and compressed versions. The compression ratios used were 16:1 and 20:1. Receiver operating characteristic analysis showed no difference in the diagnostic contents between originals and their compressed versions. Subjective visibility of anatomic structures was equal. Except for a few 20:1 compressed images, the observers could not distinguish compressed versions from original images. They conclude that subband coding using vector quantization is a valuable method for data compression in CT scans of the abdomen

  15. The Distinction of Hot Herbal Compress, Hot Compress, and Topical Diclofenac as Myofascial Pain Syndrome Treatment.

    Science.gov (United States)

    Boonruab, Jurairat; Nimpitakpong, Netraya; Damjuti, Watchara

    2018-01-01

    This randomized controlled trial aimed to investigate the differences after treatment among hot herbal compress, hot compress, and topical diclofenac. Participants were equally divided into groups and received the different treatments: hot herbal compress, hot compress, or topical diclofenac, the last serving as the control group. After the treatment courses, the Visual Analog Scale and the 36-Item Short Form Health Survey were used to establish the level of pain intensity and quality of life, respectively. In addition, cervical range of motion and pressure pain threshold were examined to identify the motional effects. All treatments showed a significantly decreased level of pain intensity and an increased cervical range of motion, while the intervention groups performed markedly better than the topical diclofenac group in pressure pain threshold and quality of life. In summary, hot herbal compress holds promise as an efficacious treatment comparable to hot compress and topical diclofenac.

  16. Compression of the digitized X-ray images

    International Nuclear Information System (INIS)

    Terae, Satoshi; Miyasaka, Kazuo; Fujita, Nobuyuki; Takamura, Akio; Irie, Goro; Inamura, Kiyonari.

    1987-01-01

    Medical images occupy an increasing amount of storage space in hospitals, yet they are not easily accessed. A suitable data filing system and effective data compression are therefore needed. Image quality was evaluated before and after image data compression, using a local filing system (MediFile 1000, NEC Co.) and forty-seven compression parameter modes. For this study, X-ray images of 10 plain radiographs and 7 contrast examinations were digitized using a film reader with a CCD sensor in MediFile 1000. Those images were compressed into forty-seven kinds of image data, saved on an optical disc, and then reconstructed. Each reconstructed image was compared with the non-compressed images, with respect to several regions of interest, by four radiologists. Compression and extension of radiological images were performed promptly with the local filing system. Image quality was affected much more by the compression ratio than by the parameter mode itself; in other words, the higher the compression ratio became, the worse the image quality was. However, image quality was not significantly degraded until the compression ratio reached about 15:1 on plain radiographs and about 8:1 on contrast studies. Image compression by this technique should be acceptable in diagnostic radiology. (author)

  17. Introduction to compressible fluid flow

    CERN Document Server

    Oosthuizen, Patrick H

    2013-01-01

    Introduction; The Equations of Steady One-Dimensional Compressible Flow; Some Fundamental Aspects of Compressible Flow; One-Dimensional Isentropic Flow; Normal Shock Waves; Oblique Shock Waves; Expansion Waves - Prandtl-Meyer Flow; Variable Area Flows; Adiabatic Flow with Friction; Flow with Heat Transfer; Linearized Analysis of Two-Dimensional Compressible Flows; Hypersonic and High-Temperature Flows; High-Temperature Gas Effects; Low-Density Flows; Bibliography; Appendices

  18. Development and assessment of compression technique for medical images using neural network. I. Assessment of lossless compression

    International Nuclear Information System (INIS)

    Fukatsu, Hiroshi

    2007-01-01

    This paper describes an assessment of the lossless compression performance of a new efficient compression technique (the JIS system) using a neural network, which the author and co-workers have recently developed. First, the theory for encoding and decoding the data is explained. The assessment is performed on 55 images each of chest digital roentgenography, digital mammography, 64-row multi-slice CT, 1.5 Tesla MRI, positron emission tomography (PET) and digital subtraction angiography, which are lossless-compressed by the present JIS system to determine the compression rate and any loss. For comparison, the same data are also JPEG lossless-compressed. The personal computer (PC) used is an Apple MacBook Pro configured with Boot Camp for a Windows environment. The present JIS system is found to be more than 4 times more efficient than the usual compression methods, compressing the file volume to only 1/11 on average, and is thus an important response to the ever-increasing volume of medical imaging data. (R.T.)

  19. A comparative experimental study on engine operating on premixed charge compression ignition and compression ignition mode

    Directory of Open Access Journals (Sweden)

    Bhiogade Girish E.

    2017-01-01

    New combustion concepts have recently been developed with the purpose of tackling the problem of high emission levels of traditional direct-injection Diesel engines. A good example is premixed charge compression ignition combustion, a strategy in which early injection is used, causing the fuel to burn under premixed conditions. In compression ignition engines, soot (particulate matter) and NOx emissions are a largely unsolved issue. Premixed charge compression ignition is one of the most promising solutions that combine the advantages of both spark ignition and compression ignition combustion modes. It gives thermal efficiency close to that of compression ignition engines and simultaneously resolves the associated issues of high NOx and particulate matter. Premixing of air and fuel preparation is the challenging part of achieving premixed charge compression ignition combustion. In the present experimental study, a diesel vaporizer is used to achieve premixed charge compression ignition combustion. Vaporized diesel fuel was mixed with air to form a premixed charge and inducted into the cylinder during the intake stroke. Low diesel volatility remains the main obstacle in preparing a premixed air-fuel mixture. Exhaust gas recirculation can be used to control the rate of heat release. The objective of this study is to reduce exhaust emission levels while maintaining thermal efficiency close to that of a compression ignition engine.

  20. High-energy few-cycle pulse compression through self-channeling in gases

    International Nuclear Information System (INIS)

    Hauri, C.; Merano, M.; Trisorio, A.; Canova, F.; Canova, L.; Lopez-Martens, R.; Ruchon, T.; Engquist, A.; Varju, K.; Gustafsson, E.

    2006-01-01

    successfully scaled this pulse compression regime to the multi-mJ level. In this case, temporally clean 10-fs pulses with energies up to 1.8 mJ could be generated through self-channeling of 3.5 mJ, 40-fs pulses. Again, the spectrally broadened pulses are seen to carry large negative chirps and the shortest measured pulse duration of 9.5-fs was achieved by inserting more than 1 cm of glass in order to compensate for the negative GDD. Single-shot measurements show the exceptional shot-to-shot stability of this pulse compression scheme. In conclusion, we demonstrate a stable nonlinear pulse compression technique based on self-channeling of intense 40-fs pulses in gases, in which ultrashort pulses are efficiently generated with unexpected large negative chirp. The strongly pre-compensated spectral phase characteristics of such few-cycle pulses makes them a practical driving source for further high-field applications since the shortest pulse durations can be achieved on target by simple propagation through bulk material (e.g. vacuum window). Simulations involving the solution to the nonlinear propagation equation are underway in order to understand this unexpected pulse compression regime and to find optimal conditions for further scaling in energy.

  1. Pulsed Compression Reactor

    Energy Technology Data Exchange (ETDEWEB)

    Roestenberg, T. [University of Twente, Enschede (Netherlands)

    2012-06-07

    The advantages of the Pulsed Compression Reactor (PCR) over internal combustion engine-type chemical reactors are briefly discussed. Over the last four years a project concerning the fundamentals of the PCR technology has been performed by the University of Twente, Enschede, Netherlands. In order to assess the feasibility of applying the PCR principle to the conversion of methane to syngas, several fundamental questions needed to be answered. Two important questions that relate to the applicability of the PCR for any process are: how large is the heat transfer rate from a rapidly compressed and expanded volume of gas, and how does this heat transfer rate compare to the energy contained in the compressed gas? And: can stable operation with a completely free piston, as intended for the PCR, be achieved?

  2. AMPLIFICATION AND COMPRESSION OF THE TEXT AND ITS TITLE AS A MEANS OF CONVEYING THE INFORMATION STRUCTURE

    Directory of Open Access Journals (Sweden)

    Buyanova, E.V.

    2017-03-01

    This article takes stock of the basic notions of information structure. There are two communicative goals to satisfy: making the information conveyed by the discourse easier for the reader or hearer to understand, and indicating what the enunciator considers to be the most important. When translating from one language into another, the information structure in most cases remains unchanged. However, the text in the target language may not always be completely clear to the new recipient for a number of reasons, such as social and national differences between speakers of the two languages, or a lack of realia in the target language. In this case the information structure needs extension in the form of descriptions, definitions and commentaries. This results either in amplification of the text in the target language or in its compression. The present work is based on an analysis of papers from American and British journals and periodicals. The article also deals with the peculiarities of the metaphor as a means of broader text compression in the titles of newspaper articles.

  3. Compressing Data Cube in Parallel OLAP Systems

    Directory of Open Access Journals (Sweden)

    Frank Dehne

    2007-03-01

    This paper proposes an efficient algorithm to compress the cubes during parallel data cube generation. This low-overhead compression mechanism provides block-by-block and record-by-record compression by using tuple difference coding techniques, thereby maximizing the compression ratio and minimizing the decompression penalty at run-time. The experimental results demonstrate that the typical compression ratio is about 30:1 without sacrificing running time. This paper also demonstrates that the compression method is suitable for the Hilbert space-filling curve, a mechanism widely used in multi-dimensional indexing.
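
    A minimal sketch of tuple-difference coding on made-up cube rows: storing each sorted dimension tuple as its difference from the previous tuple yields mostly tiny values that compress far better than the raw tuples (the dimensions below are assumptions of this sketch).

        import zlib

        rows = sorted((y, m, p) for y in range(2000, 2010)
                                for m in range(1, 13)
                                for p in range(50))           # (year, month, product id)

        def delta_encode(rows):
            prev = (0, 0, 0)
            out = []
            for t in rows:
                out.append(tuple(a - b for a, b in zip(t, prev)))   # record-by-record difference
                prev = t
            return out

        deltas = delta_encode(rows)
        print("plain tuples, zlib :", len(zlib.compress(repr(rows).encode())))
        print("delta tuples, zlib :", len(zlib.compress(repr(deltas).encode())))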

  4. La ruée vers l’or. Nouveaux écrans, nouvelles recettes ?

    Directory of Open Access Journals (Sweden)

    Joël Augros

    2012-06-01

    Once based solely on the theatrical market, the sources of film revenue multiplied with the appearance of new screens (television, video), adopting a sequential model (release windows). After a period of relative stabilization in the 1980s, the arrival of further screens on the market (internet, portable screens) has undermined the established sequencing in France and the United States. While in France the film financing system and the crucial place of Canal Plus in film revenues are leading to a shortening of the sequence without industry professionals envisaging the disappearance of the model, in the United States the decline of the video market has put the outright abolition of windowing on the agenda. Independent companies in particular are increasingly experimenting with releasing films on all available screens, under the interested eye of the majors, which stand ready to adopt the model should it prove relevant. Stemming ab initio exclusively from the theatre box-office, revenues of films diversified when television and video became alternative ways of screening movies. Thus, a business model was established and stabilized in the 1980s along screening windows. The digital revolution broke this business model in France as well as in the United States. However, while in France the film investment schemes and Canal Plus Group's established position as a major source of finance led to the shortening of release windows but not to the disappearance of the release windows format, in the United States the declining DVD sales and revenues forced the major studios to study new methods of distribution and new business models, based on day-and-date releases and on the gradual dissolution of the windows model. The major companies have been carefully watching the independent film companies' experimentations with release windows, ready to adopt this new business model if

  5. Composite Techniques Based Color Image Compression

    Directory of Open Access Journals (Sweden)

    Zainab Ibrahim Abood

    2017-03-01

    Compression of color images is now necessary for transmission and storage in databases, since color gives a pleasing and natural appearance to any object. Three composite-technique-based color image compression schemes are therefore implemented to achieve high compression, no loss of the original image content, better performance and good image quality. These techniques are the composite stationary wavelet technique (S), the composite wavelet technique (W) and the composite multi-wavelet technique (M). For the high-energy sub-band of the 3rd level of each composite transform in each composite technique, the compression parameters are calculated. The best composite transform among the 27 types is the three-level multi-wavelet transform (MMM) in the M technique, which has the highest values of energy (En) and compression ratio (CR) and the lowest values of bits per pixel (bpp), time (T) and rate distortion R(D). The values of the compression parameters of the color image are also nearly the same as the average values of the compression parameters of the three bands of the same image.

  6. Atomic effect algebras with compression bases

    International Nuclear Information System (INIS)

    Caragheorgheopol, Dan; Tkadlec, Josef

    2011-01-01

    Compression base effect algebras were recently introduced by Gudder [Demonstr. Math. 39, 43 (2006)]. They generalize sequential effect algebras [Rep. Math. Phys. 49, 87 (2002)] and compressible effect algebras [Rep. Math. Phys. 54, 93 (2004)]. The present paper focuses on atomic compression base effect algebras and the consequences of atoms being foci (so-called projections) of the compressions in the compression base. Part of our work generalizes results obtained in atomic sequential effect algebras by Tkadlec [Int. J. Theor. Phys. 47, 185 (2008)]. The notion of projection-atomicity is introduced and studied, and several conditions that force a compression base effect algebra or the set of its projections to be Boolean are found. Finally, we apply some of these results to sequential effect algebras and strengthen a previously established result concerning a sufficient condition for them to be Boolean.

  7. Laser targets: introduction

    International Nuclear Information System (INIS)

    Rosen, M.D.

    1985-01-01

    The laser target design group was engaged in three main tasks in 1984: (1) analyzing Novette implosion and hohlraum-scaling data, (2) planning for the first experiments on Nova, and (3) designing laboratory x-ray laser targets and experiments. The Novette implosion and hohlraum scaling data are mostly classified and are therefore not discussed in detail here. The authors achieved average final/initial pusher ρr ratios of about 50, some 3 times higher than the value achieved in the best Shiva shots. These ρr values imply a fuel compression to 100 times liquid density, although this figure and other aspects of the experiments are subject to further interpretation because of detailed questions of target symmetry and stability. Their main long-term goal for Nova is to produce a so-called hydrodynamically equivalent target (HET) - that is, a target whose hydrodynamic behavior (implosion velocity, convergence ratio, symmetry and stability requirements, etc.) is very much like that of a high-gain target, but one that is scaled down in size to match the energy available from Nova and is too small to achieve enough hot-spot ρr to ignite the cold, near-Fermi-degenerate fuel around it. Their goal for Nova's first year is to do experiments that will teach them how to achieve the symmetry and stability conditions required by an HET.

  8. Speech Data Compression using Vector Quantization

    OpenAIRE

    H. B. Kekre; Tanuja K. Sarode

    2008-01-01

    Transforms, which are lossy algorithms, are mostly used for speech data compression. Such algorithms are tolerable for speech data compression since the loss in quality is not perceived by the human ear. However, vector quantization (VQ) has the potential to give higher data compression while maintaining the same quality. In this paper we propose a speech data compression algorithm using the vector quantization technique. We have used the VQ algorithms LBG, KPE and FCG. The results table s...
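
    A compact LBG-style sketch (codebook splitting followed by Lloyd refinement) on synthetic frame vectors; a real speech codec would train on windowed speech samples and typically use a much larger codebook, and KPE/FCG are not attempted here.

        import numpy as np

        rng = np.random.default_rng(6)
        frames = rng.normal(size=(2000, 8))               # 8-sample vectors to be quantized

        def lbg(data, codebook_size, iters=10, eps=1e-3):
            codebook = data.mean(axis=0, keepdims=True)   # start from the global centroid
            while len(codebook) < codebook_size:
                codebook = np.vstack([codebook * (1 + eps), codebook * (1 - eps)])  # split step
                for _ in range(iters):                    # Lloyd refinement
                    idx = np.argmin(((data[:, None, :] - codebook[None]) ** 2).sum(-1), axis=1)
                    for k in range(len(codebook)):
                        members = data[idx == k]
                        if len(members):
                            codebook[k] = members.mean(axis=0)
            return codebook

        cb = lbg(frames, 16)
        idx = np.argmin(((frames[:, None, :] - cb[None]) ** 2).sum(-1), axis=1)
        distortion = np.mean(((frames - cb[idx]) ** 2).sum(-1))
        # each 8-sample frame is now represented by a 4-bit codebook index
        print("codebook 16 x 8, mean distortion per frame:", round(float(distortion), 3))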

  9. Advances in compressible turbulent mixing

    International Nuclear Information System (INIS)

    Dannevik, W.P.; Buckingham, A.C.; Leith, C.E.

    1992-01-01

    This volume includes some recent additions to original material prepared for the Princeton International Workshop on the Physics of Compressible Turbulent Mixing, held in 1988. Workshop participants were asked to emphasize the physics of the compressible mixing process rather than measurement techniques or computational methods. Actual experimental results and their meaning were given precedence over discussions of new diagnostic developments. Theoretical interpretations and understanding were stressed rather than the exposition of new analytical model developments or advances in numerical procedures. By design, compressibility influences on turbulent mixing were discussed--almost exclusively--from the perspective of supersonic flow field studies. The papers are arranged in three topical categories: Foundations, Vortical Domination, and Strongly Coupled Compressibility. The Foundations category is a collection of seminal studies that connect current study in compressible turbulent mixing with compressible, high-speed turbulent flow research that almost vanished about two decades ago. A number of contributions are included on flow instability initiation, evolution, and transition between the states of unstable flow onset through those descriptive of fully developed turbulence. The Vortical Domination category includes theoretical and experimental studies of coherent structures, vortex pairing, vortex-dynamics-influenced pressure focusing. In the Strongly Coupled Compressibility category the organizers included the high-speed turbulent flow investigations in which the interaction of shock waves could be considered an important source for production of new turbulence or for the enhancement of pre-existing turbulence. Individual papers are processed separately

  10. Advances in compressible turbulent mixing

    Energy Technology Data Exchange (ETDEWEB)

    Dannevik, W.P.; Buckingham, A.C.; Leith, C.E. [eds.

    1992-01-01

    This volume includes some recent additions to original material prepared for the Princeton International Workshop on the Physics of Compressible Turbulent Mixing, held in 1988. Workshop participants were asked to emphasize the physics of the compressible mixing process rather than measurement techniques or computational methods. Actual experimental results and their meaning were given precedence over discussions of new diagnostic developments. Theoretical interpretations and understanding were stressed rather than the exposition of new analytical model developments or advances in numerical procedures. By design, compressibility influences on turbulent mixing were discussed--almost exclusively--from the perspective of supersonic flow field studies. The papers are arranged in three topical categories: Foundations, Vortical Domination, and Strongly Coupled Compressibility. The Foundations category is a collection of seminal studies that connect current study in compressible turbulent mixing with compressible, high-speed turbulent flow research that almost vanished about two decades ago. A number of contributions are included on flow instability initiation, evolution, and transition between the states of unstable flow onset through those descriptive of fully developed turbulence. The Vortical Domination category includes theoretical and experimental studies of coherent structures, vortex pairing, vortex-dynamics-influenced pressure focusing. In the Strongly Coupled Compressibility category the organizers included the high-speed turbulent flow investigations in which the interaction of shock waves could be considered an important source for production of new turbulence or for the enhancement of pre-existing turbulence. Individual papers are processed separately.

  11. Study of CSR longitudinal bunch compression cavity

    International Nuclear Information System (INIS)

    Yin Dayu; Li Peng; Liu Yong; Xie Qingchun

    2009-01-01

    The scheme of the longitudinal bunch compression cavity for the Cooling Storage Ring (CSR) is an important issue. Plasma physics experiments require a high-density heavy-ion beam and a short pulsed bunch, which can be produced by non-adiabatic compression of the bunch, implemented as a fast compression with a 90-degree rotation in the longitudinal phase space. The phase space rotation in fast compression is initiated by a fast jump of the RF-voltage amplitude. For this purpose, the CSR longitudinal bunch compression cavity, loaded with FINEMET-FT-1M, is studied and simulated with the MAFIA code. In this paper, the CSR longitudinal bunch compression cavity is simulated, and the initial bunch length of ²³⁸U⁷²⁺ at 250 MeV/u will be compressed from 200 ns to 50 ns. The construction and RF properties of the CSR longitudinal bunch compression cavity are also simulated and calculated with the MAFIA code. The operation frequency of the cavity is 1.15 MHz with a peak voltage of 80 kV, and the cavity can be used to compress heavy ions in the CSR. (authors)

  12. Application of current guidelines for chest compression depth on different surfaces and using feedback devices: a randomized cross-over study.

    Science.gov (United States)

    Schober, P; Krage, R; Lagerburg, V; Van Groeningen, D; Loer, S A; Schwarte, L A

    2014-04-01

    Current cardiopulmonary resuscitation (CPR) guidelines recommend an increased chest compression depth and rate compared to previous guidelines, and the use of automatic feedback devices is encouraged. However, it is unclear whether this compression depth can be maintained at an increased frequency. Moreover, the underlying surface may influence the accuracy of feedback devices. We investigated compression depths over time and evaluated the accuracy of a feedback device on different surfaces. Twenty-four volunteers performed four two-minute blocks of CPR targeting current guideline recommendations on different surfaces (floor, mattress, 2 backboards) on a patient simulator. Participants rested for 2 minutes between blocks. Influences of time and different surfaces on chest compression depth (ANOVA, mean [95% CI]) and the accuracy of a feedback device in determining compression depth (Bland-Altman) were assessed. Mean compression depth did not reach the recommended depth and decreased over time during all blocks (first block: from 42 mm [39-46 mm] to 39 mm [37-42 mm]). A two-minute resting period was insufficient to restore compression depth to baseline. No differences in compression depth were observed on different surfaces. The feedback device slightly underestimated compression depth on the floor (bias -3.9 mm), but markedly overestimated it on the mattress (bias +12.6 mm). This overestimation was eliminated after correcting compression depth with a second sensor between manikin and mattress. Strategies are needed to improve chest compression depth, and more than two providers should alternate with chest compressions. The underlying surface does not necessarily adversely affect CPR performance but influences the accuracy of feedback devices. Accuracy is improved by a second, posterior, sensor.
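
    The two-sensor correction mentioned above amounts to a simple subtraction; the sketch below is only an illustration of that arithmetic (the function and variable names are assumptions, not the study's instrumentation).

```python
def effective_compression_depth(sternal_travel_mm: float, mattress_travel_mm: float) -> float:
    """Depth actually delivered to the chest = total downward travel measured at the
    sternum minus the mattress displacement recorded by the posterior sensor."""
    return sternal_travel_mm - mattress_travel_mm

# Example: 63 mm of total travel on a mattress that gives way by 13 mm -> 50 mm effective depth.
print(effective_compression_depth(63.0, 13.0))
```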

  13. Flux compression generators as plasma compression power sources

    International Nuclear Information System (INIS)

    Fowler, C.M.; Caird, R.S.; Erickson, D.J.; Freeman, B.L.; Thomson, D.B.; Garn, W.B.

    1979-01-01

    A survey is made of applications where explosive-driven magnetic flux compression generators have been or can be used to directly power devices that produce dense plasmas. Representative examples are discussed that are specific to the theta pinch, the plasma gun, the dense plasma focus and the Z pinch. These examples are used to illustrate the high energy and power capabilities of explosive generators. An application employing a rocket-borne, generator-powered plasma gun emphasizes the size and weight potential of flux compression power supplies. Recent results from a local effort to drive a dense plasma focus are provided. Imploding liners are discussed in the context of both the theta and Z pinches.

  14. Compression of Probabilistic XML Documents

    Science.gov (United States)

    Veldman, Irma; de Keijzer, Ander; van Keulen, Maurice

    Database techniques to store, query and manipulate data that contains uncertainty receive increasing research interest. Such uncertain database management systems (UDBMSs) can be classified according to their underlying data model: relational, XML, or RDF. We focus on uncertain XML DBMSs, with the Probabilistic XML model (PXML) of [10,9] as a representative example. The size of a PXML document is obviously a factor in performance. There are PXML-specific techniques to reduce the size, such as a push-down mechanism that produces equivalent but more compact PXML documents. It can only be applied, however, where possibilities are dependent. For normal XML documents there also exist several techniques for compressing a document. Since Probabilistic XML is (a special form of) normal XML, it might benefit from these methods even more. In this paper, we show that existing compression mechanisms can be combined with PXML-specific compression techniques. We also show that the best compression rates are obtained by combining a PXML-specific technique with a rather simple generic DAG-compression technique.
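
    The generic DAG-compression step mentioned above can be sketched by hash-consing subtrees so that identical subtrees are stored only once; the tuple-based tree encoding below is an illustrative assumption, not the representation used in the paper.

```python
import hashlib

def dag_compress(node, pool):
    """Hash-cons a (tag, children) tree: identical subtrees collapse to one shared entry,
    turning the tree into a DAG; returns the identifier of the (shared) node."""
    tag, children = node
    child_ids = tuple(dag_compress(child, pool) for child in children)
    key = hashlib.sha1(repr((tag, child_ids)).encode()).hexdigest()
    pool.setdefault(key, (tag, child_ids))      # repeated subtrees map to the same entry
    return key

# Two identical <item> subtrees are stored once: 5 unique nodes for an 8-node tree.
item = ("item", [("name", []), ("price", [])])
doc = ("catalog", [item, item, ("misc", [])])
pool = {}
root_id = dag_compress(doc, pool)
print(len(pool), "unique nodes stored")
```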

  15. Anisotropic Concrete Compressive Strength

    DEFF Research Database (Denmark)

    Gustenhoff Hansen, Søren; Jørgensen, Henrik Brøner; Hoang, Linh Cao

    2017-01-01

    When the load carrying capacity of existing concrete structures is (re-)assessed, it is often based on the compressive strength of cores drilled out from the structure. Existing studies show that the core compressive strength is anisotropic; i.e. it depends on whether the cores are drilled parallel...

  16. Experiments with automata compression

    NARCIS (Netherlands)

    Daciuk, J.; Yu, S; Daley, M; Eramian, M G

    2001-01-01

    Several compression methods of finite-state automata are presented and evaluated. Most compression methods used here are already described in the literature. However, their impact on the size of automata has not been described yet. We fill that gap, presenting results of experiments carried out on

  17. Limiting density ratios in piston-driven compressions

    International Nuclear Information System (INIS)

    Lee, S.

    1985-07-01

    By using global energy and pressure balance applied to a shock model it is shown that for a piston-driven fast compression, the maximum compression ratio is not dependent on the absolute magnitude of the piston power, but rather on the power pulse shape. Specific cases are considered and a maximum density compression ratio of 27 is obtained for a square-pulse power compressing a spherical pellet with specific heat ratio of 5/3. Double pulsing enhances the density compression ratio to 1750 in the case of linearly rising compression pulses. Using this method further enhancement by multiple pulsing becomes obvious. (author)

  18. Cyclops: single-pixel imaging lidar system based on compressive sensing

    Science.gov (United States)

    Magalhães, F.; Correia, M. V.; Farahi, F.; Pereira do Carmo, J.; Araújo, F. M.

    2017-11-01

    Mars and the Moon are envisaged as major destinations of future space exploration missions in the upcoming decades. Imaging LIDARs are seen as a key enabling technology in the support of autonomous guidance, navigation and control operations, as they can provide the very accurate, wide-range, high-resolution distance measurements required by the exploration missions. Imaging LIDARs can be used at critical stages of these exploration missions, such as descent and selection of safe landing sites, rendezvous and docking manoeuvres, or robotic surface navigation and exploration. Although these devices have been commercially available and used for a long time in diverse metrology and ranging applications, their size, mass and power consumption are still far from being suitable and attractive for space exploratory missions. Here, we describe a compact Single-Pixel Imaging LIDAR System that is based on a compressive sensing technique. The application of the compressive codes to a DMD array enables compression of the spatial information, while the collection of timing histograms correlated to the pulsed laser source ensures image reconstruction at the ranged distances. Single-pixel cameras have been compared with raster-scanning and array-based counterparts in terms of noise performance, and proved to be superior. Since a single photodetector is used, a better SNR and higher reliability are expected in contrast with systems using large-format photodetector arrays. Furthermore, the failure of one or more micromirror elements in the DMD does not prevent full reconstruction of the images. This brings additional robustness to the proposed 3D imaging LIDAR. The prototype that was implemented has three modes of operation. Range Finder: outputs the average distance between the system and the area of the target under illumination; Attitude Meter: provides the slope of the target surface based on distance measurements in three areas of the target; 3D Imager: produces 3D ranged

  19. Compressibility, turbulence and high speed flow

    CERN Document Server

    Gatski, Thomas B

    2013-01-01

    Compressibility, Turbulence and High Speed Flow introduces the reader to the field of compressible turbulence and compressible turbulent flows across a broad speed range, through a unique complementary treatment of both the theoretical foundations and the measurement and analysis tools currently used. The book provides the reader with the necessary background and current trends in the theoretical and experimental aspects of compressible turbulent flows and compressible turbulence. Detailed derivations of the pertinent equations describing the motion of such turbulent flows are provided, and an extensive discussion of the various approaches used in predicting both free shear and wall-bounded flows is presented. Experimental measurement techniques common to the compressible flow regime are introduced, with particular emphasis on the unique challenges presented by high speed flows. Both experimental and numerical simulation work is supplied throughout to provide the reader with an overall perspective of current tre...

  20. Compressed normalized block difference for object tracking

    Science.gov (United States)

    Gao, Yun; Zhang, Dengzhuo; Cai, Donglan; Zhou, Hao; Lan, Ge

    2018-04-01

    Feature extraction is very important for robust and real-time tracking. Compressive sensing has provided technical support for real-time feature extraction. However, all existing compressive trackers have been based on the compressed Haar-like feature, and how to compress richer high-dimensional features is worth researching. In this paper, a novel compressed normalized block difference (CNBD) feature is proposed. To resist noise more effectively than the high-dimensional normalized pixel difference (NPD) feature, the normalized block difference feature extends the two pixels in the original NPD formula to two blocks. A CNBD feature is obtained by compressing a normalized block difference feature based on compressive sensing theory, with a sparse random Gaussian matrix as the measurement matrix. Comparative experiments with 7 trackers on 20 challenging sequences showed that the tracker based on the CNBD feature performs better than the other trackers, especially the FCT tracker based on the compressed Haar-like feature, in terms of AUC, SR and precision.
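
    The compression step described above is essentially a random projection of a high-dimensional block-difference feature; the sketch below illustrates that step only (the block-difference definition, patch size and projected dimension are assumptions, not the exact CNBD formulation).

```python
import numpy as np

rng = np.random.default_rng(0)

def normalized_block_difference(patch, block=4):
    """Toy block-difference feature: normalized pairwise differences of block means."""
    h, w = patch.shape
    means = patch.reshape(h // block, block, w // block, block).mean(axis=(1, 3)).ravel()
    i, j = np.triu_indices(len(means), k=1)
    return (means[i] - means[j]) / (means[i] + means[j] + 1e-8)

def compress_feature(feature, m=50):
    """Project onto m dimensions with a random Gaussian measurement matrix (CS-style)."""
    phi = rng.normal(size=(m, feature.size)) / np.sqrt(m)
    return phi @ feature

patch = rng.random((32, 32))                 # stand-in for an image patch around the target
f = normalized_block_difference(patch)       # high-dimensional NPD-like feature
y = compress_feature(f, m=50)                # low-dimensional compressed feature for tracking
```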

  1. Beam collimation and energy spectrum compression of laser-accelerated proton beams using solenoid field and RF cavity

    Energy Technology Data Exchange (ETDEWEB)

    Teng, J.; Gu, Y.Q., E-mail: tengjian@mail.ustc.edu.cn; Zhu, B.; Hong, W.; Zhao, Z.Q.; Zhou, W.M.; Cao, L.F.

    2013-11-21

    This paper presents a new method of laser-produced proton beam collimation and spectrum compression using a combination of a solenoid field and an RF cavity. The solenoid collects laser-driven protons efficiently within an angle smaller than 12 degrees because it is mounted a few millimeters from the target, and it collimates protons with energies around 2.3 MeV. The collimated proton beam then passes through an RF cavity to allow compression of the spectrum. Particle-in-cell (PIC) simulations demonstrate the proton beam transport in the solenoid and RF electric fields. Excellent energy compression and collection efficiency of protons are presented. This method of proton beam optimization is suitable for high repetition-rate laser-accelerated proton beams, which could be used as an injector for a conventional proton accelerator.

  2. Beam collimation and energy spectrum compression of laser-accelerated proton beams using solenoid field and RF cavity

    Science.gov (United States)

    Teng, J.; Gu, Y. Q.; Zhu, B.; Hong, W.; Zhao, Z. Q.; Zhou, W. M.; Cao, L. F.

    2013-11-01

    This paper presents a new method of laser-produced proton beam collimation and spectrum compression using a combination of a solenoid field and an RF cavity. The solenoid collects laser-driven protons efficiently within an angle smaller than 12 degrees because it is mounted a few millimeters from the target, and it collimates protons with energies around 2.3 MeV. The collimated proton beam then passes through an RF cavity to allow compression of the spectrum. Particle-in-cell (PIC) simulations demonstrate the proton beam transport in the solenoid and RF electric fields. Excellent energy compression and collection efficiency of protons are presented. This method of proton beam optimization is suitable for high repetition-rate laser-accelerated proton beams, which could be used as an injector for a conventional proton accelerator.

  3. Beam collimation and energy spectrum compression of laser-accelerated proton beams using solenoid field and RF cavity

    International Nuclear Information System (INIS)

    Teng, J.; Gu, Y.Q.; Zhu, B.; Hong, W.; Zhao, Z.Q.; Zhou, W.M.; Cao, L.F.

    2013-01-01

    This paper presents a new method of laser-produced proton beam collimation and spectrum compression using a combination of a solenoid field and an RF cavity. The solenoid collects laser-driven protons efficiently within an angle smaller than 12 degrees because it is mounted a few millimeters from the target, and it collimates protons with energies around 2.3 MeV. The collimated proton beam then passes through an RF cavity to allow compression of the spectrum. Particle-in-cell (PIC) simulations demonstrate the proton beam transport in the solenoid and RF electric fields. Excellent energy compression and collection efficiency of protons are presented. This method of proton beam optimization is suitable for high repetition-rate laser-accelerated proton beams, which could be used as an injector for a conventional proton accelerator.

  4. Investigation of fusion gain in fast ignition with conical targets

    Directory of Open Access Journals (Sweden)

    MJ Tabatabaei

    2011-03-01

    Fast ignition is a new scheme for inertial confinement fusion (ICF). In this scheme, the interaction of an ultraintense laser beam with the hohlraum wall surrounding a capsule containing deuterium-tritium (D-T) fuel first causes implosion and compression of the fuel to high density; laser-produced protons then penetrate into the compressed fuel and deposit their energy in it, creating the ignition hot spot. In this paper, following the energy gain of a spherical target and considering the relationship of the burn fraction to the burn duration, we obtain the energy gain of conical targets characterized by the angle β, and find that a hemispherical capsule (β=π/2) has a gain as high as 96% of that of the whole spherical capsule. The results obtained in this study are qualitatively consistent with Atzeni et al.'s simulation studies.

  5. 30 CFR 77.412 - Compressed air systems.

    Science.gov (United States)

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 false Compressed air systems. 77.412 Section 77.412 ... for Mechanical Equipment § 77.412 Compressed air systems. (a) Compressors and compressed-air receivers ... involving the pressure system of compressors, receivers, or compressed-air-powered equipment shall not be ...

  6. Two divergent paths: compression vs. non-compression in deep venous thrombosis and post thrombotic syndrome

    Directory of Open Access Journals (Sweden)

    Eduardo Simões Da Matta

    Use of compression therapy to reduce the incidence of post-thrombotic syndrome among patients with deep venous thrombosis is a controversial subject, and there is no consensus on the use of elastic versus inelastic compression, or on the levels and duration of compression. Inelastic devices, with a higher static stiffness index, combine a relatively small and comfortable pressure at rest with a standing pressure strong enough to restore the "valve mechanism" generated by plantar flexion and dorsiflexion of the foot. Since the static stiffness index depends on the rigidity of the compression system and the muscle strength within the bandaged area, improvement of muscle mass with muscle-strengthening programs and endurance training should be encouraged. In the acute phase of deep venous thrombosis events, anticoagulation combined with inelastic compression therapy can reduce the extension of the thrombus. Nevertheless, prospective studies evaluating the effectiveness of inelastic therapy in deep venous thrombosis and post-thrombotic syndrome are needed.

  7. Application of content-based image compression to telepathology

    Science.gov (United States)

    Varga, Margaret J.; Ducksbury, Paul G.; Callagy, Grace

    2002-05-01

    Telepathology is a means of practicing pathology at a distance, viewing images on a computer display rather than directly through a microscope. Without compression, images take too long to transmit to a remote location and are very expensive to store for future examination. However, to date the use of compressed images in pathology remains controversial. This is because commercial image compression algorithms such as JPEG achieve data compression without knowledge of the diagnostic content. Often images are lossily compressed at the expense of corrupting informative content. None of the currently available lossy compression techniques are concerned with what information has been preserved and what data has been discarded. Their sole objective is to compress and transmit the images as fast as possible. By contrast, this paper presents a novel image compression technique, which exploits knowledge of the slide diagnostic content. This 'content based' approach combines visually lossless and lossy compression techniques, judiciously applying each in the appropriate context across an image so as to maintain 'diagnostic' information while still maximising the possible compression. Standard compression algorithms, e.g. wavelets, can still be used, but their use in a context sensitive manner can offer high compression ratios and preservation of diagnostically important information. When compared with lossless compression the novel content-based approach can potentially provide the same degree of information with a smaller amount of data. When compared with lossy compression it can provide more information for a given amount of compression. The precise gain in the compression performance depends on the application (e.g. database archive or second opinion consultation) and the diagnostic content of the images.

  8. Poor chest compression quality with mechanical compressions in simulated cardiopulmonary resuscitation: a randomized, cross-over manikin study.

    Science.gov (United States)

    Blomberg, Hans; Gedeborg, Rolf; Berglund, Lars; Karlsten, Rolf; Johansson, Jakob

    2011-10-01

    Mechanical chest compression devices are being implemented as an aid in cardiopulmonary resuscitation (CPR), despite lack of evidence of improved outcome. This manikin study evaluates the CPR-performance of ambulance crews, who had a mechanical chest compression device implemented in their routine clinical practice 8 months previously. The objectives were to evaluate time to first defibrillation, no-flow time, and estimate the quality of compressions. The performance of 21 ambulance crews (ambulance nurse and emergency medical technician) with the authorization to perform advanced life support was studied in an experimental, randomized cross-over study in a manikin setup. Each crew performed two identical CPR scenarios, with and without the aid of the mechanical compression device LUCAS. A computerized manikin was used for data sampling. There were no substantial differences in time to first defibrillation or no-flow time until first defibrillation. However, the fraction of adequate compressions in relation to total compressions was remarkably low in LUCAS-CPR (58%) compared to manual CPR (88%) (95% confidence interval for the difference: 13-50%). Only 12 out of the 21 ambulance crews (57%) applied the mandatory stabilization strap on the LUCAS device. The use of a mechanical compression aid was not associated with substantial differences in time to first defibrillation or no-flow time in the early phase of CPR. However, constant but poor chest compressions due to failure in recognizing and correcting a malposition of the device may counteract a potential benefit of mechanical chest compressions. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  9. Efficient predictive algorithms for image compression

    CERN Document Server

    Rosário Lucas, Luís Filipe; Maciel de Faria, Sérgio Manuel; Morais Rodrigues, Nuno Miguel; Liberal Pagliari, Carla

    2017-01-01

    This book discusses efficient prediction techniques for the current state-of-the-art High Efficiency Video Coding (HEVC) standard, focusing on the compression of a wide range of video signals, such as 3D video, Light Fields and natural images. The authors begin with a review of the state-of-the-art predictive coding methods and compression technologies for both 2D and 3D multimedia contents, which provides a good starting point for new researchers in the field of image and video compression. New prediction techniques that go beyond the standardized compression technologies are then presented and discussed. In the context of 3D video, the authors describe a new predictive algorithm for the compression of depth maps, which combines intra-directional prediction, with flexible block partitioning and linear residue fitting. New approaches are described for the compression of Light Field and still images, which enforce sparsity constraints on linear models. The Locally Linear Embedding-based prediction method is in...

  10. Medullary compression syndrome

    International Nuclear Information System (INIS)

    Barriga T, L.; Echegaray, A.; Zaharia, M.; Pinillos A, L.; Moscol, A.; Barriga T, O.; Heredia Z, A.

    1994-01-01

    The authors performed a retrospective study of 105 patients treated in the Radiotherapy Department of the National Institute of Neoplasmic Diseases from 1973 to 1992. The objective of this evaluation was to determine the influence of radiotherapy in patients with medullary compression syndrome with respect to pain palliation and improvement of functional impairment. Treatment sheets of patients with medullary compression were reviewed: 32 out of 39 patients (82%) who came to hospital by their own means continued walking after treatment; 8 out of 66 patients (12%) who came in a wheelchair or were bedridden could mobilize on their own after treatment; and 41 patients (64%) had partial alleviation of pain after treatment. Functional improvement was observed in those who came by their own means and did not change their characteristics. It is concluded that radiotherapy offers palliative benefit in patients with medullary compression syndrome. (authors). 20 refs., 5 figs., 6 tabs

  11. Comparison of changes in tidal volume associated with expiratory rib cage compression and expiratory abdominal compression in patients on prolonged mechanical ventilation.

    Science.gov (United States)

    Morino, Akira; Shida, Masahiro; Tanaka, Masashi; Sato, Kimihiro; Seko, Toshiaki; Ito, Shunsuke; Ogawa, Shunichi; Takahashi, Naoaki

    2015-07-01

    [Purpose] This study was designed to compare and clarify the relationship between expiratory rib cage compression and expiratory abdominal compression in patients on prolonged mechanical ventilation, with a focus on tidal volume. [Subjects and Methods] The subjects were 18 patients on prolonged mechanical ventilation, who had undergone tracheostomy. Each patient received expiratory rib cage compression and expiratory abdominal compression; the order of implementation was randomized. Subjects were positioned in a 30° lateral recumbent position, and a 2-kgf compression was applied. For expiratory rib cage compression, the rib cage was compressed unilaterally; for expiratory abdominal compression, the area directly above the navel was compressed. Tidal volume values were the actual measured values divided by body weight. [Results] Tidal volume values were as follows: at rest, 7.2 ± 1.7 mL/kg; during expiratory rib cage compression, 8.3 ± 2.1 mL/kg; during expiratory abdominal compression, 9.1 ± 2.2 mL/kg. There was a significant difference between the tidal volume during expiratory abdominal compression and that at rest. The tidal volume in expiratory rib cage compression was strongly correlated with that in expiratory abdominal compression. [Conclusion] These results indicate that expiratory abdominal compression may be an effective alternative to the manual breathing assist procedure.

  12. Image compression-encryption scheme based on hyper-chaotic system and 2D compressive sensing

    Science.gov (United States)

    Zhou, Nanrun; Pan, Shumin; Cheng, Shan; Zhou, Zhihong

    2016-08-01

    Most image encryption algorithms based on low-dimensional chaotic systems bear security risks and suffer from data expansion when adopting nonlinear transformation directly. To overcome these weaknesses and reduce the possible transmission burden, an efficient image compression-encryption scheme based on a hyper-chaotic system and 2D compressive sensing is proposed. The original image is measured by measurement matrices in two directions to achieve compression and encryption simultaneously, and then the resulting image is re-encrypted by a cycle shift operation controlled by a hyper-chaotic system. The cycle shift operation can change the values of the pixels efficiently. The proposed cryptosystem decreases the volume of data to be transmitted and simplifies the key distribution simultaneously as a nonlinear encryption system. Simulation results verify the validity and the reliability of the proposed algorithm with acceptable compression and security performance.
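
    A minimal sketch of the two-direction measurement step followed by a row-wise cycle shift is given below; the matrix sizes, the random measurement matrices and the placeholder shift sequence are illustrative assumptions (the paper derives the shifts from a hyper-chaotic system and has its own key handling).

```python
import numpy as np

rng = np.random.default_rng(42)

def measure_2d(image, m1, m2):
    """Measure an n1 x n2 image along both directions: Y = Phi1 @ X @ Phi2.T."""
    n1, n2 = image.shape
    phi1 = rng.normal(size=(m1, n1)) / np.sqrt(m1)
    phi2 = rng.normal(size=(m2, n2)) / np.sqrt(m2)
    return phi1 @ image @ phi2.T, (phi1, phi2)

def cycle_shift_rows(measurements, shifts):
    """Re-encrypt by cyclically shifting each row; the shifts play the role of the chaotic keystream."""
    out = measurements.copy()
    for r, s in enumerate(shifts):
        out[r] = np.roll(out[r], s)
    return out

x = rng.random((64, 64))                                  # stand-in for the plaintext image
y, keys = measure_2d(x, m1=32, m2=32)                     # compression + first-stage encryption
shifts = rng.integers(0, y.shape[1], size=y.shape[0])     # placeholder for hyper-chaotic output
ciphertext = cycle_shift_rows(y, shifts)
```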

  13. MP3 compression of Doppler ultrasound signals.

    Science.gov (United States)

    Poepping, Tamie L; Gill, Jeremy; Fenster, Aaron; Holdsworth, David W

    2003-01-01

    The effect of lossy, MP3 compression on spectral parameters derived from Doppler ultrasound (US) signals was investigated. Compression was tested on signals acquired from two sources: 1. phase quadrature and 2. stereo audio directional output. A total of 11, 10-s acquisitions of Doppler US signal were collected from each source at three sites in a flow phantom. Doppler signals were digitized at 44.1 kHz and compressed using four grades of MP3 compression (in kilobits per second, kbps; compression ratios in brackets): 1400 kbps (uncompressed), 128 kbps (11:1), 64 kbps (22:1) and 32 kbps (44:1). Doppler spectra were characterized by peak velocity, mean velocity, spectral width, integrated power and ratio of spectral power between negative and positive velocities. The results suggest that MP3 compression on digital Doppler US signals is feasible at 128 kbps, with a resulting 11:1 compression ratio, without compromising clinically relevant information. Higher compression ratios led to significant differences for both signal sources when compared with the uncompressed signals. Copyright 2003 World Federation for Ultrasound in Medicine & Biology

  14. Fast and Adaptive Lossless On-Board Hyperspectral Data Compression System for Space Applications

    Science.gov (United States)

    Aranki, Nazeeh; Bakhshi, Alireza; Keymeulen, Didier; Klimesh, Matthew

    2009-01-01

    Efficient on-board lossless hyperspectral data compression reduces the data volume necessary to meet NASA and DoD limited downlink capabilities. The technique also improves signature extraction, object recognition and feature classification capabilities by providing exact reconstructed data on constrained downlink resources. At JPL, a novel, adaptive and predictive technique for lossless compression of hyperspectral data was recently developed. This technique uses an adaptive filtering method and achieves a combination of low complexity and compression effectiveness that far exceeds state-of-the-art techniques currently in use. The JPL-developed 'Fast Lossless' algorithm requires no training data or other specific information about the nature of the spectral bands for a fixed instrument dynamic range. It is of low computational complexity and thus well suited for implementation in hardware, which makes it practical for flight implementations of pushbroom instruments. A prototype of the compressor (and decompressor) is available in software, but this implementation may not meet the speed and real-time requirements of some space applications. Hardware acceleration provides performance improvements of 10x-100x vs. the software implementation (about 1M samples/sec on a Pentium IV machine). This paper describes a hardware implementation of the JPL-developed 'Fast Lossless' compression algorithm on a Field Programmable Gate Array (FPGA). The FPGA implementation targets current state-of-the-art FPGAs (Xilinx Virtex IV and V families) and compresses one sample every clock cycle to provide a fast and practical real-time solution for space applications.
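
    The general idea of adaptive predictive lossless coding can be illustrated with the toy sketch below; this is not the JPL 'Fast Lossless' algorithm, only a stand-in that predicts each band from the previous one with a causally estimated gain (the band count, pixel count and entropy estimate are assumptions for illustration).

```python
import numpy as np

def adaptive_band_prediction(cube):
    """Residuals of a simple adaptive inter-band predictor: each band is predicted from the
    previous band with a gain estimated only from already-decoded bands, so a decoder can
    reproduce the prediction without side information."""
    residuals = np.empty_like(cube)
    residuals[:2] = cube[:2]                               # first two bands are sent raw
    for b in range(2, cube.shape[0]):
        x, y = cube[b - 2].astype(float), cube[b - 1].astype(float)
        gain = (x @ y) / (x @ x + 1e-12)                   # causal gain estimate
        residuals[b] = cube[b] - np.rint(gain * cube[b - 1])
    return residuals

def entropy_bits(a):
    """Bits per sample an ideal entropy coder would need for this sample distribution."""
    _, counts = np.unique(a, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# Toy hyperspectral cube: 50 spectrally correlated bands of 1024 pixels each.
rng = np.random.default_rng(0)
cube = np.cumsum(rng.integers(-2, 3, size=(50, 1024)), axis=0)
res = adaptive_band_prediction(cube)
print(f"raw: {entropy_bits(cube):.2f} bits/sample, residuals: {entropy_bits(res):.2f} bits/sample")
```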

  15. Plasma heating by adiabatic compression

    International Nuclear Information System (INIS)

    Ellis, R.A. Jr.

    1972-01-01

    These two lectures will cover the following three topics: (i) The application of adiabatic compression to toroidal devices is reviewed. The special case of adiabatic compression in tokamaks is considered in more detail, including a discussion of the equilibrium, scaling laws, and heating effects. (ii) The ATC (Adiabatic Toroidal Compressor) device which was completed in May 1972, is described in detail. Compression of a tokamak plasma across a static toroidal field is studied in this device. The device is designed to produce a pre-compression plasma with a major radius of 17 cm, toroidal field of 20 kG, and current of 90 kA. The compression leads to a plasma with major radius of 38 cm and minor radius of 10 cm. Scaling laws imply a density increase of a factor 6, temperature increase of a factor 3, and current increase of a factor 2.4. An additional feature of ATC is that it is a large tokamak which operates without a copper shell. (iii) Data which show that the expected MHD behavior is largely observed is presented and discussed. (U.S.)

  16. Concurrent data compression and protection

    International Nuclear Information System (INIS)

    Saeed, M.

    2009-01-01

    Data compression techniques involve transforming data of a given format, called the source message, into data of a smaller format, called the codeword. The primary objective of data encryption is to ensure the security of data if it is intercepted by an eavesdropper. It transforms data of a given format, called plaintext, into another format, called ciphertext, using an encryption key or keys. Thus, combining the processes of compression and encryption must be done in this order, that is, compression followed by encryption, because all compression techniques rely heavily on the redundancies that are inherently part of regular text or speech. The aim of this research is to combine the process of compression (using an existing scheme) with a new encryption scheme that is compatible with the encoding scheme embedded in the encoder. The technique proposed by the authors is new, unique and highly secure. The deployment of a 'sentinel marker' enhances the security of the proposed TR-One algorithm from 2^44 ciphertexts to 2^44 + 2^20 ciphertexts, thus imposing extra challenges on intruders. (author)
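
    The compress-then-encrypt ordering argued for above can be sketched with off-the-shelf components; zlib and Fernet are stand-ins chosen for illustration (the paper's own TR-One scheme and its sentinel marker are not reproduced here).

```python
import zlib

from cryptography.fernet import Fernet  # third-party package: pip install cryptography

def compress_then_encrypt(plaintext: bytes, key: bytes) -> bytes:
    # Compression must come first: after encryption the data looks random,
    # so a compressor would find no redundancy left to exploit.
    return Fernet(key).encrypt(zlib.compress(plaintext, 9))

def decrypt_then_decompress(token: bytes, key: bytes) -> bytes:
    return zlib.decompress(Fernet(key).decrypt(token))

key = Fernet.generate_key()
message = b"a regular text full of redundancies " * 100
sealed = compress_then_encrypt(message, key)
assert decrypt_then_decompress(sealed, key) == message
print(len(message), "->", len(sealed), "bytes")
```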

  17. Information theory and rate distortion theory for communications and compression

    CERN Document Server

    Gibson, Jerry

    2013-01-01

    This book is very specifically targeted to problems in communications and compression by providing the fundamental principles and results in information theory and rate distortion theory for these applications and presenting methods that have proved and will prove useful in analyzing and designing real systems. The chapters contain treatments of entropy, mutual information, lossless source coding, channel capacity, and rate distortion theory; however, it is the selection, ordering, and presentation of the topics within these broad categories that is unique to this concise book. While the cover

  18. Inertial Confinement Fusion as an Extreme Example of Dynamic Compression

    Science.gov (United States)

    Moses, E.

    2013-06-01

    Initiating and controlling thermonuclear burn at the National Ignition Facility (NIF) will require the manipulation of matter to extreme energy densities. We will discuss recent advances both in controlling the dynamic compression of ignition targets and in our understanding of the physical states and processes leading to ignition. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory in part under Contract W-7405-Eng-48 and in part under Contract DE-AC52-07NA27344.

  19. Compressible Fluid Suspension Performance Testing

    National Research Council Canada - National Science Library

    Hoogterp, Francis

    2003-01-01

    ... compressible fluid suspension system that was designed and installed on the vehicle by DTI. The purpose of the tests was to evaluate the possible performance benefits of the compressible fluid suspension system...

  20. Systolic Compression of Epicardial Coronary and Intramural Arteries

    Science.gov (United States)

    Mohiddin, Saidi A.; Fananapazir, Lameh

    2002-01-01

    It has been suggested that systolic compression of epicardial coronary arteries is an important cause of myocardial ischemia and sudden death in children with hypertrophic cardiomyopathy. We examined the associations between sudden death, systolic coronary compression of intra- and epicardial arteries, myocardial perfusion abnormalities, and severity of hypertrophy in children with hypertrophic cardiomyopathy. We reviewed the angiograms from 57 children with hypertrophic cardiomyopathy for the presence of coronary and septal artery compression; coronary compression was present in 23 (40%). The left anterior descending artery was most often affected, and multiple sites were found in 4 children. Myocardial perfusion abnormalities were more frequently present in children with coronary compression than in those without (94% vs 47%, P = 0.002). Coronary compression was also associated with more severe septal hypertrophy and greater left ventricular outflow gradient. Septal branch compression was present in 65% of the children and was significantly associated with coronary compression, severity of septal hypertrophy, and outflow obstruction. Multivariate analysis showed that septal thickness and septal branch compression, but not coronary compression, were independent predictors of perfusion abnormalities. Coronary compression was not associated with symptom severity, ventricular tachycardia, or a worse prognosis. We conclude that compression of coronary arteries and their septal branches is common in children with hypertrophic cardiomyopathy and is related to the magnitude of left ventricular hypertrophy. Our findings suggest that coronary compression does not make an important contribution to myocardial ischemia in hypertrophic cardiomyopathy; however, left ventricular hypertrophy and compression of intramural arteries may contribute significantly. (Tex Heart Inst J 2002;29:290–8) PMID:12484613

  1. Insertion profiles of 4 headless compression screws.

    Science.gov (United States)

    Hart, Adam; Harvey, Edward J; Lefebvre, Louis-Philippe; Barthelat, Francois; Rabiei, Reza; Martineau, Paul A

    2013-09-01

    In practice, the surgeon must rely on screw position (insertion depth) and tactile feedback from the screwdriver (insertion torque) to gauge compression. In this study, we identified the relationship between interfragmentary compression and these 2 factors. The Acutrak Standard, Acutrak Mini, Synthes 3.0, and Herbert-Whipple implants were tested using a polyurethane foam scaphoid model. A specialized testing jig simultaneously measured compression force, insertion torque, and insertion depth at half-screw-turn intervals until failure occurred. The peak compression occurs at an insertion depth of -3.1 mm, -2.8 mm, 0.9 mm, and 1.5 mm for the Acutrak Mini, Acutrak Standard, Herbert-Whipple, and Synthes screws respectively (insertion depth is positive when the screw is proud above the bone and negative when buried). The compression and insertion torque at a depth of -2 mm were found to be 113 ± 18 N and 0.348 ± 0.052 Nm for the Acutrak Standard, 104 ± 15 N and 0.175 ± 0.008 Nm for the Acutrak Mini, 78 ± 9 N and 0.245 ± 0.006 Nm for the Herbert-Whipple, and 67 ± 2 N and 0.233 ± 0.010 Nm for the Synthes headless compression screws. All 4 screws generated a sizable amount of compression (> 60 N) over a wide range of insertion depths. The compression at the commonly recommended insertion depth of -2 mm was not significantly different between screws; thus, implant selection should not be based on compression profile alone. Conically shaped screws (Acutrak) generated their peak compression when they were fully buried in the foam, whereas the shanked screws (Synthes and Herbert-Whipple) reached peak compression before they were fully inserted. Because insertion torque correlated poorly with compression, surgeons should avoid using tactile judgment of torque as a proxy for compression. Knowledge of the insertion profile may improve our understanding of the implants, provide a better basis for comparing screws, and enable the surgeon to optimize compression.

  2. Energy Conservation In Compressed Air Systems

    International Nuclear Information System (INIS)

    Yusuf, I.Y.; Dewu, B.B.M.

    2004-01-01

    Compressed air is an essential utility that accounts for a substantial part of the electricity consumption (bill) in most industrial plants. Although the general saying that air is free of charge is not true for compressed air, the utility's cost is not accorded the rightful importance by most industries. The paper will show that the cost of one unit of energy in the form of compressed air is at least 5 times the cost of the electricity (energy input) required to produce it. The paper will also provide energy conservation tips for compressed air systems.

  3. Compressed Data Structures for Range Searching

    DEFF Research Database (Denmark)

    Bille, Philip; Gørtz, Inge Li; Vind, Søren Juhl

    2015-01-01

    matrices and web graphs. Our contribution is twofold. First, we show how to compress geometric repetitions that may appear in standard range searching data structures (such as K-D trees, Quad trees, Range trees, R-trees, Priority R-trees, and K-D-B trees), and how to implement subsequent range queries on the compressed representation with only a constant factor overhead. Secondly, we present a compression scheme that efficiently identifies geometric repetitions in point sets, and produces a hierarchical clustering of the point sets, which combined with the first result leads to a compressed representation...

  4. Compression therapy after ankle fracture surgery

    DEFF Research Database (Denmark)

    Winge, R; Bayer, L; Gottlieb, H

    2017-01-01

    PURPOSE: The main purpose of this systematic review was to investigate the effect of compression treatment on the perioperative course of ankle fractures and describe its effect on edema, pain, ankle joint mobility, wound healing complications, length of stay (LOS) and time to surgery (TTS). The aim ... undergoing surgery, testing either intermittent pneumatic compression, compression bandage and/or compression stocking and reporting its effect on edema, pain, ankle joint mobility, wound healing complications, LOS and TTS. To draw conclusions from the data, a narrative synthesis was performed. RESULTS: The review included

  5. Exploding pusher targets for the SHIVA laser system

    International Nuclear Information System (INIS)

    Rosen, M.D.; Larsen, J.T.; Nuckolls, J.H.

    1977-01-01

    The first targets for the 20 TW SHIVA laser system were designed. They are simple glass microballoons, approximately 300 μm in diameter and 2 μm thick, filled with D-T gas. Using LASNEX, whose model physics was utilized successfully for ARGUS targets, we optimize for both gain and yield. The target behaves as an exploding pusher. Different simple analytic models for the physics of this mode are presented, and are tested by comparing their scaling predictions, at constant absorbed power, with those demonstrated by LASNEX. Emphasis is placed on successful prediction of the basic quantities of peak ion temperature and compression, rather than neutron yield or nτ

  6. Effect of Kollidon VA®64 particle size and morphology as directly compressible excipient on tablet compression properties.

    Science.gov (United States)

    Chaudhary, R S; Patel, C; Sevak, V; Chan, M

    2018-01-01

    The study evaluates the use of Kollidon VA®64 and a combination of Kollidon VA®64 with Kollidon VA®64 Fine as excipients in the direct compression process of tablets. The combination of the two grades of material is evaluated for capping, lamination and excessive friability. Interparticulate void space is higher for this excipient due to the hollow structure of the Kollidon VA®64 particles. During tablet compression, air remains trapped in the blend, leading to poor compression and compromised physical properties of the tablets. The composition of Kollidon VA®64 and Kollidon VA®64 Fine is evaluated by design of experiments (DoE). Scanning electron microscopy (SEM) of the two grades of Kollidon VA®64 shows morphological differences between the coarse and fine grades. The tablet compression process is evaluated with a mix consisting entirely of Kollidon VA®64 and two mixes containing Kollidon VA®64 and Kollidon VA®64 Fine in ratios of 77:23 and 65:35. Statistical modeling of the results from the DoE trials identified the optimum composition for direct tablet compression as the 77:23 combination of Kollidon VA®64 and Kollidon VA®64 Fine. This combination, compressed with the parameters predicted by the statistical model (main compression force between 5 and 15 kN, pre-compression force between 2 and 3 kN, feeder speed fixed at 25 rpm and compression speed in the range of 45-49 rpm), produced tablets with hardness ranging between 19 and 21 kp and with no friability, capping, or lamination issues.

  7. Baroreflex Coupling Assessed by Cross-Compression Entropy

    Directory of Open Access Journals (Sweden)

    Andy Schumann

    2017-05-01

    Estimating interactions between physiological systems is an important challenge in modern biomedical research. Here, we explore a new concept for quantifying the information common to two time series through cross-compressibility. Cross-compression entropy (CCE) exploits the ZIP data compression algorithm extended to bivariate data analysis. First, the time series are transformed into symbol vectors. Symbols of the target time series are then coded by the symbols of the source series. Uncoupled and linearly coupled surrogates were derived from cardiovascular recordings of 36 healthy controls obtained during rest to demonstrate the suitability of this method for assessing physiological coupling. CCE at rest was compared to that during isometric handgrip exercise. Finally, spontaneous baroreflex interaction assessed by CCEBRS was compared between 21 patients suffering from acute schizophrenia and 21 matched controls. The CCEBRS of the original time series was significantly higher than in uncoupled surrogates in 89% of the subjects and higher than in linearly coupled surrogates in 47% of the subjects. Handgrip exercise led to sympathetic activation and vagal inhibition accompanied by reduced baroreflex sensitivity. CCEBRS decreased from 0.553 ± 0.030 at rest to 0.514 ± 0.035 during exercise (p < 0.001). In acute schizophrenia, heart rate and blood pressure were elevated. Heart rate variability indicated a change of sympathovagal balance. The CCEBRS of patients with schizophrenia was reduced compared to healthy controls (0.546 ± 0.042 vs. 0.507 ± 0.046, p < 0.01) and revealed a decrease of blood pressure influence on heart rate in patients with schizophrenia. Our results indicate that CCE is suitable for the investigation of linear and non-linear coupling in cardiovascular time series. CCE can quantify causal interactions in short, noisy and non-stationary physiological time series.
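
    The compression-based coupling idea can be sketched with a generic deflate/ZIP coder as below. The symbolization, the use of zlib and the normalization are illustrative assumptions rather than the exact CCE definition used in the study; the sketch only shows how coding the target with knowledge of the source can be approximated by compressing concatenated symbol streams.

```python
import zlib

import numpy as np

def symbolize(x, bins=4):
    """Map a time series to a small symbol alphabet using equal-frequency bins."""
    edges = np.quantile(x, np.linspace(0, 1, bins + 1)[1:-1])
    return np.digitize(x, edges).astype(np.uint8)

def clen(b: bytes) -> int:
    """Compressed length in bytes (ZIP/deflate)."""
    return len(zlib.compress(b, 9))

def cross_compression(source, target, bins=4):
    """Rough coupling estimate: how much the source symbols help compress the target.
    Values well below 1 suggest that the source explains part of the target."""
    s = symbolize(source, bins).tobytes()
    t = symbolize(target, bins).tobytes()
    return (clen(s + t) - clen(s)) / clen(t)

rng = np.random.default_rng(1)
sbp = np.cumsum(rng.normal(size=300))        # stand-in for systolic blood pressure
rr = 0.7 * sbp + rng.normal(size=300)        # RR intervals partly driven by pressure
noise = rng.normal(size=300)                 # uncoupled control series
print(cross_compression(sbp, rr))            # lower value: coupled
print(cross_compression(sbp, noise))         # closer to 1: uncoupled
```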

  8. Isentropic Compression of Argon

    International Nuclear Information System (INIS)

    Oona, H.; Solem, J.C.; Veeser, L.R.; Ekdahl, C.A.; Rodriquez, P.J.; Younger, S.M.; Lewis, W.; Turley, W.D.

    1997-01-01

    We are studying the transition of argon from an insulator to a conductor by compressing the frozen gas isentropically to pressures at which neighboring atomic orbitals overlap sufficiently to allow some electron motion between atoms. Argon and the other rare gases have closed electron shells and therefore remain monatomic, even when they solidify. Their simple structure makes it likely that any measured change in conductivity is due to changes in the atomic structure, not in molecular configuration. As the crystal is compressed, the band gap closes, allowing increased conductivity. We have begun research to determine the conductivity at high pressures, and it is our intention to determine the compression at which the crystal becomes a metal.

  9. Tokamak heating by neutral beams and adiabatic compression

    International Nuclear Information System (INIS)

    Furth, H.P.

    1973-08-01

    'Realistic' models of tokamak energy confinement strongly favor reactor operation at the maximum MHD-stable β-value, in order to maximize plasma density. Ohmic heating is unsuitable for this purpose. Neutral-beam heating plus compression is well suited; however, very large requirements on device size and injection power seem likely for a DT ignition experiment using a Maxwellian plasma. Results of the ATC experiment are reviewed, including Ohmic heating, neutral-beam heating, and production of two-energy-component plasmas (energetic deuteron population in a deuterium 'target plasma'). A modest extrapolation of present ATC parameters could give zero-power conditions in a DT experiment of the two-energy-component type. (U.S.)

  10. Target design for shock ignition

    International Nuclear Information System (INIS)

    Schurtz, G; Ribeyre, X; Lafon, M

    2010-01-01

    The conventional approach to laser-driven inertial fusion involves the implosion of cryogenic shells of deuterium-tritium ice. At sufficiently high implosion velocities, the fuel ignites by itself from a central hot spot. In order to reduce the risks of hydrodynamic instabilities inherent to large implosion velocities, it was proposed to compress the fuel at low velocity and ignite the compressed fuel by means of a convergent shock wave driven by an intense spike at the end of the laser pulse. This scheme, known as shock ignition, reduces the risk of shell break-up during the acceleration phase, but it may be impeded by a low coupling efficiency of the laser pulse with the plasma at high intensities. This work provides a relationship between the implosion velocity and the laser intensity required to ignite the target with a shock. The operating domain of shock ignition at different energies is described.

  11. Confounding compression: the effects of posture, sizing and garment type on measured interface pressure in sports compression clothing.

    Science.gov (United States)

    Brophy-Williams, Ned; Driller, Matthew William; Shing, Cecilia Mary; Fell, James William; Halson, Shona Leigh; Halson, Shona Louise

    2015-01-01

    The purpose of this investigation was to measure the interface pressure exerted by lower body sports compression garments, in order to assess the effect of garment type, size and posture in athletes. Twelve national-level boxers were fitted with sports compression garments (tights and leggings), each in three different sizes (undersized, recommended size and oversized). Interface pressure was assessed across six landmarks on the lower limb (ranging from the medial malleolus to the upper thigh) as athletes assumed sitting, standing and supine postures. Sports compression leggings exerted a significantly higher mean pressure than sports compression tights. The interface pressure exerted by sports compression garments is significantly affected by garment type, size and the posture assumed by the wearer.

  12. Use of customised pressure-guided elastic bandages to improve efficacy of compression bandaging for venous ulcers.

    Science.gov (United States)

    Sermsathanasawadi, Nuttawut; Chatjaturapat, Choedpong; Pianchareonsin, Rattana; Puangpunngam, Nattawut; Wongwanit, Chumpol; Chinsakchai, Khamin; Ruangsetakit, Chanean; Mutirangura, Pramook

    2017-08-01

    Compression bandaging is a major treatment of chronic venous ulcers. Its efficacy depends on the applied pressure, which is dependent on the skill of the individual applying the bandage. To improve the quality of bandaging by reducing the variability in compression bandage interface pressures, we changed elastic bandages into a customised version by marking them with circular ink stamps, applied when the stretch achieves an interface pressure between 35 and 45 mmHg. Repeated applications by 20 residents of the customised bandage and non-marked bandage to one smaller and one larger leg were evaluated by measuring the sub-bandage pressure. The results demonstrated that the target pressure range is more often attained with the customised bandage compared with the non-marked bandage. The customised bandage improved the efficacy of compression bandaging for venous ulcers, with optimal sub-bandage pressure. © 2016 Medicalhelplines.com Inc and John Wiley & Sons Ltd.

  13. Selecting a general-purpose data compression algorithm

    Science.gov (United States)

    Mathews, Gary Jason

    1995-01-01

    The National Space Science Data Center's Common Data Format (CDF) is capable of storing many types of data, such as scalar data items, vectors, and multidimensional arrays of bytes, integers, or floating point values. However, regardless of the dimensionality and data type, the data break down into a sequence of bytes that can be fed into a data compression function to reduce the amount of data without losing data integrity, thus remaining fully reconstructible. Because of the diversity of data types and high performance speed requirements, a general-purpose, fast, simple data compression algorithm is required to incorporate data compression into CDF. The questions to ask are how to evaluate and compare compression algorithms, and which compression algorithm meets all requirements. The object of this paper is to address these questions and determine the most appropriate compression algorithm to use within the CDF data management package, one that would also be applicable to other software packages with similar data compression needs.
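
    The kind of evaluation described above can be prototyped with the codecs in the Python standard library; the candidate set, the test buffer and the scoring shown below are illustrative assumptions, not the criteria actually used for CDF.

```python
import bz2
import lzma
import time
import zlib

CANDIDATES = {
    "zlib": (zlib.compress, zlib.decompress),
    "bz2": (bz2.compress, bz2.decompress),
    "lzma": (lzma.compress, lzma.decompress),
}

def evaluate(data: bytes) -> None:
    """Compare general-purpose codecs on ratio and speed for an arbitrary byte stream."""
    for name, (compress, decompress) in CANDIDATES.items():
        t0 = time.perf_counter()
        packed = compress(data)
        t1 = time.perf_counter()
        assert decompress(packed) == data          # losslessness / full reconstructibility
        print(f"{name:5s}  ratio={len(data) / len(packed):5.2f}  "
              f"compress={1e3 * (t1 - t0):7.1f} ms")

# Any CDF-style payload ultimately reduces to a sequence of bytes; a repetitive buffer stands in here.
evaluate(bytes(range(256)) * 4000)
```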

  14. Compression force behaviours: An exploration of the beliefs and values influencing the application of breast compression during screening mammography

    International Nuclear Information System (INIS)

    Murphy, Fred; Nightingale, Julie; Hogg, Peter; Robinson, Leslie; Seddon, Doreen; Mackay, Stuart

    2015-01-01

    This research project investigated the compression behaviours of practitioners during screening mammography. The study sought to provide a qualitative understanding of ‘how’ and ‘why’ practitioners apply compression force. With a clear conflict in the existing literature and little scientific evidence to support the reasoning behind the application of compression force, this research project investigated the application of compression using a phenomenological approach. Following ethical approval, six focus group interviews were conducted at six different breast screening centres in England. A sample of 41 practitioners was interviewed within the focus groups, and six one-to-one interviews were conducted with mammography educators or clinical placement co-ordinators. The findings revealed two broad categories, humanistic and technological, consisting of 10 themes. The themes included client empowerment, white-lies, time for interactions, uncertainty of own practice, culture, power, compression controls, digital technology, dose audit-safety nets, and numerical scales. All of these themes were derived from 28 units of significant meaning (USM). The results demonstrate a wide variation in the application of compression force, thus offering a possible explanation for the difference between practitioner compression forces found in quantitative studies. Compression force was applied in many different ways due to individual practitioner experiences and behaviour. Furthermore, the culture and practice of the units themselves influenced the beliefs and attitudes of practitioners in compression force application. The strongest recommendation to emerge from this study was the need for peer observation, to enable practitioners to observe and compare their own compression force practice with that of their colleagues. The findings are significant for clinical practice in order to understand how and why compression force is applied.

  15. Memory hierarchy using row-based compression

    Science.gov (United States)

    Loh, Gabriel H.; O'Connor, James M.

    2016-10-25

    A system includes a first memory and a device coupleable to the first memory. The device includes a second memory to cache data from the first memory. The second memory includes a plurality of rows, each row including a corresponding set of compressed data blocks of non-uniform sizes and a corresponding set of tag blocks. Each tag block represents a corresponding compressed data block of the row. The device further includes decompression logic to decompress data blocks accessed from the second memory. The device further includes compression logic to compress data blocks to be stored in the second memory.

  16. Compressed Sensing with Rank Deficient Dictionaries

    DEFF Research Database (Denmark)

    Hansen, Thomas Lundgaard; Johansen, Daniel Højrup; Jørgensen, Peter Bjørn

    2012-01-01

    In compressed sensing it is generally assumed that the dictionary matrix constitutes a (possibly overcomplete) basis of the signal space. In this paper we consider dictionaries that do not span the signal space, i.e. rank deficient dictionaries. We show that in this case the signal-to-noise ratio...... (SNR) in the compressed samples can be increased by selecting the rows of the measurement matrix from the column space of the dictionary. As an example application of compressed sensing with a rank deficient dictionary, we present a case study of compressed sensing applied to the Coarse Acquisition (C...
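    The row-selection idea in this abstract can be illustrated numerically. In the sketch below (a toy experiment, not the paper's C/A-code case study; the dimensions, noise level and rank are arbitrary choices), the rows of the measurement matrix are drawn either as generic Gaussian vectors or from the column space of a rank-deficient dictionary, and the SNR of the compressed samples is compared.

        # Toy illustration: SNR in compressed samples when the measurement rows are
        # drawn from the column space of a rank-deficient dictionary.
        import numpy as np

        rng = np.random.default_rng(1)
        m, n, k, sparsity = 128, 256, 32, 5

        # Rank-deficient dictionary: columns confined to a 40-dimensional subspace of R^m.
        basis = rng.normal(size=(m, 40))
        D = basis @ rng.normal(size=(40, n))

        x = np.zeros(n)
        x[rng.choice(n, sparsity, replace=False)] = rng.normal(size=sparsity)
        signal = D @ x                      # signal lives in the column space of D
        noise = 0.1 * rng.normal(size=m)    # noise has components outside that space

        def snr_db(Phi):
            return 10 * np.log10(np.sum((Phi @ signal) ** 2) / np.sum((Phi @ noise) ** 2))

        Phi_random = rng.normal(size=(k, m))                   # generic Gaussian rows
        U, svals, _ = np.linalg.svd(D, full_matrices=False)
        r = int(np.sum(svals > 1e-10 * svals[0]))              # numerical rank of D
        Phi_colspace = rng.normal(size=(k, r)) @ U[:, :r].T    # rows inside col(D)

        print("SNR, generic rows      :", round(snr_db(Phi_random), 2), "dB")
        print("SNR, column-space rows :", round(snr_db(Phi_colspace), 2), "dB")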

  17. Effects of errors in velocity tilt on maximum longitudinal compression during neutralized drift compression of intense beam pulses: II. Analysis of experimental data of the Neutralized Drift Compression eXperiment-I (NDCX-I)

    International Nuclear Information System (INIS)

    Massidda, Scott; Kaganovich, Igor D.; Startsev, Edward A.; Davidson, Ronald C.; Lidia, Steven M.; Seidl, Peter; Friedman, Alex

    2012-01-01

    Neutralized drift compression offers an effective means for particle beam focusing and current amplification with applications to heavy ion fusion. In the Neutralized Drift Compression eXperiment-I (NDCX-I), a non-relativistic ion beam pulse is passed through an inductive bunching module that produces a longitudinal velocity modulation. Due to the applied velocity tilt, the beam pulse compresses during neutralized drift. The ion beam pulse can be compressed by a factor of more than 100; however, errors in the velocity modulation affect the compression ratio in complex ways. We have performed a study of how the longitudinal compression of a typical NDCX-I ion beam pulse is affected by the initial errors in the acquired velocity modulation. Without any voltage errors, an ideal compression is limited only by the initial energy spread of the ion beam, ΔE_b. In the presence of large voltage errors, δU ≫ ΔE_b, the maximum compression ratio is found to be inversely proportional to the geometric mean of the relative error in velocity modulation and the relative intrinsic energy spread of the beam ions. Although small parts of a beam pulse can achieve high local values of compression ratio, the acquired velocity errors cause these parts to compress at different times, limiting the overall compression of the ion beam pulse.
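    The scaling stated in the last sentence can be written compactly as follows (this merely restates the abstract in symbols; U denotes the nominal bunching voltage and E_b the beam energy, introduced here only for notation, and the proportionality constant is not given):

        C_{\max} \;\propto\; \left[\frac{\delta U}{U}\cdot\frac{\Delta E_b}{E_b}\right]^{-1/2}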

  18. On the characterisation of the dynamic compressive behaviour of silicon carbides subjected to isentropic compression experiments

    Directory of Open Access Journals (Sweden)

    Zinszner Jean-Luc

    2015-01-01

    Ceramic materials are commonly used as protective materials, particularly because of their very high hardness and compressive strength. However, the microstructure of a ceramic has a great influence on its compressive strength and on its ballistic efficiency. To study the influence of microstructural parameters on the dynamic compressive behaviour of silicon carbides, isentropic compression experiments have been performed on two silicon carbide grades using a high pulsed power generator called GEPI. Contrary to plate impact experiments, the use of the GEPI device and of the Lagrangian analysis allows the whole loading path to be determined. The two SiC grades studied present different Hugoniot elastic limits (HEL) due to their different microstructures. For these materials, the experimental technique allowed the evolution of the equivalent stress during dynamic compression to be evaluated. The two grades exhibit more or less pronounced work hardening beyond the HEL. The densification of the material seems to have more influence on the HEL than the grain size.

  19. Perceptual Image Compression in Telemedicine

    Science.gov (United States)

    Watson, Andrew B.; Ahumada, Albert J., Jr.; Eckstein, Miguel; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    The next era of space exploration, especially the "Mission to Planet Earth", will generate immense quantities of image data. For example, the Earth Observing System (EOS) is expected to generate in excess of one terabyte/day. NASA confronts a major technical challenge in managing this great flow of imagery: in collection, pre-processing, transmission to earth, archiving, and distribution to scientists at remote locations. Expected requirements in most of these areas clearly exceed current technology. Part of the solution to this problem lies in efficient image compression techniques. For much of this imagery, the ultimate consumer is the human eye. In this case image compression should be designed to match the visual capacities of the human observer. We have developed three techniques for optimizing image compression for the human viewer. The first consists of a formula, developed jointly with IBM and based on psychophysical measurements, that computes a DCT quantization matrix for any specified combination of viewing distance, display resolution, and display brightness. This DCT quantization matrix is used in most recent standards for digital image compression (JPEG, MPEG, CCITT H.261). The second technique optimizes the DCT quantization matrix for each individual image, based on the contents of the image. This is accomplished by means of a model of visual sensitivity to compression artifacts. The third technique extends the first two techniques to the realm of wavelet compression. Together these techniques will allow systematic perceptual optimization of image compression in NASA imaging systems. Many of the image management challenges faced by NASA are mirrored in the field of telemedicine. Here too there are severe demands for transmission and archiving of large image databases, and the imagery is ultimately used primarily by human observers, such as radiologists. In this presentation I will describe some of our preliminary explorations of the applications of these techniques to telemedicine.

  20. Image Quality Assessment of JPEG Compressed Mars Science Laboratory Mastcam Images using Convolutional Neural Networks

    Science.gov (United States)

    Kerner, H. R.; Bell, J. F., III; Ben Amor, H.

    2017-12-01

    The Mastcam color imaging system on the Mars Science Laboratory Curiosity rover acquires images within Gale crater for a variety of geologic and atmospheric studies. Images are often JPEG compressed before being downlinked to Earth. While critical for transmitting images on a low-bandwidth connection, this compression can result in image artifacts most noticeable as anomalous brightness or color changes within or near JPEG compression block boundaries. In images with significant high-frequency detail (e.g., in regions showing fine layering or lamination in sedimentary rocks), the image might need to be re-transmitted losslessly to enable accurate scientific interpretation of the data. The process of identifying which images have been adversely affected by compression artifacts is performed manually by the Mastcam science team, costing significant expert human time. To streamline the tedious process of identifying which images might need to be re-transmitted, we present an input-efficient neural network solution for predicting the perceived quality of a compressed Mastcam image. Most neural network solutions require large amounts of hand-labeled training data for the model to learn the target mapping between input (e.g. distorted images) and output (e.g. quality assessment). We propose an automatic labeling method using joint entropy between a compressed and uncompressed image to avoid the need for domain experts to label thousands of training examples by hand. We use automatically labeled data to train a convolutional neural network to estimate the probability that a Mastcam user would find the quality of a given compressed image acceptable for science analysis. We tested our model on a variety of Mastcam images and found that the proposed method correlates well with image quality perception by science team members. When assisted by our proposed method, we estimate that a Mastcam investigator could reduce the time spent reviewing images by a minimum of 70%.
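    The automatic-labelling idea rests on a single computable feature: the joint entropy of paired pixel values from a compressed and an uncompressed image. The sketch below shows how that feature can be computed (a minimal illustration; the synthetic images, the noise used as a stand-in for JPEG artifacts, and any mapping from the feature to an acceptable/unacceptable label are assumptions, not the authors' procedure).

        # Sketch of the joint-entropy feature used for automatic labelling; the rule
        # that maps this feature to a quality label is not reproduced here.
        import numpy as np

        def joint_entropy(img_a, img_b, bins=256):
            """Joint entropy (bits) of paired pixel values from two equally sized 8-bit images."""
            hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(),
                                        bins=bins, range=[[0, 256], [0, 256]])
            p = hist / hist.sum()
            p = p[p > 0]
            return float(-(p * np.log2(p)).sum())

        # Toy usage: additive noise stands in for compression artifacts on a synthetic frame.
        rng = np.random.default_rng(0)
        original = rng.integers(0, 256, size=(256, 256)).astype(np.float64)
        degraded = np.clip(original + rng.normal(0, 12, original.shape), 0, 255)
        print("identical pair :", round(joint_entropy(original, original), 3), "bits")
        print("degraded pair  :", round(joint_entropy(original, degraded), 3), "bits")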

  1. Radiologic image compression -- A review

    International Nuclear Information System (INIS)

    Wong, S.; Huang, H.K.; Zaremba, L.; Gooden, D.

    1995-01-01

    The objective of radiologic image compression is to reduce the data volume of, and to achieve a low bit rate in, the digital representation of radiologic images without perceived loss of image quality. However, the demand for transmission bandwidth and storage space in the digital radiology environment, especially picture archiving and communication systems (PACS) and teleradiology, and the proliferating use of various imaging modalities, such as magnetic resonance imaging, computed tomography, ultrasonography, nuclear medicine, computed radiography, and digital subtraction angiography, continue to outstrip the capabilities of existing technologies. The availability of lossy coding techniques for clinical diagnoses further raises many complex legal and regulatory issues. This paper reviews the recent progress of lossless and lossy radiologic image compression and presents the legal challenges of using lossy compression of medical records. To do so, the authors first describe the fundamental concepts of radiologic imaging and digitization. Then, the authors examine current compression technology in the field of medical imaging and discuss important regulatory policies and legal questions facing the use of compression in this field. The authors conclude with a summary of future challenges and research directions. 170 refs

  2. Underwater Acoustic Matched Field Imaging Based on Compressed Sensing

    Directory of Open Access Journals (Sweden)

    Huichen Yan

    2015-10-01

    Matched field processing (MFP) is an effective method for underwater target imaging and localizing, but its performance is not guaranteed due to the nonuniqueness and instability problems caused by the underdetermined essence of MFP. By exploiting the sparsity of the targets in an imaging area, this paper proposes a compressive sensing MFP (CS-MFP) model from wave propagation theory by using randomly deployed sensors. In addition, the model’s recovery performance is investigated by exploring the lower bounds of the coherence parameter of the CS dictionary. Furthermore, this paper analyzes the robustness of CS-MFP with respect to the displacement of the sensors. Subsequently, a coherence-excluding coherence optimized orthogonal matching pursuit (CCOOMP) algorithm is proposed to overcome the high coherent dictionary problem in special cases. Finally, some numerical experiments are provided to demonstrate the effectiveness of the proposed CS-MFP method.

  3. Hydrogen Station Compression, Storage, and Dispensing Technical Status and Costs: Systems Integration

    Energy Technology Data Exchange (ETDEWEB)

    Parks, G.; Boyd, R.; Cornish, J.; Remick, R.

    2014-05-01

    At the request of the U.S. Department of Energy Fuel Cell Technologies Office (FCTO), the National Renewable Energy Laboratory commissioned an independent review of hydrogen compression, storage, and dispensing (CSD) for pipeline delivery of hydrogen and forecourt hydrogen production. The panel was asked to address (1) the cost calculation methodology, (2) the current cost/technical status, and (3) the feasibility of achieving the FCTO's 2020 CSD levelized cost targets, and (4) to suggest research areas that will help the FCTO reach those targets. As the panel neared the completion of these tasks, it was also asked to evaluate CSD costs for the delivery of hydrogen by high-pressure tube trailer. This report details these findings.

  4. Compressive sensing for urban radar

    CERN Document Server

    Amin, Moeness

    2014-01-01

    With the emergence of compressive sensing and sparse signal reconstruction, approaches to urban radar have shifted toward relaxed constraints on signal sampling schemes in time and space, and toward effectively addressing logistic difficulties in data acquisition. Traditionally, these challenges have hindered high resolution imaging by restricting both bandwidth and aperture, and by imposing uniformity and bounds on sampling rates. Compressive Sensing for Urban Radar is the first book to focus on a hybrid of two key areas: compressive sensing and urban sensing. It explains how reliable imaging, tracking...

  5. On Normalized Compression Distance and Large Malware

    OpenAIRE

    Borbely, Rebecca Schuller

    2015-01-01

    Normalized Compression Distance (NCD) is a popular tool that uses compression algorithms to cluster and classify data in a wide range of applications. Existing discussions of NCD's theoretical merit rely on certain theoretical properties of compression algorithms. However, we demonstrate that many popular compression algorithms don't seem to satisfy these theoretical properties. We explore the relationship between some of these properties and file size, demonstrating that this theoretical pro...
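    The distance this record analyses has a simple closed form, NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)), where C(.) is the compressed length under some real compressor. The sketch below computes it with zlib; zlib's 32 KB window is exactly the kind of practical limitation that weakens the theoretical properties for large inputs such as malware corpora, which is the paper's concern (the test strings here are arbitrary).

        # Normalized Compression Distance with zlib as the compressor C(.).
        # NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y))
        import zlib

        def C(data: bytes) -> int:
            return len(zlib.compress(data, 9))

        def ncd(x: bytes, y: bytes) -> float:
            cx, cy, cxy = C(x), C(y), C(x + y)
            return (cxy - min(cx, cy)) / max(cx, cy)

        a = b"the quick brown fox jumps over the lazy dog " * 200
        b_similar = a.replace(b"fox", b"cat")
        b_different = bytes(range(256)) * 40

        print("similar  :", round(ncd(a, b_similar), 3))    # lower for similar inputs
        print("different:", round(ncd(a, b_different), 3))  # higher for unrelated inputs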

  6. Magnetized Target Fusion Driven by Plasma Liners

    Science.gov (United States)

    Thio, Y. C. Francis; Cassibry, Jason; Eskridge, Richard; Kirkpatrick, Ronald C.; Knapp, Charles E.; Lee, Michael; Martin, Adam; Smith, James; Wu, S. T.; Rodgers, Stephen L. (Technical Monitor)

    2001-01-01

    For practical applications of magnetized target fusion, standoff drivers to deliver the imploding momentum flux to the target plasma remotely are required. Quasi-spherically converging plasma jets have been proposed as standoff drivers for this purpose. The concept involves the dynamic formation of a quasi-spherical plasma liner by the merging of plasma jets, and the use of the liner so formed to compress a spheromak or a field reversed configuration (FRC). Theoretical analysis and computer modeling of the concept are presented. It is shown that, with the appropriate choice of the flow parameters in the liner and the target, the impact between the liner and the target plasma can be made to be shockless in the liner or to generate at most a very weak shock in the liner. Additional information is contained in the original extended abstract.

  7. A hybrid data compression approach for online backup service

    Science.gov (United States)

    Wang, Hua; Zhou, Ke; Qin, MingKang

    2009-08-01

    With the popularity of SaaS (Software as a Service), backup services have become a hot topic in storage applications. Because of the large number of backup users, reducing the massive data load is a key problem for system designers, and data compression provides a good solution. Traditional data compression applications tend to adopt a single method, which has limitations in some respects: data stream compression can only realize intra-file compression, de-duplication only eliminates inter-file redundant data, and neither alone meets the compression efficiency needed by backup service software. This paper proposes a novel hybrid compression approach, which includes two levels: global compression and block compression. The former eliminates redundant inter-file copies across different users; the latter adopts data stream compression technology to remove intra-file redundancy. Several compression algorithms were adopted to measure the compression ratio and CPU time. The suitability of different algorithms in different situations is also analyzed. The performance analysis shows that great improvement is made through the hybrid compression policy.
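    A minimal two-level sketch of the approach described above is given below, with content-hash de-duplication as the global level and per-chunk zlib as the block level (the fixed 64 KB chunking, SHA-256 fingerprints and in-memory store are illustrative assumptions, not the paper's design).

        # Minimal sketch of a two-level backup compressor: content-hash de-duplication
        # across files/users ("global"), then zlib on each unique chunk ("block").
        import hashlib, zlib

        CHUNK = 64 * 1024
        store = {}                      # fingerprint -> compressed chunk (shared globally)

        def backup(data: bytes):
            """Return a recipe (list of fingerprints) and populate the shared chunk store."""
            recipe = []
            for i in range(0, len(data), CHUNK):
                chunk = data[i:i + CHUNK]
                fp = hashlib.sha256(chunk).hexdigest()
                if fp not in store:                    # inter-file / inter-user de-duplication
                    store[fp] = zlib.compress(chunk)   # intra-chunk stream compression
                recipe.append(fp)
            return recipe

        def restore(recipe) -> bytes:
            return b"".join(zlib.decompress(store[fp]) for fp in recipe)

        user_a = b"shared operating system image " * 10_000
        user_b = user_a + b"user-specific document " * 1_000
        r_a, r_b = backup(user_a), backup(user_b)
        assert restore(r_a) == user_a and restore(r_b) == user_b
        stored = sum(len(v) for v in store.values())
        print("raw bytes:", len(user_a) + len(user_b), "stored bytes:", stored)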

  8. Induction of a shorter compression phase is correlated with a deeper chest compression during metronome-guided cardiopulmonary resuscitation: a manikin study.

    Science.gov (United States)

    Chung, Tae Nyoung; Bae, Jinkun; Kim, Eui Chung; Cho, Yun Kyung; You, Je Sung; Choi, Sung Wook; Kim, Ok Jun

    2013-07-01

    Recent studies have shown that there may be an interaction between duty cycle and other factors related to the quality of chest compression. Duty cycle represents the fraction of each compression cycle occupied by the compression phase. We aimed to investigate the effect of a shorter compression phase on average chest compression depth during metronome-guided cardiopulmonary resuscitation. Senior medical students performed 12 sets of chest compressions following the guiding sounds, with three down-stroke patterns (normal, fast and very fast) and four rates (80, 100, 120 and 140 compressions/min) in random sequence. Repeated-measures analysis of variance was used to compare the average chest compression depth and duty cycle among the trials. The average chest compression depth increased and the duty cycle decreased in a linear fashion as the down-stroke pattern shifted from normal to very fast. Inducing a shorter compression phase was thus correlated with a deeper chest compression during metronome-guided cardiopulmonary resuscitation.

  9. Compression Characteristics of Solid Wastes as Backfill Materials

    OpenAIRE

    Meng Li; Jixiong Zhang; Rui Gao

    2016-01-01

    A self-made large-diameter compression steel chamber and a SANS material testing machine were chosen to perform a series of compression tests in order to fully understand the compression characteristics of differently graded filling gangue samples. The relationship between the stress-deformation modulus and stress-compression degree was analyzed comparatively. The results showed that, during compression, the deformation modulus of gangue grew linearly with stress, the overall relationship bet...

  10. Exploring compression techniques for ROOT IO

    Science.gov (United States)

    Zhang, Z.; Bockelman, B.

    2017-10-01

    ROOT provides a flexible format used throughout the HEP community. The number of use cases - from an archival data format to end-stage analysis - has required a number of tradeoffs to be exposed to the user. For example, a high “compression level” in the traditional DEFLATE algorithm will result in a smaller file (saving disk space) at the cost of slower decompression (costing CPU time when read). At the scale of the LHC experiments, poor design choices can result in terabytes of wasted space or wasted CPU time. We explore and attempt to quantify some of these tradeoffs. Specifically, we explore: the use of alternate compression algorithms to optimize for read performance; an alternate method of compressing individual events to allow efficient random access; and a new approach to whole-file compression. Quantitative results are given, as well as guidance on how to make compression decisions for different use cases.

  11. Stress analysis of shear/compression test

    International Nuclear Information System (INIS)

    Nishijima, S.; Okada, T.; Ueno, S.

    1997-01-01

    Stress analysis has been made on glass fiber reinforced plastics (GFRP) subjected to combined shear and compression stresses by means of the finite element method. Two types of experimental setup were analyzed, the parallel and the series methods, in which the specimens are compressed by tilted jigs that enable the combined stresses to be applied to the specimen. The modified Tsai-Hill criterion was employed to judge failure under the combined stresses, that is, the shear strength under compressive stress. Different failure envelopes were obtained for the two setups. In the parallel system the shear strength first increased with compressive stress and then decreased. In the series system, by contrast, the shear strength decreased monotonically with compressive stress. The difference is caused by the different stress distributions due to the different constraint conditions. The basic parameters which control failure under the combined stresses are discussed.

  12. Wavelet-based audio embedding and audio/video compression

    Science.gov (United States)

    Mendenhall, Michael J.; Claypoole, Roger L., Jr.

    2001-12-01

    Watermarking, traditionally used for copyright protection, is used in a new and exciting way. An efficient wavelet-based watermarking technique embeds audio information into a video signal. Several effective compression techniques are applied to compress the resulting audio/video signal in an embedded fashion. This wavelet-based compression algorithm incorporates bit-plane coding, index coding, and Huffman coding. To demonstrate the potential of this audio embedding and audio/video compression algorithm, we embed an audio signal into a video signal and then compress. Results show that overall compression rates of 15:1 can be achieved. The video signal is reconstructed with a median PSNR of nearly 33 dB. Finally, the audio signal is extracted from the compressed audio/video signal without error.

  13. Externally guided target for inertial fusion

    International Nuclear Information System (INIS)

    Martinez-Val, J.M.; Piera, M.

    1996-01-01

    A totally new concept is proposed to reach fusion conditions by externally guided inertial confinement. The acceleration and compression of the fuel are guided by a cannon-like external duct with a conical section ending in a small-size cavity around the central point of the tube. The fuel pellets coming from each cannon mouth collide in the central cavity, where the implosion and final compression of the fuel take place. Both the tube material density and its areal density must be much higher than the initial density and areal density of the fuel. The external tube will explode into pieces as a consequence of the inner pressures achieved after the central fuel collision. If the collision is suitably driven, a fusion burst can take place before the tube disassembles. Because of the features of the central collision needed to trigger ignition, this concept could be considered as tamped impact fusion. Both the fusion products and the debris from the guide tube are caught by a liquid-lithium curtain surrounding the target. Only two driving beams are necessary. The system can be applied to any type of driver and could use a solid pellet at room temperature as the initial target. 54 refs., 24 figs., 1 tab

  14. Tracking a convoy of multiple targets using acoustic sensor data

    Science.gov (United States)

    Damarla, T. R.

    2003-08-01

    In this paper we present an algorithm to track a convoy of several targets in a scene using acoustic sensor array data. The tracking algorithm is based on a template of the direction of arrival (DOA) angles for the leading target. Often the first target is the closest target to the sensor array and hence the loudest, with a good signal to noise ratio. Several steps were used to generate a template of the DOA angle for the leading target, namely, (a) the angle at the present instant should be close to the angle at the previous instant and (b) the angle at the present instant should be within error bounds of the predicted value based on the previous values. Once the template of the DOA angles of the leading target is developed, it is used to predict the DOA angle tracks of the remaining targets, as sketched below. To generate the tracks for the remaining targets, a track is established if the angles correspond to the initial track values of the first target. Second, the time delays between the first track and the remaining tracks are estimated at the points of highest correlation between the first track and the remaining tracks. As the vehicles move at different speeds, the tracks either compress or expand depending on whether a target is moving fast or slow compared to the first target. The expansion and compression ratios are estimated and used to predict the DOA angle values of the remaining targets. Based on these predicted DOA angles, the DOA angles obtained from MVDR or incoherent MUSIC are assigned to the proper tracks. Several other rules were developed to avoid mixing the tracks. The algorithm is tested on data collected at Aberdeen Proving Ground with convoys of 3, 4 and 5 vehicles. Some of the vehicles are tracked vehicles and some are wheeled vehicles. The tracking algorithm results are found to be good. The results will be presented at the conference and in the paper.
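    The prediction step can be pictured with a few lines of code: the follower track is modelled as the lead template shifted by an estimated delay and stretched by an estimated expansion/compression ratio. The sketch below illustrates that geometry only; the delay, the ratio, the sensor offset and the lead-track model are invented numbers, not values from the Aberdeen data.

        # Sketch of predicting a following vehicle's DOA track from the lead template by
        # applying an estimated delay and expansion/compression ratio (values illustrative).
        import numpy as np

        def predict_follower(t_lead, doa_lead, delay, ratio, t_query):
            """Predicted DOA for a follower whose track is the lead template delayed by
            `delay` seconds and stretched in time by `ratio` (>1 means a slower vehicle)."""
            t_on_lead = (t_query - delay) / ratio   # map query times onto the lead's time axis
            return np.interp(t_on_lead, t_lead, doa_lead)

        # Synthetic lead template: a vehicle driving past a sensor 15 m off the road at 5 m/s.
        t_lead = np.linspace(0.0, 60.0, 601)
        doa_lead = np.degrees(np.arctan2(15.0, 100.0 - 5.0 * t_lead))

        t_query = np.linspace(20.0, 80.0, 5)
        print(predict_follower(t_lead, doa_lead, delay=20.0, ratio=1.1, t_query=t_query))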

  15. Prevention of deep vein thrombosis in potential neurosurgical patients. A randomized trial comparing graduated compression stockings alone or graduated compression stockings plus intermittent pneumatic compression with control

    International Nuclear Information System (INIS)

    Turpie, A.G.; Hirsh, J.; Gent, M.; Julian, D.; Johnson, J.

    1989-01-01

    In a randomized trial of neurosurgical patients, groups wearing graduated compression stockings alone (group 1) or graduated compression stockings plus intermittent pneumatic compression (IPC) (group 2) were compared with an untreated control group in the prevention of deep vein thrombosis (DVT). In both active treatment groups, the graduated compression stockings were continued for 14 days or until hospital discharge, if earlier. In group 2, IPC was continued for seven days. All patients underwent DVT surveillance with iodine 125-labeled fibrinogen leg scanning and impedance plethysmography. Venography was carried out if either test became abnormal. Deep vein thrombosis occurred in seven (8.8%) of 80 patients in group 1, in seven (9.0%) of 78 patients in group 2, and in 16 (19.8%) of 81 patients in the control group. The observed differences among these rates are statistically significant. The results of this study indicate that graduated compression stockings alone or in combination with IPC are effective methods of preventing DVT in neurosurgical patients

  16. Compression-absorption (resorption) refrigerating machinery. Modeling of reactors; Machine frigorifique a compression-absorption (resorption). Modelisation des reacteurs

    Energy Technology Data Exchange (ETDEWEB)

    Lottin, O; Feidt, M; Benelmir, R [LEMTA-UHP Nancy-1, 54 - Vandoeuvre-les-Nancy (France)

    1998-12-31

    This paper is a series of transparencies presenting a comparative study of the thermal performances of different types of refrigerating machineries: di-thermal with vapor compression, tri-thermal with moto-compressor, with ejector, with free piston, adsorption-type, resorption-type, absorption-type, compression-absorption-type. A prototype of ammonia-water compression-absorption heat pump is presented and modeled. (J.S.)

  17. Compression-absorption (resorption) refrigerating machinery. Modeling of reactors; Machine frigorifique a compression-absorption (resorption). Modelisation des reacteurs

    Energy Technology Data Exchange (ETDEWEB)

    Lottin, O.; Feidt, M.; Benelmir, R. [LEMTA-UHP Nancy-1, 54 - Vandoeuvre-les-Nancy (France)

    1997-12-31

    This paper is a series of transparencies presenting a comparative study of the thermal performances of different types of refrigerating machineries: di-thermal with vapor compression, tri-thermal with moto-compressor, with ejector, with free piston, adsorption-type, resorption-type, absorption-type, compression-absorption-type. A prototype of ammonia-water compression-absorption heat pump is presented and modeled. (J.S.)

  18. Data Compression with Linear Algebra

    OpenAIRE

    Etler, David

    2015-01-01

    A presentation on the applications of linear algebra to image compression. Covers entropy, the discrete cosine transform, thresholding, quantization, and examples of images compressed with DCT. Given in Spring 2015 at Ocean County College as part of the honors program.
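    As a companion to the topics listed above, the sketch below walks through the DCT/threshold pipeline on a single 8x8 block (a generic illustration of DCT-based compression, not material from the presentation; the synthetic block and the choice to keep 10 coefficients are arbitrary).

        # Keep only the largest-magnitude DCT coefficients of an 8x8 block, then invert.
        import numpy as np
        from scipy.fft import dctn, idctn

        def compress_block(block, keep=10):
            """2-D DCT of an 8x8 block; zero all but the `keep` largest-magnitude coefficients."""
            coeffs = dctn(block, norm="ortho")
            thresh = np.sort(np.abs(coeffs), axis=None)[-keep]
            coeffs[np.abs(coeffs) < thresh] = 0.0          # thresholding step
            return coeffs

        def decompress_block(coeffs):
            return idctn(coeffs, norm="ortho")

        rng = np.random.default_rng(0)
        x, y = np.meshgrid(np.arange(8), np.arange(8))
        block = 128 + 50 * np.sin(x / 3.0) + 20 * np.cos(y / 2.0) + rng.normal(0, 2, (8, 8))

        recon = decompress_block(compress_block(block, keep=10))
        print("kept 10 of 64 coefficients, max abs error:",
              round(float(np.max(np.abs(block - recon))), 2))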

  19. Vertebral compression fractures after spine irradiation using conventional fractionation in patients with metastatic colorectal cancer

    Energy Technology Data Exchange (ETDEWEB)

    Ree, Woo Joong; Kim, Kyung Hwan; Chang, Jee Suk; Kim, Hyun Ju; Choi, Seo Hee; Koom, Woong Sub [Dept.of Radiation Oncology, Yonsei Cancer Center, Yonsei University Health System, Seoul (Korea, Republic of)

    2014-12-15

    To evaluate the risk of vertebral compression fracture (VCF) after conventional radiotherapy (RT) for colorectal cancer (CRC) with spine metastasis and to identify risk factors for VCF in metastatic and non-metastatic irradiated spines. We retrospectively reviewed 68 spinal segments in 16 patients who received conventional RT between 2009 and 2012. Fracture was defined as a newly developed VCF or progression of an existing fracture. The target volume included all metastatic spinal segments and one additional non-metastatic vertebra adjacent to the tumor-involved spines. The median follow-up was 7.8 months. Among all 68 spinal segments, there were six fracture events (8.8%) including three new VCFs and three fracture progressions. Observed VCF rates in vertebral segments with prior irradiation or pre-existing compression fracture were 30.0% and 75.0% respectively, compared with 5.2% and 4.7% for segments without prior irradiation or pre-existing compression fracture, respectively (both p < 0.05). The 1-year fracture-free probability was 87.8% (95% CI, 78.2-97.4). On multivariate analysis, prior irradiation (HR, 7.30; 95% CI, 1.31-40.86) and pre-existing compression fracture (HR, 18.45; 95% CI, 3.42-99.52) were independent risk factors for VCF. The incidence of VCF following conventional RT to the spine is not particularly high, regardless of metastatic tumor involvement. Spines that received irradiation and/or have pre-existing compression fracture before RT have an increased risk of VCF and require close observation.

  20. Spherical implosion experiments on OMEGA: measurements of the cold, compressed shell

    Energy Technology Data Exchange (ETDEWEB)

    Yaakobi, B.; Smalyuk, V.A.; Delettrez, J.A.; Town, R.P.J.; Marshall, F.J.; Glebov, V.Y.; Petrasso, R.D.; Soures, J.M.; Meyerhofer, D.D.; Seka, W. [Rochester Univ., NY (United States). Lab. for Laser Energetics

    2000-07-01

    Targets in which a titanium-doped layer is incorporated into the shell provide a variety of diagnostic signatures (absorption lines, K-edge absorption, Kα imaging) for determining the areal density and dimensions of the shell around peak compression. Here we apply these methods to demonstrate the improvement in target performance when SSD is implemented on slow-rising laser pulses. We introduce a new method to study the uniformity of imploded shells: using a recently developed pinhole-array x-ray spectrometer, we obtain core images at energies below and above the K-edge energy of titanium. The ratio between such images reflects the nonuniformity of the shell alone. Finally, we compare the results with those of 1-D LILAC simulations, as well as 2-D ORCHID simulations that allow for the imprinting of laser non-uniformity on the target. The experimental results are replicated much better by ORCHID than by LILAC. (authors)

  1. Thermonuclear targets for direct-drive ignition by a megajoule laser pulse

    Energy Technology Data Exchange (ETDEWEB)

    Bel’kov, S. A.; Bondarenko, S. V. [Russian Federal Nuclear Center, All-Russia Research Institute of Experimental Physics (Russian Federation); Vergunova, G. A. [Russian Academy of Sciences, Lebedev Physical Institute (Russian Federation); Garanin, S. G. [Russian Federal Nuclear Center, All-Russia Research Institute of Experimental Physics (Russian Federation); Gus’kov, S. Yu., E-mail: guskov@sci.lebedev.ru; Demchenko, N. N.; Doskoch, I. Ya.; Kuchugov, P. A. [Russian Academy of Sciences, Lebedev Physical Institute (Russian Federation); Zmitrenko, N. V. [Russian Academy of Sciences, Keldysh Institute of Applied Mathematics (Russian Federation); Rozanov, V. B.; Stepanov, R. V.; Yakhin, R. A. [Russian Academy of Sciences, Lebedev Physical Institute (Russian Federation)

    2015-10-15

    Central ignition of a thin two-layer-shell fusion target that is directly driven by a 2-MJ profiled pulse of Nd laser second-harmonic radiation has been studied. The parameters of the target were selected so as to provide effective acceleration of the shell toward the center, which was sufficient for the onset of ignition under conditions of increased hydrodynamic stability of the ablator acceleration and compression. The aspect ratio of the inner deuterium-tritium layer of the shell does not exceed 15, provided that a major part (above 75%) of the outer layer (plastic ablator) is evaporated by the instant of maximum compression. The investigation is based on two series of numerical calculations that were performed using one-dimensional (1D) hydrodynamic codes. The first 1D code was used to calculate the absorption of the profiled laser-radiation pulse (including calculation of the total absorption coefficient with allowance for the inverse bremsstrahlung and resonance mechanisms) and the spatial distribution of target heating for a real geometry of irradiation using 192 laser beams in a scheme of focusing with a cubo-octahedral symmetry. The second 1D code was used for simulating the total cycle of target evolution under the action of absorbed laser radiation and for determining the thermonuclear gain that was achieved with a given target.

  2. Tokamak plasma variations under rapid compression

    International Nuclear Information System (INIS)

    Holmes, J.A.; Peng, Y.K.M.; Lynch, S.J.

    1980-04-01

    Changes in plasmas undergoing large, rapid compressions are examined numerically over the following range of aspect ratios A: 3 ≥ A ≥ 1.5 for major radius compressions of circular, elliptical, and D-shaped cross sections; and 3 ≤ A ≤ 6 for minor radius compressions of circular and D-shaped cross sections. The numerical approach combines the computation of fixed boundary MHD equilibria with single-fluid, flux-surface-averaged energy balance, particle balance, and magnetic flux diffusion equations. It is found that the dependences of plasma current I_p and poloidal beta β̄_p on the compression ratio C differ significantly in major radius compressions from those proposed by Furth and Yoshikawa. The present interpretation is that compression to small A dramatically increases the plasma current, which lowers β̄_p and makes the plasma more paramagnetic. Despite large values of toroidal beta β̄_T (≥ 30% with q_axis ≈ 1, q_edge ≈ 3), this tends to concentrate more toroidal flux near the magnetic axis, which means that a reduced minor radius is required to preserve the continuity of the toroidal flux function F at the plasma edge. Minor radius compressions to large aspect ratio agree well with the Furth-Yoshikawa scaling laws.

  3. Benign compression fractures of the spine: signal patterns

    International Nuclear Information System (INIS)

    Ryu, Kyung Nam; Choi, Woo Suk; Lee, Sun Wha; Lim, Jae Hoon

    1992-01-01

    Fifteen patients with 38 compression fractures of the spine underwent magnetic resonance (MR) imaging. We retrospectively evaluated the MR images of these benign compression fractures. The MR images showed four patterns on T1-weighted images: normal signal (21), band-like low signal (8), low signal with preservation of the peripheral portion of the body (8), and diffuse low signal throughout the vertebral body (1). The low signal portions changed to high signal intensities on T2-weighted images. In 7 of the 15 patients (11 compression fractures) there was a history of trauma, and the remaining 8 patients (27 compression fractures) had no history of trauma. Benign compression fractures of the spine reveal variable signal intensities on MR imaging. These patterns of benign compression fractures may be useful in the interpretation of MR images of the spine.

  4. HVS-based medical image compression

    Energy Technology Data Exchange (ETDEWEB)

    Kai Xie [Institute of Image Processing and Pattern Recognition, Shanghai Jiaotong University, 200030 Shanghai (China)]. E-mail: xie_kai2001@sjtu.edu.cn; Jie Yang [Institute of Image Processing and Pattern Recognition, Shanghai Jiaotong University, 200030 Shanghai (China); Min Zhuyue [CREATIS-CNRS Research Unit 5515 and INSERM Unit 630, 69621 Villeurbanne (France); Liang Lixiao [Institute of Image Processing and Pattern Recognition, Shanghai Jiaotong University, 200030 Shanghai (China)

    2005-07-01

    Introduction: With the promotion and application of digital imaging technology in the medical domain, the amount of medical image data has grown rapidly, and commonly used compression methods do not give satisfactory results. Methods: Building on existing experiments and conclusions, the lifting approach is used for the wavelet decomposition. The physical and anatomical structure of human vision is taken into account and the contrast sensitivity function (CSF) is introduced as the main element of the human visual system (HVS) model; the main design points of the HVS model are then presented. On the basis of the multi-resolution analysis of the wavelet transform, the paper applies the HVS model, including the CSF characteristics, to the decorrelating transform and quantization stages and proposes a new HVS-based medical image compression model. Results: The experiments were performed on medical images including computed tomography (CT) and magnetic resonance imaging (MRI). At the same bit rate, the performance of SPIHT with respect to the PSNR metric is significantly higher than that of our algorithm, but the visual quality of the SPIHT-compressed image is roughly the same as that of the image compressed with our approach. Our algorithm obtains the same visual quality at lower bit rates, and the coding/decoding time is less than that of SPIHT. Conclusions: The results show that, under common objective conditions, our compression algorithm can achieve better subjective visual quality and performs better than SPIHT in terms of compression ratio and coding/decoding time.

  5. HVS-based medical image compression

    International Nuclear Information System (INIS)

    Kai Xie; Jie Yang; Min Zhuyue; Liang Lixiao

    2005-01-01

    Introduction: With the promotion and application of digital imaging technology in the medical domain, the amount of medical image data has grown rapidly, and commonly used compression methods do not give satisfactory results. Methods: Building on existing experiments and conclusions, the lifting approach is used for the wavelet decomposition. The physical and anatomical structure of human vision is taken into account and the contrast sensitivity function (CSF) is introduced as the main element of the human visual system (HVS) model; the main design points of the HVS model are then presented. On the basis of the multi-resolution analysis of the wavelet transform, the paper applies the HVS model, including the CSF characteristics, to the decorrelating transform and quantization stages and proposes a new HVS-based medical image compression model. Results: The experiments were performed on medical images including computed tomography (CT) and magnetic resonance imaging (MRI). At the same bit rate, the performance of SPIHT with respect to the PSNR metric is significantly higher than that of our algorithm, but the visual quality of the SPIHT-compressed image is roughly the same as that of the image compressed with our approach. Our algorithm obtains the same visual quality at lower bit rates, and the coding/decoding time is less than that of SPIHT. Conclusions: The results show that, under common objective conditions, our compression algorithm can achieve better subjective visual quality and performs better than SPIHT in terms of compression ratio and coding/decoding time.

  6. An efficient compression scheme for bitmap indices

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Kesheng; Otoo, Ekow J.; Shoshani, Arie

    2004-04-13

    When using an out-of-core indexing method to answer a query, it is generally assumed that the I/O cost dominates the overall query response time. Because of this, most research on indexing methods concentrates on reducing the sizes of indices. For bitmap indices, compression has been used for this purpose. However, in most cases, operations on these compressed bitmaps, mostly bitwise logical operations such as AND, OR, and NOT, spend more time in CPU than in I/O. To speed up these operations, a number of specialized bitmap compression schemes have been developed, the best known of which is the byte-aligned bitmap code (BBC). They are usually faster in performing logical operations than the general purpose compression schemes, but the time spent in CPU still dominates the total query response time. To reduce the query response time, we designed a CPU-friendly scheme named the word-aligned hybrid (WAH) code. In this paper, we prove that the sizes of WAH compressed bitmap indices are about two words per row for a large range of attributes. This size is smaller than typical sizes of commonly used indices, such as a B-tree. Therefore, WAH compressed indices are not only appropriate for low cardinality attributes but also for high cardinality attributes. In the worst case, the time to operate on compressed bitmaps is proportional to the total size of the bitmaps involved. The total size of the bitmaps required to answer a query on one attribute is proportional to the number of hits. These results indicate that WAH compressed bitmap indices are optimal. To verify their effectiveness, we generated bitmap indices for four different datasets and measured the response time of many range queries. Tests confirm that sizes of compressed bitmap indices are indeed smaller than B-tree indices, and query processing with WAH compressed indices is much faster than with BBC compressed indices, projection indices and B-tree indices. In addition, we also verified that the average query response time
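    To make the scheme concrete, the sketch below encodes a single bitmap with 31-bit groups packed into 32-bit words: a group that is all zeros or all ones becomes (or extends) a fill word whose top bit is 1, whose next bit stores the fill value and whose low 30 bits count the run, while any mixed group is stored as a literal word with the top bit 0. This is a simplified illustration of the published WAH idea, not the authors' implementation; padding of the final group and fill-counter overflow handling are glossed over.

        # Simplified word-aligned hybrid (WAH) encoder sketch for a single bitmap.
        def wah_encode(bits):
            """bits: iterable of 0/1. Returns a list of 32-bit words (as Python ints)."""
            bits = list(bits)
            # Pad to a multiple of 31 so every group is full (real implementations track
            # the number of active bits in the last word instead of padding).
            bits += [0] * (-len(bits) % 31)
            groups = [bits[i:i + 31] for i in range(0, len(bits), 31)]

            words = []
            for g in groups:
                value = 0
                for b in g:
                    value = (value << 1) | b
                if value in (0, 0x7FFFFFFF):                   # all-zero or all-one group
                    fill_bit = 1 if value else 0
                    prev = words[-1] if words else None
                    if prev is not None and prev >> 31 == 1 and ((prev >> 30) & 1) == fill_bit:
                        words[-1] += 1                         # extend the current fill run
                    else:
                        words.append((1 << 31) | (fill_bit << 30) | 1)
                else:
                    words.append(value)                        # literal word
            return words

        bitmap = [0] * 31 * 100 + [1, 0, 1, 1] + [1] * 31 * 50
        print(len(bitmap), "bits ->", len(wah_encode(bitmap)), "words")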

  7. Compact compressive arc and beam switchyard for energy recovery linac-driven ultraviolet free electron lasers

    Science.gov (United States)

    Akkermans, J. A. G.; Di Mitri, S.; Douglas, D.; Setija, I. D.

    2017-08-01

    High gain free electron lasers (FELs) driven by high repetition rate recirculating accelerators have received considerable attention in the scientific and industrial communities in recent years. Cost-performance optimization of such facilities encourages limiting machine size and complexity, and a compact machine can be realized by combining bending and bunch length compression during the last stage of recirculation, just before lasing. The impact of coherent synchrotron radiation (CSR) on electron beam quality during compression can, however, limit FEL output power. When methods to counteract CSR are implemented, appropriate beam diagnostics become critical to ensure that the target beam parameters are met before lasing, as well as to guarantee reliable, predictable performance and rapid machine setup and recovery. This article describes a beam line for bunch compression and recirculation, and beam switchyard accessing a diagnostic line for EUV lasing at 1 GeV beam energy. The footprint is modest, with 12 m compressive arc diameter and ˜20 m diagnostic line length. The design limits beam quality degradation due to CSR both in the compressor and in the switchyard. Advantages and drawbacks of two switchyard lines providing, respectively, off-line and on-line measurements are discussed. The entire design is scalable to different beam energies and charges.

  8. A New Approach for Fingerprint Image Compression

    Energy Technology Data Exchange (ETDEWEB)

    Mazieres, Bertrand

    1997-12-01

    The FBI has been collecting fingerprint cards since 1924 and now has over 200 million of them. Digitized with 8 bits of grayscale resolution at 500 dots per inch, this amounts to about 2000 terabytes of information. Moreover, without any compression, transmitting a 10 MB card over a 9600 baud connection takes about 3 hours. Hence we need compression, and compression as close to lossless as possible: all fingerprint details must be kept. A lossless compression usually does not give a better compression ratio than 2:1, which is not sufficient. Compressing these images with the JPEG standard leads to artefacts which appear even at low compression rates. Therefore the FBI chose in 1993 a compression scheme based on a wavelet transform, followed by a scalar quantization and an entropy coding: the so-called WSQ. This scheme allows compression ratios of 20:1 to be achieved without any perceptible loss of quality. The FBI's published specification defines a decoder, which means that many parameters can be changed in the encoding process: the type of analysis/reconstruction filters, the way the bit allocation is made, and the number of Huffman tables used for the entropy coding. The first encoder used 9/7 filters for the wavelet transform and did the bit allocation using a high-rate assumption. Since the transform is made into 64 subbands, quite a lot of bands receive only a few bits even at an archival quality compression rate of 0.75 bit/pixel. Thus, after a brief overview of the standard, we will discuss a new approach to the bit allocation that seems to make more sense where theory is concerned. Then we will talk about some implementation aspects, particularly the new entropy coder and the features that allow other applications than fingerprint image compression. Finally, we will compare the performance of the new encoder to that of the first encoder.

  9. Determination of the ρR of laser fusion targets using the α-particle TOF technique

    International Nuclear Information System (INIS)

    Slivinsky, V.W.; Lent, E.; Shay, H.D.; Manes, K.R.

    1975-01-01

    A computer code was written to describe the alpha particle energy loss. The problem of a symmetric compression of the DT gas by an exploding microsphere is analyzed. The code calculates the energy spectrum of a Gaussian distribution of alpha particles after passing through the compressed gas and the exploded glass. The calculations are being used to determine design parameters for diagnostic instruments for measuring charged particle energy distributions from laser fusion targets

  10. A biological compression model and its applications.

    Science.gov (United States)

    Cao, Minh Duc; Dix, Trevor I; Allison, Lloyd

    2011-01-01

    A biological compression model, the expert model, is presented which is superior to existing compression algorithms in both compression performance and speed. The model is able to compress whole eukaryotic genomes. Most importantly, the model provides a framework for knowledge discovery from biological data. It can be used for repeat element discovery, sequence alignment and phylogenetic analysis. We demonstrate that the model can handle statistically biased sequences and distantly related sequences where conventional knowledge discovery tools often fail.

  11. Optimisation algorithms for ECG data compression.

    Science.gov (United States)

    Haugland, D; Heber, J G; Husøy, J H

    1997-07-01

    The use of exact optimisation algorithms for compressing digital electrocardiograms (ECGs) is demonstrated. As opposed to traditional time-domain methods, which use heuristics to select a small subset of representative signal samples, the problem of selecting the subset is formulated in rigorous mathematical terms. This approach makes it possible to derive algorithms guaranteeing the smallest possible reconstruction error when a bounded selection of signal samples is interpolated. The proposed model resembles well-known network models and is solved by a cubic dynamic programming algorithm. When applied to standard test problems, the algorithm produces a compressed representation for which the distortion is about one-half of that obtained by traditional time-domain compression techniques at reasonable compression ratios. This illustrates that, in terms of the accuracy of decoded signals, existing time-domain heuristics for ECG compression may be far from what is theoretically achievable. The paper is an attempt to bridge this gap.
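    The core idea above (choose a bounded number of samples so that interpolating between them minimizes reconstruction error) can be written as a small dynamic program. The sketch below is a didactic illustration of that idea under a squared-error criterion and a synthetic signal; it is not the paper's network formulation, and the chosen signal and k are arbitrary.

        # DP sketch of optimal sample selection for time-domain signal compression:
        # keep exactly k samples (including endpoints) minimizing the squared error
        # of linear interpolation between consecutive kept samples.
        import numpy as np

        def interp_cost(x):
            """cost[i, j]: squared error of linearly interpolating x between kept samples i and j."""
            n = len(x)
            cost = np.zeros((n, n))
            for i in range(n):
                for j in range(i + 1, n):
                    t = np.arange(i, j + 1)
                    line = x[i] + (x[j] - x[i]) * (t - i) / (j - i)
                    cost[i, j] = np.sum((x[i:j + 1] - line) ** 2)
            return cost

        def select_samples(x, k):
            n = len(x)
            cost = interp_cost(x)
            dp = np.full((n, k + 1), np.inf)
            back = np.zeros((n, k + 1), dtype=int)
            dp[0, 1] = 0.0                          # keeping only the first sample
            for j in range(1, n):
                for m in range(2, k + 1):
                    cands = dp[:j, m - 1] + cost[:j, j]
                    i = int(np.argmin(cands))
                    dp[j, m], back[j, m] = cands[i], i
            # Trace back the kept indices from the last sample.
            idx, j, m = [n - 1], n - 1, k
            while m > 1:
                j = back[j, m]
                idx.append(j)
                m -= 1
            return sorted(idx), float(dp[n - 1, k])

        x = np.sin(np.linspace(0, 4 * np.pi, 120)) + 0.4 * np.sin(np.linspace(0, 40 * np.pi, 120))
        kept, err = select_samples(x, k=20)
        print("kept samples:", kept[:6], "... total:", len(kept), "squared error:", round(err, 3))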

  12. The compressed word problem for groups

    CERN Document Server

    Lohrey, Markus

    2014-01-01

    The Compressed Word Problem for Groups provides a detailed exposition of known results on the compressed word problem, emphasizing efficient algorithms for the compressed word problem in various groups. The author presents the necessary background along with the most recent results on the compressed word problem to create a cohesive self-contained book accessible to computer scientists as well as mathematicians. Readers will quickly reach the frontier of current research which makes the book especially appealing for students looking for a currently active research topic at the intersection of group theory and computer science. The word problem introduced in 1910 by Max Dehn is one of the most important decision problems in group theory. For many groups, highly efficient algorithms for the word problem exist. In recent years, a new technique based on data compression for providing more efficient algorithms for word problems, has been developed, by representing long words over group generators in a compres...

  13. ERGC: an efficient referential genome compression algorithm.

    Science.gov (United States)

    Saha, Subrata; Rajasekaran, Sanguthevar

    2015-11-01

    Genome sequencing has become faster and more affordable. Consequently, the number of available complete genomic sequences is increasing rapidly. As a result, the cost to store, process, analyze and transmit the data is becoming a bottleneck for research and future medical applications. So, the need for devising efficient data compression and data reduction techniques for biological sequencing data is growing by the day. Although there exists a number of standard data compression algorithms, they are not efficient in compressing biological data. These generic algorithms do not exploit some inherent properties of the sequencing data while compressing. To exploit statistical and information-theoretic properties of genomic sequences, we need specialized compression algorithms. Five different next-generation sequencing data compression problems have been identified and studied in the literature. We propose a novel algorithm for one of these problems known as reference-based genome compression. We have done extensive experiments using five real sequencing datasets. The results on real genomes show that our proposed algorithm is indeed competitive and performs better than the best known algorithms for this problem. It achieves compression ratios that are better than those of the currently best performing algorithms. The time to compress and decompress the whole genome is also very promising. The implementations are freely available for non-commercial purposes. They can be downloaded from http://engr.uconn.edu/∼rajasek/ERGC.zip. rajasek@engr.uconn.edu. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
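    Reference-based compression exploits the fact that a newly sequenced genome differs from a reference at relatively few positions. The toy encoder below (a simplification, not ERGC itself: it assumes the target is already positionally aligned to the reference and contains substitutions only, no insertions or deletions) stores the target as a list of (copy length, substituted base) operations.

        # Toy reference-based compressor: encode a target sequence as
        # (copy length from reference, substituted base) operations.
        def ref_compress(reference: str, target: str):
            ops, i = [], 0
            while i < len(target):
                run = 0
                while (i + run < len(target) and i + run < len(reference)
                       and target[i + run] == reference[i + run]):
                    run += 1
                sub = target[i + run] if i + run < len(target) else ""
                ops.append((run, sub))        # copy `run` bases, then emit one literal base
                i += run + 1
            return ops

        def ref_decompress(reference: str, ops) -> str:
            out, i = [], 0
            for run, sub in ops:
                out.append(reference[i:i + run])
                out.append(sub)
                i += run + 1
            return "".join(out)

        reference = "ACGT" * 2000
        target = reference[:3000] + "T" + reference[3001:6000] + "G" + reference[6001:]
        ops = ref_compress(reference, target)
        assert ref_decompress(reference, ops) == target
        print(len(target), "bases encoded as", len(ops), "operations")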

  14. Nonpainful wide-area compression inhibits experimental pain.

    Science.gov (United States)

    Honigman, Liat; Bar-Bachar, Ofrit; Yarnitsky, David; Sprecher, Elliot; Granovsky, Yelena

    2016-09-01

    Compression therapy, a well-recognized treatment for lymphoedema and venous disorders, pressurizes limbs and generates massive non-noxious afferent sensory barrages. The aim of this study was to study whether such afferent activity has an analgesic effect when applied on the lower limbs, hypothesizing that larger compression areas will induce stronger analgesic effects, and whether this effect correlates with conditioned pain modulation (CPM). Thirty young healthy subjects received painful heat and pressure stimuli (47°C for 30 seconds, forearm; 300 kPa for 15 seconds, wrist) before and during 3 compression protocols of either SMALL (up to ankles), MEDIUM (up to knees), or LARGE (up to hips) compression areas. Conditioned pain modulation (heat pain conditioned by noxious cold water) was tested before and after each compression protocol. The LARGE protocol induced more analgesia for heat than the SMALL protocol (P < 0.001). The analgesic effect interacted with gender (P = 0.015). The LARGE protocol was more efficient for females, whereas the MEDIUM protocol was more efficient for males. Pressure pain was reduced by all protocols (P < 0.001) with no differences between protocols and no gender effect. Conditioned pain modulation was more efficient than the compression-induced analgesia. For the LARGE protocol, precompression CPM efficiency positively correlated with compression-induced analgesia. Large body area compression exerts an area-dependent analgesic effect on experimental pain stimuli. The observed correlation with pain inhibition in response to robust non-noxious sensory stimulation may suggest that compression therapy shares similar mechanisms with inhibitory pain modulation assessed through CPM.

  15. Density ratios in compressions driven by radiation pressure

    International Nuclear Information System (INIS)

    Lee, S.

    1988-01-01

    It has been suggested that in the cannonball scheme of laser compression the pellet may be considered to be compressed by the 'brute force' of the radiation pressure. For such a radiation-driven compression, an energy balance method is applied to give an equation fixing the radius compression ratio K which is a key parameter for such intense compressions. A shock model is used to yield specific results. For a square-pulse driving power compressing a spherical pellet with a specific heat ratio of 5/3, a density compression ratio Γ of 27 is computed. Double (stepped) pulsing with linearly rising power enhances Γ to 1750. The value of Γ is not dependent on the absolute magnitude of the piston power, as long as this is large enough. Further enhancement of compression by multiple (stepped) pulsing becomes obvious. The enhanced compression increases the energy gain factor G for a 100 μm DT pellet driven by radiation power of 10 16 W from 6 for a square pulse power with 0.5 MJ absorbed energy to 90 for a double (stepped) linearly rising pulse with absorbed energy of 0.4 MJ assuming perfect coupling efficiency. (author)
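    For orientation, the quoted Γ of 27 for the square pulse is consistent with simple spherical mass conservation, which relates the density compression ratio to the radius compression ratio K (this relation is standard kinematics, stated here for the reader rather than taken from the paper):

        \Gamma \;=\; \frac{\rho}{\rho_0} \;=\; \left(\frac{r_0}{r}\right)^{3} \;=\; K^{3},
        \qquad K = 3 \;\Rightarrow\; \Gamma = 27 .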

  16. High Bit-Depth Medical Image Compression With HEVC.

    Science.gov (United States)

    Parikh, Saurin S; Ruiz, Damian; Kalva, Hari; Fernandez-Escribano, Gerardo; Adzic, Velibor

    2018-03-01

    Efficient storing and retrieval of medical images has direct impact on reducing costs and improving access in cloud-based health care services. JPEG 2000 is currently the commonly used compression format for medical images shared using the DICOM standard. However, new formats such as high efficiency video coding (HEVC) can provide better compression efficiency compared to JPEG 2000. Furthermore, JPEG 2000 is not suitable for efficiently storing image series and 3-D imagery. Using HEVC, a single format can support all forms of medical images. This paper presents the use of HEVC for diagnostically acceptable medical image compression, focusing on compression efficiency compared to JPEG 2000. Diagnostically acceptable lossy compression and complexity of high bit-depth medical image compression are studied. Based on an established medically acceptable compression range for JPEG 2000, this paper establishes acceptable HEVC compression range for medical imaging applications. Experimental results show that using HEVC can increase the compression performance, compared to JPEG 2000, by over 54%. Along with this, a new method for reducing computational complexity of HEVC encoding for medical images is proposed. Results show that HEVC intra encoding complexity can be reduced by over 55% with negligible increase in file size.

  17. The Compressed Baryonic Matter experiment

    Directory of Open Access Journals (Sweden)

    Seddiki Sélim

    2014-04-01

    Full Text Available The Compressed Baryonic Matter (CBM) experiment is a next-generation fixed-target detector which will operate at the future Facility for Antiproton and Ion Research (FAIR) in Darmstadt. The goal of this experiment is to explore the QCD phase diagram in the region of high net baryon densities using high-energy nucleus-nucleus collisions. Its research program includes the study of the equation-of-state of nuclear matter at high baryon densities, the search for the deconfinement and chiral phase transitions and the search for the QCD critical point. The CBM detector is designed to measure both bulk observables with a large acceptance and rare diagnostic probes such as charm particles, multi-strange hyperons, and low mass vector mesons in their di-leptonic decay. The physics program of CBM will be summarized, followed by an overview of the detector concept, a selection of the expected physics performance, and the status of preparation of the experiment.

  18. Compression of surface myoelectric signals using MP3 encoding.

    Science.gov (United States)

    Chan, Adrian D C

    2011-01-01

    The potential of MP3 compression of surface myoelectric signals is explored in this paper. MP3 compression is a perceptual-based encoder scheme, used traditionally to compress audio signals. The ubiquity of MP3 compression (e.g., portable consumer electronics and internet applications) makes it an attractive option for remote monitoring and telemedicine applications. The effects of muscle site and contraction type are examined at different MP3 encoding bitrates. Results demonstrate that MP3 compression is sensitive to the myoelectric signal bandwidth, with larger signal distortion associated with myoelectric signals that have higher bandwidths. Compared to other myoelectric signal compression techniques reported previously (embedded zero-tree wavelet compression and adaptive differential pulse code modulation), MP3 compression demonstrates superior performance (i.e., lower percent residual differences for the same compression ratios).
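
    The distortion figure used above, the percent residual difference, can be computed as in the hedged sketch below; the exact normalization used in the paper is assumed, not quoted.

        import numpy as np

        def percent_residual_difference(original, reconstructed) -> float:
            """PRD (%) between a signal and its reconstruction: 100*||x-x_hat||/||x||."""
            x = np.asarray(original, dtype=float)
            x_hat = np.asarray(reconstructed, dtype=float)
            return 100.0 * np.linalg.norm(x - x_hat) / np.linalg.norm(x)

        # Toy example: a noisy copy standing in for an MP3-decoded EMG record.
        rng = np.random.default_rng(0)
        x = rng.normal(size=10_000)
        x_hat = x + 0.05 * rng.normal(size=x.size)
        print(f"PRD = {percent_residual_difference(x, x_hat):.2f} %")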

  19. Compression and fast retrieval of SNP data.

    Science.gov (United States)

    Sambo, Francesco; Di Camillo, Barbara; Toffolo, Gianna; Cobelli, Claudio

    2014-11-01

    The increasing interest in rare genetic variants and epistatic genetic effects on complex phenotypic traits is currently pushing genome-wide association study design towards datasets of increasing size, both in the number of studied subjects and in the number of genotyped single nucleotide polymorphisms (SNPs). This, in turn, is leading to a compelling need for new methods for compression and fast retrieval of SNP data. We present a novel algorithm and file format for compressing and retrieving SNP data, specifically designed for large-scale association studies. Our algorithm is based on two main ideas: (i) compress linkage disequilibrium blocks in terms of differences with a reference SNP and (ii) compress reference SNPs exploiting information on their call rate and minor allele frequency. Tested on two SNP datasets and compared with several state-of-the-art software tools, our compression algorithm is shown to be competitive in terms of compression rate and to outperform all tools in terms of time to load compressed data. Our compression and decompression algorithms are implemented in a C++ library, are released under the GNU General Public License and are freely downloadable from http://www.dei.unipd.it/~sambofra/snpack.html. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
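
    A toy sketch of the first idea, encoding the SNPs of a linkage disequilibrium block as sparse differences from a reference SNP, is given below; the 0/1/2 genotype coding and all names are illustrative assumptions, not the published file format.

        import numpy as np

        def encode_block(block, ref_idx=0):
            """Encode an LD block (n_snps x n_subjects, genotypes 0/1/2) as a
            reference SNP plus sparse differences for the other SNPs."""
            ref = block[ref_idx]
            diffs = []
            for i, row in enumerate(block):
                if i != ref_idx:
                    pos = np.flatnonzero(row != ref)
                    diffs.append((i, pos, row[pos]))
            return ref, diffs

        def decode_block(ref, diffs, n_snps):
            block = np.tile(ref, (n_snps, 1))
            for i, pos, vals in diffs:
                block[i, pos] = vals
            return block

        block = np.array([[0, 1, 2, 0, 1],
                          [0, 1, 2, 0, 2],    # differs from the reference once
                          [0, 1, 2, 0, 1]])   # identical to the reference
        ref, diffs = encode_block(block)
        assert np.array_equal(decode_block(ref, diffs, block.shape[0]), block)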

  20. Relationship between medical compression and intramuscular pressure as an explanation of a compression paradox.

    Science.gov (United States)

    Uhl, J-F; Benigni, J-P; Cornu-Thenard, A; Fournier, J; Blin, E

    2015-06-01

    Using standing magnetic resonance imaging (MRI), we recently showed that medical compression, providing an interface pressure (IP) of 22 mmHg, significantly compressed the deep veins of the leg but not, paradoxically, superficial varicose veins. Our aim was to provide an explanation for this compression paradox by studying the correlation between the IP exerted by medical compression and intramuscular pressure (IMP). In 10 legs of five healthy subjects, we studied the effects of different IPs on the IMP of the medial gastrocnemius muscle. The IP produced by a cuff manometer was verified by a Picopress® device. The IMP was measured with a 21G needle connected to a manometer. Pressure data were recorded in the prone and standing positions with cuff manometer pressures from 0 to 50 mmHg. In the prone position, an IP of less than 20 mmHg did not significantly change the IMP. In contrast, an almost perfect linear correlation with the IMP (r = 0.99) was observed with an IP from 20 to 50 mmHg. We found the same correlation in the standing position. We found that an IP of 22 mmHg produced a significant IMP increase from 32 to 54 mmHg in the standing position. At the same time, in healthy subjects the subcutaneous pressure is provided only by the compression device. In other words, the subcutaneous pressure plus the IP is only a little higher than 22 mmHg, a pressure which is too low to reduce the caliber of the superficial veins. This is in accordance with our standing MRI 3D anatomical study which showed that, paradoxically, when applying low pressures (IP), the deep veins are compressed while the superficial veins are not. © The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.

  1. Highly Efficient Compression Algorithms for Multichannel EEG.

    Science.gov (United States)

    Shaw, Laxmi; Rahman, Daleef; Routray, Aurobinda

    2018-05-01

    The difficulty associated with processing and understanding the high dimensionality of electroencephalogram (EEG) data requires developing efficient and robust compression algorithms. In this paper, different lossless compression techniques of single and multichannel EEG data, including Huffman coding, arithmetic coding, Markov predictor, linear predictor, context-based error modeling, multivariate autoregression (MVAR), and a low complexity bivariate model have been examined and their performances have been compared. Furthermore, a high compression algorithm named general MVAR and a modified context-based error modeling for multichannel EEG have been proposed. The resulting compression algorithm produces a higher relative compression ratio of 70.64% on average compared with the existing methods, and in some cases, it goes up to 83.06%. The proposed methods are designed to compress a large amount of multichannel EEG data efficiently so that the data storage and transmission bandwidth can be effectively used. These methods have been validated using several experimental multichannel EEG recordings of different subjects and publicly available standard databases. The satisfactory parametric measures of these methods, namely percent-root-mean square distortion, peak signal-to-noise ratio, root-mean-square error, and cross correlation, show their superiority over the state-of-the-art compression methods.
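
    As a rough illustration of why the predictor and error-modelling stages pay off (the paper's own models are MVAR- and context-based), the sketch below compares the empirical entropy of a quantized EEG-like channel with that of its first-order prediction residual; the synthetic signal is a simplified stand-in for real recordings.

        import numpy as np

        def empirical_entropy(values) -> float:
            """Empirical entropy (bits/sample) of an integer-valued sequence."""
            _, counts = np.unique(values, return_counts=True)
            p = counts / counts.sum()
            return float(-(p * np.log2(p)).sum())

        rng = np.random.default_rng(1)
        channel = np.cumsum(rng.normal(size=50_000))   # smooth, correlated signal
        quantized = np.round(channel).astype(int)

        residual = np.diff(quantized)                  # first-order linear prediction
        print(f"raw:      {empirical_entropy(quantized):.2f} bits/sample")
        print(f"residual: {empirical_entropy(residual):.2f} bits/sample")
        # The residual needs far fewer bits per sample, which is the redundancy
        # that the compared compression schemes exploit.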

  2. Halftoning processing on a JPEG-compressed image

    Science.gov (United States)

    Sibade, Cedric; Barizien, Stephane; Akil, Mohamed; Perroton, Laurent

    2003-12-01

    Digital image processing algorithms are usually designed for the raw format, that is, on an uncompressed representation of the image. Therefore, prior to transforming or processing a compressed format, decompression is applied; then, the result of the processing application is finally re-compressed for further transfer or storage. The change of data representation is resource-consuming in terms of computation, time and memory usage. In the wide format printing industry, this problem becomes an important issue: e.g. a 1 m² input color image scanned at 600 dpi exceeds 1.6 GB in its raw representation. However, some image processing algorithms can be performed in the compressed domain, by applying an equivalent operation on the compressed format. This paper presents an innovative application of the halftoning operation by screening, applied to JPEG-compressed images. This compressed-domain transform is performed by computing the threshold operation of the screening algorithm in the DCT domain. This algorithm is illustrated by examples for different halftone masks. A pre-sharpening operation applied to a low-quality JPEG-compressed image is also described; it de-noises the image and enhances its contours.
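
    For reference, the screening operation itself is a per-pixel comparison against a tiled threshold mask; the sketch below performs it in the ordinary spatial domain with a standard 4x4 Bayer mask, whereas the paper's contribution, computing the equivalent threshold on the JPEG DCT coefficients, is not reproduced here.

        import numpy as np

        # Standard 4x4 Bayer ordered-dither mask, scaled to 0..255 thresholds.
        BAYER_4X4 = np.array([[ 0,  8,  2, 10],
                              [12,  4, 14,  6],
                              [ 3, 11,  1,  9],
                              [15,  7, 13,  5]]) / 16.0 * 255.0

        def screen_halftone(gray):
            """Binarize an 8-bit grayscale image by tiling a threshold mask."""
            ty, tx = np.indices(gray.shape)
            thresholds = BAYER_4X4[ty % 4, tx % 4]
            return (gray > thresholds).astype(np.uint8)   # 1 where pixel exceeds the local threshold

        gradient = np.tile(np.linspace(0, 255, 64), (16, 1))  # toy test image
        halftoned = screen_halftone(gradient)
        print(halftoned.mean())   # dot coverage tracks the mean gray level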

  3. Considerations and Algorithms for Compression of Sets

    DEFF Research Database (Denmark)

    Larsson, Jesper

    We consider compression of unordered sets of distinct elements. After a discussion of the general problem, we focus on compressing sets of fixed-length bitstrings in the presence of statistical information. We survey techniques from previous work, suggesting some adjustments, and propose a novel ... compression algorithm that allows transparent incorporation of various estimates for probability distribution. Our experimental results allow the conclusion that set compression can benefit from incorporating statistics, using our method or variants of previously known techniques ...

  4. Crystal and Particle Engineering Strategies for Improving Powder Compression and Flow Properties to Enable Continuous Tablet Manufacturing by Direct Compression.

    Science.gov (United States)

    Chattoraj, Sayantan; Sun, Changquan Calvin

    2018-04-01

    Continuous manufacturing of tablets has many advantages, including batch size flexibility, demand-adaptive scale up or scale down, consistent product quality, small operational footprint, and increased manufacturing efficiency. Simplicity makes direct compression the most suitable process for continuous tablet manufacturing. However, deficiencies in powder flow and compression of active pharmaceutical ingredients (APIs) limit the range of drug loading that can routinely be considered for direct compression. For the widespread adoption of continuous direct compression, effective API engineering strategies to address powder flow and compression problems are needed. Appropriate implementation of these strategies would facilitate the design of high-quality robust drug products, as stipulated by the Quality-by-Design framework. Here, several crystal and particle engineering strategies for improving powder flow and compression properties are summarized. The focus is on the underlying materials science, which is the foundation for effective API engineering to enable successful continuous manufacturing by the direct compression process. Copyright © 2018 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.

  5. Compressed gas fuel storage system

    Science.gov (United States)

    Wozniak, John J.; Tiller, Dale B.; Wienhold, Paul D.; Hildebrand, Richard J.

    2001-01-01

    A compressed gas vehicle fuel storage system comprised of a plurality of compressed gas pressure cells supported by shock-absorbing foam positioned within a shape-conforming container. The container is dimensioned relative to the compressed gas pressure cells whereby a radial air gap surrounds each compressed gas pressure cell. The radial air gap allows pressure-induced expansion of the pressure cells without resulting in the application of pressure to adjacent pressure cells or physical pressure to the container. The pressure cells are interconnected by a gas control assembly including a thermally activated pressure relief device, a manual safety shut-off valve, and means for connecting the fuel storage system to a vehicle power source and a refueling adapter. The gas control assembly is enclosed by a protective cover attached to the container. The system is attached to the vehicle with straps to enable the chassis to deform as intended in a high-speed collision.

  6. Compressed sensing for distributed systems

    CERN Document Server

    Coluccia, Giulio; Magli, Enrico

    2015-01-01

    This book presents a survey of the state-of-the art in the exciting and timely topic of compressed sensing for distributed systems. It has to be noted that, while compressed sensing has been studied for some time now, its distributed applications are relatively new. Remarkably, such applications are ideally suited to exploit all the benefits that compressed sensing can provide. The objective of this book is to provide the reader with a comprehensive survey of this topic, from the basic concepts to different classes of centralized and distributed reconstruction algorithms, as well as a comparison of these techniques. This book collects different contributions on these aspects. It presents the underlying theory in a complete and unified way for the first time, presenting various signal models and their use cases. It contains a theoretical part collecting latest results in rate-distortion analysis of distributed compressed sensing, as well as practical implementations of algorithms obtaining performance close to...

  7. A review on compressed pattern matching

    Directory of Open Access Journals (Sweden)

    Surya Prakash Mishra

    2016-09-01

    Full Text Available Compressed pattern matching (CPM) refers to the task of locating all the occurrences of a pattern (or set of patterns) inside the body of compressed text. In this type of matching, the pattern may or may not be compressed. CPM is very useful in handling large volumes of data, especially over the network. It has many applications in computational biology, where it is useful in finding similar trends in DNA sequences, as well as in intrusion detection over networks, big data analytics, etc. Various solutions have been provided by researchers in which the pattern is matched directly over uncompressed text. Such solutions require a lot of space and consume a lot of time when handling big data. Various researchers have proposed efficient solutions for compression, but very few exist for pattern matching over compressed text. Considering the future trend, where data size is increasing exponentially day by day, CPM has become a desirable task. This paper presents a critical review of recent techniques for compressed pattern matching. The covered techniques include word-based Huffman codes, word-based tagged codes, and wavelet-tree-based indexing. We present a comparative analysis of all the techniques mentioned above and highlight their advantages and disadvantages.

  8. 30 CFR 57.13020 - Use of compressed air.

    Science.gov (United States)

    2010-07-01

    Title 30, Mineral Resources (2010-07-01 edition), Mine Safety and Health, Safety and Health Standards - Underground Metal and Nonmetal Mines, Compressed Air and Boilers, § 57.13020 Use of compressed air: At no time shall compressed air be directed toward a ...

  9. Relationship between the edgewise compression strength of ...

    African Journals Online (AJOL)

    The results of this study were used to determine the linear regression constants in the Maltenfort model by correlating the measured board edgewise compression strength (ECT) with the predicted strength, using the paper components' compression strengths, measured with the short-span compression test (SCT) and the ...

  10. ROI-based DICOM image compression for telemedicine

    Indian Academy of Sciences (India)

    ... ground and reconstruct the image portions losslessly. The compressed image can ... If the image is compressed by 8:1 compression without any perceptual distortion, the ... [Figure 2: cross-sectional view of a medical image (statistical representation).] The Integer Wavelet Transform (IWT) is used for lossless processing.

  11. A model for the acceleration of laser irradiated targets

    International Nuclear Information System (INIS)

    Babonneau, D.; Di Bona, G.; Fortin, X.

    1986-11-01

    Starting from the self-similar propagation of an electronic conduction wave and the consequent ablation pressure, we describe, in a simplified way, the shock ahead of this wave, then the effects of the rarefaction and compression waves which follow the shock emergence at the target rear surface. So, we obtain the temporal evolution of the rear velocity which is compared with the experimental one. For thick targets, the shock alone is able to emerge during the experimental time and consequently gives the velocity v_min. For thin targets, besides the shock accumulation mechanism, it is necessary to take into account the electronic heat wave emergence, that is to say the 'complete' ablation of the target, which gives the velocity v_max.

  12. Eccentric crank variable compression ratio mechanism

    Science.gov (United States)

    Lawrence, Keith Edward [Kobe, JP; Moser, William Elliott [Peoria, IL; Roozenboom, Stephan Donald [Washington, IL; Knox, Kevin Jay [Peoria, IL

    2008-05-13

    A variable compression ratio mechanism for an internal combustion engine that has an engine block and a crankshaft is disclosed. The variable compression ratio mechanism has a plurality of eccentric disks configured to support the crankshaft. Each of the plurality of eccentric disks has at least one cylindrical portion annularly surrounded by the engine block. The variable compression ratio mechanism also has at least one actuator configured to rotate the plurality of eccentric disks.

  13. How Wage Compression Affects Job Turnover

    OpenAIRE

    Heyman, Fredrik

    2008-01-01

    I use Swedish establishment-level panel data to test Bertola and Rogerson’s (1997) hypothesis of a positive relation between the degree of wage compression and job reallocation. Results indicate that the effect of wage compression on job turnover is positive and significant in the manufacturing sector. The wage compression effect is stronger on job destruction than on job creation, consistent with downward wage rigidity. Further results include a strong positive relationship between the fract...

  14. CoGI: Towards Compressing Genomes as an Image.

    Science.gov (United States)

    Xie, Xiaojing; Zhou, Shuigeng; Guan, Jihong

    2015-01-01

    Genomic science is now facing an explosive increase of data thanks to the fast development of sequencing technology. This situation poses serious challenges to genomic data storage and transferring. It is desirable to compress data to reduce storage and transferring cost, and thus to boost data distribution and utilization efficiency. Up to now, a number of algorithms / tools have been developed for compressing genomic sequences. Unlike the existing algorithms, most of which treat genomes as one-dimensional text strings and compress them based on dictionaries or probability models, this paper proposes a novel approach called CoGI (the abbreviation of Compressing Genomes as an Image) for genome compression, which transforms the genomic sequences to a two-dimensional binary image (or bitmap), then applies a rectangular partition coding algorithm to compress the binary image. CoGI can be used as either a reference-based compressor or a reference-free compressor. For the former, we develop two entropy-based algorithms to select a proper reference genome. Performance evaluation is conducted on various genomes. Experimental results show that the reference-based CoGI significantly outperforms two state-of-the-art reference-based genome compressors GReEn and RLZ-opt in both compression ratio and compression efficiency. It also achieves comparable compression ratio but two orders of magnitude higher compression efficiency in comparison with XM--one state-of-the-art reference-free genome compressor. Furthermore, our approach performs much better than Gzip--a general-purpose and widely-used compressor, in both compression speed and compression ratio. So, CoGI can serve as an effective and practical genome compressor. The source code and other related documents of CoGI are available at: http://admis.fudan.edu.cn/projects/cogi.htm.
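
    A toy sketch of the core transformation only (genomic sequence to two-dimensional binary image) is shown below; the 2-bit base assignment and the image width are arbitrary assumptions, and CoGI's rectangular partition coder and reference-selection steps are not reproduced.

        import numpy as np

        BASE_BITS = {"A": (0, 0), "C": (0, 1), "G": (1, 0), "T": (1, 1)}

        def genome_to_bitmap(sequence, width=64):
            """Map a DNA string to a 2-D binary image, two bits per base, row-major."""
            bits = [b for base in sequence for b in BASE_BITS.get(base, (0, 0))]
            bits.extend([0] * ((-len(bits)) % width))     # pad the last row
            return np.array(bits, dtype=np.uint8).reshape(-1, width)

        bitmap = genome_to_bitmap("ACGTACGTTTGACCA" * 100)
        print(bitmap.shape, bitmap.dtype)
        # The resulting bi-level image is what a rectangular partition coder
        # (or any binary image coder) would then compress.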

  15. 30 CFR 56.13020 - Use of compressed air.

    Science.gov (United States)

    2010-07-01

    Title 30, Mineral Resources (2010-07-01 edition), Mine Safety and Health, Safety and Health Standards - Surface Metal and Nonmetal Mines, Compressed Air and Boilers, § 56.13020 Use of compressed air: At no time shall compressed air be directed toward a person ...

  16. Cloud Optimized Image Format and Compression

    Science.gov (United States)

    Becker, P.; Plesea, L.; Maurer, T.

    2015-04-01

    Cloud-based image storage and processing requires re-evaluation of formats and processing methods. For the true value of the massive volumes of earth observation data to be realized, the image data needs to be accessible from the cloud. Traditional file formats such as TIF and NITF were developed in the heyday of the desktop and assumed fast, low-latency file access. Other formats such as JPEG2000 provide for streaming protocols for pixel data, but still require a server to have file access. These concepts no longer truly hold in cloud-based elastic storage and computation environments. This paper will provide details of a newly evolving image storage format (MRF) and compression that is optimized for cloud environments. Although the cost of storage continues to fall for large data volumes, there is still significant value in compression. For imagery data to be used in analysis and exploit the extended dynamic range of the new sensors, lossless or controlled lossy compression is of high value. Compression decreases the data volumes stored and reduces the data transferred, but the reduced data size must be balanced with the CPU required to decompress. The paper also outlines a new compression algorithm (LERC) for imagery and elevation data that optimizes this balance. Advantages of the compression include its simple-to-implement algorithm, which enables it to be efficiently accessed using JavaScript. Combining this new cloud-based image storage format and compression will help resolve some of the challenges of big image data on the internet.

  17. An efficient adaptive arithmetic coding image compression technology

    International Nuclear Information System (INIS)

    Wang Xing-Yuan; Yun Jiao-Jiao; Zhang Yong-Lei

    2011-01-01

    This paper proposes an efficient lossless compression scheme for still images based on an adaptive arithmetic coding algorithm. The algorithm increases the image compression rate while ensuring the quality of the decoded image by combining an adaptive probability model with predictive coding. The use of an adaptive model for each encoded image block dynamically estimates the probability of the relevant image block, and the decoded image block can accurately recover the encoded image according to the code-book information. Adopting adaptive arithmetic coding for image compression greatly improves the compression rate, and the results show that it is an effective compression technology.
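
    A minimal sketch of the adaptive part only is given below: an order-0 frequency model whose counts, and therefore the cumulative intervals an arithmetic coder would consume, are updated as each symbol of a block is seen. The full coder, predictor and code-book handling described in the paper are not reproduced.

        class AdaptiveModel:
            """Order-0 adaptive frequency model over a byte alphabet."""

            def __init__(self, alphabet_size=256):
                self.counts = [1] * alphabet_size   # start uniform, no zero counts
                self.total = alphabet_size

            def interval(self, symbol):
                """Cumulative probability interval [low, high) for `symbol`."""
                low = sum(self.counts[:symbol]) / self.total
                return low, low + self.counts[symbol] / self.total

            def update(self, symbol):
                """Adapt the model after coding `symbol`."""
                self.counts[symbol] += 1
                self.total += 1

        model = AdaptiveModel()
        for symbol in b"pixels of one image block":
            low, high = model.interval(symbol)   # an arithmetic coder would narrow
            model.update(symbol)                 # its range to [low, high) here
        print(model.interval(ord(" ")))          # frequent symbols get wider intervals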

  18. Modeling the mechanical and compression properties of polyamide/elastane knitted fabrics used in compression sportswear

    NARCIS (Netherlands)

    Maqsood, Muhammad

    2016-01-01

    A compression sportswear fabric should have excellent stretch and recovery properties in order to improve the performance of the sportsman. The objective of this study was to investigate the effect of elastane linear density and loop length on the stretch, recovery, and compression properties of the

  19. An analysis of the efficacy of bag-valve-mask ventilation and chest compression during different compression-ventilation ratios in manikin-simulated paediatric resuscitation.

    Science.gov (United States)

    Kinney, S B; Tibballs, J

    2000-01-01

    The ideal chest compression and ventilation ratio for children during performance of cardiopulmonary resuscitation (CPR) has not been determined. The efficacy of chest compression and ventilation during compression-ventilation ratios of 5:1, 10:2 and 15:2 was examined. Eighteen nurses, working in pairs, were instructed to provide chest compression and bag-valve-mask ventilation for 1 min with each ratio in random order on a child-sized manikin. The subjects had been previously taught paediatric CPR within the last 3 or 5 months. The efficacy of ventilation was assessed by measurement of the expired tidal volume and the number of breaths provided. The rate of chest compression was guided by a metronome set at 100/min. The efficacy of chest compressions was assessed by measurement of the rate and depth of compression. There was no significant difference in the mean tidal volume or the percentage of effective chest compressions delivered for each compression-ventilation ratio. The number of breaths delivered was greatest with the ratio of 5:1. The percentage of effective chest compressions was equal with all three methods but the number of effective chest compressions was greatest with a ratio of 5:1. This study supports the use of a compression-ventilation ratio of 5:1 during two-rescuer paediatric cardiopulmonary resuscitation.

  20. Task-oriented lossy compression of magnetic resonance images

    Science.gov (United States)

    Anderson, Mark C.; Atkins, M. Stella; Vaisey, Jacques

    1996-04-01

    A new task-oriented image quality metric is used to quantify the effects of distortion introduced into magnetic resonance images by lossy compression. This metric measures the similarity between a radiologist's manual segmentation of pathological features in the original images and the automated segmentations performed on the original and compressed images. The images are compressed using a general wavelet-based lossy image compression technique, embedded zerotree coding, and segmented using a three-dimensional stochastic model-based tissue segmentation algorithm. The performance of the compression system is then enhanced by compressing different regions of the image volume at different bit rates, guided by prior knowledge about the location of important anatomical regions in the image. Application of the new system to magnetic resonance images is shown to produce compression results superior to the conventional methods, both subjectively and with respect to the segmentation similarity metric.

  1. Light-weight reference-based compression of FASTQ data.

    Science.gov (United States)

    Zhang, Yongpeng; Li, Linsen; Yang, Yanli; Yang, Xiao; He, Shan; Zhu, Zexuan

    2015-06-09

    The exponential growth of next generation sequencing (NGS) data has posed big challenges to data storage, management and archive. Data compression is one of the effective solutions, where reference-based compression strategies can typically achieve superior compression ratios compared to the ones not relying on any reference. This paper presents a lossless light-weight reference-based compression algorithm namely LW-FQZip to compress FASTQ data. The three components of any given input, i.e., metadata, short reads and quality score strings, are first parsed into three data streams in which the redundancy information are identified and eliminated independently. Particularly, well-designed incremental and run-length-limited encoding schemes are utilized to compress the metadata and quality score streams, respectively. To handle the short reads, LW-FQZip uses a novel light-weight mapping model to fast map them against external reference sequence(s) and produce concise alignment results for storage. The three processed data streams are then packed together with some general purpose compression algorithms like LZMA. LW-FQZip was evaluated on eight real-world NGS data sets and achieved compression ratios in the range of 0.111-0.201. This is comparable or superior to other state-of-the-art lossless NGS data compression algorithms. LW-FQZip is a program that enables efficient lossless FASTQ data compression. It contributes to the state of art applications for NGS data storage and transmission. LW-FQZip is freely available online at: http://csse.szu.edu.cn/staff/zhuzx/LWFQZip.
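
    A rough sketch of the stream-splitting step only, with Python's lzma standing in for the general-purpose back end; the incremental and run-length-limited encoders and the light-weight read mapper that give LW-FQZip its ratios are not reproduced.

        import lzma

        def split_fastq(text):
            """Split FASTQ text into metadata, read and quality-score streams."""
            ids, reads, quals = [], [], []
            lines = text.strip().splitlines()
            for i in range(0, len(lines), 4):        # FASTQ records are 4 lines each
                ids.append(lines[i])
                reads.append(lines[i + 1])
                quals.append(lines[i + 3])
            return "\n".join(ids), "\n".join(reads), "\n".join(quals)

        record = "@read1 lane1\nACGTACGTAA\n+\nIIIIIIHHHG\n"
        meta, reads, quals = split_fastq(record * 1000)
        for name, stream in (("meta", meta), ("reads", reads), ("quals", quals)):
            packed = lzma.compress(stream.encode())
            print(f"{name:6s}: {len(stream):7d} -> {len(packed):6d} bytes")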

  2. New bean products to improve food security | CRDI - Centre de ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    21 Apr 2016 ... New bean products to improve food security. New bean products that cook easily should improve the food and nutritional security of households with low ...

  3. Oil-free centrifugal hydrogen compression technology demonstration

    Energy Technology Data Exchange (ETDEWEB)

    Heshmat, Hooshang [Mohawk Innovative Technology Inc., Albany, NY (United States)

    2014-05-31

    One of the key elements in realizing a mature market for hydrogen vehicles is the deployment of a safe and efficient hydrogen production and delivery infrastructure on a scale that can compete economically with current fuels. The challenge, however, is that hydrogen, being the lightest and smallest of gases with a lower viscosity and density than natural gas, readily migrates through small spaces and is difficult to compress efficiently. While efficient and cost-effective compression technology is crucial to effective pipeline delivery of hydrogen, the compression methods used currently rely on oil lubricated positive displacement (PD) machines. PD compression technology is very costly, has poor reliability and durability, especially for components subjected to wear (e.g., valves, rider bands and piston rings) and contaminates hydrogen with lubricating fluid. Even so-called “oil-free” machines use oil lubricants that migrate into and contaminate the gas path. Due to the poor reliability of PD compressors, current hydrogen producers often install duplicate units in order to maintain on-line times of 98-99%. Such machine redundancy adds substantially to system capital costs. As such, DOE deemed that low capital cost, reliable, efficient and oil-free advanced compressor technologies are needed. MiTi’s solution is a completely oil-free, multi-stage, high-speed, centrifugal compressor designed for flow capacity of 500,000 kg/day with a discharge pressure of 1200 psig. The design employs oil-free compliant foil bearings and seals to allow for very high operating speeds, totally contamination free operation, long life and reliability. This design meets the DOE’s performance targets and achieves an extremely aggressive, specific power metric of 0.48 kW-hr/kg and provides significant improvements in reliability/durability, energy efficiency, sealing and freedom from contamination. The multi-stage compressor system concept has been validated through full scale

  4. Physics Based Modeling of Compressible Turbulence

    Science.gov (United States)

    2016-11-07

    AFRL-AFOSR-VA-TR-2016-0345, Physics-Based Modeling of Compressible Turbulence, Parviz Moin, Leland Stanford Junior University, CA. Final report on the AFOSR project (FA9550-11-1-0111) entitled: Physics based modeling of compressible turbulence. The period of performance was June 15, 2011 ...

  5. Comparison of compression properties of stretchable knitted fabrics and bi-stretch woven fabrics for compression garments

    NARCIS (Netherlands)

    Maqsood, Muhammad

    2017-01-01

    Stretchable fabrics have diverse applications ranging from casual apparel to performance sportswear and compression therapy. Compression therapy is the universally accepted treatment for the management of hypertrophic scarring after severe burns. Mostly stretchable knitted fabrics are used in

  6. Compressed Air/Vacuum Transportation Techniques

    Science.gov (United States)

    Guha, Shyamal

    2011-03-01

    General theory of compressed air/vacuum transportation will be presented. In this transportation, a vehicle (such as an automobile or a rail car) is powered either by compressed air or by air at near vacuum pressure. Four versions of such transportation are feasible. In all versions, a 'c-shaped' plastic or ceramic pipe lies buried a few inches under the ground surface. This pipe carries compressed air or air at near vacuum pressure. In type I transportation, a vehicle draws compressed air (or vacuum) from this buried pipe. Using a turbine or reciprocating air cylinder, mechanical power is generated from compressed air (or from vacuum). This mechanical power, transferred to the wheels of an automobile (or a rail car), drives the vehicle. In type II-IV transportation techniques, a horizontal force is generated inside the plastic (or ceramic) pipe. A set of vertical and horizontal steel bars is used to transmit this force to the automobile on the road (or to a rail car on a rail track). The proposed transportation system has the following merits: virtually accident free; highly energy efficient; pollution free; and it will not contribute to carbon dioxide emission. Some developmental work on this transportation will be needed before it can be used by the traveling public. The entire transportation system could be computer controlled.

  7. Wave energy devices with compressible volumes.

    Science.gov (United States)

    Kurniawan, Adi; Greaves, Deborah; Chaplin, John

    2014-12-08

    We present an analysis of wave energy devices with air-filled compressible submerged volumes, where variability of volume is achieved by means of a horizontal surface free to move up and down relative to the body. An analysis of bodies without power take-off (PTO) systems is first presented to demonstrate the positive effects a compressible volume could have on the body response. Subsequently, two compressible device variations are analysed. In the first variation, the compressible volume is connected to a fixed volume via an air turbine for PTO. In the second variation, a water column separates the compressible volume from another volume, which is fitted with an air turbine open to the atmosphere. Both floating and bottom-fixed, axisymmetric, configurations are considered, and linear analysis is employed throughout. Advantages and disadvantages of each device are examined in detail. Some configurations with displaced volumes less than 2000 m³ and with constant turbine coefficients are shown to be capable of achieving 80% of the theoretical maximum absorbed power over a wave period range of about 4 s.

  8. Isostatic compression of buffer blocks. Middle scale

    International Nuclear Information System (INIS)

    Ritola, J.; Pyy, E.

    2012-01-01

    Manufacturing of buffer components using isostatic compression method has been studied in small scale in 2008 (Laaksonen 2010). These tests included manufacturing of buffer blocks using different bentonite materials and different compression pressures. Isostatic mould technology was also tested, along with different methods to fill the mould, such as vibration and partial vacuum, as well as a stepwise compression of the blocks. The development of manufacturing techniques has continued with small-scale (30 %) blocks (diameter 600 mm) in 2009. This was done in a separate project: Isostatic compression, manufacturing and testing of small scale (D = 600 mm) buffer blocks. The research on the isostatic compression method continued in 2010 in a project aimed to test and examine the isostatic manufacturing process of buffer blocks at 70 % scale (block diameter 1200 to 1300 mm), and the aim was to continue in 2011 with full-scale blocks (diameter 1700 mm). A total of nine bentonite blocks were manufactured at 70 % scale, of which four were ring-shaped and the rest were cylindrical. It is currently not possible to manufacture full-scale blocks, because there is no sufficiently large isostatic press available. However, such a compression unit is expected to be possible to use in the near future. The test results of bentonite blocks, produced with an isostatic pressing method at different presses and at different sizes, suggest that the technical characteristics, for example bulk density and strength values, are somewhat independent of the size of the block, and that the blocks have fairly homogenous characteristics. Water content and compression pressure are the two most important properties determining the characteristics of the compressed blocks. By adjusting these two properties it is fairly easy to produce blocks at a desired density. The commonly used compression pressure in the manufacturing of bentonite blocks is 100 MPa, which compresses bentonite to approximately

  9. Isostatic compression of buffer blocks. Middle scale

    Energy Technology Data Exchange (ETDEWEB)

    Ritola, J.; Pyy, E. [VTT Technical Research Centre of Finland, Espoo (Finland)

    2012-01-15

    Manufacturing of buffer components using isostatic compression method has been studied in small scale in 2008 (Laaksonen 2010). These tests included manufacturing of buffer blocks using different bentonite materials and different compression pressures. Isostatic mould technology was also tested, along with different methods to fill the mould, such as vibration and partial vacuum, as well as a stepwise compression of the blocks. The development of manufacturing techniques has continued with small-scale (30 %) blocks (diameter 600 mm) in 2009. This was done in a separate project: Isostatic compression, manufacturing and testing of small scale (D = 600 mm) buffer blocks. The research on the isostatic compression method continued in 2010 in a project aimed to test and examine the isostatic manufacturing process of buffer blocks at 70 % scale (block diameter 1200 to 1300 mm), and the aim was to continue in 2011 with full-scale blocks (diameter 1700 mm). A total of nine bentonite blocks were manufactured at 70 % scale, of which four were ring-shaped and the rest were cylindrical. It is currently not possible to manufacture full-scale blocks, because there is no sufficiently large isostatic press available. However, such a compression unit is expected to be possible to use in the near future. The test results of bentonite blocks, produced with an isostatic pressing method at different presses and at different sizes, suggest that the technical characteristics, for example bulk density and strength values, are somewhat independent of the size of the block, and that the blocks have fairly homogenous characteristics. Water content and compression pressure are the two most important properties determining the characteristics of the compressed blocks. By adjusting these two properties it is fairly easy to produce blocks at a desired density. The commonly used compression pressure in the manufacturing of bentonite blocks is 100 MPa, which compresses bentonite to approximately

  10. Fast lossless compression via cascading Bloom filters.

    Science.gov (United States)

    Rozov, Roye; Shamir, Ron; Halperin, Eran

    2014-01-01

    Data from large Next Generation Sequencing (NGS) experiments present challenges both in terms of costs associated with storage and in time required for file transfer. It is sometimes possible to store only a summary relevant to particular applications, but generally it is desirable to keep all information needed to revisit experimental results in the future. Thus, the need for efficient lossless compression methods for NGS reads arises. It has been shown that NGS-specific compression schemes can improve results over generic compression methods, such as the Lempel-Ziv algorithm, Burrows-Wheeler transform, or Arithmetic Coding. When a reference genome is available, effective compression can be achieved by first aligning the reads to the reference genome, and then encoding each read using the alignment position combined with the differences in the read relative to the reference. These reference-based methods have been shown to compress better than reference-free schemes, but the alignment step they require demands several hours of CPU time on a typical dataset, whereas reference-free methods can usually compress in minutes. We present a new approach that achieves highly efficient compression by using a reference genome, but completely circumvents the need for alignment, affording a great reduction in the time needed to compress. In contrast to reference-based methods that first align reads to the genome, we hash all reads into Bloom filters to encode, and decode by querying the same Bloom filters using read-length subsequences of the reference genome. Further compression is achieved by using a cascade of such filters. Our method, called BARCODE, runs an order of magnitude faster than reference-based methods, while compressing an order of magnitude better than reference-free methods, over a broad range of sequencing coverage. In high coverage (50-100 fold), compared to the best tested compressors, BARCODE saves 80-90% of the running time while only increasing space
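
    The underlying idea can be sketched with a single Bloom filter: reads are hashed into a bit array, and decoding queries the filter with read-length windows of the reference genome. The cascade of filters that BARCODE uses to resolve false positives is not reproduced, and the sizes and hash choices below are arbitrary.

        import hashlib

        class BloomFilter:
            def __init__(self, n_bits=1 << 20, n_hashes=4):
                self.n_bits, self.n_hashes = n_bits, n_hashes
                self.bits = bytearray(n_bits // 8)

            def _positions(self, item):
                for i in range(self.n_hashes):
                    digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
                    yield int.from_bytes(digest[:8], "big") % self.n_bits

            def add(self, item):
                for pos in self._positions(item):
                    self.bits[pos // 8] |= 1 << (pos % 8)

            def __contains__(self, item):
                return all(self.bits[pos // 8] & (1 << (pos % 8))
                           for pos in self._positions(item))

        read = "ACGTACGTACGTACGTACGTACGTACGTAC"       # a 30-base "read"
        stored = BloomFilter()
        stored.add(read)
        reference = "TT" + read + "TT"                # toy reference genome
        hits = [reference[i:i + len(read)]
                for i in range(len(reference) - len(read) + 1)
                if reference[i:i + len(read)] in stored]
        print(hits)                                   # the stored read is recovered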

  11. An unusual case: right proximal ureteral compression by the ovarian vein and distal ureteral compression by the external iliac vein

    Directory of Open Access Journals (Sweden)

    Halil Ibrahim Serin

    2015-12-01

    Full Text Available A 32-year-old woman presented to the emergency room of Bozok University Research Hospital with right renal colic. Multidetector computed tomography (MDCT) showed compression of the proximal ureter by the right ovarian vein and compression of the right distal ureter by the right external iliac vein. To the best of our knowledge, right proximal ureteral compression by the ovarian vein together with distal ureteral compression by the external iliac vein has not been reported in the literature. Ovarian vein and external iliac vein compression should be considered in patients presenting to the emergency room with renal colic or low back pain and a dilated collecting system.

  12. Quantization Distortion in Block Transform-Compressed Data

    Science.gov (United States)

    Boden, A. F.

    1995-01-01

    The popular JPEG image compression standard is an example of a block transform-based compression scheme; the image is systematically subdivided into blocks that are individually transformed, quantized, and encoded. The compression is achieved by quantizing the transformed data, reducing the data entropy and thus facilitating efficient encoding. A generic block transform model is introduced.
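
    A minimal sketch of such a generic model: split the image into 8x8 blocks, transform each with a 2-D DCT, and quantize the coefficients uniformly. Using scipy's DCT and one flat quantization step instead of the JPEG standard tables is an assumption made for brevity.

        import numpy as np
        from scipy.fft import dctn, idctn

        def quantize_blocks(image, step=16.0):
            """Blockwise 8x8 DCT, uniform quantization, and reconstruction."""
            h, w = (d - d % 8 for d in image.shape)      # crop to whole blocks
            out = np.empty((h, w))
            for y in range(0, h, 8):
                for x in range(0, w, 8):
                    coeffs = dctn(image[y:y + 8, x:x + 8], norm="ortho")
                    q = np.round(coeffs / step)          # quantization: the lossy step
                    out[y:y + 8, x:x + 8] = idctn(q * step, norm="ortho")
            return out

        rng = np.random.default_rng(2)
        img = rng.uniform(0, 255, size=(64, 64))
        rec = quantize_blocks(img)
        print("RMS quantization distortion:", np.sqrt(np.mean((img - rec) ** 2)))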

  13. Pulse Compression of Phase-matched High Harmonic Pulses from a Time-Delay Compensated Monochromator

    Directory of Open Access Journals (Sweden)

    Ito Motohiko

    2013-03-01

    Full Text Available Pulse compression of single 32.6-eV high harmonic pulses from a time-delay compensated monochromator was demonstrated down to 11±3 fs by compensating the pulse front tilt. The photon flux was intensified up to 5.7×10^9 photons/s on target by implementing high harmonic generation under a phase matching condition in a hollow fiber used for increasing the interaction length.

  14. Current concepts of percutaneous balloon kyphoplasty for the treatment of osteoporotic vertebral compression fractures: Evidence-based review

    Directory of Open Access Journals (Sweden)

    Ming-Kai Hsieh

    2013-08-01

    Full Text Available Vertebral compression fractures constitute a major health care problem, not only because of their high incidence but also due to both direct and indirect consequences on health-related quality of life and health care expenditures. The mainstay of management for symptomatic vertebral compression fractures is targeted medical therapy, including analgesics, bed rest, external fixation, and rehabilitation. However, anti-inflammatory drugs and certain types of analgesics can be poorly tolerated by elderly patients, and surgical fixation often fails due to the poor quality of osteoporotic bone. Balloon kyphoplasty and vertebroplasty are two minimally invasive percutaneous surgical approaches that have recently been developed for the management of symptomatic vertebral compression fractures. The purpose of this study was to perform a comprehensive review of the literature and conduct a meta-analysis to compare clinical outcomes of pain relief and function, radiographic outcomes of the restoration of anterior vertebral height and kyphotic angles, and subsequent complications associated with these two techniques.

  15. Compressed Air Production Using Vehicle Suspension

    OpenAIRE

    Ninad Arun Malpure; Sanket Nandlal Bhansali

    2015-01-01

    Generally, compressed air is produced using different types of air compressors, which consume a lot of electric energy and are noisy. In this paper an innovative idea is put forth for the production of compressed air using the movement of the vehicle suspension, which is normally wasted. The conversion of this force energy into compressed air is carried out by a mechanism which consists of the vehicle suspension system, a hydraulic cylinder, a non-return valve, an air compressor and an air receiver. We are co...

  16. A Novel CAE Method for Compression Molding Simulation of Carbon Fiber-Reinforced Thermoplastic Composite Sheet Materials

    Directory of Open Access Journals (Sweden)

    Yuyang Song

    2018-06-01

    Full Text Available Its high specific strength and stiffness with lower cost make discontinuous fiber-reinforced thermoplastic (FRT) materials an ideal choice for lightweight applications in the automotive industry. Compression molding is one of the preferred manufacturing processes for such materials as it offers the opportunity to maintain a longer fiber length and higher volume production. In the past, we have demonstrated that compression molding of FRT in bulk form can be simulated by treating melt flow as a continuum using the conservation of mass and momentum equations. However, the compression molding of such materials in sheet form using a similar approach does not work well. The assumption of melt flow as a continuum does not hold for such deformation processes. To address this challenge, we have developed a novel simulation approach. First, the draping of the sheet was simulated as a structural deformation using the explicit finite element approach. Next, the draped shape was compressed using fluid mechanics equations. The proposed method was verified by building a physical part and comparing the predicted fiber orientation and warpage with measurements performed on the physical parts. The developed method and tools are expected to help in expediting the development of FRT parts, which will help achieve lightweight targets in the automotive industry.

  17. Images compression in nuclear medicine

    International Nuclear Information System (INIS)

    Rebelo, M.S.; Furuie, S.S.; Moura, L.

    1992-01-01

    The performance of two methods for image compression in nuclear medicine was evaluated: the precise LZW method and the approximate cosine-transform method. The results show that the approximate method produced images of a quality acceptable for visual analysis, with compression rates considerably higher than those of the precise method. (C.G.C.)

  18. Huffman-based code compression techniques for embedded processors

    KAUST Repository

    Bonny, Mohamed Talal

    2010-09-01

    The size of embedded software is increasing at a rapid pace. It is often challenging and time consuming to fit an amount of required software functionality within a given hardware resource budget. Code compression is a means to alleviate the problem by providing substantial savings in terms of code size. In this article we introduce a novel and efficient hardware-supported compression technique that is based on Huffman Coding. Our technique reduces the size of the generated decoding table, which takes a large portion of the memory. It combines our previous techniques, the Instruction Splitting Technique and the Instruction Re-encoding Technique, into a new one called the Combined Compression Technique to improve the final compression ratio by taking advantage of both previous techniques. The Instruction Splitting Technique is instruction set architecture (ISA)-independent. It splits the instructions into portions of varying size (called patterns) before Huffman coding is applied. This technique improves the final compression ratio by more than 20% compared to other known schemes based on Huffman Coding. The average compression ratios achieved using this technique are 48% and 50% for ARM and MIPS, respectively. The Instruction Re-encoding Technique is ISA-dependent. It investigates the benefits of reencoding unused bits (we call them reencodable bits) in the instruction format for a specific application to improve the compression ratio. Reencoding those bits can reduce the size of decoding tables by up to 40%. Using this technique, we improve the final compression ratios in comparison to the first technique to 46% and 45% for ARM and MIPS, respectively (including all incurred overhead). The Combined Compression Technique improves the compression ratio to 45% and 42% for ARM and MIPS, respectively. In our compression technique, we have conducted evaluations using a representative set of applications and we have applied each technique to two major embedded processor architectures
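
    For reference, a minimal Huffman coder over byte symbols is sketched below; it shows only the generic coding step, not the instruction-splitting, bit re-encoding or decoding-table reduction that the article contributes.

        import heapq
        from collections import Counter

        def huffman_code(data: bytes) -> dict:
            """Build a Huffman code (symbol -> bit string) for the given data."""
            heap = [[freq, i, {sym: ""}]
                    for i, (sym, freq) in enumerate(Counter(data).items())]
            heapq.heapify(heap)
            while len(heap) > 1:
                f1, _, c1 = heapq.heappop(heap)
                f2, i2, c2 = heapq.heappop(heap)
                merged = {s: "0" + c for s, c in c1.items()}
                merged.update({s: "1" + c for s, c in c2.items()})
                heapq.heappush(heap, [f1 + f2, i2, merged])
            return heap[0][2]

        data = b"add r1, r2, r3; add r1, r2, r4; sub r1, r2, r3"   # toy "code"
        code = huffman_code(data)
        encoded_bits = sum(len(code[b]) for b in data)
        print(f"{len(data) * 8} bits raw -> {encoded_bits} bits Huffman-coded")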

  19. Source-to-target simulation of simultaneous longitudinal and transverse focusing of heavy ion beams

    Directory of Open Access Journals (Sweden)

    D. R. Welch

    2008-06-01

    Full Text Available Longitudinal bunching factors in excess of 70 of a 300-keV, 27-mA K⁺ ion beam have been demonstrated in the neutralized drift compression experiment [P. K. Roy et al., Phys. Rev. Lett. 95, 234801 (2005)] in rough agreement with particle-in-cell source-to-target simulations. A key aspect of these experiments is that a preformed plasma provides charge neutralization of the ion beam in the last one meter drift region where the beam perveance becomes large. The simulations utilize the measured ion source temperature, diode voltage, and induction-bunching-module voltage waveforms in order to determine the initial beam longitudinal phase space which is critical to accurate modeling of the longitudinal compression. To enable simultaneous longitudinal and transverse compression, numerical simulations were used in the design of the solenoidal focusing system that compensated for the impact of the applied velocity tilt on the transverse phase space of the beam. Complete source-to-target simulations, which include detailed modeling of the diode, magnetic transport, induction bunching module, and plasma neutralized transport, were critical to understanding the interplay between the various accelerator components in the experiment. Here, we compare simulation results with the experiment and discuss the contributions to longitudinal and transverse emittance that limit the final compression.
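
    A toy kinematic picture of drift compression, unrelated to the detailed simulations above: a head-to-tail velocity tilt makes a ballistically drifting pulse shorten until the tail catches the head. The numbers and the linear tilt below are purely illustrative, not the experiment's waveforms.

        import numpy as np

        n_ions = 10_000
        v0 = 1.0e6                       # mean ion velocity, m/s (illustrative)
        pulse_len = v0 * 500e-9          # initial length of a 500 ns pulse, m
        z = np.linspace(0.0, pulse_len, n_ions)
        tilt = 0.05                      # assumed 5% head-to-tail velocity spread
        v = v0 * (1.0 + tilt * (1.0 - z / pulse_len))   # tail (z = 0) is fastest

        # The tail overtakes the head after drifting about pulse_len / tilt = 10 m
        # in this toy example; sample the bunch length along the drift.
        for L in (0.0, 5.0, 9.5):
            t = L / v0
            spread = np.ptp(z + v * t)
            print(f"after {L:4.1f} m drift: bunch length = {spread * 100:6.2f} cm")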

  20. LSP Simulations of the Neutralized Drift Compression Experiment

    CERN Document Server

    Thoma, Carsten H; Gilson, Erik P; Henestroza, Enrique; Roy, Prabir K; Welch, Dale; Yu, Simon

    2005-01-01

    The Neutralized Drift Compression Experiment (NDCX) at Lawrence Berkeley National Laboratory involves the longitudinal compression of a singly-stripped K ion beam with a mean energy of 250 keV in a meter long plasma. We present simulation results of compression of the NDCX beam using the PIC code LSP. The NDCX beam encounters an acceleration gap with a time-dependent voltage that decelerates the front and accelerates the tail of a 500 ns pulse which is to be compressed 110 cm downstream. The simulations model both ideal and experimental voltage waveforms. Results show good longitudinal compression without significant emittance growth.

  1. Rectal perforation by compressed air.

    Science.gov (United States)

    Park, Young Jin

    2017-07-01

    As the use of compressed air in industrial work has increased, so has the risk of associated pneumatic injury from its improper use. However, damage to the large intestine caused by compressed air is uncommon. Herein a case of pneumatic rupture of the rectum is described. The patient was admitted to the Emergency Room complaining of abdominal pain and distension. His colleague had triggered a compressed air nozzle over his buttock. On arrival, vital signs were stable but physical examination revealed peritoneal irritation and marked distension of the abdomen. Computed tomography showed a large volume of air in the peritoneal cavity and subcutaneous emphysema at the perineum. A rectal perforation was found at laparotomy and the Hartmann procedure was performed.

  2. Comparison of Open-Hole Compression Strength and Compression After Impact Strength on Carbon Fiber/Epoxy Laminates for the Ares I Composite Interstage

    Science.gov (United States)

    Hodge, Andrew J.; Nettles, Alan T.; Jackson, Justin R.

    2011-01-01

    Notched (open hole) composite laminates were tested in compression. The effect on strength of various sizes of through holes was examined. Results were compared to the average stress criterion model. Additionally, laminated sandwich structures were damaged from low-velocity impact with various impact energy levels and different impactor geometries. The compression strength relative to damage size was compared to the notched compression result strength. Open-hole compression strength was found to provide a reasonable bound on compression after impact.

  3. Mathematical transforms and image compression: A review

    Directory of Open Access Journals (Sweden)

    Satish K. Singh

    2010-07-01

    Full Text Available It is well known that images, often used in a variety of computer and other scientific and engineering applications, are difficult to store and transmit due to their sizes. One possible solution to overcome this problem is to use an efficient digital image compression technique where an image is viewed as a matrix and then the operations are performed on the matrix. All the contemporary digital image compression systems use various mathematical transforms for compression. The compression performance is closely related to the performance by these mathematical transforms in terms of energy compaction and spatial frequency isolation by exploiting inter-pixel redundancies present in the image data. Through this paper, a comprehensive literature survey has been carried out and the pros and cons of various transform-based image compression models have also been discussed.

  4. Logarithmic compression methods for spectral data

    Science.gov (United States)

    Dunham, Mark E.

    2003-01-01

    A method is provided for logarithmic compression, transmission, and expansion of spectral data. A log Gabor transformation is made of incoming time series data to output spectral phase and logarithmic magnitude values. The output phase and logarithmic magnitude values are compressed by selecting only magnitude values above a selected threshold and corresponding phase values to transmit compressed phase and logarithmic magnitude values. A reverse log Gabor transformation is then performed on the transmitted phase and logarithmic magnitude values to output transmitted time series data to a user.

  5. Realizing Technologies for Magnetized Target Fusion

    Energy Technology Data Exchange (ETDEWEB)

    Wurden, Glen A. [Los Alamos National Laboratory

    2012-08-24

    Researchers are making progress with a range of magneto-inertial fusion (MIF) concepts. All of these approaches use the addition of a magnetic field to a target plasma, and then compress the plasma to fusion conditions. The beauty of MIF is that driver power requirements are reduced, compared to classical inertial fusion approaches, and simultaneously the compression timescales can be longer, and required implosion velocities are slower. The presence of a sufficiently large B-field expands the accessibility to ignition, even at lower values of the density-radius product, and can confine fusion alphas. A key constraint is that the lifetime of the MIF target plasma has to be matched to the timescale of the driver technology (whether liners, heavy ions, or lasers). To achieve sufficient burn-up fraction, scaling suggests that larger yields are more effective. To handle the larger yields (GJ level), thick liquid wall chambers are certainly desired (no plasma/neutron damage materials problem) and probably required. With larger yields, slower repetition rates (approximately 0.1-1 Hz) for this intrinsically pulsed approach to fusion are possible, which means that chamber clearing between pulses can be accomplished on timescales that are compatible with simple clearing techniques (flowing liquid droplet curtains). However, demonstration of the required reliable delivery of hundreds of MJ of energy, for millions of pulses per year, is an ongoing pulsed power technical challenge.

  6. A checkpoint compression study for high-performance computing systems

    Energy Technology Data Exchange (ETDEWEB)

    Ibtesham, Dewan [Univ. of New Mexico, Albuquerque, NM (United States). Dept. of Computer Science; Ferreira, Kurt B. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States). Scalable System Software Dept.; Arnold, Dorian [Univ. of New Mexico, Albuquerque, NM (United States). Dept. of Computer Science

    2015-02-17

    As high-performance computing systems continue to increase in size and complexity, higher failure rates and increased overheads for checkpoint/restart (CR) protocols have raised concerns about the practical viability of CR protocols for future systems. Previously, compression has proven to be a viable approach for reducing checkpoint data volumes and, thereby, reducing CR protocol overhead and improving application performance. In this article, we further explore compression-based CR optimization by examining its baseline performance and scaling properties, evaluating whether improved compression algorithms might lead to even better application performance, and comparing checkpoint compression against and alongside other software- and hardware-based optimizations. Our key findings are: (1) compression is a very viable CR optimization; (2) generic, text-based compression algorithms appear to perform near optimally for checkpoint data compression and faster compression algorithms will not lead to better application performance; (3) compression-based optimizations fare well against and alongside other software-based optimizations; and (4) while hardware-based optimizations outperform software-based ones, they are not as cost effective.
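
    A minimal sketch of the kind of measurement discussed above: serialize an in-memory application state and pass it through a generic compressor, recording the compression ratio and the time spent. The "checkpoint" contents and the choice of zlib are assumptions for the demo, not details of the cited study.

      # Checkpoint-compression measurement sketch (illustrative only).
      import pickle
      import time
      import zlib

      import numpy as np

      def compress_checkpoint(state, level=6):
          # Serialize the application state and compress it with a generic compressor.
          raw = pickle.dumps(state, protocol=pickle.HIGHEST_PROTOCOL)
          t0 = time.perf_counter()
          packed = zlib.compress(raw, level)
          elapsed = time.perf_counter() - t0
          return packed, len(raw) / len(packed), elapsed

      if __name__ == "__main__":
          # A toy checkpoint: a couple of solver arrays plus metadata (all invented).
          state = {
              "iteration": 1200,
              "field": np.zeros(1_000_000),          # constant data: a best case for compression
              "residuals": np.arange(10_000, dtype=np.float64),
          }
          packed, ratio, elapsed = compress_checkpoint(state)
          print(f"compression ratio {ratio:.1f}x in {elapsed * 1e3:.1f} ms")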

  7. Dynamic Relative Compression, Dynamic Partial Sums, and Substring Concatenation

    DEFF Research Database (Denmark)

    Bille, Philip; Christiansen, Anders Roy; Cording, Patrick Hagge

    2017-01-01

    Given a static reference string R and a source string S, a relative compression of S with respect to R is an encoding of S as a sequence of references to substrings of R. Relative compression schemes are a classic model of compression and have recently proved very successful for compressing highly repetitive massive data sets such as genomes and web-data. We initiate the study of relative compression in a dynamic setting where the compressed source string S is subject to edit operations. The goal is to maintain the compressed representation compactly, while supporting edits and allowing efficient random access to the (uncompressed) source string. We present new data structures that achieve optimal time for updates and queries while using space linear in the size of the optimal relative compression, for nearly all combinations of parameters. We also present solutions for restricted and extended sets ...

  8. Dynamic Relative Compression, Dynamic Partial Sums, and Substring Concatenation

    DEFF Research Database (Denmark)

    Bille, Philip; Cording, Patrick Hagge; Gørtz, Inge Li

    2016-01-01

    Given a static reference string R and a source string S, a relative compression of S with respect to R is an encoding of S as a sequence of references to substrings of R. Relative compression schemes are a classic model of compression and have recently proved very successful for compressing highly repetitive massive data sets such as genomes and web-data. We initiate the study of relative compression in a dynamic setting where the compressed source string S is subject to edit operations. The goal is to maintain the compressed representation compactly, while supporting edits and allowing efficient random access to the (uncompressed) source string. We present new data structures that achieve optimal time for updates and queries while using space linear in the size of the optimal relative compression, for nearly all combinations of parameters. We also present solutions for restricted and extended sets ...
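
    The sketch below illustrates the static relative-compression encoding described in the two records above: S is written as (position, length) references into the reference string R, with literal characters where no useful match exists. The dynamic data structures that are the papers' actual contribution (edits, random access) are not reproduced, and the greedy matcher and thresholds are assumptions for the demo.

      # Static relative compression sketch: encode S as references into R (illustrative only).
      def relative_compress(R: str, S: str, min_len: int = 2):
          phrases = []
          i = 0
          while i < len(S):
              best_pos, best_len = -1, 0
              # Greedy longest match of S[i:] against any substring of R (quadratic, demo only).
              for j in range(len(R)):
                  l = 0
                  while i + l < len(S) and j + l < len(R) and S[i + l] == R[j + l]:
                      l += 1
                  if l > best_len:
                      best_pos, best_len = j, l
              if best_len >= min_len:
                  phrases.append(("ref", best_pos, best_len))
                  i += best_len
              else:
                  phrases.append(("lit", S[i]))
                  i += 1
          return phrases

      def relative_decompress(R: str, phrases) -> str:
          out = []
          for p in phrases:
              if p[0] == "ref":
                  _, pos, length = p
                  out.append(R[pos:pos + length])
              else:
                  out.append(p[1])
          return "".join(out)

      if __name__ == "__main__":
          R = "ACGTACGTTGCAACGT"
          S = "ACGTTGCAACGTACGTX"
          enc = relative_compress(R, S)
          assert relative_decompress(R, enc) == S
          print(enc)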

  9. H.264/AVC Video Compression on Smartphones

    Science.gov (United States)

    Sharabayko, M. P.; Markov, N. G.

    2017-01-01

    In this paper, we studied the usage of H.264/AVC video compression tools by the flagship smartphones. The results show that only a subset of tools is used, meaning that there is still a potential to achieve higher compression efficiency within the H.264/AVC standard, but the most advanced smartphones are already reaching the compression efficiency limit of H.264/AVC.

  10. Lossless medical image compression with a hybrid coder

    Science.gov (United States)

    Way, Jing-Dar; Cheng, Po-Yuen

    1998-10-01

    The volume of medical image data is expected to increase dramatically in the next decade due to the widespread use of radiological images in medical diagnosis. The economics of distributing medical images dictate that data compression is essential. While lossy image compression exists, medical images must be recorded and transmitted losslessly before they reach the user, to avoid misdiagnosis caused by lost image data. Therefore, a low-complexity, high-performance lossless compression scheme that can approach the theoretical bound and operate in near real time is needed. In this paper, we propose a hybrid image coder to compress digitized medical images without any data loss. The hybrid coder consists of two key components: an embedded wavelet coder and a lossless run-length coder. In this system, the medical image is first compressed with the lossy wavelet coder, and the residual image between the original and the compressed image is further compressed with the run-length coder. Several optimization schemes have been used in these coders to increase the coding performance. It is shown that the proposed algorithm achieves a higher compression ratio than run-length entropy coders such as arithmetic, Huffman, and Lempel-Ziv coders.
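
    A minimal sketch of the hybrid "lossy base plus lossless residual" idea described above. Uniform quantization stands in for the embedded wavelet coder and zlib stands in for the run-length coder; both substitutions are assumptions chosen to keep the example self-contained and exactly reversible.

      # Hybrid coder sketch: lossy approximation + losslessly compressed residual (illustrative only).
      import zlib
      import numpy as np

      def hybrid_encode(image: np.ndarray, step: int = 16):
          lossy = (image // step) * step                    # coarse lossy approximation
          residual = (image - lossy).astype(np.uint8)       # small-valued, highly compressible
          return lossy, zlib.compress(residual.tobytes())

      def hybrid_decode(lossy: np.ndarray, residual_blob: bytes) -> np.ndarray:
          residual = np.frombuffer(zlib.decompress(residual_blob), dtype=np.uint8)
          return lossy + residual.reshape(lossy.shape)

      if __name__ == "__main__":
          rng = np.random.default_rng(1)
          img = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)
          lossy, blob = hybrid_encode(img)
          assert np.array_equal(hybrid_decode(lossy, blob), img)  # exact (lossless) recovery
          print(f"residual stream: {len(blob)} bytes for {img.size} pixels")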

  11. Parallel Algorithm for Wireless Data Compression and Encryption

    Directory of Open Access Journals (Sweden)

    Qin Jiancheng

    2017-01-01

    As wireless networks have limited bandwidth and insecure shared media, data compression and encryption are very useful for the broadcast transport of big data in the IoT (Internet of Things). However, traditional compression and encryption techniques are neither competent nor efficient. To solve this problem, this paper presents a combined parallel algorithm named the "CZ algorithm", which can compress and encrypt big data efficiently. The CZ algorithm uses a parallel pipeline, mixes the coding of compression and encryption, and supports a data window up to 1 TB (or larger). Moreover, the CZ algorithm can encrypt the big data as a chaotic cryptosystem without decreasing the compression speed. Meanwhile, a shareware named "ComZip" has been developed based on the CZ algorithm. The experimental results show that ComZip on a 64-bit system can achieve a better compression ratio than WinRAR and 7-zip, and it can be faster than 7-zip in big data compression. In addition, ComZip encrypts the big data without extra consumption of computing resources.

  12. Reducing acquisition time in clinical MRI by data undersampling and compressed sensing reconstruction

    Science.gov (United States)

    Hollingsworth, Kieren Grant

    2015-11-01

    MRI is often the most sensitive or appropriate technique for important measurements in clinical diagnosis and research, but lengthy acquisition times limit its use due to cost and considerations of patient comfort and compliance. Once an image field of view and resolution is chosen, the minimum scan acquisition time is normally fixed by the amount of raw data that must be acquired to meet the Nyquist criteria. Recently, there has been research interest in using the theory of compressed sensing (CS) in MR imaging to reduce scan acquisition times. The theory argues that if our target MR image is sparse, having signal information in only a small proportion of pixels (like an angiogram), or if the image can be mathematically transformed to be sparse then it is possible to use that sparsity to recover a high definition image from substantially less acquired data. This review starts by considering methods of k-space undersampling which have already been incorporated into routine clinical imaging (partial Fourier imaging and parallel imaging), and then explains the basis of using compressed sensing in MRI. The practical considerations of applying CS to MRI acquisitions are discussed, such as designing k-space undersampling schemes, optimizing adjustable parameters in reconstructions and exploiting the power of combined compressed sensing and parallel imaging (CS-PI). A selection of clinical applications that have used CS and CS-PI prospectively are considered. The review concludes by signposting other imaging acceleration techniques under present development before concluding with a consideration of the potential impact and obstacles to bringing compressed sensing into routine use in clinical MRI.
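
    As an illustration of the reconstruction idea described above (not a clinical MRI pipeline), the sketch below recovers a sparse 1-D signal from a random subset of its Fourier samples by iterative soft-thresholding (ISTA); the sampling fraction, regularization weight and iteration count are arbitrary assumptions.

      # Compressed-sensing recovery sketch: undersampled Fourier data + l1 sparsity via ISTA.
      import numpy as np

      def ista_recover(y, mask, n, lam=0.05, iters=300):
          """Solve min_x 0.5*||M F x - y||^2 + lam*||x||_1 with ISTA (unit step size)."""
          x = np.zeros(n, dtype=complex)
          for _ in range(iters):
              # Gradient of the data term: F^H (M F x - y), with a unitary FFT.
              resid = mask * np.fft.fft(x, norm="ortho") - y
              x = x - np.fft.ifft(resid, norm="ortho")
              # Complex soft threshold (shrink the magnitude, keep the phase).
              x = np.exp(1j * np.angle(x)) * np.maximum(np.abs(x) - lam, 0.0)
          return x

      if __name__ == "__main__":
          rng = np.random.default_rng(2)
          n, k, m = 256, 8, 80                               # signal length, sparsity, samples kept
          x_true = np.zeros(n)
          x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
          mask = np.zeros(n)
          mask[rng.choice(n, m, replace=False)] = 1.0        # "k-space" undersampling pattern
          y = mask * np.fft.fft(x_true, norm="ortho")
          x_hat = ista_recover(y, mask, n)
          print("relative error:", np.linalg.norm(x_hat.real - x_true) / np.linalg.norm(x_true))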

  13. A cascadable circular concentrator with parallel compressed structure for increasing the energy density

    Science.gov (United States)

    Ku, Nai-Lun; Chen, Yi-Yung; Hsieh, Wei-Che; Whang, Allen Jong-Woei

    2012-02-01

    Due to the energy crisis, the principle of green energy has gained popularity, leading to increasing interest in renewable sources such as solar energy. Collecting sunlight for indoor illumination has therefore become our ultimate target. With growing environmental awareness, we use natural light as the light source and have devoted ourselves to developing a solar collecting system. The Natural Light Guiding System includes three parts: collecting, transmitting, and lighting. The idea behind our solar collecting system design is to combine buildings with a set of collecting modules, so it can be used anywhere sunlight directly impinges on buildings equipped with collecting elements. While collecting the sunlight with high efficiency, we can transmit it indoors over a shorter distance through a light pipe to where the light is needed. We propose a novel design that includes a disk-type collective lens module. With this design, the incident light and exit light are kept parallel and compressed; every output beam becomes compressed within the proposed optical structure. In this way, we can increase the light compression ratio, obtain better efficiency, and make the energy distribution more uniform for indoor illumination. Defining "KPI" as a performance index of light density, in lm/(mm)², the simulation results show that the proposed concentrator reaches 40,000,000 KPI, much better than the 800,000 KPI measured from traditional concentrators.

  14. NRGC: a novel referential genome compression algorithm.

    Science.gov (United States)

    Saha, Subrata; Rajasekaran, Sanguthevar

    2016-11-15

    Next-generation sequencing techniques produce millions to billions of short reads. The procedure is not only very cost-effective but can also be done in a laboratory environment. State-of-the-art sequence assemblers then construct the whole genomic sequence from these reads. Current cutting-edge computing technology makes it possible to build genomic sequences from billions of reads at minimal cost and time. As a consequence, we are seeing an explosion of biological sequences in recent times. In turn, the cost of storing the sequences in physical memory or transmitting them over the internet is becoming a major bottleneck for research and future medical applications. Data compression techniques are one of the most important remedies in this context. We need suitable data compression algorithms that can exploit the inherent structure of biological sequences. Although standard data compression algorithms are prevalent, they are not suitable for compressing biological sequencing data effectively. In this article, we propose a novel referential genome compression algorithm (NRGC) to compress genomic sequences effectively and efficiently. We have performed rigorous experiments to evaluate NRGC on a set of real human genomes. The simulation results show that our algorithm is indeed an effective genome compression algorithm that performs better than the best-known algorithms in most cases. Compression and decompression times are also very impressive. The implementations are freely available for non-commercial purposes and can be downloaded from: http://www.engr.uconn.edu/~rajasek/NRGC.zip. Contact: rajasek@engr.uconn.edu.

  15. Utility of a simple lighting device to improve chest compressions learning.

    Science.gov (United States)

    González-Calvete, L; Barcala-Furelos, R; Moure-González, J D; Abelairas-Gómez, C; Rodríguez-Núñez, A

    2017-11-01

    The recommendations on cardiopulmonary resuscitation (CPR) emphasize the quality of the manoeuvres, especially chest compressions (CC). Audiovisual feedback devices could improve the quality of the CC during CPR. The aim of this study was to evaluate the usefulness of a simple lighting device as a visual aid during CPR on a mannequin. Twenty-two paediatricians who attended an accredited paediatric CPR course performed, in random order, 2 min of CPR on a mannequin without and with the help of a simple lighting device that flashes at a frequency of 100 cycles per minute. The following CC variables were analyzed using a validated compression quality meter (CPRmeter®): depth, decompression, rate, CPR time and percentage of compressions. With the lighting device, participants increased average quality (60.23±54.50 vs. 79.24±9.80%; P=.005), percentage in target depth (48.86±42.67 vs. 72.95±20.25%; P=.036) and rate (35.82±37.54 vs. 67.09±31.95%; P=.024). A simple light device that flashes at the recommended frequency improves the quality of CC performed by paediatric residents on a mannequin. The usefulness of this CPR aid system should be assessed in real patients.

  16. Adiabatic liquid piston compressed air energy storage

    Energy Technology Data Exchange (ETDEWEB)

    Petersen, Tage [Danish Technological Institute, Aarhus (Denmark); Elmegaard, B. [Technical Univ. of Denmark. DTU Mechanical Engineering, Kgs. Lyngby (Denmark); Schroeder Pedersen, A. [Technical Univ. of Denmark. DTU Energy Conversion, Risoe Campus, Roskilde (Denmark)

    2013-01-15

    This project investigates the potential of a Compressed Air Energy Storage system (CAES system). CAES systems are used to store mechanical energy in the form of compressed air. The systems use electricity to drive the compressor at times of low electricity demand with the purpose of converting the mechanical energy into electricity at times of high electricity demand. Two such systems are currently in operation; one in Germany (Huntorf) and one in the USA (Macintosh, Alabama). In both cases, an underground cavern is used as a pressure vessel for the storage of the compressed air. Both systems are in the range of 100 MW electrical power output with several hours of production stored as compressed air. In this range, enormous volumes are required, which make underground caverns the only economical way to design the pressure vessel. Both systems use axial turbine compressors to compress air when charging the system. The compression leads to a significant increase in temperature, and the heat generated is dumped into the ambient. This energy loss results in a low efficiency of the system, and when expanding the air, the expansion leads to a temperature drop reducing the mechanical output of the expansion turbines. To overcome this, fuel is burned to heat up the air prior to expansion. The fuel consumption causes a significant cost for the storage. Several suggestions have been made to store compression heat for later use during expansion and thereby avoid the use of fuel (so called Adiabatic CAES units), but no such units are in operation at present. The CAES system investigated in this project uses a different approach to avoid compression heat loss. The system uses a pre-compressed pressure vessel full of air. A liquid is pumped into the bottom of the vessel when charging and the same liquid is withdrawn through a turbine when discharging. In this case, the liquid works effectively as a piston compressing the gas in the vessel, hence the name 'liquid piston'.

  17. Ignition target and laser-plasma instabilities

    International Nuclear Information System (INIS)

    Laffite, S.; Loiseau, P.

    2010-01-01

    For the first time, indirect-drive ignition targets have been designed with the constraint of limiting laser-plasma instabilities. The amplification of these instabilities is directly proportional to the luminous flux density, and therefore to the size of the focal spots. This study shows that increasing the size of the focal spots does not reduce the linear amplification gains proportionally, because the global optimization of the target implies changes in the hydrodynamical conditions that in turn affect the value of the amplification gain. The design of the target is a two-step approach: the first step aims at ensuring uniform irradiation and compression of the target, and requires information concerning the laser focal spots, the dimensions of the hohlraum, the inert gas contained in it, and the materials of the wall. The second step is an optimization whose aim is to reduce the risk of laser-plasma instabilities. This optimization is made through simulations of the amplification gains of stimulated Raman and Brillouin backscattering. This method has allowed us to design an optimized target for a rugby-shaped hohlraum. (A.C.)

  18. Sudden viscous dissipation in compressing plasma turbulence

    Science.gov (United States)

    Davidovits, Seth; Fisch, Nathaniel

    2015-11-01

    Compression of a turbulent plasma or fluid can cause amplification of the turbulent kinetic energy, if the compression is fast compared to the turnover and viscous dissipation times of the turbulent eddies. The consideration of compressing turbulent flows in inviscid fluids has been motivated by the suggestion that amplification of turbulent kinetic energy occurred in experiments at the Weizmann Institute of Science Z-Pinch. We demonstrate a sudden viscous dissipation mechanism whereby this amplified turbulent kinetic energy is rapidly converted into thermal energy, which further increases the temperature, feeding back to further enhance the dissipation. Application of this mechanism in compression experiments may be advantageous, if the plasma can be kept comparatively cold during much of the compression, reducing radiation and conduction losses, until the plasma suddenly becomes hot. This work was supported by DOE through contract 67350-9960 (Prime # DOE DE-NA0001836) and by the DTRA.

  19. Advanced and standardized evaluation of neurovascular compression syndromes

    Science.gov (United States)

    Hastreiter, Peter; Vega Higuera, Fernando; Tomandl, Bernd; Fahlbusch, Rudolf; Naraghi, Ramin

    2004-05-01

    Caused by contact between vascular structures and the root entry or exit zone of cranial nerves, neurovascular compression syndromes are associated with different neurological diseases (trigeminal neuralgia, hemifacial spasm, vertigo, glossopharyngeal neuralgia) and show a relation with essential arterial hypertension. As presented previously, the semi-automatic segmentation and 3D visualization of strongly T2-weighted MR volumes has proven to be an effective strategy for a better spatial understanding prior to operative microvascular decompression. After explicit segmentation of coarse structures, the tiny target nerves and vessels contained in the area of cerebrospinal fluid are segmented implicitly using direct volume rendering. However, with this strategy the delineation of vessels in the vicinity of the brainstem and those at the border of the segmented CSF subvolume is critical. Therefore, we suggest registration with MR angiography and introduce consecutive fusion after semi-automatic labeling of the vascular information. Additionally, we present an approach for automatic 3D visualization and video generation based on predefined flight paths. Thereby, a standardized evaluation of the fused image data is supported and the visualization results are optimally prepared for intraoperative application. Overall, our new strategy contributes to a significantly improved 3D representation and evaluation of vascular compression syndromes. Its value for diagnosis and surgery is demonstrated with various clinical examples.

  20. Compressed Baryonic Matter of Astrophysics

    OpenAIRE

    Guo, Yanjun; Xu, Renxin

    2013-01-01

    Baryonic matter in the core of a massive and evolved star is compressed significantly to form a supra-nuclear object, and compressed baryonic matter (CBM) is then produced after the supernova. The state of cold matter at a few times nuclear density is pedagogically reviewed, with significant attention paid to a possible quark-cluster state conjectured from an astrophysical point of view.

  1. Efficient access of compressed data

    International Nuclear Information System (INIS)

    Eggers, S.J.; Shoshani, A.

    1980-06-01

    A compression technique is presented that allows a high degree of compression but requires only logarithmic access time. The technique is a constant suppression scheme, and is most applicable to stable databases whose distribution of constants is fairly clustered. Furthermore, the repeated use of the technique permits the suppression of a multiple number of different constants. Of particular interest is the application of the constant suppression technique to databases the composite key of which is made up of an incomplete cross product of several attribute domains. The scheme for compressing the full cross product composite key is well known. This paper, however, also handles the general, incomplete case by applying the constant suppression technique in conjunction with a composite key suppression scheme
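
    A minimal sketch of the constant-suppression idea with logarithmic access time: runs of a frequent constant are replaced by run headers, the remaining values are stored densely, and element lookup is a binary search over the headers. The exact data layout of the 1980 paper is not reproduced; this only illustrates the principle.

      # Constant suppression with O(log n) element access (illustrative only).
      import bisect

      class ConstantSuppressed:
          def __init__(self, values, constant):
              self.constant = constant
              self.run_starts = []   # start index (in the original array) of each constant run
              self.run_ends = []     # one past the end of each constant run
              self.kept = []         # non-constant values, in order
              self.kept_starts = []  # original index of each kept value
              i = 0
              while i < len(values):
                  if values[i] == constant:
                      j = i
                      while j < len(values) and values[j] == constant:
                          j += 1
                      self.run_starts.append(i)
                      self.run_ends.append(j)
                      i = j
                  else:
                      self.kept_starts.append(i)
                      self.kept.append(values[i])
                      i += 1

          def __getitem__(self, idx):
              r = bisect.bisect_right(self.run_starts, idx) - 1
              if r >= 0 and idx < self.run_ends[r]:
                  return self.constant                       # inside a suppressed run
              k = bisect.bisect_left(self.kept_starts, idx)  # logarithmic lookup of a kept value
              return self.kept[k]

      if __name__ == "__main__":
          data = [0, 0, 0, 7, 0, 0, 5, 5, 0, 0, 0, 0, 9]
          cs = ConstantSuppressed(data, constant=0)
          assert [cs[i] for i in range(len(data))] == data
          print(f"stored {len(cs.kept)} values + {len(cs.run_starts)} run headers for {len(data)} entries")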

  2. Numerical Investigation of a Gasoline-Like Fuel in a Heavy-Duty Compression Ignition Engine Using Global Sensitivity Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Pal, Pinaki; Probst, Daniel; Pei, Yuanjiang; Zhang, Yu; Traver, Michael; Cleary, David; Som, Sibendu

    2017-03-28

    Fuels in the gasoline auto-ignition range (Research Octane Number (RON) > 60) have been demonstrated to be effective alternatives to diesel fuel in compression ignition engines. Such fuels allow more time for mixing with oxygen before combustion starts, owing to longer ignition delay. Moreover, by controlling fuel injection timing, it can be ensured that the in-cylinder mixture is “premixed enough” before combustion occurs to prevent soot formation while remaining “sufficiently inhomogeneous” in order to avoid excessive heat release rates. Gasoline compression ignition (GCI) has the potential to offer diesel-like efficiency at a lower cost and can be achieved with fuels such as low-octane straight run gasoline which require significantly less processing in the refinery compared to today’s fuels. To aid the design and optimization of a compression ignition (CI) combustion system using such fuels, a global sensitivity analysis (GSA) was conducted to understand the relative influence of various design parameters on efficiency, emissions and heat release rate. The design parameters included injection strategies, exhaust gas recirculation (EGR) fraction, temperature and pressure at intake valve closure and injector configuration. These were varied simultaneously to achieve various targets of ignition timing, combustion phasing, overall burn duration, emissions, fuel consumption, peak cylinder pressure and maximum pressure rise rate. The baseline case was a three-dimensional closed-cycle computational fluid dynamics (CFD) simulation with a sector mesh at medium load conditions. Eleven design parameters were considered and ranges of variation were prescribed to each of these. These input variables were perturbed in their respective ranges using the Monte Carlo (MC) method to generate a set of 256 CFD simulations and the targets were calculated from the simulation results. GSA was then applied as a screening tool to identify the input parameters having the most significant influence on these targets.

  3. On Compression of a Heavy Compressible Layer of an Elastoplastic or Elastoviscoplastic Medium

    Science.gov (United States)

    Kovtanyuk, L. V.; Panchenko, G. L.

    2017-11-01

    The problem of deformation of a horizontal plane layer of a compressible material is solved in the framework of the theory of small strains. The upper boundary of the layer is under the action of shear and compressing loads, and the no-slip condition is satisfied on the lower boundary of the layer. The loads increase in absolute value with time, then become constant, and then decrease to zero. Various plasticity conditions are considered with regard to the material compressibility, namely, the Coulomb-Mohr plasticity condition, the von Mises-Schleicher plasticity condition, and the same conditions with the viscous properties of the material taken into account. To solve the system of partial differential equations for the components of irreversible strains, a finite-difference scheme is developed for a spatial domain increasing with time. The laws of motion of elastoplastic boundaries are presented, the stresses, strains, rates of strain, and displacements are calculated, and the residual stresses and strains are found.

  4. Burn performance of deuterium-tritium, deuterium-deuterium, and catalyzed deuterium ICF targets

    International Nuclear Information System (INIS)

    Harris, D.B.; Blue, T.E.

    1983-01-01

    The University of Illinois hydrodynamic burn code, AFBURN, has been used to model the performance of homogeneous D-T, D2, and catalyzed-deuterium ICF targets. Yields and gains are compared for power-producing targets. AFBURN is a one-dimensional, two-temperature, single-fluid hydrodynamic code with non-local fusion product energy deposition. The initial conditions for AFBURN are uniformly compressed targets with central hot spots. AFBURN predicts that maximum D2 target gains are obtained for target ρR and spark ρR about seven times larger than the target and spark ρR for maximum D-T target gains, that the maximum D2 target gain is approximately one third of the maximum D-T target gain, and that the corresponding yields are approximately equal. By recycling tritium and 3He from previous targets, D2 target performance can be improved by about 10%. (author)

  5. Compressible turbulent flows: aspects of prediction and analysis

    Energy Technology Data Exchange (ETDEWEB)

    Friedrich, R. [TU Muenchen, Garching (Germany). Fachgebiet Stroemungsmechanik

    2007-03-15

    Compressible turbulent flows are an important element of high-speed flight. Boundary layers developing along fuselage and wings of an aircraft and along engine compressor and turbine blades are compressible and mostly turbulent. The high-speed flow around rockets and through rocket nozzles involves compressible turbulence and flow separation. Turbulent mixing and combustion in scramjet engines is another example where compressibility dominates the flow physics. Although compressible turbulent flows have attracted researchers since the fifties of the last century, they are not completely understood. Especially interactions between compressible turbulence and combustion lead to challenging, yet unsolved problems. Direct numerical simulation (DNS) and large-eddy simulation (LES) represent modern powerful research tools which allow to mimic such flows in great detail and to analyze underlying physical mechanisms, even those which cannot be accessed by the experiment. The present lecture provides a short description of these tools and some of their numerical characteristics. It then describes DNS and LES results of fully-developed channel and pipe flow and highlights effects of compressibility on the turbulence structure. The analysis of pressure fluctuations in such flows with isothermal cooled walls leads to the conclusion that the pressure-strain correlation tensor decreases in the wall layer and that the turbulence anisotropy increases, since the mean density falls off relative to the incompressible flow case. Similar increases in turbulence anisotropy due to compressibility are observed in inert and reacting temporal mixing layers. The nature of the pressure fluctuations is however two-facetted. While inert compressible mixing layers reveal wave-propagation effects in the pressure and density fluctuations, compressible reacting mixing layers seem to generate pressure fluctuations that are controlled by the time-rate of change of heat release and mean density

  6. A 172 μW Compressively Sampled Photoplethysmographic (PPG) Readout ASIC With Heart Rate Estimation Directly From Compressively Sampled Data.

    Science.gov (United States)

    Pamula, Venkata Rajesh; Valero-Sarmiento, Jose Manuel; Yan, Long; Bozkurt, Alper; Hoof, Chris Van; Helleputte, Nick Van; Yazicioglu, Refet Firat; Verhelst, Marian

    2017-06-01

    A compressive sampling (CS) photoplethysmographic (PPG) readout with embedded feature extraction to estimate heart rate (HR) directly from compressively sampled data is presented. It integrates a low-power analog front end together with a digital back end to perform feature extraction and estimate the average HR over a 4 s interval directly from compressively sampled PPG data. The application-specific integrated circuit (ASIC) supports a uniform sampling mode (1x compression) as well as CS modes with compression ratios of 8x, 10x, and 30x. CS is performed through nonuniform subsampling of the PPG signal, while feature extraction is performed using least-squares spectral fitting through the Lomb-Scargle periodogram. The ASIC consumes 172 μW of power from a 1.2 V supply while reducing the relative LED driver power consumption by up to 30 times without significant loss of relevant information for accurate HR estimation.
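
    A minimal sketch of the feature-extraction step described above: estimating heart rate from non-uniformly subsampled PPG data with a Lomb-Scargle periodogram. The synthetic PPG waveform, sampling pattern and frequency grid are assumptions for the demo, not parameters of the ASIC.

      # Heart-rate estimation from non-uniform samples via Lomb-Scargle (illustrative only).
      import numpy as np
      from scipy.signal import lombscargle

      rng = np.random.default_rng(3)
      fs, duration, hr_hz = 128.0, 4.0, 1.3            # 4 s window, true HR = 78 bpm (assumed)
      t_full = np.arange(0, duration, 1 / fs)
      ppg = np.sin(2 * np.pi * hr_hz * t_full) + 0.2 * rng.normal(size=t_full.size)

      # Keep ~1/10 of the samples at random (non-uniform subsampling, "10x compression").
      keep = np.sort(rng.choice(t_full.size, size=t_full.size // 10, replace=False))
      t, x = t_full[keep], ppg[keep]
      x = x - x.mean()

      freqs_hz = np.linspace(0.5, 3.5, 600)            # physiological HR band, 30-210 bpm
      pgram = lombscargle(t, x, 2 * np.pi * freqs_hz)  # lombscargle expects angular frequencies
      hr_est = freqs_hz[np.argmax(pgram)] * 60.0
      print(f"estimated heart rate: {hr_est:.1f} bpm")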

  7. Fluffy dust forms icy planetesimals by static compression

    Science.gov (United States)

    Kataoka, Akimasa; Tanaka, Hidekazu; Okuzumi, Satoshi; Wada, Koji

    2013-09-01

    Context. Several barriers have been proposed in planetesimal formation theory: bouncing, fragmentation, and radial drift problems. Understanding the structure evolution of dust aggregates is a key in planetesimal formation. Dust grains become fluffy by coagulation in protoplanetary disks. However, once they are fluffy, they are not sufficiently compressed by collisional compression to form compact planetesimals. Aims: We aim to reveal the pathway of dust structure evolution from dust grains to compact planetesimals. Methods: Using the compressive strength formula, we analytically investigate how fluffy dust aggregates are compressed by static compression due to ram pressure of the disk gas and self-gravity of the aggregates in protoplanetary disks. Results: We reveal the pathway of the porosity evolution from dust grains via fluffy aggregates to form planetesimals, circumventing the barriers in planetesimal formation. The aggregates are compressed by the disk gas to a density of 10⁻³ g/cm³ in coagulation, which is more compact than is the case with collisional compression. Then, they are compressed more by self-gravity to 10⁻¹ g/cm³ when the radius is 10 km. Although the gas compression decelerates the growth, the aggregates grow rapidly enough to avoid the radial drift barrier when the orbital radius is ≲6 AU in a typical disk. Conclusions: We propose a fluffy dust growth scenario from grains to planetesimals. It enables icy planetesimal formation in a wide range beyond the snowline in protoplanetary disks. This result proposes a concrete initial condition of planetesimals for the later stages of the planet formation.

  8. Shock compression of synthetic opal

    International Nuclear Information System (INIS)

    Inoue, A; Okuno, M; Okudera, H; Mashimo, T; Omurzak, E; Katayama, S; Koyano, M

    2010-01-01

    The structural change of synthetic opal under shock-wave compression up to 38.1 GPa has been investigated using SEM, the X-ray diffraction method (XRD), and infrared (IR) and Raman spectroscopies. The obtained information may indicate that dehydration and polymerization of surface silanol due to the high shock and residual temperatures are very important factors in the structural evolution of synthetic opal under shock compression. Synthetic opal loses opalescence at shock pressures of 10.9 and 18.4 GPa. At 18.4 GPa, dehydration and polymerization of surface silanol and transformation of the network structure may occur simultaneously. The 4-membered rings of TO4 tetrahedra in the as-synthesized opal may be relaxed to larger rings, such as 6-membered rings, by the high residual temperature. Therefore, the residual temperature may be significantly high even at 18.4 GPa of shock compression. At 23.9 GPa, the opal sample recovered its opalescence; the origin of this opalescence may be the layer structure produced by shock compression. Finally, the sample fuses at the very high residual temperature reached at 38.1 GPa and its structure approaches that of fused SiO2 glass. However, internal silanol groups still remain even at 38.1 GPa.

  9. Shock compression of synthetic opal

    Science.gov (United States)

    Inoue, A.; Okuno, M.; Okudera, H.; Mashimo, T.; Omurzak, E.; Katayama, S.; Koyano, M.

    2010-03-01

    The structural change of synthetic opal under shock-wave compression up to 38.1 GPa has been investigated using SEM, the X-ray diffraction method (XRD), and infrared (IR) and Raman spectroscopies. The obtained information may indicate that dehydration and polymerization of surface silanol due to the high shock and residual temperatures are very important factors in the structural evolution of synthetic opal under shock compression. Synthetic opal loses opalescence at shock pressures of 10.9 and 18.4 GPa. At 18.4 GPa, dehydration and polymerization of surface silanol and transformation of the network structure may occur simultaneously. The 4-membered rings of TO4 tetrahedra in the as-synthesized opal may be relaxed to larger rings, such as 6-membered rings, by the high residual temperature. Therefore, the residual temperature may be significantly high even at 18.4 GPa of shock compression. At 23.9 GPa, the opal sample recovered its opalescence; the origin of this opalescence may be the layer structure produced by shock compression. Finally, the sample fuses at the very high residual temperature reached at 38.1 GPa and its structure approaches that of fused SiO2 glass. However, internal silanol groups still remain even at 38.1 GPa.

  10. Shock compression of synthetic opal

    Energy Technology Data Exchange (ETDEWEB)

    Inoue, A; Okuno, M; Okudera, H [Department of Earth Sciences, Kanazawa University Kanazawa, Ishikawa, 920-1192 (Japan); Mashimo, T; Omurzak, E [Shock Wave and Condensed Matter Research Center, Kumamoto University, Kumamoto, 860-8555 (Japan); Katayama, S; Koyano, M, E-mail: okuno@kenroku.kanazawa-u.ac.j [JAIST, Nomi, Ishikawa, 923-1297 (Japan)

    2010-03-01

    The structural change of synthetic opal under shock-wave compression up to 38.1 GPa has been investigated using SEM, the X-ray diffraction method (XRD), and infrared (IR) and Raman spectroscopies. The obtained information may indicate that dehydration and polymerization of surface silanol due to the high shock and residual temperatures are very important factors in the structural evolution of synthetic opal under shock compression. Synthetic opal loses opalescence at shock pressures of 10.9 and 18.4 GPa. At 18.4 GPa, dehydration and polymerization of surface silanol and transformation of the network structure may occur simultaneously. The 4-membered rings of TO4 tetrahedra in the as-synthesized opal may be relaxed to larger rings, such as 6-membered rings, by the high residual temperature. Therefore, the residual temperature may be significantly high even at 18.4 GPa of shock compression. At 23.9 GPa, the opal sample recovered its opalescence; the origin of this opalescence may be the layer structure produced by shock compression. Finally, the sample fuses at the very high residual temperature reached at 38.1 GPa and its structure approaches that of fused SiO2 glass. However, internal silanol groups still remain even at 38.1 GPa.

  11. Football Equipment Removal Improves Chest Compression and Ventilation Efficacy.

    Science.gov (United States)

    Mihalik, Jason P; Lynall, Robert C; Fraser, Melissa A; Decoster, Laura C; De Maio, Valerie J; Patel, Amar P; Swartz, Erik E

    2016-01-01

    Airway access recommendations in potential catastrophic spine injury scenarios advocate for facemask removal, while keeping the helmet and shoulder pads in place for ensuing emergency transport. The anecdotal evidence to support these recommendations assumes that maintaining the helmet and shoulder pads assists inline cervical stabilization and that facial access guarantees adequate airway access. Our objective was to determine the effect of football equipment interference on performing chest compressions and delivering adequate ventilations on patient simulators. We hypothesized that conditions with more football equipment would decrease chest compression and ventilation efficacy. Thirty-two certified athletic trainers were block randomized to participate in six different compression conditions and six different ventilation conditions using human patient simulators. Data for chest compression (mean compression depth, compression rate, percentage of correctly released compressions, and percentage of adequate compressions) and ventilation (total ventilations, mean ventilation volume, and percentage of ventilations delivering adequate volume) were analyzed across all conditions. The fully equipped condition resulted in the lowest mean compression depth (F(5,154) = 22.82; P ...). Emergency medical personnel should remove the helmet and shoulder pads from all football athletes who require cardiopulmonary resuscitation, while maintaining appropriate cervical spine stabilization when injury is suspected. Further research is needed to confirm our findings supporting full equipment removal for chest compression and ventilation delivery.

  12. Data compression considerations for detectors with local intelligence

    International Nuclear Information System (INIS)

    Garcia-Sciveres, M; Wang, X

    2014-01-01

    This note summarizes the outcome of discussions about how data compression considerations apply to tracking detectors with local intelligence. The method for analyzing data compression efficiency is taken from a previous publication and applied to module characteristics from the WIT2014 workshop. We explore local intelligence and coupled layer structures in the language of data compression. In this context the original intelligent tracker concept of correlating hits to find matches of interest and discard others is just a form of lossy data compression. We now explore how these features (intelligence and coupled layers) can be exploited for lossless compression, which could enable full readout at higher trigger rates than previously envisioned, or even triggerless readout.

  13. Efficiency of Compressed Air Energy Storage

    DEFF Research Database (Denmark)

    Elmegaard, Brian; Brix, Wiebke

    2011-01-01

    The simplest type of Compressed Air Energy Storage (CAES) facility would be an adiabatic process consisting only of a compressor, a storage and a turbine, compressing air into a container when storing and expanding when producing. This type of CAES would be adiabatic and would, if the machines were reversible, have a storage efficiency of 100%. However, due to the specific capacity of the storage and the construction materials, in practice the air is cooled during and after compression, making the CAES process diabatic. The cooling involves exergy losses and thus lowers the efficiency of the storage significantly. The efficiency of CAES as an electricity storage may be defined in several ways; we discuss these and find that the exergetic efficiencies of compression, storage and production together determine the efficiency of CAES. In the paper we find that the efficiency of the practical CAES ...

  14. Compressibility, turbulence and high speed flow

    CERN Document Server

    Gatski, Thomas B

    2009-01-01

    This book introduces the reader to the field of compressible turbulence and compressible turbulent flows across a broad speed range through a unique complementary treatment of both the theoretical foundations and the measurement and analysis tools currently used. For the computation of turbulent compressible flows, current methods of averaging and filtering are presented so that the reader is exposed to a consistent development of applicable equation sets for both the mean or resolved fields as well as the transport equations for the turbulent stress field. For the measurement of turbulent compressible flows, current techniques ranging from hot-wire anemometry to PIV are evaluated and limitations assessed. Characterizing dynamic features of free shear flows, including jets, mixing layers and wakes, and wall-bounded flows, including shock-turbulence and shock boundary-layer interactions, obtained from computations, experiments and simulations are discussed. Key features: * Describes prediction methodologies in...

  15. FRC translation into a compression coil

    International Nuclear Information System (INIS)

    Chrien, R.E.

    1985-01-01

    Several features of the problem of FRC translation into a compression coil are considered. First, the magnitude of the guide field is calculated and found to exceed that which would be applied to a flux conserver. Second, energy conservation is applied to FRC translation from a flux conserver into a compression coil. It is found that a significant temperature decrease is required for translation to be energetically possible. The temperature change depends on the external inductance in the compression circuit. An analogous case is that of a compression region composed of a compound magnet; in this case the temperature change depends on the ratio of inner and outer coil radii. Finally, the kinematics of intermediate translation states are calculated using an ''abrupt transition'' model. It is found, in this model, that the FRC must overcome a potential hill during translation, which requires a small initial velocity

  16. Monochromatic x-ray radiography of laser-driven spherical targets using high-energy, picoseconds LFEX laser

    Science.gov (United States)

    Sawada, Hiroshi; Fujioka, S.; Lee, S.; Arikawa, Y.; Shigemori, K.; Nagatomo, H.; Nishimura, H.; Sunahara, A.; Theobald, W.; Perez, F.; Patel, P. K.; Beg, F. N.

    2015-11-01

    Formation of a high-density fusion fuel is essential in both conventional and advanced Inertial Confinement Fusion (ICF) schemes for a self-sustaining fusion process. In cone-guided Fast Ignition (FI), a metal cone is attached to a spherical target to maintain a clear path for the injection of an intense short-pulse ignition laser through the blow-off plasma created when the nanosecond compression lasers drive the target. We have measured the temporal evolution of a compressed deuterated carbon (CD) sphere using 4.5 keV K-alpha radiography with the kilojoule, picosecond LFEX laser at the Institute of Laser Engineering. A 200 μm CD sphere attached to the tip of a Au cone was directly driven by 9 Gekko XII beams with 300 J/beam in a 1.3 ns Gaussian pulse. The LFEX laser irradiated a Ti foil to generate 4.51 keV Ti K-alpha x-rays. By varying the delay between the compression and backlighter lasers, the measured radiograph images show an increase in the areal density of the imploded target. The details of the quantitative analysis used to infer the areal density, and comparisons to hydrodynamics simulations, will be presented. This work was performed with the support and under the auspices of the NIFS Collaboration Research program (NIFS13KUGK072). H.S. was supported by UNR's International Activities Grant program.

  17. Breast compression in mammography: how much is enough?

    Science.gov (United States)

    Poulos, Ann; McLean, Donald; Rickard, Mary; Heard, Robert

    2003-06-01

    The amount of breast compression that is applied during mammography potentially influences image quality and the discomfort experienced. The aim of this study was to determine the relationship between applied compression force, breast thickness, reported discomfort and image quality. Participants were women attending routine breast screening by mammography at BreastScreen New South Wales Central and Eastern Sydney. During the mammographic procedure, an 'extra' craniocaudal (CC) film was taken at a reduced level of compression ranging from 10 to 30 Newtons. Breast thickness measurements were recorded for both the normal and the extra CC film. Details of discomfort experienced, cup size, menstrual status, existing breast pain and breast problems were also recorded. Radiologists were asked to compare the image quality of the normal and manipulated film. The results indicated that 24% of women did not experience a difference in thickness when the compression was reduced. This is an important new finding because the aim of breast compression is to reduce breast thickness. If breast thickness is not reduced when compression force is applied then discomfort is increased with no benefit in image quality. This has implications for mammographic practice when determining how much breast compression is sufficient. Radiologists found a decrease in contrast resolution within the fatty area of the breast between the normal and the extra CC film, confirming a decrease in image quality due to insufficient applied compression force.

  18. Fundamental study of compression for movie files of coronary angiography

    Science.gov (United States)

    Ando, Takekazu; Tsuchiya, Yuichiro; Kodera, Yoshie

    2005-04-01

    When network distribution of movie files is considered, lossy compression movie files with small file sizes could be useful. We chose three kinds of coronary stricture movies with different motion speeds as examination objects: movies with slow, normal, and fast heart rates. MPEG-1, DivX 5.11, WMV9 (Windows Media Video 9), and WMV9-VCM (Windows Media Video 9 - Video Compression Manager) movies were made from the three kinds of AVI-format movies. Five kinds of movies, the four kinds of compressed movies and the uncompressed AVI (used instead of the DICOM format), were evaluated by Thurstone's method. The evaluation factors were sharpness, granularity, contrast, and comprehensive evaluation. In the virtual bradycardia movie, AVI received the best evaluation for all factors except granularity. In the virtual normal movie, the best compression technique differed among the evaluation factors. In the virtual tachycardia movie, MPEG-1 received the best evaluation for all factors except contrast. Which compression format is best depends on the speed of the movie because of differences in the compression algorithms; this is thought to be an effect of the difference in inter-frame compression. Movie compression algorithms use both inter-frame and intra-frame compression. As each compression algorithm influences the image differently, it is necessary to examine the relation between the compression algorithm and our results.

  19. Free-beam soliton self-compression in air

    Science.gov (United States)

    Voronin, A. A.; Mitrofanov, A. V.; Sidorov-Biryukov, D. A.; Fedotov, A. B.; Pugžlys, A.; Panchenko, V. Ya; Shumakova, V.; Ališauskas, S.; Baltuška, A.; Zheltikov, A. M.

    2018-02-01

    We identify a physical scenario whereby soliton transients generated in freely propagating laser beams within the regions of anomalous dispersion in air can be compressed as a part of their free-beam spatiotemporal evolution to yield few-cycle mid- and long-wavelength-infrared field waveforms, whose peak power is substantially higher than the peak power of the input pulses. We show that this free-beam soliton self-compression scenario does not require ionization or laser-induced filamentation, enabling high-throughput self-compression of mid- and long-wavelength-infrared laser pulses within a broad range of peak powers from tens of gigawatts up to the terawatt level. We also demonstrate that this method of pulse compression can be extended to long-range propagation, providing self-compression of high-peak-power laser pulses in atmospheric air within propagation ranges as long as hundreds of meters, suggesting new ways towards longer-range standoff detection and remote sensing.

  20. Interactive computer graphics applications for compressible aerodynamics

    Science.gov (United States)

    Benson, Thomas J.

    1994-01-01

    Three computer applications have been developed to solve inviscid compressible fluids problems using interactive computer graphics. The first application is a compressible flow calculator which solves for isentropic flow, normal shocks, and oblique shocks or centered expansions produced by two dimensional ramps. The second application couples the solutions generated by the first application to a more graphical presentation of the results to produce a desk top simulator of three compressible flow problems: 1) flow past a single compression ramp; 2) flow past two ramps in series; and 3) flow past two opposed ramps. The third application extends the results of the second to produce a design tool which solves for the flow through supersonic external or mixed compression inlets. The applications were originally developed to run on SGI or IBM workstations running GL graphics. They are currently being extended to solve additional types of flow problems and modified to operate on any X-based workstation.
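
    For reference, a sketch of the standard relations such a compressible-flow calculator evaluates: isentropic stagnation ratios and normal-shock jumps for a calorically perfect gas. The formulas are the usual textbook ones; γ = 1.4 (air) is an assumption, and this is not the application's own code.

      # Compressible-flow relations sketch (calorically perfect gas, illustrative only).
      import math

      GAMMA = 1.4  # assumed ratio of specific heats for air

      def isentropic_ratios(mach, g=GAMMA):
          """Return (T0/T, p0/p) for a given Mach number."""
          t_ratio = 1.0 + 0.5 * (g - 1.0) * mach**2
          return t_ratio, t_ratio ** (g / (g - 1.0))

      def normal_shock(m1, g=GAMMA):
          """Return (M2, p2/p1, T2/T1) across a normal shock with upstream Mach m1 > 1."""
          m2 = math.sqrt((1.0 + 0.5 * (g - 1.0) * m1**2) / (g * m1**2 - 0.5 * (g - 1.0)))
          p_ratio = 1.0 + 2.0 * g / (g + 1.0) * (m1**2 - 1.0)
          t_ratio = p_ratio * (2.0 + (g - 1.0) * m1**2) / ((g + 1.0) * m1**2)
          return m2, p_ratio, t_ratio

      if __name__ == "__main__":
          print("M = 2 isentropic T0/T, p0/p:", isentropic_ratios(2.0))
          print("M1 = 2 normal shock M2, p2/p1, T2/T1:", normal_shock(2.0))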

  1. Adiabatic compression of ion rings

    International Nuclear Information System (INIS)

    Larrabee, D.A.; Lovelace, R.V.

    1982-01-01

    A study has been made of the compression of collisionless ion rings in an increasing external magnetic field, B_e = ẑ B_e(t), by numerically implementing a previously developed kinetic theory of ring compression. The theory is general in that there is no limitation on the ring geometry or the compression ratio, λ ≡ B_e(final)/B_e(initial) ≥ 1. However, the motion of a single particle in an equilibrium is assumed to be completely characterized by its energy H and canonical angular momentum P_θ, with the absence of a third constant of the motion. The present computational work assumes that plasma currents are negligible, as is appropriate for a low-temperature collisional plasma. For a variety of initial ring geometries and initial distribution functions (having a single value of P_θ), it is found that the parameters for "fat", small-aspect-ratio rings follow general scaling laws over a large range of compression ratios, 1 < λ < 10³: the ring radius varies as λ^(-1/2); the average single-particle energy as λ^0.72; the root-mean-square energy spread as λ^1.1; and the total current as λ^0.79. The field reversal parameter is found to saturate at values typically between 2 and 3. For large compression ratios the current density is found to "hollow out". This hollowing tends to improve the interchange stability of an embedded low-β plasma. The implications of these scaling laws for fusion reactor systems are discussed.

  2. POLYCOMP: Efficient and configurable compression of astronomical timelines

    Science.gov (United States)

    Tomasi, M.

    2016-07-01

    This paper describes the implementation of polycomp, an open-source, publicly available program for compressing one-dimensional data series in tabular format. The program is particularly suited for compressing smooth, noiseless streams of data such as pointing information, as one of the algorithms it implements applies a combination of least-squares polynomial fitting and discrete Chebyshev transforms that is able to achieve a compression ratio Cr up to ≈ 40 in the examples discussed in this work. This performance comes at the expense of a loss of information, whose upper bound is configured by the user. I show two areas in which the usage of polycomp is interesting. In the first example, I compress the ephemeris table of an astronomical object (Ganymede), obtaining Cr ≈ 20, with a compression error on the x, y, z coordinates smaller than 1 m. In the second example, I compress the publicly available timelines recorded by the Low Frequency Instrument (LFI), an array of microwave radiometers onboard the ESA Planck spacecraft. The compression reduces the needed storage from ∼ 6.5 TB to ≈ 0.75 TB (Cr ≈ 9), thus making them small enough to be kept on a portable hard drive.
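
    A sketch of the lossy polynomial-fit idea behind polycomp (not its actual file format or API): fit a smooth timeline chunk by chunk with Chebyshev polynomials, keep only the coefficients, and check that the reconstruction error stays below a user-chosen bound. Chunk size, degree and tolerance are arbitrary assumptions.

      # Chunked Chebyshev-fit compression of a smooth timeline (illustrative only).
      import numpy as np
      from numpy.polynomial import chebyshev as C

      def compress_timeline(t, y, chunk=256, degree=6, max_abs_err=1e-3):
          chunks = []
          for i in range(0, len(t), chunk):
              ts, ys = t[i:i + chunk], y[i:i + chunk]
              # Rescale the chunk's time axis to [-1, 1] for a well-conditioned fit.
              u = 2.0 * (ts - ts[0]) / (ts[-1] - ts[0]) - 1.0
              coeffs = C.chebfit(u, ys, degree)
              err = np.max(np.abs(C.chebval(u, coeffs) - ys))
              if err > max_abs_err:
                  raise ValueError(f"chunk starting at {i}: error {err:.2e} exceeds bound")
              chunks.append((ts[0], ts[-1], coeffs))   # keep only the endpoints and coefficients
          return chunks

      if __name__ == "__main__":
          t = np.linspace(0.0, 100.0, 10_000)
          pointing = np.sin(0.05 * t) + 0.01 * t       # smooth, noiseless synthetic "pointing" stream
          enc = compress_timeline(t, pointing)
          n_coeffs = sum(len(c[2]) for c in enc)
          print(f"compression ratio ~ {len(t) / n_coeffs:.1f}x with bounded error")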

  3. Graph Compression by BFS

    Directory of Open Access Journals (Sweden)

    Alberto Apostolico

    2009-08-01

    The Web Graph is a large-scale graph that does not fit in main memory, so lossless compression methods have been proposed for it. This paper introduces a compression scheme that combines efficient storage with fast retrieval of the information in a node. The scheme exploits the properties of the Web Graph without assuming an ordering of the URLs, so that it may be applied to more general graphs. Tests on some datasets in use achieve space savings of about 10% over existing methods.
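
    A sketch of the general BFS-compression idea (not the paper's exact scheme): renumber nodes in BFS order so that neighbour identifiers become close together, then store each sorted adjacency list as small gaps; the entropy-coding stage is omitted and the example graph is invented.

      # BFS renumbering + gap encoding of adjacency lists (illustrative only).
      from collections import deque

      def bfs_order(adj, root=0):
          order, seen, q = [], {root}, deque([root])
          while q:
              u = q.popleft()
              order.append(u)
              for v in adj[u]:
                  if v not in seen:
                      seen.add(v)
                      q.append(v)
          return order

      def gap_encode(adj):
          order = bfs_order(adj)
          new_id = {old: new for new, old in enumerate(order)}
          encoded = []
          for old in order:
              nbrs = sorted(new_id[v] for v in adj[old])
              gaps = [nbrs[0]] + [b - a for a, b in zip(nbrs, nbrs[1:])]  # small gaps compress well
              encoded.append(gaps)
          return encoded

      if __name__ == "__main__":
          # Small connected undirected example graph as adjacency lists.
          adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 4], 3: [1, 4], 4: [2, 3]}
          for node, gaps in enumerate(gap_encode(adj)):
              print(f"node {node}: gaps {gaps}")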

  4. Compression device for feeding a waste material to a reactor

    Science.gov (United States)

    Williams, Paul M.; Faller, Kenneth M.; Bauer, Edward J.

    2001-08-21

    A compression device for feeding a waste material to a reactor includes a waste material feed assembly having a hopper, a supply tube and a compression tube. Each of the supply and compression tubes includes feed-inlet and feed-outlet ends. A feed-discharge valve assembly is located between the feed-outlet end of the compression tube and the reactor. A feed auger-screw extends axially in the supply tube between the feed-inlet and feed-outlet ends thereof. A compression auger-screw extends axially in the compression tube between the feed-inlet and feed-outlet ends thereof. The compression tube is sloped downwardly towards the reactor to drain fluid from the waste material to the reactor and is oriented at generally right angle to the supply tube such that the feed-outlet end of the supply tube is adjacent to the feed-inlet end of the compression tube. A programmable logic controller is provided for controlling the rotational speed of the feed and compression auger-screws for selectively varying the compression of the waste material and for overcoming jamming conditions within either the supply tube or the compression tube.

  5. Micro-Doppler Ambiguity Resolution Based on Short-Time Compressed Sensing

    Directory of Open Access Journals (Sweden)

    Jing-bo Zhuang

    2015-01-01

    When using a long-range radar (LRR) to track a target with micromotion, the micro-Doppler signature embodied in the radar echoes may suffer from an ambiguity problem. In this paper, we propose a novel method based on compressed sensing (CS) to resolve micro-Doppler ambiguity. According to the RIP requirement, a sparse probing pulse train with random transmission times is designed. After matched filtering, the slow-time echo signals of the micromotion target can be viewed as a randomly sparse sampling of the Doppler spectrum. Several successive pulses are selected to form a short-time window, and the CS sensing matrix is built according to the time stamps of these pulses. Then, by performing Orthogonal Matching Pursuit (OMP), the unambiguous micro-Doppler spectrum can be obtained. The proposed algorithm is verified using echo signals generated according to the theoretical model and signals with a micro-Doppler signature produced using the commercial electromagnetic simulation software FEKO.
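
    A sketch of the recovery step named above: Orthogonal Matching Pursuit applied to randomly timed slow-time samples to recover a sparse Doppler spectrum. The pulse-timing model, frequency grid and amplitudes are invented for the demo and are not taken from the paper.

      # OMP recovery of a sparse Doppler spectrum from randomly timed pulses (illustrative only).
      import numpy as np

      def omp(A, y, sparsity):
          """Recover a `sparsity`-sparse x with y ~ A @ x via Orthogonal Matching Pursuit."""
          residual, support = y.copy(), []
          x = np.zeros(A.shape[1], dtype=complex)
          for _ in range(sparsity):
              idx = int(np.argmax(np.abs(A.conj().T @ residual)))   # most correlated atom
              if idx not in support:
                  support.append(idx)
              coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
              residual = y - A[:, support] @ coef
          x[support] = coef
          return x

      if __name__ == "__main__":
          rng = np.random.default_rng(4)
          n_freqs, n_pulses = 256, 40
          t = np.sort(rng.uniform(0.0, 1.0, n_pulses))              # random pulse transmit times [s]
          freqs = np.arange(n_freqs)                                # assumed Doppler grid [Hz]
          A = np.exp(2j * np.pi * t[:, None] * freqs[None, :]) / np.sqrt(n_pulses)
          true_bins = [30, 97, 180]                                 # invented micro-Doppler components
          y = A[:, true_bins] @ np.array([1.0, 0.7, 0.5])
          x_hat = omp(A, y, sparsity=3)
          print("recovered bins:", sorted(int(i) for i in np.flatnonzero(np.abs(x_hat) > 0.1)))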

  6. Prechamber Compression-Ignition Engine Performance

    Science.gov (United States)

    Moore, Charles S; Collins, John H , Jr

    1938-01-01

    Single-cylinder compression-ignition engine tests were made to investigate the performance characteristics of the prechamber type of cylinder head. Certain fundamental variables influencing engine performance -- clearance distribution, size, shape, and direction of the passage connecting the cylinder and prechamber, shape of prechamber, cylinder clearance, compression ratio, and boosting -- were independently tested. Results of motoring and of power tests, including several typical indicator cards, are presented.

  7. Spectral Compressive Sensing with Polar Interpolation

    DEFF Research Database (Denmark)

    Fyhn, Karsten; Dadkhahi, Hamid; F. Duarte, Marco

    2013-01-01

    In this paper, we introduce a greedy recovery algorithm that leverages a band-exclusion function and a polar interpolation function to address these two issues in spectral compressive sensing. Our algorithm is geared towards line spectral estimation from compressive measurements and outperforms most existing...

  8. Comparison of changes in tidal volume associated with expiratory rib cage compression and expiratory abdominal compression in patients on prolonged mechanical ventilation

    OpenAIRE

    Morino, Akira; Shida, Masahiro; Tanaka, Masashi; Sato, Kimihiro; Seko, Toshiaki; Ito, Shunsuke; Ogawa, Shunichi; Takahashi, Naoaki

    2015-01-01

    [Purpose] This study was designed to compare and clarify the relationship between expiratory rib cage compression and expiratory abdominal compression in patients on prolonged mechanical ventilation, with a focus on tidal volume. [Subjects and Methods] The subjects were 18 patients on prolonged mechanical ventilation, who had undergone tracheostomy. Each patient received expiratory rib cage compression and expiratory abdominal compression; the order of implementation was randomized. Subjects ...

  9. Thermo-fluid dynamic analysis of wet compression process

    Energy Technology Data Exchange (ETDEWEB)

    Mohan, Abhay; Kim, Heuy Dong [School of Mechanical Engineering, Andong National University, Andong (Korea, Republic of); Chidambaram, Palani Kumar [FMTRC, Daejoo Machinery Co. Ltd., Daegu (Korea, Republic of); Suryan, Abhilash [Dept. of Mechanical Engineering, College of Engineering Trivandrum, Kerala (India)

    2016-12-15

    Wet compression systems increase the useful power output of a gas turbine by reducing the compressor work through the reduction of the air temperature inside the compressor. The actual wet compression process differs from the conventional single-phase compression process because of the latent heat absorbed by the evaporating water droplets, so the wet compression process cannot be assumed to be isentropic. In the current investigation, the gas-liquid two-phase mixture has been modeled as air containing dispersed water droplets inside a simple cylinder-piston system. The piston moves in the axial direction inside the cylinder to achieve wet compression. Effects on thermodynamic properties such as temperature, pressure and relative humidity are investigated in detail for different parameters such as compression speed and overspray. An analytical model is derived and the requisite thermodynamic curves are generated. The deviations of the generated thermodynamic curves from the dry isentropic curves (PV^γ = constant) are analyzed.

  10. Thermo-fluid dynamic analysis of wet compression process

    International Nuclear Information System (INIS)

    Mohan, Abhay; Kim, Heuy Dong; Chidambaram, Palani Kumar; Suryan, Abhilash

    2016-01-01

    Wet compression systems increase the useful power output of a gas turbine by reducing the compressor work through the reduction of the air temperature inside the compressor. The actual wet compression process differs from the conventional single-phase compression process because of the latent heat absorbed by the evaporating water droplets, so the wet compression process cannot be assumed to be isentropic. In the current investigation, the gas-liquid two-phase mixture has been modeled as air containing dispersed water droplets inside a simple cylinder-piston system. The piston moves in the axial direction inside the cylinder to achieve wet compression. Effects on thermodynamic properties such as temperature, pressure and relative humidity are investigated in detail for different parameters such as compression speed and overspray. An analytical model is derived and the requisite thermodynamic curves are generated. The deviations of the generated thermodynamic curves from the dry isentropic curves (PV^γ = constant) are analyzed.
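
    A quick way to see why the deviation from the dry isentropic curve matters is to compare the outlet temperature of a dry adiabatic compression with that of a polytropic process whose exponent is lowered to mimic evaporative cooling. The numbers below (inlet temperature, compression ratio and the effective wet exponent n_wet) are illustrative assumptions, not values taken from the paper's analytical model.

        # Polytropic ideal-gas relation: T2/T1 = (V1/V2)**(k - 1) for P V**k = constant
        gamma = 1.4      # dry-air isentropic exponent
        n_wet = 1.30     # assumed effective polytropic exponent with overspray
        T1 = 300.0       # K, inlet temperature
        r = 10.0         # volume compression ratio V1/V2

        T2_dry = T1 * r ** (gamma - 1.0)
        T2_wet = T1 * r ** (n_wet - 1.0)

        print(f"dry compression outlet temperature: {T2_dry:6.1f} K")
        print(f"wet compression outlet temperature: {T2_wet:6.1f} K")
        print(f"temperature reduction:              {T2_dry - T2_wet:6.1f} K")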

  11. Statistical conditional sampling for variable-resolution video compression.

    Directory of Open Access Journals (Sweden)

    Alexander Wong

    Full Text Available In this study, we investigate a variable-resolution approach to video compression based on conditional random field (CRF) modeling and statistical conditional sampling in order to further improve the compression rate while maintaining high-quality video. In the proposed approach, representative key-frames within a video shot are identified and stored at full resolution. The remaining frames within the video shot are stored and compressed at a reduced resolution. At the decompression stage, a region-based dictionary is constructed from the key-frames and used to restore the reduced-resolution frames to the original resolution via statistical conditional sampling. The sampling approach is based on the conditional probability of the CRF model, using the constructed dictionary. Experimental results show that the proposed variable-resolution approach via statistical conditional sampling has potential for improving compression rates when compared to compressing the video at full resolution, while achieving higher video quality when compared to compressing the video at reduced resolution.

  12. Modelling for Fuel Optimal Control of a Variable Compression Engine

    OpenAIRE

    Nilsson, Ylva

    2007-01-01

    Variable compression engines are a means to meet the demand for lower fuel consumption. A high compression ratio results in high engine efficiency, but also increases the knock tendency. On conventional engines with a fixed compression ratio, knock is avoided by retarding the ignition angle. The variable compression engine offers an extra dimension in knock control, since both the ignition angle and the compression ratio can be adjusted. The central question is thus for what combination of compression ra...

  13. Laser driven supersonic flow over a compressible foam surface on the Nike laser

    Science.gov (United States)

    Harding, E. C.; Drake, R. P.; Aglitskiy, Y.; Plewa, T.; Velikovich, A. L.; Gillespie, R. S.; Weaver, J. L.; Visco, A.; Grosskopf, M. J.; Ditmar, J. R.

    2010-05-01

    A laser driven millimeter-scale target was used to generate a supersonic shear layer in an attempt to create a Kelvin-Helmholtz (KH) unstable interface in a high-energy-density (HED) plasma. The KH instability is a fundamental fluid instability that remains unexplored in HED plasmas, which are relevant to the inertial confinement fusion and astrophysical environments. In the experiment presented here the Nike laser [S. P. Obenschain et al., Phys. Plasmas 3, 2098 (1996)] was used to create and drive Al plasma over a rippled foam surface. In response to the supersonic Al flow (Mach=2.6±1.1) shocks should form in the Al flow near the perturbations. The experimental data were used to infer the existence and location of these shocks. In addition, the interface perturbations show growth that has possible contributions from both KH and Richtmyer-Meshkov instabilities. Since compressible shear layers exhibit smaller growth, it is important to use the KH growth rate derived from the compressible dispersion relation.

  14. Laser driven supersonic flow over a compressible foam surface on the Nike laser

    International Nuclear Information System (INIS)

    Harding, E. C.; Drake, R. P.; Gillespie, R. S.; Visco, A.; Grosskopf, M. J.; Ditmar, J. R.; Aglitskiy, Y.; Velikovich, A. L.; Weaver, J. L.; Plewa, T.

    2010-01-01

    A laser driven millimeter-scale target was used to generate a supersonic shear layer in an attempt to create a Kelvin-Helmholtz (KH) unstable interface in a high-energy-density (HED) plasma. The KH instability is a fundamental fluid instability that remains unexplored in HED plasmas, which are relevant to the inertial confinement fusion and astrophysical environments. In the experiment presented here the Nike laser [S. P. Obenschain et al., Phys. Plasmas 3, 2098 (1996)] was used to create and drive Al plasma over a rippled foam surface. In response to the supersonic Al flow (Mach=2.6±1.1) shocks should form in the Al flow near the perturbations. The experimental data were used to infer the existence and location of these shocks. In addition, the interface perturbations show growth that has possible contributions from both KH and Richtmyer-Meshkov instabilities. Since compressible shear layers exhibit smaller growth, it is important to use the KH growth rate derived from the compressible dispersion relation.

  15. Effects of Instantaneous Multiband Dynamic Compression on Speech Intelligibility

    Directory of Open Access Journals (Sweden)

    Herzke Tobias

    2005-01-01

    Full Text Available The recruitment phenomenon, that is, the reduced dynamic range between threshold and uncomfortable level, is attributed to the loss of instantaneous dynamic compression on the basilar membrane. Despite this, hearing aids commonly use slow-acting dynamic compression for its compensation, because this was found to be the most successful strategy in terms of speech quality and intelligibility rehabilitation. Former attempts to use fast-acting compression gave ambiguous results, raising the question as to whether auditory-based recruitment compensation by instantaneous compression is in principle applicable in hearing aids. This study thus investigates instantaneous multiband dynamic compression based on an auditory filterbank. Instantaneous envelope compression is performed in each frequency band of a gammatone filterbank, which provides a combination of time and frequency resolution comparable to the normal healthy cochlea. The gain characteristics used for dynamic compression are deduced from categorical loudness scaling. In speech intelligibility tests, the instantaneous dynamic compression scheme was compared against a linear amplification scheme, which used the same filterbank for frequency analysis, but employed constant gain factors that restored the sound level for medium perceived loudness in each frequency band. In subjective comparisons, five of nine subjects preferred the linear amplification scheme and would not accept the instantaneous dynamic compression in hearing aids. Four of nine subjects did not perceive any quality differences. A sentence intelligibility test in noise (Oldenburg sentence test) showed little to no negative effects of the instantaneous dynamic compression, compared to linear amplification. A word intelligibility test in quiet (one-syllable rhyme test) showed that the subjects benefit from the larger amplification at low levels provided by instantaneous dynamic compression. Further analysis showed that the increase
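
    The contrast between instantaneous and slow-acting compression described above can be illustrated on a single frequency band: the instantaneous scheme applies a static gain rule sample by sample, while the slow-acting scheme applies the same rule to a smoothed envelope. The gain curve, threshold, ratio and release time below are arbitrary placeholders, not the loudness-scaling-based gains of the study, and the gammatone filterbank is omitted.

        import numpy as np

        def compressive_gain(level, threshold=0.05, ratio=3.0):
            # Static compression curve: above the threshold, output grows 1/ratio as fast as input.
            level = np.maximum(level, 1e-9)
            gain = np.where(level > threshold,
                            (level / threshold) ** (1.0 / ratio - 1.0),
                            1.0)
            return gain

        fs = 16000
        t = np.arange(0, 0.5, 1.0 / fs)
        band = np.sin(2 * np.pi * 1000 * t) * (0.02 + 0.3 * (t > 0.25))   # level step in one band

        # Instantaneous compression: gain follows the rectified signal sample by sample
        inst = band * compressive_gain(np.abs(band))

        # Slow-acting compression: gain follows a smoothed envelope (release ~100 ms, assumed)
        alpha = np.exp(-1.0 / (0.1 * fs))
        env = np.zeros_like(band)
        for i in range(1, len(band)):
            env[i] = max(np.abs(band[i]), alpha * env[i - 1])
        slow = band * compressive_gain(env)

        print(np.max(np.abs(inst[t > 0.3])), np.max(np.abs(slow[t > 0.3])))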

  16. Magnetic force micropiston: An integrated force/microfluidic device for the application of compressive forces in a confined environment

    Science.gov (United States)

    Fisher, J. K.; Kleckner, N.

    2014-02-01

    Cellular biology takes place inside confining spaces. For example, bacteria grow in crevices, red blood cells squeeze through capillaries, and chromosomes replicate inside the nucleus. Frequently, the extent of this confinement varies. Bacteria grow longer and divide, red blood cells move through smaller and smaller passages as they travel to capillary beds, and replication doubles the amount of DNA inside the nucleus. This increase in confinement, either due to a decrease in the available space or an increase in the amount of material contained in a constant volume, has the potential to squeeze and stress objects in ways that may lead to changes in morphology, dynamics, and ultimately biological function. Here, we describe a device developed to probe the interplay between confinement and the mechanical properties of cells and cellular structures, and forces that arise due to changes in a structure's state. In this system, the manipulation of a magnetic bead exerts a compressive force upon a target contained in the confining space of a microfluidic channel. This magnetic force microfluidic piston is constructed in such a way that we can measure (a) target compliance and changes in compliance as induced by changes in buffer, extract, or biochemical composition, (b) target expansion force generated by changes in the same parameters, and (c) the effects of compression stress on a target's structure and function. Beyond these issues, our system has general applicability to a variety of questions requiring the combination of mechanical forces, confinement, and optical imaging.

  17. Emittance Growth during Bunch Compression in the CTF-II

    Energy Technology Data Exchange (ETDEWEB)

    Raubenheimer, Tor O

    1999-02-26

    Measurements of the beam emittance during bunch compression in the CLIC Test Facility (CTF-II) are described. The measurements were made with different beam charges and different energy correlations, as the bunch compressor settings were varied from no compression, through the point of full compression, to over-compression. Significant increases in the beam emittance were observed, with the maximum emittance occurring near the point of full (maximal) compression. Finally, evaluation of possible emittance dilution mechanisms indicates that coherent synchrotron radiation was the most likely cause.

  18. Type-I cascaded quadratic soliton compression in lithium niobate: Compressing femtosecond pulses from high-power fiber lasers

    DEFF Research Database (Denmark)

    Bache, Morten; Wise, Frank W.

    2010-01-01

    The output pulses of a commercial high-power femtosecond fiber laser or amplifier are typically around 300–500 fs with wavelengths of approximately 1030 nm and tens of microjoules of pulse energy. Here, we present a numerical study of cascaded quadratic soliton compression of such pulses in LiNbO3....... However, the strong group-velocity dispersion implies that the pulses can achieve moderate compression to durations of less than 130 fs in available crystal lengths. Most of the pulse energy is conserved because the compression is moderate. The effects of diffraction and spatial walk-off are addressed......, and in particular the latter could become an issue when compressing such long crystals (around 10 cm long). We finally show that the second harmonic contains a short pulse locked to the pump and a long multi-picosecond red-shifted detrimental component. The latter is caused by the nonlocal effects...

  19. Compact torus compression of microwaves

    International Nuclear Information System (INIS)

    Hewett, D.W.; Langdon, A.B.

    1985-01-01

    The possibility that a compact torus (CT) might be accelerated to large velocities has been suggested by Hartman and Hammer. If this is feasible, one application of these moving CTs might be to compress microwaves. The proposed mechanism is that a coaxial vacuum region in front of a CT is prefilled with a number of normal electromagnetic modes on which the CT impinges. A crucial assumption of this proposal is that the CT excludes the microwaves and therefore compresses them. Should the microwaves penetrate the CT, compression efficiency is diminished and significant CT heating results. MFE applications in the same parameter regime have found electromagnetic radiation capable of penetrating, heating, and driving currents. We report here a cursory investigation of rf penetration using a 1-D version of a direct implicit PIC code.

  20. Compressibility characteristics of Sabak Bernam Marine Clay

    Science.gov (United States)

    Lat, D. C.; Ali, N.; Jais, I. B. M.; Baharom, B.; Yunus, N. Z. M.; Salleh, S. M.; Azmi, N. A. C.

    2018-04-01

    This study is carried out to determine the geotechnical properties and compressibility characteristics of marine clay collected at Sabak Bernam. The compressibility characteristics of this soil are determined from a 1-D consolidation test and verified against existing correlations by other researchers. No literature has been found on the compressibility characteristics of Sabak Bernam Marine Clay. It is important to carry out this study since this type of marine clay covers a large coastal area of the west coast of Malaysia. This type of marine clay was found on the main road connecting Klang to Perak, and the road keeps experiencing undulation and uneven settlement which jeopardises the safety of road users. The soil is indicated in the Generalised Soil Map of Peninsular Malaysia as a CLAY with alluvial soil on recent marine and riverine alluvium. Based on the British Standard Soil Classification and Plasticity Chart, the soil is classified as a CLAY with very high plasticity (CV). Results from laboratory tests on physical properties and compressibility parameters show that Sabak Bernam Marine Clay (SBMC) is highly compressible and has low permeability and poor drainage characteristics. The compressibility parameters obtained for SBMC are in good agreement with those reported by other researchers in the same field.

  1. Compressing bitmap indexes for faster search operations

    International Nuclear Information System (INIS)

    Wu, Kesheng; Otoo, Ekow J.; Shoshani, Arie

    2002-01-01

    In this paper, we study the effects of compression on bitmap indexes. The main operations on the bitmaps during query processing are bitwise logical operations such as AND, OR, NOT, etc. Using the general purpose compression schemes, such as gzip, the logical operations on the compressed bitmaps are much slower than on the uncompressed bitmaps. Specialized compression schemes, like the byte-aligned bitmap code(BBC), are usually faster in performing logical operations than the general purpose schemes, but in many cases they are still orders of magnitude slower than the uncompressed scheme. To make the compressed bitmap indexes operate more efficiently, we designed a CPU-friendly scheme which we refer to as the word-aligned hybrid code (WAH). Tests on both synthetic and real application data show that the new scheme significantly outperforms well-known compression schemes at a modest increase in storage space. Compared to BBC, a scheme well-known for its operational efficiency, WAH performs logical operations about 12 times faster and uses only 60 percent more space. Compared to the uncompressed scheme, in most test cases WAH is faster while still using less space. We further verified with additional tests that the improvement in logical operation speed translates to similar improvement in query processing speed

  2. Compressing bitmap indexes for faster search operations

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Kesheng; Otoo, Ekow J.; Shoshani, Arie

    2002-04-25

    In this paper, we study the effects of compression on bitmap indexes. The main operations on the bitmaps during query processing are bitwise logical operations such as AND, OR, NOT, etc. Using the general purpose compression schemes, such as gzip, the logical operations on the compressed bitmaps are much slower than on the uncompressed bitmaps. Specialized compression schemes, like the byte-aligned bitmap code(BBC), are usually faster in performing logical operations than the general purpose schemes, but in many cases they are still orders of magnitude slower than the uncompressed scheme. To make the compressed bitmap indexes operate more efficiently, we designed a CPU-friendly scheme which we refer to as the word-aligned hybrid code (WAH). Tests on both synthetic and real application data show that the new scheme significantly outperforms well-known compression schemes at a modest increase in storage space. Compared to BBC, a scheme well-known for its operational efficiency, WAH performs logical operations about 12 times faster and uses only 60 percent more space. Compared to the uncompressed scheme, in most test cases WAH is faster while still using less space. We further verified with additional tests that the improvement in logical operation speed translates to similar improvement in query processing speed.
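
    A stripped-down sketch of the word-aligned idea behind WAH is shown below: the bit vector is cut into 31-bit groups, runs of identical all-zero or all-one groups collapse into a single fill word, and everything else is kept as a literal word. Real WAH packs this information into raw 32-bit integers so that bitwise AND/OR can run directly on the compressed form; the tuple representation and function name here are only for readability and are not the authors' implementation.

        import numpy as np

        def wah_encode(bits, word_bits=32):
            # Simplified word-aligned hybrid encoding of a bit vector.
            group = word_bits - 1
            # pad to a multiple of the group size
            padded = np.concatenate([bits, np.zeros((-len(bits)) % group, dtype=np.uint8)])
            groups = padded.reshape(-1, group)

            words = []
            for g in groups:
                total = int(g.sum())
                if total in (0, group):                      # candidate for a fill word
                    fill_bit = 1 if total == group else 0
                    if words and words[-1][0] == 'fill' and words[-1][1] == fill_bit:
                        words[-1] = ('fill', fill_bit, words[-1][2] + 1)   # extend the run
                    else:
                        words.append(('fill', fill_bit, 1))
                else:
                    words.append(('literal', g.copy()))
            return words

        # A sparse bitmap: long zero runs compress to a handful of fill words
        bitmap = np.zeros(10_000, dtype=np.uint8)
        bitmap[[5, 2000, 2001, 9000]] = 1
        print(len(wah_encode(bitmap)))    # far fewer words than the ~323 groups of the raw bitmap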

  3. Development and evaluation of a novel lossless image compression method (AIC: artificial intelligence compression method) using neural networks as artificial intelligence

    International Nuclear Information System (INIS)

    Fukatsu, Hiroshi; Naganawa, Shinji; Yumura, Shinnichiro

    2008-01-01

    This study aimed to validate the performance of a novel image compression method using a neural network to achieve lossless compression. The encoding consists of the following blocks: a prediction block; a residual data calculation block; a transformation and quantization block; an organization and modification block; and an entropy encoding block. The predicted image is divided into four macro-blocks using the original image for teaching, and then redivided into sixteen sub-blocks. The predicted image is compared to the original image to create the residual image. The spatial and frequency data of the residual image are compared and transformed. Chest radiography, computed tomography (CT), magnetic resonance imaging, positron emission tomography, radioisotope mammography, ultrasonography, and digital subtraction angiography images were compressed using the AIC lossless compression method, and the compression rates were calculated. The compression rates were around 15:1 for chest radiography and mammography, 12:1 for CT, and around 6:1 for the other images. This method thus enables greater lossless compression than conventional methods. This novel method should improve the efficiency of handling the increasing volume of medical imaging data. (author)
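
    The lossless principle behind a prediction-plus-residual encoder can be shown with a toy fixed predictor. The neural-network predictor, block partitioning and entropy coder of the AIC method are not reproduced here; the predictor below (average of the left and upper neighbours) and the synthetic image are assumptions for illustration only.

        import numpy as np

        def predict_residual(image):
            # Predict each pixel from its left and upper neighbours and return the residual.
            # The lossless property rests only on: original = prediction + residual.
            img = image.astype(np.int32)
            pred = np.zeros_like(img)
            pred[1:, 1:] = (img[1:, :-1] + img[:-1, 1:]) // 2    # average of left and upper pixel
            pred[0, 1:] = img[0, :-1]
            pred[1:, 0] = img[:-1, 0]
            return img - pred, pred

        # Smooth synthetic "radiograph": residuals are much smaller than the raw values
        x, y = np.meshgrid(np.arange(256), np.arange(256))
        image = (2000 + 500 * np.sin(x / 40.0) * np.cos(y / 60.0)).astype(np.uint16)

        residual, prediction = predict_residual(image)
        assert np.array_equal(prediction + residual, image.astype(np.int32))   # lossless round trip
        print(int(np.abs(residual).mean()), int(image.mean()))                 # residuals are cheaper to entropy-code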

  4. Shock absorbing properties of toroidal shells under compression, 3

    International Nuclear Information System (INIS)

    Sugita, Yuji

    1985-01-01

    The author has previously presented the static load-deflection relations of a toroidal shell subjected to axisymmetric compression between rigid plates and those of its outer half when subjected to lateral compression. In both these cases, the analytical method was based on the incremental Rayleigh-Ritz method. In this paper, the effects of compression angle and strain rate on the load-deflection relations of the toroidal shell are investigated for its use as a shock absorber for the radioactive material shipping cask, which must keep its structural integrity even after accidental falls at any angle. Static compression tests have been carried out at four angles of compression, 10°, 20°, 50°, and 90°, and the applications of the preceding analytical method have been discussed. Dynamic compression tests have also been performed using the free-falling drop hammer. The results are compared with those in the static compression tests. (author)

  5. Correlation and image compression for limited-bandwidth CCD.

    Energy Technology Data Exchange (ETDEWEB)

    Thompson, Douglas G.

    2005-07-01

    As radars move to Unmanned Aerial Vehicles with limited-bandwidth data downlinks, the amount of data stored and transmitted with each image becomes more significant. This document gives the results of a study to determine the effect of lossy compression in the image magnitude and phase on Coherent Change Detection (CCD). We examine 44 lossy compression types, plus lossless zlib compression, and test each compression method with over 600 CCD image pairs. We also derive theoretical predictions for the correlation for most of these compression schemes, which compare favorably with the experimental results. We recommend image transmission formats for limited-bandwidth programs having various requirements for CCD, including programs which cannot allow performance degradation and those which have stricter bandwidth requirements at the expense of CCD performance.

  6. Ion-driver fast ignition: Reducing heavy-ion fusion driver energy and cost, simplifying chamber design, target fab, tritium fueling and power conversion

    International Nuclear Information System (INIS)

    Logan, G.; Callahan-Miller, D.; Perkins, J.; Caporaso, G.; Tabak, M.; Moir, R.; Meier, W.; Bangerter, Roger; Lee, Ed

    1998-01-01

    Ion fast ignition, like laser fast ignition, can potentially reduce driver energy for high target gain by an order of magnitude, while reducing fuel capsule implosion velocity, convergence ratio, and required precisions in target fabrication and illumination symmetry, all of which should further improve and simplify IFE power plants. From fast-ignition target requirements, we determine requirements for ion beam acceleration, pulse-compression, and final focus for advanced accelerators that must be developed for much shorter pulses and higher voltage gradients than today's accelerators, to deliver the petawatt peak powers and small focal spots (∼100 μm) required. Although such peak powers and small focal spots are available today with lasers, development of such advanced accelerators is motivated by the greater likely efficiency of deep ion penetration and deposition into pre-compressed 1000x liquid density DT cores. Ion ignitor beam parameters for acceleration, pulse compression, and final focus are estimated for two examples based on a Dielectric Wall Accelerator: (1) a small target with ρr ∼ 2 g/cm² for a small demo/pilot plant producing ∼40 MJ of fusion yield per target, and (2) a large target with ρr ∼ 10 g/cm² producing ∼1 GJ yield for multi-unit electricity/hydrogen plants, allowing internal T-breeding with low T/D ratios, >75% of the total fusion yield captured for plasma direct conversion, and simple liquid-protected chambers with gravity clearing. Key enabling development needs for ion fast ignition are found to be (1) "Close-coupled" target designs for single-ended illumination of both compressor and ignitor beams; (2) Development of high gradient (>25 MV/m) linacs with high charge-state (q ∼ 26) ion sources for short (∼5 ns) accelerator output pulses; (3) Small mm-scale laser-driven plasma lens of ∼10 MG fields to provide steep focusing angles close-in to the target (built-in as part of each target); (4) beam space charge

  7. Ion-driver fast ignition: Reducing heavy-ion fusion driver energy and cost, simplifying chamber design, target fab, tritium fueling and power conversion

    Energy Technology Data Exchange (ETDEWEB)

    Logan, G.; Callahan-Miller, D.; Perkins, J.; Caporaso, G.; Tabak, M.; Moir, R.; Meier, W.; Bangerter, Roger; Lee, Ed

    1998-04-01

    Ion fast ignition, like laser fast ignition, can potentially reduce driver energy for high target gain by an order of magnitude, while reducing fuel capsule implosion velocity, convergence ratio, and required precisions in target fabrication and illumination symmetry, all of which should further improve and simplify IFE power plants. From fast-ignition target requirements, we determine requirements for ion beam acceleration, pulse-compression, and final focus for advanced accelerators that must be developed for much shorter pulses and higher voltage gradients than today's accelerators, to deliver the petawatt peak powers and small focal spots (∼100 μm) required. Although such peak powers and small focal spots are available today with lasers, development of such advanced accelerators is motivated by the greater likely efficiency of deep ion penetration and deposition into pre-compressed 1000x liquid density DT cores. Ion ignitor beam parameters for acceleration, pulse compression, and final focus are estimated for two examples based on a Dielectric Wall Accelerator: (1) a small target with ρr ∼ 2 g/cm² for a small demo/pilot plant producing ∼40 MJ of fusion yield per target, and (2) a large target with ρr ∼ 10 g/cm² producing ∼1 GJ yield for multi-unit electricity/hydrogen plants, allowing internal T-breeding with low T/D ratios, >75% of the total fusion yield captured for plasma direct conversion, and simple liquid-protected chambers with gravity clearing. Key enabling development needs for ion fast ignition are found to be (1) "Close-coupled" target designs for single-ended illumination of both compressor and ignitor beams; (2) Development of high gradient (>25 MV/m) linacs with high charge-state (q ∼ 26) ion sources for short (∼5 ns) accelerator output pulses; (3) Small mm-scale laser-driven plasma lens of ∼10 MG fields to provide steep focusing angles

  8. Large Eddy Simulation for Compressible Flows

    CERN Document Server

    Garnier, E; Sagaut, P

    2009-01-01

    Large Eddy Simulation (LES) of compressible flows is still a widely unexplored area of research. The authors, whose books are considered the most relevant monographs in this field, provide the reader with a comprehensive state-of-the-art presentation of the available LES theory and application. This book is a sequel to "Large Eddy Simulation for Incompressible Flows", as most of the research on LES for compressible flows is based on variable density extensions of models, methods and paradigms that were developed within the incompressible flow framework. The book addresses both the fundamentals and the practical industrial applications of LES in order to point out gaps in the theoretical framework as well as to bridge the gap between LES research and the growing need to use it in engineering modeling. After introducing the fundamentals on compressible turbulence and the LES governing equations, the mathematical framework for the filtering paradigm of LES for compressible flow equations is established. Instead ...

  9. Compression and archiving of digital images

    International Nuclear Information System (INIS)

    Huang, H.K.

    1988-01-01

    This paper describes the application of a full-frame bit-allocation image compression technique to a hierarchical digital image archiving system consisting of magnetic disks, optical disks and an optical disk library. The digital archiving system without compression has been in clinical operation in Pediatric Radiology for more than half a year. The database in the system consists of all pediatric inpatients, including all images from computed radiography, digitized x-ray films, CT, MR, and US. The rate of image accumulation is approximately 1,900 megabytes per week. The hardware design of the compression module is based on a Motorola 68020 microprocessor, a VME bus, a 16 megabyte image buffer memory board, and three Motorola 56001 digital signal processing chips on a VME board for performing the two-dimensional cosine transform and the quantization. The clinical evaluation of the compression module with the image archiving system is expected to be in February 1988
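
    The transform-and-quantize step at the heart of such a module can be sketched with a full-frame 2-D DCT: transform the image, keep only the largest coefficients, and invert. The bit-allocation strategy of the technique described above is not reproduced; the keep_fraction parameter and the synthetic image are assumptions, and scipy's dctn/idctn stand in for the dedicated DSP hardware.

        import numpy as np
        from scipy.fft import dctn, idctn

        def dct_compress(image, keep_fraction=0.10):
            # Full-frame 2-D DCT, keep only the largest coefficients, then invert.
            coeffs = dctn(image.astype(np.float64), norm='ortho')
            threshold = np.quantile(np.abs(coeffs), 1.0 - keep_fraction)
            sparse = np.where(np.abs(coeffs) >= threshold, coeffs, 0.0)
            return idctn(sparse, norm='ortho'), sparse

        rng = np.random.default_rng(0)
        # smooth synthetic image standing in for a radiograph
        image = rng.normal(1000, 50, size=(256, 256)).cumsum(axis=0).cumsum(axis=1) / 256.0

        reconstructed, kept = dct_compress(image, keep_fraction=0.05)
        nonzero = np.count_nonzero(kept)
        rms_error = np.sqrt(np.mean((reconstructed - image) ** 2))
        print(f"kept {nonzero} of {image.size} coefficients, RMS error {rms_error:.2f}")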

  10. Development of position measurement unit for flying inertial fusion energy target

    International Nuclear Information System (INIS)

    Tsuji, R; Endo, T; Yoshida, H; Norimatsu, T

    2016-01-01

    We report the present status of the development of a position measurement unit (PMU) for a flying inertial fusion energy (IFE) target. The PMU, which uses the Arago spot phenomenon, is designed to have a measurement accuracy better than 1 μm. By employing divergent, pulsed orthogonal laser beam illumination, we can measure the time and the target position at the moment of the pulsed illumination. The two-dimensional Arago spot image is compressed into a one-dimensional image by a cylindrical lens for real-time processing. The PMUs are set along the injection path of the flying target. The local positions of the target in each PMU are transferred to the controller and analysed to calculate the target trajectory. Two methods are presented to calculate the arrival time and the arrival position of the target at the reactor centre. (paper)

  11. Development of position measurement unit for flying inertial fusion energy target

    Science.gov (United States)

    Tsuji, R.; Endo, T.; Yoshida, H.; Norimatsu, T.

    2016-03-01

    We report the present status of the development of a position measurement unit (PMU) for a flying inertial fusion energy (IFE) target. The PMU, which uses the Arago spot phenomenon, is designed to have a measurement accuracy better than 1 μm. By employing divergent, pulsed orthogonal laser beam illumination, we can measure the time and the target position at the moment of the pulsed illumination. The two-dimensional Arago spot image is compressed into a one-dimensional image by a cylindrical lens for real-time processing. The PMUs are set along the injection path of the flying target. The local positions of the target in each PMU are transferred to the controller and analysed to calculate the target trajectory. Two methods are presented to calculate the arrival time and the arrival position of the target at the reactor centre.

  12. Pulse heating and ignition for off-centre ignited targets

    International Nuclear Information System (INIS)

    Mahdy, A.I.; Takabe, H.; Mima, K.

    1999-01-01

    An off-centre ignition model has been used to study the ignition conditions for laser targets related to the fast ignition scheme. A 2-D hydrodynamic code has been used, including alpha particle heating. The main goal of the study is the possibility of obtaining a high gain ICF target with fast ignition. In order to determine the ignition conditions, samples with various compressed core densities having different spark density-radius product (i.e. areal density) values were selected. The study was carried out in the presence of an external heating source, with a constant heating rate. A dependence of the ignition conditions on the heating rate of the external pulse is demonstrated. For a given set of ignition conditions, our simulation showed that an 11 ps pulse with 17 kJ of injected energy into the spark area was required to achieve ignition for a compressed core with a density of 200 g/cm³ and 0.5 g/cm² spark areal density. It is shown that the ignition conditions are highly dependent on the heating rate of the external pulse. (author)

  13. 46 CFR 112.50-7 - Compressed air starting.

    Science.gov (United States)

    2010-10-01

    ... 46 Shipping 4 2010-10-01 2010-10-01 false Compressed air starting. 112.50-7 Section 112.50-7... air starting. A compressed air starting system must meet the following: (a) The starting, charging... air compressors addressed in paragraph (c)(3)(i) of this section. (b) The compressed air starting...

  14. Experimental Compressibility of Molten Hedenbergite at High Pressure

    Science.gov (United States)

    Agee, C. B.; Barnett, R. G.; Guo, X.; Lange, R. A.; Waller, C.; Asimow, P. D.

    2010-12-01

    Experiments using the sink/float method have bracketed the density of molten hedenbergite (CaFeSi2O6) at high pressures and temperatures. The experiments are the first of their kind to determine the compressibility of molten hedenbergite at high pressure and are part of a collaborative effort to establish a new database for an array of silicate melt compositions, which will contribute to the development of an empirically based predictive model that will allow calculation of silicate liquid density and compressibility over a wide range of P-T-X conditions where melting could occur in the Earth. Each melt composition will be measured using: (i) double-bob Archimedean method for melt density and thermal expansion at ambient pressure, (ii) sound speed measurements on liquids to constrain melt compressibility at ambient pressure, (iii) sink/float technique to measure melt density to 15 GPa, and (iv) shock wave measurements of P-V-E equation of state and temperature between 10 and 150 GPa. Companion abstracts on molten fayalite (Waller et al., 2010) and liquid mixes of hedenbergite-diopside and anorthite-hedenbergite-diopside (Guo and Lange, 2010) are also presented at this meeting. In the present study, the hedenbergite starting material was synthesized at the Experimental Petrology Lab, University of Michigan, where melt density, thermal expansion, and sound speed measurements were also carried out. The starting material has also been loaded into targets at the Caltech Shockwave Lab, and experiments there are currently underway. We report here preliminary results from static compression measurement performed at the Department of Petrology, Vrije Universiteit, Amsterdam, and the High Pressure Lab, Institute of Meteoritics, University of New Mexico. Experiments were carried out in Quick Press piston-cylinder devices and a Walker-style multi-anvil device. Sink/float marker spheres implemented were gem quality synthetic forsterite (Fo100), San Carlos olivine (Fo90), and

  15. Bitshuffle: Filter for improving compression of typed binary data

    Science.gov (United States)

    Masui, Kiyoshi

    2017-12-01

    Bitshuffle rearranges typed, binary data for improving compression; the algorithm is implemented in a python/C package within the Numpy framework. The library can be used alongside HDF5 to compress and decompress datasets and is integrated through the dynamically loaded filters framework. Algorithmically, Bitshuffle is closely related to HDF5's Shuffle filter except it operates at the bit level instead of the byte level. Arranging a typed data array into a matrix with the elements as the rows and the bits within the elements as the columns, Bitshuffle "transposes" the matrix, such that all the least-significant bits are in one row, and so on for each bit position. This transposition is performed within blocks of data roughly 8 kB long; it does not in itself compress data, but rearranges it for more efficient compression. A compression library is necessary to perform the actual compression. This scheme has been used for compression of radio data in high performance computing.
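
    The bit-level transpose can be imitated with numpy's packbits/unpackbits, as in the sketch below. This is not the Bitshuffle library itself (which works on ~8 kB blocks in C with SIMD and is exposed as an HDF5 filter); it only shows why the rearrangement helps: for slowly varying data the high-order bit planes become long constant runs that a general-purpose compressor handles well.

        import numpy as np

        def bit_transpose(block):
            # Rearrange a block of fixed-width integers so that equal-significance
            # bits end up adjacent (Bitshuffle's core idea), then pack to bytes.
            n_bits = block.dtype.itemsize * 8
            bits = np.unpackbits(block.view(np.uint8)).reshape(-1, n_bits)   # one row per element
            return np.packbits(bits.T)                                       # columns -> contiguous bit planes

        def bit_untranspose(packed, dtype, n_elements):
            n_bits = np.dtype(dtype).itemsize * 8
            bits = np.unpackbits(packed)[: n_bits * n_elements].reshape(n_bits, n_elements)
            return np.packbits(bits.T).view(dtype)

        # Slowly varying data: after the transpose, high-order bit planes are long constant runs.
        data = (1000 + np.arange(4096) // 8).astype(np.uint16)
        shuffled = bit_transpose(data)
        restored = bit_untranspose(shuffled, np.uint16, data.size)
        assert np.array_equal(restored, data)    # the rearrangement is perfectly reversible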

  16. Heterogeneous Compression of Large Collections of Evolutionary Trees.

    Science.gov (United States)

    Matthews, Suzanne J

    2015-01-01

    Compressing heterogeneous collections of trees is an open problem in computational phylogenetics. In a heterogeneous tree collection, each tree can contain a unique set of taxa. An ideal compression method would allow for the efficient archival of large tree collections and enable scientists to identify common evolutionary relationships over disparate analyses. In this paper, we extend TreeZip to compress heterogeneous collections of trees. TreeZip is the most efficient algorithm for compressing homogeneous tree collections. To the best of our knowledge, no other domain-based compression algorithm exists for large heterogeneous tree collections or enables their rapid analysis. Our experimental results indicate that TreeZip averages 89.03 percent (72.69 percent) space savings on unweighted (weighted) collections of trees when the level of heterogeneity in a collection is moderate. The organization of the TRZ file allows for efficient computations over heterogeneous data. For example, consensus trees can be computed in mere seconds. Lastly, combining the TreeZip compressed (TRZ) file with general-purpose compression yields average space savings of 97.34 percent (81.43 percent) on unweighted (weighted) collections of trees. Our results lead us to believe that TreeZip will prove invaluable in the efficient archival of tree collections, and enables scientists to develop novel methods for relating heterogeneous collections of trees.

  17. Nonlinear parameter estimation in inviscid compressible flows in presence of uncertainties

    International Nuclear Information System (INIS)

    Jemcov, A.; Mathur, S.

    2004-01-01

    The focus of this paper is on the formulation and solution of inverse problems of parameter estimation using algorithmic differentiation. The inverse problem formulated here seeks to determine the input parameters that minimize a least squares functional with respect to certain target data. The formulation allows for uncertainty in the target data by considering the least squares functional in a stochastic basis described by the covariance of the target data. Furthermore, to allow for robust design, the formulation also accounts for uncertainties in the input parameters. This is achieved using the method of propagation of uncertainties, based on the directional derivatives of the output parameters with respect to the unknown parameters. The required derivatives are calculated simultaneously with the solution using generic programming that exploits the template and operator overloading features of the C++ language. The methodology described here is general and applicable to any numerical solution procedure for any set of governing equations, but for the purpose of this paper we consider a finite volume solution of the compressible Euler equations. In particular, we illustrate the method for the case of supersonic flow in a duct with a wedge. The parameter to be determined is the inlet Mach number, and the target data is the axial component of velocity at the exit of the duct. (author)
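
    A minimal sketch of forward-mode algorithmic differentiation by operator overloading is given below, in Python rather than the templated C++ mentioned above. The "flow solver" is replaced by a stand-in smooth function of the inlet Mach number, and the learning rate and target value are arbitrary; the point is only that the derivative of the least squares functional is carried along with the value and drives the parameter update.

        class Dual:
            # Minimal forward-mode AD value: carries (value, derivative) together.
            def __init__(self, value, deriv=0.0):
                self.value, self.deriv = value, deriv
            def __add__(self, other):
                other = other if isinstance(other, Dual) else Dual(other)
                return Dual(self.value + other.value, self.deriv + other.deriv)
            __radd__ = __add__
            def __sub__(self, other):
                other = other if isinstance(other, Dual) else Dual(other)
                return Dual(self.value - other.value, self.deriv - other.deriv)
            def __mul__(self, other):
                other = other if isinstance(other, Dual) else Dual(other)
                return Dual(self.value * other.value,
                            self.deriv * other.value + self.value * other.deriv)
            __rmul__ = __mul__
            def __pow__(self, n):
                return Dual(self.value ** n, n * self.value ** (n - 1) * self.deriv)

        def exit_velocity(mach_in):
            # stand-in for the solver output: a smooth function of the inlet Mach number
            return 1.2 * mach_in ** 2 + 0.4 * mach_in

        def objective(mach_in, target=5.0):
            r = exit_velocity(mach_in) - target
            return r * r                       # least squares misfit

        # Gradient descent on the inlet Mach number using the AD derivative
        m = 1.0
        for _ in range(200):
            out = objective(Dual(m, 1.0))      # seed the unknown parameter's derivative with 1
            m -= 0.01 * out.deriv
        print(round(m, 4), round(objective(Dual(m, 1.0)).value, 6))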

  18. Irradiation uniformity of spherical targets by multiple uv beams from OMEGA

    International Nuclear Information System (INIS)

    Beich, W.; Dunn, M.; Hutchison, R.

    1984-01-01

    Direct-drive laser fusion demands extremely high levels of irradiation uniformity to ensure uniform compression of spherical targets. The assessment of the illumination uniformity of targets irradiated by multiple beams from the OMEGA facility is made with the aid of multiple-beam spherical superposition codes, which take into account ray tracing, absorption, and a detailed knowledge of the intensity distribution of each beam in the target plane. In this report, recent estimates of the irradiation uniformity achieved with 6 and 12 uv beams of OMEGA are compared with previous measurements in the IR, and predictions are made for the uv illumination uniformity achievable with 24 beams of OMEGA.

  19. Numerical Investigation of the Influences of Wellbore Flow on Compressed Air Energy Storage in Aquifers

    Directory of Open Access Journals (Sweden)

    Yi Li

    2017-01-01

    Full Text Available With the blossoming of intermittent energy, compressed air energy storage (CAES) has attracted much attention as a potential large-scale energy storage technology. Compared with caverns as storage vessels, compressed air energy storage in aquifers (CAESA) has the advantages of wide availability and lower cost. The wellbore can play an important role as the energy transfer mechanism between the surroundings and the air in a CAESA system. In this paper, we investigated the influence of the well screen length on CAESA system performance using an integrated wellbore-reservoir simulator (T2WELL/EOS3). The results showed that the well screen length can affect the distribution of the initial gas bubble and that a system with a fully penetrating wellbore can obtain acceptably stable pressurized air and better energy efficiency. Subsequently, we investigated the impact of the energy storage scale and the target aquifer depth on the performance of a CAESA system using a fully penetrating wellbore. The simulation results demonstrated that larger energy storage scales exhibit better performance. In addition, deeper target aquifers, which could decrease the energy loss through larger storage density and higher temperature in the surrounding formation, can obtain better energy efficiency.

  20. Lossy image compression for digital medical imaging systems

    Science.gov (United States)

    Wilhelm, Paul S.; Haynor, David R.; Kim, Yongmin; Nelson, Alan C.; Riskin, Eve A.

    1990-07-01

    Image compression at rates of 10:1 or greater could make PACS much more responsive and economically attractive. This paper describes a protocol for subjective and objective evaluation of the fidelity of compressed/decompressed images to the originals and presents the results of its application to four representative and promising compression methods. The methods examined are predictive pruned tree-structured vector quantization, fractal compression, the discrete cosine transform with equal weighting of block bit allocation, and the discrete cosine transform with human visual system weighting of block bit allocation. Vector quantization is theoretically capable of producing the best compressed images, but has proven to be difficult to effectively implement. It has the advantage that it can reconstruct images quickly through a simple lookup table. Disadvantages are that codebook training is required, the method is computationally intensive, and achieving the optimum performance would require prohibitively long vector dimensions. Fractal compression is a relatively new compression technique, but has produced satisfactory results while being computationally simple. It is fast at both image compression and image reconstruction. Discrete cosine transform techniques reproduce images well, but have traditionally been hampered by the need for intensive computing to compress and decompress images. A protocol was developed for side-by-side observer comparison of reconstructed images with originals. Three 1024 X 1024 CR (Computed Radiography) images and two 512 X 512 X-ray CT images were viewed at six bit rates (0.2, 0.4, 0.6, 0.9, 1.2, and 1.5 bpp for CR, and 1.0, 1.3, 1.6, 1.9, 2.2, 2.5 bpp for X-ray CT) by nine radiologists at the University of Washington Medical Center. The CR images were viewed on a Pixar II Megascan (2560 X 2048) monitor and the CT images on a Sony (1280 X 1024) monitor. The radiologists' subjective evaluations of image fidelity were compared to

  1. Effect of large compressive strain on low field electrical transport in La0.88Sr0.12MnO3 thin films

    International Nuclear Information System (INIS)

    Prasad, Ravikant; Gaur, Anurag; Siwach, P K; Varma, G D; Kaur, A; Singh, H K

    2007-01-01

    We have investigated the effect of large in-plane compressive strain on the electrical transport in La0.88Sr0.12MnO3 thin films. For achieving large compressive strain, films have been deposited on a single crystal LaAlO3 (LAO, a = 3.798 Å) substrate from a polycrystalline bulk target having average in-plane lattice parameter a_av = (a_b + b_b)/2 = 3.925 Å. The compressive strain was further relaxed by varying the film thickness in the range ∼6-75 nm. In the film having the least thickness (∼6 nm) a large increase (c = 3.929 Å) in the out-of-plane lattice parameter is observed, which gradually decreases towards the bulk value (c_bulk = 3.87 Å) for the ∼75 nm thick film. This shows that the film having the least thickness is under large compressive strain, which partially relaxes with increasing film thickness. The T_IM of the bulk target, ∼145 K, goes up to ∼235 K for the ∼6 nm thin film, and even for the partially strain-relaxed ∼75 nm thick film T_IM is as high as ∼200 K. This enhancement in T_IM is explained in terms of suppression of the Jahn-Teller distortion of the MnO6 octahedra by the large in-plane compressive strain. We observe a large enhancement in the low field magnetoresistance (MR) just below T_IM in the films having partial strain relaxation. Films of 6 and 20 nm have MR ∼14% at 3 kOe, which almost doubles to ∼27% in the 35 nm film. A similar enhancement is also obtained in the temperature coefficient of resistivity. The near doubling of the low field MR is explained in terms of delocalization of weakly localized carriers around T_IM by small magnetic fields.

  2. Thermal compression modulus of polarized neutron matter

    International Nuclear Information System (INIS)

    Abd-Alla, M.

    1990-05-01

    We applied the equation of state for pure polarized neutron matter at finite temperature, calculated previously, to compute the compression modulus. The compression modulus of pure neutron matter at zero temperature is very large and reflects the stiffness of the equation of state. It has little temperature dependence. Introducing the spin excess parameter into the equation of state calculations is important because it has a significant effect on the compression modulus. (author). 25 refs, 2 tabs

  3. Image splitting and remapping method for radiological image compression

    Science.gov (United States)

    Lo, Shih-Chung B.; Shen, Ellen L.; Mun, Seong K.

    1990-07-01

    A new decomposition method using image splitting and gray-level remapping has been proposed for image compression, particularly for images with high contrast resolution. The effects of this method are especially evident in our radiological image compression study. In our experiments, we tested the impact of this decomposition method on image compression by employing it with two coding techniques on a set of clinically used CT images and several laser film digitized chest radiographs. One of the compression techniques used was full-frame bit-allocation in the discrete cosine transform domain, which has been proven to be an effective technique for radiological image compression. The other compression technique used was vector quantization with pruned tree-structured encoding, which through recent research has also been found to produce a low mean-square-error and a high compression ratio. The parameters we used in this study were mean-square-error and the bit rate required for the compressed file. In addition to these parameters, the difference between the original and reconstructed images will be presented so that the specific artifacts generated by both techniques can be discerned by visual perception.

  4. Dual pathology proximal median nerve compression of the forearm.

    LENUS (Irish Health Repository)

    Murphy, Siun M

    2013-12-01

    We report an unusual case of synchronous pathology in the forearm: the coexistence of a large lipoma of the median nerve together with an osteochondroma of the proximal ulna, giving rise to a dual proximal median nerve compression. Proximal median nerve compression neuropathies in the forearm are uncommon compared to the prevalence of distal compression neuropathies (e.g. carpal tunnel syndrome). Both neural fibrolipomas (Refs. 1, 2) and osteochondromas of the proximal ulna (Ref. 3) are rare in isolation but well documented. Unlike a distal compression, a proximal compression of the median nerve will often have a definite cause. Neural fibrolipomas, also called fibrolipomatous hamartomas, are rare, slow-growing, benign tumours of peripheral nerves, most often occurring in the median nerve of younger patients. To our knowledge, this is the first report of such dual pathology in the same forearm, giving rise to a severe proximal compression of the median nerve. In this case, the nerve was being pushed anteriorly by the osteochondroma and compressed from within by the intraneural lipoma. This unusual case highlights the advantage of preoperative imaging as part of the workup of proximal median nerve compression.

  5. Compression of FASTQ and SAM format sequencing data.

    Directory of Open Access Journals (Sweden)

    James K Bonfield

    Full Text Available Storage and transmission of the data produced by modern DNA sequencing instruments has become a major concern, which prompted the Pistoia Alliance to pose the SequenceSqueeze contest for compression of FASTQ files. We present several compression entries from the competition, Fastqz and Samcomp/Fqzcomp, including the winning entry. These are compared against existing algorithms for both reference-based compression (CRAM, Goby) and non-reference-based compression (DSRC, BAM), as well as other recently published competition entries (Quip, SCALCE). The tools are shown to be the new Pareto frontier for FASTQ compression, offering state-of-the-art ratios at affordable CPU costs. All programs are freely available on SourceForge. Fastqz: https://sourceforge.net/projects/fastqz/, fqzcomp: https://sourceforge.net/projects/fqzcomp/, and samcomp: https://sourceforge.net/projects/samcomp/.

  6. Packet Header Compression for the Internet of Things

    Directory of Open Access Journals (Sweden)

    Pekka KOSKELA

    2016-01-01

    Full Text Available Due to the extensive growth of the Internet of Things (IoT), the number of wireless devices connected to the Internet is forecast to grow to 26 billion units installed by 2020. This will challenge both the energy efficiency of wireless battery-powered devices and the bandwidth of wireless networks. One solution to both challenges could be to utilize packet header compression. This paper reviews different packet compression methods, especially packet header compression, and studies the performance of Robust Header Compression (ROHC) in low-speed radio networks such as XBEE, and in high-speed radio networks such as LTE and WLAN. In all networks, the compression and decompression processing causes extra delay and power consumption, but in low-speed networks, energy can still be saved due to the shorter transmission time.
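
    The intuition behind header compression can be shown with a toy compressor that sends a full header once and afterwards only the fields that changed. Real ROHC defines contexts, profiles and robust encodings such as W-LSB, none of which are modelled here; the field names and packet stream below are invented for illustration.

        def compress_headers(headers):
            # Toy header compressor: full header once, then deltas of changing fields.
            context = None
            stream = []
            for h in headers:
                if context is None:
                    stream.append(('IR', dict(h)))                 # initialisation: full header
                else:
                    delta = {k: v for k, v in h.items() if context[k] != v}
                    stream.append(('CO', delta))                   # compressed: changed fields only
                context = dict(h)
            return stream

        packets = [
            {'src': '10.0.0.1', 'dst': '10.0.0.2', 'sport': 5004, 'dport': 5004, 'seq': n, 'ts': 160 * n}
            for n in range(4)
        ]
        for kind, fields in compress_headers(packets):
            print(kind, fields)
        # The IR packet carries all six fields; every CO packet carries only the two that changed,
        # which is why RTP/UDP/IP-style headers shrink so well in practice.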

  7. Plant for compacting compressible radioactive waste

    International Nuclear Information System (INIS)

    Baatz, H.; Rittscher, D.; Lueer, H.J.; Ambros, R.

    1983-01-01

    The waste is filled into auxiliary barrels made of sheet steel and compressed with the auxiliary barrels into steel jackets. These can be stacked in storage barrels. A hydraulic press is included in the plant, which has a horizontal compression chamber and a horizontal pressure piston, which works against a counter bearing slider. There is a filling and emptying device for the pressure chamber behind the counter bearing slider. The auxiliary barrels can be introduced into the compression chamber by the filling and emptying device. The pressure piston also pushes out the steel jackets formed, so that they are taken to the filling and emptying device. (orig./HP) [de

  8. Survived ileocecal blowout from compressed air.

    Science.gov (United States)

    Weber, Marco; Kolbus, Frank; Dressler, Jan; Lessig, Rüdiger

    2011-03-01

    Industrial accidents in which compressed air enters the gastro-intestinal tract are often fatal. The pressures usually exceed those used in medical applications such as colonoscopy and lead to extensive injuries of the intestines with high mortality. The case described in this report is of a 26-year-old man who was harmed by compressed air that entered through the anus. He survived because of a fast emergency operation. This case underlines the necessity of explicit instruction on the hazards of handling compressed-air devices to maintain safety at work. Further, our observations support the hypothesis that the mucosa is the most elastic layer of the intestinal wall.

  9. Lagrangian statistics in compressible isotropic homogeneous turbulence

    Science.gov (United States)

    Yang, Yantao; Wang, Jianchun; Shi, Yipeng; Chen, Shiyi

    2011-11-01

    In this work we conducted a Direct Numerical Simulation (DNS) of forced compressible isotropic homogeneous turbulence and investigated the flow statistics from the Lagrangian point of view, i.e., the statistics are computed following the trajectories of passive tracers. The numerical method combined the Eulerian field solver developed by Wang et al. (2010, J. Comp. Phys., 229, 5257-5279) with a Lagrangian module for tracking the tracers and recording the data. The Lagrangian probability density functions (p.d.f.'s) have then been calculated for both kinetic and thermodynamic quantities. In order to isolate the shearing part of the flow from the compressing part, we employed the Helmholtz decomposition to decompose the flow field (mainly the velocity field) into solenoidal and compressive parts. The solenoidal part was compared with the incompressible case, while the compressibility effect shows up in the compressive part. The Lagrangian structure functions and cross-correlations between various quantities will also be discussed. This work was supported in part by China's Turbulence Program under Grant No. 2009CB724101.
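
    For a periodic velocity field the Helmholtz decomposition used above is a simple projection in Fourier space: the compressive (curl-free) part is the component of the velocity along the wavevector, and the solenoidal part is the remainder. The 2-D example below is a sketch with an analytically known answer, not the 3-D DNS data of the study; the grid size and test field are arbitrary.

        import numpy as np

        def helmholtz_decompose(u, v):
            # Split a periodic 2-D velocity field into solenoidal and compressive parts.
            # In Fourier space: u_c_hat = k (k . u_hat) / |k|^2, solenoidal = remainder.
            n = u.shape[0]
            kx = np.fft.fftfreq(n) * n
            ky = np.fft.fftfreq(n) * n
            KX, KY = np.meshgrid(kx, ky, indexing='ij')
            k2 = KX**2 + KY**2
            k2[0, 0] = 1.0                                   # avoid division by zero at the mean mode

            u_hat, v_hat = np.fft.fft2(u), np.fft.fft2(v)
            div = (KX * u_hat + KY * v_hat) / k2             # (k . u_hat) / |k|^2
            uc_hat, vc_hat = KX * div, KY * div              # compressive (curl-free) part
            uc, vc = np.fft.ifft2(uc_hat).real, np.fft.ifft2(vc_hat).real
            return (u - uc, v - vc), (uc, vc)                # (solenoidal, compressive)

        # Build a field with known parts: a solenoidal shear plus a compression wave
        n = 64
        x = np.linspace(0, 2 * np.pi, n, endpoint=False)
        X, Y = np.meshgrid(x, x, indexing='ij')
        u = np.sin(Y) + np.sin(X)        # sin(Y): solenoidal shear; sin(X): compressive in x
        v = np.zeros_like(u)

        (us, vs), (uc, vc) = helmholtz_decompose(u, v)
        print(np.allclose(uc, np.sin(X), atol=1e-10), np.allclose(us, np.sin(Y), atol=1e-10))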

  10. Lossless Compression of Broadcast Video

    DEFF Research Database (Denmark)

    Martins, Bo; Eriksen, N.; Faber, E.

    1998-01-01

    We investigate several techniques for lossless and near-lossless compression of broadcast video.The emphasis is placed on the emerging international standard for compression of continous-tone still images, JPEG-LS, due to its excellent compression performance and moderatecomplexity. Except for one...... cannot be expected to code losslessly at a rate of 125 Mbit/s. We investigate the rate and quality effects of quantization using standard JPEG-LS quantization and two new techniques: visual quantization and trellis quantization. Visual quantization is not part of baseline JPEG-LS, but is applicable...... in the framework of JPEG-LS. Visual tests show that this quantization technique gives much better quality than standard JPEG-LS quantization. Trellis quantization is a process by which the original image is altered in such a way as to make lossless JPEG-LS encoding more effective. For JPEG-LS and visual...

  11. Compressibility of rotating black holes

    International Nuclear Information System (INIS)

    Dolan, Brian P.

    2011-01-01

    Interpreting the cosmological constant as a pressure, whose thermodynamically conjugate variable is a volume, modifies the first law of black hole thermodynamics. Properties of the resulting thermodynamic volume are investigated: the compressibility and the speed of sound of the black hole are derived in the case of nonpositive cosmological constant. The adiabatic compressibility vanishes for a nonrotating black hole and is maximal in the extremal case--comparable with, but still less than, that of a cold neutron star. A speed of sound v_s is associated with the adiabatic compressibility, which is equal to c for a nonrotating black hole and decreases as the angular momentum is increased. An extremal black hole has v_s^2 = 0.9 c^2 when the cosmological constant vanishes, and more generally v_s is bounded below by c/√2.
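
    For orientation, the quantities involved can be written as follows (a hedged reconstruction of the conventions, assuming c = 1, M playing the role of enthalpy, and the sound speed defined through the adiabatic density derivative; the paper's exact definitions may differ):

      \kappa_S = -\frac{1}{V}\left(\frac{\partial V}{\partial P}\right)_{S,J},
      \qquad
      \frac{1}{v_s^2} = \left(\frac{\partial \rho}{\partial P}\right)_{S,J} = 1 + \rho\,\kappa_S,
      \qquad \rho = \frac{M}{V},

    where the last equality uses (\partial M/\partial P)_{S,J} = V. A vanishing adiabatic compressibility then gives v_s = c, and a larger \kappa_S lowers v_s, consistent with the statements above.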

  12. Compressive behavior of fine sand.

    Energy Technology Data Exchange (ETDEWEB)

    Martin, Bradley E. (Air Force Research Laboratory, Eglin, FL); Kabir, Md. E. (Purdue University, West Lafayette, IN); Song, Bo; Chen, Wayne (Purdue University, West Lafayette, IN)

    2010-04-01

    The compressive mechanical response of fine sand is experimentally investigated. The strain rate, initial density, stress state, and moisture level are systematically varied. A Kolsky bar was modified to obtain the uniaxial and triaxial compressive response at high strain rates. A controlled loading pulse allows the specimen to acquire stress equilibrium and a constant strain rate. The results show that the compressive response of the fine sand is not sensitive to strain rate under the loading conditions in this study, but depends significantly on the moisture content, initial density and lateral confinement. Partially saturated sand is more compliant than dry sand. Similar trends were reported in the quasi-static regime for experiments conducted at comparable specimen conditions. The sand becomes stiffer as the initial density and/or confinement pressure increases. The sand particle size becomes smaller after hydrostatic pressure loading and smaller still after dynamic axial loading.

  13. THz-SAR Vibrating Target Imaging via the Bayesian Method

    Directory of Open Access Journals (Sweden)

    Bin Deng

    2017-01-01

    Full Text Available Target vibration bears important information for target recognition, and terahertz, due to significant micro-Doppler effects, has strong advantages for remotely sensing vibrations. In this paper, the imaging characteristics of vibrating targets with THz-SAR are first analyzed. An improved algorithm based on an excellent Bayesian approach, the expansion-compression variance-component (ExCoV) method, is proposed for reconstructing the scattering coefficients of vibrating targets; it provides more robust and efficient initialization and overcomes the deficiencies of sidelobes as well as artifacts arising from the traditional correlation method. A real vibration measurement experiment with idling cars was performed to validate the range model. Simulated SAR data of vibrating targets and a tank model in a real background at 220 GHz show good performance at low SNR. Rapidly evolving high-power terahertz devices will make THz-SAR viable at distances of several kilometers.
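
    The micro-Doppler effect the paper relies on can be illustrated with a point-scatterer toy model: a target vibrating sinusoidally with amplitude A_v and frequency f_v modulates the echo phase by (4π/λ)·A_v·sin(2π f_v t), and the modulation index grows as the wavelength shrinks, which is why terahertz carriers are so sensitive to vibration. A small simulation sketch (Python; all numerical values are illustrative assumptions, not parameters from the paper):

      import numpy as np

      c = 3e8                        # speed of light, m/s
      fc = 220e9                     # carrier frequency (THz-SAR band from the abstract)
      lam = c / fc                   # wavelength, about 1.4 mm
      A_v, f_v = 0.5e-3, 20.0        # vibration amplitude (m) and frequency (Hz), assumed
      fs, T = 2000.0, 1.0            # slow-time sampling rate (Hz) and dwell time (s)

      t = np.arange(0.0, T, 1.0 / fs)
      radial = A_v * np.sin(2 * np.pi * f_v * t)        # radial displacement of the scatterer
      echo = np.exp(1j * 4 * np.pi * radial / lam)      # phase-modulated slow-time signal

      spectrum = np.fft.fftshift(np.abs(np.fft.fft(echo)))
      doppler = np.fft.fftshift(np.fft.fftfreq(t.size, 1.0 / fs))
      # The spectrum shows sidebands spaced by f_v around zero Doppler, with strength set by
      # the modulation index 4*pi*A_v/lam (about 4.6 rad here), large because lam is small.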

  14. Rupture of esophagus by compressed air.

    Science.gov (United States)

    Wu, Jie; Tan, Yuyong; Huo, Jirong

    2016-11-01

    Currently, beverages containing compressed gas, such as cola and champagne, are widely consumed in daily life. Opening the bottle improperly, usually with the teeth, can lead to injury and even to rupture of the esophagus. This letter to the editor describes a case of esophageal rupture caused by compressed air.

  15. Compressibility Analysis of the Tongue During Speech

    National Research Council Canada - National Science Library

    Unay, Devrim

    2001-01-01

    .... In this paper, 3D compression and expansion analysis of the tongue will be presented. Patterns of expansion and compression have been compared for different syllables and various repetitions of each syllable...

  16. Compression garments and exercise: no influence of pressure applied.

    Science.gov (United States)

    Beliard, Samuel; Chauveau, Michel; Moscatiello, Timothée; Cros, François; Ecarnot, Fiona; Becker, François

    2015-03-01

    Compression garments on the lower limbs are increasingly popular among athletes who wish to improve performance, reduce exercise-induced discomfort, and reduce the risk of injury. However, the beneficial effects of compression garments have not been clearly established. We reviewed the literature for prospective, randomized, controlled studies using quantified lower limb compression in order to (1) describe the beneficial effects that have been identified with compression garments, and under which conditions; and (2) investigate whether there is a relation between the pressure applied and the reported effects. The pressures delivered were either measured in laboratory conditions on garments identical to those used in the studies or derived from publication data. Twenty-three original articles were selected for inclusion in this review. The effects of wearing compression garments during exercise are controversial, as most studies failed to demonstrate a beneficial effect on immediate performance, performance recovery, or delayed-onset muscle soreness. There was a trend towards a beneficial effect of compression garments worn during recovery: performance recovery was found to be improved in the five studies in which this was investigated, and delayed-onset muscle soreness was reportedly reduced in three of these five studies. There is no apparent relation between the effects of compression garments worn during or after exercise and the pressures applied, since beneficial effects were obtained with both low and high pressures. Wearing compression garments during recovery from exercise seems to be beneficial for performance recovery and delayed-onset muscle soreness, but the factors explaining this efficacy remain to be elucidated. Key points: We observed no relationship between the effects of compression and the pressures applied. The pressure applied at the level of the lower limb by compression garments destined for use by athletes varies widely between

  17. Modeling DPOAE input/output function compression: comparisons with hearing thresholds.

    Science.gov (United States)

    Bhagat, Shaum P

    2014-09-01

    Basilar membrane input/output (I/O) functions in mammalian animal models are characterized by linear and compressed segments when measured near the location corresponding to the characteristic frequency. A method of studying basilar membrane compression indirectly in humans involves measuring distortion-product otoacoustic emission (DPOAE) I/O functions. Previous research has linked compression estimates from behavioral growth-of-masking functions to hearing thresholds. The aim of this study was to compare compression estimates from DPOAE I/O functions and hearing thresholds at 1 and 2 kHz. A prospective correlational research design was used, and the relationship between DPOAE I/O function compression estimates and hearing thresholds was evaluated with Pearson product-moment correlations. Normal-hearing adults (n = 16) aged 22-42 yr were recruited. DPOAE I/O functions (L₂ = 45-70 dB SPL) and two-interval forced-choice hearing thresholds were measured in normal-hearing adults. A three-segment linear regression model applied to DPOAE I/O functions supplied estimates of compression thresholds (defined as breakpoints between the linear and compressed segments) and of the slopes of the compressed segments. Pearson product-moment correlations between DPOAE compression estimates and hearing thresholds were evaluated. A high correlation between DPOAE compression thresholds and hearing thresholds was observed at 2 kHz, but not at 1 kHz. Compression slopes also correlated highly with hearing thresholds only at 2 kHz. The derivation of cochlear compression estimates from DPOAE I/O functions provides a means to characterize basilar membrane mechanics in humans and elucidates the role of compression in tone detection in the 1-2 kHz frequency range. American Academy of Audiology.
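
    The three-segment fit described above can be reproduced, in spirit, by a grid search over two breakpoints of a continuous piecewise-linear model, keeping the pair that minimizes the residual error; the lower breakpoint then plays the role of the compression threshold and the middle-segment slope that of the compression slope. A minimal sketch (Python; the synthetic data and the breakpoint grid are illustrative assumptions, not the study's fitting procedure):

      import numpy as np
      from itertools import combinations

      def three_segment_fit(L2, dpoae):
          """Continuous three-segment linear fit; returns breakpoints and segment slopes."""
          best = None
          for b1, b2 in combinations(np.arange(48.0, 68.0, 1.0), 2):
              # Hinge terms keep the fitted curve continuous across the breakpoints.
              X = np.column_stack([np.ones_like(L2), L2,
                                   np.maximum(L2 - b1, 0.0), np.maximum(L2 - b2, 0.0)])
              coef, *_ = np.linalg.lstsq(X, dpoae, rcond=None)
              sse = float(np.sum((X @ coef - dpoae) ** 2))
              if best is None or sse < best[0]:
                  best = (sse, (b1, b2), coef)
          _, (b1, b2), (a, s1, d2, d3) = best
          return (b1, b2), (s1, s1 + d2, s1 + d2 + d3)     # slopes of the three segments

      # Illustrative I/O function: linear growth that compresses above about 55 dB SPL.
      L2 = np.arange(45.0, 71.0, 5.0)
      dpoae = np.where(L2 < 55, L2 - 40.0, 15.0 + 0.3 * (L2 - 55.0))
      (bp1, bp2), slopes = three_segment_fit(L2, dpoae)    # bp1 ~ compression threshold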

  18. On-board image compression for the RAE lunar mission

    Science.gov (United States)

    Miller, W. H.; Lynch, T. J.

    1976-01-01

    The requirements, design, implementation, and flight performance of an on-board image compression system for the lunar orbiting Radio Astronomy Explorer-2 (RAE-2) spacecraft are described. The image to be compressed is a panoramic camera view of the long radio astronomy antenna booms used for gravity-gradient stabilization of the spacecraft. A compression ratio of 32 to 1 is obtained by a combination of scan line skipping and adaptive run-length coding. The compressed imagery data are convolutionally encoded for error protection. This image compression system occupies about 1000 cu cm and consumes 0.4 W.
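
    Scan-line skipping and run-length coding are both simple enough to sketch directly. The toy encoder below (Python) keeps one scan line in four and run-length encodes a thresholded version of each kept line; the skip factor, threshold, and run format are assumptions for illustration, not the RAE-2 flight format.

      import numpy as np

      def compress_scanlines(image, keep_every=4, threshold=8):
          """Skip scan lines, binarize, and run-length encode each kept line."""
          kept = image[::keep_every]                       # scan-line skipping
          encoded = []
          for line in kept:
              bits = (line > threshold).astype(np.uint8)   # boom pixels vs. dark sky
              runs, current, count = [], int(bits[0]), 0
              for b in bits:
                  if b == current:
                      count += 1
                  else:
                      runs.append((current, count))
                      current, count = int(b), 1
              runs.append((current, count))
              encoded.append(runs)                         # (value, run length) pairs
          return encoded

      def decompress_scanlines(encoded):
          return np.array([[v for v, n in line for _ in range(n)] for line in encoded],
                          dtype=np.uint8)

      img = np.zeros((64, 64), dtype=np.uint8)
      img[:, 30:33] = 200                                  # a bright "boom" against dark sky
      runs = compress_scanlines(img)
      preview = decompress_scanlines(runs)                 # 16 retained lines of 64 pixels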

  19. Target recognition of ladar range images using even-order Zernike moments.

    Science.gov (United States)

    Liu, Zheng-Jun; Li, Qi; Xia, Zhi-Wei; Wang, Qi

    2012-11-01

    Ladar range images have attracted considerable attention in automatic target recognition fields. In this paper, Zernike moments (ZMs) are applied to classify the target in a range image taken from an arbitrary azimuth angle. However, ZMs suffer from high computational costs. To improve the performance of target recognition based on small samples, even-order ZMs with serial-parallel backpropagation neural networks (BPNNs) are applied to recognize the target in the range image. It is found that both the rotation invariance and the classification performance of the even-order ZMs are better than those of odd-order moments and of moments compressed by principal component analysis. The experimental results demonstrate that combining even-order ZMs with serial-parallel BPNNs can significantly improve the recognition rate for small samples.
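
    The rotation invariance exploited here comes from the magnitude of the Zernike moments: rotating the image only changes their phase. A compact sketch of computing |Z_{n,m}| for a square image mapped onto the unit disk is given below (Python); the discrete normalization and the test image are simplifying assumptions, and the BPNN classifier of the paper is omitted.

      import numpy as np
      from math import factorial

      def zernike_magnitude(img, n, m):
          """|Z_{n,m}| of a square image mapped to the unit disk (n - |m| must be even, >= 0)."""
          N = img.shape[0]
          y, x = np.mgrid[-1:1:complex(0, N), -1:1:complex(0, N)]
          r, theta = np.hypot(x, y), np.arctan2(y, x)
          mask = r <= 1.0

          R = np.zeros_like(r)                       # radial polynomial R_n^|m|(r)
          for s in range((n - abs(m)) // 2 + 1):
              c = ((-1) ** s * factorial(n - s)
                   / (factorial(s) * factorial((n + abs(m)) // 2 - s)
                      * factorial((n - abs(m)) // 2 - s)))
              R += c * r ** (n - 2 * s)

          basis = R * np.exp(-1j * m * theta)
          Z = (n + 1) / np.pi * np.sum(img[mask] * basis[mask]) * (2.0 / N) ** 2
          return abs(Z)

      img = np.zeros((64, 64))
      img[20:40, 28:36] = 1.0                        # a simple bar-shaped "target"
      f0 = zernike_magnitude(img, 4, 2)
      f90 = zernike_magnitude(np.rot90(img), 4, 2)   # rotating the image leaves |Z| unchanged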

  20. High-quality compressive ghost imaging

    Science.gov (United States)

    Huang, Heyan; Zhou, Cheng; Tian, Tian; Liu, Dongqi; Song, Lijun

    2018-04-01

    We propose a high-quality compressive ghost imaging method based on projected Landweber regularization and a guided filter, which effectively reduces undersampling noise and improves resolution. In our scheme, the original object is reconstructed by decomposing the compressive reconstruction process into regularization and denoising steps instead of solving a single minimization problem. The simulation and experimental results show that our method obtains high ghost imaging quality in terms of PSNR and visual observation.
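
    Projected Landweber regularization is the simple iteration x_{k+1} = P(x_k + tau * A^T (y - A x_k)), where P projects onto a constraint set and tau is a step size below 2/||A||^2. The sketch below applies it to a toy ghost-imaging-style measurement model y = A x (Python); the nonnegativity projection stands in for the paper's guided-filter denoising step, an assumption made here for brevity.

      import numpy as np

      rng = np.random.default_rng(0)
      n, m = 256, 120                          # image pixels and number of speckle patterns
      x_true = np.zeros(n)
      x_true[100:120] = 1.0                    # a simple nonnegative object

      A = rng.standard_normal((m, n))          # random illumination patterns, one per row
      y = A @ x_true                           # bucket-detector measurements

      tau = 1.0 / np.linalg.norm(A, 2) ** 2    # step size below 2 / ||A||^2
      x = np.zeros(n)
      for _ in range(500):
          x = x + tau * A.T @ (y - A @ x)      # Landweber gradient step
          x = np.maximum(x, 0.0)               # projection onto the nonnegative orthant
      # x converges to a nonnegative least-squares solution of y = A x; with a sparse,
      # nonnegative object it typically recovers x_true well even though m < n.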

  1. After microvascular decompression to treat trigeminal neuralgia, both immediate pain relief and recurrence rates are higher in patients with arterial compression than with venous compression.

    Science.gov (United States)

    Shi, Lei; Gu, Xiaoyan; Sun, Guan; Guo, Jun; Lin, Xin; Zhang, Shuguang; Qian, Chunfa

    2017-07-04

    We explored differences in postoperative pain relief achieved through decompression of the trigeminal nerve compressed by arteries versus veins. Clinical characteristics, intraoperative findings, and postoperative curative effects were analyzed in 72 patients with trigeminal neuralgia who were treated by microvascular decompression. The patients were divided into arterial and venous compression groups based on intraoperative findings. Surgical curative effects were classified as immediate relief, delayed relief, obvious reduction, or invalid result. Among the 40 patients in the arterial compression group, 32 had immediate relief of pain (80.0%), 5 had delayed relief (12.5%), and 3 had an obvious reduction (7.5%). In the venous compression group, 12 patients had immediate relief of pain (37.5%), 13 had delayed relief (40.6%), and 7 had an obvious reduction (21.9%). During the 2-year follow-up period, 6 patients in the arterial compression group experienced recurrence of trigeminal neuralgia, whereas there were no recurrences in the venous compression group. Simple arterial compression was followed by early relief of trigeminal neuralgia more often than simple venous compression. However, the recurrence rate was higher in the arterial compression group than in the venous compression group.

  2. The task of control digital image compression

    OpenAIRE

    TASHMANOV E.B.; МАМАTOV М.S.

    2014-01-01

    In this paper we consider the relationship between control tasks and lossy image compression. The main idea of the approach is to extract the structural lines of a simplified image and then further compress the selected data.

  3. Kilovoltage Imaging of Implanted Fiducials to Monitor Intrafraction Motion With Abdominal Compression During Stereotactic Body Radiation Therapy for Gastrointestinal Tumors

    Energy Technology Data Exchange (ETDEWEB)

    Yorke, Ellen, E-mail: yorke@mskcc.org [Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York (United States); Xiong, Ying [Department of Radiation Oncology, China-Japan Friendship Hospital, Beijing (China); Han, Qian [Department of Radiotherapy, Henan Provincial People's Hospital, Zhengzhou (China); Zhang, Pengpeng; Mageras, Gikas; Lovelock, Michael; Pham, Hai; Xiong, Jian-Ping [Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York (United States); Goodman, Karyn A. [Department of Radiation Oncology, Memorial Sloan Kettering Cancer Center, New York, New York (United States)

    2016-07-01

    Purpose: To assess intrafraction respiratory motion using a commercial kilovoltage imaging system for abdominal tumor patients with implanted fiducials and breathing constrained by pneumatic compression during stereotactic body radiation therapy (SBRT). Methods and Materials: A pneumatic compression belt limited respiratory motion in 19 patients with radiopaque fiducials in or near their tumor during SBRT for abdominal tumors. Kilovoltage images were acquired at 5- to 6-second intervals during treatment using a commercial system. Intrafractional fiducial displacements were measured using in-house software. The dosimetric effect of the observed displacements was calculated for 3 sessions for each patient. Results: Intrafraction displacement patterns varied between patients and between individual treatment sessions. Averaged over 19 patients, 73 sessions, 7.6% of craniocaudal displacements exceeded 0.5 cm, and 1.2% exceeded 0.75 cm. The calculated single-session dose to 95% of gross tumor volume differed from planned by an average of −1.2% (range, −11.1% to 4.8%) but only for 4 patients was the total 3-session calculated dose to 95% of gross tumor volume more than 3% different from planned. Conclusions: Our pneumatic compression limited intrafractional abdominal target motion, maintained target position established at setup, and was moderately effective in preserving coverage. Commercially available intrafractional imaging is useful for surveillance but can be made more effective and reliable.

  4. Kilovoltage Imaging of Implanted Fiducials to Monitor Intrafraction Motion With Abdominal Compression During Stereotactic Body Radiation Therapy for Gastrointestinal Tumors

    International Nuclear Information System (INIS)

    Yorke, Ellen; Xiong, Ying; Han, Qian; Zhang, Pengpeng; Mageras, Gikas; Lovelock, Michael; Pham, Hai; Xiong, Jian-Ping; Goodman, Karyn A.

    2016-01-01

    Purpose: To assess intrafraction respiratory motion using a commercial kilovoltage imaging system for abdominal tumor patients with implanted fiducials and breathing constrained by pneumatic compression during stereotactic body radiation therapy (SBRT). Methods and Materials: A pneumatic compression belt limited respiratory motion in 19 patients with radiopaque fiducials in or near their tumor during SBRT for abdominal tumors. Kilovoltage images were acquired at 5- to 6-second intervals during treatment using a commercial system. Intrafractional fiducial displacements were measured using in-house software. The dosimetric effect of the observed displacements was calculated for 3 sessions for each patient. Results: Intrafraction displacement patterns varied between patients and between individual treatment sessions. Averaged over 19 patients, 73 sessions, 7.6% of craniocaudal displacements exceeded 0.5 cm, and 1.2% exceeded 0.75 cm. The calculated single-session dose to 95% of gross tumor volume differed from planned by an average of −1.2% (range, −11.1% to 4.8%) but only for 4 patients was the total 3-session calculated dose to 95% of gross tumor volume more than 3% different from planned. Conclusions: Our pneumatic compression limited intrafractional abdominal target motion, maintained target position established at setup, and was moderately effective in preserving coverage. Commercially available intrafractional imaging is useful for surveillance but can be made more effective and reliable.

  5. A Posteriori Restoration of Block Transform-Compressed Data

    Science.gov (United States)

    Brown, R.; Boden, A. F.

    1995-01-01

    The Galileo spacecraft will use lossy data compression for the transmission of its science imagery over the low-bandwidth communication system. The technique chosen for image compression is a block transform technique based on the Integer Cosine Transform, a derivative of the JPEG image compression standard. Considered here are two known a posteriori enhancement techniques, which are adapted.

  6. Data compression of digital X-ray images from a clinical viewpoint

    International Nuclear Information System (INIS)

    Ando, Yutaka

    1992-01-01

    A PACS (picture archiving and communication system) requires large-capacity storage media and a fast data transfer network, and in operation these technology requirements become a significant burden. Image data compression is therefore needed to improve recording efficiency and transmission throughput. There are two kinds of data compression: reversible (lossless) and irreversible (lossy). With reversible methods, the compressed and re-expanded image is exactly equal to the original; the achievable compression ratio is roughly between 1/2 and 1/3. With irreversible compression, the re-expanded image is distorted, but much higher compression ratios can be achieved. In the medical field, the discrete cosine transform (DCT) method is popular because of its low distortion and fast performance, with compression ratios typically from 1/10 to 1/20. The compression ratio should be chosen according to the purpose and modality of the image, and selected carefully because the appropriate ratio differs when images are used for education, clinical diagnosis, or reference. (author)
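
    The lossy DCT method mentioned above works block by block: transform each block, coarsely quantize the coefficients (discarding most high-frequency content), and invert at display time. A minimal sketch of blockwise DCT compression follows (Python/SciPy); the 8x8 block size and uniform quantization step are common choices used here as assumptions, not the specification of any clinical codec.

      import numpy as np
      from scipy.fft import dctn, idctn

      def blockwise_dct_codec(img, block=8, q=32.0):
          """Compress and re-expand an image with per-block DCT and uniform quantization."""
          h, w = (d - d % block for d in img.shape)    # crop to a multiple of the block size
          img = img[:h, :w].astype(float)
          out = np.empty_like(img)
          nonzero = 0
          for i in range(0, h, block):
              for j in range(0, w, block):
                  coeffs = dctn(img[i:i + block, j:j + block], norm="ortho")
                  quant = np.round(coeffs / q)         # most high-frequency coefficients become 0
                  nonzero += int(np.count_nonzero(quant))
                  out[i:i + block, j:j + block] = idctn(quant * q, norm="ortho")
          return out, nonzero / img.size               # fraction of retained coefficients

      rng = np.random.default_rng(1)
      yy, xx = np.mgrid[0:64, 0:64]
      xray = 60.0 * np.exp(-((yy - 32) ** 2 + (xx - 32) ** 2) / 200.0) + rng.normal(128, 5, (64, 64))
      recon, kept = blockwise_dct_codec(xray)          # smaller 'kept' means higher compression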

  7. Chest compression rates and survival following out-of-hospital cardiac arrest.

    Science.gov (United States)

    Idris, Ahamed H; Guffey, Danielle; Pepe, Paul E; Brown, Siobhan P; Brooks, Steven C; Callaway, Clifton W; Christenson, Jim; Davis, Daniel P; Daya, Mohamud R; Gray, Randal; Kudenchuk, Peter J; Larsen, Jonathan; Lin, Steve; Menegazzi, James J; Sheehan, Kellie; Sopko, George; Stiell, Ian; Nichol, Graham; Aufderheide, Tom P

    2015-04-01

    Guidelines for cardiopulmonary resuscitation recommend a chest compression rate of at least 100 compressions/min. A recent clinical study reported optimal return of spontaneous circulation with rates between 100 and 120/min during cardiopulmonary resuscitation for out-of-hospital cardiac arrest. However, the relationship between compression rate and survival is still undetermined. Prospective, observational study. Data are from the Resuscitation Outcomes Consortium Prehospital Resuscitation IMpedance threshold device and Early versus Delayed analysis clinical trial. Adults with out-of-hospital cardiac arrest treated by emergency medical service providers. None. Data were abstracted from monitor-defibrillator recordings for the first five minutes of emergency medical service cardiopulmonary resuscitation. Multiple logistic regression assessed the odds ratio for survival across compression rate categories, adjusting for covariates including compression fraction and depth, first rhythm, and study site. Compression rate data were available for 10,371 patients; 6,399 also had chest compression fraction and depth data. Age (mean±SD) was 67±16 years. Chest compression rate was 111±19 per minute, compression fraction was 0.70±0.17, and compression depth was 42±12 mm. Circulation was restored in 34%; 9% survived to hospital discharge. After adjustment for covariates without chest compression depth and fraction (n=10,371), a global test found no significant relationship between compression rate and survival (p=0.19). However, after adjustment for covariates including chest compression depth and fraction (n=6,399), the global test found a significant relationship between compression rate and survival (p=0.02), with the reference group (100-119 compressions/min) having the greatest likelihood for survival. After adjustment for chest compression fraction and depth, compression rates between 100 and 120 per minute were associated with the greatest survival to hospital discharge.

  8. ADVANCED RECIPROCATING COMPRESSION TECHNOLOGY (ARCT)

    Energy Technology Data Exchange (ETDEWEB)

    Danny M. Deffenbaugh; Klaus Brun; Ralph E. Harris; J. Pete Harrell; Robert J. Mckee; J. Jeffrey Moore; Steven J. Svedeman; Anthony J. Smalley; Eugene L. Broerman; Robert A Hart; Marybeth G. Nored; Ryan S. Gernentz; Shane P. Siebenaler

    2005-12-01

    The U.S. natural gas pipeline industry is facing the twin challenges of increased flexibility and capacity expansion. To meet these challenges, the industry requires improved choices in gas compression to address new construction and enhancement of the currently installed infrastructure. The current fleet of installed reciprocating compression is primarily slow-speed integral machines, while most new reciprocating compression is and will be large, high-speed separable units. The major challenges with the fleet of slow-speed integral machines are limited flexibility and a large range in performance. In an attempt to increase flexibility, many operators are choosing to single-act cylinders, which reduces reliability and integrity. While the best performing units in the fleet exhibit thermal efficiencies between 90% and 92%, the low performers run as low as 50%, with the mean at about 80%. The major cause of this large disparity is installation losses in the pulsation control system; in the better performers, the losses are split roughly evenly between installation losses and valve losses. The major challenges for high-speed machines are cylinder nozzle pulsations, mechanical vibrations due to cylinder stretch, short valve life, and low thermal performance. To shift nozzle pulsation to higher orders, nozzles are shortened, and to dampen the amplitudes, orifices are added. The shortened nozzles result in mechanical coupling with the cylinder, thereby causing increased vibration due to the cylinder stretch mode. Valve life is even shorter than at slow speeds and can be on the order of a few months. The thermal efficiency is 10% to 15% lower than that of slow-speed equipment, with the best performance in the 75% to 80% range. The goal of this advanced reciprocating compression program is to develop the technology for both high-speed and low-speed compression that will expand unit flexibility, increase thermal efficiency, and increase reliability and integrity.

  9. Computer calculations of compressibility of natural gas

    Energy Technology Data Exchange (ETDEWEB)

    Abou-Kassem, J.H.; Mattar, L.; Dranchuk, P.M

    An alternative method for the calculation of pseudo reduced compressibility of natural gas is presented. The method is incorporated into the routines by adding a single FORTRAN statement before the RETURN statement. The method is suitable for computer and hand-held calculator applications. It produces the same reduced compressibility as other available methods but is computationally superior. Tabular definitions of coefficients and comparisons of predicted pseudo reduced compressibility using different methods are presented, along with appended FORTRAN subroutines. 7 refs., 2 tabs.
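
    For context, the quantity being computed is the pseudo-reduced compressibility, which follows from the definition of isothermal gas compressibility and the real-gas law; this is the standard textbook relation, stated here as background rather than quoted from the paper:

      c_{pr} \;=\; c_g\,p_{pc} \;=\; \frac{1}{p_{pr}} \;-\; \frac{1}{z}\left(\frac{\partial z}{\partial p_{pr}}\right)_{T_{pr}},

    where c_g is the isothermal gas compressibility, p_{pc} the pseudo-critical pressure, and p_{pr} = p/p_{pc}, T_{pr} = T/T_{pc} the pseudo-reduced pressure and temperature. A z-factor correlation (such as Dranchuk-Abou-Kassem) supplies z and its derivative, which is why the calculation can be appended as a single statement before the subroutine's RETURN.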

  10. Subband Coding Methods for Seismic Data Compression

    Science.gov (United States)

    Kiely, A.; Pollara, F.

    1995-01-01

    This paper presents a study of seismic data compression techniques and a compression algorithm based on subband coding. The compression technique described could be used as a progressive transmission system, where successive refinements of the data can be requested by the user. This allows seismologists to first examine a coarse version of waveforms with minimal usage of the channel and then decide where refinements are required. Rate-distortion performance results are presented and comparisons are made with two block transform methods.
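
    Subband coding of a seismic trace splits it into a coarse approximation plus detail bands, which is what makes the progressive transmission described above natural: send the coarse band first, then refinements on request. A minimal one-level Haar analysis/synthesis sketch is shown below (Python); the Haar filter pair is the simplest stand-in for the filter banks actually studied in the paper.

      import numpy as np

      def haar_analysis(x):
          """One level of Haar subband decomposition: coarse (low-pass) + detail (high-pass)."""
          x = np.asarray(x, dtype=float)
          even, odd = x[0::2], x[1::2]
          return (even + odd) / np.sqrt(2), (even - odd) / np.sqrt(2)

      def haar_synthesis(coarse, detail):
          even = (coarse + detail) / np.sqrt(2)
          odd = (coarse - detail) / np.sqrt(2)
          x = np.empty(even.size + odd.size)
          x[0::2], x[1::2] = even, odd
          return x

      t = np.linspace(0.0, 1.0, 1024)
      trace = np.sin(2 * np.pi * 5 * t) + 0.2 * np.sin(2 * np.pi * 90 * t)
      coarse, detail = haar_analysis(trace)
      preview = haar_synthesis(coarse, np.zeros_like(detail))   # first, low-rate pass
      exact = haar_synthesis(coarse, detail)                    # after the refinement arrives
      assert np.allclose(exact, trace)                          # perfect reconstruction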

  11. Evolution Of Nonlinear Waves in Compressing Plasma

    International Nuclear Information System (INIS)

    Schmit, P.F.; Dodin, I.Y.; Fisch, N.J.

    2011-01-01

    Through particle-in-cell simulations, the evolution of nonlinear plasma waves is examined in one-dimensional collisionless plasma undergoing mechanical compression. Unlike linear waves, whose wavelength decreases proportionally to the system length L(t), nonlinear waves, such as solitary electron holes, conserve their characteristic size Δ during slow compression. This leads to a substantially stronger adiabatic amplification as well as rapid collisionless damping when L approaches Δ. On the other hand, cessation of compression halts the wave evolution, yielding a stable mode.

  12. Evolution Of Nonlinear Waves in Compressing Plasma

    Energy Technology Data Exchange (ETDEWEB)

    P.F. Schmit, I.Y. Dodin, and N.J. Fisch

    2011-05-27

    Through particle-in-cell simulations, the evolution of nonlinear plasma waves is examined in one-dimensional collisionless plasma undergoing mechanical compression. Unlike linear waves, whose wavelength decreases proportionally to the system length L(t), nonlinear waves, such as solitary electron holes, conserve their characteristic size Δ during slow compression. This leads to a substantially stronger adiabatic amplification as well as rapid collisionless damping when L approaches Δ. On the other hand, cessation of compression halts the wave evolution, yielding a stable mode.

  13. Effect of Compression Garments on Physiological Responses After Uphill Running.

    Science.gov (United States)

    Struhár, Ivan; Kumstát, Michal; Králová, Dagmar Moc

    2018-03-01

    Limited practical recommendations related to wearing compression garments for athletes can be drawn from the literature at the present time. We aimed to identify the effects of compression garments on physiological and perceptual measures of performance and recovery after uphill running, using different pressures and distributions of applied compression. In a randomized, double-blind study, 10 trained male runners undertook three 8 km treadmill runs at a 6% elevation, at an intensity of 75% VO2max, while wearing low grade compression garments, medium grade compression garments, or high reverse grade compression garments. In all trials, the compression garments were worn for 4 hours post run. Creatine kinase, muscle soreness, ankle strength of the plantar/dorsal flexors, and mean performance time were then measured. The best mean performance time was observed with the medium grade compression garments, compared with the high reverse grade compression garments. A positive trend toward increasing peak torque of plantar flexion (60°·s-1, 120°·s-1) was found with the medium grade compression garments between 24 and 48 hours post run. The largest shift in pain tolerance in the gastrocnemius muscle was observed with the medium grade compression garments 24 hours post run, the shift being +11.37% for the lateral head and 6.63% for the medial head. In conclusion, a beneficial trend in the promotion of running performance and decreasing muscle soreness within 24 hours post exercise was apparent with medium grade compression garments.

  14. Effect of Compression Garments on Physiological Responses After Uphill Running

    Directory of Open Access Journals (Sweden)

    Struhár Ivan

    2018-03-01

    Full Text Available Limited practical recommendations related to wearing compression garments for athletes can be drawn from the literature at the present time. We aimed to identify the effects of compression garments on physiological and perceptual measures of performance and recovery after uphill running, using different pressures and distributions of applied compression. In a randomized, double-blind study, 10 trained male runners undertook three 8 km treadmill runs at a 6% elevation, at an intensity of 75% VO2max, while wearing low grade compression garments, medium grade compression garments, or high reverse grade compression garments. In all trials, the compression garments were worn for 4 hours post run. Creatine kinase, muscle soreness, ankle strength of the plantar/dorsal flexors, and mean performance time were then measured. The best mean performance time was observed with the medium grade compression garments, compared with the high reverse grade compression garments. A positive trend toward increasing peak torque of plantar flexion (60°·s-1, 120°·s-1) was found with the medium grade compression garments between 24 and 48 hours post run. The largest shift in pain tolerance in the gastrocnemius muscle was observed with the medium grade compression garments 24 hours post run, the shift being +11.37% for the lateral head and 6.63% for the medial head. In conclusion, a beneficial trend in the promotion of running performance and decreasing muscle soreness within 24 hours post exercise was apparent with medium grade compression garments.

  15. Calculation of dose distribution in compressible breast tissues using finite element modeling, Monte Carlo simulation and thermoluminescence dosimeters

    Science.gov (United States)

    Mohammadyari, Parvin; Faghihi, Reza; Mosleh-Shirazi, Mohammad Amin; Lotfi, Mehrzad; Rahim Hematiyan, Mohammad; Koontz, Craig; Meigooni, Ali S.

    2015-12-01

    Compression is a technique to immobilize the target or improve the dose distribution within the treatment volume during different irradiation techniques such as AccuBoost® brachytherapy. However, there is no systematic method for determining the dose distribution in uncompressed tissue after irradiation under compression. In this study, the mechanical behavior of breast tissue between the compressed and uncompressed states was investigated. With that, a novel method was developed to determine the dose distribution in uncompressed tissue after irradiation of compressed breast tissue. Dosimetry was performed using two different methods, namely Monte Carlo simulations using the MCNP5 code and measurements using thermoluminescent dosimeters (TLD). The displacement of the breast elements was simulated using a finite element model and calculated using ABAQUS software. From these results, the 3D dose distribution in uncompressed tissue was determined. The geometry of the model was constructed from magnetic resonance images of six different women volunteers. The mechanical properties were modeled using the Mooney-Rivlin hyperelastic material model. Experimental dosimetry was performed by placing the TLD chips into a polyvinyl alcohol breast-equivalent phantom. The nodal displacements due to the gravitational force and the 60 Newton compression force (with 43% contraction in the loading direction and 37% expansion in the orthogonal direction) were determined. Finally, a comparison of the experimental data and the simulated data showed agreement within 11.5% ± 5.9%.
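
    The Mooney-Rivlin model referred to above describes the tissue through a two-parameter hyperelastic strain-energy density; the generic form used by ABAQUS-style solvers is quoted below as background (the material constants fitted in the study are not reproduced here):

      W \;=\; C_{10}\,(\bar{I}_1 - 3) \;+\; C_{01}\,(\bar{I}_2 - 3) \;+\; \frac{1}{D_1}\,(J - 1)^2 ,

    where \bar{I}_1 and \bar{I}_2 are the invariants of the isochoric left Cauchy-Green deformation tensor, J is the volume ratio, and C_{10}, C_{01}, D_1 are material parameters; stresses follow by differentiating W with respect to the deformation.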

  16. Calculation of dose distribution in compressible breast tissues using finite element modeling, Monte Carlo simulation and thermoluminescence dosimeters

    International Nuclear Information System (INIS)

    Mohammadyari, Parvin; Faghihi, Reza; Mosleh-Shirazi, Mohammad Amin; Lotfi, Mehrzad; Hematiyan, Mohammad Rahim; Koontz, Craig; Meigooni, Ali S

    2015-01-01

    Compression is a technique to immobilize the target or improve the dose distribution within the treatment volume during different irradiation techniques such as AccuBoost® brachytherapy. However, there is no systematic method for determining the dose distribution in uncompressed tissue after irradiation under compression. In this study, the mechanical behavior of breast tissue between the compressed and uncompressed states was investigated. With that, a novel method was developed to determine the dose distribution in uncompressed tissue after irradiation of compressed breast tissue. Dosimetry was performed using two different methods, namely Monte Carlo simulations using the MCNP5 code and measurements using thermoluminescent dosimeters (TLD). The displacement of the breast elements was simulated using a finite element model and calculated using ABAQUS software. From these results, the 3D dose distribution in uncompressed tissue was determined. The geometry of the model was constructed from magnetic resonance images of six different women volunteers. The mechanical properties were modeled using the Mooney–Rivlin hyperelastic material model. Experimental dosimetry was performed by placing the TLD chips into a polyvinyl alcohol breast-equivalent phantom. The nodal displacements due to the gravitational force and the 60 Newton compression force (with 43% contraction in the loading direction and 37% expansion in the orthogonal direction) were determined. Finally, a comparison of the experimental data and the simulated data showed agreement within 11.5% ± 5.9%. (paper)

  17. Modelling studies for influence factors of gas bubble in compressed air energy storage in aquifers

    International Nuclear Information System (INIS)

    Guo, Chaobin; Zhang, Keni; Li, Cai; Wang, Xiaoyu

    2016-01-01

    CAES (compressed air energy storage) is credited with the potential for large-scale energy storage. It is generally more convenient to use deep aquifers than underground caverns for energy storage because aquifers are far more widespread. During the first stage of a typical CAESA (compressed air energy storage in aquifers) operation, a large amount of compressed air is injected into the target aquifer to develop an initial space (a gas bubble) for energy storage. In this study, numerical simulations were conducted to investigate the influence of the aquifer's permeability, the geological structure, and the operation parameters on the formation of the gas bubble and on the sustainability of the subsequent cycling operation. The SCT (system cycle times) was designed as a parameter to evaluate reservoir performance and the effect of operation parameters. Pressure and gas saturation results for the base model confirm the feasibility of compressed air energy storage in aquifers. The results for different permeability cases show that, for a given scale of CAESA system, there is an optimum permeability range for a candidate aquifer: an aquifer within this range will not only satisfy the injectivity requirement but also have the best energy efficiency. The structural impact analysis indicates that the anticline structure performs best at holding the bubble under the same daily cycling schedule with the same initial injected air mass. In addition, the results indicate that the SCT grows logarithmically as the injected air mass increases. During the formation of the gas bubble, compressed air should be injected into the aquifer at a moderate rate, and the injection can be carried out in several stages with different injection rates to avoid a high onset pressure. - Highlights: • The impact of permeability, geological structure, and operation parameters was investigated. • For a given air production rate, an optimum permeability exists for performance.

  18. Compressibility and thermal expansion of cubic silicon nitride

    DEFF Research Database (Denmark)

    Jiang, Jianzhong; Lindelov, H.; Gerward, Leif

    2002-01-01

    The compressibility and thermal expansion of the cubic silicon nitride (c-Si3N4) phase have been investigated by performing in situ x-ray powder-diffraction measurements using synchrotron radiation, complemented with computer simulations by means of first-principles calculations. The bulk...... compressibility of the c-Si3N4 phase originates from the average of both Si-N tetrahedral and octahedral compressibilities where the octahedral polyhedra are less compressible than the tetrahedral ones. The origin of the unit cell expansion is revealed to be due to the increase of the octahedral Si-N and N-N bond...

  19. Real-time lossless compression of depth streams

    KAUST Repository

    Schneider, Jens

    2017-08-17

    Various examples are provided for lossless compression of data streams. In one example, a Z-lossless (ZLS) compression method includes generating compacted depth information by condensing information of a depth image and a compressed binary representation of the depth image using histogram compaction and decorrelating the compacted depth information to produce bitplane slicing of residuals by spatial prediction. In another example, an apparatus includes imaging circuitry that can capture one or more depth images and processing circuitry that can generate compacted depth information by condensing information of a captured depth image and a compressed binary representation of the captured depth image using histogram compaction; decorrelate the compacted depth information to produce bitplane slicing of residuals by spatial prediction; and generate an output stream based upon the bitplane slicing.

  20. Real-time lossless compression of depth streams

    KAUST Repository

    Schneider, Jens

    2017-01-01

    Various examples are provided for lossless compression of data streams. In one example, a Z-lossless (ZLS) compression method includes generating compacted depth information by condensing information of a depth image and a compressed binary representation of the depth image using histogram compaction and decorrelating the compacted depth information to produce bitplane slicing of residuals by spatial prediction. In another example, an apparatus includes imaging circuitry that can capture one or more depth images and processing circuitry that can generate compacted depth information by condensing information of a captured depth image and a compressed binary representation of the captured depth image using histogram compaction; decorrelate the compacted depth information to produce bitplane slicing of residuals by spatial prediction; and generate an output stream based upon the bitplane slicing.
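
    The core steps named in the abstract are spatial prediction of the depth values followed by bitplane slicing of the residuals, so that each plane can be packed or entropy-coded separately. A minimal illustration of those two steps follows (Python); the left-neighbor predictor, the signed-to-unsigned residual mapping, and the plane packing are assumptions for illustration, not the ZLS design.

      import numpy as np

      def predict_and_slice(depth, nbits=16):
          """Left-neighbor spatial prediction followed by bitplane slicing of residuals."""
          depth = depth.astype(np.int64)
          pred = np.zeros_like(depth)
          pred[:, 1:] = depth[:, :-1]                      # predict each pixel from its left neighbor
          residual = depth - pred
          mapped = np.where(residual >= 0, 2 * residual,   # fold signed residuals to unsigned codes
                            -2 * residual - 1)
          return [((mapped >> b) & 1).astype(np.uint8) for b in range(nbits)]

      def unslice_and_reconstruct(planes):
          mapped = np.zeros(planes[0].shape, dtype=np.int64)
          for b, p in enumerate(planes):
              mapped |= p.astype(np.int64) << b
          residual = np.where(mapped % 2 == 0, mapped // 2, -((mapped + 1) // 2))
          depth = np.zeros_like(residual)
          for j in range(depth.shape[1]):                  # undo the prediction column by column
              depth[:, j] = residual[:, j] + (depth[:, j - 1] if j else 0)
          return depth

      depth = np.cumsum(np.random.default_rng(2).integers(0, 3, size=(4, 8)), axis=1)
      planes = predict_and_slice(depth)                    # planes[0] is the least significant plane
      assert np.array_equal(unslice_and_reconstruct(planes), depth)   # lossless round trip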