WorldWideScience

Sample records for achieve high compression

  1. Thermofluidic compression effects to achieve combustion in a low-compression scramjet engine

    Science.gov (United States)

    Moura, A. F.; Wheatley, V.; Jahn, I.

    2017-12-01

The compression provided by a scramjet inlet is an important parameter in its design. It must be low enough to limit thermal and structural loads and stagnation pressure losses, but high enough to provide the conditions favourable for combustion. Inlets are typically designed to achieve sufficient compression without accounting for the fluidic, and subsequently thermal, compression provided by the fuel injection, which can enable robust combustion in a low-compression engine. This is investigated using Reynolds-averaged Navier-Stokes numerical simulations of a simplified scramjet engine designed to have insufficient compression to auto-ignite fuel in the absence of thermofluidic compression. The engine was designed with a wide rectangular combustor and a single centrally located injector, in order to reduce three-dimensional effects of the walls on the fuel plume. By varying the injected mass flow rate of hydrogen fuel (equivalence ratios of 0.22, 0.17, and 0.13), it is demonstrated that higher equivalence ratios lead to earlier ignition and more rapid combustion, even though mean conditions in the combustor change by no more than 5% for pressure and 3% for temperature with higher equivalence ratio. By supplementing the lower equivalence ratio with helium to achieve a higher mass flow rate, it is confirmed that these benefits are primarily due to the local compression provided by the extra injected mass. Investigation of the conditions around the fuel plume indicated two connected mechanisms. The higher mass flow rate for higher equivalence ratios generated a stronger injector bow shock that compresses the free-stream gas, increasing OH radical production and promoting ignition. This was observed both in the higher equivalence ratio case and in the case with helium. This earlier ignition led to increased temperature and pressure downstream and, consequently, stronger combustion. The heat release from combustion provided thermal compression in the combustor, further...

  2. Layered compression for high-precision depth data.

    Science.gov (United States)

    Miao, Dan; Fu, Jingjing; Lu, Yan; Li, Shipeng; Chen, Chang Wen

    2015-12-01

With the development of depth data acquisition technologies, access to high-precision depth data with more than 8 bits per sample has become much easier, and determining how to efficiently represent and compress high-precision depth is essential for practical depth storage and transmission systems. In this paper, we propose a layered high-precision depth compression framework based on an 8-bit image/video encoder to achieve efficient compression with low complexity. Within this framework, considering the characteristics of high-precision depth, a depth map is partitioned into two layers: 1) the most significant bits (MSBs) layer and 2) the least significant bits (LSBs) layer. The MSBs layer provides the rough depth value distribution, while the LSBs layer records the details of the depth value variation. For the MSBs layer, an error-controllable pixel-domain encoding scheme is proposed to exploit the data correlation of the general depth information with sharp edges and to guarantee that the LSBs layer remains in an 8-bit format after absorbing the quantization error from the MSBs layer. For the LSBs layer, a standard 8-bit image/video codec is leveraged to perform the compression. The experimental results demonstrate that the proposed coding scheme can achieve real-time depth compression with satisfactory reconstruction quality. Moreover, the compressed depth data generated by this scheme achieve better performance in view synthesis and gesture recognition applications than conventional coding schemes because of the error control algorithm.
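The MSB/LSB partition described above can be sketched in a few lines. This is only an illustrative bit-split of a 16-bit depth map into two 8-bit layers, not the paper's error-controllable codec:

```python
import numpy as np

# Hypothetical 16-bit depth map (values up to 65535).
depth = np.array([[1000, 1023], [40000, 65535]], dtype=np.uint16)

# Partition into two 8-bit layers, as in the layered framework:
# MSBs carry the coarse depth distribution, LSBs the fine variation.
msb = (depth >> 8).astype(np.uint8)    # most significant bits layer
lsb = (depth & 0xFF).astype(np.uint8)  # least significant bits layer

# Each layer now fits a standard 8-bit image/video codec.
# Lossless reconstruction recombines the two layers:
restored = (msb.astype(np.uint16) << 8) | lsb
assert np.array_equal(restored, depth)
```

Each 8-bit layer can then be handed to any conventional image/video encoder, which is the point of the framework.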

  3. High-speed and high-ratio referential genome compression.

    Science.gov (United States)

    Liu, Yuansheng; Peng, Hui; Wong, Limsoon; Li, Jinyan

    2017-11-01

The rapidly increasing number of genomes generated by high-throughput sequencing platforms and assembly algorithms is accompanied by problems in data storage, compression and communication. Traditional compression algorithms are unable to meet the demand for high compression ratios due to the intrinsically challenging features of DNA sequences, such as a small alphabet size, frequent repeats and palindromes. Reference-based lossless compression, in which only the differences between two similar genomes are stored, is a promising approach to achieving high compression ratios. We present a high-performance referential genome compression algorithm named HiRGC. It is based on a 2-bit encoding scheme and an advanced greedy-matching search on a hash table. We compare the performance of HiRGC with four state-of-the-art compression methods on a benchmark dataset of eight human genomes. HiRGC compresses the roughly 21 gigabytes of each of the seven target genomes into 96-260 megabytes, achieving compression ratios of 82 to 217 times. This performance is at least 1.9 times better than that of the best competing algorithm in its best case. Our compression speed is also at least 2.9 times faster. HiRGC is stable and robust with respect to the choice of reference genome, whereas the competing methods' performance varies widely across different reference genomes. Further experiments on 100 human genomes from the 1000 Genomes Project and on genomes of several other species again demonstrate that HiRGC's performance is consistently excellent. The C++ and Java source code of our algorithm is freely available for academic and non-commercial use and can be downloaded from https://github.com/yuansliu/HiRGC. jinyan.li@uts.edu.au. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
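The 2-bit base encoding that HiRGC builds on can be illustrated as follows. This is a generic sketch of 2-bit packing (the mapping and function names are ours, and the hash-table greedy-matching stage is omitted), not HiRGC's actual implementation:

```python
# Each DNA base fits in 2 bits instead of an 8-bit character.
ENC = {'A': 0b00, 'C': 0b01, 'G': 0b10, 'T': 0b11}
DEC = {v: k for k, v in ENC.items()}

def pack(seq):
    """Pack a DNA string into an integer, 2 bits per base."""
    bits = 0
    for base in seq:
        bits = (bits << 2) | ENC[base]
    return bits, len(seq)

def unpack(bits, n):
    """Recover the DNA string from its 2-bit packed form."""
    return ''.join(DEC[(bits >> (2 * (n - 1 - i))) & 0b11]
                   for i in range(n))

packed, n = pack("GATTACA")
assert unpack(packed, n) == "GATTACA"
```

The 4x reduction from 2-bit packing is only the starting point; the bulk of HiRGC's ratio comes from the reference-matching stage.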

  4. FRESCO: Referential compression of highly similar sequences.

    Science.gov (United States)

    Wandelt, Sebastian; Leser, Ulf

    2013-01-01

In many applications, sets of similar texts or sequences are of high importance. Prominent examples are revision histories of documents and genomic sequences. Modern high-throughput sequencing technologies are able to generate DNA sequences at an ever-increasing rate. In parallel with the decreasing experimental time and cost necessary to produce DNA sequences, the computational requirements for analysis and storage of the sequences are steeply increasing. Compression is a key technology for dealing with this challenge. Recently, referential compression schemes, which store only the differences between a to-be-compressed input and a known reference sequence, have gained considerable interest in this field. In this paper, we propose a general open-source framework for compressing large amounts of biological sequence data, called Framework for REferential Sequence COmpression (FRESCO). Our basic compression algorithm is shown to be one to two orders of magnitude faster than comparable related work, while achieving similar compression ratios. We also propose several techniques to further increase compression ratios while retaining the advantage in speed: 1) selecting a good reference sequence and 2) rewriting a reference sequence to allow for better compression. In addition, we propose a new way of further boosting compression ratios by applying referential compression to already referentially compressed files (second-order compression). This technique allows for compression ratios well beyond the state of the art, for instance, 4,000:1 and higher for human genomes. We evaluate our algorithms on a large dataset from three different species (more than 1,000 genomes, more than 3 TB) and on a collection of versions of Wikipedia pages. Our results show that real-time compression of highly similar sequences at high compression ratios is possible on modern hardware.
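A referential scheme of the kind FRESCO generalizes can be sketched as a greedy encoder that replaces stretches of the input with (position, length) references into the reference sequence and falls back to literals elsewhere. This toy version (the naive quadratic search and the function names are ours) only illustrates the principle, not FRESCO's algorithm:

```python
def ref_compress(target, reference, k=4):
    """Greedy referential compression: emit ('M', ref_pos, length)
    for matches of at least k characters, ('L', char) otherwise."""
    ops, i = [], 0
    while i < len(target):
        best_pos, best_len = -1, 0
        # Naive search for the longest reference match starting at i.
        for j in range(len(reference)):
            l = 0
            while (i + l < len(target) and j + l < len(reference)
                   and target[i + l] == reference[j + l]):
                l += 1
            if l > best_len:
                best_pos, best_len = j, l
        if best_len >= k:
            ops.append(('M', best_pos, best_len))
            i += best_len
        else:
            ops.append(('L', target[i]))
            i += 1
    return ops

def ref_decompress(ops, reference):
    out = []
    for op in ops:
        if op[0] == 'M':
            _, pos, length = op
            out.append(reference[pos:pos + length])
        else:
            out.append(op[1])
    return ''.join(out)

ref = "ACGTACGTTTGACCA"
tgt = "ACGTACGATTGACCA"
assert ref_decompress(ref_compress(tgt, ref), ref) == tgt
```

For two near-identical genomes almost the whole target collapses into a handful of match operations, which is why ratios in the thousands become possible; second-order compression then compresses the op streams themselves.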

  5. High intensity pulse self-compression in short hollow core capillaries

    OpenAIRE

    Butcher, Thomas J.; Anderson, Patrick N.; Horak, Peter; Frey, Jeremy G.; Brocklesby, William S.

    2011-01-01

    The drive for shorter pulses for use in techniques such as high harmonic generation and laser wakefield acceleration requires continual improvement in post-laser pulse compression techniques. The two most commonly used methods of pulse compression for high intensity pulses are hollow capillary compression via self-phase modulation (SPM) [1] and the more recently developed filamentation [2]. Both of these methods can require propagation distances of 1-3 m to achieve spectral broadening and com...

  6. Monte Carlo analysis of highly compressed fissile assemblies. Pt. 1

    International Nuclear Information System (INIS)

    Raspet, R.; Baird, G.E.

    1978-01-01

Laser-induced fission of highly compressed bare fissionable spheres is analyzed using Monte Carlo techniques. The critical mass and critical radius as functions of density are calculated, and the fission energy yield is calculated and compared with the input laser energy necessary to achieve compression to criticality. (orig.)

  7. Neutralized drift compression experiments with a high-intensity ion beam

    International Nuclear Information System (INIS)

    Roy, P.K.; Yu, S.S.; Waldron, W.L.; Anders, A.; Baca, D.; Barnard, J.J.; Bieniosek, F.M.; Coleman, J.; Davidson, R.C.; Efthimion, P.C.; Eylon, S.; Friedman, A.; Gilson, E.P.; Greenway, W.G.; Henestroza, E.; Kaganovich, I.; Leitner, M.; Logan, B.G.; Sefkow, A.B.; Seidl, P.A.; Sharp, W.M.; Thoma, C.; Welch, D.R.

    2007-01-01

To create high-energy-density matter and fusion conditions, high-power drivers, such as lasers, ion beams, and X-ray drivers, may be employed to heat targets with pulses that are short compared to the hydrodynamic motion. Both high-energy-density physics and ion-driven inertial fusion require the simultaneous transverse and longitudinal compression of an ion beam to achieve high intensities. We have previously studied the effects of plasma neutralization on transverse beam compression. The scaled experiment, the Neutralized Transport Experiment (NTX), demonstrated that an initially un-neutralized beam can be compressed transversely to ∼1 mm radius when charge neutralization by background plasma electrons is provided. Here, we report longitudinal compression of a velocity-tailored, intense, neutralized 25 mA K+ beam at 300 keV. The compression takes place in a 1-2 m drift section filled with plasma to provide space-charge neutralization. An induction cell produces a head-to-tail velocity ramp that longitudinally compresses the neutralized beam, enhancing the beam peak current by a factor of 50 and producing a pulse duration of about 3 ns. The physics of longitudinal compression, the experimental procedure, and the results of the compression experiments are presented.
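The effect of a head-to-tail velocity ramp can be illustrated with a toy one-dimensional ballistic model; the numbers below are purely illustrative and are not the NDCX parameters:

```python
import numpy as np

# Toy 1-D model: a head-to-tail velocity ramp makes the bunch tail
# catch up with the head during a (neutralized, force-free) drift.
z0 = np.linspace(0.0, 1.0, 1000)   # initial positions within the bunch (m)
v0 = 1.0e6                         # nominal beam velocity (m/s)
dv = 0.05 * v0                     # ramp amplitude: tail (z=0) is fastest
v = v0 + dv * (1.0 - z0)

t = 0.95 / dv                      # drift time, just short of full focus
z = z0 + v * t                     # ballistic positions after the drift

compression = (z0.max() - z0.min()) / (z.max() - z.min())
print(compression)                 # bunch is ~20x shorter
```

Space charge would ordinarily blow up such a bunch near the focus, which is exactly why the drift section is filled with neutralizing plasma in the experiment.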

  8. High speed fluorescence imaging with compressed ultrafast photography

    Science.gov (United States)

    Thompson, J. V.; Mason, J. D.; Beier, H. T.; Bixler, J. N.

    2017-02-01

Fluorescence lifetime imaging is an optical technique that facilitates imaging of molecular interactions and cellular functions. Because the excited-state lifetime of a fluorophore is sensitive to its local microenvironment [1, 2], measurement of fluorescence lifetimes can be used to accurately detect regional changes in temperature, pH, and ion concentration. However, typical state-of-the-art fluorescence lifetime methods are severely limited in acquisition time (on the order of seconds to minutes) and video-rate imaging. Here we show that compressed ultrafast photography (CUP) can be used in conjunction with fluorescence lifetime imaging to overcome these acquisition rate limitations. Frame rates of up to one hundred billion frames per second have been demonstrated with compressed ultrafast photography using a streak camera [3]. These rates are achieved by encoding time in the spatial direction with a pseudo-random binary pattern. The time-domain information is then reconstructed using a compressed sensing algorithm, resulting in a cube of data (x, y, t) for each readout image. Thus, application of compressed ultrafast photography will allow us to acquire an entire fluorescence lifetime image with a single laser pulse. Using a streak camera with a high-speed CMOS camera, acquisition rates of 100 frames per second can be achieved, which will significantly enhance our ability to quantitatively measure complex biological events with high spatial and temporal resolution. In particular, we will demonstrate the ability of this technique to perform single-shot fluorescence lifetime imaging of cells and microspheres.

  9. High-energy few-cycle pulse compression through self-channeling in gases

    International Nuclear Information System (INIS)

    Hauri, C.; Merano, M.; Trisorio, A.; Canova, F.; Canova, L.; Lopez-Martens, R.; Ruchon, T.; Engquist, A.; Varju, K.; Gustafsson, E.

    2006-01-01

Nonlinear spectral broadening of femtosecond optical pulses by intense propagation in a Kerr medium, followed by temporal compression, constitutes the Holy Grail of ultrafast science, since it allows the generation of intense few-cycle optical transients from the longer pulses provided by now commercially available femtosecond lasers. Tremendous progress in high-field and attosecond physics in recent years has triggered the need for efficient pulse compression schemes producing few-cycle pulses beyond the mJ level. We studied a novel pulse compression scheme based on self-channeling in gases, which promises to overcome the energy constraints of hollow-core fiber compression techniques. Fundamentally, self-channeling at high laser powers in gases occurs when the self-focusing effect in the gas is balanced by the dispersion induced by the inhomogeneous refractive index resulting from optically induced ionization. The high nonlinearity of the ionization process poses great technical challenges when trying to scale this pulse compression scheme to higher input energies. Light channels are known to be unstable under small fluctuations of the trapped field, which can lead to temporal and spatial beam breakup, usually resulting in the generation of spectrally broad but uncompressible pulses. Here we present experimental results on high-energy pulse compression of self-channeled 40-fs pulses in pressurized gas cells. In the first experiment, performed at the Lund Laser Center in Sweden, we identified a particular self-channeling regime at lower pulse energies (0.8 mJ), in which ultrashort pulses are generated with negative group delay dispersion (GDD) such that they can be readily compressed down to near 10 fs through simple material dispersion. Pulse compression is efficient (70%) and exhibits exceptional spatial and temporal beam stability. In a second experiment, performed at LOA-Palaiseau in France, we...

  10. DNABIT Compress - Genome compression algorithm.

    Science.gov (United States)

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-22

Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences, based on a novel scheme of assigning binary bit codes to small segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratios for DNA sequences, especially for larger genomes. Significantly better compression results show that the "DNABIT Compress" algorithm outperforms the existing compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm also significantly improves on the running time of all previous DNA compression programs. Assigning binary bits (a unique BIT CODE) to fragments of a DNA sequence (exact repeats, reverse repeats) is also a concept introduced in this algorithm for the first time in DNA compression. The proposed algorithm achieves a compression ratio as low as 1.58 bits/base, whereas the best existing methods could not achieve a ratio below 1.72 bits/base.

  11. Type-I cascaded quadratic soliton compression in lithium niobate: Compressing femtosecond pulses from high-power fiber lasers

    DEFF Research Database (Denmark)

    Bache, Morten; Wise, Frank W.

    2010-01-01

The output pulses of a commercial high-power femtosecond fiber laser or amplifier are typically around 300–500 fs with wavelengths of approximately 1030 nm and tens of microjoules of pulse energy. Here, we present a numerical study of cascaded quadratic soliton compression of such pulses in LiNbO3. ... However, the strong group-velocity dispersion implies that the pulses can achieve moderate compression to durations of less than 130 fs in available crystal lengths. Most of the pulse energy is conserved because the compression is moderate. The effects of diffraction and spatial walk-off are addressed, and in particular the latter could become an issue when compressing in such long crystals (around 10 cm long). We finally show that the second harmonic contains a short pulse locked to the pump and a long, multi-picosecond, red-shifted detrimental component. The latter is caused by the nonlocal effects...

  12. High-speed reconstruction of compressed images

    Science.gov (United States)

    Cox, Jerome R., Jr.; Moore, Stephen M.

    1990-07-01

A compression scheme is described that allows high-definition radiological images with greater than 8-bit intensity resolution to be represented by 8-bit pixels. Reconstruction of the images with their original intensity resolution can be carried out by means of a pipeline architecture suitable for compact, high-speed implementation. A reconstruction system is described that can be fabricated according to this approach and placed between an 8-bit display buffer and the display's video system, thereby allowing contrast control of images at video rates. Results for 50 CR chest images show that error-free reconstruction of the original 10-bit CR images can be achieved.

  13. DNABIT Compress – Genome compression algorithm

    Science.gov (United States)

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-01

Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, “DNABIT Compress”, for DNA sequences, based on a novel scheme of assigning binary bit codes to small segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratios for DNA sequences, especially for larger genomes. Significantly better compression results show that the “DNABIT Compress” algorithm outperforms the existing compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm also significantly improves on the running time of all previous DNA compression programs. Assigning binary bits (a unique BIT CODE) to fragments of a DNA sequence (exact repeats, reverse repeats) is also a concept introduced in this algorithm for the first time in DNA compression. The proposed algorithm achieves a compression ratio as low as 1.58 bits/base, whereas the best existing methods could not achieve a ratio below 1.72 bits/base. PMID:21383923

  14. Influence of curing regimes on compressive strength of ultra high

    Indian Academy of Sciences (India)

The present paper aims to identify an efficient curing regime for ultra high performance concrete (UHPC) to achieve a target compressive strength of more than 150 MPa using indigenous materials. The thermal regime plays a vital role due to the limited fineness of the ingredients and the low water/binder ratio. By activation of the ...

  15. Compressibility, turbulence and high speed flow

    CERN Document Server

    Gatski, Thomas B

    2013-01-01

Compressibility, Turbulence and High Speed Flow introduces the reader to the field of compressible turbulence and compressible turbulent flows across a broad speed range, through a unique complementary treatment of both the theoretical foundations and the measurement and analysis tools currently used. The book provides the reader with the necessary background and current trends in the theoretical and experimental aspects of compressible turbulent flows and compressible turbulence. Detailed derivations of the pertinent equations describing the motion of such turbulent flows are provided, and an extensive discussion of the various approaches used in predicting both free shear and wall-bounded flows is presented. Experimental measurement techniques common to the compressible flow regime are introduced, with particular emphasis on the unique challenges presented by high-speed flows. Both experimental and numerical simulation work is supplied throughout to provide the reader with an overall perspective of current tre...

  16. Optical pulse compression

    International Nuclear Information System (INIS)

    Glass, A.J.

    1975-01-01

The interest in using large lasers to achieve a very short and intense pulse for generating fusion plasma has provided a strong impetus to reexamine the possibilities of optical pulse compression at high energy. Pulse compression allows one to generate pulses of long duration (minimizing damage problems) and subsequently compress optical pulses to achieve the short pulse duration required for specific applications. The ideal device for carrying out this program has not been developed. Of the two approaches considered, the Gires-Tournois approach is limited by the fact that the bandwidth and compression are intimately related, so that the group delay dispersion times the square of the bandwidth is about unity for all simple Gires-Tournois interferometers. The Treacy grating pair does not suffer from this limitation, but is inefficient because diffraction generally occurs in several orders and is limited by the problem of optical damage to the grating surfaces themselves. Nonlinear and parametric processes were explored. Some pulse compression was achieved by these techniques; however, they are generally difficult to control and are not very efficient. (U.S.)

  17. Double Compression Expansion Engine: A Parametric Study on a High-Efficiency Engine Concept

    KAUST Repository

    Bhavani Shankar, Vijai Shankar; Johansson, Bengt; Andersson, Arne

    2018-01-01

The Double Compression Expansion Engine (DCEE) concept has exhibited the potential to achieve high brake thermal efficiencies (BTE). The effect of different engine components on system efficiency was evaluated in this work using GT Power...

  18. Telemedicine + OCT: toward design of optimized algorithms for high-quality compressed images

    Science.gov (United States)

    Mousavi, Mahta; Lurie, Kristen; Land, Julian; Javidi, Tara; Ellerbee, Audrey K.

    2014-03-01

    Telemedicine is an emerging technology that aims to provide clinical healthcare at a distance. Among its goals, the transfer of diagnostic images over telecommunication channels has been quite appealing to the medical community. When viewed as an adjunct to biomedical device hardware, one highly important consideration aside from the transfer rate and speed is the accuracy of the reconstructed image at the receiver end. Although optical coherence tomography (OCT) is an established imaging technique that is ripe for telemedicine, the effects of OCT data compression, which may be necessary on certain telemedicine platforms, have not received much attention in the literature. We investigate the performance and efficiency of several lossless and lossy compression techniques for OCT data and characterize their effectiveness with respect to achievable compression ratio, compression rate and preservation of image quality. We examine the effects of compression in the interferogram vs. A-scan domain as assessed with various objective and subjective metrics.

  19. High-quality compressive ghost imaging

    Science.gov (United States)

    Huang, Heyan; Zhou, Cheng; Tian, Tian; Liu, Dongqi; Song, Lijun

    2018-04-01

We propose a high-quality compressive ghost imaging method based on projected Landweber regularization and a guided filter, which effectively reduces undersampling noise and improves resolution. In our scheme, the original object is reconstructed by decomposing the compressive reconstruction process into regularization and denoising steps instead of solving a minimization problem. The simulation and experimental results show that our method obtains high ghost imaging quality in terms of PSNR and visual observation.
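The projected Landweber iteration at the core of such a scheme can be sketched on a toy nonnegative least-squares problem. This is a generic illustration of the iteration (gradient step plus projection onto a convex constraint set); the paper's guided-filter denoising step is not included:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(30, 10))           # forward (measurement) operator
x_true = np.abs(rng.normal(size=10))    # nonnegative ground truth
y = A @ x_true                          # consistent measurements

# Projected Landweber: x <- P( x + tau * A^T (y - A x) ),
# where P clips onto the nonnegative orthant and tau < 2/||A||^2.
tau = 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(10)
for _ in range(3000):
    x = x + tau * A.T @ (y - A @ x)     # Landweber gradient step
    x = np.clip(x, 0.0, None)           # projection (regularization)

print(np.linalg.norm(x - x_true))       # reconstruction error
```

In compressive ghost imaging the rows of A would be the illumination patterns; the projection step is where prior knowledge about the object enters.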

  20. WSNs Microseismic Signal Subsection Compression Algorithm Based on Compressed Sensing

    Directory of Open Access Journals (Sweden)

    Zhouzhou Liu

    2015-01-01

To address the problems of low compression ratio and high communication energy consumption in wireless microseismic monitoring networks, this paper proposes a segmented compression algorithm, based on the characteristics of microseismic signals and compressed sensing (CS) theory, for use in the transmission process. The algorithm segments the collected data according to the number of nonzero elements; by reducing the number of combinations of nonzero elements within each segment, it improves the accuracy of signal reconstruction, while exploiting compressive sensing theory to achieve a high compression ratio for the signal. Experimental results show that, with the quantum chaos immune clone reconstruction (Q-CSDR) algorithm as the reconstruction algorithm, when the signal sparsity is higher than 40 and the compression ratio is more than 0.4, the mean square error is less than 0.01, prolonging the network lifetime by a factor of 2.
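The compressed-sensing principle the algorithm relies on, recovering a sparse signal from far fewer measurements than samples, can be demonstrated with a generic random-projection sketch reconstructed by orthogonal matching pursuit. This is textbook CS, not the paper's segmentation scheme or the Q-CSDR algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, s = 64, 24, 3                  # signal length, measurements, sparsity

x = np.zeros(n)
x[rng.choice(n, s, replace=False)] = rng.normal(size=s)  # s-sparse signal

Phi = rng.normal(size=(m, n)) / np.sqrt(m)  # random sensing matrix
y = Phi @ x                                 # m << n compressed measurements

# Orthogonal Matching Pursuit: greedily build the support of x from y.
support, r = [], y.copy()
while np.linalg.norm(r) > 1e-10 and len(support) < m:
    k = int(np.argmax(np.abs(Phi.T @ r)))   # column most correlated with r
    if k in support:
        break
    support.append(k)
    coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
    r = y - Phi[:, support] @ coef          # residual after least squares

x_hat = np.zeros(n)
x_hat[support] = coef
print(np.linalg.norm(x_hat - x))            # reconstruction error
```

A sensor node only transmits the m measurements y, which is where the communication energy saving comes from; the expensive reconstruction runs at the sink.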

  1. Intelligent condition monitoring method for bearing faults from highly compressed measurements using sparse over-complete features

    Science.gov (United States)

    Ahmed, H. O. A.; Wong, M. L. D.; Nandi, A. K.

    2018-01-01

Condition classification of rolling element bearings in rotating machines is important to prevent the breakdown of industrial machinery. A considerable amount of literature has been published on bearing fault classification. These studies aim to determine automatically the current status of a rolling element bearing. Among these studies, methods based on compressed sensing (CS) have received some attention recently due to their ability to sample below the Nyquist sampling rate. This technology has many possible uses in machine condition monitoring and has been investigated as a possible approach for fault detection and classification in the compressed domain, i.e., without reconstructing the original signal. However, previous CS-based methods have been found to be too weak for highly compressed data. The present paper explores computationally, for the first time, the effects of sparse-autoencoder-based over-complete sparse representations on the classification performance of highly compressed measurements of bearing vibration signals. For this study, the CS method was used to produce highly compressed measurements of the original bearing dataset. Then, an effective deep neural network (DNN) with an unsupervised feature learning algorithm based on a sparse autoencoder is used to learn over-complete sparse representations of these compressed datasets. Finally, the fault classification is achieved in two stages: a pre-training classification stage based on a stacked autoencoder and a softmax regression layer (the deep net stage), and a re-training classification stage based on the backpropagation (BP) algorithm (the fine-tuning stage). The experimental results show that the proposed method is able to achieve high levels of accuracy even with extremely compressed measurements compared with the existing techniques.

  2. Compressive laser ranging.

    Science.gov (United States)

    Babbitt, Wm Randall; Barber, Zeb W; Renner, Christoffer

    2011-12-15

    Compressive sampling has been previously proposed as a technique for sampling radar returns and determining sparse range profiles with a reduced number of measurements compared to conventional techniques. By employing modulation on both transmission and reception, compressive sensing in ranging is extended to the direct measurement of range profiles without intermediate measurement of the return waveform. This compressive ranging approach enables the use of pseudorandom binary transmit waveforms and return modulation, along with low-bandwidth optical detectors to yield high-resolution ranging information. A proof-of-concept experiment is presented. With currently available compact, off-the-shelf electronics and photonics, such as high data rate binary pattern generators and high-bandwidth digital optical modulators, compressive laser ranging can readily achieve subcentimeter resolution in a compact, lightweight package.
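The pseudorandom-waveform principle behind this approach can be illustrated with a plain correlation receiver. The sketch below shows only the ranging-by-correlation idea for a ±1 binary code and an idealized noiseless return; it does not model the compressive modulation and low-bandwidth detection scheme of the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 256
code = rng.integers(0, 2, N) * 2 - 1   # pseudorandom +/-1 transmit waveform

true_delay = 37
ret = np.roll(code, true_delay)        # idealized return, delayed copy

# Cross-correlate the return with shifted copies of the transmit code;
# the correlation peak identifies the range bin (here, the delay).
corr = np.array([np.dot(ret, np.roll(code, d)) for d in range(N)])
print(int(np.argmax(corr)))            # estimated delay
```

Range resolution in such a scheme is set by the chip duration of the binary pattern, which is why high-rate pattern generators translate directly into subcentimeter resolution.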

  3. Microbunching and RF Compression

    International Nuclear Information System (INIS)

    Venturini, M.; Migliorati, M.; Ronsivalle, C.; Ferrario, M.; Vaccarezza, C.

    2010-01-01

Velocity bunching (or RF compression) represents a promising technique, complementary to magnetic compression, for achieving the high peak current required in the linac drivers for FELs. Here we report on recent progress aimed at characterizing RF compression from the point of view of the microbunching instability. We emphasize the development of a linear theory for the gain function of the instability and its validation against macroparticle simulations, which represents a useful tool in the evaluation of compression schemes for FEL sources.

  4. Wireless EEG System Achieving High Throughput and Reduced Energy Consumption Through Lossless and Near-Lossless Compression.

    Science.gov (United States)

    Alvarez, Guillermo Dufort Y; Favaro, Federico; Lecumberry, Federico; Martin, Alvaro; Oliver, Juan P; Oreggioni, Julian; Ramirez, Ignacio; Seroussi, Gadiel; Steinfeld, Leonardo

    2018-02-01

This work presents a wireless multichannel electroencephalogram (EEG) recording system featuring lossless and near-lossless compression of the digitized EEG signal. Two novel, low-complexity, efficient compression algorithms were developed and tested on a low-power platform. The algorithms were tested on six public EEG databases, comparing favorably with the best compression rates reported to date in the literature. In its lossless mode, the platform is capable of encoding and transmitting 59-channel EEG signals, sampled at 500 Hz and 16 bits per sample, at a current consumption of 337 μA per channel; this comes with a guarantee that the decompressed signal is identical to the sampled one. The near-lossless mode allows for significant energy savings and/or higher throughputs in exchange for a small guaranteed maximum per-sample distortion in the recovered signal. Finally, we address the tradeoff between computation cost and transmission savings by evaluating three alternatives: sending raw data, or encoding with one of two compression algorithms that differ in complexity and compression performance. We observe that the higher the throughput (number of channels and sampling rate), the larger the benefits obtained from compression.
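A guaranteed maximum per-sample distortion of the kind described is typically obtained by uniformly quantizing integer samples (or prediction residuals) with step 2δ+1, so the reconstruction error never exceeds δ. A minimal sketch of that bound, not the paper's actual encoder:

```python
import numpy as np

def near_lossless_encode(x, delta):
    """Quantize integer samples with step 2*delta+1; the decoded
    value is then within `delta` of the original (delta=0 -> lossless)."""
    return np.rint(x / (2 * delta + 1)).astype(np.int64)

def near_lossless_decode(q, delta):
    return q * (2 * delta + 1)

x = np.array([3, -7, 12, 0, 255, -128])
for delta in (0, 1, 4):
    x_hat = near_lossless_decode(near_lossless_encode(x, delta), delta)
    assert np.max(np.abs(x_hat - x)) <= delta
```

The quantized indices have fewer distinct values than the raw samples, so the downstream entropy coder spends fewer bits per sample, which is the source of the energy/throughput savings.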

  5. Highly Efficient Compression Algorithms for Multichannel EEG.

    Science.gov (United States)

    Shaw, Laxmi; Rahman, Daleef; Routray, Aurobinda

    2018-05-01

    The difficulty associated with processing and understanding the high dimensionality of electroencephalogram (EEG) data requires developing efficient and robust compression algorithms. In this paper, different lossless compression techniques of single and multichannel EEG data, including Huffman coding, arithmetic coding, Markov predictor, linear predictor, context-based error modeling, multivariate autoregression (MVAR), and a low complexity bivariate model have been examined and their performances have been compared. Furthermore, a high compression algorithm named general MVAR and a modified context-based error modeling for multichannel EEG have been proposed. The resulting compression algorithm produces a higher relative compression ratio of 70.64% on average compared with the existing methods, and in some cases, it goes up to 83.06%. The proposed methods are designed to compress a large amount of multichannel EEG data efficiently so that the data storage and transmission bandwidth can be effectively used. These methods have been validated using several experimental multichannel EEG recordings of different subjects and publicly available standard databases. The satisfactory parametric measures of these methods, namely percent-root-mean square distortion, peak signal-to-noise ratio, root-mean-square error, and cross correlation, show their superiority over the state-of-the-art compression methods.
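
    Predictive coders of the kind compared in this record exploit the smoothness of EEG: a predictor guesses each sample and only the (small) residual is entropy-coded. A hedged sketch with a first-order predictor (the paper's models are higher-order and multivariate; zlib stands in for the entropy coder):

```python
import zlib

def residual_encode(samples):
    # First-order linear predictor: predict each sample as the previous one
    # and transmit the (typically small) residuals.
    residuals = [samples[0]] + [samples[i] - samples[i - 1]
                                for i in range(1, len(samples))]
    return zlib.compress(b",".join(str(r).encode() for r in residuals))

def residual_decode(blob):
    residuals = [int(t) for t in zlib.decompress(blob).split(b",")]
    samples, acc = [], 0
    for r in residuals:
        acc += r          # undo the differencing
        samples.append(acc)
    return samples

# A slowly varying signal compresses far better as residuals than raw.
signal = [1000 + i + (i % 3) for i in range(5000)]
blob = residual_encode(signal)
assert residual_decode(blob) == signal
raw = zlib.compress(b",".join(str(s).encode() for s in signal))
assert len(blob) < len(raw)
```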

  6. Context-dependent JPEG backward-compatible high-dynamic range image compression

    Science.gov (United States)

    Korshunov, Pavel; Ebrahimi, Touradj

    2013-10-01

    High-dynamic range (HDR) imaging is expected, together with ultrahigh definition and high-frame-rate video, to become a technology that may change the photo, TV, and film industries. Many cameras and displays capable of capturing and rendering both HDR images and video are already available in the market. The popularity and full public adoption of HDR content is, however, hindered by the lack of standards for quality evaluation, file formats, and compression, as well as by the large legacy base of low-dynamic range (LDR) displays that are unable to render HDR. To facilitate the widespread use of HDR, backward compatibility of HDR with commonly used legacy technologies for storage, rendering, and compression of video and images is necessary. Although many tone-mapping algorithms have been developed for generating viewable LDR content from HDR, there is no consensus on which algorithm to use and under which conditions. Via a series of subjective evaluations, we demonstrate the dependency of the perceptual quality of tone-mapped LDR images on the context: environmental factors, display parameters, and the image content itself. Based on the results of these subjective tests, we propose to extend the JPEG file format, the most popular image format, in a backward-compatible manner so that it can also handle HDR images. An architecture to achieve such backward compatibility with JPEG is proposed. A simple implementation of lossy compression demonstrates the efficiency of the proposed architecture compared with state-of-the-art HDR image compression.

  7. Thermal characteristics of highly compressed bentonite

    International Nuclear Information System (INIS)

    Sueoka, Tooru; Kobayashi, Atsushi; Imamura, S.; Ogawa, Terushige; Murata, Shigemi.

    1990-01-01

    In the disposal of high-level radioactive wastes in strata, it is planned to protect the canisters enclosing the wastes with buffer materials such as overpacks and clay; the examination of artificial barrier materials is therefore an important problem. The concept of disposal in strata and the soil-mechanics characteristics of highly compressed bentonite as an artificial barrier material were reported previously. In this study, a basic experiment on the thermal characteristics of highly compressed bentonite was carried out and is reported here. The thermal conductivity of buffer materials is important because it is likely to determine the temperature of the solidified waste forms and canisters, and because the buffer materials may undergo thermal degradation at high temperature. Thermophysical properties are roughly divided into thermodynamic, transport, and optical properties. The basic principles of measuring thermal conductivity and thermal diffusivity, and the available measurement methods, are explained. For the measurement of the thermal conductivity of highly compressed bentonite, the experimental setup, the procedure, the samples, and the results are reported. (K.I.)

  8. High-power rf pulse compression with SLED-II at SLAC

    International Nuclear Information System (INIS)

    Nantista, C.

    1993-04-01

    Increasing the peak rf power available from X-band microwave tubes by means of rf pulse compression is envisioned as a way of achieving the few-hundred-megawatt power levels needed to drive a next-generation linear collider with 50–100 MW klystrons. SLED-II is a method of pulse compression similar in principle to the SLED method currently in use on the SLC and the LEP injector linac. It utilizes low-loss resonant delay lines in place of the storage cavities of the latter. This produces the added benefit of a flat-topped output pulse. At SLAC, we have designed and constructed a prototype SLED-II pulse-compression system which operates in the circular TE01 mode. It includes a circular-guide 3-dB coupler and other novel components. Low-power and initial high-power tests have been made, yielding a peak power multiplication of 4.8 at an efficiency of 40%. The system will be used to provide power for structure tests in the ASTA (Accelerator Structures Test Area) bunker. An upgraded second prototype will have improved efficiency and will serve as a model for the pulse-compression system of the NLCTA (Next Linear Collider Test Accelerator).

  9. High Bit-Depth Medical Image Compression With HEVC.

    Science.gov (United States)

    Parikh, Saurin S; Ruiz, Damian; Kalva, Hari; Fernandez-Escribano, Gerardo; Adzic, Velibor

    2018-03-01

    Efficient storing and retrieval of medical images has a direct impact on reducing costs and improving access in cloud-based health care services. JPEG 2000 is currently the commonly used compression format for medical images shared using the DICOM standard. However, new formats such as high efficiency video coding (HEVC) can provide better compression efficiency compared to JPEG 2000. Furthermore, JPEG 2000 is not suitable for efficiently storing image series and 3-D imagery. Using HEVC, a single format can support all forms of medical images. This paper presents the use of HEVC for diagnostically acceptable medical image compression, focusing on compression efficiency compared to JPEG 2000. Diagnostically acceptable lossy compression and the complexity of high bit-depth medical image compression are studied. Based on an established medically acceptable compression range for JPEG 2000, this paper establishes an acceptable HEVC compression range for medical imaging applications. Experimental results show that using HEVC can increase the compression performance, compared to JPEG 2000, by over 54%. Along with this, a new method for reducing the computational complexity of HEVC encoding for medical images is proposed. Results show that HEVC intra encoding complexity can be reduced by over 55% with negligible increase in file size.

  10. Compression of a mixed antiproton and electron non-neutral plasma to high densities

    Science.gov (United States)

    Aghion, Stefano; Amsler, Claude; Bonomi, Germano; Brusa, Roberto S.; Caccia, Massimo; Caravita, Ruggero; Castelli, Fabrizio; Cerchiari, Giovanni; Comparat, Daniel; Consolati, Giovanni; Demetrio, Andrea; Di Noto, Lea; Doser, Michael; Evans, Craig; Fanì, Mattia; Ferragut, Rafael; Fesel, Julian; Fontana, Andrea; Gerber, Sebastian; Giammarchi, Marco; Gligorova, Angela; Guatieri, Francesco; Haider, Stefan; Hinterberger, Alexander; Holmestad, Helga; Kellerbauer, Alban; Khalidova, Olga; Krasnický, Daniel; Lagomarsino, Vittorio; Lansonneur, Pierre; Lebrun, Patrice; Malbrunot, Chloé; Mariazzi, Sebastiano; Marton, Johann; Matveev, Victor; Mazzotta, Zeudi; Müller, Simon R.; Nebbia, Giancarlo; Nedelec, Patrick; Oberthaler, Markus; Pacifico, Nicola; Pagano, Davide; Penasa, Luca; Petracek, Vojtech; Prelz, Francesco; Prevedelli, Marco; Rienaecker, Benjamin; Robert, Jacques; Røhne, Ole M.; Rotondi, Alberto; Sandaker, Heidi; Santoro, Romualdo; Smestad, Lillian; Sorrentino, Fiodor; Testera, Gemma; Tietje, Ingmari C.; Widmann, Eberhard; Yzombard, Pauline; Zimmer, Christian; Zmeskal, Johann; Zurlo, Nicola; Antonello, Massimiliano

    2018-04-01

    We describe a multi-step "rotating wall" compression of a mixed cold antiproton-electron non-neutral plasma in a 4.46 T Penning-Malmberg trap developed in the context of the AEḡIS experiment at CERN. Such traps are routinely used for the preparation of cold antiprotons suitable for antihydrogen production. A tenfold antiproton radius compression has been achieved, with a minimum antiproton radius of only 0.17 mm. We describe the experimental conditions necessary to perform such a compression: minimizing the tails of the electron density distribution is paramount to ensure that the antiproton density distribution follows that of the electrons. Such electron density tails are remnants of rotating wall compression and in many cases can remain unnoticed. We observe that the compression dynamics for a pure electron plasma behaves the same way as that of a mixed antiproton and electron plasma. Thanks to this optimized compression method and the high single shot antiproton catching efficiency, we observe for the first time cold and dense non-neutral antiproton plasmas with particle densities n ≥ 10¹³ m⁻³, which pave the way for efficient pulsed antihydrogen production in AEḡIS.

  11. A comparative experimental study on engine operating on premixed charge compression ignition and compression ignition mode

    Directory of Open Access Journals (Sweden)

    Bhiogade Girish E.

    2017-01-01

    New combustion concepts have recently been developed to tackle the high emission levels of traditional direct-injection Diesel engines. A good example is premixed charge compression ignition combustion, a strategy in which early injection causes the fuel to burn under premixed conditions. In compression ignition engines, soot (particulate matter) and NOx emissions remain a largely unsolved issue. Premixed charge compression ignition is one of the most promising solutions that combine the advantages of both spark ignition and compression ignition combustion modes: it gives thermal efficiency close to that of compression ignition engines while simultaneously resolving the associated issues of high NOx and particulate matter. Preparing the premixed air-fuel charge is the challenging part of achieving premixed charge compression ignition combustion. In the present experimental study a diesel vaporizer is used to achieve premixed charge compression ignition combustion. Vaporized diesel fuel was mixed with air to form a premixed charge and inducted into the cylinder during the intake stroke. Low diesel volatility remains the main obstacle in preparing the premixed air-fuel mixture. Exhaust gas recirculation can be used to control the rate of heat release. The objective of this study is to reduce exhaust emission levels while maintaining thermal efficiency close to that of a compression ignition engine.

  12. Application specific compression : final report.

    Energy Technology Data Exchange (ETDEWEB)

    Melgaard, David Kennett; Byrne, Raymond Harry; Myers, Daniel S.; Harrison, Carol D.; Lee, David S.; Lewis, Phillip J.; Carlson, Jeffrey J.

    2008-12-01

    With the continuing development of more capable data-gathering sensors comes an increased demand on the bandwidth for transmitting larger quantities of data. To help counteract that trend, a study was undertaken to determine appropriate lossy data compression strategies that minimize the impact of compression on target detection and characterization. The survey of current compression techniques led us to the conclusion that wavelet compression was well suited for this purpose. Wavelet analysis essentially applies a low-pass and a high-pass filter to the data, converting the data into related coefficients that maintain spatial information as well as frequency information. Wavelet compression is achieved by zeroing the coefficients that pertain to the noise in the signal, i.e., the high-frequency, low-amplitude portion. This approach is well suited for our goal because it reduces the noise in the signal with only minimal impact on the larger, lower-frequency target signatures. The resulting coefficients can then be encoded using lossless techniques with higher compression levels because of the lower entropy and significant number of zeros. No significant signal degradation or difficulties in target characterization or detection were observed or measured when wavelet compression was applied to simulated and real data, even when over 80% of the coefficients were zeroed. While the exact level of compression will be data-set dependent, for the data sets we studied, compression factors over 10 were found to be satisfactory where conventional lossless techniques achieved levels of less than 3.
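
    The strategy this report describes (transform, zero the low-amplitude high-frequency coefficients, then encode losslessly) can be illustrated with a single-level Haar transform. This is an illustrative sketch under those assumptions, not the report's implementation:

```python
import zlib

def haar_forward(x):
    # One level of the (unnormalized) Haar transform: pairwise averages
    # (low-pass half) followed by pairwise differences (high-pass half).
    avg = [(x[i] + x[i + 1]) / 2 for i in range(0, len(x), 2)]
    diff = [(x[i] - x[i + 1]) / 2 for i in range(0, len(x), 2)]
    return avg + diff

def haar_inverse(c):
    half = len(c) // 2
    out = []
    for a, d in zip(c[:half], c[half:]):
        out += [a + d, a - d]
    return out

def threshold(coeffs, eps):
    # Zeroing small coefficients is where the compression comes from:
    # low-amplitude, high-frequency content (noise) is discarded.
    return [c if abs(c) >= eps else 0.0 for c in coeffs]

signal = [10.0, 10.1, 9.9, 10.0, 50.0, 50.2, 10.0, 9.8]  # noise plus one strong feature
coeffs = threshold(haar_forward(signal), eps=0.5)
recon = haar_inverse(coeffs)
# All high-pass coefficients were zeroed, yet the signal survives to within eps.
assert coeffs[4:] == [0.0] * 4
assert max(abs(r - s) for r, s in zip(recon, signal)) <= 0.5
# The sparse coefficient vector is then handed to a lossless coder (zlib here).
payload = zlib.compress(",".join(f"{c:g}" for c in coeffs).encode())
```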

  13. Methods for compressible fluid simulation on GPUs using high-order finite differences

    Science.gov (United States)

    Pekkilä, Johannes; Väisälä, Miikka S.; Käpylä, Maarit J.; Käpylä, Petri J.; Anjum, Omer

    2017-08-01

    We focus on implementing and optimizing a sixth-order finite-difference solver for simulating compressible fluids on a GPU using third-order Runge-Kutta integration. Since graphics processing units perform well in data-parallel tasks, this makes them an attractive platform for fluid simulation. However, high-order stencil computation is memory-intensive with respect to both main memory and the caches of the GPU. We present two approaches for simulating compressible fluids using 55-point and 19-point stencils. We seek to reduce the requirements for memory bandwidth and cache size in our methods by using cache blocking and decomposing a latency-bound kernel into several bandwidth-bound kernels. Our fastest implementation is bandwidth-bound and integrates 343 million grid points per second on a Tesla K40t GPU, achieving a 3.6× speedup over a comparable hydrodynamics solver benchmarked on two Intel Xeon E5-2690v3 processors. Our alternative GPU implementation is latency-bound and achieves the rate of 168 million updates per second.

  14. High-resolution quantization based on soliton self-frequency shift and spectral compression in a bi-directional comb-fiber architecture

    Science.gov (United States)

    Zhang, Xuyan; Zhang, Zhiyao; Wang, Shubing; Liang, Dong; Li, Heping; Liu, Yong

    2018-03-01

    We propose and demonstrate an approach that can achieve high-resolution quantization by employing soliton self-frequency shift (SSFS) and spectral compression. Our approach is based on a bi-directional comb-fiber architecture composed of a Sagnac-loop-based mirror and a comb-like combination of N sections of interleaved single-mode fibers and highly nonlinear fibers. The Sagnac-loop-based mirror placed at the terminal of a bus line reflects the optical pulses back to the bus line to achieve an additional N stages of spectral compression; thus single-stage SSFS and (2N − 1)-stage spectral compression are realized in the bi-directional scheme. The fiber length in the architecture is numerically optimized, and the proposed quantization scheme is evaluated by both simulation and experiment in the case of N = 2. In the experiment, a quantization resolution of 6.2 bits is obtained, which is 1.2 bits higher than that of its uni-directional counterpart.

  15. Compressive sensing in medical imaging.

    Science.gov (United States)

    Graff, Christian G; Sidky, Emil Y

    2015-03-10

    The promise of compressive sensing, exploitation of compressibility to achieve high quality image reconstructions with less data, has attracted a great deal of attention in the medical imaging community. At the Compressed Sensing Incubator meeting held in April 2014 at OSA Headquarters in Washington, DC, presentations were given summarizing some of the research efforts ongoing in compressive sensing for x-ray computed tomography and magnetic resonance imaging systems. This article provides an expanded version of these presentations. Sparsity-exploiting reconstruction algorithms that have gained popularity in the medical imaging community are studied, and examples of clinical applications that could benefit from compressive sensing ideas are provided. The current and potential future impact of compressive sensing on the medical imaging field is discussed.

  16. High-resolution coded-aperture design for compressive X-ray tomography using low resolution detectors

    Science.gov (United States)

    Mojica, Edson; Pertuz, Said; Arguello, Henry

    2017-12-01

    One of the main challenges in Computed Tomography (CT) is obtaining accurate reconstructions of the imaged object while keeping a low radiation dose in the acquisition process. In order to solve this problem, several researchers have proposed the use of compressed sensing for reducing the amount of measurements required to perform CT. This paper tackles the problem of designing high-resolution coded apertures for compressed sensing computed tomography. In contrast to previous approaches, we aim at designing apertures to be used with low-resolution detectors in order to achieve super-resolution. The proposed method iteratively improves random coded apertures using a gradient descent algorithm subject to constraints in the coherence and homogeneity of the compressive sensing matrix induced by the coded aperture. Experiments with different test sets show consistent results for different transmittances, number of shots and super-resolution factors.

  17. Achieving high aspect ratio wrinkles by modifying material network stress.

    Science.gov (United States)

    Chen, Yu-Cheng; Wang, Yan; McCarthy, Thomas J; Crosby, Alfred J

    2017-06-07

    Wrinkle aspect ratio, or the amplitude divided by the wavelength, is hindered by strain localization transitions when an increasing global compressive stress is applied to synthetic material systems. However, many examples from living organisms show extremely high aspect ratios, such as gut villi and flower petals. We use three experimental approaches to demonstrate that these high aspect ratio structures can be achieved by modifying the network stress in the wrinkle substrate. We modify the wrinkle stress and effectively delay the strain localization transition, such as folding, to larger aspect ratios by using a zero-stress initial wavy substrate, creating a secondary network with post-curing, or using chemical stress relaxation materials. A wrinkle aspect ratio as high as 0.85, almost three times higher than common values for synthetic wrinkles, is achieved, and a quantitative framework is presented to provide an understanding of the different strategies and predictions for future investigations.

  18. Dynamic High-Temperature Characterization of an Iridium Alloy in Compression at High Strain Rates

    Energy Technology Data Exchange (ETDEWEB)

    Song, Bo [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Experimental Environment Simulation Dept.; Nelson, Kevin [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Mechanics of Materials Dept.; Lipinski, Ronald J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Advanced Nuclear Fuel Cycle Technology Dept.; Bignell, John L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Structural and Thermal Analysis Dept.; Ulrich, G. B. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Radioisotope Power Systems Program; George, E. P. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Radioisotope Power Systems Program

    2014-06-01

    Iridium alloys have superior strength and ductility at elevated temperatures, making them useful as structural materials for certain high-temperature applications. However, experimental data on their high-temperature high-strain-rate performance are needed for understanding high-speed impacts in severe elevated-temperature environments. Kolsky bars (also called split Hopkinson bars) have been extensively employed for high-strain-rate characterization of materials at room temperature, but it has been challenging to adapt them for the measurement of dynamic properties at high temperatures. Current high-temperature Kolsky compression bar techniques are not capable of obtaining satisfactory high-temperature high-strain-rate stress-strain response of the thin iridium specimens investigated in this study. We analyzed the difficulties encountered in high-temperature Kolsky compression bar testing of thin iridium alloy specimens. Appropriate modifications were made to the current high-temperature Kolsky compression bar technique to obtain reliable compressive stress-strain response of an iridium alloy at high strain rates (300–10000 s⁻¹) and temperatures (750°C and 1030°C). Uncertainties in such high-temperature high-strain-rate experiments on thin iridium specimens were also analyzed. The compressive stress-strain response of the iridium alloy showed significant sensitivity to strain rate and temperature.

  19. Stainless steel component with compressed fiber Bragg grating for high temperature sensing applications

    Science.gov (United States)

    Jinesh, Mathew; MacPherson, William N.; Hand, Duncan P.; Maier, Robert R. J.

    2016-05-01

    A smart metal component with the potential for high-temperature strain sensing is reported. The stainless steel (SS316) structure is made by selective laser melting (SLM). A fiber Bragg grating (FBG) is embedded into a 3D-printed U-groove by high-temperature brazing using a silver-based alloy, achieving an axial FBG compression of 13 millistrain at room temperature. Initial results show that the test component can be used at up to 700°C for sensing applications.

  20. A Fast Faraday Cup for the Neutralized Drift Compression Experiment

    CERN Document Server

    Sefkow, Adam; Coleman, Joshua E; Davidson, Ronald C; Efthimion, Philip; Eylon, Shmuel; Gilson, Erik P; Greenway, Wayne; Henestroza, Enrique; Kwan, Joe W; Roy, Prabir K; Vanecek, David; Waldron, William; Welch, Dale; Yu, Simon

    2005-01-01

    Heavy ion drivers for high energy density physics applications and inertial fusion energy use space-charge-dominated beams which require longitudinal bunch compression in order to achieve sufficiently high beam intensity at the target. The Neutralized Drift Compression Experiment-1A (NDCX-1A) at Lawrence Berkeley National Laboratory (LBNL) is used to determine the effective limits of neutralized drift compression. NDCX-1A investigates the physics of longitudinal drift compression of an intense ion beam, achieved by imposing an initial velocity tilt on the drifting beam and neutralizing the beam's space-charge with background plasma. Accurately measuring the longitudinal compression of the beam pulse with high resolution is critical for NDCX-1A, and an understanding of the accessible parameter space is modeled using the LSP particle-in-cell (PIC) code. The design and preliminary experimental results for an ion beam probe which measures the total beam current at the focal plane as a function of time are summarized.

  1. Compressing DNA sequence databases with coil

    Directory of Open Access Journals (Sweden)

    Hendy Michael D

    2008-05-01

    Background: Publicly available DNA sequence databases such as GenBank are large, and are growing at an exponential rate. The sheer volume of data being dealt with presents serious storage and data-communication problems. Currently, sequence data are usually kept in large "flat files," which are then compressed using standard Lempel-Ziv (gzip) compression, an approach which rarely achieves good compression ratios. While much research has been done on compressing individual DNA sequences, surprisingly little has focused on the compression of entire databases of such sequences. In this study we introduce the sequence database compression software coil. Results: We have designed and implemented a portable software package, coil, for compressing and decompressing DNA sequence databases based on the idea of edit-tree coding. coil is geared towards achieving high compression ratios at the expense of execution time and memory usage during compression; the compression time represents a "one-off investment" whose cost is quickly amortised if the resulting compressed file is transmitted many times. Decompression requires little memory and is extremely fast. We demonstrate a 5% improvement in compression ratio over state-of-the-art general-purpose compression tools for a large GenBank database file containing Expressed Sequence Tag (EST) data. Finally, coil can efficiently encode incremental additions to a sequence database. Conclusion: coil presents a compelling alternative to conventional compression of flat files for the storage and distribution of DNA sequence databases having a narrow distribution of sequence lengths, such as EST data. Increasing compression levels for databases having a wide distribution of sequence lengths is a direction for future work.

  2. Fast lossless compression via cascading Bloom filters.

    Science.gov (United States)

    Rozov, Roye; Shamir, Ron; Halperin, Eran

    2014-01-01

    Data from large Next Generation Sequencing (NGS) experiments present challenges both in terms of costs associated with storage and in time required for file transfer. It is sometimes possible to store only a summary relevant to particular applications, but generally it is desirable to keep all information needed to revisit experimental results in the future. Thus, the need for efficient lossless compression methods for NGS reads arises. It has been shown that NGS-specific compression schemes can improve results over generic compression methods, such as the Lempel-Ziv algorithm, Burrows-Wheeler transform, or Arithmetic Coding. When a reference genome is available, effective compression can be achieved by first aligning the reads to the reference genome, and then encoding each read using the alignment position combined with the differences in the read relative to the reference. These reference-based methods have been shown to compress better than reference-free schemes, but the alignment step they require demands several hours of CPU time on a typical dataset, whereas reference-free methods can usually compress in minutes. We present a new approach that achieves highly efficient compression by using a reference genome, but completely circumvents the need for alignment, affording a great reduction in the time needed to compress. In contrast to reference-based methods that first align reads to the genome, we hash all reads into Bloom filters to encode, and decode by querying the same Bloom filters using read-length subsequences of the reference genome. Further compression is achieved by using a cascade of such filters. Our method, called BARCODE, runs an order of magnitude faster than reference-based methods, while compressing an order of magnitude better than reference-free methods, over a broad range of sequencing coverage. In high coverage (50-100 fold), compared to the best tested compressors, BARCODE saves 80-90% of the running time while only increasing space
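
    The decoding trick described for BARCODE (recover reads by querying read-length windows of the reference against the same filters used for encoding) can be illustrated with a single Bloom filter; the real method uses a cascade of filters to suppress false positives, and all parameters below are hypothetical:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: no false negatives, tunable false positives."""
    def __init__(self, size_bits=1 << 16, n_hashes=4):
        self.size = size_bits
        self.n_hashes = n_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item):
        # Derive n_hashes bit positions from salted SHA-256 digests.
        for i in range(self.n_hashes):
            h = hashlib.sha256(item + bytes([i])).digest()
            yield int.from_bytes(h[:8], "big") % self.size

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

# "Compress": insert every read into the filter (no alignment needed).
reference = b"ACGTACGGTTCAGGACGTTAGCATCGGATCCA"
read_len = 8
reads = [reference[3:11], reference[10:18], reference[20:28]]
bf = BloomFilter()
for r in reads:
    bf.add(r)

# "Decompress": query every read-length window of the reference.
recovered = {reference[i:i + read_len]
             for i in range(len(reference) - read_len + 1)
             if reference[i:i + read_len] in bf}
assert set(reads) <= recovered  # all stored reads found (false positives possible)
```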

  3. A checkpoint compression study for high-performance computing systems

    Energy Technology Data Exchange (ETDEWEB)

    Ibtesham, Dewan [Univ. of New Mexico, Albuquerque, NM (United States). Dept. of Computer Science; Ferreira, Kurt B. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States). Scalable System Software Dept.; Arnold, Dorian [Univ. of New Mexico, Albuquerque, NM (United States). Dept. of Computer Science

    2015-02-17

    As high-performance computing systems continue to increase in size and complexity, higher failure rates and increased overheads for checkpoint/restart (CR) protocols have raised concerns about the practical viability of CR protocols for future systems. Previously, compression has proven to be a viable approach for reducing checkpoint data volumes and, thereby, reducing CR protocol overhead, leading to improved application performance. In this article, we further explore compression-based CR optimization by exploring its baseline performance and scaling properties, evaluating whether improved compression algorithms might lead to even better application performance, and comparing checkpoint compression against and alongside other software- and hardware-based optimizations. The highlights of our results are: (1) compression is a very viable CR optimization; (2) generic, text-based compression algorithms appear to perform near optimally for checkpoint data compression, and faster compression algorithms will not lead to better application performance; (3) compression-based optimizations fare well against and alongside other software-based optimizations; and (4) while hardware-based optimizations outperform software-based ones, they are not as cost effective.
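
    The baseline measurement in such a study (compression ratio versus compression time at different settings) is easy to reproduce on synthetic checkpoint-like data; a minimal sketch using zlib, not the article's actual toolchain:

```python
import random
import time
import zlib

random.seed(0)
# Synthetic "checkpoint": a regular, compressible region plus an
# entropy-rich region, mimicking mixed application state.
structured = bytes(range(256)) * 2000
noisy = bytes(random.getrandbits(8) for _ in range(50000))
checkpoint = structured + noisy

# Measure the ratio/time tradeoff across compression levels.
for level in (1, 6, 9):
    t0 = time.perf_counter()
    blob = zlib.compress(checkpoint, level)
    dt = time.perf_counter() - t0
    ratio = len(checkpoint) / len(blob)
    print(f"level {level}: ratio {ratio:.2f} in {dt * 1e3:.1f} ms")
```

Finding (2) above corresponds to the ratio curve flattening quickly: past a point, spending more compression time buys little extra reduction in checkpoint volume.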

  4. Design of indirectly driven, high-compression Inertial Confinement Fusion implosions with improved hydrodynamic stability using a 4-shock adiabat-shaped drive

    Energy Technology Data Exchange (ETDEWEB)

    Milovich, J. L., E-mail: milovich1@llnl.gov; Robey, H. F.; Clark, D. S.; Baker, K. L.; Casey, D. T.; Cerjan, C.; Field, J.; MacPhee, A. G.; Pak, A.; Patel, P. K.; Peterson, J. L.; Smalyuk, V. A.; Weber, C. R. [Lawrence Livermore National Laboratory, Livermore, California 94550 (United States)

    2015-12-15

    Experimental results from indirectly driven ignition implosions during the National Ignition Campaign (NIC) [M. J. Edwards et al., Phys. Plasmas 20, 070501 (2013)] achieved a record compression of the central deuterium-tritium fuel layer with measured areal densities up to 1.2 g/cm², but with significantly lower total neutron yields (between 1.5 × 10¹⁴ and 5.5 × 10¹⁴) than predicted, approximately 10% of the 2D simulated yield. An order of magnitude improvement in the neutron yield was subsequently obtained in the “high-foot” experiments [O. A. Hurricane et al., Nature 506, 343 (2014)]. However, this yield was obtained at the expense of fuel compression due to a deliberately higher fuel adiabat. In this paper, the design of an adiabat-shaped implosion is presented, in which the laser pulse is tailored to achieve similar resistance to ablation-front instability growth, but with a low fuel adiabat to achieve high compression. Comparison with measured performance shows a factor of 3–10× improvement in the neutron yield (>40% of the predicted simulated yield) over similar NIC implosions, while maintaining a reasonable fuel compression of >1 g/cm². Extension of these designs to higher laser power and energy is discussed to further explore the trade-off between increased implosion velocity and the deleterious effects of hydrodynamic instabilities.

  5. Composite Techniques Based Color Image Compression

    Directory of Open Access Journals (Sweden)

    Zainab Ibrahim Abood

    2017-03-01

    Compression of color images is now necessary for transmission and storage in databases, since color gives objects a pleasing and natural appearance. Three composite techniques for color image compression are therefore implemented to achieve high compression, no loss of the original image, better performance, and good image quality. These techniques are the composite stationary wavelet technique (S), the composite wavelet technique (W), and the composite multi-wavelet technique (M). For the high-energy sub-band of the 3rd level of each composite transform in each composite technique, the compression parameters are calculated. The best composite transform among the 27 types is the three-level multi-wavelet transform (MMM) in the M technique, which has the highest values of energy (En) and compression ratio (CR) and the lowest values of bits per pixel (bpp), time (T), and rate distortion R(D). Also, the values of the compression parameters of the color image are nearly the same as the average values of the compression parameters of the three bands of the same image.

  6. High bit depth infrared image compression via low bit depth codecs

    Science.gov (United States)

    Belyaev, Evgeny; Mantel, Claire; Forchhammer, Søren

    2017-08-01

    Future infrared remote sensing systems, such as monitoring of the Earth's environment by satellites, infrastructure inspection by unmanned airborne vehicles, etc., will require 16 bit depth infrared images to be compressed and stored or transmitted for further analysis. Such systems are equipped with low power embedded platforms where image or video data is compressed by a hardware block called the video processing unit (VPU). However, in many cases using two 8-bit VPUs can provide advantages compared with using higher bit depth image compression directly. We propose to compress 16 bit depth images via 8 bit depth codecs in the following way. First, an input 16 bit depth image is mapped into 8 bit depth images, e.g., the first image contains only the most significant bytes (MSB image) and the second one contains only the least significant bytes (LSB image). Then each image is compressed by an image or video codec with 8 bits per pixel input format. We analyze how the compression parameters for both MSB and LSB images should be chosen to provide the maximum objective quality for a given compression ratio. Finally, we apply the proposed infrared image compression method utilizing JPEG and H.264/AVC codecs, which are usually available in efficient implementations, and compare their rate-distortion performance with JPEG2000, JPEG-XT and H.265/HEVC codecs supporting direct compression of infrared images in 16 bit depth format. Preliminary results show that two 8-bit H.264/AVC codecs can achieve results similar to a 16-bit HEVC codec.
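
The MSB/LSB mapping described here can be sketched as follows (a NumPy illustration of the byte split, not the authors' implementation); the round trip is lossless before any codec is applied.

```python
import numpy as np

def split_16bit(image16):
    # Map a 16 bit depth image into two 8 bit depth images:
    # most significant bytes (MSB) and least significant bytes (LSB).
    msb = (image16 >> 8).astype(np.uint8)
    lsb = (image16 & 0xFF).astype(np.uint8)
    return msb, lsb

def merge_16bit(msb, lsb):
    # Inverse mapping, used after the two 8-bit streams are decoded.
    return (msb.astype(np.uint16) << 8) | lsb.astype(np.uint16)

image = np.array([[0, 256, 65535], [4660, 255, 1]], dtype=np.uint16)
msb, lsb = split_16bit(image)
restored = merge_16bit(msb, lsb)   # lossless round trip before any codec runs
```

Losses then come only from the 8-bit codecs themselves, which is why the paper's rate-allocation question (how to split bits between the MSB and LSB streams) matters.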

  7. Development of ultra-short high voltage pulse technology using magnetic pulse compression

    Energy Technology Data Exchange (ETDEWEB)

    Cha, Byung Heon; Kim, S. G.; Nam, S. M.; Lee, B. C.; Lee, S. M.; Jeong, Y. U.; Cho, S. O.; Jin, J. T.; Choi, H. L

    1998-01-01

    The control circuit for the high voltage switches, the saturable inductor for magnetic assist, and the magnetic pulse compression circuit were designed, constructed, and tested. The core materials of the saturable inductors in the magnetic pulse compression circuit were amorphous metal and ferrite, and the total number of compression stages was three. The tests confirmed high pulse compression at high repetition rates. As a result, it became possible to increase the lifetime of thyratrons and to replace them with solid-state semiconductor switches. (author). 16 refs., 16 tabs.

  9. Compressed sensing for high-resolution nonlipid suppressed ¹H FID MRSI of the human brain at 9.4T.

    Science.gov (United States)

    Nassirpour, Sahar; Chang, Paul; Avdievitch, Nikolai; Henning, Anke

    2018-04-29

    The aim of this study was to apply compressed sensing to accelerate the acquisition of high resolution metabolite maps of the human brain using a nonlipid suppressed ultra-short TR and TE ¹H FID MRSI sequence at 9.4T. X-t sparse compressed sensing reconstruction was optimized for nonlipid suppressed ¹H FID MRSI data. Coil-by-coil x-t sparse reconstruction was compared with SENSE x-t sparse and low rank reconstruction. The effect of matrix size and spatial resolution on the achievable acceleration factor was studied. Finally, in vivo metabolite maps with different acceleration factors of 2, 4, 5, and 10 were acquired and compared. Coil-by-coil x-t sparse compressed sensing reconstruction was not able to reliably recover the nonlipid suppressed data; rather, a combination of parallel and sparse reconstruction was necessary (SENSE x-t sparse). For acceleration factors of up to 5, both the low-rank and the compressed sensing methods were able to reconstruct the data comparably well (root mean squared errors [RMSEs] ≤ 10.5% for Cre). However, the reconstruction time of the low rank algorithm was drastically longer than compressed sensing. Using the optimized compressed sensing reconstruction, acceleration factors of 4 or 5 could be reached for the MRSI data with a matrix size of 64 × 64. For lower spatial resolutions, an acceleration factor of up to R∼4 was successfully achieved. By tailoring the reconstruction scheme to the nonlipid suppressed data through parameter optimization and performance evaluation, we present high resolution (97 µL voxel size) accelerated in vivo metabolite maps of the human brain acquired at 9.4T within scan times of 3 to 3.75 min. © 2018 International Society for Magnetic Resonance in Medicine.

  10. A novel high-frequency encoding algorithm for image compression

    Science.gov (United States)

    Siddeq, Mohammed M.; Rodrigues, Marcos A.

    2017-12-01

    In this paper, a new method for image compression is proposed whose quality is demonstrated through accurate 3D reconstruction from 2D images. The method is based on the discrete cosine transform (DCT) together with a high-frequency minimization encoding algorithm at compression stage and a new concurrent binary search algorithm at decompression stage. The proposed compression method consists of five main steps: (1) divide the image into blocks and apply DCT to each block; (2) apply a high-frequency minimization method to the AC-coefficients reducing each block by 2/3 resulting in a minimized array; (3) build a look up table of probability data to enable the recovery of the original high frequencies at decompression stage; (4) apply a delta or differential operator to the list of DC-components; and (5) apply arithmetic encoding to the outputs of steps (2) and (4). At decompression stage, the look up table and the concurrent binary search algorithm are used to reconstruct all high-frequency AC-coefficients while the DC-components are decoded by reversing the arithmetic coding. Finally, the inverse DCT recovers the original image. We tested the technique by compressing and decompressing 2D images including images with structured light patterns for 3D reconstruction. The technique is compared with JPEG and JPEG2000 through 2D and 3D RMSE. Results demonstrate that the proposed compression method is perceptually superior to JPEG with equivalent quality to JPEG2000. Concerning 3D surface reconstruction from images, it is demonstrated that the proposed method is superior to both JPEG and JPEG2000.
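
Step (4) of the pipeline above, the differential (delta) coding of the DC components, can be sketched as follows; the sample DC values are illustrative only, not data from the paper.

```python
def delta_encode(dc_components):
    # Keep the first DC value, then store successive differences, which are
    # typically small and therefore compress well under arithmetic coding.
    encoded = [dc_components[0]]
    for prev, cur in zip(dc_components, dc_components[1:]):
        encoded.append(cur - prev)
    return encoded

def delta_decode(deltas):
    # Reverse of step (4): cumulative sums restore the original DC list.
    values = [deltas[0]]
    for d in deltas[1:]:
        values.append(values[-1] + d)
    return values

dc = [1024, 1030, 1018, 1018, 1041]      # illustrative DC components
encoded = delta_encode(dc)               # [1024, 6, -12, 0, 23]
```

Because neighboring blocks usually have similar mean intensity, the deltas cluster near zero, giving the arithmetic coder in step (5) a low-entropy input.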

  11. COxSwAIN: Compressive Sensing for Advanced Imaging and Navigation

    Science.gov (United States)

    Kurwitz, Richard; Pulley, Marina; LaFerney, Nathan; Munoz, Carlos

    2015-01-01

    The COxSwAIN project focuses on building an image and video compression scheme that can be implemented in a small or low-power satellite. To do this, we used compressive sensing, where the compression is performed by matrix multiplications on the satellite and the reconstruction on the ground. This paper explains our methodology and demonstrates that the scheme achieves high-quality image compression that is robust to noise and corruption.
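
The measure-by-matrix-multiplication idea can be illustrated with a minimal compressive sensing sketch. The reconstruction below uses orthogonal matching pursuit (OMP), a standard greedy recovery algorithm; this is an assumption for illustration, since the abstract does not state which reconstruction method the project uses, and the sizes and sparse "scene" are made up.

```python
import numpy as np

def omp(Phi, y, k):
    # Orthogonal matching pursuit: greedily pick the k columns of the
    # sensing matrix Phi that best explain the measurement vector y.
    residual = y.copy()
    support = []
    coeffs = np.zeros(0)
    for _ in range(k):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coeffs, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coeffs
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coeffs
    return x_hat

rng = np.random.default_rng(0)
n, m, k = 64, 40, 3                      # signal size, measurements, sparsity
x = np.zeros(n)
x[[5, 20, 41]] = [1.0, -2.0, 1.5]        # a k-sparse "scene"
Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
y = Phi @ x                              # on-board compression: m < n numbers
x_hat = omp(Phi, y, k)                   # ground-side reconstruction
```

Only the m measurements in y need to be downlinked; the heavier reconstruction work happens on the ground, matching the low-power satellite constraint.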

  12. Radiological Image Compression

    Science.gov (United States)

    Lo, Shih-Chung Benedict

    The movement toward digital images in radiology presents the problem of how to conveniently and economically store, retrieve, and transmit the volume of digital images. Basic research into image data compression is necessary in order to move from a film-based department to an efficient digital-based department. Digital data compression technology consists of two types of compression technique: error-free and irreversible. Error-free image compression is desired; however, present techniques can only achieve compression ratios of 1.5:1 to 3:1, depending upon the image characteristics. Irreversible image compression can achieve a much higher compression ratio; however, the image reconstructed from the compressed data shows some difference from the original image. This dissertation studies both error-free and irreversible image compression techniques. In particular, some modified error-free techniques have been tested and the recommended strategies for various radiological images are discussed. A full-frame bit-allocation irreversible compression technique has been derived. A total of 76 images which include CT head and body, and radiographs digitized to 2048 x 2048, 1024 x 1024, and 512 x 512 have been used to test this algorithm. The normalized mean-square-error (NMSE) on the difference image, defined as the difference between the original and the reconstructed image from a given compression ratio, is used as a global measurement of the quality of the reconstructed image. The NMSEs of a total of 380 reconstructed and 380 difference images are measured and the results tabulated. Three complex compression methods are also suggested to compress images with special characteristics. Finally, various parameters which would affect the quality of the reconstructed images are discussed. A proposed hardware compression module is given in the last chapter.
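
The NMSE figure of merit used in the dissertation can be sketched as follows. Normalizing the energy of the difference image by the energy of the original is one common convention and is an assumption here, since the abstract does not spell out the exact normalization; the sample arrays are illustrative.

```python
import numpy as np

def nmse(original, reconstructed):
    # Normalized mean-square error measured on the difference image,
    # normalized by the energy of the original image (assumed convention).
    diff = original.astype(float) - reconstructed.astype(float)
    return float(np.sum(diff ** 2) / np.sum(original.astype(float) ** 2))

original = np.array([[10.0, 20.0], [30.0, 40.0]])
reconstructed = np.array([[11.0, 19.0], [30.0, 38.0]])
error = nmse(original, reconstructed)   # (1 + 1 + 0 + 4) / 3000 = 0.002
```

Because the metric is global, it summarizes an entire 2048 x 2048 difference image in a single number, which is how 380 reconstructions can be tabulated compactly.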

  13. Structure of boron nitride after the high-temperature shock compression

    International Nuclear Information System (INIS)

    Kurdyumov, A.V.; Ostrovskaya, N.F.; Pilipenko, V.A.; Pilyankevich, A.N.; Savvakin, G.I.; Trefilov, V.I.

    1979-01-01

    Structural changes of boron nitride resulting from high-temperature dynamic compression are studied, using X-ray techniques and transmission electron microscopy. The data on the structure and regularities of formation of diamond-like modifications of boron nitride under high-temperature impact compression allow martensitic transformation to be considered the first stage of formation of the sphalerite phase, which is stable at high pressures. The second stage is possible if the temperature at the moment of impact is sufficiently high for intensive diffusion processes

  14. Efficient Lossy Compression for Compressive Sensing Acquisition of Images in Compressive Sensing Imaging Systems

    Directory of Open Access Journals (Sweden)

    Xiangwei Li

    2014-12-01

    Full Text Available Compressive Sensing Imaging (CSI) is a new framework for image acquisition, which enables the simultaneous acquisition and compression of a scene. Since the characteristics of Compressive Sensing (CS) acquisition are very different from traditional image acquisition, a general image compression solution may not work well. In this paper, we propose an efficient lossy compression solution for CS acquisition of images by considering the distinctive features of CSI. First, we design an adaptive compressive sensing acquisition method for images according to the sampling rate, which achieves better CS reconstruction quality for the acquired image. Second, we develop a universal quantization for the CS measurements obtained from CS acquisition, without any a priori information about the captured image. Finally, we apply these two methods in the CSI system for efficient lossy compression of CS acquisition. Simulation results demonstrate that the proposed solution improves the rate-distortion performance by 0.4–2 dB compared with the current state of the art, while maintaining a low computational complexity.
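
A universal quantizer of the kind described, one that needs no a priori information about the captured image, can be sketched as a uniform scalar quantizer. This is a simplification for illustration; the paper's actual quantizer design may differ, and the measurement values and step size are made up.

```python
import numpy as np

def quantize(measurements, step):
    # Uniform scalar quantization: maps each CS measurement to an integer
    # index using only a step size, with no image-dependent statistics.
    return np.round(measurements / step).astype(int)

def dequantize(indices, step):
    # Midpoint reconstruction; error per measurement is bounded by step/2.
    return indices * step

y = np.array([0.12, -0.47, 0.03, 0.91])   # illustrative CS measurements
idx = quantize(y, step=0.1)               # small integers, cheap to entropy-code
y_hat = dequantize(idx, step=0.1)
```

The step size controls the rate-distortion trade-off: a smaller step lowers quantization error but yields larger indices and hence a higher bit rate.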

  15. Initial Results on Neutralized Drift Compression Experiments (NDCX-IA) for High Intensity Ion Beam

    CERN Document Server

    Roy, Prabir K; Baca, David; Bieniosek, Frank; Coleman, Joshua E; Davidson, Ronald C; Efthimion, Philip; Eylon, Shmuel; Gilson, Erik P; Grant Logan, B; Greenway, Wayne; Henestroza, Enrique; Kaganovich, Igor D; Leitner, Matthaeus; Rose, David; Sefkow, Adam; Sharp, William M; Shuman, Derek; Thoma, Carsten H; Vanecek, David; Waldron, William; Welch, Dale; Yu, Simon

    2005-01-01

    Ion beam neutralization and compression experiments are designed to determine the feasibility of using compressed high intensity ion beams for high energy density physics (HEDP) experiments and for inertial fusion power. To quantitatively ascertain the various mechanisms and methods for beam compression, the Neutralized Drift Compression Experiment (NDCX) facility is being constructed at Lawrence Berkeley National Laboratory (LBNL). In the first compression experiment, a 260 keV, 25 mA K+ ion beam of centimeter size is radially compressed to a millimeter-sized spot by neutralization in a meter-long plasma column, and the beam peak current is longitudinally compressed by an induction velocity tilt core. Instrumentation, preliminary results of the experiments, and practical limits of compression are presented, including parameters such as emittance, degree of neutralization, velocity tilt time profile, and measurement accuracy from fast, spatially high-resolution diagnostics.

  16. A high compression crystal growth system

    International Nuclear Information System (INIS)

    Nieman, H.F.; Walton, A.A.; Powell, B.M.; Dolling, G.

    1980-01-01

    This report describes the construction and operating procedure for a high compression crystal growth system, capable of growing single crystals from the fluid phase over the temperature range of 4.2 K to 300 K, at pressures up to 900 MPa. Some experimental results obtained with the system are given for solid β-nitrogen. (auth)

  17. Ge nanobelts with high compressive strain fabricated by secondary oxidation of self-assembly SiGe rings

    DEFF Research Database (Denmark)

    Lu, Weifang; Li, Cheng; Lin, Guangyang

    2015-01-01

    Curled Ge nanobelts were fabricated by secondary oxidation of self-assembly SiGe rings, which were exfoliated from the SiGe stripes on the insulator. The Ge-rich SiGe stripes on insulator were formed by hololithography and modified Ge condensation processes of Si0.82Ge0.18 on SOI substrate. Ge nanobelts under a residual compressive strain of 2% were achieved, and the strain should be higher before being partly released through bulge islands and breakage of the curled Ge nanobelts during the secondary oxidation process. The primary factor leading to compressive strain is thermal shrinkage of the Ge nanobelts, which extrudes the Ge nanobelts in radial and tangential directions during the cooling process. This technique is promising for application in high-mobility Ge nano-scale transistors...

  18. LZ-Compressed String Dictionaries

    OpenAIRE

    Arz, Julian; Fischer, Johannes

    2013-01-01

    We show how to compress string dictionaries using the Lempel-Ziv (LZ78) data compression algorithm. Our approach is validated experimentally on dictionaries of up to 1.5 GB of uncompressed text. We achieve compression ratios often outperforming the existing alternatives, especially on dictionaries containing many repeated substrings. Our query times remain competitive.

  19. The Effects of Design Strength, Fly Ash Content and Curing Method on Compressive Strength of High Volume Fly Ash Concrete: A Design of Experimental

    Directory of Open Access Journals (Sweden)

    Solikin Mochamad

    2017-01-01

    Full Text Available High volume fly ash concrete has become one of the alternatives for producing green concrete, as it uses waste material and significantly reduces the use of Portland cement in concrete production. Although it uses less cement, its compressive strength is comparable to that of ordinary Portland cement (hereafter OPC) concrete, and its durability increases significantly. This paper reports an investigation of the effect of design strength, fly ash content and curing method on the compressive strength of high volume fly ash concrete. The experiment and data analysis were prepared using Minitab, a statistical software package for design of experiments. The specimens were concrete cylinders with a diameter of 15 cm and height of 30 cm, tested for compressive strength at 56 days. The results demonstrate that high volume fly ash concrete can produce compressive strength which meets the OPC design strength, especially for high strength concrete. In addition, the best mix proportion to achieve the design strength is the combination of high strength concrete and 50% fly ash content. Moreover, the use of the spraying method for curing concrete on site is still recommended, as it does not significantly reduce the compressive strength.

  20. Achieving Mixtures of Ultra-High Performance Concrete

    Directory of Open Access Journals (Sweden)

    Mircea POPA

    2013-07-01

    Full Text Available Ultra-High Performance Concrete (UHPC) is a relatively new concrete. According to [11], UHPC is concrete whose compressive strength exceeds class C100/115. To date, standards for this type of concrete have not been adopted, although its characteristic strength exceeds those specified in [33]. Its main property is high compressive strength. This makes it possible to reduce the section of elements (beams or columns) made of this type of concrete, while the load capacity remains high. The study consists of blending UHPC mixtures made of varying proportions of materials. The authors have obtained strengths of up to 160 MPa. The materials used are: Portland cement, silica fume, quartz powder, steel fibers, superplasticizer, sand and crushed andesite aggregate.

  1. Compressive behaviour of hybrid fiber-reinforced reactive powder concrete after high temperature

    International Nuclear Information System (INIS)

    Zheng, Wenzhong; Li, Haiyan; Wang, Ying

    2012-01-01

    Highlights: ► We complete the high temperature test and compression test of RPC after 20–900 °C. ► The presence of steel fiber and polypropylene fiber can prevent RPC from spalling. ► Compressive strength increases first and then decreases with elevated temperatures. ► Microstructure deterioration is the root cause of macro-properties recession. ► Equations to express the compressive strength change with temperature are proposed. -- Abstract: This study focuses on the compressive properties and microstructures of reactive powder concrete (RPC) mixed with steel fiber and polypropylene fiber after exposure to 20–900 °C. The volume dosage of steel fiber and polypropylene fiber is (2%, 0.1%), (2%, 0.2%) and (1%, 0.2%). The effects of heating temperature, fiber content and specimen size on the compressive properties are analyzed. The microstructures of RPC exposed to different high temperatures are studied by scanning electron microscope (SEM). The results indicate that the compressive strength of hybrid fiber-reinforced RPC increases at first, then decreases with the increasing temperature, and the basic reason for the degradation of macro-mechanical properties is the deterioration of RPC microstructure. Based on the experimental results, equations to express the relationships of the compressive strength with the heating temperatures are established. Compared with normal-strength and high-strength concrete, the hybrid fiber-reinforced RPC has excellent capacity in resistance to high temperature.

  2. Multiple-correction hybrid k-exact schemes for high-order compressible RANS-LES simulations on fully unstructured grids

    Science.gov (United States)

    Pont, Grégoire; Brenner, Pierre; Cinnella, Paola; Maugars, Bruno; Robinet, Jean-Christophe

    2017-12-01

    A Godunov-type unstructured finite volume method suitable for highly compressible turbulent scale-resolving simulations around complex geometries is constructed by using a successive correction technique. First, a family of k-exact Godunov schemes is developed by recursively correcting the truncation error of the piecewise polynomial representation of the primitive variables. The keystone of the proposed approach is a quasi-Green gradient operator which ensures consistency on general meshes. In addition, a high-order single-point quadrature formula, based on high-order approximations of the successive derivatives of the solution, is developed for flux integration along cell faces. The proposed family of schemes is compact in the algorithmic sense, since it only involves communications between direct neighbors of the mesh cells. The numerical properties of the schemes up to fifth order are investigated, with focus on their resolvability in terms of the number of mesh points required to resolve a given wavelength accurately. Afterwards, with the aim of achieving the best possible trade-off between accuracy, computational cost and robustness in view of industrial flow computations, we focus more specifically on the third-order accurate scheme of the family, and modify its numerical flux locally in order to reduce the amount of numerical dissipation in vortex-dominated regions. This is achieved by switching from the upwind scheme, mostly applied in highly compressible regions, to a fourth-order centered one in vortex-dominated regions. An analytical switch function based on the local grid Reynolds number is adopted in order to warrant numerical stability of the recentering process. Numerical applications demonstrate the accuracy and robustness of the proposed methodology for compressible scale-resolving computations. In particular, supersonic RANS/LES computations of the flow over a cavity are presented to show the capability of the scheme to predict flows with shocks.

  3. Fracture Energy of High-Strength Concrete in Compression

    DEFF Research Database (Denmark)

    Dahl, Henrik; Brincker, Rune

is essential for understanding the fracture mechanism of concrete in compression. In this paper a series of tests is reported, carried out for the purpose of studying the fracture mechanical properties of concrete in compression. Including the measurement and study of the descending branch, a new experimental method has been used to investigate the influence of boundary conditions, loading rate, size effects and the influence of the strength on the fracture energy of high-strength concrete over the range 70 MPa to 150 MPa, expressed in nominal values...

  4. Theoretical models for describing longitudinal bunch compression in the neutralized drift compression experiment

    Directory of Open Access Journals (Sweden)

    Adam B. Sefkow

    2006-09-01

    Full Text Available Heavy ion drivers for warm dense matter and heavy ion fusion applications use intense charge bunches which must undergo transverse and longitudinal compression in order to meet the requisite high current densities and short pulse durations desired at the target. The neutralized drift compression experiment (NDCX at the Lawrence Berkeley National Laboratory is used to study the longitudinal neutralized drift compression of a space-charge-dominated ion beam, which occurs due to an imposed longitudinal velocity tilt and subsequent neutralization of the beam’s space charge by background plasma. Reduced theoretical models have been used in order to describe the realistic propagation of an intense charge bunch through the NDCX device. A warm-fluid model is presented as a tractable computational tool for investigating the nonideal effects associated with the experimental acceleration gap geometry and voltage waveform of the induction module, which acts as a means to pulse shape both the velocity and line density profiles. Self-similar drift compression solutions can be realized in order to transversely focus the entire charge bunch to the same focal plane in upcoming simultaneous transverse and longitudinal focusing experiments. A kinetic formalism based on the Vlasov equation has been employed in order to show that the peaks in the experimental current profiles are a result of the fact that only the central portion of the beam contributes effectively to the main compressed pulse. Significant portions of the charge bunch reside in the nonlinearly compressing part of the ion beam because of deviations between the experimental and ideal velocity tilts. Those regions form a pedestal of current around the central peak, thereby decreasing the amount of achievable longitudinal compression and increasing the pulse durations achieved at the focal plane. 
A hybrid fluid-Vlasov model which retains the advantages of both the fluid and kinetic approaches has been

  5. Electron beam acceleration and compression for short wavelength FELs

    International Nuclear Information System (INIS)

    Raubenheimer, T.O.

    1994-11-01

    A single pass UV or X-ray FEL will require a low emittance electron beam with high peak current and relatively high beam energy, a few hundred MeV to many GeV. To achieve the necessary peak current and beam energy, the beams must be bunch compressed and they must be accelerated in long transport lines where dispersive and wakefield emittance dilutions are important. In this paper, we will describe the sources and significance of the dilutions during acceleration, bunch compression, and transport through the undulator. In addition, we will discuss sources of jitter, especially effects arising from the bunch compressions, and the possible cancellation techniques

  6. Mechanical behavior and dynamic failure of high-strength ultrafine grained tungsten under uniaxial compression

    International Nuclear Information System (INIS)

    Wei, Q.; Jiao, T.; Ramesh, K.T.; Ma, E.; Kecskes, L.J.; Magness, L.; Dowding, R.; Kazykhanov, V.U.; Valiev, R.Z.

    2006-01-01

    We have systematically investigated the quasi-static and dynamic mechanical behavior (especially dynamic failure) of ultra-fine grained (UFG) tungsten (W) under uniaxial compression. The starting material is of commercial purity and large grain size. We utilized severe plastic deformation to achieve the ultrafine microstructure characterized by grains and subgrains with sizes of ∼500 nm, as identified by transmission electron microscopy. Results of quasi-static compression show that the UFG W behaves in an elastic-nearly perfect plastic manner (i.e., vanishing strain hardening), with its flow stress approaching 2 GPa, close to twice that of conventional coarse grain W. Post-mortem examinations of the quasi-statically loaded samples show no evidence of cracking, in sharp contrast to the behavior of conventional W (where axial cracking is usually observed). Under uniaxial dynamic compression (strain rate ∼10³ s⁻¹), the true stress-true strain curves of the UFG W exhibit significant flow softening, and the peak stress is ∼3 GPa. Furthermore, the strain rate sensitivity of the UFG W is reduced to half the value of the conventional W. Both in situ high-speed photography and post-mortem examinations reveal shear localization and, as a consequence, cracking of the UFG W under dynamic uniaxial compression. These observations are consistent with recent observations on other body-centered cubic metals with nanocrystalline or ultrafine microstructures. The experimental results are discussed using existing models for adiabatic shear localization in metals

  7. Dynamic Relative Compression, Dynamic Partial Sums, and Substring Concatenation

    DEFF Research Database (Denmark)

    Bille, Philip; Christiansen, Anders Roy; Cording, Patrick Hagge

    2017-01-01

Given a static reference string R and a source string S, a relative compression of S with respect to R is an encoding of S as a sequence of references to substrings of R. Relative compression schemes are a classic model of compression and have recently proved very successful for compressing highly-repetitive massive data sets such as genomes and web-data. We initiate the study of relative compression in a dynamic setting where the compressed source string S is subject to edit operations. The goal is to maintain the compressed representation compactly, while supporting edits and allowing efficient random access to the (uncompressed) source string. We present new data structures that achieve optimal time for updates and queries while using space linear in the size of the optimal relative compression, for nearly all combinations of parameters. We also present solutions for restricted and extended sets...
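
The notion of relative compression defined here, encoding S as references to substrings of R, can be sketched with a greedy factorization. This is illustrative only: the paper's dynamic data structures are far more involved, and this sketch assumes every character of S occurs somewhere in R.

```python
def relative_compress(R, S):
    # Greedy factorization: at each position, take the longest prefix of the
    # remainder of S that occurs somewhere in R, and reference it as
    # (position in R, length). Assumes every character of S appears in R.
    factors = []
    i = 0
    while i < len(S):
        length = 1
        while i + length <= len(S) and S[i:i + length] in R:
            length += 1
        length -= 1
        factors.append((R.find(S[i:i + length]), length))
        i += length
    return factors

def relative_decompress(R, factors):
    # Random access to S only needs R and the reference list.
    return "".join(R[p:p + l] for p, l in factors)
```

When S shares long substrings with R (as in genome collections), the factor list is far shorter than S itself.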

  8. Dynamic Relative Compression, Dynamic Partial Sums, and Substring Concatenation

    DEFF Research Database (Denmark)

    Bille, Philip; Cording, Patrick Hagge; Gørtz, Inge Li

    2016-01-01

Given a static reference string R and a source string S, a relative compression of S with respect to R is an encoding of S as a sequence of references to substrings of R. Relative compression schemes are a classic model of compression and have recently proved very successful for compressing highly-repetitive massive data sets such as genomes and web-data. We initiate the study of relative compression in a dynamic setting where the compressed source string S is subject to edit operations. The goal is to maintain the compressed representation compactly, while supporting edits and allowing efficient random access to the (uncompressed) source string. We present new data structures that achieve optimal time for updates and queries while using space linear in the size of the optimal relative compression, for nearly all combinations of parameters. We also present solutions for restricted and extended sets...

  9. Intertrochanteric fractures in elderly high risk patients treated with Ender nails and compression screw

    Directory of Open Access Journals (Sweden)

    Gangadharan Sidhartha

    2010-01-01

    Full Text Available Background: Ender and Simon Weidner popularized the concept of closed condylocephalic nailing for intertrochanteric fractures in 1970. The clinical experience of the authors revealed that Ender nailing alone cannot provide secure fixation in elderly patients with osteoporosis. Hence we conducted a study to evaluate the efficacy of a combined fixation procedure using Ender nails and a cannulated compression screw for intertrochanteric fractures. Materials and Methods: 76 patients with intertrochanteric fractures were treated using intramedullary Ender nails and a cannulated compression screw from January 2004 to December 2007. The mean age of the patients was 80 years (range 70-105 years). Using the Evans system of classification, 49 were stable and 27 unstable fractures. The inclusion criterion was high-risk elderly patients (age > 70 years) with intertrochanteric fracture. The exclusion criteria included patients with pressure sores over the trochanteric region. Many patients had pre-existing co-morbidities like diabetes mellitus, hypertension, COPD, ischemic heart disease, CVA and coronary artery bypass surgery. Two Ender nails of 4.5 mm each were passed across the fracture site into the proximal neck. This was reinforced with a 6.5 mm cannulated compression screw passed from the subtrochanteric region, across the fracture, into the head. Results: The mean follow-up was 14 months (range 9-19 months). Average time to fracture union was 10 weeks (range 6-16 weeks). The mean knee ROM was 130° (±5°). There was no case of nail penetration into the hip joint. In five cases with advanced osteoporosis there was minimal distal migration of the Ender nails. Conclusions: Ender nailing combined with compression screw fixation for intertrochanteric fractures in high-risk elderly patients achieved reliable fracture stability with minimal complications.

  10. N-Cadherin Maintains the Healthy Biology of Nucleus Pulposus Cells under High-Magnitude Compression.

    Science.gov (United States)

    Wang, Zhenyu; Leng, Jiali; Zhao, Yuguang; Yu, Dehai; Xu, Feng; Song, Qingxu; Qu, Zhigang; Zhuang, Xinming; Liu, Yi

    2017-01-01

    Mechanical load can regulate disc nucleus pulposus (NP) biology in terms of cell viability, matrix homeostasis and cell phenotype. N-cadherin (N-CDH) is a molecular marker of NP cells. This study investigated the role of N-CDH in maintaining NP cell phenotype, NP matrix synthesis and NP cell viability under high-magnitude compression. Rat NP cells seeded on scaffolds were perfusion-cultured using a self-developed perfusion bioreactor for 5 days. NP cell biology in terms of cell apoptosis, matrix biosynthesis and cell phenotype was studied after the cells were subjected to different compressive magnitudes (low- and high-magnitudes: 2% and 20% compressive deformation, respectively). Non-loaded NP cells were used as controls. Lentivirus-mediated N-CDH overexpression was used to further investigate the role of N-CDH under high-magnitude compression. The 20% deformation compression condition significantly decreased N-CDH expression compared with the 2% deformation compression and control conditions. Meanwhile, 20% deformation compression increased the number of apoptotic NP cells, up-regulated the expression of Bax and cleaved-caspase-3 and down-regulated the expression of Bcl-2, matrix macromolecules (aggrecan and collagen II) and NP cell markers (glypican-3, CAXII and keratin-19) compared with 2% deformation compression. Additionally, N-CDH overexpression attenuated the effects of 20% deformation compression on NP cell biology in relation to the designated parameters. N-CDH helps to restore the cell viability, matrix biosynthesis and cellular phenotype of NP cells under high-magnitude compression. © 2017 The Author(s). Published by S. Karger AG, Basel.

  11. Highly compressible and all-solid-state supercapacitors based on nanostructured composite sponge.

    Science.gov (United States)

    Niu, Zhiqiang; Zhou, Weiya; Chen, Xiaodong; Chen, Jun; Xie, Sishen

    2015-10-21

Based on polyaniline/single-walled carbon nanotube/sponge electrodes, highly compressible all-solid-state supercapacitors are prepared in an integrated configuration using a poly(vinyl alcohol) (PVA)/H2SO4 gel as the electrolyte. The unique configuration enables the resultant supercapacitors to be compressed arbitrarily as an integrated unit at up to 60% compressive strain. Furthermore, the performance of the resultant supercapacitors is nearly unchanged even under 60% compressive strain. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  12. Simulations and experiments of intense ion beam compression in space and time

    International Nuclear Information System (INIS)

    Yu, S.S.; Seidl, P.A.; Roy, P.K.; Lidia, S.M.; Coleman, J.E.; Kaganovich, I.D.; Gilson, E.P.; Welch, Dale Robert; Sefkow, Adam B.; Davidson, R.C.

    2008-01-01

The Heavy Ion Fusion Science Virtual National Laboratory has achieved 60-fold longitudinal pulse compression of ion beams on the Neutralized Drift Compression Experiment (NDCX) (P. K. Roy et al., Phys. Rev. Lett. 95, 234801 (2005)). To focus a space-charge-dominated charge bunch to sufficiently high intensities for ion-beam-heated warm dense matter and inertial fusion energy studies, simultaneous transverse and longitudinal compression to a coincident focal plane is required. Optimizing the compression under the appropriate constraints can deliver higher intensity per unit length of accelerator to the target, thereby facilitating the creation of more compact and cost-effective ion beam drivers. The experiments utilized a drift region filled with high-density plasma in order to neutralize the space charge and current of an ~300 keV K+ beam, and have separately achieved transverse and longitudinal focusing to a small radius; a planned higher-energy (>2 MeV) ion beam user-facility will enable warm dense matter and inertial fusion energy-relevant target physics experiments.

  13. Effect of high image compression on the reproducibility of cardiac Sestamibi reporting

    International Nuclear Information System (INIS)

    Thomas, P.; Allen, L.; Beuzeville, S.

    1999-01-01

Full text: Compression algorithms have been mooted to minimize storage space and transmission times of digital images. We assessed the impact of high-level lossy compression using JPEG and wavelet algorithms on image quality and reporting accuracy of cardiac Sestamibi studies. Twenty stress/rest Sestamibi cardiac perfusion studies were reconstructed into horizontal short, vertical long and horizontal long axis slices using conventional methods. Each of these six sets of slices was aligned for reporting and saved (uncompressed) as a bitmap. This bitmap was then compressed using JPEG compression, then decompressed and saved as a bitmap for later viewing. This process was repeated using the original bitmap and wavelet compression. Finally, a second copy of the original bitmap was made. All 80 bitmaps were randomly coded to ensure blind reporting. The bitmaps were read blinded and by consensus of two experienced nuclear medicine physicians using a 5-point scale and 25 cardiac segments. Subjective image quality was also reported using a 3-point scale. Samples of the compressed images were also subtracted from the original bitmap for visual comparison of differences. Results showed an average compression ratio of 23:1 for wavelet and 13:1 for JPEG. Image subtraction showed only very minor discordance between the original and compressed images. There was no significant difference in subjective quality between the compressed and uncompressed images. There was no significant difference in reporting reproducibility of the identical bitmap copy, the JPEG image and the wavelet image compared with the original bitmap. Use of the high-compression algorithms described had no significant impact on reporting reproducibility or subjective image quality of cardiac Sestamibi perfusion studies.

  14. Implementation of a compressive sampling scheme for wireless sensors to achieve energy efficiency in a structural health monitoring system

    Science.gov (United States)

    O'Connor, Sean M.; Lynch, Jerome P.; Gilbert, Anna C.

    2013-04-01

Wireless sensors have emerged to offer low-cost sensing with impressive functionality (e.g., data acquisition, computing, and communication) and modular installations. Such advantages enable higher nodal densities than tethered systems, resulting in increased spatial resolution of the monitoring system. However, high nodal density comes at a cost: huge amounts of data are generated, weighing heavily on power sources, transmission bandwidth, and data management requirements, often making data compression necessary. The traditional compression paradigm consists of high-rate (>Nyquist) uniform sampling and storage of the entire target signal, followed by some desired compression scheme prior to transmission. The recently proposed compressed sensing (CS) framework combines the acquisition and compression stages, thus removing the need for storage and processing of the full target signal prior to transmission. The effectiveness of the CS approach hinges on the presence of a sparse representation of the target signal in a known basis, as exploited by several compressive sensing applications today (e.g., imaging, MRI). Field implementations of CS schemes in wireless SHM systems have been challenging due to the lack of commercially available sensing units capable of sampling methods (e.g., random) consistent with the compressed sensing framework, often confining evaluation of CS techniques to simulation and post-processing. The research presented here describes implementation of a CS sampling scheme on the Narada wireless sensing node and the energy efficiencies observed in the deployed sensors. Of interest in this study is the compressibility of acceleration response signals collected from a multi-girder steel-concrete composite bridge. The study shows the benefit of CS in reducing data requirements while ensuring that analysis performed on the compressed data remains accurate.
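The sparse-recovery step that CS relies on can be illustrated with a minimal sketch: a synthetic k-sparse signal is sampled with a random Gaussian matrix (far fewer measurements than samples) and recovered by orthogonal matching pursuit. The dimensions, the measurement matrix, and the solver choice are illustrative assumptions, not the Narada node's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m, k = 64, 24, 3                       # signal length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(size=k)   # k-sparse target signal

A = rng.normal(size=(m, n)) / np.sqrt(m)  # random Gaussian measurement matrix
y = A @ x                                 # compressive measurements (m << n)

def omp(A, y, tol=1e-8):
    """Orthogonal matching pursuit: greedily build the support of the signal."""
    m, n = A.shape
    residual, support = y.copy(), []
    for _ in range(m):
        if np.linalg.norm(residual) < tol:
            break
        # pick the column most correlated with the current residual
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        # re-fit on the selected columns and update the residual
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(n)
    x_hat[support] = coef
    return x_hat

x_hat = omp(A, y)
print("measurement residual:", np.linalg.norm(y - A @ x_hat))
```

With m well above k, the greedy solver reproduces the measurements essentially exactly, which is the property CS-based SHM pipelines depend on.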

  15. Developments in time-resolved high pressure x-ray diffraction using rapid compression and decompression

    International Nuclear Information System (INIS)

    Smith, Jesse S.; Sinogeikin, Stanislav V.; Lin, Chuanlong; Rod, Eric; Bai, Ligang; Shen, Guoyin

    2015-01-01

    Complementary advances in high pressure research apparatus and techniques make it possible to carry out time-resolved high pressure research using what would customarily be considered static high pressure apparatus. This work specifically explores time-resolved high pressure x-ray diffraction with rapid compression and/or decompression of a sample in a diamond anvil cell. Key aspects of the synchrotron beamline and ancillary equipment are presented, including source considerations, rapid (de)compression apparatus, high frequency imaging detectors, and software suitable for processing large volumes of data. A number of examples are presented, including fast equation of state measurements, compression rate dependent synthesis of metastable states in silicon and germanium, and ultrahigh compression rates using a piezoelectric driven diamond anvil cell

  16. Influence of Compacting Rate on the Properties of Compressed Earth Blocks

    Directory of Open Access Journals (Sweden)

    Humphrey Danso

    2016-01-01

Full Text Available Compaction contributes significantly to the strength properties of compressed earth blocks. This paper investigates the influence of compacting rate on the properties of compressed earth blocks. Experiments were conducted to determine the density, compressive strength, splitting tensile strength, and erosion properties of compressed earth blocks produced at different compacting speeds. The study concludes that although the low rate of compaction achieved slightly better performance characteristics, there is no statistically significant difference between soil blocks produced with low and high compacting rates; the compacting rate therefore has little influence on the properties of the blocks. It was further found that there are strong linear correlations between compressive strength and density, and between density and erosion. However, weak linear correlations were found between tensile strength and compressive strength, and between tensile strength and density.

  17. Compressed beam directed particle nuclear energy generator

    International Nuclear Information System (INIS)

    Salisbury, W.W.

    1985-01-01

    This invention relates to the generation of energy from the fusion of atomic nuclei which are caused to travel towards each other along collision courses, orbiting in common paths having common axes and equal radii. High velocity fusible ion beams are directed along head-on circumferential collision paths in an annular zone wherein beam compression by electrostatic focusing greatly enhances head-on fusion-producing collisions. In one embodiment, a steady radial electric field is imposed on the beams to compress the beams and reduce the radius of the spiral paths for enhancing the particle density. Beam compression is achieved through electrostatic focusing to establish and maintain two opposing beams in a reaction zone

  18. Compression force-depth relationship during out-of-hospital cardiopulmonary resuscitation.

    Science.gov (United States)

    Tomlinson, A E; Nysaether, J; Kramer-Johansen, J; Steen, P A; Dorph, E

    2007-03-01

Recent clinical studies have reported a high frequency of inadequate chest compression depth (<38 mm) during CPR, raising the question of how much force is required to achieve adequate compression depth in certain patients. Using a specially designed monitor/defibrillator equipped with a sternal pad fitted with an accelerometer and a pressure sensor, compression force and depth were measured during CPR in 91 adult out-of-hospital cardiac arrest patients. There was a strong non-linear relationship between the force of compression and the depth achieved. Mean applied force for all patients was 30.3+/-8.2 kg and mean absolute compression depth 42+/-8 mm. For 87 of 91 patients, 38 mm compression depth was obtained with less than 50 kg. Stiffer chests were compressed more forcefully than softer chests, while softer chests were compressed more deeply than stiffer chests (p=0.001). The force needed to reach 38 mm compression depth (F38) and the mean compression force were higher for males than for females: 29.8+/-14.5 kg versus 22.5+/-10.2 kg. There was no significant change in compression depth with age, but there was a significant 1.5 kg mean decrease in applied force for each 10 years' increase in age for the compressions performed. Average residual force during decompression was 1.7+/-1.0 kg, corresponding to an average residual depth of 3+/-2 mm. In most out-of-hospital cardiac arrest victims adequate chest compression depth can be achieved with a force <50 kg, indicating that an average-sized and fit rescuer should be able to perform effective CPR in most adult patients.

  19. Statistical conditional sampling for variable-resolution video compression.

    Directory of Open Access Journals (Sweden)

    Alexander Wong

Full Text Available In this study, we investigate a variable-resolution approach to video compression based on Conditional Random Field (CRF) modeling and statistical conditional sampling, in order to further improve the compression rate while maintaining high-quality video. In the proposed approach, representative key-frames within a video shot are identified and stored at full resolution. The remaining frames within the video shot are stored and compressed at a reduced resolution. At the decompression stage, a region-based dictionary is constructed from the key-frames and used to restore the reduced-resolution frames to the original resolution via statistical conditional sampling. The sampling approach is based on the conditional probability modeled by the CRF, using the constructed dictionary. Experimental results show that the proposed variable-resolution approach via statistical conditional sampling has potential for improving compression rates when compared to compressing the video at full resolution, while achieving higher video quality when compared to compressing the video at reduced resolution.
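A deterministic toy version of the dictionary-based restoration step can be sketched as follows: each block of a reduced-resolution frame is replaced by the full-resolution key-frame block whose downsampled version matches it best. This nearest-patch lookup stands in for the paper's CRF-driven statistical conditional sampling; the block sizes and the block-mean downsampler are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
B, F = 8, 2          # high-res block size, downsampling factor

def downsample(img, f=F):
    """Block-mean downsampling (stand-in for the paper's resolution reduction)."""
    h, w = img.shape
    return img.reshape(h // f, f, w // f, f).mean(axis=(1, 3))

def blocks(img, b):
    """Non-overlapping b x b blocks in row-major order."""
    h, w = img.shape
    return [img[i:i+b, j:j+b] for i in range(0, h, b) for j in range(0, w, b)]

# Key-frame stored at full resolution; build a (low-res patch -> high-res patch) dictionary.
key = rng.random((32, 32))
dictionary = list(zip(blocks(downsample(key), B // F), blocks(key, B)))

def restore(low_frame):
    """Replace each low-res block by the high-res block of its nearest dictionary entry."""
    h, w = low_frame.shape
    out = np.zeros((h * F, w * F))
    b = B // F
    for i in range(0, h, b):
        for j in range(0, w, b):
            patch = low_frame[i:i+b, j:j+b]
            lo, hi = min(dictionary, key=lambda e: np.sum((e[0] - patch) ** 2))
            out[i*F:i*F+B, j*F:j*F+B] = hi
    return out

# A reduced-resolution frame identical to the downsampled key-frame restores exactly.
restored = restore(downsample(key))
```

The statistical sampling in the paper generalizes this lookup by drawing from a conditional distribution over dictionary patches rather than taking the single nearest one.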

  20. Cascaded quadratic soliton compression of high-power femtosecond fiber lasers in Lithium Niobate crystals

    DEFF Research Database (Denmark)

    Bache, Morten; Moses, Jeffrey; Wise, Frank W.

    2008-01-01

The output of a high-power femtosecond fiber laser is typically 300 fs with a wavelength around $\lambda=1030-1060$ nm. Our numerical simulations show that cascaded quadratic soliton compression in bulk LiNbO$_3$ can compress such pulses to below 100 fs.

  1. Compressed air production with waste heat utilization in industry

    Science.gov (United States)

    Nolting, E.

    1984-06-01

The centralized power-heat coupling (PHC) technique using block heating power stations is presented. Compressed air production with an internal combustion engine drive in a PHC scheme achieves a high degree of primary energy utilization; cost savings of 50% are reached compared to conventional production. The simultaneous utilization of compressed air and heat is especially attractive. A speed-regulated drive via an internal combustion motor gives a further saving of 10% to 20% compared to intermittent operation. The high fuel utilization efficiency (around 80%) leads to payback after two years for operation times of 3000 hr.

  2. High speed and high resolution interrogation of a fiber Bragg grating sensor based on microwave photonic filtering and chirped microwave pulse compression.

    Science.gov (United States)

    Xu, Ou; Zhang, Jiejun; Yao, Jianping

    2016-11-01

    High speed and high resolution interrogation of a fiber Bragg grating (FBG) sensor based on microwave photonic filtering and chirped microwave pulse compression is proposed and experimentally demonstrated. In the proposed sensor, a broadband linearly chirped microwave waveform (LCMW) is applied to a single-passband microwave photonic filter (MPF) which is implemented based on phase modulation and phase modulation to intensity modulation conversion using a phase modulator (PM) and a phase-shifted FBG (PS-FBG). Since the center frequency of the MPF is a function of the central wavelength of the PS-FBG, when the PS-FBG experiences a strain or temperature change, the wavelength is shifted, which leads to the change in the center frequency of the MPF. At the output of the MPF, a filtered chirped waveform with the center frequency corresponding to the applied strain or temperature is obtained. By compressing the filtered LCMW in a digital signal processor, the resolution is improved. The proposed interrogation technique is experimentally demonstrated. The experimental results show that interrogation sensitivity and resolution as high as 1.25 ns/με and 0.8 με are achieved.
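The pulse-compression idea at the heart of the interrogation scheme can be sketched numerically: a long linearly chirped waveform is correlated with itself (matched filtering), collapsing its energy into a peak roughly 1/B wide, where B is the chirp bandwidth. The parameters below are illustrative and scaled down from microwave rates; this is not the authors' DSP code.

```python
import numpy as np

fs, T = 1000.0, 1.0                 # sample rate (Hz) and chirp duration (s) -- illustrative
t = np.arange(0, T, 1 / fs)
f0, f1 = 50.0, 450.0                # chirp sweeps f0 -> f1 (bandwidth B = 400 Hz)
phase = 2 * np.pi * (f0 * t + 0.5 * (f1 - f0) / T * t**2)
chirp = np.cos(phase)               # model of the linearly chirped microwave waveform

# Matched filtering (pulse compression): correlate the received chirp with itself.
compressed = np.correlate(chirp, chirp, mode="full")

# The T-long chirp collapses to a narrow peak ~1/B wide at zero lag.
peak = int(np.argmax(np.abs(compressed)))
mainlobe = int(np.sum(np.abs(compressed) > 0.5 * np.abs(compressed[peak])))
print(peak, mainlobe)
```

In the sensor, the lag of this compressed peak encodes the MPF center frequency, and hence the strain- or temperature-induced wavelength shift of the PS-FBG.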

  3. Crystal structure of actinide metals at high compression

    International Nuclear Information System (INIS)

    Fast, L.; Soederlind, P.

    1995-08-01

The crystal structures of some light actinide metals are studied theoretically as a function of applied pressure. The first-principles electronic structure theory is formulated in the framework of density functional theory, with the gradient-corrected local density approximation for the exchange-correlation functional. The light actinide metals are shown to be well described as itinerant (metallic) f-electron metals and generally display crystal structures which, in agreement with previous theoretical suggestions, show an increasing degree of symmetry and close-packing upon compression. The theoretical calculations agree well with available experimental data. At very high compression, the theory predicts close-packed structures such as the fcc or hcp structures, or the nearly close-packed bcc structure, for the light actinide metals. A simple canonical band picture is presented to explain in which particular close-packed form these metals will crystallize at ultra-high pressure.

  4. PET image reconstruction with rotationally symmetric polygonal pixel grid based highly compressible system matrix

    International Nuclear Information System (INIS)

    Yu Yunhan; Xia Yan; Liu Yaqiang; Wang Shi; Ma Tianyu; Chen Jing; Hong Baoyu

    2013-01-01

To achieve maximum compression of the system matrix in positron emission tomography (PET) image reconstruction, we proposed a polygonal image pixel division strategy in accordance with the rotationally symmetric PET geometry. A geometrical definition and indexing rule for polygonal pixels were established. Image conversion from the polygonal pixel structure to the conventional rectangular pixel structure was implemented using a conversion matrix. A set of test images were analytically defined in the polygonal pixel structure, converted to conventional rectangular pixel based images, and correctly displayed, which verified the correctness of the image definition, conversion description and conversion of the polygonal pixel structure. A compressed system matrix for PET image reconstruction was generated by the tap model and tested by forward-projecting three different distributions of radioactive sources to the sinogram domain and comparing them with theoretical predictions. On a practical small animal PET scanner, a compression ratio of 12.6:1 of the system matrix size was achieved with the polygonal pixel structure, compared with the conventional rectangular pixel based tap-mode one. OS-EM iterative image reconstruction algorithms with the polygonal and conventional Cartesian pixel grids were developed. A hot rod phantom was scanned and reconstructed on both grids with reasonable time cost. The resolution of the reconstructed images was 1.35 mm in both cases. We conclude that it is feasible to reconstruct and display images in a polygonal image pixel structure based on a compressed system matrix in PET image reconstruction. (authors)
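The kind of storage saving a rotationally symmetric pixel grid permits can be illustrated with a toy model in which the system-matrix block for each detector view is a cyclic column shift of the view-0 block, so only one block needs to be stored. The geometry and the numbers below are invented for illustration and are unrelated to the scanner's actual 12.6:1 figure.

```python
import numpy as np

# Toy model: a ring scanner with n_views rotationally symmetric views of n_pix
# rotationally indexed (polygonal) pixels.  In such a geometry the matrix block
# for view k is the view-0 block with its pixel columns cyclically shifted.
rng = np.random.default_rng(2)
n_views, n_bins, n_pix = 12, 16, 24

base = rng.random((n_bins, n_pix))           # store the system-matrix block for view 0 only

def view_block(k):
    """Recover the block for view k by index remapping instead of storage."""
    shift = k * (n_pix // n_views)           # pixels advanced per view step
    return np.roll(base, shift, axis=1)

full = np.vstack([view_block(k) for k in range(n_views)])
ratio = full.size / base.size                # storage saved by exploiting symmetry
print(ratio)
```

A reconstruction loop can call `view_block(k)` on the fly, trading a cheap index remap for an n_views-fold reduction in stored matrix elements.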

  5. Compressibility, turbulence and high speed flow

    CERN Document Server

    Gatski, Thomas B

    2009-01-01

This book introduces the reader to the field of compressible turbulence and compressible turbulent flows across a broad speed range through a unique complementary treatment of both the theoretical foundations and the measurement and analysis tools currently used. For the computation of turbulent compressible flows, current methods of averaging and filtering are presented so that the reader is exposed to a consistent development of applicable equation sets for both the mean or resolved fields as well as the transport equations for the turbulent stress field. For the measurement of turbulent compressible flows, current techniques ranging from hot-wire anemometry to PIV are evaluated and their limitations assessed. Characterizing dynamic features of free shear flows, including jets, mixing layers and wakes, and wall-bounded flows, including shock-turbulence and shock boundary-layer interactions, obtained from computations, experiments and simulations, are discussed. Key features: * Describes prediction methodologies in...

  6. Double Compression Expansion Engine: A Parametric Study on a High-Efficiency Engine Concept

    KAUST Repository

    Bhavani Shankar, Vijai Shankar

    2018-04-03

    The Double compression expansion engine (DCEE) concept has exhibited a potential for achieving high brake thermal efficiencies (BTE). The effect of different engine components on system efficiency was evaluated in this work using GT Power simulations. A parametric study on piston insulation, convection heat transfer multiplier, expander head insulation, insulation of connecting pipes, ports and tanks, and the expander intake valve lift profiles was conducted to understand the critical parameters that affected engine efficiency. The simulations were constrained to a constant peak cylinder pressure of 300 bar, and a fixed combustion phasing. The results from this study would be useful in making technology choices that will help realise the potential of this engine concept.

  7. Statistical approach to predict compressive strength of high workability slag-cement mortars

    International Nuclear Information System (INIS)

    Memon, N.A.; Memon, N.A.; Sumadi, S.R.

    2009-01-01

This paper reports an attempt to develop empirical expressions to estimate/predict the compressive strength of high-workability slag-cement mortars. Experimental data from 54 mortar mixes were used. The mortars were prepared with slag as cement replacement of the order of 0, 50 and 60%. The flow (workability) was maintained at 136±3%. The numerical and statistical analysis was performed using the database computer software Microsoft Office Excel 2003. Three empirical mathematical models were developed to estimate/predict the 28-day compressive strength of high-workability slag-cement mortars with 0, 50 and 60% slag, which predict the values with an accuracy between 97 and 98%. Finally, a generalized empirical mathematical model was proposed which can predict the 28-day compressive strength of high-workability mortars with an accuracy of up to 95%. (author)
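The kind of empirical fit the paper describes can be sketched with ordinary least squares on hypothetical data: strength as a linear function of slag replacement level. The coefficients, the data, and the linear form below are illustrative assumptions, not the paper's fitted expressions.

```python
import numpy as np

# Hypothetical data in the spirit of the paper: 28-day compressive strength (MPa)
# of mortars versus slag replacement level (%).  Values are synthetic.
slag = np.array([0, 0, 50, 50, 60, 60], dtype=float)      # % cement replaced
strength = 48.0 - 0.15 * slag                             # assumed observations

# Least-squares fit of strength = a + b * slag, as a spreadsheet trendline would do.
X = np.column_stack([np.ones_like(slag), slag])
(a, b), *_ = np.linalg.lstsq(X, strength, rcond=None)

predicted = a + b * 55.0    # predict strength at 55% slag replacement
print(a, b, predicted)
```

Accuracy figures such as the paper's 95-98% would then come from comparing such predictions against held-out measured strengths.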

  8. Highly Compressible Carbon Sponge Supercapacitor Electrode with Enhanced Performance by Growing Nickel-Cobalt Sulfide Nanosheets.

    Science.gov (United States)

    Liang, Xu; Nie, Kaiwen; Ding, Xian; Dang, Liqin; Sun, Jie; Shi, Feng; Xu, Hua; Jiang, Ruibin; He, Xuexia; Liu, Zonghuai; Lei, Zhibin

    2018-03-28

The development of compressible supercapacitors relies heavily on the innovative design of electrode materials with both superior compression properties and high capacitive performance. This work reports a highly compressible supercapacitor electrode prepared by growing electroactive NiCo2S4 (NCS) nanosheets on a compressible carbon sponge (CS). The strong adhesion of the metallic conductive NCS nanosheets to the highly porous carbon scaffold enables the CS-NCS composite electrode to exhibit enhanced conductivity and ideal structural integrity during repeated compression-release cycles. Accordingly, the CS-NCS composite electrode delivers a specific capacitance of 1093 F g⁻¹ at 0.5 A g⁻¹ and remarkable rate performance with 91% capacitance retention over the range of 0.5-20 A g⁻¹. Capacitance measurements under a strain of 60% show that the incorporation of NCS nanosheets in the CS scaffold leads to an over five-fold enhancement in gravimetric capacitance and a 17-fold enhancement in volumetric capacitance. These performances make the CS-NCS composite one of the promising candidates for applications in compressible electrochemical energy storage devices.

  9. High Compressive Stresses Near the Surface of the Sierra Nevada, California

    Science.gov (United States)

    Martel, S. J.; Logan, J. M.; Stock, G. M.

    2012-12-01

Observations and stress measurements in granitic rocks of the Sierra Nevada, California reveal strong compressive stresses parallel to the surface of the range at shallow depths. New overcoring measurements show high compressive stresses at three locations along an east-west transect through Yosemite National Park. At the westernmost site (west end of Tenaya Lake), the mean compressive stress is 1.9 MPa. At the middle site (north shore of Tenaya Lake) the mean compressive stress is 6.8 MPa. At the easternmost site (south side of Lembert Dome) the mean compressive stress is 3.0 MPa. The trend of the most compressive stress at these sites is within ~30° of the strike of the local topographic surface. Previously published hydraulic fracturing measurements by others elsewhere in the Sierra Nevada indicate surface-parallel compressive stresses of several MPa within several tens of meters of the surface, with the stress magnitudes generally diminishing to the west. Both the new and the previously published compressive stress magnitudes are consistent with the presence of sheeting joints (i.e., "exfoliation joints") in the Sierra Nevada, which require lateral compressive stresses of several MPa to form. These fractures are widespread: they are distributed in granitic rocks from the north end of the range to its southern tip and across the width of the range. Uplift along the normal faults of the eastern escarpment, recently measured by others at ~1-2 mm/yr, probably contributes substantially to these stresses. Geodetic surveys reveal that normal faulting flexes a range concave upwards in response to fault slip, and this flexure is predicted by elastic dislocation models. The topographic relief of the eastern escarpment of the Sierra Nevada is 2-4 km, and since alluvial fill generally buries the bedrock east of the faults, the offset of granitic rocks is at least that much. Compressive stresses of several MPa are predicted by elastic dislocation models of the range front.

  10. Effects of errors in velocity tilt on maximum longitudinal compression during neutralized drift compression of intense beam pulses: II. Analysis of experimental data of the Neutralized Drift Compression eXperiment-I (NDCX-I)

    International Nuclear Information System (INIS)

    Massidda, Scott; Kaganovich, Igor D.; Startsev, Edward A.; Davidson, Ronald C.; Lidia, Steven M.; Seidl, Peter; Friedman, Alex

    2012-01-01

Neutralized drift compression offers an effective means for particle beam focusing and current amplification with applications to heavy ion fusion. In the Neutralized Drift Compression eXperiment-I (NDCX-I), a non-relativistic ion beam pulse is passed through an inductive bunching module that produces a longitudinal velocity modulation. Due to the applied velocity tilt, the beam pulse compresses during neutralized drift. The ion beam pulse can be compressed by a factor of more than 100; however, errors in the velocity modulation affect the compression ratio in complex ways. We have performed a study of how the longitudinal compression of a typical NDCX-I ion beam pulse is affected by the initial errors in the acquired velocity modulation. Without any voltage errors, an ideal compression is limited only by the initial energy spread of the ion beam, ΔE_b. In the presence of large voltage errors, δU ≫ ΔE_b, the maximum compression ratio is found to be inversely proportional to the geometric mean of the relative error in velocity modulation and the relative intrinsic energy spread of the beam ions. Although small parts of a beam pulse can achieve high local values of compression ratio, the acquired velocity errors cause these parts to compress at different times, limiting the overall compression of the ion beam pulse.
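The scaling result stated in the abstract can be written compactly; the symbols U (bunching voltage) and E_b (beam energy) used to form the relative quantities are inferred from context:

```latex
% For large voltage errors, \delta U \gg \Delta E_b, the abstract states that the
% maximum compression ratio scales as the inverse geometric mean of the relative
% velocity-modulation error and the relative intrinsic energy spread:
C_{\max} \;\propto\; \left[\, \frac{\delta U}{U} \cdot \frac{\Delta E_b}{E_b} \,\right]^{-1/2}
```

In the error-free limit δU → 0, this expression is cut off by the intrinsic spread ΔE_b alone, consistent with the "ideal compression" case described above.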

  11. Exploring Potential Foreshocks on Highly Compressed Patches in a Rate-and-State Fault Model

    Science.gov (United States)

    Higgins, N.; Lapusta, N.

    2015-12-01

    On both natural and laboratory faults, some mainshocks are preceded by foreshocks. Such foreshocks may be triggered by aseismic processes of the mainshock nucleation at fault heterogeneities such as bumps, as inferred in some laboratory experiments. We explore a rate-and-state fault model in which potential foreshocks occur on patches of elevated normal compression (by a factor of 5 to 10) within a larger velocity-weakening (VW) region, using 3D numerical simulations of long-term earthquake sequences and aseismic slip. We find that this model produces isolated microseismicity during the nucleation of a larger-scale seismic event, under the following conditions: (i) Patch diameter D is comparable to or larger than the patch nucleation size h*patch; (ii) D is much smaller than the nucleation size h*main for the larger-scale VW region; otherwise, a patch-hosted event simply starts the larger-scale event; (iii) the patches are sufficiently separated to prevent them triggering each other nearly instantaneously. Hence the nucleation sizes h*main and h*patch need to be substantially different, by a factor of around 10 in our simulations so far, and potentially much more. The aforementioned separation of scales can be achieved by assigning high levels of compression on the patches. However, one would expect unrealistically large stress drops for events on such patches. Remarkably, in this model, we find that the stress drops of the patch-hosted events are reasonable and roughly constant, despite a wide variation in the patch compression, due to patch ruptures extending into the surrounding VW region. Furthermore, for D close to h*patch, a substantial part of the stress change on the patch occurs aseismically. Our current work is directed towards quantifying and explaining these trends, as well as exploring whether the microseismicity occurring on highly compressed patches due to nucleation-induced creep has any observable differences from other events.

  12. Investigation on compression behaviour of highly compacted GMZ01 bentonite with suction and temperature control

    International Nuclear Information System (INIS)

    Ye, W.M.; Zhang, Y.W.; Chen, B.; Zheng, Z.J.; Chen, Y.G.; Cui, Y.J.

    2012-01-01

Highlights: ► Heating-induced volumetric change of GMZ01 bentonite depends on suction. ► Suction has a significant influence on compressibility. ► Temperature has a slight influence on compressibility. - Abstract: In this paper, an oedometer with suction and temperature control was developed. Mechanical compaction tests were performed on highly compacted GMZ01 bentonite, which has been recognized as a potential buffer/backfill material for the construction of Chinese high-level radioactive waste (HLW) geological repositories, under conditions of suction ranging from 0 to 110 MPa, temperature from 20 to 80 °C and vertical pressure from 0.1 to 80 MPa. Based on the test results, suction and temperature effects on the compressibility parameters are investigated. Results reveal that: (1) at high suctions, heating induced an expansion, while at low suctions contraction was induced by heating; the measured thermal expansion coefficient of GMZ01 bentonite is 1 × 10⁻⁴ °C⁻¹; (2) with increasing suction, the elastic compressibility κ and the plastic compressibility λ(s) of the highly compacted GMZ01 bentonite decrease, while the pre-consolidation pressure increases markedly; (3) with increasing temperature, the elastic compressibility of compacted GMZ01 bentonite changes insignificantly, while the plastic compressibility λ(s) slightly decreases and the yield surface tends to shrink.

  13. A novel ECG data compression method based on adaptive Fourier decomposition

    Science.gov (United States)

    Tan, Chunyu; Zhang, Liming

    2017-12-01

This paper presents a novel electrocardiogram (ECG) compression method based on adaptive Fourier decomposition (AFD). AFD is a newly developed signal decomposition approach that converges quickly and can therefore reconstruct ECG signals with high fidelity. Unlike most high-performance algorithms, our method does not use any preprocessing operation before compression. Huffman coding is employed for further compression. Validated on 48 ECG recordings of the MIT-BIH arrhythmia database, the proposed method achieves an average compression ratio (CR) of 35.53 and a percentage root-mean-square difference (PRD) of 1.47% with N = 8 decomposition steps, together with a robust PRD-CR relationship. The results demonstrate that the proposed method performs well compared with state-of-the-art ECG compressors.
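The two figures of merit quoted above are standard and easy to compute. A small sketch of both metrics on a toy signal and reconstruction (the bit counts are purely illustrative, not the AFD codec's output):

```python
import math

def prd_percent(x, x_rec):
    """Percentage root-mean-square difference:
    100 * sqrt(sum((x - x_rec)^2) / sum(x^2))."""
    num = sum((a - b) ** 2 for a, b in zip(x, x_rec))
    den = sum(a ** 2 for a in x)
    return 100.0 * math.sqrt(num / den)

def compression_ratio(n_original_bits, n_compressed_bits):
    """CR = size of original data / size of compressed data."""
    return n_original_bits / n_compressed_bits

# Toy example: a sine wave and a reconstruction with 1% amplitude error.
x = [math.sin(0.01 * i) for i in range(1000)]
x_rec = [v * 0.99 for v in x]
print(round(prd_percent(x, x_rec), 2))        # → 1.0
print(compression_ratio(1000 * 11, 310))      # hypothetical bit counts
```

A 1% amplitude error gives a PRD of exactly 1%, which makes PRD a convenient sanity check when comparing codecs at a fixed CR.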

  14. Cosmological Particle Data Compression in Practice

    Science.gov (United States)

    Zeyen, M.; Ahrens, J.; Hagen, H.; Heitmann, K.; Habib, S.

    2017-12-01

In cosmological simulations, trillions of particles are handled and several terabytes of unstructured particle data are generated in each time step. Transferring this data directly from memory to disk in an uncompressed way results in a massive load on I/O and storage systems. Hence, one goal of domain scientists is to compress the data before storing it to disk while minimizing the loss of information. To prevent reading back uncompressed data from disk, this can be done in an in-situ process. Since the simulation continuously generates data, the available time for the compression of one time step is limited. Therefore, the evaluation of compression techniques has shifted from focusing only on compression rates to include run-times and scalability. In recent years several compression techniques for cosmological data have become available. These techniques can be either lossy or lossless. For both cases, this study aims to evaluate and compare the state-of-the-art compression techniques for unstructured particle data. It focuses on the techniques available in the Blosc framework with its multi-threading support, the XZ Utils toolkit with the LZMA algorithm that achieves high compression rates, and the widespread FPZIP and ZFP methods for lossy compression. For the investigated compression techniques, quantitative performance indicators such as compression rates, run-time/throughput, and reconstruction errors are measured. Based on these factors, this study offers a comprehensive analysis of the individual techniques and discusses their applicability for in-situ compression. In addition, domain-specific measures are evaluated on the reconstructed data sets, and the relative error rates and statistical properties are analyzed and compared. Based on this study, future challenges and directions in the compression of unstructured cosmological particle data are identified.
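The kind of measurement the study describes, compression rate against throughput on particle-like float data, can be sketched with the standard library's zlib and lzma as stand-ins for the Blosc and XZ Utils codecs it actually evaluates (the payload here is synthetic Gaussian coordinates, not simulation output):

```python
import lzma
import random
import struct
import time
import zlib

random.seed(0)
# Synthetic "particle" payload: 50k float32 coordinates packed to bytes.
coords = [random.gauss(0.0, 1.0) for _ in range(50_000)]
raw = struct.pack(f"<{len(coords)}f", *coords)

for name, compress in [("zlib", lambda b: zlib.compress(b, 6)),
                       ("lzma", lambda b: lzma.compress(b, preset=6))]:
    t0 = time.perf_counter()
    packed = compress(raw)
    dt = time.perf_counter() - t0
    ratio = len(raw) / len(packed)
    mbps = len(raw) / dt / 1e6
    print(f"{name}: ratio={ratio:.2f}, throughput={mbps:.1f} MB/s")
```

Random float mantissas compress poorly while the clustered exponent bytes compress well, which is exactly why the ratio/throughput trade-off, and not the ratio alone, decides in-situ applicability.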

  15. Anomalous anisotropic compression behavior of superconducting CrAs under high pressure

    Science.gov (United States)

    Yu, Zhenhai; Wu, Wei; Hu, Qingyang; Zhao, Jinggeng; Li, Chunyu; Yang, Ke; Cheng, Jinguang; Luo, Jianlin; Wang, Lin; Mao, Ho-kwang

    2015-01-01

CrAs has been observed to exhibit bulk superconductivity under high pressure. To understand the superconducting mechanism and explore the correlation between structure and superconductivity, the high-pressure structural evolution of CrAs was investigated using the angle-dispersive X-ray diffraction (XRD) method. The structure of CrAs remains stable up to 1.8 GPa, whereas the lattice parameters exhibit anomalous compression behavior. With increasing pressure, the lattice parameters a and c both demonstrate a nonmonotonic change, while the lattice parameter b undergoes a rapid contraction at ∼0.18−0.35 GPa, which suggests that a pressure-induced isostructural phase transition occurs in CrAs. Above the phase transition pressure, the axial compressibilities of CrAs exhibit remarkable anisotropy. A schematic band model was used to address the anomalous compression behavior of CrAs. The present results shed light on the structural and related electronic responses to high pressure, which play a key role in understanding the superconductivity of CrAs. PMID:26627230

  16. Predicting the perceived sound quality of frequency-compressed speech.

    Directory of Open Access Journals (Sweden)

    Rainer Huber

    Full Text Available The performance of objective speech and audio quality measures for the prediction of the perceived quality of frequency-compressed speech in hearing aids is investigated in this paper. A number of existing quality measures have been applied to speech signals processed by a hearing aid, which compresses speech spectra along frequency in order to make information contained in higher frequencies audible for listeners with severe high-frequency hearing loss. Quality measures were compared with subjective ratings obtained from normal hearing and hearing impaired children and adults in an earlier study. High correlations were achieved with quality measures computed by quality models that are based on the auditory model of Dau et al., namely, the measure PSM, computed by the quality model PEMO-Q; the measure qc, computed by the quality model proposed by Hansen and Kollmeier; and the linear subcomponent of the HASQI. For the prediction of quality ratings by hearing impaired listeners, extensions of some models incorporating hearing loss were implemented and shown to achieve improved prediction accuracy. Results indicate that these objective quality measures can potentially serve as tools for assisting in initial setting of frequency compression parameters.

  17. Investigation on compression behaviour of highly compacted GMZ01 bentonite with suction and temperature control

    Energy Technology Data Exchange (ETDEWEB)

    Ye, W.M., E-mail: ye_tju@tongji.edu.cn [Key Laboratory of Geotechnical and Underground Engineering of Ministry of Education, Tongji University, Shanghai 200092 (China); United Research Center for Urban Environment and Sustainable Development, The Ministry of Education, Shanghai 200092 (China); Zhang, Y.W.; Chen, B.; Zheng, Z.J.; Chen, Y.G. [Key Laboratory of Geotechnical and Underground Engineering of Ministry of Education, Tongji University, Shanghai 200092 (China); Cui, Y.J. [Key Laboratory of Geotechnical and Underground Engineering of Ministry of Education, Tongji University, Shanghai 200092 (China); Ecole des Ponts ParisTech, UR Navier/CERMES 77455 (France)

    2012-11-15

Highlights: ► Heating induced volumetric change of GMZ01 bentonite depends on suction. ► Suction has significant influence on compressibility. ► Temperature has slight influence on compressibility. - Abstract: In this paper, an oedometer with suction and temperature control was developed. Mechanical compaction tests have been performed on the highly compacted GMZ01 bentonite, which has been recognized as potential buffer/backfill material for construction of Chinese high-level radioactive waste (HLW) geological repository, under conditions of suction ranging from 0 to 110 MPa, temperature from 20 to 80 °C and vertical pressure from 0.1 to 80 MPa. Based on the test results, suction and temperature effects on compressibility parameters are investigated. Results reveal that: (1) at high suctions, heating induced an expansion, while contraction is induced by heating at low suctions. The thermal expansion coefficient of GMZ01 bentonite measured is 1 × 10⁻⁴ °C⁻¹; (2) with increasing suction, the elastic compressibility κ and the plastic compressibility λ(s) of the highly compacted GMZ01 bentonite decrease, while the pre-consolidation pressure increases markedly; (3) with increasing temperature, the elastic compressibility of compacted GMZ01 bentonite changes insignificantly, while the plastic compressibility λ(s) slightly decreases and the yield surface tends to shrink.

  18. Material Compressing Test of the High Polymer Part Used in Draft Gear of Heavy Load Locomotive

    Directory of Open Access Journals (Sweden)

    Wei Yangang

    2016-01-01

Full Text Available According to the actual load cases of heavy load locomotives, compression tests were carried out on the high-polymer material used in the draft gear. The stress-strain relationship during compression was obtained by comparing the results of many tests performed under different conditions. The relationship between stress and strain is nonlinear over a large strain range, but approximately linear at small strains. The Chinese-made high-polymer material and the imported material were also compared through the tests; the results show that their compressive properties are almost identical. This research provides a foundation for studying the structural elasticity of the draft gear.
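The "linear at small strain, nonlinear at large strain" observation is usually quantified by fitting a modulus over the small-strain window only. A sketch with a hypothetical hyperelastic-like law (illustrative constants, not the tested polymer):

```python
# Hypothetical material law: sigma = E*eps + c*eps^3 (stress in MPa).
E_true, c = 50.0, 4000.0

def sigma(eps):
    return E_true * eps + c * eps ** 3

def fit_modulus(strains):
    """Least-squares slope of a line through the origin: sum(s*e)/sum(e*e)."""
    num = sum(sigma(e) * e for e in strains)
    den = sum(e * e for e in strains)
    return num / den

small = [i * 0.001 for i in range(1, 11)]   # strains up to 1%
large = [i * 0.01 for i in range(1, 11)]    # strains up to 10%

print(fit_modulus(small))  # close to E_true: the cubic term is negligible
print(fit_modulus(large))  # noticeably stiffer: the nonlinearity dominates
```

Fitting over the large-strain window inflates the apparent modulus by roughly 50% for these constants, which illustrates why the linear approximation should only be trusted in the small-strain range the abstract identifies.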

  19. Experimental Compressibility of Molten Hedenbergite at High Pressure

    Science.gov (United States)

    Agee, C. B.; Barnett, R. G.; Guo, X.; Lange, R. A.; Waller, C.; Asimow, P. D.

    2010-12-01

    Experiments using the sink/float method have bracketed the density of molten hedenbergite (CaFeSi2O6) at high pressures and temperatures. The experiments are the first of their kind to determine the compressibility of molten hedenbergite at high pressure and are part of a collaborative effort to establish a new database for an array of silicate melt compositions, which will contribute to the development of an empirically based predictive model that will allow calculation of silicate liquid density and compressibility over a wide range of P-T-X conditions where melting could occur in the Earth. Each melt composition will be measured using: (i) double-bob Archimedean method for melt density and thermal expansion at ambient pressure, (ii) sound speed measurements on liquids to constrain melt compressibility at ambient pressure, (iii) sink/float technique to measure melt density to 15 GPa, and (iv) shock wave measurements of P-V-E equation of state and temperature between 10 and 150 GPa. Companion abstracts on molten fayalite (Waller et al., 2010) and liquid mixes of hedenbergite-diopside and anorthite-hedenbergite-diopside (Guo and Lange, 2010) are also presented at this meeting. In the present study, the hedenbergite starting material was synthesized at the Experimental Petrology Lab, University of Michigan, where melt density, thermal expansion, and sound speed measurements were also carried out. The starting material has also been loaded into targets at the Caltech Shockwave Lab, and experiments there are currently underway. We report here preliminary results from static compression measurement performed at the Department of Petrology, Vrije Universiteit, Amsterdam, and the High Pressure Lab, Institute of Meteoritics, University of New Mexico. Experiments were carried out in Quick Press piston-cylinder devices and a Walker-style multi-anvil device. Sink/float marker spheres implemented were gem quality synthetic forsterite (Fo100), San Carlos olivine (Fo90), and

  20. FEM Modeling of the Relationship between the High-Temperature Hardness and High-Temperature, Quasi-Static Compression Experiment.

    Science.gov (United States)

    Zhang, Tao; Jiang, Feng; Yan, Lan; Xu, Xipeng

    2017-12-26

The high-temperature hardness test has a wide range of applications, but lacks test standards. The purpose of this study is to develop a finite element method (FEM) model of the relationship between the high-temperature hardness and high-temperature, quasi-static compression experiment, which is a mature test technology with test standards. A high-temperature, quasi-static compression test and a high-temperature hardness test were carried out. The relationship between the high-temperature, quasi-static compression test results and the high-temperature hardness test results was built by the development of a high-temperature indentation finite element (FE) simulation. The simulated and experimental results of high-temperature hardness have been compared, verifying the accuracy of the high-temperature indentation FE simulation. The simulated results show that the high-temperature hardness basically does not change with load when the pile-up of material during indentation is ignored. The simulated and experimental results show that the decrease in hardness and thermal softening are consistent. The strain and stress of indentation were analyzed from the simulated contour. It was found that the strain increases with the increase of the test temperature, and the stress decreases with the increase of the test temperature.

  1. FEM Modeling of the Relationship between the High-Temperature Hardness and High-Temperature, Quasi-Static Compression Experiment

    Directory of Open Access Journals (Sweden)

    Tao Zhang

    2017-12-01

Full Text Available The high-temperature hardness test has a wide range of applications, but lacks test standards. The purpose of this study is to develop a finite element method (FEM) model of the relationship between the high-temperature hardness and high-temperature, quasi-static compression experiment, which is a mature test technology with test standards. A high-temperature, quasi-static compression test and a high-temperature hardness test were carried out. The relationship between the high-temperature, quasi-static compression test results and the high-temperature hardness test results was built by the development of a high-temperature indentation finite element (FE) simulation. The simulated and experimental results of high-temperature hardness have been compared, verifying the accuracy of the high-temperature indentation FE simulation. The simulated results show that the high-temperature hardness basically does not change with load when the pile-up of material during indentation is ignored. The simulated and experimental results show that the decrease in hardness and thermal softening are consistent. The strain and stress of indentation were analyzed from the simulated contour. It was found that the strain increases with the increase of the test temperature, and the stress decreases with the increase of the test temperature.

  2. High-energy synchrotron X-ray radiography of shock-compressed materials

    Science.gov (United States)

    Rutherford, Michael E.; Chapman, David J.; Collinson, Mark A.; Jones, David R.; Music, Jasmina; Stafford, Samuel J. P.; Tear, Gareth R.; White, Thomas G.; Winters, John B. R.; Drakopoulos, Michael; Eakins, Daniel E.

    2015-06-01

This presentation will discuss the development and application of a high-energy (50 to 250 keV) synchrotron X-ray imaging method to study shock-compressed, high-Z samples at Beamline I12 at the Diamond Light Source synchrotron (Rutherford-Appleton Laboratory, UK). Shock waves are driven into materials using a portable, single-stage gas gun designed by the Institute of Shock Physics. Following plate impact, material deformation is probed in-situ by white-beam X-ray radiography and complementary velocimetry diagnostics. The high energies, large beam size (13 x 13 mm), and appreciable sample volumes (~1 cm³) viable for study at Beamline I12 complement existing in-house pulsed X-ray capabilities and studies at the Dynamic Compression Sector. The authors gratefully acknowledge the ongoing support of Imperial College London, EPSRC, STFC and the Diamond Light Source, and AWE Plc.

  3. Application of content-based image compression to telepathology

    Science.gov (United States)

    Varga, Margaret J.; Ducksbury, Paul G.; Callagy, Grace

    2002-05-01

    Telepathology is a means of practicing pathology at a distance, viewing images on a computer display rather than directly through a microscope. Without compression, images take too long to transmit to a remote location and are very expensive to store for future examination. However, to date the use of compressed images in pathology remains controversial. This is because commercial image compression algorithms such as JPEG achieve data compression without knowledge of the diagnostic content. Often images are lossily compressed at the expense of corrupting informative content. None of the currently available lossy compression techniques are concerned with what information has been preserved and what data has been discarded. Their sole objective is to compress and transmit the images as fast as possible. By contrast, this paper presents a novel image compression technique, which exploits knowledge of the slide diagnostic content. This 'content based' approach combines visually lossless and lossy compression techniques, judiciously applying each in the appropriate context across an image so as to maintain 'diagnostic' information while still maximising the possible compression. Standard compression algorithms, e.g. wavelets, can still be used, but their use in a context sensitive manner can offer high compression ratios and preservation of diagnostically important information. When compared with lossless compression the novel content-based approach can potentially provide the same degree of information with a smaller amount of data. When compared with lossy compression it can provide more information for a given amount of compression. The precise gain in the compression performance depends on the application (e.g. database archive or second opinion consultation) and the diagnostic content of the images.
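The content-based idea, lossless coding where diagnosis depends on the pixels and lossy coding elsewhere, can be illustrated in miniature with the stdlib's zlib: keep a region of interest (ROI) exact and coarsely quantize the background before entropy coding. This is only a 1D sketch with a hypothetical layout; real telepathology systems use wavelet codecs and actual segmentation:

```python
import random
import zlib

random.seed(1)
# Hypothetical 1D "slide": a diagnostic region embedded in noisy background.
background = bytes(random.randrange(256) for _ in range(20_000))
roi = bytes(random.randrange(256) for _ in range(2_000))

# Baseline: compress everything losslessly.
lossless_all = len(zlib.compress(background + roi))

# Content-based: ROI stays lossless; background is quantized to 16 grey
# levels (a lossy step) before entropy coding.
bg_quant = bytes((b >> 4) << 4 for b in background)
content_based = len(zlib.compress(bg_quant)) + len(zlib.compress(roi))

print(lossless_all, content_based)
assert zlib.decompress(zlib.compress(roi)) == roi   # ROI survives exactly
```

The combined stream is markedly smaller than the fully lossless one while the diagnostic region is reproduced bit-for-bit, which is the trade the paper describes: more compression for a given amount of preserved diagnostic information.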

  4. Generation of high intensity rf pulses in the ionosphere by means of in situ compression

    International Nuclear Information System (INIS)

    Cowley, S.C.; Perkins, F.W.; Valeo, E.J.

    1993-04-01

    We demonstrate, using a simple model, that high intensity pulses can be generated from a frequency-chirped modifier of much lower intensity by making use of the dispersive properties of the ionosphere. We show that a frequency-chirped pulse can be constructed so that its various components overtake each other at a prescribed height, resulting in large (up to one hundred times) transient intensity enhancements as compared to those achievable from a steady modifier operating at the same power. We examine briefly one possible application: the enhancement of plasma wave amplitudes which occurs as a result of the interaction of such a compressed pulse with pre-generated turbulence
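The overtaking mechanism can be sketched with a one-dimensional kinematic model: each frequency component travels at the cold-plasma group velocity v(f) = c·sqrt(1 − (fp/f)²), and the chirp is scheduled so that all components reach a chosen height together. The plasma frequency, height, and chirp band below are illustrative assumptions, not the paper's parameters:

```python
import math

C = 3.0e8     # speed of light, m/s
FP = 5.0e6    # ionospheric plasma frequency, Hz (illustrative)
H = 150e3     # target compression height, m

def v_group(f):
    """Cold-plasma group velocity for a wave of frequency f > FP."""
    return C * math.sqrt(1.0 - (FP / f) ** 2)

freqs = [5.5e6 + i * 0.1e6 for i in range(10)]   # chirp: 5.5 -> 6.4 MHz

# Launch schedule: the slower (lower-frequency) components leave first so
# that every component arrives at height H at the common time T.
T = max(H / v_group(f) for f in freqs)
launch = [T - H / v_group(f) for f in freqs]

arrivals = [t0 + H / v_group(f) for t0, f in zip(launch, freqs)]
spread = max(arrivals) - min(arrivals)
print(launch, spread)   # lowest frequency launches first; spread ~ 0
```

Energy emitted over the whole launch window piles up in a single short burst at H, which is the transient intensity enhancement the abstract describes.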

  5. P. W. Bridgman's contributions to the foundations of shock compression of condensed matter

    Energy Technology Data Exchange (ETDEWEB)

    Nellis, W J, E-mail: nellis@physics.harvard.ed [Department of Physics, Harvard University, Cambridge MA 02138 (United States)

    2010-03-01

Based on his 50-year career in static high-pressure research, P. W. Bridgman (PWB) is the father of modern high-pressure physics. What is not generally recognized is that Bridgman was also intimately connected with establishing shock compression as a scientific tool and he predicted major events in shock research that occurred up to 40 years after his death. In 1956 the first phase transition under shock compression was reported in Fe at 13 GPa (130 kbar). PWB said a phase transition could not occur on a ~1 μs timescale, thus setting off a controversy. The scientific legitimacy of shock compression was established 5 years later when static high-pressure researchers confirmed with x-ray diffraction the existence of ε-Fe. Once PWB accepted the fact that shock waves generated with chemical explosives were a valid scientific tool, he immediately realized that substantially higher pressures would be achieved with nuclear explosives. He included his ideas for achieving higher pressures in articles published a few years after his death. L. V. Altshuler eventually read Bridgman's articles and pursued the idea of using nuclear explosives to generate super high pressures, which has since morphed into today's giant lasers. PWB also anticipated combining static and shock methods, which today is done with pre-compression of a soft sample in a diamond anvil cell followed by laser-driven shock compression. One variation of that method is the reverberating-shock technique, in which the first shock pre-compresses a soft sample and subsequent reverberations isentropically compress the first-shocked state.

  6. A Streaming PCA VLSI Chip for Neural Data Compression.

    Science.gov (United States)

    Wu, Tong; Zhao, Wenfeng; Guo, Hongsun; Lim, Hubert H; Yang, Zhi

    2017-12-01

Neural recording system miniaturization and integration with low-power wireless technologies require compressing neural data before transmission. Feature extraction is a procedure to represent data in a low-dimensional space; its integration into a recording chip can be an efficient approach to compress neural data. In this paper, we propose a streaming principal component analysis (PCA) algorithm and its microchip implementation to compress multichannel local field potential (LFP) and spike data. The circuits have been designed in a 65-nm CMOS technology and occupy a silicon area of 0.06 mm². Throughout the experiments, the chip compresses LFPs by a factor of 10 with reconstruction errors as low as 1% and 144-nW/channel power consumption; for spikes, the achieved compression ratio is 25 with 8% reconstruction errors and 3.05-μW/channel power consumption. In addition, the algorithm and its hardware architecture can swiftly adapt to nonstationary spiking activities, which enables efficient hardware sharing among multiple channels to support a high-channel-count recorder.
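The core of streaming PCA is that the principal direction is updated one sample at a time, so the stream itself is never stored and each sample can be transmitted as a low-dimensional projection. A generic single-component sketch using Oja's rule (the paper's chip implements its own algorithm; this is only the underlying idea, on a synthetic 2-channel stream):

```python
import math
import random

random.seed(42)

def oja_stream(samples, eta=0.01):
    """Track the dominant principal component of a data stream with Oja's
    rule: w <- w + eta * y * (x - y * w), where y = w . x, keeping w at
    unit norm. Only w is stored, never the stream itself."""
    w = [1.0, 0.0]
    for x in samples:
        y = w[0] * x[0] + w[1] * x[1]          # projection onto w
        w = [w[0] + eta * y * (x[0] - y * w[0]),
             w[1] + eta * y * (x[1] - y * w[1])]
        n = math.hypot(w[0], w[1])
        w = [w[0] / n, w[1] / n]               # renormalize for stability
    return w

# Synthetic 2-channel stream whose dominant direction is (2, 1)/sqrt(5).
stream = []
for _ in range(5000):
    t = random.gauss(0.0, 1.0)
    stream.append((2.0 * t + random.gauss(0.0, 0.1),
                   1.0 * t + random.gauss(0.0, 0.1)))

w = oja_stream(stream)
alignment = abs(2.0 * w[0] + 1.0 * w[1]) / math.sqrt(5.0)
print(w, alignment)   # alignment with the true direction approaches 1
```

Once w has converged, transmitting the scalar projection y instead of the multichannel sample is the compression step; reconstruction is y·w, with error set by the variance off the principal direction.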

  7. Wave energy devices with compressible volumes.

    Science.gov (United States)

    Kurniawan, Adi; Greaves, Deborah; Chaplin, John

    2014-12-08

We present an analysis of wave energy devices with air-filled compressible submerged volumes, where variability of volume is achieved by means of a horizontal surface free to move up and down relative to the body. An analysis of bodies without power take-off (PTO) systems is first presented to demonstrate the positive effects a compressible volume could have on the body response. Subsequently, two compressible device variations are analysed. In the first variation, the compressible volume is connected to a fixed volume via an air turbine for PTO. In the second variation, a water column separates the compressible volume from another volume, which is fitted with an air turbine open to the atmosphere. Both floating and bottom-fixed, axisymmetric, configurations are considered, and linear analysis is employed throughout. Advantages and disadvantages of each device are examined in detail. Some configurations with displaced volumes less than 2000 m³ and with constant turbine coefficients are shown to be capable of achieving 80% of the theoretical maximum absorbed power over a wave period range of about 4 s.

  8. Compressive Transient Imaging

    KAUST Repository

    Sun, Qilin

    2017-04-01

High-resolution transient/3D imaging technology is of high interest in both scientific research and commercial applications. Current transient imaging methods suffer from either low resolution or time-consuming mechanical scanning. We propose a new method based on TCSPC and compressive sensing to achieve high-resolution transient imaging with a capture process of several seconds. A picosecond laser sends a series of equal-interval pulses while the synchronized SPAD camera's detection gate window is given a precise phase delay in each cycle. After capturing enough points, we can assemble the whole signal. By inserting a DMD device into the system, we modulate all frames of data with binary random patterns, so that a super-resolution transient/3D image can be reconstructed later. Because the low fill factor of the SPAD sensor makes the compressive sensing problem ill-conditioned, we designed and fabricated a diffractive microlens array. We propose a new CS reconstruction algorithm that simultaneously denoises measurements corrupted by Poisson noise. Instead of a single SPAD sensor, we chose a SPAD array because it drastically reduces the required number of measurements and the reconstruction time: with an array, only small patches need to be reconstructed from a few measurements, whereas reconstructing a high-resolution image from a single sensor is difficult. In this thesis, we evaluated the reconstruction methods using both clean measurements and versions corrupted by Poisson noise. The results show how integration over the layers influences image quality, and that our algorithm works well when the measurements suffer from non-trivial Poisson noise. This is a breakthrough in the areas of both transient imaging and compressive sensing.

  9. Bunch Compression Stability Dependence on RF Parameters

    CERN Document Server

    Limberg, T

    2005-01-01

In present designs for FELs with high electron peak currents and short bunch lengths, higher-harmonic RF systems are often used to optimize the final longitudinal charge distributions. This opens degrees of freedom in the choice of RF phases and amplitudes to achieve the necessary peak current with a reasonable longitudinal bunch shape. It has been found empirically that different working points result in different tolerances for phases and amplitudes. We give an analytical expression for the sensitivity of the compression factor to phase and amplitude jitter for a bunch compression scheme involving two RF systems and two magnetic chicanes, as well as numerical results for the case of the European XFEL.
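The working-point dependence of the tolerances can be seen already in the textbook single-RF-system, single-chicane linear model (not the two-system, two-chicane scheme the abstract analyses): the chirp h upstream of a chicane with momentum compaction R56 gives a compression factor C = 1/(1 + h·R56), whose phase sensitivity steepens sharply as the compression gets stronger. All parameter values below are illustrative assumptions:

```python
import math

# Textbook single-RF + single-chicane model; illustrative numbers only.
E0 = 130e6   # beam energy entering the RF section, eV
V = 60e6     # RF voltage, V
k = 2 * math.pi * 1.3e9 / 3.0e8   # RF wavenumber for 1.3 GHz, rad/m
R56 = -0.18  # chicane momentum compaction, m

def compression(phi):
    """Linear compression factor C = 1 / (1 + h * R56), with energy chirp
    h = (V k sin phi) / E_f; signs chosen so phi > 0 compresses for R56 < 0."""
    E_f = E0 + V * math.cos(phi)       # energy after the RF section
    h = V * k * math.sin(phi) / E_f    # relative energy chirp, 1/m
    return 1.0 / (1.0 + h * R56)

def dC_dphi(phi, dphi=1e-6):
    """Sensitivity of C to RF phase jitter, by central finite difference."""
    return (compression(phi + dphi) - compression(phi - dphi)) / (2 * dphi)

for deg in (10.0, 20.0, 28.0):
    phi = math.radians(deg)
    print(f"phi={deg:4.1f} deg  C={compression(phi):5.2f}  "
          f"dC/dphi={dC_dphi(phi):8.2f} per rad")
```

The stronger the compression, the steeper the sensitivity, which is the mechanism behind the empirical observation that different working points carry very different jitter tolerances.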

  10. Quantization Distortion in Block Transform-Compressed Data

    Science.gov (United States)

    Boden, A. F.

    1995-01-01

The popular JPEG image compression standard is an example of a block transform-based compression scheme; the image is systematically subdivided into blocks that are individually transformed, quantized, and encoded. The compression is achieved by quantizing the transformed data, reducing the data entropy and thus facilitating efficient encoding. A generic block transform model is introduced.
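The generic pipeline, forward transform, quantize, dequantize, inverse transform, can be sketched on a single 8×8 block with an orthonormal DCT-II. The quantization step is where the entropy drops (many coefficients collapse to zero) and where all the distortion enters. A pure-Python sketch of the model, not JPEG itself (no zig-zag scan, quantization table, or entropy coder):

```python
import math

N = 8
# Orthonormal DCT-II basis matrix: T[u][x].
T = [[(math.sqrt(1 / N) if u == 0 else math.sqrt(2 / N))
      * math.cos((2 * x + 1) * u * math.pi / (2 * N))
      for x in range(N)] for u in range(N)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

def transpose(A):
    return [list(row) for row in zip(*A)]

# A smooth 8x8 block (linear gradient, values in 0..255).
block = [[16 * i + 2 * j for j in range(N)] for i in range(N)]

coeffs = matmul(matmul(T, block), transpose(T))    # forward 2D DCT
q = 20.0                                           # uniform quantization step
quantized = [[round(c / q) for c in row] for row in coeffs]
dequant = [[v * q for v in row] for row in quantized]
recon = matmul(matmul(transpose(T), dequant), T)   # inverse 2D DCT

err = max(abs(block[i][j] - recon[i][j]) for i in range(N) for j in range(N))
zeros = sum(v == 0 for row in quantized for v in row)
print(round(err, 1), zeros)   # small pixel error; most coefficients are zero
```

For smooth content the transform concentrates the energy into a handful of low-frequency coefficients, so quantization zeroes out most of the block at the cost of a bounded reconstruction error, which is exactly the entropy-for-distortion trade the abstract's model captures.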

  11. A new type of hydrogen generator-HHEG (high-compressed hydrogen energy generator)

    International Nuclear Information System (INIS)

    Harada, H.; Tojima, K.; Takeda, M.; Nakazawa, T.

    2004-01-01

'Full text:' We have developed a new type of hydrogen generator named HHEG (High-compressed Hydrogen Energy Generator). HHEG can produce 35 MPa compressed hydrogen for fuel cell vehicles without any mechanical compressor. HHEG is a kind of PEM (proton exchange membrane) electrolysis. It is well known that compressed hydrogen can be generated by water electrolysis. However, conventional electrolysis could not reach the 35 MPa or higher pressure required for fuel cell vehicles, because the electrolysis cell stack would be destroyed at such high pressure. In HHEG, the cell stack is placed in a high-pressure vessel, and the pressure difference between the oxygen and hydrogen generated by the cell stack is always kept at nearly zero by an automatic compensator invented by Mitsubishi Corporation. The cell stack of HHEG is not a special one, yet it is not broken under such high pressure, because the automatic compensator always offsets the force acting on it. Hydrogen for fuel cell vehicles must be produced from zero-emission energy sources such as solar and nuclear power. These energies are available as electricity, so water electrolysis is the only way of producing hydrogen fuel. Hydrogen fuel is currently stored at 35 MPa and will move to 70 MPa in the near future, but a conventional mechanical compressor is poorly suited to such high-pressure hydrogen because of its short lifetime and high power consumption. Construction of a hydrogen station network is indispensable for the wide use of fuel cell vehicles, and for such a network an on-site hydrogen generator is required. HHEG satisfies all of these requirements, so we conclude that HHEG is the way to realize the hydrogen economy. (author)

  12. A Coded Aperture Compressive Imaging Array and Its Visual Detection and Tracking Algorithms for Surveillance Systems

    Directory of Open Access Journals (Sweden)

    Hanxiao Wu

    2012-10-01

Full Text Available In this paper, we propose an application of a compressive imaging system to the problem of wide-area video surveillance. A parallel coded aperture compressive imaging system is proposed to reduce the required resolution of the coded masks and to facilitate storage of the projection matrix. Random Gaussian, Toeplitz and binary phase coded masks are utilized to obtain the compressive sensing images. The corresponding moving-target detection and tracking algorithms, which work directly on the compressive sampling images, are developed. A mixture-of-Gaussians model is applied in the compressive image space to model the background image and detect the foreground. Each motion target in the compressive sampling domain is sparsely represented in a compressive feature dictionary spanned by target templates and noise templates. An l1 optimization algorithm is used to solve for the sparse template coefficients. Experimental results demonstrate that a low-dimensional compressed imaging representation is sufficient to determine spatial motion targets. Compared with the random Gaussian and Toeplitz phase masks, motion detection algorithms using a random binary phase mask yield better detection results; however, the random Gaussian and Toeplitz phase masks achieve higher-resolution reconstructed images. Our tracking algorithm achieves real-time speed, up to 10 times faster than the l1 tracker, without any optimization.

  13. Lossy compression for Animated Web Visualisation

    Science.gov (United States)

    Prudden, R.; Tomlinson, J.; Robinson, N.; Arribas, A.

    2017-12-01

This talk will discuss a technique for lossy data compression specialised for web animation. We set ourselves the challenge of visualising a full forecast weather field as an animated 3D web page visualisation. This data is richly spatiotemporal; however, it is routinely communicated to the public as a 2D map, and scientists are largely limited to visualising data via static 2D maps or 1D scatter plots. We wanted to present Met Office weather forecasts in a way that represents all the generated data. Our approach was to repurpose the technology used to stream high-definition videos. This enabled us to achieve high rates of compression while remaining compatible with both web browsers and GPU processing. Since lossy compression necessarily involves discarding information, evaluating the results is an important and difficult problem. This is essentially a problem of forecast verification. The difficulty lies in deciding what it means for two weather fields to be "similar", as simple definitions such as mean squared error often lead to undesirable results. In the second part of the talk, I will briefly discuss some ideas for alternative measures of similarity.
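The classic illustration of why plain mean squared error misleads on weather fields is the "double penalty": a forecast that reproduces a sharp feature but displaces it by one grid cell is penalised twice (once where the feature is missing, once where it appears), so it scores worse than a forecast that omits the feature entirely. A minimal 1D sketch:

```python
def mse(a, b):
    """Plain mean squared error between two fields of equal length."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

# 1D "rain field": a single sharp feature at cell 10.
truth = [0.0] * 32
truth[10] = 5.0

shifted = [0.0] * 32      # right shape, displaced by one cell
shifted[11] = 5.0

empty = [0.0] * 32        # feature missed entirely

print(mse(truth, shifted))  # 2 * 25/32 = 1.5625
print(mse(truth, empty))    # 25/32 = 0.78125: "no rain" beats "almost right"
```

Neighbourhood- or object-based similarity measures exist precisely to avoid rewarding the empty forecast here, which is the kind of alternative measure the second part of the talk considers.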

  14. Achieving high baryon densities in the fragmentation regions in heavy ion collisions at top RHIC energy

    International Nuclear Information System (INIS)

    Li, Ming; Kapusta, Joseph I.

    2017-01-01

    Heavy ion collisions at extremely high energy, such as the top energy at RHIC, exhibit the property of transparency where there is a clear separation between the almost net-baryon-free central rapidity region and the net-baryon-rich fragmentation region. We calculate the net-baryon rapidity loss and the nuclear excitation energy using the energy-momentum tensor obtained from the McLerran-Venugopalan model. Nuclear compression during the collision is further estimated using a simple space-time picture. The results show that extremely high baryon densities, about twenty times larger than the normal nuclear density, can be achieved in the fragmentation regions. (paper)

  15. Configuring and Characterizing X-Rays for Laser-Driven Compression Experiments at the Dynamic Compression Sector

    Science.gov (United States)

    Li, Y.; Capatina, D.; D'Amico, K.; Eng, P.; Hawreliak, J.; Graber, T.; Rickerson, D.; Klug, J.; Rigg, P. A.; Gupta, Y. M.

    2017-06-01

    Coupling laser-driven compression experiments to the x-ray beam at the Dynamic Compression Sector (DCS) at the Advanced Photon Source (APS) of Argonne National Laboratory requires state-of-the-art x-ray focusing, pulse isolation, and diagnostics capabilities. The 100 J UV pulsed laser system can be fired once every 20 minutes, so precise alignment and focusing of the x-rays on each new sample must be fast and reproducible. Multiple Kirkpatrick-Baez (KB) mirrors are used to achieve a focal spot size as small as 50 μm at the target, while the strategic placement of scintillating screens, cameras, and detectors allows for fast diagnosis of the beam shape, intensity, and alignment of the sample to the x-ray beam. In addition, a series of x-ray choppers and shutters is used to ensure that the sample is exposed to only a single x-ray pulse (~80 ps) during the dynamic compression event; this requires highly precise synchronization. Details of the technical requirements, layout, and performance of these instruments will be presented. Work supported by DOE/NNSA.

  16. Data compression of digital X-ray images from a clinical viewpoint

    International Nuclear Information System (INIS)

    Ando, Yutaka

    1992-01-01

    For PACS (picture archiving and communication systems), large-capacity storage media and a fast data transfer network are necessary. When a PACS is in operation, these technology requirements become a large problem, so we need image data compression to improve recording efficiency on the media and the transmission ratio. There are two kinds of data compression methods: reversible and irreversible. With reversible compression, the compressed and re-expanded image is exactly equal to the original image; the achievable compression ratio is between about 1/2 and 1/3. With irreversible compression, the compressed and re-expanded image is distorted, but a high compression ratio can be achieved. In the medical field, the discrete cosine transform (DCT) method is popular because of its low distortion and fast performance; compression ratios of 1/10 to 1/20 are achieved in practice. It is important to choose the compression ratio according to the purpose and modality of the image. The ratio must be selected carefully, because the suitable value differs when images are used for education, clinical diagnosis, or reference. (author)
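The DCT-based irreversible scheme described in this record can be sketched in a few lines. The example below is a toy illustration (our own function names, a synthetic smooth image, not any clinical codec): it applies an orthonormal 2D DCT-II, discards all but the low-frequency coefficients, and inverts the transform.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (rows are cosine basis vectors)."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

def lossy_dct(img, keep):
    """2D DCT of a square image, keeping only keep x keep low frequencies."""
    c = dct_matrix(img.shape[0])
    coeffs = c @ img @ c.T     # forward 2D transform
    coeffs[keep:, :] = 0.0     # irreversible step: drop high frequencies
    coeffs[:, keep:] = 0.0
    return c.T @ coeffs @ c    # inverse 2D transform

xx, yy = np.meshgrid(np.linspace(0, 1, 32), np.linspace(0, 1, 32))
img = np.sin(2 * np.pi * xx) + np.cos(np.pi * yy)   # smooth test image

approx = lossy_dct(img, keep=8)   # 64 of 1024 coefficients kept -> 16:1
rmse = np.sqrt(np.mean((img - approx) ** 2))
assert rmse < 0.15   # smooth content survives aggressive truncation
```

Because most of the energy of a smooth image sits in the low-frequency DCT coefficients, discarding the rest gives a high compression ratio at modest distortion, which is exactly the trade-off the record describes for medical use.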

  17. A Novel ECG Data Compression Method Using Adaptive Fourier Decomposition With Security Guarantee in e-Health Applications.

    Science.gov (United States)

    Ma, JiaLi; Zhang, TanTan; Dong, MingChui

    2015-05-01

    This paper presents a novel electrocardiogram (ECG) compression method for e-health applications by adapting an adaptive Fourier decomposition (AFD) algorithm hybridized with a symbol substitution (SS) technique. The compression consists of two stages: the first-stage AFD performs efficient lossy compression with high fidelity; the second-stage SS performs lossless compression enhancement and built-in data encryption, which is pivotal for e-health. Validated with 48 ECG records from the MIT-BIH arrhythmia benchmark database, the proposed method achieves an averaged compression ratio (CR) of 17.6-44.5 and a percentage root mean square difference (PRD) of 0.8-2.0% with a highly linear and robust PRD-CR relationship, pushing forward the compression performance to an unexploited region. As such, this paper provides an attractive candidate ECG compression method for pervasive e-health applications.

  18. Poor Results for High Achievers

    Science.gov (United States)

    Bui, Sa; Imberman, Scott; Craig, Steven

    2012-01-01

    Three million students in the United States are classified as gifted, yet little is known about the effectiveness of traditional gifted and talented (G&T) programs. In theory, G&T programs might help high-achieving students because they group them with other high achievers and typically offer specially trained teachers and a more advanced…

  19. Image compression-encryption algorithms by combining hyper-chaotic system with discrete fractional random transform

    Science.gov (United States)

    Gong, Lihua; Deng, Chengzhi; Pan, Shumin; Zhou, Nanrun

    2018-07-01

    Based on a hyper-chaotic system and the discrete fractional random transform (DFrRT), an image compression-encryption algorithm is designed. The original image is first transformed into a spectrum by the discrete cosine transform, and the resulting spectrum is compressed by the method of spectrum cutting. The random matrix of the discrete fractional random transform is controlled by a chaotic sequence originating from the high-dimensional hyper-chaotic system. The compressed spectrum is then encrypted by the discrete fractional random transform. The order of the DFrRT and the parameters of the hyper-chaotic system are the main keys of this image compression and encryption algorithm. The proposed algorithm can compress and encrypt image signals, and in particular can encrypt multiple images at once. To achieve the compression of multiple images, the images are transformed into spectra by the discrete cosine transform, and the spectra are then incised and spliced into a composite spectrum by Zigzag scanning. Simulation results demonstrate that the proposed image compression and encryption algorithm offers high security and good compression performance.

  20. Analysing Music with Point-Set Compression Algorithms

    DEFF Research Database (Denmark)

    Meredith, David

    2016-01-01

    Several point-set pattern-discovery and compression algorithms designed for analysing music are reviewed and evaluated. Each algorithm takes as input a point-set representation of a score in which each note is represented as a point in pitch-time space. Each algorithm computes the maximal...... and sections in pieces of classical music. On the first task, the best-performing algorithms achieved success rates of around 84%. In the second task, the best algorithms achieved mean F1 scores of around 0.49, with scores for individual pieces rising as high as 0.71....

  1. Offshore compression system design for low cost and high reliability

    Energy Technology Data Exchange (ETDEWEB)

    Castro, Carlos J. Rocha de O.; Carrijo Neto, Antonio Dias; Cordeiro, Alexandre Franca [Chemtech Engineering Services and Software Ltd., Rio de Janeiro, RJ (Brazil). Special Projects Div.], Emails: antonio.carrijo@chemtech.com.br, carlos.rocha@chemtech.com.br, alexandre.cordeiro@chemtech.com.br

    2010-07-01

    In offshore oil fields, the oil streams coming from the wells usually carry significant amounts of gas. This gas is separated at low pressure and has to be compressed to the export pipeline pressure, usually a high pressure, to reduce the required diameter of the pipelines. In the past, these gases were flared, but nowadays there is increasing pressure to improve the energy efficiency of oil rigs and to use this gaseous fraction. The most expensive equipment in this kind of plant is the compression and power generation systems, the second being a strong function of the first because the compressors are the most power-consuming equipment. For this reason, optimizing the compression system in terms of efficiency and cost is determinant for the plant's profit. The availability of the plant also has a strong influence on profit, especially in gas fields, where the products have a relatively low aggregated value compared to oil. Because of this, the third design variable of the compression system becomes reliability: the higher the reliability, the larger the plant production. The main way to improve the reliability of a compression system is to use multiple compression trains in parallel, in a 2x50% or 3x50% configuration with one train in stand-by. Such configurations are possible and have advantages and disadvantages, but the main side effect is increased cost. This is the common offshore practice, but it does not always significantly improve plant availability, depending on the upstream process system. A series arrangement, together with a critical evaluation of the overall system, can in some cases provide a cheaper system with equal or better performance. This paper shows a case study of the procedure to evaluate a compression system design that improves reliability without an extreme cost increase, balancing the number of equipment items, the series or parallel arrangement, and the driver selection. 
Two case studies will be

  2. Three dimensional range geometry and texture data compression with space-filling curves.

    Science.gov (United States)

    Chen, Xia; Zhang, Song

    2017-10-16

    This paper presents a novel method to effectively store three-dimensional (3D) data and 2D texture data into a regular 24-bit image. The proposed method uses the Hilbert space-filling curve to map the normalized unwrapped phase map to two 8-bit color channels, and saves the third color channel for 2D texture storage. By further leveraging existing 2D image and video compression techniques, the proposed method can achieve high compression ratios while effectively preserving data quality. Since the encoding and decoding processes can be applied to most of the current 2D media platforms, this proposed compression method can make 3D data storage and transmission available for many electrical devices without requiring special hardware changes. Experiments demonstrate that if a lossless 2D image/video format is used, both original 3D geometry and 2D color texture can be accurately recovered; if lossy image/video compression is used, only black-and-white or grayscale texture can be properly recovered, but much higher compression ratios (e.g., 1543:1 against the ASCII OBJ format) are achieved with slight loss of 3D geometry quality.

  3. Compressive full waveform lidar

    Science.gov (United States)

    Yang, Weiyi; Ke, Jun

    2017-05-01

    To avoid a high-bandwidth detector, a fast A/D converter, and a large memory disk, a compressive full-waveform LIDAR system, which uses a temporally modulated laser instead of a pulsed laser, is studied in this paper. Full-waveform data from NEON (National Ecological Observatory Network) are used. Random binary patterns are used to modulate the source. To achieve 0.15 m ranging resolution, a 100 MSPS A/D converter is assumed for the measurements. The SPIRAL algorithm with a canonical basis is employed when Poisson noise is considered under low-illumination conditions.

  4. A new hyperspectral image compression paradigm based on fusion

    Science.gov (United States)

    Guerra, Raúl; Melián, José; López, Sebastián.; Sarmiento, Roberto

    2016-10-01

    The on-board compression of remotely sensed hyperspectral images is an important task nowadays. One of the main difficulties is that the compression of these images must be performed on the satellite which carries the hyperspectral sensor; hence, this process must be performed by space-qualified hardware with area, power, and speed limitations. Moreover, it is important to achieve high compression ratios without compromising the quality of the decompressed image. In this manuscript we propose a new methodology for compressing hyperspectral images based on hyperspectral image fusion concepts. The proposed compression process has two independent steps. The first is to spatially degrade the remotely sensed hyperspectral image to obtain a low-resolution hyperspectral image. The second is to spectrally degrade the remotely sensed hyperspectral image to obtain a high-resolution multispectral image. These two degraded images are then sent to the Earth's surface, where they are fused using a fusion algorithm for hyperspectral and multispectral images in order to recover the remotely sensed hyperspectral image. The main advantage of the proposed methodology is that the compression process, which must be performed on board, becomes very simple; the fusion process used to reconstruct the image is the more complex one. An extra advantage is that the compression ratio can be fixed in advance. Many simulations have been performed using different fusion algorithms and different methodologies for degrading the hyperspectral image, and the results obtained corroborate the benefits of the proposed methodology.
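The two on-board degradation steps are, at their simplest, averaging operations; the numpy sketch below (our own illustrative factors, not the paper's settings) shows how the downlinked volume, and therefore the fixed-in-advance compression ratio, follows directly from the chosen spatial and spectral factors.

```python
import numpy as np

def spatial_degrade(cube, f):
    """Block-average each band by factor f -> low-resolution hyperspectral image."""
    b, h, w = cube.shape
    return cube.reshape(b, h // f, f, w // f, f).mean(axis=(2, 4))

def spectral_degrade(cube, groups):
    """Average adjacent bands -> high-resolution multispectral image."""
    b, h, w = cube.shape
    return cube.reshape(groups, b // groups, h, w).mean(axis=1)

rng = np.random.default_rng(1)
cube = rng.random((64, 32, 32))        # 64 bands, 32 x 32 pixels

lowres = spatial_degrade(cube, 4)      # 64 bands at 8 x 8
multi = spectral_degrade(cube, 8)      # 8 bands at 32 x 32
ratio = cube.size / (lowres.size + multi.size)
assert abs(ratio - 16 / 3) < 1e-9      # ~5.3:1, fixed by the two factors
```

The ground segment would then run the (much heavier) fusion algorithm on `lowres` and `multi` to estimate the original cube.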

  5. Audiovisual focus of attention and its application to Ultra High Definition video compression

    Science.gov (United States)

    Rerabek, Martin; Nemoto, Hiromi; Lee, Jong-Seok; Ebrahimi, Touradj

    2014-02-01

    Using Focus of Attention (FoA) as a perceptual process in image and video compression is a well-known approach to increasing coding efficiency. It has been shown that foveated coding, in which compression quality varies across the image according to the region of interest, is more efficient than conventional coding, in which all regions are compressed in a similar way. However, widespread use of foveated compression has been prevented by two main conflicting factors, namely the complexity and the efficiency of algorithms for FoA detection. One way around this is to use as much information as possible from the scene. Since most video sequences have associated audio, and in many cases there is a correlation between the audio and the visual content, audiovisual FoA can improve the efficiency of the detection algorithm while remaining of low complexity. This paper discusses a simple yet efficient audiovisual FoA algorithm based on the correlation of dynamics between audio and video signal components. The results of the audiovisual FoA detection algorithm are then taken into account for foveated coding and compression. This approach is implemented in an H.265/HEVC encoder producing a bitstream which is fully compliant with any H.265/HEVC decoder. The influence of audiovisual FoA on the perceived quality of high and ultra-high definition audiovisual sequences is explored and the gain in compression efficiency is analyzed.

  6. H.264/AVC Video Compression on Smartphones

    Science.gov (United States)

    Sharabayko, M. P.; Markov, N. G.

    2017-01-01

    In this paper, we studied the usage of H.264/AVC video compression tools by flagship smartphones. The results show that only a subset of the tools is used, meaning that there is still potential to achieve higher compression efficiency within the H.264/AVC standard, but the most advanced smartphones are already reaching the compression efficiency limit of H.264/AVC.

  7. Ultra high-speed x-ray imaging of laser-driven shock compression using synchrotron light

    Science.gov (United States)

    Olbinado, Margie P.; Cantelli, Valentina; Mathon, Olivier; Pascarelli, Sakura; Grenzer, Joerg; Pelka, Alexander; Roedel, Melanie; Prencipe, Irene; Laso Garcia, Alejandro; Helbig, Uwe; Kraus, Dominik; Schramm, Ulrich; Cowan, Tom; Scheel, Mario; Pradel, Pierre; De Resseguier, Thibaut; Rack, Alexander

    2018-02-01

    A high-power, nanosecond pulsed laser impacting the surface of a material can generate an ablation plasma that drives a shock wave into it; while in situ x-ray imaging can provide a time-resolved probe of the shock-induced material behaviour on macroscopic length scales. Here, we report on an investigation into laser-driven shock compression of a polyurethane foam and a graphite rod by means of single-pulse synchrotron x-ray phase-contrast imaging with MHz frame rate. A 6 J, 10 ns pulsed laser was used to generate shock compression. Physical processes governing the laser-induced dynamic response such as elastic compression, compaction, pore collapse, fracture, and fragmentation have been imaged; and the advantage of exploiting the partial spatial coherence of a synchrotron source for studying low-density, carbon-based materials is emphasized. The successful combination of a high-energy laser and ultra high-speed x-ray imaging using synchrotron light demonstrates the potentiality of accessing complementary information from scientific studies of laser-driven shock compression.

  8. Compressible dynamic stall control using high momentum microjets

    Science.gov (United States)

    Beahan, James J.; Shih, Chiang; Krothapalli, Anjaneyulu; Kumar, Rajan; Chandrasekhara, Muguru S.

    2014-09-01

    Control of the dynamic stall process of a NACA 0015 airfoil undergoing periodic pitching motion is investigated experimentally at the NASA Ames compressible dynamic stall facility. Multiple microjet nozzles distributed uniformly over the first 12% chord from the airfoil's leading edge are used for the dynamic stall control. The point diffraction interferometry technique is used to characterize the control effectiveness, both qualitatively and quantitatively. The microjet control has been found to be very effective in suppressing both the emergence of the dynamic stall vortex and the associated massive flow separation over the entire operating range of angles of attack. At the high Mach number (M = 0.4), the use of microjets appears to eliminate the shock structures responsible for triggering shock-induced separation, establishing that microjets are effective in controlling dynamic stall with a strong compressibility effect. In general, microjet control has an overall positive effect in terms of maintaining leading edge suction pressure and preventing flow separation.

  9. Soliton compression to few-cycle pulses with a high quality factor by engineering cascaded quadratic nonlinearities

    DEFF Research Database (Denmark)

    Zeng, Xianglong; Guo, Hairun; Zhou, Binbin

    2012-01-01

    We propose an efficient approach to improve few-cycle soliton compression with cascaded quadratic nonlinearities by using an engineered multi-section structure of the nonlinear crystal. By exploiting engineering of the cascaded quadratic nonlinearities, in each section soliton compression...... with a low effective order is realized, and high-quality few-cycle pulses with large compression factors are feasible. Each subsequent section is designed so that the compressed pulse exiting the previous section experiences an overall effective self-defocusing cubic nonlinearity corresponding to a modest...... soliton order, which is kept larger than unity to ensure further compression. This is done by increasing the cascaded quadratic nonlinearity in the new section with an engineered reduced residual phase mismatch. The low soliton orders in each section ensure excellent pulse quality and high efficiency...

  10. Recognizable or Not: Towards Image Semantic Quality Assessment for Compression

    Science.gov (United States)

    Liu, Dong; Wang, Dandan; Li, Houqiang

    2017-12-01

    Traditionally, image compression was optimized for the pixel-wise fidelity or the perceptual quality of the compressed images given a bit-rate budget. But recently, compressed images are more and more utilized for automatic semantic analysis tasks such as recognition and retrieval. For these tasks, we argue that the optimization target of compression is no longer perceptual quality, but the utility of the compressed images in the given automatic semantic analysis task. Accordingly, we propose to evaluate the quality of the compressed images neither at pixel level nor at perceptual level, but at semantic level. In this paper, we make preliminary efforts towards image semantic quality assessment (ISQA), focusing on the task of optical character recognition (OCR) from compressed images. We propose a full-reference ISQA measure by comparing the features extracted from text regions of original and compressed images. We then propose to integrate the ISQA measure into an image compression scheme. Experimental results show that our proposed ISQA measure is much better than PSNR and SSIM in evaluating the semantic quality of compressed images; accordingly, adopting our ISQA measure to optimize compression for OCR leads to significant bit-rate saving compared to using PSNR or SSIM. Moreover, we perform a subjective test of text recognition from compressed images, and observe that our ISQA measure has high consistency with subjective recognizability. Our work explores new dimensions in image quality assessment, and demonstrates a promising direction towards achieving higher compression ratios for specific semantic analysis tasks.

  11. Pulsed Compression Reactor

    Energy Technology Data Exchange (ETDEWEB)

    Roestenberg, T. [University of Twente, Enschede (Netherlands)

    2012-06-07

    The advantages of the Pulsed Compression Reactor (PCR) over internal combustion engine-type chemical reactors are briefly discussed. Over the last four years a project concerning the fundamentals of PCR technology has been performed by the University of Twente, Enschede, Netherlands. In order to assess the feasibility of applying the PCR principle to the conversion of methane to syngas, several fundamental questions needed to be answered. Two important questions that relate to the applicability of the PCR to any process are: how large is the heat transfer rate from a rapidly compressed and expanded volume of gas, and how does this heat transfer rate compare to the energy contained in the compressed gas? And: can stable operation be achieved with a completely free piston, as intended in the PCR?
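To see why rapid compression alone can drive chemistry, a back-of-the-envelope ideal-gas estimate is enough. The numbers below are illustrative assumptions of ours, not data from the Twente project:

```python
# Ideal-gas adiabatic compression: T2 = T1 * r**(gamma - 1)
gamma = 1.31   # approximate heat-capacity ratio for methane near room temperature
T1 = 300.0     # intake temperature, K
r = 30         # volumetric compression ratio
T2 = T1 * r ** (gamma - 1)
assert 800 < T2 < 1000   # hot enough for fast methane conversion chemistry
```

The heat-transfer question in the record asks, in effect, how much of this adiabatic temperature rise is lost to the cylinder walls during the few milliseconds of a compression-expansion cycle.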

  12. A Study on Homogeneous Charge Compression Ignition Gasoline Engines

    Science.gov (United States)

    Kaneko, Makoto; Morikawa, Koji; Itoh, Jin; Saishu, Youhei

    A new engine concept consisting of HCCI combustion for low and midrange loads and spark ignition combustion for high loads was introduced. The timing of the intake valve closing was adjusted to alter the negative valve overlap and effective compression ratio to provide suitable HCCI conditions. The effect of mixture formation on auto-ignition was also investigated using a direct injection engine. As a result, HCCI combustion was achieved with a relatively low compression ratio when the intake air was heated by internal EGR. The resulting combustion was at a high thermal efficiency, comparable to that of modern diesel engines, and produced almost no NOx emissions or smoke. The mixture stratification increased the local A/F concentration, resulting in higher reactivity. A wide range of combustible A/F ratios was used to control the compression ignition timing. Photographs showed that the flame filled the entire chamber during combustion, reducing both emissions and fuel consumption.

  13. Data compression techniques and the ACR-NEMA digital interface communications standard

    International Nuclear Information System (INIS)

    Zielonka, J.S.; Blume, H.; Hill, D.; Horil, S.C.; Lodwick, G.S.; Moore, J.; Murphy, L.L.; Wake, R.; Wallace, G.

    1987-01-01

    Data compression offers the possibility of achieving high effective information transfer rates between devices and of efficiently utilizing digital storage devices to meet department-wide archiving needs. Accordingly, the ACR-NEMA Digital Imaging and Communications Standards Committee established a Working Group to develop a means of incorporating the optimal use of a wide variety of current compression techniques while remaining compatible with the standard. The proposed method allows the use of public-domain techniques, predetermined methods between devices already aware of the selected algorithm, and the ability for the originating device to specify algorithms and parameters prior to transmitting compressed data. Because of the latter capability, the technique has the potential to support many compression algorithms not yet developed or in common use. Both lossless and lossy methods can be implemented. In addition to a description of the overall structure of this proposal, several examples using current compression algorithms are given.

  14. SCALCE: boosting sequence compression algorithms using locally consistent encoding.

    Science.gov (United States)

    Hach, Faraz; Numanagic, Ibrahim; Alkan, Can; Sahinalp, S Cenk

    2012-12-01

    The high throughput sequencing (HTS) platforms generate unprecedented amounts of data that introduce challenges for the computational infrastructure. Data management, storage and analysis have become major logistical obstacles for those adopting the new platforms. The requirement for large investment for this purpose almost signalled the end of the Sequence Read Archive hosted at the National Center for Biotechnology Information (NCBI), which holds most of the sequence data generated world wide. Currently, most HTS data are compressed through general purpose algorithms such as gzip. These algorithms are not designed for compressing data generated by the HTS platforms; for example, they do not take advantage of the specific nature of genomic sequence data, that is, limited alphabet size and high similarity among reads. Fast and efficient compression algorithms designed specifically for HTS data should be able to address some of the issues in data management, storage and communication. Such algorithms would also help with analysis provided they offer additional capabilities such as random access to any read and indexing for efficient sequence similarity search. Here we present SCALCE, a 'boosting' scheme based on Locally Consistent Parsing technique, which reorganizes the reads in a way that results in a higher compression speed and compression rate, independent of the compression algorithm in use and without using a reference genome. Our tests indicate that SCALCE can improve the compression rate achieved through gzip by a factor of 4.19-when the goal is to compress the reads alone. In fact, on SCALCE reordered reads, gzip running time can improve by a factor of 15.06 on a standard PC with a single core and 6 GB memory. Interestingly even the running time of SCALCE + gzip improves that of gzip alone by a factor of 2.09. 
When compared with the recently published BEETL, which aims to sort the (inverted) reads in lexicographic order for improving bzip2, SCALCE + gzip
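The boosting idea, reordering reads so that similar ones sit close together before handing them to a generic compressor, can be demonstrated with a toy experiment. Plain lexicographic sorting below stands in for SCALCE's Locally Consistent Parsing, and the data are synthetic, so the numbers are only indicative of the effect, not of SCALCE's reported factors:

```python
import random
import zlib

random.seed(42)
ref = "".join(random.choice("ACGT") for _ in range(100_000))
starts = [random.randrange(len(ref) - 100) for _ in range(500)]
# ~10x coverage of 500 loci: highly similar reads, as in real HTS data
reads = [ref[s:s + 100] for s in random.choices(starts, k=5000)]

shuffled = "".join(reads).encode()
reordered = "".join(sorted(reads)).encode()   # group similar reads together

size_plain = len(zlib.compress(shuffled, 9))
size_sorted = len(zlib.compress(reordered, 9))
assert size_sorted < size_plain   # reordering boosts the generic compressor
```

The gain comes from the compressor's limited window: in the shuffled stream, copies of the same read are often too far apart for zlib's 32 KB window to exploit, while after reordering they are adjacent and compress almost for free.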

  15. THE TURBULENT DYNAMO IN HIGHLY COMPRESSIBLE SUPERSONIC PLASMAS

    Energy Technology Data Exchange (ETDEWEB)

    Federrath, Christoph [Research School of Astronomy and Astrophysics, The Australian National University, Canberra, ACT 2611 (Australia); Schober, Jennifer [Universität Heidelberg, Zentrum für Astronomie, Institut für Theoretische Astrophysik, Albert-Ueberle-Strasse 2, D-69120 Heidelberg (Germany); Bovino, Stefano; Schleicher, Dominik R. G., E-mail: christoph.federrath@anu.edu.au [Institut für Astrophysik, Georg-August-Universität Göttingen, Friedrich-Hund-Platz 1, D-37077 Göttingen (Germany)

    2014-12-20

    The turbulent dynamo may explain the origin of cosmic magnetism. While the exponential amplification of magnetic fields has been studied for incompressible gases, little is known about dynamo action in highly compressible, supersonic plasmas, such as the interstellar medium of galaxies and the early universe. Here we perform the first quantitative comparison of theoretical models of the dynamo growth rate and saturation level with three-dimensional magnetohydrodynamical simulations of supersonic turbulence with grid resolutions of up to 1024³ cells. We obtain numerical convergence and find that dynamo action occurs for both low and high magnetic Prandtl numbers Pm = ν/η = 0.1–10 (the ratio of viscous to magnetic dissipation), which had so far only been seen for Pm ≥ 1 in supersonic turbulence. We measure the critical magnetic Reynolds number, Rm_crit = 129 (+43/−31), showing that the compressible dynamo is almost as efficient as in incompressible gas. Considering the physical conditions of the present and early universe, we conclude that magnetic fields need to be taken into account during structure formation from the early to the present cosmic ages, because they suppress gas fragmentation and drive powerful jets and outflows, both greatly affecting the initial mass function of stars.

  16. Extreme compression for extreme conditions: pilot study to identify optimal compression of CT images using MPEG-4 video compression.

    Science.gov (United States)

    Peterson, P Gabriel; Pak, Sung K; Nguyen, Binh; Jacobs, Genevieve; Folio, Les

    2012-12-01

    This study aims to evaluate the utility of compressed computed tomography (CT) studies (to expedite transmission) using Moving Picture Experts Group (MPEG-4) movie formatting in combat hospitals when guiding major treatment regimens. This retrospective analysis was approved by the Walter Reed Army Medical Center institutional review board with a waiver of the informed consent requirement. Twenty-five CT chest, abdomen, and pelvis exams were converted from Digital Imaging and Communications in Medicine (DICOM) to MPEG-4 movie format at various compression ratios. Three board-certified radiologists reviewed various levels of compression on emergent CT findings in 25 combat casualties and compared them with the interpretation of the original series. A universal trauma window was selected at -200 HU level and 1,500 HU width, then compressed at three lossy levels. Sensitivities and specificities for each reviewer were calculated along with 95% confidence intervals using the method of generalized estimating equations. The compression ratios compared were 171:1, 86:1, and 41:1, with combined sensitivities of 90% (95% confidence interval, 79-95), 94% (87-97), and 100% (93-100), respectively. Combined specificities were 100% (85-100), 100% (85-100), and 96% (78-99), respectively. The introduction of CT in combat hospitals, with increasing detectors and image data in recent military operations, has increased the need for effective teleradiology, mandating compression technology. Image compression is currently used to transmit images from combat hospitals to tertiary care centers with subspecialists, and our study demonstrates MPEG-4 technology as a reasonable means of achieving such compression.

  17. Expandable image compression system: A modular approach

    International Nuclear Information System (INIS)

    Ho, B.K.T.; Lo, S.C.; Huang, H.K.

    1986-01-01

    The full-frame bit-allocation algorithm for radiological image compression can achieve an acceptable compression ratio as high as 30:1. It involves two stages of operation: a two-dimensional discrete cosine transform and pixel quantization in the transformed space with pixel depth kept accountable by a bit-allocation table. The cosine transform hardware design took an expandable modular approach based on the VME bus system with a maximum data transfer rate of 48 Mbytes/sec and a microprocessor (Motorola 68000 family). The modules are cascadable and microprogrammable to perform 1,024-point butterfly operations. A total of 18 stages would be required for transforming a 1,000 x 1,000 image. Multiplicative constants and addressing sequences are to be software loaded into the parameter buffers of each stage prior to streaming data through the processor stages. The compression rate for 1K x 1K images is expected to be faster than one image per sec

  18. GMZ: A GML Compression Model for WebGIS

    Science.gov (United States)

    Khandelwal, A.; Rajan, K. S.

    2017-09-01

    Geography Markup Language (GML) is an XML specification for expressing geographical features. Defined by the Open Geospatial Consortium (OGC), it is widely used for storage and transmission of maps over the Internet. XML schemas provide the convenience of defining custom feature profiles in GML for specific needs, as seen in the widely popular CityGML, the simple features profile, coverage, etc. The simple features profile (SFP) is a simpler subset of GML with support for point, line and polygon geometries, constructed to cover the most commonly used GML geometries. The Web Feature Service (WFS) serves query results in SFP by default. But SFP falls short of being an ideal choice due to its high verbosity and size-heavy nature, which provides immense scope for compression. GMZ is a lossless compression model developed to work for SFP-compliant GML files. Our experiments indicate GMZ achieves reasonably good compression ratios and can be useful in WebGIS-based applications.
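The size win available over raw GML can be sketched with a toy delta-coder. This is not the GMZ model itself: the fixed-point precision, packing format, and synthetic polyline below are all our own illustrative choices.

```python
import random
import struct
import zlib

random.seed(0)
# Synthetic polyline: small steps between consecutive vertices
coords = []
x, y = 77.0, 13.0
for _ in range(2000):
    x += random.uniform(-0.001, 0.001)
    y += random.uniform(-0.001, 0.001)
    coords.append((round(x, 6), round(y, 6)))

# Verbose GML-style text, as a WFS would serve it
gml = "<gml:posList>" + " ".join(f"{a} {b}" for a, b in coords) + "</gml:posList>"

def delta_pack(coords):
    """Fixed-point (1e-6 degree) coordinates, delta-encoded, binary-packed."""
    ints = [(int(round(a * 1e6)), int(round(b * 1e6))) for a, b in coords]
    out = [struct.pack("<ii", *ints[0])]            # absolute first vertex
    for (a0, b0), (a1, b1) in zip(ints, ints[1:]):
        out.append(struct.pack("<hh", a1 - a0, b1 - b0))  # deltas fit 16 bits
    return b"".join(out)

plain = len(zlib.compress(gml.encode(), 9))
packed = len(zlib.compress(delta_pack(coords), 9))
assert packed < plain   # delta + binary packing beats compressing raw GML text
```

Because adjacent vertices are close, the deltas need far fewer bits than the full decimal strings, which is the kind of redundancy a structured model like GMZ can remove losslessly.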

  19. A seismic data compression system using subband coding

    Science.gov (United States)

    Kiely, A. B.; Pollara, F.

    1995-01-01

    This article presents a study of seismic data compression techniques and a compression algorithm based on subband coding. The algorithm includes three stages: a decorrelation stage, a quantization stage that introduces a controlled amount of distortion to allow for high compression ratios, and a lossless entropy coding stage based on a simple but efficient arithmetic coding method. Subband coding methods are particularly suited to the decorrelation of nonstationary processes such as seismic events. Adaptivity to the nonstationary behavior of the waveform is achieved by dividing the data into separate blocks that are encoded separately with an adaptive arithmetic encoder. This is done with high efficiency due to the low overhead introduced by the arithmetic encoder in specifying its parameters. The technique could be used as a progressive transmission system, where successive refinements of the data can be requested by the user. This allows seismologists to first examine a coarse version of waveforms with minimal usage of the channel and then decide where refinements are required. Rate-distortion performance results are presented and comparisons are made with two block transform methods.
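The decorrelation and quantization stages can be illustrated with a one-level subband split (a Haar filter pair stands in for the article's filter bank, and the step sizes are arbitrary; the arithmetic coding stage is omitted):

```python
import numpy as np

def analyze(x):
    # One-level subband split: the low band carries the trend,
    # the high band the residual detail.
    low = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    high = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return low, high

def synthesize(low, high):
    # Exact inverse of analyze() when nothing is quantized.
    x = np.empty(low.size * 2)
    x[0::2] = (low + high) / np.sqrt(2.0)
    x[1::2] = (low - high) / np.sqrt(2.0)
    return x

def quantize(band, step):
    # Uniform scalar quantizer; the step controls rate vs. distortion,
    # so the low-energy high band can take a coarser step.
    return np.round(band / step) * step
```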

  20. Effect of Pelletized Coconut Fibre on the Compressive Strength of Foamed Concrete

    Directory of Open Access Journals (Sweden)

    Mohd Jaini Zainorizuan

    2016-01-01

    Full Text Available Foamed concrete is a controlled low-density material, ranging from 400 kg/m3 to 1800 kg/m3, and hence suitable for the construction of buildings and infrastructure. A unique feature of foamed concrete is that it contains no aggregates, in order to retain low density; it consists only of cement, sand, water and a foaming agent. The consumption of cement is therefore high when producing foamed concrete of good quality and strength. Without the presence of aggregates, the compressive strength of foamed concrete can reach at most 15 MPa. This study therefore introduces pelletized coconut fibre aggregate to reduce the consumption of cement while enhancing the compressive strength. In the experimental study, forty-five (45) cube samples of foamed concrete with a density of 1600 kg/m3 were prepared with different volume fractions of pelletized coconut fibre aggregate. All cube samples were tested in compression to obtain the compressive strength. The results showed that the compressive strengths of foamed concrete containing 5%, 10%, 15% and 20% of pelletized coconut fibre aggregate are 9.6 MPa, 11.4 MPa, 14.6 MPa and 13.4 MPa respectively, higher than the control foamed concrete, which only achieves 9 MPa. The pelletized coconut fibre aggregate thus shows good potential to enhance the compressive strength of foamed concrete.

  1. Resource efficient data compression algorithms for demanding, WSN based biomedical applications.

    Science.gov (United States)

    Antonopoulos, Christos P; Voros, Nikolaos S

    2016-02-01

    During the last few years, medical research areas of critical importance, such as epilepsy monitoring and study, increasingly utilize wireless sensor network (WSN) technologies in order to achieve better understanding and significant breakthroughs. However, the limited memory and communication bandwidth offered by WSN platforms are a significant shortcoming for such demanding application scenarios. Although data compression can mitigate such deficiencies, there is a lack of objective and comprehensive evaluation of the relevant approaches, and even more so of specialized approaches targeting specific demanding applications. The research work presented in this paper focuses on implementing and offering an in-depth experimental study of prominent existing as well as newly proposed compression algorithms. All algorithms have been implemented in a common Matlab framework. A major contribution of this paper, which differentiates it from similar research efforts, is the employment of real-world electroencephalography (EEG) and electrocardiography (ECG) datasets comprising the two most demanding epilepsy modalities. Emphasis is put on WSN applications, so the respective metrics focus on compression rate and execution latency for the selected datasets. The evaluation results reveal significant performance and behavioral characteristics of the algorithms related to their complexity and the negative effect on compression latency that accompanies an increased compression rate. It is noted that the proposed schemes managed to offer considerable advantages, especially in achieving the optimum tradeoff between compression rate and latency. Specifically, the proposed algorithm managed to combine a highly competitive level of compression while ensuring minimum latency, thus exhibiting real-time capabilities.
Additionally, one of the proposed schemes is compared against state-of-the-art general-purpose compression algorithms also exhibiting considerable advantages as far as the

  2. Light-weight reference-based compression of FASTQ data.

    Science.gov (United States)

    Zhang, Yongpeng; Li, Linsen; Yang, Yanli; Yang, Xiao; He, Shan; Zhu, Zexuan

    2015-06-09

    The exponential growth of next generation sequencing (NGS) data has posed big challenges to data storage, management and archiving. Data compression is one of the effective solutions, where reference-based compression strategies can typically achieve superior compression ratios compared to those not relying on any reference. This paper presents a lossless light-weight reference-based compression algorithm named LW-FQZip to compress FASTQ data. The three components of any given input, i.e., metadata, short reads and quality score strings, are first parsed into three data streams in which redundant information is identified and eliminated independently. In particular, well-designed incremental and run-length-limited encoding schemes are utilized to compress the metadata and quality score streams, respectively. To handle the short reads, LW-FQZip uses a novel light-weight mapping model to rapidly map them against external reference sequence(s) and produce concise alignment results for storage. The three processed data streams are then packed together with general-purpose compression algorithms like LZMA. LW-FQZip was evaluated on eight real-world NGS data sets and achieved compression ratios in the range of 0.111-0.201. This is comparable or superior to other state-of-the-art lossless NGS data compression algorithms. LW-FQZip is a program that enables efficient lossless FASTQ data compression. It contributes to the state-of-the-art applications for NGS data storage and transmission. LW-FQZip is freely available online at: http://csse.szu.edu.cn/staff/zhuzx/LWFQZip.
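The incremental and run-length encoding ideas for the metadata and quality streams can be sketched as follows (a toy illustration; LW-FQZip's actual run-length-limited and incremental schemes are more elaborate):

```python
def rle_encode(s):
    # Run-length encode a quality string as (symbol, run length) pairs.
    out, i = [], 0
    while i < len(s):
        j = i
        while j < len(s) and s[j] == s[i]:
            j += 1
        out.append((s[i], j - i))
        i = j
    return out

def rle_decode(pairs):
    return "".join(ch * n for ch, n in pairs)

def incremental_encode(headers):
    # Store each metadata line as a (shared-prefix length, suffix) pair
    # relative to the previous line, exploiting near-identical headers.
    prev, out = "", []
    for h in headers:
        k = 0
        while k < min(len(prev), len(h)) and prev[k] == h[k]:
            k += 1
        out.append((k, h[k:]))
        prev = h
    return out
```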

  3. DSP accelerator for the wavelet compression/decompression of high-resolution images

    Energy Technology Data Exchange (ETDEWEB)

    Hunt, M.A.; Gleason, S.S.; Jatko, W.B.

    1993-07-23

    A Texas Instruments (TI) TMS320C30-based S-Bus digital signal processing (DSP) module was used to accelerate a wavelet-based compression and decompression algorithm applied to high-resolution fingerprint images. The law enforcement community, together with the National Institute of Standards and Technology (NIST), is adopting a standard based on the wavelet transform for the compression, transmission, and decompression of scanned fingerprint images. A two-dimensional wavelet transform of the input image is computed. Then spatial/frequency regions are automatically analyzed for information content and quantized for subsequent Huffman encoding. Compression ratios range from 10:1 to 30:1 while maintaining the level of image quality necessary for identification. Several prototype systems were developed using a Sun SPARCstation 2 with a 1280 × 1024 8-bit display, 64-Mbyte random access memory (RAM), fiber distributed data interface (FDDI), and Spirit-30 S-Bus DSP accelerators from Sonitech. The final implementation of the DSP-accelerated algorithm performed the compression or decompression operation in 3.5 s per print. Further increases in system throughput were obtained by adding several DSP accelerators operating in parallel.

  4. The impact of chest compression rates on quality of chest compressions - a manikin study.

    Science.gov (United States)

    Field, Richard A; Soar, Jasmeet; Davies, Robin P; Akhtar, Naheed; Perkins, Gavin D

    2012-03-01

    Chest compressions are often performed at a variable rate during cardiopulmonary resuscitation (CPR). The effect of compression rate on other chest compression quality variables (compression depth, duty-cycle, leaning, performance decay over time) is unknown. This randomised controlled cross-over manikin study examined the effect of different compression rates on the other chest compression quality variables. Twenty healthcare professionals performed 2 min of continuous compressions on an instrumented manikin at rates of 80, 100, 120, 140 and 160 min(-1) in a random order. An electronic metronome was used to guide compression rate. Compression data were analysed by repeated measures ANOVA and are presented as mean (SD). Non-parametric data were analysed by the Friedman test. At faster compression rates there were significant improvements in the number of compressions delivered (160(2) at 80 min(-1) vs. 312(13) compressions at 160 min(-1), P<0.001); and compression duty-cycle (43(6)% at 80 min(-1) vs. 50(7)% at 160 min(-1), P<0.001). This was at the cost of a significant reduction in compression depth (39.5(10)mm at 80 min(-1) vs. 34.5(11)mm at 160 min(-1), P<0.001); and earlier decay in compression quality (median decay point 120 s at 80 min(-1) vs. 40 s at 160 min(-1), P<0.001). Additionally, not all participants achieved the target rate (100% at 80 min(-1) vs. 70% at 160 min(-1)). Rates above 120 min(-1) had the greatest impact on reducing chest compression quality. For Guidelines 2005 trained rescuers, a chest compression rate of 100-120 min(-1) for 2 min is feasible whilst maintaining adequate chest compression quality in terms of depth, duty-cycle, leaning, and decay in compression performance. Further studies are needed to assess the impact of the Guidelines 2010 recommendation for deeper and faster chest compressions. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  5. Compression of computer generated phase-shifting hologram sequence using AVC and HEVC

    Science.gov (United States)

    Xing, Yafei; Pesquet-Popescu, Béatrice; Dufaux, Frederic

    2013-09-01

    With the capability of achieving twice the compression ratio of Advanced Video Coding (AVC) with similar reconstruction quality, High Efficiency Video Coding (HEVC) is expected to become the new leading video coding technique. In order to reduce the storage and transmission burden of digital holograms, in this paper we propose to use HEVC for compressing phase-shifting digital hologram sequences (PSDHS). By simulating phase-shifting digital holography (PSDH) interferometry, interference patterns between illuminated three-dimensional (3D) virtual objects and the stepwise phase-shifted reference wave are generated as digital holograms. The hologram sequences are obtained from the movement of the virtual objects and compressed with AVC and HEVC. The experimental results show that AVC and HEVC are both efficient at compressing PSDHS, with HEVC giving better performance. Good compression rate and reconstruction quality can be obtained at bitrates above 15,000 kbps.

  6. Adaptive compressive learning for prediction of protein-protein interactions from primary sequence.

    Science.gov (United States)

    Zhang, Ya-Nan; Pan, Xiao-Yong; Huang, Yan; Shen, Hong-Bin

    2011-08-21

    Protein-protein interactions (PPIs) play an important role in biological processes. Although much effort has been devoted to the identification of novel PPIs by integrating experimental biological knowledge, many difficulties remain because sufficient protein structural and functional information is lacking. It is therefore highly desirable to develop methods for predicting PPIs based only on amino acid sequences. However, sequence-based predictors often struggle with high dimensionality, which causes over-fitting and high computational complexity, as well as with redundancy in the sequential feature vectors. In this paper, a novel computational approach based on compressed sensing theory is proposed to predict yeast Saccharomyces cerevisiae PPIs from primary sequence, and it has achieved promising results. The key advantage of the proposed compressed sensing algorithm is that it can compress the original high-dimensional protein sequential feature vector into a much lower-dimensional but more condensed space, exploiting the sparsity of the original signal. What makes compressed sensing especially attractive in protein sequence analysis is that the compressed signal can be reconstructed from far fewer measurements than traditional Nyquist sampling theory considers necessary. Experimental results demonstrate that the proposed compressed sensing method is powerful for analyzing noisy biological data and reducing redundancy in feature vectors. The proposed method represents a new strategy for dealing with high-dimensional protein discrete models and has great potential to be extended to many other complicated biological systems. Copyright © 2011 Elsevier Ltd. All rights reserved.
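The core compressed sensing idea, measuring a sparse feature vector with far fewer random projections than its length and recovering it with a sparse solver, can be sketched as follows (orthogonal matching pursuit is used here for illustration; the paper's reconstruction method may differ, and the dimensions are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

def measure(x, m):
    # Compress: project the high-dimensional but sparse feature vector
    # into m << n dimensions with a random Gaussian measurement matrix.
    phi = rng.normal(size=(m, x.size)) / np.sqrt(m)
    return phi, phi @ x

def omp(phi, y, k):
    # Orthogonal matching pursuit: greedily select the atom most
    # correlated with the residual, then re-fit by least squares.
    residual, support, coef = y.copy(), [], np.zeros(0)
    for _ in range(k):
        support.append(int(np.argmax(np.abs(phi.T @ residual))))
        sub = phi[:, support]
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
        residual = y - sub @ coef
    x_hat = np.zeros(phi.shape[1])
    x_hat[support] = coef
    return x_hat
```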

  7. Delivery of compression therapy for venous leg ulcers.

    Science.gov (United States)

    Zarchi, Kian; Jemec, Gregor B E

    2014-07-01

    Despite the documented effect of compression therapy in clinical studies and its widespread prescription, treatment of venous leg ulcers is often prolonged and recurrence rates are high. Data on the compression therapy actually provided are limited. To assess whether home care nurses achieve adequate subbandage pressure when treating patients with venous leg ulcers, and the factors that predict the ability to achieve optimal pressure. We performed a cross-sectional study from March 1, 2011, through March 31, 2012, in home care centers in 2 Danish municipalities. Sixty-eight home care nurses who managed wounds in their everyday practice were included. Participant-masked measurements of subbandage pressure achieved with an elastic, long-stretch, single-component bandage; an inelastic, short-stretch, single-component bandage; and a multilayer, 2-component bandage, as well as the association between achievement of optimal pressure and years in the profession, attendance at wound care educational programs, previous work experience, and confidence in bandaging ability. A substantial variation in the exerted pressure was found: subbandage pressures ranged from 11 mm Hg exerted by an inelastic bandage to 80 mm Hg exerted by a 2-component bandage. The optimal subbandage pressure range, defined as 30 to 50 mm Hg, was achieved by 39 of 62 nurses (63%) applying the 2-component bandage, 28 of 68 nurses (41%) applying the elastic bandage, and 27 of 68 nurses (40%) applying the inelastic bandage. More than half the nurses applying the inelastic (38 [56%]) and elastic (36 [53%]) bandages obtained pressures less than 30 mm Hg. At best, only 17 of 62 nurses (27%) using the 2-component bandage achieved subbandage pressure within the range they aimed for. In this study, none of the investigated factors was associated with the ability to apply a bandage with optimal pressure. 
This study demonstrates the difficulty of achieving the desired subbandage pressure and indicates that a substantial proportion of

  8. LFQC: a lossless compression algorithm for FASTQ files

    Science.gov (United States)

    Nicolae, Marius; Pathak, Sudipta; Rajasekaran, Sanguthevar

    2015-01-01

    Motivation: Next Generation Sequencing (NGS) technologies have revolutionized genomic research by reducing the cost of whole genome sequencing. One of the biggest challenges posed by modern sequencing technology is economic storage of NGS data. Storing raw data is infeasible because of its enormous size and high redundancy. In this article, we address the problem of storage and transmission of large FASTQ files using innovative compression techniques. Results: We introduce a new lossless non-reference-based FASTQ compression algorithm named Lossless FASTQ Compressor. We have compared our algorithm with other state-of-the-art data compression algorithms, namely gzip, bzip2, fastqz (Bonfield and Mahoney, 2013), fqzcomp (Bonfield and Mahoney, 2013), Quip (Jones et al., 2012), and DSRC2 (Roguski and Deorowicz, 2014). This comparison reveals that our algorithm achieves better compression ratios on LS454 and SOLiD datasets. Availability and implementation: The implementations are freely available for non-commercial purposes. They can be downloaded from http://engr.uconn.edu/rajasek/lfqc-v1.1.zip. Contact: rajasek@engr.uconn.edu PMID:26093148

  9. Compression experiments on the TOSKA tokamak

    International Nuclear Information System (INIS)

    Cima, G.; McGuire, K.M.; Robinson, D.C.; Wootton, A.J.

    1980-10-01

    Results from minor radius compression experiments on a tokamak plasma in TOSCA are reported. The compression is achieved by increasing the toroidal field up to twice its initial value in 200 μs. Measurements show that particles and magnetic flux are conserved. When the initial energy confinement time is comparable with the compression time, energy gains are greater than for an adiabatic change of state. The total beta value increases. Central beta values of approximately 3% are measured when a small major radius compression is superimposed on a minor radius compression. Magnetic field fluctuations are affected: both the amplitude and period decrease. Starting from low energy confinement times of approximately 200 μs, increases in confinement time up to approximately 1 ms are measured. The increase in plasma energy results from a large reduction in the power losses during the compression. When the initial energy confinement time is much longer than the compression time, the parameter changes are those expected for an adiabatic change of state. (author)

  10. Superelastic Graphene Aerogel/Poly(3,4-Ethylenedioxythiophene)/MnO2 Composite as Compression-Tolerant Electrode for Electrochemical Capacitors

    Directory of Open Access Journals (Sweden)

    Peng Lv

    2017-11-01

    Full Text Available Ultra-compressible electrodes with high electrochemical performance, reversible compressibility and extreme durability are in high demand for compression-tolerant energy storage devices. Herein, an ultra-compressible ternary composite was synthesized by successively electrodepositing poly(3,4-ethylenedioxythiophene) (PEDOT) and MnO2 into a superelastic graphene aerogel (SEGA). In the SEGA/PEDOT/MnO2 ternary composite, SEGA provides the compressible backbone and conductive network; MnO2 is mainly responsible for the pseudocapacitive reactions; the intermediate PEDOT layer not only reduces the interface resistance between MnO2 and graphene, but also further reinforces the strength of the graphene cell walls. The synergistic effect of the three components in the ternary composite electrode leads to high electrochemical performance and good compression tolerance. The gravimetric capacitance of the compressible ternary composite electrodes reaches 343 F g−1 and retains 97% of this value even at 95% compressive strain. A volumetric capacitance of 147.4 F cm−3 is achieved, which is much higher than that of other graphene-based compressible electrodes. 80% of this volumetric capacitance is preserved after 3500 charge/discharge cycles under various compression strains, indicating extreme durability.

  11. A New Approach for Fingerprint Image Compression

    Energy Technology Data Exchange (ETDEWEB)

    Mazieres, Bertrand

    1997-12-01

    The FBI has been collecting fingerprint cards since 1924 and now has over 200 million of them. Digitized with 8 bits of grayscale resolution at 500 dots per inch, this amounts to 2000 terabytes of information. Moreover, without any compression, transmitting a 10 Mbyte card over a 9600 baud connection takes about 3 hours. Hence we need compression, and compression as close to lossless as possible: all fingerprint details must be kept. Lossless compression usually does not give a compression ratio better than 2:1, which is not sufficient. Compressing these images with the JPEG standard leads to artefacts which appear even at low compression rates. The FBI therefore chose in 1993 a compression scheme based on a wavelet transform, followed by scalar quantization and entropy coding: the so-called WSQ. This scheme achieves compression ratios of 20:1 without any perceptible loss of quality. The FBI publication specifies only a decoder, which means that many parameters can be changed in the encoding process: the type of analysis/reconstruction filters, the way the bit allocation is made, and the number of Huffman tables used for entropy coding. The first encoder used 9/7 filters for the wavelet transform and performed the bit allocation under a high-rate assumption. Since the transform is made into 64 subbands, quite a lot of bands receive only a few bits even at an archival-quality compression rate of 0.75 bit/pixel. Thus, after a brief overview of the standard, we will discuss a new approach to the bit allocation that seems to make more sense theoretically. Then we will discuss some implementation aspects, particularly the new entropy coder and the features that allow applications other than fingerprint image compression. Finally, we will compare the performance of the new encoder to that of the first encoder.
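The high-rate bit-allocation rule mentioned for the first encoder assigns each subband the average rate plus a correction depending on how its variance compares with the geometric mean of all subband variances; a sketch under that classical assumption (the subband variances are illustrative):

```python
import numpy as np

def allocate_bits(variances, avg_rate):
    # Classical high-rate allocation: b_i = R + 0.5*log2(var_i / GM),
    # where R is the average rate per subband and GM is the geometric
    # mean of the subband variances.
    log_gm = np.mean(np.log2(variances))
    return np.clip(avg_rate + 0.5 * (np.log2(variances) - log_gm), 0.0, None)
```

Negative allocations are clipped to zero here; a practical allocator would redistribute the clipped bits among the remaining subbands, which is part of what makes low-rate allocation harder than this formula suggests.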

  12. Low-Complexity Compression Algorithm for Hyperspectral Images Based on Distributed Source Coding

    Directory of Open Access Journals (Sweden)

    Yongjian Nian

    2013-01-01

    Full Text Available A low-complexity compression algorithm for hyperspectral images based on distributed source coding (DSC) is proposed in this paper. The proposed distributed compression algorithm can realize both lossless and lossy compression, implemented by performing a scalar quantization strategy on the original hyperspectral images followed by distributed lossless compression. A multilinear regression model is introduced for the distributed lossless compression in order to improve the quality of the side information. The optimal quantization step is determined according to the restriction of correct DSC decoding, which allows the proposed algorithm to achieve near-lossless compression. Moreover, an effective rate-distortion algorithm is introduced for the proposed algorithm to achieve low bit rates. Experimental results show that the compression performance of the proposed algorithm is competitive with that of state-of-the-art compression algorithms for hyperspectral images.
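The multilinear-regression side information can be sketched as an ordinary least-squares fit of the current spectral band on previously decoded bands (a simplified illustration; the band contents and shapes are arbitrary assumptions, and the DSC coding of the residual is omitted):

```python
import numpy as np

def predict_band(prev_bands, band):
    # Side information via multilinear regression: model the current
    # band as a linear combination of previous bands plus an offset,
    # fitted by least squares; only the prediction residual would then
    # need to be coded.
    a = np.column_stack([b.ravel() for b in prev_bands] + [np.ones(band.size)])
    coef, *_ = np.linalg.lstsq(a, band.ravel(), rcond=None)
    return (a @ coef).reshape(band.shape), coef
```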

  13. Image compression-encryption scheme based on hyper-chaotic system and 2D compressive sensing

    Science.gov (United States)

    Zhou, Nanrun; Pan, Shumin; Cheng, Shan; Zhou, Zhihong

    2016-08-01

    Most image encryption algorithms based on low-dimensional chaotic systems bear security risks and suffer from data expansion when nonlinear transformations are adopted directly. To overcome these weaknesses and reduce the transmission burden, an efficient image compression-encryption scheme based on a hyper-chaotic system and 2D compressive sensing is proposed. The original image is measured by measurement matrices in two directions to achieve compression and encryption simultaneously, and the resulting image is then re-encrypted by a cycle shift operation controlled by a hyper-chaotic system. The cycle shift operation can change the values of the pixels efficiently. The proposed cryptosystem decreases the volume of data to be transmitted and, as a nonlinear encryption system, simplifies key distribution. Simulation results verify the validity and reliability of the proposed algorithm, with acceptable compression and security performance.
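The two-direction measurement and the cycle-shift re-encryption can be sketched as follows (the Gaussian matrices and fixed shift amounts are illustrative assumptions; in the paper both the measurement matrices and the shifts are derived from the hyper-chaotic system):

```python
import numpy as np

rng = np.random.default_rng(1)

def measure_2d(img, m):
    # Measure along both directions, Y = A X B^T, so an n x n image
    # shrinks to an m x m block (m < n): compression and a first
    # layer of encryption in a single step.
    n = img.shape[0]
    a = rng.normal(size=(m, n)) / np.sqrt(m)
    b = rng.normal(size=(m, n)) / np.sqrt(m)
    return a @ img @ b.T

def cycle_shift(rows, shifts):
    # Second layer: cyclically shift each row; shifting by the negated
    # amounts undoes the operation at the receiver.
    return np.stack([np.roll(r, s) for r, s in zip(rows, shifts)])
```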

  14. A MODIFIED EMBEDDED ZERO-TREE WAVELET METHOD FOR MEDICAL IMAGE COMPRESSION

    Directory of Open Access Journals (Sweden)

    T. Celine Therese Jenny

    2010-11-01

    Full Text Available The Embedded Zero-tree Wavelet (EZW) is a lossy compression method that allows for progressive transmission of a compressed image. By exploiting the natural zero-trees found in a wavelet-decomposed image, the EZW algorithm is able to encode large portions of the insignificant regions of a still image with a minimal number of bits. The upshot of this encoding is an algorithm able to achieve relatively high peak signal-to-noise ratios (PSNR) at high compression levels. Vector Quantization (VQ) can then be performed as a post-processing step to reduce the coded file size: it reduces the redundancy of the image data so that the data can be stored or transmitted in an efficient form. Experimental results demonstrate that the proposed method outperforms several well-known lossless image compression techniques for still images that contain 256 colors or less.

  15. Fracture Energy of High-Strength Concrete in Compression

    DEFF Research Database (Denmark)

    Dahl, H.; Brincker, Rune

    1989-01-01

    is essential for understanding the fracture mechanism of concrete in compression. In this paper a series of tests is reported, carried out for the purpose of studying the fracture mechanical properties of concrete in compression. Including the measurement and study of the descending branch, a new experimental...

  16. Wavelet-based audio embedding and audio/video compression

    Science.gov (United States)

    Mendenhall, Michael J.; Claypoole, Roger L., Jr.

    2001-12-01

    Watermarking, traditionally used for copyright protection, is used in a new and exciting way. An efficient wavelet-based watermarking technique embeds audio information into a video signal. Several effective compression techniques are applied to compress the resulting audio/video signal in an embedded fashion. This wavelet-based compression algorithm incorporates bit-plane coding, index coding, and Huffman coding. To demonstrate the potential of this audio embedding and audio/video compression algorithm, we embed an audio signal into a video signal and then compress. Results show that overall compression rates of 15:1 can be achieved. The video signal is reconstructed with a median PSNR of nearly 33 dB. Finally, the audio signal is extracted from the compressed audio/video signal without error.

  17. A Simulation-based Randomized Controlled Study of Factors Influencing Chest Compression Depth

    Directory of Open Access Journals (Sweden)

    Kelsey P. Mayrand

    2015-12-01

    Full Text Available Introduction: Current resuscitation guidelines emphasize a systems approach with a strong emphasis on quality cardiopulmonary resuscitation (CPR). Despite the American Heart Association (AHA) emphasis on quality CPR for over 10 years, resuscitation teams do not consistently meet recommended CPR standards. The objective was to assess the impact on chest compression depth of factors including bed height, step stool utilization, position of the rescuer’s arms and shoulders relative to the point of chest compression, and rescuer characteristics including height, weight, and gender. Methods: Fifty-six eligible subjects, including physician assistant students and first-year emergency medicine residents, were enrolled and randomized to intervention (bed lowered and step stool readily available) and control (bed raised and step stool accessible, but concealed) groups. We instructed all subjects to complete all interventions on a high-fidelity mannequin per AHA guidelines. Secondary end points included subject arm angle, height, weight group, and gender. Results: Using an intention-to-treat analysis, the mean compression depths for the intervention and control groups were not significantly different. Subjects positioning their arms at a 90-degree angle relative to the sagittal plane of the mannequin’s chest achieved a mean compression depth significantly greater than those compressing at an angle less than 90 degrees. There was a significant correlation between using a step stool and achieving the correct shoulder position. Subject height, weight group, and gender were all independently associated with compression depth. Conclusion: Rescuer arm position relative to the patient’s chest and step stool utilization during CPR are modifiable factors facilitating improved chest compression depth.

  18. High temperature compression tests performed on doped fuels

    International Nuclear Information System (INIS)

    Duguay, C.; Mocellin, A.; Dehaudt, P.; Fantozzi, G.

    1997-01-01

    The use of additives of corundum structure M2O3 (M = Cr, Al) is an effective way of promoting grain growth in uranium dioxide. The high-temperature compressive deformation of large-grained UO2 doped with these oxides has been investigated and compared with that of pure UO2 with a standard microstructure. Such doped fuels are expected to exhibit enhanced plasticity. Their use would therefore reduce the pellet-cladding mechanical interaction and thus improve the performance of the nuclear fuel. (orig.)

  19. Semi-confined compression of microfabricated polymerized biomaterial constructs

    International Nuclear Information System (INIS)

    Moraes, Christopher; Likhitpanichkul, Morakot; Simmons, Craig A; Sun, Yu; Zhao, Ruogang

    2011-01-01

    Mechanical forces are critical parameters in engineering functional tissue because of their established influence on cellular behaviour. However, identifying ideal combinations of mechanical, biomaterial and chemical stimuli to obtain a desired cellular response requires high-throughput screening technologies, which may be realized through microfabricated systems. This paper reports on the development and characterization of a MEMS device for semi-confined biomaterial compression. An array of these devices would enable studies involving mechanical deformation of three-dimensional biomaterials, an important parameter in creating physiologically relevant microenvironments in vitro. The described device has the ability to simultaneously apply a range of compressive mechanical stimuli to multiple polymerized hydrogel microconstructs. Local micromechanical strains generated within the semi-confined hydrogel cylinders are characterized and compared with those produced in current micro- and macroscale technologies. In contrast to previous work generating unconfined compression in microfabricated devices, the semi-confined compression model used in this work generates uniform regions of strain within the central portion of each hydrogel, demonstrated here to range from 20% to 45% across the array. The uniform strains achieved simplify experimental analysis and improve the utility of the compression platform. Furthermore, the system is compatible with a wide variety of polymerizable biomaterials, enhancing device versatility and usability in tissue engineering and fundamental cell biology studies

  20. Graph Compression by BFS

    Directory of Open Access Journals (Sweden)

    Alberto Apostolico

    2009-08-01

    Full Text Available The Web Graph is a large-scale graph that does not fit in main memory, so lossless compression methods have been proposed for it. This paper introduces a compression scheme that combines efficient storage with fast retrieval of the information in a node. The scheme exploits the properties of the Web Graph without assuming an ordering of the URLs, so that it may be applied to more general graphs. Tests on several datasets achieve space savings of about 10% over existing methods.
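Two ingredients typical of BFS-based graph compression, relabelling vertices in BFS order so neighbours get nearby ids and storing each sorted adjacency list as a first id plus small gaps, can be sketched as follows (a simplified illustration, not the paper's exact encoding):

```python
from collections import deque

def bfs_order(adj, root=0):
    # Relabel vertices in BFS order: neighbours tend to receive nearby
    # new ids, which makes adjacency gaps small and cheap to encode.
    order, seen, q = [], {root}, deque([root])
    while q:
        u = q.popleft()
        order.append(u)
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                q.append(v)
    return {old: new for new, old in enumerate(order)}

def gap_encode(neighbours):
    # Store a sorted neighbour list as its first id plus successive
    # gaps; small gaps compress well under a variable-length code.
    ns = sorted(neighbours)
    return ([ns[0]] + [b - a for a, b in zip(ns, ns[1:])]) if ns else []
```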

  1. Neutron scattering experiments of the ionic crystal deformed plastically with uniaxial compression under high temperature

    Energy Technology Data Exchange (ETDEWEB)

    Tsuchiya, Yoshinori; Minakawa, Nobuaki; Aizawa, Kazuya; Ozawa, Kunio [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1996-04-01

    With the aim of growing large alkali halide (AH) single crystals, the mosaic structure of small AH single crystals deformed plastically by uniaxial compression at high temperature was evaluated in neutron scattering experiments. Using the TAS-2 spectrometer installed at the JRR-3M guide hall of the Japan Atomic Energy Research Institute, the rocking curve at a representative reflection of each specimen was measured to observe the mosaic structure accompanying expansion of the crystal due to compression. As a result, although the specimen before compression appeared to be already divided into several parts, the rocking curve after a compression time of 10 sec already showed splitting into divisions, suggesting finer fragmentation of the crystal, and splitting of the rocking curve after a compression time of 600 sec was observed on the 220 reflection. In addition, every compressed specimen showed some change in crystallinity relative to the standard sample. (G.K.)

  2. n-Gram-Based Text Compression

    Science.gov (United States)

    Duong, Hieu N.; Snasel, Vaclav

    2016-01-01

    We propose an efficient method for compressing Vietnamese text using n-gram dictionaries. It has a significant compression ratio in comparison with those of state-of-the-art methods on the same dataset. Given a text, first, the proposed method splits it into n-grams and then encodes them based on n-gram dictionaries. In the encoding phase, we use a sliding window with a size that ranges from bigram to five grams to obtain the best encoding stream. Each n-gram is encoded by two to four bytes accordingly based on its corresponding n-gram dictionary. We collected 2.5 GB text corpus from some Vietnamese news agencies to build n-gram dictionaries from unigram to five grams and achieve dictionaries with a size of 12 GB in total. In order to evaluate our method, we collected a testing set of 10 different text files with different sizes. The experimental results indicate that our method achieves compression ratio around 90% and outperforms state-of-the-art methods. PMID:27965708

  3. n-Gram-Based Text Compression

    Directory of Open Access Journals (Sweden)

    Vu H. Nguyen

    2016-01-01

    Full Text Available We propose an efficient method for compressing Vietnamese text using n-gram dictionaries. It has a significant compression ratio in comparison with those of state-of-the-art methods on the same dataset. Given a text, first, the proposed method splits it into n-grams and then encodes them based on n-gram dictionaries. In the encoding phase, we use a sliding window with a size that ranges from bigram to five grams to obtain the best encoding stream. Each n-gram is encoded by two to four bytes accordingly based on its corresponding n-gram dictionary. We collected 2.5 GB text corpus from some Vietnamese news agencies to build n-gram dictionaries from unigram to five grams and achieve dictionaries with a size of 12 GB in total. In order to evaluate our method, we collected a testing set of 10 different text files with different sizes. The experimental results indicate that our method achieves compression ratio around 90% and outperforms state-of-the-art methods.
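
The greedy sliding-window encoding described above - preferring the longest matching n-gram, from five-grams down to bigrams - can be sketched as follows; this is a simplified illustration with toy integer codes, not the authors' byte-level format:

```python
def ngram_encode(tokens, dictionaries):
    """Greedy longest-match encoding: try 5-grams down to bigrams,
    falling back to literal tokens. `dictionaries` maps n -> {ngram: code}."""
    codes, i = [], 0
    while i < len(tokens):
        for n in range(5, 1, -1):  # prefer the longest n-gram
            gram = tuple(tokens[i:i + n])
            if len(gram) == n and gram in dictionaries.get(n, {}):
                codes.append(dictionaries[n][gram])
                i += n
                break
        else:
            codes.append(tokens[i])  # literal fallback for unseen unigrams
            i += 1
    return codes
```

In the actual scheme each emitted code would occupy two to four bytes depending on which n-gram dictionary it came from.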

  4. Does accelerometer feedback on high-quality chest compression improve survival rate? An in-hospital cardiac arrest simulation.

    Science.gov (United States)

    Jung, Min Hee; Oh, Je Hyeok; Kim, Chan Woong; Kim, Sung Eun; Lee, Dong Hoon; Chang, Wen Joen

    2015-08-01

    We investigated whether visual feedback from an accelerometer device facilitated high-quality chest compressions during an in-hospital cardiac arrest simulation using a manikin. Thirty health care providers participated in an in-hospital cardiac arrest simulation with 1 minute of continuous chest compressions. Chest compressions were performed on a manikin lying on a bed according to visual feedback from an accelerometer feedback device. The manikin and accelerometer recorded chest compression data simultaneously. The simulated patient was deemed to have survived when the chest compression data satisfied all of the preset high-quality chest compression criteria (depth ≥51 mm, rate >100 per minute, and ≥95% full recoil). Survival rates were calculated from the feedback device and manikin data. The survival rate according to the feedback device data was 80%; however, the manikin data indicated a significantly lower survival rate (46.7%; P = .015). The difference between the accelerometer and manikin survival rates was not significant for participants with a body mass index greater than or equal to 20 kg/m² (93.3 vs 73.3%, respectively; P = .330); however, the difference in survival rate was significant in participants with a body mass index less than 20 kg/m² (66.7 vs 20.0%, respectively; P = .025). The use of accelerometer feedback devices to facilitate high-quality chest compression may not be appropriate for lightweight rescuers because of the potential for compression depth overestimation. Clinical Research Information Service (KCT0001449). Copyright © 2015 Elsevier Inc. All rights reserved.
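
The preset survival criteria form a simple conjunction, which might be checked as follows (a hypothetical helper mirroring the thresholds quoted in the abstract, not the study's analysis software):

```python
def meets_criteria(min_depth_mm, rate_per_min, full_recoil_frac):
    """All three high-quality chest-compression criteria must hold:
    depth >= 51 mm, rate > 100/min, and >= 95% full recoil."""
    return (min_depth_mm >= 51
            and rate_per_min > 100
            and full_recoil_frac >= 0.95)
```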

  5. Compressive Sampling of EEG Signals with Finite Rate of Innovation

    Directory of Open Access Journals (Sweden)

    Poh Kok-Kiong

    2010-01-01

    Full Text Available Analyses of electroencephalographic signals and subsequent diagnoses can only be done effectively on long-term recordings that preserve the signals' morphologies. Currently, electroencephalographic signals are obtained at the Nyquist rate or higher, thus introducing redundancies. Existing compression methods remove these redundancies, thereby achieving compression. We propose an alternative compression scheme based on a sampling theory developed for signals with a finite rate of innovation (FRI), which compresses electroencephalographic signals during acquisition. We model the signals as FRI signals and then sample them at their rate of innovation. The signals are thus effectively represented by a small set of Fourier coefficients corresponding to the signals' rate of innovation. Using the FRI theory, the original signals can be reconstructed using this set of coefficients. Seventy-two hours of electroencephalographic recording are tested, and results based on metrics used in the compression literature and on morphological similarities of electroencephalographic signals are presented. The proposed method achieves results comparable to those of wavelet compression methods, achieving low reconstruction errors while preserving the morphologies of the signals. More importantly, it introduces a new framework to acquire electroencephalographic signals at their rate of innovation, thus enabling a less costly low-rate sampling device that does not waste precious computational resources.

  6. Compact toroid formation, compression, and acceleration

    International Nuclear Information System (INIS)

    Degnan, J.H.; Peterkin, R.E. Jr.; Baca, G.P.; Beason, J.D.; Bell, D.E.; Dearborn, M.E.; Dietz, D.; Douglas, M.R.; Englert, S.E.; Englert, T.J.; Hackett, K.E.; Holmes, J.H.; Hussey, T.W.; Kiuttu, G.F.; Lehr, F.M.; Marklin, G.J.; Mullins, B.W.; Price, D.W.; Roderick, N.F.; Ruden, E.L.; Sovinec, C.R.; Turchi, P.J.; Bird, G.; Coffey, S.K.; Seiler, S.W.; Chen, Y.G.; Gale, D.; Graham, J.D.; Scott, M.; Sommars, W.

    1993-01-01

    Research on forming, compressing, and accelerating milligram-range compact toroids using a meter diameter, two-stage, puffed gas, magnetic field embedded coaxial plasma gun is described. The compact toroids that are studied are similar to spheromaks, but they are threaded by an inner conductor. This research effort, named MARAUDER (Magnetically Accelerated Ring to Achieve Ultra-high Directed Energy and Radiation), is not a magnetic confinement fusion program like most spheromak efforts. Rather, the ultimate goal of the present program is to compress toroids to high mass density and magnetic field intensity, and to accelerate the toroids to high speed. There are a variety of applications for compressed, accelerated toroids including fast opening switches, x-radiation production, radio frequency (rf) compression, as well as charge-neutral ion beam and inertial confinement fusion studies. Experiments performed to date to form and accelerate toroids have been diagnosed with magnetic probe arrays, laser interferometry, time and space resolved optical spectroscopy, and fast photography. Parts of the experiment have been designed by, and experimental results are interpreted with, the help of two-dimensional (2-D), time-dependent magnetohydrodynamic (MHD) numerical simulations. When not driven by a second discharge, the toroids relax to a Woltjer--Taylor equilibrium state that compares favorably to the results of 2-D equilibrium calculations and to 2-D time-dependent MHD simulations. Current, voltage, and magnetic probe data from toroids that are driven by an acceleration discharge are compared to 2-D MHD and to circuit solver/slug model predictions. Results suggest that compact toroids are formed in 7--15 μsec, and can be accelerated intact with material species the same as injected gas species and entrained mass ≥1/2 the injected mass

  7. Near-lossless multichannel EEG compression based on matrix and tensor decompositions.

    Science.gov (United States)

    Dauwels, Justin; Srinivasan, K; Reddy, M Ramasubba; Cichocki, Andrzej

    2013-05-01

    A novel near-lossless compression algorithm for multichannel electroencephalogram (MC-EEG) is proposed based on matrix/tensor decomposition models. MC-EEG is represented in suitable multiway (multidimensional) forms to efficiently exploit temporal and spatial correlations simultaneously. Several matrix/tensor decomposition models are analyzed in view of efficient decorrelation of the multiway forms of MC-EEG. A compression algorithm is built on the principle of "lossy plus residual coding," consisting of a matrix/tensor decomposition-based coder in the lossy layer followed by arithmetic coding in the residual layer. This approach guarantees a specifiable maximum absolute error between the original and reconstructed signals. The compression algorithm is applied to three different scalp EEG datasets and an intracranial EEG dataset, each with a different sampling rate and resolution. The proposed algorithm achieves attractive compression ratios compared to compressing individual channels separately. For similar compression ratios, the proposed algorithm achieves nearly fivefold lower average error compared to a similar wavelet-based volumetric MC-EEG compression algorithm.
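
The "lossy plus residual" construction can guarantee a maximum absolute error because the residual is quantized with a step of 2·max_err + 1, so every integer residual lies at most max_err away from a reconstruction level. A minimal sketch of this residual layer, assuming integer-valued samples (an illustration of the principle, not the paper's coder):

```python
import numpy as np

def residual_layer(x, lossy_approx, max_err):
    """Quantize the integer residual with step 2*max_err + 1. The indices q
    would be entropy-coded (e.g. arithmetic coding); the reconstruction is
    then guaranteed to satisfy |recon - x| <= max_err elementwise."""
    step = 2 * max_err + 1
    r = x - lossy_approx
    q = np.round(r / step).astype(int)
    recon = lossy_approx + q * step
    return q, recon
```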

  8. Enhancing the Hardness and Compressive Response of Magnesium Using Complex Composition Alloy Reinforcement

    Directory of Open Access Journals (Sweden)

    Khin Sandar Tun

    2018-04-01

    Full Text Available The present study reports the development of new magnesium composites containing complex composition alloy (CCA) particles. Materials were synthesized using a powder metallurgy route incorporating hybrid microwave sintering and hot extrusion. The presence and variation in the amount of ball-milled CCA particles (2.5 wt %, 5 wt %, and 7.5 wt %) in a magnesium matrix and their effect on the microstructure and mechanical properties of Mg-CCA composites were investigated. The use of CCA particle reinforcement effectively led to significant matrix grain refinement. Uniformly distributed CCA particles were observed in the microstructure of the composites. The refined microstructure, coupled with the intrinsically high hardness of the CCA particles (406 HV), contributed to the superior mechanical properties of the Mg-CCA composites. A microhardness of 80 HV was achieved in the Mg-7.5HEA (high entropy alloy) composite, which is 1.7 times higher than that of pure Mg. A significant improvement in compressive yield strength (63%) and ultimate compressive strength (79%) in the Mg-7.5CCA composite was achieved when compared to pure Mg, while maintaining the same ductility level. When compared to ball-milled amorphous-particle-reinforced and ceramic-particle-reinforced Mg composites, higher yield and compressive strengths in Mg-CCA composites were achieved at a similar ductility level.

  9. Use of magnetic compression based on amorphous alloys as a drive for induction linacs

    International Nuclear Information System (INIS)

    Birx, D.L.; Cook, E.G.; Hawkins, S.A.; Poor, S.E.; Reginato, L.; Schmidt, J.; Smith, M.W.

    1984-01-01

    In anticipation of current and future needs of the Particle Beam Program and other programs at the Lawrence Livermore National Laboratory, we are continuing efforts in the development of high-repetition-rate magnetic pulse compressors that use ferromagnetic metallic glasses, in both the linear and the very-high-saturation regimes. These devices are ideally suited as drivers for linear induction accelerators, where duty factor or average repetition rate (hundreds of hertz) requirements exceed the parameters that can be achieved by pulse compression using spark gaps. The technique of magnetic pulse compression has been in use for several decades, but relatively recent developments in rapidly quenched magnetic metals of very thin cross section have led to the development of state-of-the-art magnetic pulse compressors with very high peak power, repetition rates, and reliability. This paper will describe results of recent experiments and the relevant electrical and mechanical properties of magnetic pulse compressors needed to achieve high efficiency and reliability.

  10. High level compressive residual stresses produced in aluminum alloys by laser shock processing

    International Nuclear Information System (INIS)

    Gomez-Rosas, G.; Rubio-Gonzalez, C.; Ocana, J.L; Molpeceres, C.; Porro, J.A.; Chi-Moreno, W.; Morales, M.

    2005-01-01

    Laser shock processing (LSP) has been proposed as a competitive alternative technology to classical treatments for improving the fatigue and wear resistance of metals. We present a configuration and results for metal surface treatments with underwater laser irradiation at 1064 nm. A convergent lens is used to deliver 1.2 J/cm² in an 8 ns FWHM laser pulse produced by a 10 Hz Q-switched Nd:YAG laser; two laser spot diameters were used: 0.8 and 1.5 mm. Results using pulse densities of 2500 pulses/cm² on 6061-T6 aluminum samples and 5000 pulses/cm² on 2024 aluminum samples are presented. High levels of compressive residual stress are produced: -1600 MPa for the 6061-T6 Al alloy and -1400 MPa for the 2024 Al alloy. It has been shown that the surface residual stress level is higher than that achieved by conventional shot peening, and at greater depths. This method can be applied to the surface treatment of final metal products.

  11. Compressed air-assisted solvent extraction (CASX) for metal removal.

    Science.gov (United States)

    Li, Chi-Wang; Chen, Yi-Ming; Hsiao, Shin-Tien

    2008-03-01

    A novel process, compressed air-assisted solvent extraction (CASX), was developed to generate micro-sized solvent-coated air bubbles (MSAB) for metal extraction. Through pressurization of solvent with compressed air followed by release of the air-oversaturated solvent into metal-containing wastewater, MSAB are generated instantaneously. The enormous surface area of MSAB makes the extraction process extremely fast and achieves a very high aqueous/solvent weight ratio (A/S ratio). The CASX process completely removed Cr(VI) from acidic electroplating wastewater at an A/S ratio of 115 and an extraction time of less than 10 s. When synthetic wastewater containing 50 mg/l of Cd(II) was treated, A/S ratios higher than 714 and 1190 could be achieved using solvent with extractant/diluent weight ratios of 1:1 and 5:1, respectively. Also, MSAB have very different physical properties, such as size and density, compared to emulsified solvent droplets, making separation and recovery of solvent from the treated effluent very easy.

  12. Compressed gas domestic aerosol valve design using high viscous product

    Directory of Open Access Journals (Sweden)

    A Nourian

    2016-10-01

    Full Text Available Most current universal consumer aerosol products using high-viscosity products such as cooking oil, antiperspirants, and hair removal cream primarily use LPG (Liquefied Petroleum Gas) propellant, which is environmentally unfriendly. The advantages of the new innovative technology described in this paper are: i. No butane or other liquefied hydrocarbon gas is used as a propellant; it is replaced with compressed air, nitrogen or another safe gas propellant. ii. Customer-acceptable spray quality and consistency during the can lifetime. iii. Conventional cans and filling technology. The only feasible energy source for replacing VOCs (Volatile Organic Compounds) and greenhouse gases, which must be avoided, is an inert gas (i.e. compressed air), which improves atomisation by generating gas bubbles and turbulence inside the atomiser insert and the actuator. This research concentrates on using "bubbly flow" in the valve stem, with injection of compressed gas into the passing flow, thus also generating turbulence. The new valve designed in this investigation using inert gases has advantages over a conventional valve with butane propellant for high-viscosity products (> 400 cP) because, when the valving arrangement is fully open, there are negligible energy losses as fluid passes through the valve from the interior of the container to the actuator insert. The use of the valving arrangement thus permits all pressure drops to be controlled, resulting in improved control of atomising efficiency and flow rate, whereas in conventional valves a significant pressure drop occurs through the valve, which has a complex effect on the corresponding spray.

  13. Disk-based compression of data from genome sequencing.

    Science.gov (United States)

    Grabowski, Szymon; Deorowicz, Sebastian; Roguski, Łukasz

    2015-05-01

    High-coverage sequencing data have significant, yet hard to exploit, redundancy. Most FASTQ compressors cannot efficiently compress the DNA stream of large datasets, since the redundancy between overlapping reads cannot be easily captured in the (relatively small) main memory. More interesting solutions for this problem are disk-based; the better of these, from Cox et al. (2012), is based on the Burrows-Wheeler transform (BWT) and achieves 0.518 bits per base for a 134.0 Gbp human genome sequencing collection with almost 45-fold coverage. We propose overlapping reads compression with minimizers, a compression algorithm dedicated to sequencing reads (DNA only). Our method makes use of the conceptually simple and easily parallelizable idea of minimizers to obtain 0.317 bits per base as the compression ratio, allowing the 134.0 Gbp dataset to fit into only 5.31 GB of space. Availability: http://sun.aei.polsl.pl/orcom under a free license. Contact: sebastian.deorowicz@polsl.pl. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
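
The minimizer idea is conceptually simple: in every window of w consecutive k-mers of a read, keep the lexicographically smallest k-mer, so overlapping reads tend to share minimizers and can be bucketed together on disk before compression. A toy sketch (parameter values and function name are ours):

```python
def minimizers(read, k, w):
    """Return the set of (k,w)-minimizers of a read: the smallest k-mer
    in every window of w consecutive k-mers."""
    kmers = [read[i:i + k] for i in range(len(read) - k + 1)]
    mins = set()
    for i in range(len(kmers) - w + 1):
        mins.add(min(kmers[i:i + w]))
    return mins
```

Reads sharing a minimizer are likely to overlap, so grouping by minimizer brings the redundancy of high-coverage data into the same disk bucket without holding the whole collection in memory.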

  14. Structure and Properties of Silica Glass Densified in Cold Compression and Hot Compression

    Science.gov (United States)

    Guerette, Michael; Ackerson, Michael R.; Thomas, Jay; Yuan, Fenglin; Bruce Watson, E.; Walker, David; Huang, Liping

    2015-10-01

    Silica glass has been shown in numerous studies to possess significant capacity for permanent densification under pressure at different temperatures to form high density amorphous (HDA) silica. However, it is unknown to what extent the processes leading to irreversible densification of silica glass in cold compression at room temperature and in hot compression (e.g., near the glass transition temperature) are common in nature. In this work, a hot-compression technique was used to quench silica glass from high temperature (1100 °C) and high pressure (up to 8 GPa) conditions, which leads to a density increase of ~25% and a Young's modulus increase of ~71% relative to pristine silica glass at ambient conditions. Our experiments and molecular dynamics (MD) simulations provide solid evidence that the intermediate-range order of the hot-compressed HDA silica is distinct from that of the counterpart cold-compressed at room temperature. This explains the much higher thermal and mechanical stability of the former relative to the latter upon heating and compression, as revealed in our in-situ Brillouin light scattering (BLS) experiments. Our studies demonstrate the limitation of the resulting density as a structural indicator of polyamorphism, and point out the importance of temperature during compression for a fundamental understanding of HDA silica.

  15. Contributions to HEVC Prediction for Medical Image Compression

    OpenAIRE

    Guarda, André Filipe Rodrigues

    2016-01-01

    Medical imaging technology and applications are continuously evolving, dealing with images of increasing spatial and temporal resolutions, which allow easier and more accurate medical diagnosis. However, this increase in resolution demands a growing amount of data to be stored and transmitted. Despite the high coding efficiency achieved by the most recent image and video coding standards in lossy compression, they are not well suited for quality-critical medical image compression...

  16. Shock compression experiments on Lithium Deuteride single crystals.

    Energy Technology Data Exchange (ETDEWEB)

    Knudson, Marcus D.; Desjarlais, Michael Paul; Lemke, Raymond W.

    2014-10-01

    Shock compression experiments in the few hundred GPa (multi-Mbar) regime were performed on Lithium Deuteride (LiD) single crystals. This study utilized the high-velocity flyer plate capability of the Sandia Z Machine to perform impact experiments at flyer plate velocities in the range of 17 - 32 km/s. Measurements included pressure, density, and temperature between ~200 - 600 GPa along the Principal Hugoniot - the locus of end states achievable through compression by large amplitude shock waves - as well as pressure and density of re-shock states up to ~900 GPa. The experimental measurements are compared with recent density functional theory calculations as well as a new tabular equation of state developed at Los Alamos National Laboratory.
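
Hugoniot states such as those measured here follow from the Rankine-Hugoniot jump conditions, which give the state behind a steady shock from the initial density and the measured shock and particle velocities. A standard textbook sketch (not the analysis code used in the study):

```python
def hugoniot_state(rho0, us, up, p0=0.0):
    """Rankine-Hugoniot jump conditions for a steady shock:
    mass, momentum and energy conservation across the front."""
    rho1 = rho0 * us / (us - up)                       # compressed density
    p1 = p0 + rho0 * us * up                           # shock pressure
    e1 = 0.5 * (p1 + p0) * (1.0 / rho0 - 1.0 / rho1)   # specific internal energy jump
    return rho1, p1, e1
```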

  17. [Compressive and bend strength of experimental admixed high copper alloys].

    Science.gov (United States)

    Sourai, P; Paximada, H; Lagouvardos, P; Douvitsas, G

    1988-01-01

    Mixed alloys for dental amalgams have been used mainly in the form of admixed alloys, where eutectic spheres are blended with conventional flakes. In the present study, the compressive strength, bend strength and microstructure of two high-copper alloys (Tytin, Ana-2000) are compared with those of three experimental alloys prepared from the two high-copper alloys by mixing them in proportions of 3:1, 1:1 and 1:3 by weight. The results revealed that the experimental alloys retained high early and final strength values without any significant change in their microstructure.

  18. Wellhead compression

    Energy Technology Data Exchange (ETDEWEB)

    Harrington, Joe [Sertco Industries, Inc., Okemah, OK (United States); Vazquez, Daniel [Hoerbiger Service Latin America Inc., Deerfield Beach, FL (United States); Jacobs, Denis Richard [Hoerbiger do Brasil Industria de Equipamentos, Cajamar, SP (Brazil)

    2012-07-01

    Over time, all wells experience a natural decline in oil and gas production. In gas wells, the major problems are liquid loading and low downhole differential pressures which negatively impact total gas production. As a form of artificial lift, wellhead compressors help reduce the tubing pressure resulting in gas velocities above the critical velocity needed to surface water, oil and condensate regaining lost production and increasing recoverable reserves. Best results come from reservoirs with high porosity, high permeability, high initial flow rates, low decline rates and high total cumulative production. In oil wells, excessive annulus gas pressure tends to inhibit both oil and gas production. Wellhead compression packages can provide a cost effective solution to these problems by reducing the system pressure in the tubing or annulus, allowing for an immediate increase in production rates. Wells furthest from the gathering compressor typically benefit the most from wellhead compression due to system pressure drops. Downstream compressors also benefit from higher suction pressures reducing overall compression horsepower requirements. Special care must be taken in selecting the best equipment for these applications. The successful implementation of wellhead compression from an economical standpoint hinges on the testing, installation and operation of the equipment. Key challenges and suggested equipment features designed to combat those challenges and successful case histories throughout Latin America are discussed below.(author)

  19. Isentropic compression with the SPHINX machine

    International Nuclear Information System (INIS)

    D'almeida, T; Lasalle, F.; Morell, A.; Grunenwald, J.; Zucchini, F.; Loyen, A.

    2013-01-01

    The SPHINX machine is a pulsed high-power generator (class 6 MA, 1 μs) that can be used in the framework of inertial fusion for performing isentropic compression experiments. The magnetic field created by the current pulse generates a quasi-isentropic compression of a metallic liner. In order to optimize this mode of operation, the current pulse is shaped by a device called the DLCM (Dynamic Load Current Multiplier). The DLCM device allows both an increase of the amplitude of the current injected into the liner and its shaping. Some preliminary results concerning an aluminium liner are reported. The measurement of the velocity of the internal surface of the liner during its implosion, over quite a long trajectory, has been possible by interferometry, and the results agree well with simulations based on the experimental value of the current delivered to the liner.

  20. High temperature compression tests performed on doped fuels

    Energy Technology Data Exchange (ETDEWEB)

    Duguay, C.; Mocellin, A.; Dehaudt, P. [Commissariat a l`Energie Atomique, CEA Grenoble (France); Fantozzi, G. [INSA Lyon - GEMPPM, Villeurbanne (France)

    1997-12-31

    The use of additives of corundum structure M{sub 2}O{sub 3} (M=Cr, Al) is an effective way of promoting grain growth of uranium dioxide. The high-temperature compressive deformation of large-grained UO{sub 2} doped with these oxides has been investigated and compared with that of pure UO{sub 2} with a standard microstructure. Such doped fuels are expected to exhibit enhanced plasticity. Their use would therefore reduce the pellet-cladding mechanical interaction and thus improve the performances of the nuclear fuel. (orig.) 5 refs.

  1. Compressed Sensing, Pseudodictionary-Based, Superresolution Reconstruction

    Directory of Open Access Journals (Sweden)

    Chun-mei Li

    2016-01-01

    Full Text Available The spatial resolution of digital images is a critical factor affecting photogrammetry precision. Single-frame superresolution image reconstruction is a typical underdetermined inverse problem. To solve this type of problem, a compressive sensing, pseudodictionary-based superresolution reconstruction method is proposed in this study. The proposed method achieves pseudodictionary learning with an available low-resolution image using the K-SVD algorithm, which is based on the sparse characteristics of the digital image. Then, the sparse representation coefficients of the low-resolution image are obtained by solving an l0-norm minimization problem, and the sparse coefficients and the high-resolution pseudodictionary are used to reconstruct image tiles with high resolution. Finally, single-frame-image superresolution reconstruction is achieved. The proposed method is applied to photogrammetric images, and the experimental results indicate that it effectively increases image resolution, increases image information content, and achieves superresolution reconstruction. The reconstructed results are better than those obtained from traditional interpolation methods in terms of visual effects and quantitative indicators.
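
The l0-constrained sparse-coding step is typically approximated greedily; Orthogonal Matching Pursuit (OMP) is a common choice for this stage. A compact sketch of OMP over a column dictionary D (an illustration of the sparse-coding stage only, not the authors' exact solver):

```python
import numpy as np

def omp(D, y, n_nonzero):
    """Orthogonal Matching Pursuit: greedily select the atom most
    correlated with the residual, then refit all selected atoms."""
    residual, idx = y.copy(), []
    for _ in range(n_nonzero):
        idx.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        residual = y - D[:, idx] @ coef
    x = np.zeros(D.shape[1])
    x[idx] = coef
    return x
```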

  2. Radial and axial compression of pure electron

    International Nuclear Information System (INIS)

    Park, Y.; Soga, Y.; Mihara, Y.; Takeda, M.; Kamada, K.

    2013-01-01

    Experimental studies are carried out on compression of the density distribution of a pure electron plasma confined in a Malmberg-Penning trap at Kanazawa University. More than a sixfold increase of the on-axis density is observed under application of an external rotating electric field that couples to low-order Trivelpiece-Gould modes. Axial compression of the density distribution, shortening the axial length by a factor of two, is achieved by controlling the confining potential at both ends of the plasma. A substantial increase of the axial kinetic energy is observed during the axial compression. (author)

  3. Fractal Image Compression Based on High Entropy Values Technique

    Directory of Open Access Journals (Sweden)

    Douaa Younis Abbaas

    2018-04-01

    Full Text Available Many attempts have been made to improve the encoding stage of FIC because it is time-consuming. These attempts reduce the size of the search pool for range-domain matching, but most of them lead to poor quality or a lower compression ratio of the reconstructed image. This paper presents a method to improve the performance of the full search algorithm by combining FIC (lossy compression) with a lossless technique (in this case, entropy coding). The entropy technique reduces the size of the domain pool (i.e., the number of domain blocks) based on the entropy values of the range blocks and domain blocks; the results of the full search algorithm and of the proposed entropy-based algorithm are then compared to see which gives the best results (such as reduced encoding time with acceptable values of both compression quality parameters, C.R (Compression Ratio) and PSNR (Image Quality)). The experimental results prove that the proposed entropy technique reduces the encoding time while keeping the compression ratio and reconstructed image quality as good as possible.
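
The entropy screening can be sketched as: compute the Shannon entropy of every block once, then match each range block only against domain blocks of similar entropy, shrinking the pool before the expensive full search. A simplified illustration (the tolerance and function names are ours):

```python
import math
from collections import Counter

def block_entropy(block):
    """Shannon entropy (bits) of the pixel values in a block."""
    counts = Counter(block)
    n = len(block)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def filter_domain_pool(range_block, domain_blocks, tol=0.5):
    """Keep only domain blocks whose entropy is within `tol` bits of the
    range block's entropy, reducing the range-domain search space."""
    h_r = block_entropy(range_block)
    return [d for d in domain_blocks if abs(block_entropy(d) - h_r) <= tol]
```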

  4. A RCCI operational limits assessment in a medium duty compression ignition engine using an adapted compression ratio

    International Nuclear Information System (INIS)

    Benajes, Jesús; Pastor, José V.; García, Antonio; Boronat, Vicente

    2016-01-01

    Highlights: • RCCI with CR 12.75 reaches up to 80% load fulfilling mechanical limits. • Ultra-low levels of NOx and soot emissions are obtained in the whole engine map. • Ultra-high levels of CO and uHC have been measured, mainly at low load. • RCCI improves fuel consumption from 25% to 80% engine load compared with CDC. - Abstract: The Reactivity Controlled Compression Ignition concept offers ultra-low nitrogen oxide and soot emissions with a high thermal efficiency. This work investigates the capabilities of this low temperature combustion concept over the whole map of a medium duty engine, proposing strategies to solve its main challenges. In this sense, extension of the concept to high loads without exceeding mechanical stress limits, as well as mitigation of the carbon monoxide and unburned hydrocarbon emissions and the fuel consumption penalty at low load, have been identified as the main Reactivity Controlled Compression Ignition drawbacks. For this purpose, a single cylinder engine derived from a commercial four-cylinder medium-duty engine, with an adapted compression ratio of 12.75, is used. Commercial 95 octane gasoline was used as the low reactivity fuel and commercial diesel as the high reactivity fuel. The study consists of two parts. First, the work focuses on the development and evaluation of an engine map, trying to achieve the maximum possible load without exceeding a pressure rise rate of 15 bar/CAD. The second part addresses improving fuel consumption and carbon monoxide and unburned hydrocarbon emissions at low load. Results suggest that it is possible to achieve up to 80% of the nominal conventional diesel combustion engine load without surpassing the constraints on pressure rise rate (below 15 bar/CAD) and maximum pressure peak (below 190 bar), while obtaining ultra-low levels of nitrogen oxide and soot emissions. Regarding the low load challenges, a particular methodology has been developed, sweeping the gasoline-diesel blend together
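
The mechanical constraints that bound the load extension are a simple check on two measured quantities (a trivial helper mirroring the limits quoted in the abstract, not the authors' test procedure):

```python
def within_mechanical_limits(prr_bar_per_cad, peak_pressure_bar):
    """Constraints used in the study: pressure rise rate below
    15 bar/CAD and maximum cylinder pressure below 190 bar."""
    return prr_bar_per_cad < 15.0 and peak_pressure_bar < 190.0
```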

  5. Compression and information recovery in ptychography

    Science.gov (United States)

    Loetgering, L.; Treffer, D.; Wilhein, T.

    2018-04-01

    Ptychographic coherent diffraction imaging (PCDI) is a scanning microscopy modality that allows for simultaneous recovery of object and illumination information. This ability renders PCDI a suitable technique for x-ray lensless imaging and optics characterization. Its potential for information recovery typically relies on large amounts of data redundancy. However, the field of view in ptychography is practically limited by the memory and the computational facilities available. We describe techniques that achieve robust ptychographic information recovery at high compression rates. The techniques are compared and tested with experimental data.

  6. Introduction to compressible fluid flow

    CERN Document Server

    Oosthuizen, Patrick H

    2013-01-01

    Introduction; The Equations of Steady One-Dimensional Compressible Flow; Some Fundamental Aspects of Compressible Flow; One-Dimensional Isentropic Flow; Normal Shock Waves; Oblique Shock Waves; Expansion Waves - Prandtl-Meyer Flow; Variable Area Flows; Adiabatic Flow with Friction; Flow with Heat Transfer; Linearized Analysis of Two-Dimensional Compressible Flows; Hypersonic and High-Temperature Flows; High-Temperature Gas Effects; Low-Density Flows; Bibliography; Appendices

  7. [Neurovascular compression of the medulla oblongata: a rare cause of secondary hypertension].

    Science.gov (United States)

    Nádas, Judit; Czirják, Sándor; Igaz, Péter; Vörös, Erika; Jermendy, György; Rácz, Károly; Tóth, Miklós

    2014-05-25

    Compression of the rostral ventrolateral medulla oblongata is one of the rarely identified causes of refractory hypertension. In patients with severe, intractable hypertension caused by neurovascular compression, neurosurgical decompression should be considered. The authors present the history of a 20-year-old man with severe hypertension. After excluding other possible causes of secondary hypertension, the underlying cause of his high blood pressure was identified from the demonstration of neurovascular compression on magnetic resonance angiography and from increased sympathetic activity (sinus tachycardia) during the high blood pressure episodes. Due to frequent hypertensive crises, surgical decompression was recommended, and it was performed with the placement of an isograft between the brainstem and the left vertebral artery. In the first six months after the operation, the patient's blood pressure could be kept in the normal range with significantly reduced doses of antihypertensive medication. Repeat magnetic resonance angiography confirmed the cessation of brainstem compression. After six months, increased blood pressure returned periodically, but to a smaller extent and less frequently. Based on the result of magnetic resonance angiography performed 22 months after surgery, re-operation was considered. According to previous literature data, long-term success is achieved in only one third of patients after surgical decompression. In the majority of patients, however, surgery results in a significant decrease of blood pressure, an increased efficiency of antihypertensive therapy, and a decrease in the frequency of episodes of highly increased blood pressure, so that a significant improvement of the patient's quality of life can be achieved. The case of this patient is an example of the latter scenario.

  8. Inter frame motion estimation and its application to image sequence compression: an introduction

    International Nuclear Information System (INIS)

    Cremy, C.

    1996-01-01

    With the constant development of new communication technologies, such as digital TV and teleconferencing, and the development of image analysis applications, there is a growing volume of data to manage. Compression techniques are required for the transmission and storage of these data, since dealing with the original images would require expensive high-bandwidth communication devices and huge storage media. Image sequence compression can be achieved by means of interframe estimation, which consists of identifying redundant information in zones with little motion between two frames. This paper is an introduction to some motion estimation techniques, such as gradient techniques, pel-recursive and block-matching methods, and to their application to image sequence compression. (Author) 17 refs

  9. Optimisation algorithms for ECG data compression.

    Science.gov (United States)

    Haugland, D; Heber, J G; Husøy, J H

    1997-07-01

    The use of exact optimisation algorithms for compressing digital electrocardiograms (ECGs) is demonstrated. As opposed to traditional time-domain methods, which use heuristics to select a small subset of representative signal samples, the problem of selecting the subset is formulated in rigorous mathematical terms. This approach makes it possible to derive algorithms guaranteeing the smallest possible reconstruction error when a bounded selection of signal samples is interpolated. The proposed model resembles well-known network models and is solved by a cubic dynamic programming algorithm. When applied to standard test problems, the algorithm produces a compressed representation for which the distortion is about one-half of that obtained by traditional time-domain compression techniques at reasonable compression ratios. This illustrates that, in terms of the accuracy of decoded signals, existing time-domain heuristics for ECG compression may be far from what is theoretically achievable. The paper is an attempt to bridge this gap.
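
The subset-selection idea above can be made concrete with a small dynamic program: choose k retained samples (endpoints included) so that the total squared error of piecewise-linear interpolation is minimized. This is an illustrative sketch, not the paper's network-model formulation; the function names are my own, and the roughly cubic cost of the nested loops mirrors the cubic complexity the abstract mentions.

```python
import numpy as np

def interp_cost(x, i, j):
    """Squared error of linearly interpolating x[i+1:j] between x[i] and x[j]."""
    if j - i < 2:
        return 0.0
    t = np.arange(i + 1, j)
    line = x[i] + (x[j] - x[i]) * (t - i) / (j - i)
    return float(np.sum((x[t] - line) ** 2))

def optimal_samples(x, k):
    """Pick k sample indices (always keeping both endpoints) that minimize the
    total squared reconstruction error of piecewise-linear interpolation."""
    n = len(x)
    INF = float("inf")
    # best[j][m]: minimal cost of representing x[0..j] with m retained samples,
    # the last of which is j.  prev[j][m] backtracks the previous retained index.
    best = [[INF] * (k + 1) for _ in range(n)]
    prev = [[-1] * (k + 1) for _ in range(n)]
    best[0][1] = 0.0
    for j in range(1, n):
        for i in range(j):
            c = interp_cost(x, i, j)
            for m in range(2, k + 1):
                if best[i][m - 1] + c < best[j][m]:
                    best[j][m] = best[i][m - 1] + c
                    prev[j][m] = i
    # backtrack the chosen indices from the last sample
    idx, j, m = [], n - 1, k
    while j >= 0:
        idx.append(j)
        j, m = prev[j][m], m - 1
    return idx[::-1], best[n - 1][k]
```

Because the optimum is exact, a spike in the signal is always retained when the budget allows it, which is precisely the guarantee heuristic sample-pickers lack.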

  10. Behavior of quenched and tempered steels under high strain rate compression loading

    International Nuclear Information System (INIS)

    Meyer, L.W.; Seifert, K.; Abdel-Malek, S.

    1997-01-01

    Two quenched and tempered steels were tested under compression loading at strain rates of ε̇ = 2·10² s⁻¹ and ε̇ = 2·10³ s⁻¹. By applying thermal activation theory, the flow stress at very high strain rates of 10⁵ to 10⁶ s⁻¹ is derived from low temperature and high strain rate tests. The dynamic true stress-true strain behaviour shows that stress increases with increasing strain up to a maximum, then decreases. Because of the adiabatic process under dynamic loading, the maximum flow stress occurs at a lower strain as the strain rate is increased. Considering strain rate, strain hardening, strain rate hardening and strain softening, a constitutive equation with different additive terms is successfully used to describe the behaviour of the material under dynamic compression loading. Results are compared with other constitutive equation models. (orig.)

  11. Theory of the Thermal Diffusion of Microgel Particles in Highly Compressed Suspensions

    Science.gov (United States)

    Sokoloff, Jeffrey; Maloney, Craig; Ciamarra, Massimo; Bi, Dapeng

    One amazing property of microgel colloids is the ability of the particles to thermally diffuse, even when they are compressed to a volume well below their swollen state volume, despite the fact that they are surrounded by and pressed against other particles. A glass transition is expected to occur when the colloid is sufficiently compressed for diffusion to cease. It is proposed that the diffusion is due to the ability of the highly compressed particles to change shape with little cost in free energy. It will be shown that most of the free energy required to compress microgel particles is due to osmotic pressure resulting from either counterions or monomers inside of the gel, which depends on the particle's volume. There is still, however, a cost in free energy due to polymer elasticity when particles undergo the distortions necessary for them to move around each other as they diffuse through the compressed colloid, even if it occurs at constant volume. Using a scaling theory based on simple models for the linking of polymers belonging to the microgel particles, we examine the conditions under which the cost in free energy needed for a particle to diffuse is smaller than or comparable to thermal energy, which is a necessary condition for particle diffusion. Based on our scaling theory, we predict that thermally activated diffusion should be possible when the mean number of links along the axis along which a distortion occurs is much larger than N^(1/5), where N is the mean number of monomers in a polymer chain connecting two links in the gel.

  12. Compressive and flexural strength of high strength phase change mortar

    Science.gov (United States)

    Qiao, Qingyao; Fang, Changle

    2018-04-01

    High-strength cement releases a large amount of hydration heat when hydrating, which usually leads to thermal cracking. Phase change materials (PCM) are promising thermal storage materials, and utilizing PCM can help reduce the hydration heat. Research shows that applying a suitable amount of PCM has a significant effect on improving the compressive strength of cement mortar, and can also improve the flexural strength to some extent.

  13. Compressive multi-mode superresolution display

    KAUST Repository

    Heide, Felix

    2014-01-01

    Compressive displays are an emerging technology exploring the co-design of new optical device configurations and compressive computation. Previously, research has shown how to improve the dynamic range of displays and facilitate high-quality light field or glasses-free 3D image synthesis. In this paper, we introduce a new multi-mode compressive display architecture that supports switching between 3D and high dynamic range (HDR) modes as well as a new super-resolution mode. The proposed hardware consists of readily-available components and is driven by a novel splitting algorithm that computes the pixel states from a target high-resolution image. In effect, the display pixels present a compressed representation of the target image that is perceived as a single, high resolution image. © 2014 Optical Society of America.

  14. A high capacity text steganography scheme based on LZW compression and color coding

    Directory of Open Access Journals (Sweden)

    Aruna Malik

    2017-02-01

    In this paper, the capacity and security issues of text steganography are addressed by employing the LZW compression technique and a color-coding-based approach. The proposed technique uses the forward mail platform to hide the secret data. The algorithm first compresses the secret data and then hides the compressed secret data in the email addresses and also in the cover message of the email. The secret data bits are embedded in the message (or cover text) by coloring it using a color coding table. Experimental results show that the proposed method not only produces a high embedding capacity but also reduces computational complexity. Moreover, the security of the proposed method is significantly improved by employing stego keys. The superiority of the proposed method has been experimentally verified by comparison with recently developed existing techniques.
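
The LZW stage of such a scheme is standard and easy to sketch. Below is a textbook LZW codec in Python (not the authors' implementation; the color-coding and email-embedding stages are omitted), showing how the secret text shrinks before being hidden.

```python
def lzw_compress(data: str) -> list[int]:
    """Textbook LZW: emit integer codes; dictionary starts with single chars."""
    dictionary = {chr(i): i for i in range(256)}
    next_code = 256
    w, out = "", []
    for c in data:
        wc = w + c
        if wc in dictionary:
            w = wc
        else:
            out.append(dictionary[w])
            dictionary[wc] = next_code   # learn the new phrase
            next_code += 1
            w = c
    if w:
        out.append(dictionary[w])
    return out

def lzw_decompress(codes: list[int]) -> str:
    """Inverse of lzw_compress; rebuilds the dictionary on the fly."""
    dictionary = {i: chr(i) for i in range(256)}
    next_code = 256
    w = chr(codes[0])
    out = [w]
    for k in codes[1:]:
        # the k == next_code corner case: the phrase is w + its own first char
        entry = dictionary[k] if k in dictionary else w + w[0]
        out.append(entry)
        dictionary[next_code] = w + entry[0]
        next_code += 1
        w = entry
    return "".join(out)
```

On repetitive secret text the code stream is shorter than the input, which is what raises the effective embedding capacity of the stego channel.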

  15. Photon level chemical classification using digital compressive detection

    International Nuclear Information System (INIS)

    Wilcox, David S.; Buzzard, Gregery T.; Lucier, Bradley J.; Wang Ping; Ben-Amotz, Dor

    2012-01-01

    Highlights: ► A new digital compressive detection strategy is developed. ► Chemical classification demonstrated using as few as ∼10 photons. ► Binary filters are optimal when taking few measurements. - Abstract: A key bottleneck to high-speed chemical analysis, including hyperspectral imaging and monitoring of dynamic chemical processes, is the time required to collect and analyze hyperspectral data. Here we describe, both theoretically and experimentally, a means of greatly speeding up the collection of such data using a new digital compressive detection strategy. Our results demonstrate that detecting as few as ∼10 Raman scattered photons (in as little time as ∼30 μs) can be sufficient to positively distinguish chemical species. This is achieved by measuring the Raman scattered light intensity transmitted through programmable binary optical filters designed to minimize the error in the chemical classification (or concentration) variables of interest. The theoretical results are implemented and validated using a digital compressive detection instrument that incorporates a 785 nm diode excitation laser, digital micromirror spatial light modulator, and photon counting photodiode detector. Samples consisting of pairs of liquids with different degrees of spectral overlap (including benzene/acetone and n-heptane/n-octane) are used to illustrate how the accuracy of the present digital compressive detection method depends on the correlation coefficients of the corresponding spectra. Comparisons of measured and predicted chemical classification score plots, as well as linear and non-linear discriminant analyses, demonstrate that this digital compressive detection strategy is Poisson photon noise limited and outperforms total least squares-based compressive detection with analog filters.
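
The classification rule behind such binary-filter measurements can be illustrated in a few lines. In this sketch (my own simplification, not the authors' error-minimizing filter optimization), each of two complementary filters passes the spectral channels where one species out-emits the other, and a sample is classified by comparing the Poisson photon counts transmitted through the pair.

```python
import numpy as np

def binary_filter_pair(spec_a, spec_b):
    """Complementary binary filters: pass channels where species A out-emits B,
    and vice versa (a simple stand-in for the paper's optimized filters)."""
    f_a = spec_a > spec_b
    return f_a, ~f_a

def classify(counts, f_a, f_b):
    """Label a photon-count spectrum by which filter transmits more photons."""
    return "A" if counts[f_a].sum() >= counts[f_b].sum() else "B"

# toy spectra with partial overlap and ~10 detected photons per measurement
spec_a = np.array([4.0, 3.0, 2.0, 0.5, 0.5])
spec_b = np.array([0.5, 0.5, 2.0, 3.0, 4.0])
f_a, f_b = binary_filter_pair(spec_a, spec_b)

# simulate 200 shot-noise-limited measurements of species A
rng = np.random.default_rng(1)
hits = sum(classify(rng.poisson(spec_a), f_a, f_b) == "A" for _ in range(200))
```

Even at these photon counts most measurements classify correctly; the more the two spectra overlap, the more the count distributions overlap and the error rate rises, matching the correlation-coefficient dependence described above.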

  16. Report on the achievements in fiscal 1998 on research and development related to the next generation ultra high speed communication node technology; 1998 nendo jisedai chokosoku tsushin nodo gijutsu ni kakawaru kenkyu kaihatsu seika hokokusho

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2000-03-01

    Research and development has been performed on a new system, COPM, that links signals from a LAN to an ultra-high-speed network of terabit class, and on the element technologies required for it. This paper summarizes the achievements. The transmission system using COPM modulates optical signals with 155-Mbit/s image signals, compresses the modulated optical pulse trains by a factor of up to sixteen so that they are transmitted as 2.5-Gbit/s light pulses, and expands the pulses again at the receiver side, demodulating them into electrical signals to display the images on a TV screen. This demonstration was performed successfully. In addition, to mitigate the problems of raising the compression rate in the optical domain alone, a hybrid compression device was proposed that cascades electrical compression with compression in the optical domain. As a result, it was shown that compression up to about 10 Gbit/s is possible in the electrical domain by using an electrical memory. Research and development has also advanced on the element technologies, and the initial objective has been nearly achieved. (NEDO)

  17. Development of High Speed Imaging and Analysis Techniques Compressible Dynamics Stall

    Science.gov (United States)

    Chandrasekhara, M. S.; Carr, L. W.; Wilder, M. C.; Davis, Sanford S. (Technical Monitor)

    1996-01-01

    parameters on the dynamic stall process. When interferograms can be captured in real time, real-time mapping of a developing unsteady flow such as dynamic stall becomes possible. This has been achieved in the present case through the use of a high-speed drum camera combined with electronic circuitry, which has resulted in a series of interferograms obtained during a single cycle of dynamic stall; images obtained at a rate of 20 kHz will be presented as part of the formal presentation. Interferometry has been available for a long time; however, most of its use has been limited to visualization. The present research has focused on the use of interferograms for quantitative mapping of the flow over oscillating airfoils. Instantaneous pressure distributions can now be obtained semi-automatically, making practical the analysis of the thousands of interferograms produced in this research. A review of the techniques developed as part of this research effort will be presented in the final paper.

  18. A simple, robust and efficient high-order accurate shock-capturing scheme for compressible flows: Towards minimalism

    Science.gov (United States)

    Ohwada, Taku; Shibata, Yuki; Kato, Takuma; Nakamura, Taichi

    2018-06-01

    Developed is a high-order accurate shock-capturing scheme for the compressible Euler/Navier-Stokes equations; the formal accuracy is 5th order in space and 4th order in time. The performance and efficiency of the scheme are validated in various numerical tests. The main ingredients of the scheme are nothing special; they are variants of the standard numerical flux, MUSCL, the usual Lagrange polynomial and the conventional Runge-Kutta method. The scheme can compute a boundary layer accurately with a reasonable resolution and capture a stationary contact discontinuity sharply without inner points. And yet it is endowed with high resistance against shock anomalies (carbuncle phenomenon, post-shock oscillations, etc.). A good balance between high robustness and low dissipation is achieved by blending three types of numerical fluxes according to the physical situation in an intuitively easy-to-understand way. The performance of the scheme is largely comparable to that of WENO5-Rusanov, while its computational cost is 30-40% less than that of the advanced scheme.

  19. VLSI ARCHITECTURE FOR IMAGE COMPRESSION THROUGH ADDER MINIMIZATION TECHNIQUE AT DCT STRUCTURE

    Directory of Open Access Journals (Sweden)

    N.R. Divya

    2014-08-01

    Data compression plays a vital role in multimedia devices, presenting information in a succinct form. The 1D DCT structure initially used for image compression has lower complexity and is area efficient. The 2D DCT also provides reasonable data compression but, from an implementation standpoint, requires more multipliers and adders, leading to larger area and higher power consumption. Taking all of this into account, this paper deals with a VLSI architecture for image compression using a ROM-free DA-based (distributed arithmetic) DCT structure. This technique provides high throughput and is most suitable for real-time implementation. To achieve this, the image matrix is subdivided into odd and even terms, and the multiplication functions are replaced by a shift-and-add approach. Kogge-Stone adder techniques are proposed for obtaining bit-wise image quality, which determines new trade-off levels compared with previous techniques. Overall, the proposed architecture offers reduced memory, low power consumption and high throughput. MATLAB is used for supplying the input pixels and obtaining the output image. Verilog HDL is used for implementing the design, ModelSim for simulation, and Quartus II to synthesize and obtain details about power and area.

  20. Image quality (IQ) guided multispectral image compression

    Science.gov (United States)

    Zheng, Yufeng; Chen, Genshe; Wang, Zhonghai; Blasch, Erik

    2016-05-01

    Image compression is necessary for data transportation, as it saves both transfer time and storage space. In this paper, we focus our discussion on lossy compression. There are many standard image formats and corresponding compression algorithms, for example, JPEG (DCT -- discrete cosine transform), JPEG 2000 (DWT -- discrete wavelet transform), BPG (better portable graphics) and TIFF (LZW -- Lempel-Ziv-Welch). The image quality (IQ) of the decompressed image is measured by numerical metrics such as root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and the structural similarity (SSIM) index. Given an image and a specified IQ, we investigate how to select a compression method and its parameters to achieve the expected compression. Our scenario consists of 3 steps. The first step is to compress a set of images of interest by varying the parameters and to compute their IQs for each compression method. The second step is to create several regression models per compression method after analyzing the IQ measurements versus compression parameters from a number of compressed images. The third step is to compress the given image at the specified IQ using the selected compression method (JPEG, JPEG2000, BPG, or TIFF) according to the regression models. The IQ may be specified by a compression ratio (e.g., 100), in which case we select the compression method with the highest IQ (SSIM or PSNR); or it may be specified by an IQ metric (e.g., SSIM = 0.8, or PSNR = 50), in which case we select the compression method with the highest compression ratio. Our experiments on thermal (long-wave infrared) grayscale images showed very promising results.
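
The three-step scenario can be prototyped with a stand-in codec. In this sketch (my own toy: uniform quantization plays the role of JPEG/BPG, and numpy polynomial fitting plays the role of the regression models), step 1 sweeps a compression parameter and records PSNR, step 2 fits a regression model of IQ versus parameter, and step 3 inverts the model to pick the parameter for a target IQ.

```python
import numpy as np

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((a - b) ** 2)
    return float("inf") if mse == 0 else float(10 * np.log10(peak ** 2 / mse))

def quantize(img, step):
    # stand-in lossy "codec": a coarser step means stronger compression, lower IQ
    return np.round(img / step) * step

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64)).astype(float)

# step 1: sweep the compression parameter and measure IQ
steps = np.arange(2.0, 33.0, 2.0)
scores = np.array([psnr(img, quantize(img, s)) for s in steps])

# step 2: regress IQ against the parameter
model = np.poly1d(np.polyfit(steps, scores, deg=2))

# step 3: invert the model for a target IQ (PSNR = 40 dB here)
target = 40.0
candidates = np.linspace(steps.min(), steps.max(), 500)
best_step = float(candidates[np.argmin(np.abs(model(candidates) - target))])
```

Repeating the sweep per codec and keeping, for each target IQ, the codec whose regressed curve promises the best ratio reproduces the selection logic described above.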

  1. Compression for radiological images

    Science.gov (United States)

    Wilson, Dennis L.

    1992-07-01

    The viewing of radiological images has peculiarities that must be taken into account in the design of a compression technique. The images may be manipulated on a workstation to change the contrast, to change the center of the brightness levels that are viewed, and even to invert the images. Because of the possible consequences of losing information in a medical application, bit-preserving compression is used for the images used for diagnosis. For archiving, however, the images may be compressed to one-tenth of their original size. A compression technique based on the discrete cosine transform (DCT) takes the viewing factors into account by compressing the changes in the local brightness levels. The compression technique is a variation of the CCITT JPEG compression that suppresses the blocking of the DCT except in areas of very high contrast.

  2. Radiologic image compression -- A review

    International Nuclear Information System (INIS)

    Wong, S.; Huang, H.K.; Zaremba, L.; Gooden, D.

    1995-01-01

    The objective of radiologic image compression is to reduce the data volume and to achieve a low bit rate in the digital representation of radiologic images without perceived loss of image quality. However, the demand for transmission bandwidth and storage space in the digital radiology environment, especially in picture archiving and communication systems (PACS) and teleradiology, and the proliferating use of various imaging modalities, such as magnetic resonance imaging, computed tomography, ultrasonography, nuclear medicine, computed radiography, and digital subtraction angiography, continue to outstrip the capabilities of existing technologies. The availability of lossy coding techniques for clinical diagnoses further implicates many complex legal and regulatory issues. This paper reviews the recent progress of lossless and lossy radiologic image compression and presents the legal challenges of using lossy compression of medical records. To do so, the authors first describe the fundamental concepts of radiologic imaging and digitization. Then, the authors examine current compression technology in the field of medical imaging and discuss important regulatory policies and legal questions facing the use of compression in this field. The authors conclude with a summary of future challenges and research directions. 170 refs

  3. Effect of Fiber Orientation on Dynamic Compressive Properties of an Ultra-High Performance Concrete

    Science.gov (United States)

    2017-08-01

    The available text consists of fragments describing a modern split Hopkinson pressure bar (SHPB), in which a compressed-gas cannon launches the striker to generate a transient stress wave (Chen and Song 2011), together with partial citations on the compressive behaviour of concrete at high strain rates and on the post-fracture behaviour of SFRC.

  4. Combined Sparsifying Transforms for Compressive Image Fusion

    Directory of Open Access Journals (Sweden)

    ZHAO, L.

    2013-11-01

    In this paper, we present a new compressive image fusion method based on combined sparsifying transforms. First, the framework of compressive image fusion is introduced briefly. Then, combined sparsifying transforms are presented to enhance the sparsity of images. Finally, a reconstruction algorithm based on the nonlinear conjugate gradient method is presented to obtain the fused image. The simulations demonstrate that using the combined sparsifying transforms achieves better results, in terms of both the subjective visual effect and objective evaluation indexes, than using only a single sparsifying transform for compressive image fusion.

  5. Optimization of Error-Bounded Lossy Compression for Hard-to-Compress HPC Data

    Energy Technology Data Exchange (ETDEWEB)

    Di, Sheng; Cappello, Franck

    2018-01-01

    Since today’s scientific applications are producing vast amounts of data, compressing them before storage/transmission is critical. Results of existing compressors show two types of HPC data sets: highly compressible and hard to compress. In this work, we carefully design and optimize the error-bounded lossy compression for hard-to-compress scientific data. We propose an optimized algorithm that can adaptively partition the HPC data into best-fit consecutive segments, each having mutually close data values, such that the compression condition can be optimized. Another significant contribution is the optimization of the shifting offset, such that the XOR-leading-zero length between two consecutive unpredictable data points can be maximized. We finally devise an adaptive method to select the best-fit compressor at runtime for maximizing the compression factor. We evaluate our solution using 13 benchmarks based on real-world scientific problems, and we compare it with 9 other state-of-the-art compressors. Experiments show that our compressor can always guarantee the compression errors within the user-specified error bounds. Most importantly, our optimization can improve the compression factor effectively, by up to 49% for hard-to-compress data sets with similar compression/decompression time cost.
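
The XOR-leading-zero idea can be illustrated directly: two nearby doubles share their leading IEEE-754 bits, and an additive offset that pushes consecutive values into the same binade lengthens that shared prefix. The sketch below (my own brute-force illustration, not the paper's optimized search) scores candidate offsets by the total leading-zero length of consecutive XORs.

```python
import struct

def xor_leading_zeros(a: float, b: float) -> int:
    """Number of identical leading bits in the IEEE-754 representations,
    i.e. the leading-zero count of a XOR b over 64 bits."""
    ia = struct.unpack(">Q", struct.pack(">d", a))[0]
    ib = struct.unpack(">Q", struct.pack(">d", b))[0]
    x = ia ^ ib
    return 64 if x == 0 else 64 - x.bit_length()

def best_offset(data, offsets):
    """Pick the additive offset that maximizes the total leading-zero overlap
    between consecutive values (a brute-force stand-in for the paper's
    shifting-offset optimization)."""
    def score(off):
        shifted = [v + off for v in data]
        return sum(xor_leading_zeros(u, v) for u, v in zip(shifted, shifted[1:]))
    return max(offsets, key=score)
```

Longer runs of leading zeros in the XOR stream are exactly what a downstream entropy coder exploits, which is why maximizing them raises the compression factor.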

  6. What factors determine academic achievement in high achieving undergraduate medical students? A qualitative study.

    Science.gov (United States)

    Abdulghani, Hamza M; Al-Drees, Abdulmajeed A; Khalil, Mahmood S; Ahmad, Farah; Ponnamperuma, Gominda G; Amin, Zubair

    2014-04-01

    Medical students' academic achievement is affected by many factors such as motivational beliefs and emotions. Although students with high intellectual capacity are selected to study medicine, their academic performance varies widely. The aim of this study is to explore the high achieving students' perceptions of factors contributing to academic achievement. Focus group discussions (FGD) were carried out with 10 male and 9 female high achieving (scores more than 85% in all tests) students, from the second, third, fourth and fifth academic years. During the FGDs, the students were encouraged to reflect on their learning strategies and activities. The discussion was audio-recorded, transcribed and analysed qualitatively. Factors influencing high academic achievement include: attendance to lectures, early revision, prioritization of learning needs, deep learning, learning in small groups, mind mapping, learning in skills lab, learning with patients, learning from mistakes, time management, and family support. Internal motivation and expected examination results are important drivers of high academic performance. Management of non-academic issues like sleep deprivation, homesickness, language barriers, and stress is also important for academic success. Addressing these factors, which might be unique for a given student community, in a systematic manner would be helpful to improve students' performance.

  7. A New Algorithm for the On-Board Compression of Hyperspectral Images

    Directory of Open Access Journals (Sweden)

    Raúl Guerra

    2018-03-01

    Hyperspectral sensors are able to provide information that is useful for many different applications. However, the huge amounts of data collected by these sensors are not free of drawbacks, especially in remote sensing environments where the hyperspectral images are collected on board satellites and need to be transferred to the earth's surface. In this situation, an efficient compression of the hyperspectral images is mandatory in order to save bandwidth and storage space. Lossless compression algorithms have traditionally been preferred, in order to preserve all the information present in the hyperspectral cube for scientific purposes, despite their limited compression ratio. Nevertheless, the increment in the data rate of the new-generation sensors is making the need for higher compression ratios more critical, which makes it necessary to use lossy compression techniques. A new transform-based lossy compression algorithm, namely the Lossy Compression Algorithm for Hyperspectral Image Systems (HyperLCA), is proposed in this manuscript. This compressor has been developed to achieve high compression ratios with a good compression performance at a reasonable computational burden. An extensive set of experiments has been performed in order to evaluate the goodness of the proposed HyperLCA compressor using different calibrated and uncalibrated hyperspectral images from the AVIRIS and Hyperion sensors. The results provided by the proposed HyperLCA compressor have been evaluated and compared against those produced by the most relevant state-of-the-art compression solutions. The theoretical and experimental evidence indicates that the proposed algorithm represents an excellent option for lossy compression of hyperspectral images, especially for applications where the available computational resources are limited, such as on-board scenarios.

  8. Efficient Bayesian Compressed Sensing-based Channel Estimation Techniques for Massive MIMO-OFDM Systems

    OpenAIRE

    Al-Salihi, Hayder Qahtan Kshash; Nakhai, Mohammad Reza

    2017-01-01

    Efficient and highly accurate channel state information (CSI) at the base station (BS) is essential to achieve the potential benefits of massive multiple input multiple output (MIMO) systems. However, the accuracy attainable in practice is limited by the problem of pilot contamination. It has recently been shown that compressed sensing (CS) techniques can address the pilot contamination problem. However, CS-based channel estimation requires prior knowledge of channel sp...

  9. Medical image compression based on vector quantization with variable block sizes in wavelet domain.

    Science.gov (United States)

    Jiang, Huiyan; Ma, Zhiyuan; Hu, Yang; Yang, Benqiang; Zhang, Libo

    2012-01-01

    An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnostic-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size was implemented. In the novel vector quantization method, the local fractal dimension (LFD) was used to analyze the local complexity of each wavelet-coefficient subband. Then an optimal quadtree method was employed to partition each wavelet-coefficient subband into sub-blocks of several sizes. After that, a modified K-means approach based on an energy function was used in the codebook training phase. Finally, vector quantization coding was implemented for the different types of sub-blocks. In order to verify the effectiveness of the proposed algorithm, JPEG, JPEG2000, and a fractal coding approach were chosen as contrast algorithms. Experimental results show that the proposed method can improve the compression performance and can achieve a balance between the compression ratio and the image visual quality.
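
The codebook-training step can be sketched with plain k-means over flattened blocks. This toy (my own; it omits the wavelet transform, the LFD analysis, the quadtree partition and the authors' energy-based modification) shows the core of vector quantization: cluster the blocks, then store only a codeword index per block.

```python
import numpy as np

def train_codebook(blocks, k, iters=20, seed=0):
    """Plain k-means over flattened image blocks: returns a k-codeword
    codebook and the index assigned to each block (what gets stored)."""
    rng = np.random.default_rng(seed)
    codebook = blocks[rng.choice(len(blocks), k, replace=False)].astype(float)
    labels = np.zeros(len(blocks), dtype=int)
    for _ in range(iters):
        # assign each block to its nearest codeword (squared distance)
        d = ((blocks[:, None, :] - codebook[None]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            members = blocks[labels == j]
            if len(members):                    # keep empty cells unchanged
                codebook[j] = members.mean(0)
    return codebook, labels

# four 2-pixel "blocks" drawn from two obvious clusters
blocks = np.array([[0.0, 0.0], [1.0, 1.0], [10.0, 10.0], [11.0, 11.0]])
codebook, labels = train_codebook(blocks, k=2)
```

The compression comes from replacing each block by its index: with k codewords, a block of n pixels costs log2(k) bits instead of n pixel values, at the price of the quantization error that the variable block sizes above are designed to control.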

  10. Investigation of turbulence models with compressibility corrections for hypersonic boundary flows

    Directory of Open Access Journals (Sweden)

    Han Tang

    2015-12-01

    Full Text Available The applications of pressure work, pressure-dilatation, and dilatation-dissipation (Sarkar, Zeman, and Wilcox models) to hypersonic boundary flows are investigated. Flat plate boundary layer flows of Mach number 5–11 and shock wave/boundary layer interactions of compression corners are simulated numerically. For the flat plate boundary layer flows, the original turbulence models overestimate the heat flux at Mach numbers up to 10, and compressibility corrections applied to the turbulence models lead to a decrease in friction coefficients and heating rates. The pressure work and pressure-dilatation models yield the better results. Among the three dilatation-dissipation models, the Sarkar and Wilcox corrections present larger deviations from the experimental measurements, while the Zeman correction achieves acceptable results. For hypersonic compression corner flows, due to the evident increase of turbulence Mach number in the separation zone, compressibility corrections make the separation areas larger and thus cannot improve the accuracy of the calculated results; compressibility corrections should not take effect in the separation zone. The density-corrected model by Catris and Aupoix is suitable for shock wave/boundary layer interaction flows: it improves the simulation accuracy of the peak heating while having little influence on the separation zone.

  11. Accelerated high-frame-rate mouse heart cine-MRI using compressed sensing reconstruction.

    Science.gov (United States)

    Motaal, Abdallah G; Coolen, Bram F; Abdurrachim, Desiree; Castro, Rui M; Prompers, Jeanine J; Florack, Luc M J; Nicolay, Klaas; Strijkers, Gustav J

    2013-04-01

    We introduce a new protocol to obtain very high-frame-rate cinematographic (Cine) MRI movies of the beating mouse heart within a reasonable measurement time. The method is based on a self-gated accelerated fast low-angle shot (FLASH) acquisition and compressed sensing reconstruction. Key to our approach is that we exploit the stochastic nature of the retrospective triggering acquisition scheme to produce an undersampled and random k-t space filling that allows for compressed sensing reconstruction and acceleration. As a standard, a self-gated FLASH sequence with a total acquisition time of 10 min was used to produce single-slice Cine movies of seven mouse hearts with 90 frames per cardiac cycle. Two times (2×) and three times (3×) k-t space undersampled Cine movies were produced from 2.5- and 1.5-min data acquisitions, respectively. The accelerated 90-frame Cine movies of mouse hearts were successfully reconstructed with a compressed sensing algorithm. The movies had high image quality and the undersampling artifacts were effectively removed. Left ventricular functional parameters, i.e. end-systolic and end-diastolic lumen surface areas and early-to-late filling rate ratio as a parameter to evaluate diastolic function, derived from the standard and accelerated Cine movies, were nearly identical. Copyright © 2012 John Wiley & Sons, Ltd.
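
    The acceleration described above rests on the retrospective triggering producing a random, incoherent filling of k-t space, so that only a fraction of phase-encoding lines is measured per frame. A minimal sketch of such an undersampling pattern is below; the matrix size and the uniform-random line selection are illustrative assumptions, not the actual acquisition scheme.

```python
import numpy as np

def random_kt_mask(n_pe, n_frames, acceleration, seed=0):
    """Random k-t sampling mask: for each cine frame, keep a random
    subset of phase-encoding lines, mimicking the incoherent k-t space
    filling that retrospective self-gating produces."""
    rng = np.random.default_rng(seed)
    keep = int(round(n_pe / acceleration))
    mask = np.zeros((n_frames, n_pe), dtype=bool)
    for t in range(n_frames):
        mask[t, rng.choice(n_pe, keep, replace=False)] = True
    return mask

# 3x undersampling of 128 phase-encoding lines over 90 cardiac frames.
mask = random_kt_mask(n_pe=128, n_frames=90, acceleration=3)
measured_fraction = mask.mean()   # fraction of k-t space actually sampled
```

    A compressed sensing solver then reconstructs each frame from its subset of lines, exploiting sparsity across the temporal dimension.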

  12. Nonlinear vibration analysis of the high-efficiency compressive-mode piezoelectric energy harvester

    Science.gov (United States)

    Yang, Zhengbao; Zu, Jean

    2015-04-01

    Power sources are critical to achieving independent and autonomous operation of electronic mobile devices. Vibration-based energy harvesting has been extensively studied recently and is recognized as a promising technology for realizing an inexhaustible power supply for small-scale electronics. Among various approaches, piezoelectric energy harvesting has gained the most attention due to its high conversion efficiency and simple configurations. However, most piezoelectric energy harvesters (PEHs) to date are based on bending-beam structures and can only generate limited power within a narrow working bandwidth. The insufficient electric output has greatly impeded their practical applications. In this paper, we present an innovative lead zirconate titanate (PZT) energy harvester, named the high-efficiency compressive-mode piezoelectric energy harvester (HC-PEH), to enhance the performance of energy harvesters. A theoretical model was developed analytically and solved numerically to study the nonlinear characteristics of the HC-PEH. The results estimated by the developed model agree well with the experimental data from the fabricated prototype. The HC-PEH shows strong nonlinear responses, a favorable working bandwidth and superior power output. Under a weak excitation of 0.3 g (g = 9.8 m/s2), a maximum power output of 30 mW is generated at 22 Hz, which is about ten times better than current energy harvesters. The HC-PEH demonstrates the capability of generating enough power for most wireless sensors.

  13. Compressed sensing & sparse filtering

    CERN Document Server

    Carmi, Avishy Y; Godsill, Simon J

    2013-01-01

    This book is aimed at presenting concepts, methods and algorithms able to cope with undersampled and limited data. One such trend that recently gained popularity and to some extent revolutionised signal processing is compressed sensing. Compressed sensing builds upon the observation that many signals in nature are nearly sparse (or compressible, as they are normally referred to) in some domain, and consequently they can be reconstructed to within high accuracy from far fewer observations than traditionally held to be necessary. Apart from compressed sensing this book contains other related app

  14. New experimental platform to study high density laser-compressed matter

    International Nuclear Information System (INIS)

    Gauthier, M.; Fletcher, L. B.; Galtier, E.; Gamboa, E. J.; Granados, E.; Hastings, J. B.; Heimann, P.; Lee, H. J.; Nagler, B.; Schropp, A.; Falcone, R.; Glenzer, S. H.; Ravasio, A.; Gleason, A.; Döppner, T.; LePape, S.; Ma, T.; Pak, A.; MacDonald, M. J.; Ali, S.

    2014-01-01

    We have developed a new experimental platform at the Linac Coherent Light Source (LCLS) which combines simultaneous angularly and spectrally resolved x-ray scattering measurements. This technique offers new insights into the structural and thermodynamic properties of warm dense matter. The < 50 fs temporal duration of the x-ray pulse provides near-instantaneous snapshots of the dynamics of the compression. We present a proof-of-principle experiment for this platform to characterize a shock-compressed plastic foil. We observe the disappearance of the plastic semi-crystal structure and the formation of a compressed-liquid ion-ion correlation peak. The plasma parameters of shock-compressed plastic can be measured as well, but this requires averaging over a few tens of shots.

  15. Perspectives of High-Achieving Women on Teaching

    Science.gov (United States)

    Snodgrass, Helen

    2010-01-01

    High-achieving women are significantly less likely to enter the teaching profession than they were just 40 years ago. Why? While the social and economic reasons for this decline have been well documented in the literature, what is lacking is a discussion with high-achieving women, as they make their first career decisions, about their perceptions…

  16. Extraction Compression and Acceleration of High Line Charge Density Ion Beams

    CERN Document Server

    Henestroza, Enrique; Grote, D P; Peters, Craig; Yu, Simon

    2005-01-01

    HEDP applications require high line charge density ion beams. An efficient method to obtain this type of beam is to extract a long-pulse, high-current beam from a gun at high energy, and let the beam pass through a decelerating field to compress it. The low energy beam bunch is loaded into a solenoid and matched to a Brillouin flow. The Brillouin equilibrium is independent of the energy if the relationship between the beam size (a), solenoid magnetic field strength (B) and line charge density is such that (Ba)2

  17. Fracto-mechanoluminescent light emission of EuD4TEA-PDMS composites subjected to high strain-rate compressive loading

    Science.gov (United States)

    Ryu, Donghyeon; Castaño, Nicolas; Bhakta, Raj; Kimberley, Jamie

    2017-08-01

    The objective of this study is to understand light emission characteristics of fracto-mechanoluminescent (FML) europium tetrakis(dibenzoylmethide)-triethylammonium (EuD4TEA) crystals under high strain-rate compressive loading. As a sensing material that can play a pivotal role for the self-powered impact sensor technology, it is important to understand transformative light emission characteristics of the FML EuD4TEA crystals under high strain-rate compressive loading. First, EuD4TEA crystals were synthesized and embedded into polydimethylsiloxane (PDMS) elastomer to fabricate EuD4TEA-PDMS composite test specimens. Second, the prepared EuD4TEA-PDMS composites were tested using the modified Kolsky bar setup equipped with a high-speed camera. Third, FML light emission was captured to yield 12 bit grayscale video footage, which was processed to quantify the FML light emission. Finally, quantitative parameters were generated by taking into account pixel values and population of pixels of the 12 bit grayscale images to represent FML light intensity. The FML light intensity was correlated with high strain-rate compressive strain and strain rate to understand the FML light emission characteristics under high strain-rate compressive loading that can result from impact occurrences.

  18. A comparison of interface pressures of three compression bandage systems.

    Science.gov (United States)

    Hanna, Richard; Bohbot, Serge; Connolly, Nicki

    To measure and compare the interface pressures achieved with two compression bandage systems - a four-layer system (4LB) and a two-layer short-stretch system (SSB) - with a new two-layer system (2LB), which uses an etalonnage (performance indicator) to help achieve the correct therapeutic pressure for healing venous leg ulcers - recommended as 40 mmHg. 32 nurses with experience of using compression bandages applied each of the three systems to a healthy female volunteer in a sitting position. The interface pressures and time taken to apply the systems were measured. A questionnaire regarding the concept of the new system and its application in comparison to the existing two systems was then completed by the nurses. The interface pressures achieved show that many nurses applied very high pressures with the 4LB (25% achieving pressures > 50 mmHg), whereas the majority of the nurses (75%) achieved a pressure of 30-50 mmHg with the new 2LB. The SSB took the least time to apply (mean: 1 minute 50 seconds), with the 4LB the slowest (mean: 3 minutes 46 seconds). A mean time of 2 minutes 35 seconds was taken to apply the 2LB. Over 63% of the nurses felt the 2LB was very easy to apply. These results suggest that the 2LB achieves the therapeutic pressure required for the management of venous leg ulcers, is easy to apply and may provide a suitable alternative to other multi-layer bandage systems.

  19. A Hybrid Data Compression Scheme for Power Reduction in Wireless Sensors for IoT.

    Science.gov (United States)

    Deepu, Chacko John; Heng, Chun-Huat; Lian, Yong

    2017-04-01

    This paper presents a novel data compression and transmission scheme for power reduction in Internet-of-Things (IoT) enabled wireless sensors. In the proposed scheme, data is compressed with both lossy and lossless techniques, so as to enable a hybrid transmission mode, support adaptive data rate selection and save power in wireless transmission. Applying the method to electrocardiogram (ECG) data, the data is first compressed using a lossy compression technique with a high compression ratio (CR). The residual error between the original data and the decompressed lossy data is preserved using entropy coding, enabling a lossless restoration of the original data when required. Average CRs of 2.1× and 7.8× were achieved for lossless and lossy compression respectively with the MIT/BIH database. The power reduction is demonstrated using a Bluetooth transceiver: transmission power is reduced to 18% of the uncompressed baseline for lossy and 53% for lossless transmission. Options for hybrid transmission mode, adaptive rate selection and system-level power reduction make the proposed scheme attractive for IoT wireless sensors in healthcare applications.
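
    The essence of the scheme above is that the lossy stream plus a preserved residual together restore the original exactly. A minimal sketch under stated assumptions: dropping the low bits of integer ECG samples stands in for the paper's lossy codec, and the residual is kept raw here rather than entropy-coded.

```python
import numpy as np

def hybrid_encode(samples, shift=3):
    """Split integer samples into a coarse lossy part (high CR) and a
    residual that allows bit-exact restoration on demand."""
    lossy = samples >> shift                  # sent in low-power lossy mode
    residual = samples - (lossy << shift)     # sent later if lossless needed
    return lossy, residual

def hybrid_decode_lossy(lossy, shift=3):
    return lossy << shift                     # approximate waveform

def hybrid_decode_lossless(lossy, residual, shift=3):
    return (lossy << shift) + residual        # bit-exact original samples
```

    The receiver can thus switch modes per the channel budget: decode the lossy stream immediately, and request the residual only when diagnostic-grade fidelity is required.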

  20. Low-Temperature Combustion of High Octane Fuels in a Gasoline Compression Ignition Engine

    Directory of Open Access Journals (Sweden)

    Khanh Duc Cung

    2017-12-01

    Full Text Available Gasoline compression ignition (GCI) has been shown to be one of the advanced combustion concepts that could potentially provide a pathway to cleaner and more efficient combustion engines. Fuel and air in GCI are not fully premixed, in contrast to homogeneous charge compression ignition (HCCI), which is a completely kinetically controlled combustion system. Therefore, the combustion phasing can be controlled by the time of injection, usually a post-injection in a multiple-injection scheme, to mitigate combustion noise. Gasoline usually has a longer ignition delay than diesel. The autoignition quality of gasoline can be indicated by its research octane number (RON). Fuels with high octane tend to have more resistance to autoignition, hence more time for fuel-air mixing. In this study, three fuels, namely, aromatic, alkylate, and E30, with a similar RON value of 98 but different hydrocarbon compositions, were tested in a multicylinder engine under GCI combustion mode. The effects of exhaust gas recirculation (EGR), start of injection, and boost were investigated to study the sensitivity of dilution, local stratification, and reactivity of the charge, respectively, for each fuel. Combustion phasing (location of 50% of fuel mass burned) was kept constant during the experiments. This provides similar thermodynamic conditions to study the effect of fuels on emissions. Emission characteristics at different levels of EGR and lambda were revealed for all fuels, with E30 having the lowest filter smoke number; it was also the most sensitive to the change in dilution. Reasonably low combustion noise (<90 dB) and stable combustion (coefficient of variance of indicated mean effective pressure <3%) were maintained during the experiments. The second part of this article contains visualization of the combustion process obtained from endoscope imaging for each fuel at selected conditions. Soot radiation signals from GCI combustion were strong during late injection and also more intense

  1. Evaluation of mammogram compression efficiency

    International Nuclear Information System (INIS)

    Przelaskowski, A.; Surowski, P.; Kukula, A.

    2005-01-01

    quality of 0.6 bpp and 0.1 bpp reconstructions was decreased. The compression performance of the most effective reversible coders is rather unsatisfactory. The subjective rating with the diagnostic criteria of image quality was more sensitive to distortions caused by lossy compression than the pathology detection test was. The observers accepted a ratio of 14:1 for lossy wavelet compression of the test mammograms. This is significantly higher than the mean ratio of 2:1 achieved with lossless methods. (author)

  2. High academic achievement in psychotic students.

    Science.gov (United States)

    Defries, Z; Grothe, L

    1978-02-01

    The authors studied 21 schizophrenic and borderline college students who achieved B+ or higher grade averages and underwent psychotherapy while in college. High academic achievement was found to provide relief from feelings of worthlessness and ineffectuality resulting from poor relationships with parents, siblings, and peers. Psychotherapy and the permissive yet supportive college atmosphere reinforced the students' self-esteem.

  3. Compression of Infrared images

    DEFF Research Database (Denmark)

    Mantel, Claire; Forchhammer, Søren

    2017-01-01

    best for bits-per-pixel rates below 1.4 bpp, while HEVC obtains best performance in the range 1.4 to 6.5 bpp. The compression performance is also evaluated based on maximum errors. These results also show that HEVC can achieve a precision of 1°C with an average of 1.3 bpp....

  4. [Effects of real-time audiovisual feedback on secondary-school students' performance of chest compressions].

    Science.gov (United States)

    Abelairas-Gómez, Cristian; Rodríguez-Núñez, Antonio; Vilas-Pintos, Elisardo; Prieto Saborit, José Antonio; Barcala-Furelos, Roberto

    2015-06-01

    To describe the quality of chest compressions performed by secondary-school students trained with a real-time audiovisual feedback system. The learners were 167 students aged 12 to 15 years who had no prior experience with cardiopulmonary resuscitation (CPR). They received an hour of instruction in CPR theory and practice and then took a 2-minute test, performing hands-only CPR on a child mannequin (Prestan Professional Child Manikin). Lights built into the mannequin gave learners feedback about how many compressions they had achieved, and clicking sounds told them when compressions were deep enough. All the learners were able to maintain a steady enough rhythm of compressions and reached at least 80% of the targeted compression depth. Fewer correct compressions were done in the second minute than in the first (P=.016). Real-time audiovisual feedback helps schoolchildren aged 12 to 15 years to achieve quality chest compressions on a mannequin.

  5. On-line compression of symmetrical multidimensional γ-ray spectra using adaptive orthogonal transforms

    International Nuclear Information System (INIS)

    Morhac, M.; Matousek, V.

    2008-01-01

    An efficient algorithm to compress multidimensional symmetrical γ-ray events is presented. The reduction of data volume can be achieved thanks to both the symmetry of the γ-ray spectra and the compression capabilities of the employed adaptive orthogonal transform. Illustrative examples argue in favor of the proposed compression algorithm. The algorithm was implemented for on-line compression of events. Acquired compressed data can later be processed in an interactive way.

  6. Size dependent compressibility of nano-ceria: Minimum near 33 nm

    International Nuclear Information System (INIS)

    Rodenbough, Philip P.; Song, Junhua; Chan, Siu-Wai; Walker, David; Clark, Simon M.; Kalkan, Bora

    2015-01-01

    We report the crystallite-size-dependency of the compressibility of nanoceria under hydrostatic pressure for a wide variety of crystallite diameters and comment on the size-based trends indicating an extremum near 33 nm. Uniform nano-crystals of ceria were synthesized by basic precipitation from cerium (III) nitrate. Size-control was achieved by adjusting mixing time and, for larger particles, a subsequent annealing temperature. The nano-crystals were characterized by transmission electron microscopy and standard ambient x-ray diffraction (XRD). Compressibility, or its reciprocal, bulk modulus, was measured with high-pressure XRD at LBL-ALS, using helium, neon, or argon as the pressure-transmitting medium for all samples. As crystallite size decreased below 100 nm, the bulk modulus first increased, and then decreased, achieving a maximum near a crystallite diameter of 33 nm. We review earlier work and examine several possible explanations for the peaking of bulk modulus at an intermediate crystallite size
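
    The quantity measured above, compressibility (or its reciprocal, the bulk modulus), follows from the slope of the pressure-volume curve. As a numerical sketch, the bulk modulus B = -V dP/dV can be estimated from tabulated P-V data by finite differences; this is a simple stand-in for the equation-of-state fits normally used with high-pressure XRD data, not the authors' analysis.

```python
import numpy as np

def bulk_modulus(P, V):
    """Estimate the bulk modulus B = -V dP/dV along a measured P-V
    compression curve using central finite differences."""
    return -V * np.gradient(P, V)

# Synthetic check: for P = c / V the exact result is B = c / V = P.
V = np.linspace(1.0, 2.0, 50)
P = 3.0 / V
B = bulk_modulus(P, V)
```

    With real diffraction data, V comes from the refined lattice parameter at each pressure step, and a stiffer sample (higher B) shows a smaller fractional volume change per unit pressure.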

  7. Size dependent compressibility of nano-ceria: Minimum near 33 nm

    Energy Technology Data Exchange (ETDEWEB)

    Rodenbough, Philip P. [Department of Applied Physics and Applied Mathematics, Materials Science and Engineering Program, Columbia University, New York, New York 10027 (United States); Chemistry Department, Columbia University, New York, New York 10027 (United States); Song, Junhua; Chan, Siu-Wai, E-mail: sc174@columbia.edu [Department of Applied Physics and Applied Mathematics, Materials Science and Engineering Program, Columbia University, New York, New York 10027 (United States); Walker, David [Department of Earth and Environmental Sciences, Lamont-Doherty Earth Observatory, Columbia University, Palisades, New York 10964 (United States); Clark, Simon M. [ARC Center of Excellence for Core to Crust Fluid Systems and Department of Earth and Planetary Sciences, Macquarie University, Sydney, New South Wales 2019, Australia and The Bragg Institute, Australian Nuclear Science and Technology Organisation, Kirrawee DC, New South Wales 2232 (Australia); Kalkan, Bora [Department of Physics Engineering, Hacettepe University, 06800 Beytepe, Ankara (Turkey)

    2015-04-20

    We report the crystallite-size-dependency of the compressibility of nanoceria under hydrostatic pressure for a wide variety of crystallite diameters and comment on the size-based trends indicating an extremum near 33 nm. Uniform nano-crystals of ceria were synthesized by basic precipitation from cerium (III) nitrate. Size-control was achieved by adjusting mixing time and, for larger particles, a subsequent annealing temperature. The nano-crystals were characterized by transmission electron microscopy and standard ambient x-ray diffraction (XRD). Compressibility, or its reciprocal, bulk modulus, was measured with high-pressure XRD at LBL-ALS, using helium, neon, or argon as the pressure-transmitting medium for all samples. As crystallite size decreased below 100 nm, the bulk modulus first increased, and then decreased, achieving a maximum near a crystallite diameter of 33 nm. We review earlier work and examine several possible explanations for the peaking of bulk modulus at an intermediate crystallite size.

  8. Coherent structures in ablatively compressed ICF targets and Rayleigh-Taylor instability

    International Nuclear Information System (INIS)

    Pant, H.C.; Desai, T.

    1996-01-01

    One of the major issues in laser induced inertial confinement fusion (ICF) is a stable ablative compression of spherical fusion pellets. The main impediment to achieving this objective is the Rayleigh-Taylor instability at the pellet's ablation front. Under sufficiently high acceleration this instability can grow out of noise. However, it can also arise either due to non-uniform laser intensity distribution over the pellet surface or due to pellet wall areal mass irregularity. Coherent structures in the dense target behind the ablation front can be effectively utilised for stabilisation of the Rayleigh-Taylor phenomenon. Such coherent structures in the form of a superlattice can be created by doping the pellet pusher with high atomic number (Z) micro-particles. A compressed, cool pusher under laser irradiation behaves like a strongly correlated nonideal plasma when compressed to sufficiently high density such that the nonideality parameter exceeds unity. Moreover, the nonideality parameter for high-Z micro-inclusions may exceed a critical value of 180, and as a consequence they remain in the form of intact clusters, maintaining the superlattice during ablative acceleration. This micro-heterogeneity and its superlattice play an important role in the stabilization of the Rayleigh-Taylor instability through a variety of mechanisms. (orig.)

  9. Application of High-Resolution Ultrasonic Spectroscopy for analysis of complex formulations. Compressibility of solutes and solute particles in liquid mixtures

    International Nuclear Information System (INIS)

    Buckin, V

    2012-01-01

    The paper describes key aspects of interpretation of compressibility of solutes in liquid mixtures obtained through high-resolution measurements of ultrasonic parameters. It examines the fundamental relationships between the characteristics of solutes and the contributions of solutes to compressibility of liquid mixtures expressed through apparent adiabatic compressibility of solutes, and adiabatic compressibility of solute particles. In addition, it analyses relationships between the adiabatic compressibility of solutes and the measured ultrasonic characteristics of mixtures. Especial attention is given to the effects of solvents on the measured adiabatic compressibility of solutes and on concentration increment of ultrasonic velocity of solutes in mixtures.

  10. Full-frame compression of discrete wavelet and cosine transforms

    Science.gov (United States)

    Lo, Shih-Chung B.; Li, Huai; Krasner, Brian; Freedman, Matthew T.; Mun, Seong K.

    1995-04-01

    At the foreground of computerized radiology and the filmless hospital are the possibilities for easy image retrieval, efficient storage, and rapid image communication. This paper represents the authors' continuing efforts in compression research on the full-frame discrete wavelet (FFDWT) and full-frame discrete cosine transforms (FFDCT) for medical image compression. Prior to the coding, it is important to evaluate the global entropy in the decomposed space, because it is at the minimum entropy that maximum compression efficiency can be achieved. In this study, each image was split into the top three most significant bit (3MSB) image and the remaining remapped least significant bit (RLSB) image. The 3MSB image was compressed by an error-free contour coding and received an average of 0.1 bit/pixel. The RLSB image was transformed to either a multi-channel wavelet or the cosine transform domain for entropy evaluation. Ten x-ray chest radiographs and ten mammograms were randomly selected from our clinical database and were used for the study. Our results indicated that the coding scheme in the FFDCT domain performed better than in the FFDWT domain for high-resolution digital chest radiographs and mammograms. From this study, we found that the decomposition efficiency in the DCT domain for relatively smooth images is higher than that in the DWT. However, both schemes worked just as well for low-resolution digital images. We also found that the image characteristics of the 'Lena' image commonly used in the compression literature are very different from those of radiological images. The compression outcome for radiological images cannot be extrapolated from compression results based on the 'Lena' image.
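
    The entropy evaluation described above can be sketched concretely: quantize the transform coefficients and compute their first-order Shannon entropy; the decomposition with the lower entropy is the more compressible one. The snippet below uses a one-level 2-D Haar transform as a minimal, hedged stand-in for the full-frame DWT/DCT decompositions compared in the paper.

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar transform (average/detail split along
    columns, then along rows)."""
    a = (img[:, 0::2] + img[:, 1::2]) / 2     # horizontal averages
    d = (img[:, 0::2] - img[:, 1::2]) / 2     # horizontal details
    rows = np.hstack([a, d])
    a2 = (rows[0::2, :] + rows[1::2, :]) / 2  # vertical averages
    d2 = (rows[0::2, :] - rows[1::2, :]) / 2  # vertical details
    return np.vstack([a2, d2])

def entropy_bits(coeffs, step=1.0):
    """First-order Shannon entropy (bits/sample) of quantized values;
    lower entropy means a more compressible representation."""
    q = np.round(coeffs / step).astype(np.int64).ravel()
    _, counts = np.unique(q, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())
```

    For a smooth image the transform concentrates energy into few coefficients, so the transform-domain entropy drops well below the pixel-domain entropy.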

  11. High-energy-throughput pulse compression by off-axis group-delay compensation in a laser-induced filament

    International Nuclear Information System (INIS)

    Voronin, A. A.; Alisauskas, S.; Muecke, O. D.; Pugzlys, A.; Baltuska, A.; Zheltikov, A. M.

    2011-01-01

    Off-axial beam dynamics of ultrashort laser pulses in a filament enable a radical energy-throughput improvement for filamentation-assisted pulse compression. We identify regimes where a weakly diverging wave, produced on the trailing edge of the pulse, catches up with a strongly diverging component, arising in the central part of the pulse, allowing sub-100-fs millijoule infrared laser pulses to be compressed to 20-25-fs pulse widths with energy throughputs in excess of 70%. Theoretical predictions have been verified by experimental results on filamentation-assisted compression of 70-fs, 1.5-μm laser pulses in high-pressure argon.

  12. Accelerated high-frame-rate mouse heart cine-MRI using compressed sensing reconstruction

    NARCIS (Netherlands)

    Motaal, Abdallah G.; Coolen, Bram F.; Abdurrachim, Desiree; Castro, Rui M.; Prompers, Jeanine J.; Florack, Luc M. J.; Nicolay, Klaas; Strijkers, Gustav J.

    2013-01-01

    We introduce a new protocol to obtain very high-frame-rate cinematographic (Cine) MRI movies of the beating mouse heart within a reasonable measurement time. The method is based on a self-gated accelerated fast low-angle shot (FLASH) acquisition and compressed sensing reconstruction. Key to our

  13. Thermo-fluid dynamic analysis of wet compression process

    Energy Technology Data Exchange (ETDEWEB)

    Mohan, Abhay; Kim, Heuy Dong [School of Mechanical Engineering, Andong National University, Andong (Korea, Republic of); Chidambaram, Palani Kumar [FMTRC, Daejoo Machinery Co. Ltd., Daegu (Korea, Republic of); Suryan, Abhilash [Dept. of Mechanical Engineering, College of Engineering Trivandrum, Kerala (India)

    2016-12-15

    Wet compression systems increase the useful power output of a gas turbine by reducing the compressor work through the reduction of air temperature inside the compressor. The actual wet compression process differs from the conventional single-phase compression process due to the latent heat absorbed by the evaporating water droplets; thus the wet compression process cannot be assumed isentropic. In the current investigation, the gas-liquid two-phase flow has been modeled as air containing dispersed water droplets inside a simple cylinder-piston system. The piston moves in the axial direction inside the cylinder to achieve wet compression. Effects on thermodynamic properties such as temperature, pressure and relative humidity are investigated in detail for different parameters such as compression speed and overspray. An analytical model is derived and the requisite thermodynamic curves are generated. The deviations of the generated thermodynamic curves from the dry isentropic curves (PVγ = constant) are analyzed.
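
    The dry reference curves above follow directly from the isentropic relation: since P·V^γ is conserved, P2 = P1·(V1/V2)^γ and T2 = T1·(V1/V2)^(γ-1) along the stroke. Evaporative cooling during wet compression lowers the effective exponent; the polytropic index n_wet < γ in the sketch below is an illustrative assumption, not a value from the paper.

```python
import numpy as np

gamma = 1.4            # specific-heat ratio of dry air
n_wet = 1.25           # assumed polytropic index with overspray cooling

V = np.linspace(1.0, 0.2, 100)              # normalized cylinder volume
P_dry = (V[0] / V) ** gamma                 # pressure ratio, dry isentrope
P_wet = (V[0] / V) ** n_wet                 # pressure ratio, wet (assumed)
T_dry = 300.0 * (V[0] / V) ** (gamma - 1)   # temperature in K from 300 K
```

    At the same volume ratio the wet curve sits below the dry isentrope, which is exactly the deviation from PVγ = constant that the analytical model quantifies.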

  14. Thermo-fluid dynamic analysis of wet compression process

    International Nuclear Information System (INIS)

    Mohan, Abhay; Kim, Heuy Dong; Chidambaram, Palani Kumar; Suryan, Abhilash

    2016-01-01

    Wet compression systems increase the useful power output of a gas turbine by reducing the compressor work through the reduction of air temperature inside the compressor. The actual wet compression process differs from the conventional single-phase compression process due to the latent heat absorbed by the evaporating water droplets; thus the wet compression process cannot be assumed isentropic. In the current investigation, the gas-liquid two-phase flow has been modeled as air containing dispersed water droplets inside a simple cylinder-piston system. The piston moves in the axial direction inside the cylinder to achieve wet compression. Effects on thermodynamic properties such as temperature, pressure and relative humidity are investigated in detail for different parameters such as compression speed and overspray. An analytical model is derived and the requisite thermodynamic curves are generated. The deviations of the generated thermodynamic curves from the dry isentropic curves (PVγ = constant) are analyzed.

  15. Curvelet-based compressive sensing for InSAR raw data

    Science.gov (United States)

    Costa, Marcello G.; da Silva Pinho, Marcelo; Fernandes, David

    2015-10-01

    The aim of this work is to evaluate the compression performance of SAR raw data for interferometry applications, collected by the airborne BRADAR (Brazilian SAR System operating in X and P bands), using a new approach based on compressive sensing (CS) to achieve effective recovery with good phase preservation. For this framework a real-time capability is desirable, where the collected data can be compressed to reduce onboard storage and the bandwidth required for transmission. In CS theory, sparse unknown signals can be recovered from a small number of random or pseudo-random measurements by sparsity-promoting nonlinear recovery algorithms; the original signal volume can therefore be significantly reduced. To achieve a sparse representation of the SAR signal, a curvelet transform was applied. The curvelets constitute a directional frame, which allows an optimal sparse representation of objects with discontinuities along smooth curves, as observed in raw data, and provides advanced denoising optimization. For the tests, a scene of 8192 × 2048 samples in range and azimuth in X-band with 2 m resolution was made available. The sparse representation was compressed using low-dimension measurement matrices in each curvelet subband. An iterative CS reconstruction method based on IST (iterative soft/shrinkage thresholding) was then adjusted to recover the curvelet coefficients and thus the original signal. To evaluate the compression performance, the compression ratio (CR) and signal-to-noise ratio (SNR) were computed; because interferometry applications require higher reconstruction accuracy, phase parameters such as the standard deviation of the phase (PSD) and the mean phase error (MPE) were also computed. Moreover, in the image domain, a single-look complex image was generated to evaluate the compression effects. All results were analyzed in terms of sparsity to provide efficient compression and quality of recovery appropriate for InSAR applications.
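
    The IST recovery step named above alternates a gradient step on the data-fidelity term with a soft-thresholding (shrinkage) step that promotes sparsity. A minimal sketch follows; the paper applies this per curvelet subband, whereas here A is a generic Gaussian measurement matrix and the sparse vector is synthetic, both illustrative assumptions.

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ist_recover(A, y, lam=0.05, iters=300):
    """Iterative soft/shrinkage thresholding (IST): recover a sparse
    coefficient vector x from few measurements y = A @ x by repeating
    a gradient step on ||y - A x||^2 followed by shrinkage."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft(x + step * (A.T @ (y - A @ x)), step * lam)
    return x
```

    With enough random measurements relative to the sparsity level, the iteration recovers both the support and the amplitudes of the coefficient vector, which is what preserves the interferometric phase after inverse transformation.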

  16. Generation of intense, high-energy ion pulses by magnetic compression of ion rings

    International Nuclear Information System (INIS)

    Kapetanakos, C.A.

    1981-01-01

    A system, based on the magnetic compression of ion rings, for generating intense (high-current), high-energy ion pulses that are guided to a target without a metallic wall or an applied external magnetic field includes: a vacuum chamber; an inverse reflex tetrode for producing a hollow ion beam within the chamber; magnetic coils for producing a magnetic field, B0, along the axis of the chamber; a disc that sharpens a magnetic cusp for providing a rotational velocity to the beam and causing the beam to rotate; first and second gate coils for producing fast-rising magnetic field gates, the gates being spaced apart, each gate modifying a corresponding magnetic mirror peak (near and far peaks) for trapping or extracting the ions from the magnetic mirror, the ions forming a ring or layer having rotational energy; a metal liner for generating, by magnetic flux compression, a high, time-varying magnetic field, the time-varying magnetic field progressively increasing the kinetic energy of the ions, the magnetic field from the second gate coil decreasing the far mirror peak at the end of the compression for extracting the trapped rotating ions from the confining mirror; and a disc that sharpens a magnetic half-cusp for increasing the translational velocity of the ion beam. The system utilizes the self-magnetic field of the rotating, propagating ion beam to prevent the beam from expanding radially upon extraction.

  17. Compression Behavior of High Performance Polymeric Fibers

    National Research Council Canada - National Science Library

    Kumar, Satish

    2003-01-01

    Hydrogen bonding has proven to be effective in improving the compressive strength of rigid-rod polymeric fibers without resulting in a decrease in tensile strength while covalent crosslinking results in brittle fibers...

  18. File compression and encryption based on LLS and arithmetic coding

    Science.gov (United States)

    Yu, Changzhi; Li, Hengjian; Wang, Xiyu

    2018-03-01

    We propose a file compression model based on arithmetic coding. Firstly, the original symbols to be encoded are input to the encoder one by one; we produce a set of chaotic sequences by using the logistic and sine chaos system (LLS), and the values of these chaotic sequences randomly modify the upper and lower limits of the current symbol's probability interval. In order to achieve the purpose of encryption, we modify the upper and lower limits of all character probabilities when encoding each symbol. Experimental results show that the proposed model can achieve the purpose of data encryption while achieving almost the same compression efficiency as arithmetic coding.
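
    The chaotic keystream idea can be illustrated with one common logistic-sine combination. The paper does not specify its exact map, so the map below, the seed, and the interval-perturbation helper are assumptions for illustration only.

```python
import math

def logistic_sine_sequence(x0=0.37, r=3.99, n=16):
    """Chaotic keystream in [0, 1) from a combined logistic-sine map.
    Assumed form: x' = (r*x*(1-x) + (4-r)*sin(pi*x)/4) mod 1; the paper's
    actual LLS map may differ."""
    seq, x = [], x0
    for _ in range(n):
        x = (r * x * (1.0 - x) + (4.0 - r) * math.sin(math.pi * x) / 4.0) % 1.0
        seq.append(x)
    return seq

def perturb_interval(low, high, k):
    """Hypothetical keyed perturbation of a symbol's coding interval [low, high):
    shrink it by a key-dependent amount so decoding requires the same keystream."""
    eps = k * (high - low) * 1e-3
    return low + eps, high - eps

key = logistic_sine_sequence(n=4)
print(perturb_interval(0.2, 0.5, key[0]))
```

    The same deterministic sequence, regenerated from the secret seed on the decoder side, undoes the perturbation, which is what couples decryption to decompression.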

  19. Efficient High-Dimensional Entanglement Imaging with a Compressive-Sensing Double-Pixel Camera

    Directory of Open Access Journals (Sweden)

    Gregory A. Howland

    2013-02-01

    Full Text Available We implement a double-pixel compressive-sensing camera to efficiently characterize, at high resolution, the spatially entangled fields that are produced by spontaneous parametric down-conversion. This technique leverages sparsity in spatial correlations between entangled photons to improve acquisition times over raster scanning by a scaling factor up to n^{2}/log(n) for n-dimensional images. We image at resolutions up to 1024 dimensions per detector and demonstrate a channel capacity of 8.4 bits per photon. By comparing the entangled photons’ classical mutual information in conjugate bases, we violate an entropic Einstein-Podolsky-Rosen separability criterion for all measured resolutions. More broadly, our result indicates that compressive sensing can be especially effective for higher-order measurements on correlated systems.

  20. Pulse compression by Raman induced cavity dumping

    International Nuclear Information System (INIS)

    De Rougemont, F.; Xian, D.K.; Frey, R.; Pradere, F.

    1985-01-01

    High efficiency pulse compression using Raman induced cavity dumping has been studied theoretically and experimentally. Through stimulated Raman scattering the electromagnetic energy at a primary frequency is down-converted and extracted from a storage cavity containing the Raman medium. Energy storage may be achieved either at the laser frequency by using a laser medium inside the storage cavity, or performed at a new frequency obtained through an intracavity nonlinear process. The storage cavity may be dumped passively through stimulated Raman scattering either in an oscillator or in an amplifier. All these cases have been studied by using a ruby laser as the pump source and compressed hydrogen as the Raman scatter. Results differ slightly accordingly to the technique used, but pulse shortenings higher than 10 and quantum efficiencies higher than 80% were obtained. This method could also be used with large power lasers of any wavelength from the ultraviolet to the farinfrared spectral region

  1. Compression of fiber supercontinuum pulses to the Fourier-limit in a high-numerical-aperture focus

    DEFF Research Database (Denmark)

    Tu, Haohua; Liu, Yuan; Turchinovich, Dmitry

    2011-01-01

    A multiphoton intrapulse interference phase scan (MIIPS) adaptively and automatically compensates the combined phase distortion from a fiber supercontinuum source, a spatial light modulator pulse shaper, and a high-NA microscope objective, allowing Fourier-transform-limited compression of the supercontinuum pulses at the focus. The source operates with a power of 18–70 mW and a repetition rate of 76 MHz, permitting the application of this source to nonlinear optical microscopy and coherently controlled microspectroscopy.

  2. Attitudes and Opinions from the Nation's High Achieving Teens: 26th Annual Survey of High Achievers.

    Science.gov (United States)

    Who's Who among American High School Students, Lake Forest, IL.

    A national survey of 3,351 high achieving high school students (junior and senior level) was conducted. All students had A or B averages. Topics covered include lifestyles, political beliefs, violence and entertainment, education, cheating, school violence, sexual violence and date rape, peer pressure, popularity, suicide, drugs and alcohol,…

  3. Integer Set Compression and Statistical Modeling

    DEFF Research Database (Denmark)

    Larsson, N. Jesper

    2014-01-01

    Compression of integer sets and sequences has been extensively studied for settings where elements follow a uniform probability distribution. In addition, methods exist that exploit clustering of elements in order to achieve higher compression performance. In this work, we address the case where enumeration of elements may be arbitrary or random, but where statistics is kept in order to estimate probabilities of elements. We present a recursive subset-size encoding method that is able to benefit from statistics, and explore the effects of permuting the enumeration order based on element probabilities...
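
    The clustering-exploiting baseline mentioned in the abstract can be sketched with gap coding under Elias gamma codes. This illustrates plain gap coding only, not the paper's recursive subset-size method; non-negative integer elements are assumed.

```python
def gamma_encode(n):
    """Elias gamma code for a positive integer: unary length prefix + binary."""
    b = bin(n)[2:]
    return "0" * (len(b) - 1) + b

def encode_set(s):
    """Gap-encode a sorted integer set: store first element + 1, then the gaps.
    Clustered sets yield many small gaps, hence short codes."""
    s = sorted(s)
    gaps = [s[0] + 1] + [b - a for a, b in zip(s, s[1:])]
    return "".join(gamma_encode(g) for g in gaps)

clustered = {100, 101, 102, 103, 104, 200, 201, 202}
print(len(encode_set(clustered)), "bits")  # small gaps dominate the cost
```

    A uniform-random set of the same size over the same range would produce larger gaps and a longer code, which is the contrast the statistics-driven methods in the paper aim to exploit further.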

  4. Compression and channel-coding algorithms for high-definition television signals

    Science.gov (United States)

    Alparone, Luciano; Benelli, Giuliano; Fabbri, A. F.

    1990-09-01

    In this paper, results of investigations into the effects of channel errors in the transmission of images compressed by means of techniques based on the Discrete Cosine Transform (DCT) and Vector Quantization (VQ) are presented. Since compressed images are heavily degraded by noise in the transmission channel, more seriously so for VQ-coded images, theoretical studies and simulations are presented in order to define and evaluate this degradation. Some channel coding schemes are proposed in order to protect the information during transmission: Hamming codes (7,4), (15,11) and (31,26) have been used for DCT-compressed images, and more powerful codes, such as the Golay (23,12) code, for VQ-compressed images. Performances attainable with soft-decoding techniques are also evaluated; better-quality images have been obtained than with classical hard-decoding techniques. All tests have been carried out to simulate the transmission of a digital image from an HDTV signal over an AWGN channel with PSK modulation.
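
    A minimal sketch of the shortest of these codes, the Hamming (7,4) code, shows how a single channel error is located and corrected from the syndrome:

```python
import numpy as np

# Generator and parity-check matrices of the (7,4) Hamming code (systematic form)
G = np.array([[1,0,0,0,1,1,0],
              [0,1,0,0,1,0,1],
              [0,0,1,0,0,1,1],
              [0,0,0,1,1,1,1]])
H = np.array([[1,1,0,1,1,0,0],
              [1,0,1,1,0,1,0],
              [0,1,1,1,0,0,1]])

def encode(bits4):
    return (np.array(bits4) @ G) % 2

def correct(word7):
    s = (H @ word7) % 2                               # syndrome
    if s.any():                                       # nonzero: one bit flipped;
        idx = np.where((H.T == s).all(axis=1))[0][0]  # syndrome matches a column of H
        word7 = word7.copy()
        word7[idx] ^= 1
    return word7

msg = [1, 0, 1, 1]
cw = encode(msg)
rx = cw.copy(); rx[2] ^= 1        # single channel error
assert (correct(rx) == cw).all()
print(correct(rx)[:4])            # first 4 bits are the message
```

    The (15,11) and (31,26) codes work identically with larger matrices; the Golay (23,12) code used for the more fragile VQ data corrects up to three errors per word.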

  5. Medical Image Compression Based on Vector Quantization with Variable Block Sizes in Wavelet Domain

    Directory of Open Access Journals (Sweden)

    Huiyan Jiang

    2012-01-01

    An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnostic-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size was implemented. In the novel vector quantization method, the local fractal dimension (LFD) was used to analyze the local complexity of each wavelet coefficient subband. Then an optimal quadtree method was employed to partition each wavelet coefficient subband into sub-blocks of several sizes. After that, a modified K-means approach based on an energy function was used in the codebook training phase. At last, vector quantization coding was implemented in the different types of sub-blocks. In order to verify the effectiveness of the proposed algorithm, JPEG, JPEG2000, and a fractal coding approach were chosen as contrast algorithms. Experimental results show that the proposed method can improve the compression performance and can achieve a balance between the compression ratio and the image visual quality.
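
    The codebook-training step can be illustrated with plain k-means (Lloyd iterations). The paper's energy-based modification, LFD analysis, and quadtree partitioning are omitted in this sketch; the blocks here are random stand-ins for wavelet-subband sub-blocks.

```python
import numpy as np

def train_codebook(blocks, k=8, iters=20, rng=None):
    """Standard k-means codebook training for vector quantization."""
    rng = rng or np.random.default_rng(0)
    codebook = blocks[rng.choice(len(blocks), k, replace=False)]
    for _ in range(iters):
        # assign each block to its nearest codeword
        d = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        # move each codeword to the centroid of its cluster
        for j in range(k):
            members = blocks[labels == j]
            if len(members):
                codebook[j] = members.mean(0)
    return codebook, labels

# toy data: 4x4 "image blocks" flattened to length-16 vectors
rng = np.random.default_rng(1)
blocks = rng.standard_normal((200, 16))
cb, labels = train_codebook(blocks, k=8)
recon = cb[labels]                       # each block replaced by its codeword
print(((blocks - recon) ** 2).mean())    # quantization MSE
```

    In the VQ coder only the index (`labels`) is transmitted per block, so the rate is log2(k) bits per block regardless of block energy; the variable block sizes in the paper trade that rate against local distortion.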

  6. Eulerian and Lagrangian statistics from high resolution numerical simulations of weakly compressible turbulence

    NARCIS (Netherlands)

    Benzi, R.; Biferale, L.; Fisher, R.T.; Lamb, D.Q.; Toschi, F.

    2009-01-01

    We report a detailed study of Eulerian and Lagrangian statistics from high-resolution Direct Numerical Simulations of isotropic, weakly compressible turbulence. The Reynolds number at the Taylor microscale is estimated to be around 600. Eulerian and Lagrangian statistics are evaluated over a huge data set.

  7. A Novel Range Compression Algorithm for Resolution Enhancement in GNSS-SARs

    Directory of Open Access Journals (Sweden)

    Yu Zheng

    2017-06-01

    In this paper, a novel range compression algorithm for enhancing the range resolution of a passive Global Navigation Satellite System-based Synthetic Aperture Radar (GNSS-SAR) is proposed. In the proposed algorithm, within each azimuth bin, range compression is first carried out by correlating the reflected GNSS intermediate frequency (IF) signal with a synchronized direct GNSS base-band signal in the range domain. Thereafter, spectrum equalization is applied to the compressed results to suppress side lobes and obtain the final range-compressed signal. Both theoretical analysis and simulation results have demonstrated that significant range resolution improvement in GNSS-SAR images can be achieved by the proposed range compression algorithm, compared to the conventional range compression algorithm.
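
    The correlation step can be sketched as FFT-based matched filtering. The ±1 pseudo-random code below merely stands in for a real GNSS ranging signal, and the spectrum-equalization stage is not modeled.

```python
import numpy as np

def range_compress(reflected, reference):
    """Matched-filter range compression: correlate the reflected signal with the
    synchronized direct (reference) signal via the FFT."""
    n = len(reflected) + len(reference) - 1
    nfft = 1 << (n - 1).bit_length()
    R = np.fft.fft(reflected, nfft) * np.conj(np.fft.fft(reference, nfft))
    return np.fft.ifft(R)[:n]

# toy example: a +/-1 pseudo-random code as a stand-in GNSS ranging code
rng = np.random.default_rng(2)
code = rng.choice([-1.0, 1.0], 512)
delay = 100
echo = np.roll(code, delay) + 0.1 * rng.standard_normal(512)  # delayed, noisy echo
peak = np.abs(range_compress(echo, code)).argmax()
print(peak)  # correlation peak lands at the injected delay of 100
```

    The side lobes around that peak are what the paper's spectrum-equalization step suppresses to sharpen the range response.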

  8. Contact Behavior of Composite CrTiSiN Coated Dies in Compressing of Mg Alloy Sheets under High Pressure

    Directory of Open Access Journals (Sweden)

    T.S. Yang

    2018-01-01

    Hard coatings have been adopted in cutting and forming applications for nearly two decades. The major purpose of using hard coatings is to reduce the friction coefficient between contact surfaces, to increase strength, toughness and anti-wear performance of working tools and molds, and then to obtain a smooth work surface and an increase in service life of tools and molds. In this report, we deposited a composite CrTiSiN hard coating, and a traditional single-layered TiAlN coating as a reference. Then, the coatings were comparatively studied by a series of tests. A field emission SEM was used to characterize the microstructure. Hardness was measured using a nano-indentation tester. Adhesion of coatings was evaluated using a Rockwell C hardness indentation tester. A pin-on-disk wear tester with WC balls as sliding counterparts was used to determine the wear properties. A self-designed compression and friction tester, by combining a Universal Testing Machine and a wear tester, was used to evaluate the contact behavior of composite CrTiSiN coated dies in compressing of Mg alloy sheets under high pressure. The results indicated that the hardness of composite CrTiSiN coating was lower than that of the TiAlN coating. However, the CrTiSiN coating showed better anti-wear performance. The CrTiSiN coated dies achieved smooth surfaces on the Mg alloy sheet in the compressing test and lower friction coefficient in the friction test, as compared with the TiAlN coating.

  9. Contact Behavior of Composite CrTiSiN Coated Dies in Compressing of Mg Alloy Sheets under High Pressure.

    Science.gov (United States)

    Yang, T S; Yao, S H; Chang, Y Y; Deng, J H

    2018-01-08

    Hard coatings have been adopted in cutting and forming applications for nearly two decades. The major purpose of using hard coatings is to reduce the friction coefficient between contact surfaces, to increase strength, toughness and anti-wear performance of working tools and molds, and then to obtain a smooth work surface and an increase in service life of tools and molds. In this report, we deposited a composite CrTiSiN hard coating, and a traditional single-layered TiAlN coating as a reference. Then, the coatings were comparatively studied by a series of tests. A field emission SEM was used to characterize the microstructure. Hardness was measured using a nano-indentation tester. Adhesion of coatings was evaluated using a Rockwell C hardness indentation tester. A pin-on-disk wear tester with WC balls as sliding counterparts was used to determine the wear properties. A self-designed compression and friction tester, by combining a Universal Testing Machine and a wear tester, was used to evaluate the contact behavior of composite CrTiSiN coated dies in compressing of Mg alloy sheets under high pressure. The results indicated that the hardness of composite CrTiSiN coating was lower than that of the TiAlN coating. However, the CrTiSiN coating showed better anti-wear performance. The CrTiSiN coated dies achieved smooth surfaces on the Mg alloy sheet in the compressing test and lower friction coefficient in the friction test, as compared with the TiAlN coating.

  10. Comparison of high order algorithms in Aerosol and Aghora for compressible flows

    Directory of Open Access Journals (Sweden)

    Mbengoue D. A.

    2013-12-01

    This article summarizes the work done within the Colargol project during CEMRACS 2012. The aim of this project is to compare the implementations of high order finite element methods for compressible flows that have been developed at ONERA and at INRIA for about one year, within the Aghora and Aerosol libraries.

  11. Student Perceptions of High-Achieving Classmates

    Science.gov (United States)

    Händel, Marion; Vialle, Wilma; Ziegler, Albert

    2013-01-01

    The reported study investigated students' perceptions of their high-performing classmates in terms of intelligence, social skills, and conscientiousness in different school subjects. The school subjects for study were examined with regard to cognitive, physical, and gender-specific issues. The results show that high academic achievements in…

  12. Prediction of compression strength of high performance concrete using artificial neural networks

    International Nuclear Information System (INIS)

    Torre, A; Moromi, I; Garcia, F; Espinoza, P; Acuña, L

    2015-01-01

    High-strength concrete is undoubtedly one of the most innovative materials in construction. Its manufacture is simple and is carried out starting from essential components (water, cement, fine and coarse aggregates) and a number of additives. Their proportions have a high influence on the final strength of the product. These relations do not seem to follow a mathematical formula, and yet their knowledge is crucial to optimize the quantities of raw materials used in the manufacture of concrete. Of all mechanical properties, concrete compressive strength at 28 days is most often used for quality control. Therefore, it would be important to have a tool to numerically model such relationships, even before processing. In this respect, artificial neural networks have proven to be a powerful modeling tool, especially for obtaining reliable results when the relationships between the variables involved in the process are not well understood. This research designed an artificial neural network to model the compressive strength of concrete based on its manufacturing parameters, obtaining correlations of the order of 0.94.
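
    A minimal one-hidden-layer network trained by gradient descent illustrates the modeling idea. The data here are synthetic (a toy mix-strength rule invented for the sketch), not the study's concrete measurements, and the architecture is not the paper's.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in for mix-design data: 4 inputs (e.g. water, cement,
# fine/coarse aggregate fractions), one output (28-day compressive strength).
X = rng.uniform(0, 1, (300, 4))
y = (30 + 40 * X[:, 1] - 25 * X[:, 0] + 10 * X[:, 1] * X[:, 2])[:, None]  # toy rule

# one hidden tanh layer, trained by batch gradient descent on 0.5*MSE
W1 = rng.standard_normal((4, 16)) * 0.5; b1 = np.zeros(16)
W2 = rng.standard_normal((16, 1)) * 0.5; b2 = np.zeros(1)
y_mean, y_std = y.mean(), y.std()
t = (y - y_mean) / y_std                      # normalized target
lr = 0.05
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)                  # hidden activations
    p = h @ W2 + b2                           # prediction
    g = (p - t) / len(X)                      # dLoss/dp
    W2 -= lr * h.T @ g;  b2 -= lr * g.sum(0)
    gh = (g @ W2.T) * (1 - h ** 2)            # backprop through tanh
    W1 -= lr * X.T @ gh; b1 -= lr * gh.sum(0)

pred = (np.tanh(X @ W1 + b1) @ W2 + b2) * y_std + y_mean
r = np.corrcoef(pred.ravel(), y.ravel())[0, 1]
print(round(r, 3))                            # correlation of the fit
```

    On real mix data the same loop (with held-out validation) is what yields the ~0.94 correlations reported.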

  13. Compressed sensing electron tomography of needle-shaped biological specimens – Potential for improved reconstruction fidelity with reduced dose

    Energy Technology Data Exchange (ETDEWEB)

    Saghi, Zineb, E-mail: saghizineb@gmail.com [Department of Materials Science and Metallurgy, University of Cambridge, 27 Charles Babbage Road, Cambridge CB3 0FS (United Kingdom); Divitini, Giorgio [Department of Materials Science and Metallurgy, University of Cambridge, 27 Charles Babbage Road, Cambridge CB3 0FS (United Kingdom); Winter, Benjamin [Center for Nanoanalysis and Electron Microscopy (CENEM), Friedrich-Alexander-Universität Erlangen-Nürnberg, Cauerstraße 6, 91058 Erlangen (Germany); Leary, Rowan [Department of Materials Science and Metallurgy, University of Cambridge, 27 Charles Babbage Road, Cambridge CB3 0FS (United Kingdom); Spiecker, Erdmann [Center for Nanoanalysis and Electron Microscopy (CENEM), Friedrich-Alexander-Universität Erlangen-Nürnberg, Cauerstraße 6, 91058 Erlangen (Germany); Ducati, Caterina [Department of Materials Science and Metallurgy, University of Cambridge, 27 Charles Babbage Road, Cambridge CB3 0FS (United Kingdom); Midgley, Paul A., E-mail: pam33@cam.ac.uk [Department of Materials Science and Metallurgy, University of Cambridge, 27 Charles Babbage Road, Cambridge CB3 0FS (United Kingdom)

    2016-01-15

    Electron tomography is an invaluable method for 3D cellular imaging. The technique is, however, limited by the specimen geometry, with a loss of resolution due to a restricted tilt range, an increase in specimen thickness with tilt, and a resultant need for subjective and time-consuming manual segmentation. Here we show that 3D reconstructions of needle-shaped biological samples exhibit isotropic resolution, facilitating improved automated segmentation and feature detection. By using scanning transmission electron tomography, with small probe convergence angles, high spatial resolution is maintained over large depths of field and across the tilt range. Moreover, the application of compressed sensing methods to the needle data demonstrates how high fidelity reconstructions may be achieved with far fewer images (and thus greatly reduced dose) than needed by conventional methods. These findings open the door to high fidelity electron tomography over critically relevant length-scales, filling an important gap between existing 3D cellular imaging techniques. - Highlights: • On-axis electron tomography of a needle-shaped biological sample is presented. • A reconstruction with isotropic resolution is achieved. • Compressed sensing methods are compared to conventional reconstruction algorithms. • High fidelity reconstructions are achieved with greatly undersampled datasets.

  14. Compressed sensing electron tomography of needle-shaped biological specimens – Potential for improved reconstruction fidelity with reduced dose

    International Nuclear Information System (INIS)

    Saghi, Zineb; Divitini, Giorgio; Winter, Benjamin; Leary, Rowan; Spiecker, Erdmann; Ducati, Caterina; Midgley, Paul A.

    2016-01-01

    Electron tomography is an invaluable method for 3D cellular imaging. The technique is, however, limited by the specimen geometry, with a loss of resolution due to a restricted tilt range, an increase in specimen thickness with tilt, and a resultant need for subjective and time-consuming manual segmentation. Here we show that 3D reconstructions of needle-shaped biological samples exhibit isotropic resolution, facilitating improved automated segmentation and feature detection. By using scanning transmission electron tomography, with small probe convergence angles, high spatial resolution is maintained over large depths of field and across the tilt range. Moreover, the application of compressed sensing methods to the needle data demonstrates how high fidelity reconstructions may be achieved with far fewer images (and thus greatly reduced dose) than needed by conventional methods. These findings open the door to high fidelity electron tomography over critically relevant length-scales, filling an important gap between existing 3D cellular imaging techniques. - Highlights: • On-axis electron tomography of a needle-shaped biological sample is presented. • A reconstruction with isotropic resolution is achieved. • Compressed sensing methods are compared to conventional reconstruction algorithms. • High fidelity reconstructions are achieved with greatly undersampled datasets.

  15. The Development of the Electrically Controlled High Power RF Switch and Its Application to Active RF Pulse Compression Systems

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Jiquan [Stanford Univ., CA (United States)

    2008-12-01

    In the past decades, there has been increasing interest in pulsed high-power RF sources for building high-gradient, high-energy particle accelerators. Passive RF pulse compression systems have been used in many applications to match the available RF sources to loads requiring higher RF power but a shorter pulse. Theoretically, an active RF pulse compression system has the advantages of higher efficiency and compactness over a passive system. However, the key component for such a system, an element capable of switching hundreds of megawatts of RF power in a time short compared to the compressed pulse width, is still an open problem. In this dissertation, we present a switch module composed of an active window based on bulk effects in semiconductors, a circular waveguide three-port network, and a movable short plane, with the capability to adjust the S-parameters before and after switching. The RF properties of the switch module were analyzed. We give the scaling laws of multiple-element switch systems, which allow the expansion of the system to a higher power level. We present a novel overmoded design for the circular waveguide three-port network and the associated circular-to-rectangular mode converter. We also detail the design and synthesis process of this novel mode converter. We demonstrate an electrically controlled, ultra-fast, high-power X-band RF active window built with PIN diodes on high-resistivity silicon. The window is capable of handling multi-megawatt RF power and can switch in 2-300 ns with a 1000 A current driver. A low-power active pulse compression experiment was carried out with the switch module and a 375 ns resonant delay line, obtaining a compression gain of 8 with a compression ratio of 20.

  16. Compressive pre-stress effects on magnetostrictive behaviors of highly textured Galfenol and Alfenol thin sheets

    Directory of Open Access Journals (Sweden)

    Julia R. Downing

    2017-05-01

    Fe-Ga (Galfenol) and Fe-Al (Alfenol) are rare-earth-free magnetostrictive alloys with mechanical robustness and strong magnetoelastic coupling. Since highly textured Galfenol and Alfenol thin sheets along orientations have been developed with magnetostrictive performances of ∼270 ppm and ∼160 ppm, respectively, they have been of great interest in sensor and energy harvesting applications. In this work, we investigate stress-dependent magnetostrictive behaviors in highly textured rolled sheets of NbC-added Fe80Al20 and Fe81Ga19 alloys with a single (011) grain coverage of ∼90%. A compact fixture was designed and used to introduce a uniform compressive pre-stress to those thin sheet samples along a [100] direction. As compressive pre-stress was increased to above 100 MPa, the maximum observed increase in parallel magnetostriction along the stress direction, λ//, was 42%, occurring in highly textured (011) Fe81Ga19 thin sheets at a compressive pre-stress of 60 MPa. The same phenomena were observed for (011) Fe80Al20 (a maximum increase of 88% at a 49 MPa compressive stress). This trend is shown to be consistent with published results on the effect of pre-stress on magnetostriction in rods of single-crystal and textured polycrystalline Fe-Ga alloys of similar compositions, and with single-crystal data gathered using our experimental set-up. Interestingly, the saturating field (Hs) does not vary with pre-stress, whereas the saturating field in rod-shaped samples of Fe-Ga increases with increasing pre-stress. This suggests that for a range of compressive pre-stresses, thin sheet samples have larger values of the d33 transduction coefficient and susceptibility than rod-shaped samples of similar alloy compositions, and hence they should provide performance benefits when used in sensor and actuator device applications. Thus, we discuss potential reasons for the unexpected trends in Hs with pre-stress, and present preliminary results from tests conducted

  17. [Ambulant compression therapy for crural ulcers; an effective treatment when applied skilfully].

    Science.gov (United States)

    de Boer, Edith M; Geerkens, Maud; Mooij, Michael C

    2015-01-01

    The incidence of crural ulcers is high. They reduce quality of life considerably and create a burden on the healthcare budget. The key treatment is ambulant compression therapy (ACT). We describe two patients with crural ulcers whose ambulant compression treatment was suboptimal and did not result in healing. When the bandages were applied correctly, healing was achieved. If correctly applied, ACT should provide sufficient pressure to eliminate oedema, whilst taking local circumstances such as bony structures and arterial quality into consideration. To provide pressure-to-measure, regular practical training, skills and regular quality checks are needed. Knowledge of the properties of bandages and the proper use of materials for padding under the bandage enables good personalised ACT. In trained hands, adequate compression making use of simple bandages and dressings provides good care for patients suffering from crural ulcers, in contrast to inadequate ACT using the same materials.

  18. Self-Concept and Achievement Motivation of High School Students

    Science.gov (United States)

    Lawrence, A. S. Arul; Vimala, A.

    2013-01-01

    The present study "Self-concept and Achievement Motivation of High School Students" was investigated to find the relationship between Self-concept and Achievement Motivation of High School Students. Data for the study were collected using Self-concept Questionnaire developed by Raj Kumar Saraswath (1984) and Achievement Motive Test (ACMT)…

  19. Demonstration of Isothermal Compressed Air Energy Storage to Support Renewable Energy Production

    Energy Technology Data Exchange (ETDEWEB)

    Bollinger, Benjamin [Sustainx, Incorporated, Seabrook, NH (United States)

    2015-01-02

    This project develops and demonstrates a megawatt (MW)-scale energy storage system that employs compressed air as the storage medium. An isothermal compressed air energy storage (ICAES™) system rated for 1 MW or more will be demonstrated in a full-scale prototype unit. Breakthrough cost-effectiveness will be achieved through the use of proprietary methods for isothermal gas cycling and staged gas expansion, implemented using industrially mature, readily available components. The ICAES approach uses an electrically driven mechanical system to raise air to high pressure for storage in low-cost pressure vessels, pipeline, or a lined-rock cavern (LRC). This air is later expanded through the same mechanical system to drive the electric motor as a generator. The approach incorporates two key efficiency-enhancing innovations: (1) isothermal (constant-temperature) gas cycling, which is achieved by mixing liquid with air (via spray or foam) to exchange heat with air undergoing compression or expansion; and (2) a novel, staged gas-expansion scheme that allows the drivetrain to operate at constant power while still allowing the stored gas to work over its entire pressure range. The ICAES system will be scalable, non-toxic, and cost-effective, making it suitable for firming renewables and for other grid applications.
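
    The efficiency case for isothermal cycling follows from the ideal-gas work integrals: isothermal compression from p1 to p2 costs W = p1·V1·ln(p2/p1), while reversible adiabatic compression costs more input work (and the extra energy leaves as heat). A quick numerical comparison, using illustrative pressures rather than the project's actual operating points:

```python
import math

def isothermal_work(p1, p2, v1):
    """Input work (J) to compress v1 m^3 of ideal gas isothermally, p1 -> p2."""
    return p1 * v1 * math.log(p2 / p1)

def adiabatic_work(p1, p2, v1, gamma=1.4):
    """Input work (J) for reversible adiabatic compression of the same gas."""
    return p1 * v1 * (gamma / (gamma - 1)) * ((p2 / p1) ** ((gamma - 1) / gamma) - 1)

# example: compress 1 m^3 of air from 1 bar to 200 bar
w_iso = isothermal_work(1e5, 2e7, 1.0)
w_adi = adiabatic_work(1e5, 2e7, 1.0)
print(w_iso / 1e6, w_adi / 1e6)  # MJ: isothermal needs markedly less input work
```

    The liquid spray/foam in ICAES keeps the real compression close to the isothermal curve, which is where the cycle-efficiency gain comes from.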

  20. POLYCOMP: Efficient and configurable compression of astronomical timelines

    Science.gov (United States)

    Tomasi, M.

    2016-07-01

    This paper describes the implementation of polycomp, an open-source, publicly available program for compressing one-dimensional data series in tabular format. The program is particularly suited for compressing smooth, noiseless streams of data, such as pointing information, as one of the algorithms it implements applies a combination of least-squares polynomial fitting and discrete Chebyshev transforms that is able to achieve a compression ratio Cr of up to ≈ 40 in the examples discussed in this work. This performance comes at the expense of a loss of information, whose upper bound is configured by the user. I show two areas in which the usage of polycomp is interesting. In the first example, I compress the ephemeris table of an astronomical object (Ganymede), obtaining Cr ≈ 20, with a compression error on the x, y, z coordinates smaller than 1 m. In the second example, I compress the publicly available timelines recorded by the Low Frequency Instrument (LFI), an array of microwave radiometers onboard the ESA Planck spacecraft. The compression reduces the needed storage from ∼ 6.5 TB to ≈ 0.75 TB (Cr ≈ 9), thus making them small enough to be kept in a portable hard drive.
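
    The fit-and-bound idea (polynomial/Chebyshev fitting with a user-configured error ceiling) can be sketched as follows. Chunk size, degrees, and thresholds here are illustrative, not polycomp's actual defaults.

```python
import numpy as np

def compress_chunk(samples, max_err, max_deg=12):
    """Fit the lowest-degree Chebyshev polynomial whose worst-case reconstruction
    error stays below the user-set bound; store only its coefficients."""
    x = np.linspace(-1.0, 1.0, len(samples))
    for deg in range(max_deg + 1):
        coeffs = np.polynomial.chebyshev.chebfit(x, samples, deg)
        err = np.abs(np.polynomial.chebyshev.chebval(x, coeffs) - samples).max()
        if err <= max_err:
            return coeffs
    return None  # chunk not smooth enough: fall back to storing it raw

# smooth "pointing-like" stream: 256 samples reduced to a few coefficients
t = np.linspace(0.0, 1.0, 256)
stream = 3.0 + 0.5 * t + 0.05 * np.sin(2 * np.pi * t)
coeffs = compress_chunk(stream, max_err=1e-5)
print(len(stream), "->", len(coeffs))  # compression ratio = 256 / len(coeffs)
```

    The user-chosen `max_err` is the configured upper bound on information loss; rougher chunks simply fail the bound and are stored uncompressed.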

  1. Long-term patency of experimental magnetic compression gastroenteric anastomoses achieved with covered stents.

    Science.gov (United States)

    Cope, C; Ginsberg, G G

    2001-06-01

    Our aim was to evaluate the efficacy of a prototype "YO-YO"-shaped covered stent for keeping experimental magnetic compression gastroenteric fistulas patent for 6 months. Magnets were introduced perorally with endoscopic and fluoroscopic guidance and were mated across the gastric and jejunal walls of 5 dogs. After a mean of 5.5 days a 12-mm diameter YO-YO stent was placed perorally in the resulting fistula. The gastroenteric anastomosis (GEA) with stent was observed endoscopically and gastrographically at 1- to 2-month intervals. There was no morbidity and there were no significant weight changes. The GEA was widely patent at necropsy at 6 months (n = 4); partial membrane separation occurred at 5 months in the fifth dog. There was minor breakage of the stent prongs in 2 animals. Peroral creation of a stented magnetic compression GEA is safe and provides long-term patency. This technique may be potentially useful for managing gastric outlet obstruction caused by malignancy.

  2. A high-speed lossless data compression system for space applications

    Science.gov (United States)

    Miko, Joe; Fong, Wai; Miller, Warner

    1993-01-01

    This paper reports on the integration of a lossless data compression/decompression chipset into a space data system architecture. For its compression engine, the data system incorporates the Universal Source Encoder (USE) designed for the NASA/Goddard Space Flight Center. Currently, the data compression testbed generates video frames consisting of 512 lines of 512 pixels having 8-bit resolution. Each image is passed through the USE where the lines are internally partitioned into 16-word blocks. These blocks are adaptively encoded across widely varying entropy levels using a Rice 12-option set coding algorithm. The current system operates at an Input/Output rate of 10 Msamples/s or 80 Mbits/s for each buffered input line. Frame and line synchronization for each image are maintained through the use of uniquely decodable command words. Length information of each variable length compressed image line is also included in the output stream. The data and command information are passed to the next stage of the system architecture through a serial fiber-optic transmitter. The initial segment of this stage consists of packetizer hardware which adds an appropriate CCSDS header to the received source data. An uncompressed mode is optionally available to pass image lines directly to the packetizer hardware. A data decompression testbed has also been developed to confirm the data compression operation.
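    The block-adaptive Rice coding used by the USE can be sketched in miniature. The following toy encoder works on bit lists rather than packed words and uses a simplified option set; it illustrates the principle, not the chip's implementation.

```python
def rice_encode(values, k):
    """Rice-encode non-negative integers: unary quotient + k-bit remainder."""
    bits = []
    for v in values:
        q, r = v >> k, v & ((1 << k) - 1)
        bits.extend([1] * q + [0])                     # unary quotient, 0-terminated
        bits.extend((r >> i) & 1 for i in reversed(range(k)))
    return bits

def rice_decode(bits, k, count):
    """Invert rice_encode for `count` values."""
    out, i = [], 0
    for _ in range(count):
        q = 0
        while bits[i] == 1:
            q, i = q + 1, i + 1
        i += 1                                         # skip the terminating 0
        r = 0
        for _ in range(k):
            r, i = (r << 1) | bits[i], i + 1
        out.append((q << k) | r)
    return out

def best_k(values, k_options=range(12)):
    """Per-block adaptivity: pick the option that minimizes encoded length."""
    return min(k_options, key=lambda k: len(rice_encode(values, k)))

block = [3, 1, 4, 1, 5, 9, 2, 6, 8, 0, 7, 2, 3, 5, 1, 4]   # one 16-sample block
k = best_k(block)
encoded = rice_encode(block, k)
decoded = rice_decode(encoded, k, len(block))
```

    Choosing the parameter per block is what lets the coder track widely varying entropy levels across an image.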

  3. Combustion in a High-Speed Compression-Ignition Engine

    Science.gov (United States)

    Rothrock, A M

    1933-01-01

    An investigation conducted to determine the factors which control the combustion in a high-speed compression-ignition engine is presented. Indicator cards were taken with the Farnboro indicator and analyzed according to the tangent method devised by Schweitzer. The analysis show that in a quiescent combustion chamber increasing the time lag of auto-ignition increases the maximum rate of combustion. Increasing the maximum rate of combustion increases the tendency for detonation to occur. The results show that by increasing the air temperature during injection the start of combustion can be forced to take place during injection and so prevent detonation from occurring. It is shown that the rate of fuel injection does not in itself control the rate of combustion.

  4. Biomedical sensor design using analog compressed sensing

    Science.gov (United States)

    Balouchestani, Mohammadreza; Krishnan, Sridhar

    2015-05-01

    The main drawback of current healthcare systems is their location-specific nature, due to the use of fixed/wired biomedical sensors. Since biomedical sensors are usually battery-driven, power consumption is the most important factor determining the life of a biomedical sensor. They are also restricted by size, cost, and transmission capacity. It is therefore important to reduce the sampling load by merging the sampling and compression steps, reducing storage usage, transmission times, and power consumption, in order to expand current healthcare systems into Wireless Healthcare Systems (WHSs). In this work, we present an implementation of a low-power biomedical sensor using an analog Compressed Sensing (CS) framework for sparse biomedical signals that addresses both the energy and telemetry bandwidth constraints of wearable and wireless Body-Area Networks (BANs). This architecture enables continuous data acquisition and compression of biomedical signals suitable for a variety of diagnostic and treatment purposes. At the transmitter side, the analog-CS framework is applied at the sensing step, before the Analog to Digital Converter (ADC), in order to generate a compressed version of the input analog bio-signal. At the receiver side, a reconstruction algorithm based on the Restricted Isometry Property (RIP) condition is applied in order to reconstruct the original bio-signals from the compressed bio-signals with high probability and sufficient accuracy. We examine the proposed algorithm with healthy and neuropathy surface Electromyography (sEMG) signals. The proposed algorithm achieves a good Average Recognition Rate (ARR) of 93% and a reconstruction accuracy of 98.9%. In addition, the proposed architecture reduces the total computation time from 32 to 11.5 seconds at a sampling rate of 29% of the Nyquist rate, with a Percentage Residual Difference (PRD) of 26% and a Root Mean Squared Error (RMSE) of 3%.
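    The sense-then-reconstruct pipeline can be illustrated with a digital toy model. Random Gaussian projections stand in for the analog measurement stage, and Orthogonal Matching Pursuit stands in for the RIP-based reconstruction; the abstract does not specify this particular solver, and the sizes and seed below are arbitrary.

```python
import numpy as np

def omp(Phi, y, sparsity):
    """Orthogonal Matching Pursuit: greedily pick the column most
    correlated with the residual, then re-fit on the chosen support."""
    residual, support = y.copy(), []
    coef = np.zeros(0)
    for _ in range(sparsity):
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        support.append(j)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(0)
n, m, s = 128, 48, 4                          # signal length, measurements, sparsity
x = np.zeros(n)
nz = rng.choice(n, size=s, replace=False)     # sparse support
x[nz] = rng.uniform(1.0, 2.0, size=s) * rng.choice([-1.0, 1.0], size=s)

Phi = rng.normal(size=(m, n)) / np.sqrt(m)    # random "measurement" operator
y = Phi @ x                                   # m << n compressed measurements
x_hat = omp(Phi, y, s)
```

    Only m = 48 measurements are transmitted instead of n = 128 samples, which is the source of the energy and bandwidth savings on the sensor side.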

  5. Dynamic Range Enhancement of High-Speed Electrical Signal Data via Non-Linear Compression

    Science.gov (United States)

    Laun, Matthew C. (Inventor)

    2016-01-01

    Systems and methods for high-speed compression of dynamic electrical signal waveforms to extend the measuring capabilities of conventional measuring devices such as oscilloscopes and high-speed data acquisition systems are discussed. Transfer function components and algorithmic transfer functions can be used to accurately measure signals that are within the frequency bandwidth but beyond the voltage range and voltage resolution capabilities of the measuring device.

  6. Shock compression experiments on Lithium Deuteride (LiD) single crystals

    Science.gov (United States)

    Knudson, M. D.; Desjarlais, M. P.; Lemke, R. W.

    2016-12-01

    Shock compression experiments in the few hundred GPa (multi-Mbar) regime were performed on Lithium Deuteride single crystals. This study utilized the high velocity flyer plate capability of the Sandia Z Machine to perform impact experiments at flyer plate velocities in the range of 17-32 km/s. Measurements included pressure, density, and temperature between ˜190 and 570 GPa along the Principal Hugoniot—the locus of end states achievable through compression by large amplitude shock waves—as well as pressure and density of reshock states up to ˜920 GPa. The experimental measurements are compared with density functional theory calculations, tabular equation of state models, and legacy nuclear driven results that have been reanalyzed using modern equations of state for the shock wave standards used in the experiments.

  7. Development and evaluation of a novel lossless image compression method (AIC: artificial intelligence compression method) using neural networks as artificial intelligence

    International Nuclear Information System (INIS)

    Fukatsu, Hiroshi; Naganawa, Shinji; Yumura, Shinnichiro

    2008-01-01

    This study aimed to validate the performance of a novel image compression method that uses a neural network to achieve lossless compression. The encoder consists of the following blocks: a prediction block; a residual data calculation block; a transformation and quantization block; an organization and modification block; and an entropy encoding block. The predicted image is divided into four macro-blocks, using the original image for training, and then redivided into sixteen sub-blocks. The predicted image is compared to the original image to create the residual image. The spatial and frequency data of the residual image are compared and transformed. Chest radiography, computed tomography (CT), magnetic resonance imaging, positron emission tomography, radioisotope mammography, ultrasonography, and digital subtraction angiography images were compressed using the AIC lossless compression method, and the compression rates were calculated. The compression rates were around 15:1 for chest radiography and mammography, 12:1 for CT, and around 6:1 for the other images. This method thus enables greater lossless compression than conventional methods and should improve the efficiency of handling the increasing volume of medical imaging data. (author)
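    The predict/residual/entropy-code structure at the heart of such encoders can be demonstrated with a deliberately simple predictor. AIC's prediction block is a neural network; the sketch below substitutes a left-neighbour predictor on a synthetic scanline to show why prediction helps lossless coding.

```python
import numpy as np

def entropy_bits(a):
    """Empirical zeroth-order entropy in bits per symbol."""
    _, counts = np.unique(a, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# A smooth synthetic "scanline": a random walk with small steps.
rng = np.random.default_rng(1)
row = np.cumsum(rng.integers(-2, 3, size=4096)) + 512

# Prediction stage (left neighbour) followed by the residual stage:
# residuals of a smooth signal cluster near zero and entropy-code tightly.
residual = np.diff(row, prepend=0)

# Losslessness: the scanline is exactly recoverable from the residuals.
restored = np.cumsum(residual)
```

    An entropy coder applied to `residual` needs far fewer bits per symbol than one applied to `row` directly, and the original data are recovered exactly.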

  8. Compressed sensing cine imaging with high spatial or high temporal resolution for analysis of left ventricular function.

    Science.gov (United States)

    Goebel, Juliane; Nensa, Felix; Schemuth, Haemi P; Maderwald, Stefan; Gratz, Marcel; Quick, Harald H; Schlosser, Thomas; Nassenstein, Kai

    2016-08-01

    To assess two compressed sensing cine magnetic resonance imaging (MRI) sequences with high spatial or high temporal resolution in comparison to a reference steady-state free precession cine (SSFP) sequence for reliable quantification of left ventricular (LV) volumes. LV short axis stacks of two compressed sensing breath-hold cine sequences with high spatial resolution (SPARSE-SENSE HS: temporal resolution: 40 msec, in-plane resolution: 1.0 × 1.0 mm(2)) and high temporal resolution (SPARSE-SENSE HT: temporal resolution: 11 msec, in-plane resolution: 1.7 × 1.7 mm(2)), and of a reference cine SSFP sequence (standard SSFP: temporal resolution: 40 msec, in-plane resolution: 1.7 × 1.7 mm(2)), were acquired in 16 healthy volunteers on a 1.5T MR system. LV parameters were analyzed semiautomatically twice by one reader and once by a second reader. The volumetric agreement between sequences was analyzed using the paired t-test, Bland-Altman plots, and Passing-Bablock regression. Small differences were observed between standard SSFP and SPARSE-SENSE HS for stroke volume (SV; -7 ± 11 ml; P = 0.024), ejection fraction (EF; -2 ± 3%; P = 0.019), and myocardial mass (9 ± 9 g; P = 0.001), but not for end-diastolic volume (EDV; P = 0.079) and end-systolic volume (ESV; P = 0.266). No significant differences were observed between standard SSFP and SPARSE-SENSE HT for EDV (P = 0.956), SV (P = 0.088), and EF (P = 0.103), but differences were found for ESV (3 ± 5 ml; P = 0.039) and myocardial mass (8 ± 10 g; P = 0.007). Bland-Altman analysis showed good agreement between the sequences (maximum bias ≤ -8%). Two compressed sensing cine sequences, one with high spatial resolution and one with high temporal resolution, showed good agreement with standard SSFP for LV volume assessment. J. Magn. Reson. Imaging 2016;44:366-374. © 2016 Wiley Periodicals, Inc.
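    The Bland-Altman agreement statistics used in such comparisons are straightforward to compute: the bias is the mean paired difference and the 95% limits of agreement are bias ± 1.96·SD. The readings below are hypothetical, for illustration only, and are not the study's data.

```python
import numpy as np

def bland_altman(a, b):
    """Mean bias and 95% limits of agreement for paired measurements."""
    diff = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired ejection-fraction readings (%) from two sequences.
ef_reference = [60, 58, 63, 61, 59, 62, 64, 57]
ef_sparse    = [58, 57, 61, 60, 58, 60, 62, 56]
bias, (loa_low, loa_high) = bland_altman(ef_sparse, ef_reference)
```

    A plot of the pairwise differences against the pairwise means, with horizontal lines at the bias and at the two limits, gives the familiar Bland-Altman figure.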

  9. Using off-the-shelf lossy compression for wireless home sleep staging.

    Science.gov (United States)

    Lan, Kun-Chan; Chang, Da-Wei; Kuo, Chih-En; Wei, Ming-Zhi; Li, Yu-Hung; Shaw, Fu-Zen; Liang, Sheng-Fu

    2015-05-15

    Recently, there has been increasing interest in the development of wireless home sleep staging systems that allow the patient to be monitored remotely while remaining in the comfort of their home. However, transmitting the large amount of polysomnography (PSG) data involved over the Internet is an important issue that needs to be considered. In this work, we aim to reduce the amount of PSG data that has to be transmitted or stored, while having as little impact as possible on the information in the signal relevant to classifying sleep stages. We examine the effects of off-the-shelf lossy compression on an all-night PSG dataset from 20 healthy subjects, in the context of automated sleep staging. The popular compression method Set Partitioning in Hierarchical Trees (SPIHT) was used, and a range of compression levels was selected in order to compress the signals with various degrees of loss. In addition, a rule-based automatic sleep staging method was used to automatically classify the sleep stages. Considering the criteria of clinical usefulness, the experimental results show that the system can achieve more than 60% energy saving while maintaining high accuracy (>84%) in classifying sleep stages when using a lossy compression algorithm like SPIHT. As far as we know, our study is the first to focus on how much loss can be tolerated in compressing complex multi-channel PSG data for sleep analysis. We demonstrate the feasibility of using lossy SPIHT compression for wireless home sleep staging. Copyright © 2015 Elsevier B.V. All rights reserved.

  10. Sugar Determination in Foods with a Radially Compressed High Performance Liquid Chromatography Column.

    Science.gov (United States)

    Ondrus, Martin G.; And Others

    1983-01-01

    Advocates use of Waters Associates Radial Compression Separation System for high performance liquid chromatography. Discusses instrumentation and reagents, outlining procedure for analyzing various foods and discussing typical student data. Points out potential problems due to impurities and pump seal life. Suggests use of ribose as internal…

  11. HVS-based medical image compression

    Energy Technology Data Exchange (ETDEWEB)

    Kai Xie [Institute of Image Processing and Pattern Recognition, Shanghai Jiaotong University, 200030 Shanghai (China)]. E-mail: xie_kai2001@sjtu.edu.cn; Jie Yang [Institute of Image Processing and Pattern Recognition, Shanghai Jiaotong University, 200030 Shanghai (China); Min Zhuyue [CREATIS-CNRS Research Unit 5515 and INSERM Unit 630, 69621 Villeurbanne (France); Liang Lixiao [Institute of Image Processing and Pattern Recognition, Shanghai Jiaotong University, 200030 Shanghai (China)

    2005-07-01

    Introduction: With the promotion and application of digital imaging technology in the medical domain, the amount of medical image data has grown rapidly, and commonly used compression methods cannot achieve satisfactory results. Methods: In this paper, building on existing experimental results and conclusions, the lifting approach is used for wavelet decomposition. The physical and anatomical structure of human vision is taken into account, and the contrast sensitivity function (CSF) is introduced as the central element of the human visual system (HVS) model; the main design points of the HVS model are then presented. On the basis of the multi-resolution analysis of the wavelet transform, the paper applies the HVS model, including the CSF characteristics, to the decorrelating transform and quantization stages, and proposes a new HVS-based medical image compression model. Results: The experiments were performed on medical images including computed tomography (CT) and magnetic resonance imaging (MRI). At the same bit rate, the performance of SPIHT, with respect to the PSNR metric, is significantly higher than that of our algorithm, but the visual quality of the SPIHT-compressed image is roughly the same as that of the image compressed with our approach. Our algorithm obtains the same visual quality at lower bit rates, and its coding/decoding time is less than that of SPIHT. Conclusions: The results show that, under common objective conditions, our compression algorithm achieves better subjective visual quality and outperforms SPIHT in terms of compression ratio and coding/decoding time.

  13. Mechanical behavior and microstructure during compression of semi-solid ZK60-RE magnesium alloy at high solid content

    International Nuclear Information System (INIS)

    Shan Weiwei; Luo Shoujing

    2007-01-01

    The mechanical behavior during compression of semi-solid ZK60-RE magnesium alloy at high solid content is investigated in this paper. The alloy was prepared from ZK60 alloy and rare earth elements by casting, equal channel angular extrusion, and liquidus forging. Semi-solid isothermal pre-treatment was carried out to make the grains globular before compression. Several groups of true stress-true strain curves obtained under different compression conditions are compared to characterize the mechanical behavior. Liquid paths were the most essential factor in deformation, and their variation during compression depends on the strain rate. Here, thixotropic strength is defined as the true stress at the first peak in the true stress-true strain curve

  14. Biomechanical Comparison of External Fixation and Compression Screws for Transverse Tarsal Joint Arthrodesis.

    Science.gov (United States)

    Latt, L Daniel; Glisson, Richard R; Adams, Samuel B; Schuh, Reinhard; Narron, John A; Easley, Mark E

    2015-10-01

    Transverse tarsal joint arthrodesis is commonly performed in the operative treatment of hindfoot arthritis and acquired flatfoot deformity. While fixation is typically achieved using screws, failure to obtain and maintain joint compression sometimes occurs, potentially leading to nonunion. External fixation is an alternative method of achieving arthrodesis site compression and has the advantage of allowing postoperative compression adjustment when necessary. However, its performance relative to standard screw fixation has not been quantified in this application. We hypothesized that external fixation could provide transverse tarsal joint compression exceeding that possible with screw fixation. Transverse tarsal joint fixation was performed sequentially, first with a circular external fixator and then with compression screws, on 9 fresh-frozen cadaveric legs. The external fixator comprised abutting rings fixed to the tibia and the hindfoot and a third anterior ring parallel to the hindfoot ring, secured using transverse wires and half-pins in the tibial diaphysis, calcaneus, and metatarsals. Screw fixation comprised two 4.3 mm headless compression screws traversing the talonavicular joint and one across the calcaneocuboid joint. Compressive forces generated during incremental fixator foot ring displacement to 20 mm and incremental screw tightening were measured using a custom-fabricated instrumented miniature external fixator spanning the transverse tarsal joint. The maximum compressive force generated by the external fixator averaged 186% of that produced by the screws (range, 104%-391%). Fixator compression surpassed that obtainable with screws at 12 mm of ring displacement and decreased when the tibial ring was detached. No correlation was found between bone density and the compressive force achievable by either fusion method. The compression across the transverse tarsal joint that can be obtained with a circular external fixator including a tibial ring exceeds that

  15. Understanding Turbulence in Compressing Plasmas and Its Exploitation or Prevention

    Science.gov (United States)

    Davidovits, Seth

    Unprecedented densities and temperatures are now achieved in compressions of plasma, by lasers and by pulsed power, in major experimental facilities. These compressions, carried out at the largest scale at the National Ignition Facility and at the Z Pulsed Power Facility, have important applications, including fusion, X-ray production, and materials research. Several experimental and simulation results suggest that the plasma in some of these compressions is turbulent. In fact, measurements suggest that in certain laboratory plasma compressions the turbulent energy is a dominant energy component. Similarly, turbulence is dominant in some compressing astrophysical plasmas, such as in molecular clouds. Turbulence need not be dominant to be important; even small quantities could greatly influence experiments that are sensitive to mixing of non-fuel into fuel, such as compressions seeking fusion ignition. Despite its important role in major settings, bulk plasma turbulence under compression is insufficiently understood to answer, or even to pose, some of the most fundamental questions about it. This thesis both identifies and answers key questions in compressing turbulent motion, while providing a description of the behavior of three-dimensional, isotropic compressions of homogeneous turbulence with a plasma viscosity. This description includes a simple but successful new model for the turbulent energy of plasma undergoing compression. The unique features of compressing turbulence with a plasma viscosity are shown, including the sensitivity of the turbulence to plasma ionization, and a "sudden viscous dissipation" effect which rapidly converts plasma turbulent energy into thermal energy. This thesis then examines turbulence in both laboratory compression experiments and molecular clouds. It importantly shows: the possibility of exploiting turbulence to make fusion or X-ray production more efficient; conditions under which hot-spot turbulence can be prevented; and a

  16. ERGC: an efficient referential genome compression algorithm.

    Science.gov (United States)

    Saha, Subrata; Rajasekaran, Sanguthevar

    2015-11-01

    Genome sequencing has become faster and more affordable. Consequently, the number of available complete genomic sequences is increasing rapidly. As a result, the cost to store, process, analyze, and transmit the data is becoming a bottleneck for research and future medical applications, so the need for efficient data compression and data reduction techniques for biological sequencing data grows by the day. Although a number of standard data compression algorithms exist, they are not efficient at compressing biological data, because these generic algorithms do not exploit inherent properties of the sequencing data. To exploit the statistical and information-theoretic properties of genomic sequences, we need specialized compression algorithms. Five different next-generation sequencing data compression problems have been identified and studied in the literature. We propose a novel algorithm for one of these problems, known as reference-based genome compression. We have done extensive experiments using five real sequencing datasets. The results on real genomes show that our proposed algorithm is indeed competitive and performs better than the best known algorithms for this problem, achieving better compression ratios than the currently best performing algorithms. The time to compress and decompress the whole genome is also very promising. The implementations are freely available for non-commercial purposes and can be downloaded from http://engr.uconn.edu/∼rajasek/ERGC.zip. rajasek@engr.uconn.edu. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
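    Reference-based compression exploits the fact that a resequenced genome differs from a reference at relatively few positions, so storing only the differences is far cheaper than storing the sequence. The substitution-only sketch below illustrates the idea; real tools such as ERGC also handle insertions and deletions and entropy-code the resulting diff stream.

```python
def ref_compress(target, reference):
    """Store only the (position, base) pairs where target differs from
    the reference -- a toy stand-in for reference-based compression."""
    assert len(target) == len(reference)
    return [(i, c) for i, (c, r) in enumerate(zip(target, reference)) if c != r]

def ref_decompress(diffs, reference):
    """Rebuild the target sequence by patching the reference."""
    seq = list(reference)
    for i, c in diffs:
        seq[i] = c
    return "".join(seq)

reference = "ACGTACGTACGTACGTACGT"
target    = "ACGTACGAACGTACGTTCGT"   # two substitutions vs. the reference
diffs = ref_compress(target, reference)
```

    Here a 20-base target is represented by just two diff records, and decompression recovers it exactly from the shared reference.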

  17. Experimental Study on Compression/Absorption High-Temperature Hybrid Heat Pump with Natural Refrigerant Mixture

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Ji Young; Park, Seong Ryong; Baik, Young Jin; Chang, Ki Chang; Ra, Ho Sang; Kim, Min Sung [Korea Institute of Energy Research, Daejeon (Korea, Republic of); Kim, Yong Chan [Korea University, Seoul (Korea, Republic of)

    2011-12-15

    This research concerns the development of a compression/absorption high-temperature hybrid heat pump that uses a natural refrigerant mixture. Heat pumps based on the compression/absorption cycle offer various advantages over conventional heat pumps based on the vapor compression cycle, such as large temperature glide, temperature lift, flexible operating range, and capacity control. In this study, a lab-scale prototype hybrid heat pump was constructed with a two-stage compressor, absorber, desorber, desuperheater, solution heat exchanger, solution pump, liquid/vapor separator, and rectifier as the main components. The hybrid heat pump system operated at 10-kW-class heating capacity, producing hot water at more than 90 °C when the heat source and sink temperatures were 50 °C. Experiments with various NH₃/H₂O mass fractions and compressor/pump circulation ratios were performed on the system. From the study, the system performance was optimized at a specific NH₃ concentration.

  18. Thermal analysis of near-isothermal compressed gas energy storage system

    International Nuclear Information System (INIS)

    Odukomaiya, Adewale; Abu-Heiba, Ahmad; Gluesenkamp, Kyle R.; Abdelaziz, Omar; Jackson, Roderick K.; Daniel, Claus; Graham, Samuel; Momen, Ayyoub M.

    2016-01-01

    Highlights: • A novel, high-efficiency, scalable, near-isothermal energy storage system is introduced. • A comprehensive analytical physics-based model for the system is presented. • Efficiency improvement is achieved via heat transfer enhancement and use of waste heat. • An energy storage roundtrip efficiency (RTE) of 82% and an energy density of 3.59 MJ/m³ are shown. - Abstract: Due to the increasing generation capacity of intermittent renewable electricity sources and an electrical grid ill-equipped to handle the mismatch between electricity generation and use, the need for advanced energy storage technologies will continue to grow. Currently, pumped-storage hydroelectricity and compressed air energy storage are used for grid-scale energy storage, and batteries are used at smaller scales. However, prospects for expansion of these technologies suffer from geographic limitations (pumped-storage hydroelectricity and compressed air energy storage), low roundtrip efficiency (compressed air energy storage), and high cost (batteries). Furthermore, pumped-storage hydroelectricity and compressed air energy storage are challenging to scale down, while batteries are challenging to scale up. In 2015, a novel compressed gas energy storage prototype system was developed at Oak Ridge National Laboratory. In this paper, a near-isothermal modification to the system is proposed. In common with compressed air energy storage, the novel storage technology described in this paper is based on air compression/expansion. However, several novel features lead to near-isothermal processes, higher efficiency, greater system scalability, and the ability to site a system anywhere. The enabling features are the utilization of hydraulic machines for expansion/compression, above-ground pressure vessels as the storage medium, spray cooling/heating, and waste-heat utilization. The base configuration of the novel storage system was introduced in a previous paper. This paper describes the results

  19. Enhancement of Satellite Image Compression Using a Hybrid (DWT-DCT) Algorithm

    Science.gov (United States)

    Shihab, Halah Saadoon; Shafie, Suhaidi; Ramli, Abdul Rahman; Ahmad, Fauzan

    2017-12-01

    Discrete Cosine Transform (DCT) and Discrete Wavelet Transform (DWT) image compression techniques have been utilized in most of the earth observation satellites launched during the last few decades. However, these techniques have some issues that should be addressed. The DWT method has proven to be more efficient than DCT for several reasons; nevertheless, the DCT can be exploited to improve high-resolution satellite image compression when combined with the DWT technique. Hence, a hybrid (DWT-DCT) method was developed and implemented in the current work, simulating an image compression system on board a small remote sensing satellite, with the aim of achieving a higher compression ratio to decrease the onboard data storage and the downlink bandwidth, while avoiding further complex levels of DWT. This method also succeeded in maintaining the reconstructed satellite image quality by replacing the standard forward DWT thresholding and quantization processes with an alternative process employing the zero-padding technique, which also helped to reduce the processing time of DWT compression. The DCT, DWT, and the proposed hybrid methods were implemented individually, for comparison, on three LANDSAT 8 images, using the MATLAB software package. A comparison was also made between the proposed method and three other previously published hybrid methods. The evaluation of all the objective and subjective results indicated the feasibility of using the proposed hybrid (DWT-DCT) method to enhance the image compression process on board satellites.
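    The hybrid idea, a shallow DWT followed by a DCT on the resulting band, can be sketched in one dimension. The transforms, band choice, and thresholding below are a simplified stand-in for the paper's method, not a reproduction of it.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis as an n-by-n matrix."""
    k = np.arange(n)
    M = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    M[0] *= np.sqrt(1.0 / n)
    M[1:] *= np.sqrt(2.0 / n)
    return M

def haar_step(x):
    """One level of the Haar DWT: approximation and detail bands."""
    even, odd = x[0::2], x[1::2]
    return (even + odd) / np.sqrt(2.0), (even - odd) / np.sqrt(2.0)

# Smooth test signal standing in for one image row.
t = np.linspace(0.0, 1.0, 256)
signal = np.sin(2.0 * np.pi * 3.0 * t) + 0.5 * t

approx, detail = haar_step(signal)       # a single DWT level (avoids deeper levels)
D = dct_matrix(len(approx))
coeffs = D @ approx                      # DCT applied to the approximation band

# Keep only the 16 largest-magnitude coefficients; zeroing the rest stands
# in for the thresholding/zero-padding stage.
threshold = np.sort(np.abs(coeffs))[-16]
kept = np.where(np.abs(coeffs) >= threshold, coeffs, 0.0)
approx_hat = D.T @ kept                  # inverse DCT (orthonormal basis)
```

    Because the DCT concentrates the energy of the smooth approximation band into a few coefficients, most can be discarded with little reconstruction error, which is where the extra compression comes from.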

  20. The Compressibility and Swell of Mixtures for Sand-Clay Liners

    Directory of Open Access Journals (Sweden)

    Muawia A. Dafalla

    2017-01-01

    Sand-clay liners utilize expansive clay as a filler to occupy the voids in the sand and thus reduce the hydraulic conductivity of the mixture. The hydraulic conductivity and the transfer of water and other substances through sand-clay mixtures are of prime concern in the design of liners and hydraulic barriers. Many successful research studies have been undertaken to achieve appropriate mixtures that satisfy hydraulic conductivity requirements. This study investigates the compressibility and swelling properties of such mixtures to ensure that they are acceptable for light structures, roads, and slabs on grade. A range of sand-expansive clay mixtures was investigated for swell and compression properties. The swelling and compressibility indices were found to increase with increasing clay content. The use of highly expansive material can result in large volume changes due to swell and shrinkage. The inclusion of less expansive soil material as a partial replacement of one-third to two-thirds of the bentonite is found to reduce the compressibility by 60% to 70% for 10% and 15% clay content, respectively. The swelling pressure and swell percentage were also significantly reduced. Adding less expansive natural clay to bentonite can produce liners that are still sufficiently impervious and at the same time less problematic.

  1. Plans for longitudinal and transverse neutralized beam compression experiments, and initial results from solenoid transport experiments

    International Nuclear Information System (INIS)

    Seidl, P.A.; Armijo, J.; Baca, D.; Bieniosek, F.M.; Coleman, J.; Davidson, R.C.; Efthimion, P.C.; Friedman, A.; Gilson, E.P.; Grote, D.; Haber, I.; Henestroza, E.; Kaganovich, I.; Leitner, M.; Logan, B.G.; Molvik, A.W.; Rose, D.V.; Roy, P.K.; Sefkow, A.B.; Sharp, W.M.; Vay, J.L.; Waldron, W.L.; Welch, D.R.; Yu, S.S.

    2007-01-01

    This paper presents plans for neutralized drift compression experiments, precursors to future target heating experiments. The target-physics objective is to study warm dense matter (WDM) using short-duration (∼1 ns) ion beams that enter the targets at energies just above that at which dE/dx is maximal. High intensity on target is to be achieved by a combination of longitudinal compression and transverse focusing. This work will build upon recent success in longitudinal compression, where the ion beam was compressed lengthwise by a factor of more than 50 by first applying a linear head-to-tail velocity tilt to the beam, and then allowing the beam to drift through a dense, neutralizing background plasma. Studies on a novel pulse line ion accelerator were also carried out. It is planned to demonstrate simultaneous transverse focusing and longitudinal compression in a series of future experiments, thereby achieving conditions suitable for future WDM target experiments. Future experiments may use solenoids for transverse focusing of un-neutralized ion beams during acceleration. Recent results are reported in the transport of a high-perveance heavy ion beam in a solenoid transport channel. The principal objectives of this solenoid transport experiment are to match and transport a space-charge-dominated ion beam, and to study associated electron-cloud and gas effects that may limit the beam quality in a solenoid transport system. Ideally, the beam will establish a Brillouin-flow condition (rotation at one-half the cyclotron frequency). Other mechanisms that potentially degrade beam quality are being studied, such as focusing-field aberrations, beam halo, and separation of lattice focusing elements
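    The kinematics behind neutralized drift compression are simple to estimate: with a linear head-to-tail velocity tilt and space charge neutralized by the background plasma, the bunch shortens ballistically as the tail catches up with the head. The numbers in this sketch are illustrative, not the experiment's parameters.

```python
def drift_compression_factor(length0, tilt_dv, drift_time):
    """Longitudinal compression factor for a bunch of initial length
    length0 (m) with a linear head-to-tail velocity tilt tilt_dv (m/s),
    after drifting ballistically for drift_time (s). Assumes complete
    space-charge neutralization and zero intrinsic velocity spread."""
    final_length = length0 - tilt_dv * drift_time
    if final_length <= 0.0:
        raise ValueError("tilt too large: tail would overtake head")
    return length0 / final_length

# Illustrative: a 0.5 m bunch with a 70 km/s head-to-tail tilt drifting
# for 7 microseconds compresses by roughly a factor of 50.
factor = drift_compression_factor(0.5, 7.0e4, 7.0e-6)
```

    In practice the achievable factor is limited by the bunch's intrinsic velocity spread and by how completely the plasma neutralizes the space charge, which this kinematic sketch ignores.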

  2. Correlation between compressive strength and ultrasonic pulse velocity of high strength concrete incorporating chopped basalt fibre

    Science.gov (United States)

    Shafiq, Nasir; Fadhilnuruddin, Muhd; Elshekh, Ali Elheber Ahmed; Fathi, Ahmed

    2015-07-01

    Ultrasonic pulse velocity (UPV) is considered the most important non-destructive technique for evaluating the mechanical characteristics of high strength concrete (HSC). The relationship between the compressive strength of HSC containing chopped basalt fibre strands (CBSF) and UPV was investigated. The concrete specimens were prepared using different ratios of CBSF as internal strengthening materials. The compressive strength measurements were conducted at sample ages of 3, 7, 28, 56 and 90 days, whilst the ultrasonic pulse velocity was measured at 28 days. The compressive strength of HSC did not improve with the addition of chopped basalt fibre; instead, it decreased. The UPV of the chopped basalt fibre reinforced concrete was found to be less than that of the control mix for each addition ratio of the basalt fibre. A relationship was obtained between the cube compressive strength of HSC and UPV for the various amounts of chopped basalt fibre.
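
    As an aside, strength-UPV relationships of the kind investigated here are often modelled with an empirical exponential fit, f_c = a * exp(b * V). The sketch below fits such a curve to hypothetical data; the values are invented for illustration and are not the paper's measurements:

```python
import math

# Hypothetical (UPV in km/s, cube compressive strength in MPa) pairs --
# illustrative only. Taking logs of f_c = a * exp(b * V) turns the fit
# into ordinary linear least squares on (V, ln f_c).
data = [(4.2, 52.0), (4.4, 58.0), (4.5, 63.0), (4.6, 68.0), (4.8, 79.0)]

n = len(data)
xs = [v for v, _ in data]
ys = [math.log(f) for _, f in data]
xbar = sum(xs) / n
ybar = sum(ys) / n
b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
    sum((x - xbar) ** 2 for x in xs)
a = math.exp(ybar - b * xbar)

def strength(v_kms):
    """Predicted compressive strength (MPa) from UPV (km/s)."""
    return a * math.exp(b * v_kms)

print(round(strength(4.5), 1))  # ~63 MPa, close to the hypothetical measurement
```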

  3. Mammography image compression using Wavelet

    International Nuclear Information System (INIS)

    Azuhar Ripin; Md Saion Salikin; Wan Hazlinda Ismail; Asmaliza Hashim; Norriza Md Isa

    2004-01-01

    Image compression plays an important role in many applications, such as medical imaging, televideo conferencing, remote sensing, and document and facsimile transmission, which depend on the efficient manipulation, storage, and transmission of binary, grayscale, or color images. In medical imaging applications such as Picture Archiving and Communication Systems (PACS), the image or image-stream size is very large and requires a large amount of storage space or high bandwidth for communication. Image compression techniques are divided into two categories, namely lossy and lossless compression. The wavelet method used in this project is a lossless compression method, so the exact original mammography image data can be recovered. In this project, mammography images were digitized using a Vider Sierra Plus digitizer and compressed using this wavelet image compression technique. Interactive Data Language (IDL) numerical and visualization software was used to perform all of the calculations and to generate and display all of the compressed images. Results of this project are presented in this paper. (Author)

  4. Image compression with Iris-C

    Science.gov (United States)

    Gains, David

    2009-05-01

    Iris-C is an image codec designed for streaming video applications that demand low-bit-rate, low-latency, lossless image compression. To achieve compression and low latency, the codec features the discrete wavelet transform, Exp-Golomb coding, and online processes that construct dynamic models of the input video. Like H.264 and Dirac, the Iris-C codec accepts input video in both the YUV and YCoCg colour spaces, but the system can also operate on Bayer RAW data read directly from an image sensor. Testing shows that Iris-C is competitive with the Dirac low-delay-syntax codec, which is typically regarded as the state of the art in low-latency, lossless video compression.

  5. Space-Efficient Re-Pair Compression

    DEFF Research Database (Denmark)

    Bille, Philip; Gørtz, Inge Li; Prezza, Nicola

    2017-01-01

    Re-Pair [5] is an effective grammar-based compression scheme achieving strong compression rates in practice. Let n, σ, and d be the text length, alphabet size, and dictionary size of the final grammar, respectively. In their original paper, the authors show how to compute the Re-Pair grammar in expected linear time and 5n + 4σ² + 4d + √n words of working space on top of the text. In this work, we propose two algorithms improving on the space of their original solution. Our model assumes a memory word of ⌈log₂ n⌉ bits and a re-writable input text composed of n such words. Our first algorithm runs…
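
    The pair-replacement core of Re-Pair can be sketched as follows (a naive, memory-hungry illustration of the grammar construction only; the paper's contribution is precisely doing this in far less working space):

```python
# Minimal Re-Pair sketch: repeatedly replace the most frequent adjacent
# pair of symbols with a fresh nonterminal until no pair occurs twice.
from collections import Counter

def repair(text):
    seq = list(text)
    rules = {}        # nonterminal -> (left symbol, right symbol)
    next_sym = 256    # nonterminals are ints >= 256; terminals are chars
    while True:
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        pair, freq = max(pairs.items(), key=lambda kv: kv[1])
        if freq < 2:
            break
        rules[next_sym] = pair
        out, i = [], 0
        while i < len(seq):          # left-to-right replacement pass
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                out.append(next_sym)
                i += 2
            else:
                out.append(seq[i])
                i += 1
        seq = out
        next_sym += 1
    return seq, rules

def expand(sym, rules):
    """Recursively expand a symbol back into terminal characters."""
    if isinstance(sym, int) and sym in rules:
        left, right = rules[sym]
        return expand(left, rules) + expand(right, rules)
    return [sym]

seq, rules = repair("abababab")
# Decompression reproduces the input exactly:
assert [c for s in seq for c in expand(s, rules)] == list("abababab")
print(len(seq), len(rules))  # compressed sequence length and grammar size
```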

  6. Using gasoline in an advanced compression ignition engine

    Energy Technology Data Exchange (ETDEWEB)

    Cracknell, R.F.; Ariztegui, J.; Dubois, T.; Hamje, H.D.C.; Pellegrini, L.; Rickeard, D.J.; Rose, K.D. [CONCAWE, Brussels (Belgium); Heuser, B. [RWTH Aachen Univ. (Germany). Inst. for Combustion Engines; Schnorbus, T.; Kolbeck, A.F. [FEV GmbH, Aachen (Germany)

    2013-06-01

    Future vehicles will be required to improve their efficiency, reduce both regulated and CO{sub 2} emissions, and maintain acceptable driveability, safety, and noise. To achieve this overall performance, they will be configured with more advanced hardware, sensors, and control technologies that will also enable their operation on a broader range of fuel properties. Fuel flexibility has already been demonstrated in previous studies on a compression ignition bench engine and a demonstration vehicle equipped with an advanced engine management system, closed-loop combustion control, and air-path control strategies. An unresolved question is whether engines of this sort can also operate on market gasoline while achieving diesel-like efficiency and acceptable emissions and noise levels. In this study, a compression ignition bench engine with a higher compression ratio, optimised valve timing, an advanced engine management system, and flexible fuel injection could be operated on a European gasoline from full load down to medium part load. The combustion was sensitive to EGR rates, however, and optimising all emissions and combustion noise was a considerable challenge at lower loads. (orig.)

  7. Compressed Air Working in Chennai During Metro Tunnel Construction: Occupational Health Problems.

    Science.gov (United States)

    Kulkarni, Ajit C

    2017-01-01

    Chennai metropolis has been growing rapidly, and the need was felt for a metro rail system. Two corridors were planned: Corridor 1, 23 km from Washermanpet to the Airport, of which 14.3 km would be underground; and Corridor 2, 22 km from Chennai Central Railway Station to St. Thomas Mount, of which 9.7 km would be underground. The occupational health centre's role involved selection of miners and assessment of their fitness to work under compressed air; planning and execution of compression and decompression; and health monitoring and treatment of compression-related illnesses. More than thirty-five thousand man-hours of work were carried out under compressed air at pressures ranging from 1.2 to 1.9 bar absolute. There were only three cases of pain-only (Type I) decompression sickness, which were treated with recompression. Vigilant medical supervision, experienced lock operators, and reduced working hours under pressure because of inclement environmental conditions, viz. high temperature and humidity, helped achieve this low incidence. Tunnelling activity will increase in India as more cities opt for underground metro railways. Indian Standard IS 4138-1977, "Safety code for working in compressed air", needs to be updated urgently to keep pace with modern working methods.

  8. Efficient Joins with Compressed Bitmap Indexes

    Energy Technology Data Exchange (ETDEWEB)

    Computational Research Division; Madduri, Kamesh; Wu, Kesheng

    2009-08-19

    We present a new class of adaptive algorithms that use compressed bitmap indexes to speed up evaluation of the range join query in relational databases. We determine the best strategy to process a join query based on a fast sub-linear time computation of the join selectivity (the ratio of the number of tuples in the result to the total number of possible tuples). In addition, we use compressed bitmaps to represent the join output compactly: the space requirement for storing the tuples representing the join of two relations is asymptotically bounded by min(h, n·cb), where h is the number of tuple pairs in the result relation, n is the number of tuples in the smaller of the two relations, and cb is the cardinality of the larger column being joined. We present a theoretical analysis of our algorithms, as well as experimental results on large-scale synthetic and real data sets. Our implementations are efficient, and consistently outperform well-known approaches for a range of join selectivity factors. For instance, our count-only algorithm is up to three orders of magnitude faster than the sort-merge approach, and our best bitmap index-based algorithm is 1.2x-80x faster than the sort-merge algorithm, for various query instances. We achieve these speedups by exploiting several inherent performance advantages of compressed bitmap indexes for join processing: an implicit partitioning of the attributes, space-efficiency, and tolerance of high-cardinality relations.
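
    The basic idea of counting join results from per-value bitmaps can be sketched as follows (uncompressed bitmaps stored as Python ints and a simple equality join; the paper's algorithms use compressed WAH-style bitmaps and handle range joins):

```python
# A bitmap index maps each attribute value to a bitmap of the rows that
# hold it. A join count then needs only per-value popcounts, never a
# row-by-row comparison. Data here is invented for illustration.

def bitmap_index(column):
    index = {}
    for row, value in enumerate(column):
        index[value] = index.get(value, 0) | (1 << row)  # set bit for this row
    return index

def equijoin_count(col_r, col_s):
    """Number of (r, s) row pairs with col_r[r] == col_s[s]."""
    idx_r, idx_s = bitmap_index(col_r), bitmap_index(col_s)
    total = 0
    for value, bm_r in idx_r.items():
        bm_s = idx_s.get(value, 0)
        # popcount(bm_r) * popcount(bm_s) pairs match on this value
        total += bin(bm_r).count("1") * bin(bm_s).count("1")
    return total

R = [1, 2, 2, 3]
S = [2, 2, 3, 4]
print(equijoin_count(R, S))  # -> 5 (2x2 pairs for value 2, one pair for value 3)
```

    The implicit partitioning by attribute value that makes this count cheap is one of the advantages the abstract cites for bitmap-index joins.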

  9. After microvascular decompression to treat trigeminal neuralgia, both immediate pain relief and recurrence rates are higher in patients with arterial compression than with venous compression.

    Science.gov (United States)

    Shi, Lei; Gu, Xiaoyan; Sun, Guan; Guo, Jun; Lin, Xin; Zhang, Shuguang; Qian, Chunfa

    2017-07-04

    We explored differences in postoperative pain relief achieved through decompression of the trigeminal nerve compressed by arteries and veins. Clinical characteristics, intraoperative findings, and postoperative curative effects were analyzed in 72 patients with trigeminal neuralgia who were treated by microvascular decompression. The patients were divided into arterial and venous compression groups based on intraoperative findings. Surgical curative effects included immediate relief, delayed relief, obvious reduction, and invalid result. Among the 40 patients in the arterial compression group, 32 had immediate relief of pain (80.0%), 5 had delayed relief (12.5%), and 3 had an obvious reduction (7.5%). In the venous compression group, 12 patients had immediate relief of pain (37.5%), 13 had delayed relief (40.6%), and 7 had an obvious reduction (21.9%). During the 2-year follow-up period, 6 patients in the arterial compression group experienced recurrence of trigeminal neuralgia, but there were no recurrences in the venous compression group. Simple arterial compression was followed by early relief of trigeminal neuralgia more often than simple venous compression. However, the trigeminal neuralgia recurrence rate was higher in the arterial compression group than in the venous compression group.

  10. Signal Recovery in Compressive Sensing via Multiple Sparsifying Bases

    DEFF Research Database (Denmark)

    Wijewardhana, U. L.; Belyaev, Evgeny; Codreanu, M.

    2017-01-01

    …is sparse is the key assumption utilized by such algorithms. However, the basis in which the signal is the sparsest is unknown for many natural signals of interest. Instead there may exist multiple bases which lead to a compressible representation of the signal: e.g., an image is compressible in different wavelet transforms. We show that a significant performance improvement can be achieved by utilizing multiple estimates of the signal using sparsifying bases in the context of signal reconstruction from compressive samples. Further, we derive a customized interior-point method to jointly obtain multiple estimates of a 2-D signal (image) from compressive measurements utilizing multiple sparsifying bases as well as the fact that images usually have a sparse gradient.

  11. DNABIT Compress – Genome compression algorithm

    OpenAIRE

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-01

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences, based on a novel scheme of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our …
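
    The underlying idea of assigning short binary codes to DNA bases can be sketched as follows (a plain fixed 2-bit code packing four bases per byte; the actual DNABIT Compress scheme assigns bits to larger segments and exploits repeats, which this sketch omits):

```python
# Fixed 2-bit codes for the four bases give a 4x reduction over 8-bit
# ASCII storage. Non-ACGT symbols and repeat handling are omitted.
CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
BASE = {v: k for k, v in CODE.items()}

def pack(dna):
    """Pack an ACGT string into bytes; returns (data, base count)."""
    bits = 0
    for ch in dna:
        bits = (bits << 2) | CODE[ch]
    return bits.to_bytes((2 * len(dna) + 7) // 8, "big"), len(dna)

def unpack(data, n):
    """Recover the original n-base string from packed bytes."""
    bits = int.from_bytes(data, "big")
    return "".join(BASE[(bits >> (2 * (n - 1 - i))) & 0b11] for i in range(n))

packed, n = pack("ACGTACGT")
print(len(packed))  # -> 2 (bytes, instead of 8 as ASCII)
assert unpack(packed, n) == "ACGTACGT"
```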

  12. On the characterisation of the dynamic compressive behaviour of silicon carbides subjected to isentropic compression experiments

    Directory of Open Access Journals (Sweden)

    Zinszner Jean-Luc

    2015-01-01

    Ceramic materials are commonly used as protective materials, particularly due to their very high hardness and compressive strength. However, the microstructure of a ceramic has a great influence on its compressive strength and ballistic efficiency. To study the influence of microstructural parameters on the dynamic compressive behaviour of silicon carbides, isentropic compression experiments have been performed on two silicon carbide grades using a high-pulsed-power generator called GEPI. Contrary to plate impact experiments, the use of the GEPI device and of Lagrangian analysis allows determining the whole loading path. The two SiC grades studied present different Hugoniot elastic limits (HEL) due to their different microstructures. For these materials, the experimental technique allowed evaluating the evolution of the equivalent stress during dynamic compression. The two grades exhibit more or less pronounced work hardening after the HEL. The densification of the material seems to have more influence on the HEL than the grain size.

  13. Shock compression of synthetic opal

    International Nuclear Information System (INIS)

    Inoue, A; Okuno, M; Okudera, H; Mashimo, T; Omurzak, E; Katayama, S; Koyano, M

    2010-01-01

    Structural changes of synthetic opal under shock-wave compression up to 38.1 GPa have been investigated using SEM, X-ray diffraction (XRD), infrared (IR) and Raman spectroscopies. The results indicate that dehydration and polymerization of surface silanol groups, driven by the high shock and residual temperatures, are important factors in the structural evolution of synthetic opal under shock compression. Synthetic opal loses its opalescence at shock pressures of 10.9 and 18.4 GPa. At 18.4 GPa, dehydration and polymerization of surface silanol groups and transformation of the network structure may occur simultaneously. The 4-membered rings of TO4 tetrahedra in the as-synthesized opal may relax into larger rings, such as 6-membered rings, at high residual temperature; the residual temperature may therefore be significantly high even at 18.4 GPa of shock compression. At 23.9 GPa, the opal sample recovered its opalescence, which may originate from a layered structure produced by shock compression. Finally, at 38.1 GPa the sample fuses at very high residual temperature and its structure approaches that of fused SiO2 glass; however, internal silanol groups still remain even at 38.1 GPa.

  14. Shock compression of synthetic opal

    Science.gov (United States)

    Inoue, A.; Okuno, M.; Okudera, H.; Mashimo, T.; Omurzak, E.; Katayama, S.; Koyano, M.

    2010-03-01

    Structural changes of synthetic opal under shock-wave compression up to 38.1 GPa have been investigated using SEM, X-ray diffraction (XRD), infrared (IR) and Raman spectroscopies. The results indicate that dehydration and polymerization of surface silanol groups, driven by the high shock and residual temperatures, are important factors in the structural evolution of synthetic opal under shock compression. Synthetic opal loses its opalescence at shock pressures of 10.9 and 18.4 GPa. At 18.4 GPa, dehydration and polymerization of surface silanol groups and transformation of the network structure may occur simultaneously. The 4-membered rings of TO4 tetrahedra in the as-synthesized opal may relax into larger rings, such as 6-membered rings, at high residual temperature; the residual temperature may therefore be significantly high even at 18.4 GPa of shock compression. At 23.9 GPa, the opal sample recovered its opalescence, which may originate from a layered structure produced by shock compression. Finally, at 38.1 GPa the sample fuses at very high residual temperature and its structure approaches that of fused SiO2 glass; however, internal silanol groups still remain even at 38.1 GPa.

  15. Shock compression of synthetic opal

    Energy Technology Data Exchange (ETDEWEB)

    Inoue, A; Okuno, M; Okudera, H [Department of Earth Sciences, Kanazawa University Kanazawa, Ishikawa, 920-1192 (Japan); Mashimo, T; Omurzak, E [Shock Wave and Condensed Matter Research Center, Kumamoto University, Kumamoto, 860-8555 (Japan); Katayama, S; Koyano, M, E-mail: okuno@kenroku.kanazawa-u.ac.j [JAIST, Nomi, Ishikawa, 923-1297 (Japan)

    2010-03-01

    Structural changes of synthetic opal under shock-wave compression up to 38.1 GPa have been investigated using SEM, X-ray diffraction (XRD), infrared (IR) and Raman spectroscopies. The results indicate that dehydration and polymerization of surface silanol groups, driven by the high shock and residual temperatures, are important factors in the structural evolution of synthetic opal under shock compression. Synthetic opal loses its opalescence at shock pressures of 10.9 and 18.4 GPa. At 18.4 GPa, dehydration and polymerization of surface silanol groups and transformation of the network structure may occur simultaneously. The 4-membered rings of TO{sub 4} tetrahedra in the as-synthesized opal may relax into larger rings, such as 6-membered rings, at high residual temperature; the residual temperature may therefore be significantly high even at 18.4 GPa of shock compression. At 23.9 GPa, the opal sample recovered its opalescence, which may originate from a layered structure produced by shock compression. Finally, at 38.1 GPa the sample fuses at very high residual temperature and its structure approaches that of fused SiO{sub 2} glass; however, internal silanol groups still remain even at 38.1 GPa.

  16. Low-latency video transmission over high-speed WPANs based on low-power video compression

    DEFF Research Database (Denmark)

    Belyaev, Evgeny; Turlikov, Andrey; Ukhanova, Ann

    2010-01-01

    This paper presents latency-constrained video transmission over high-speed wireless personal area networks (WPANs). Low-power video compression is proposed as an alternative to uncompressed video transmission. A video source rate control based on a MINMAX quality criterion is introduced. Practical…

  17. A JPEG backward-compatible HDR image compression

    Science.gov (United States)

    Korshunov, Pavel; Ebrahimi, Touradj

    2012-10-01

    High Dynamic Range (HDR) imaging is expected to become one of the technologies that could shape the next generation of consumer digital photography. Manufacturers are rolling out cameras and displays capable of capturing and rendering HDR images. The popularity and full public adoption of HDR content is however hindered by the lack of standards for evaluation of quality, file formats, and compression, as well as the large legacy base of Low Dynamic Range (LDR) displays that are unable to render HDR. To facilitate widespread HDR usage, backward compatibility of HDR technology with commonly used legacy image storage, rendering, and compression is necessary. Although many tone-mapping algorithms have been developed for generating viewable LDR images from HDR content, there is no consensus on which algorithm to use and under which conditions. This paper, via a series of subjective evaluations, demonstrates the dependency of the perceived quality of tone-mapped LDR images on environmental parameters and image content. Based on the results of the subjective tests, it proposes to extend the JPEG file format, as the most popular image format, in a backward-compatible manner to also handle HDR pictures. To this end, the paper provides an architecture to achieve such backward compatibility with JPEG and demonstrates the efficiency of a simple implementation of this framework when compared to state-of-the-art HDR image compression.

  18. Effect of High-Temperature Curing Methods on the Compressive Strength Development of Concrete Containing High Volumes of Ground Granulated Blast-Furnace Slag

    Directory of Open Access Journals (Sweden)

    Wonsuk Jung

    2017-01-01

    This paper investigates the effect of high-temperature curing methods on the compressive strength of concrete containing high volumes of ground granulated blast-furnace slag (GGBS). GGBS was used to replace Portland cement at a replacement ratio of 60% by binder mass. The high-temperature curing parameters used in this study were the delay period, temperature rise, peak temperature (PT), peak period, and temperature decrease. Test results demonstrate that the compressive strength of the samples with PTs of 65°C and 75°C was about 88% higher than that of the samples with a PT of 55°C after 1 day. According to this investigation, there might be optimum high-temperature curing conditions for preparing a concrete containing high volumes of GGBS, and incorporating GGBS into precast concrete mixes can be a very effective tool in increasing the applicability of this by-product.

  19. Thermo-electrochemical production of compressed hydrogen from methane with near-zero energy loss

    Science.gov (United States)

    Malerød-Fjeld, Harald; Clark, Daniel; Yuste-Tirados, Irene; Zanón, Raquel; Catalán-Martinez, David; Beeaff, Dustin; Morejudo, Selene H.; Vestre, Per K.; Norby, Truls; Haugsrud, Reidar; Serra, José M.; Kjølseth, Christian

    2017-11-01

    Conventional production of hydrogen requires large industrial plants to minimize energy losses and capital costs associated with steam reforming, water-gas shift, product separation and compression. Here we present a protonic membrane reformer (PMR) that produces high-purity hydrogen from steam methane reforming in a single-stage process with near-zero energy loss. We use a BaZrO3-based proton-conducting electrolyte deposited as a dense film on a porous Ni composite electrode with dual function as a reforming catalyst. At 800 °C, we achieve full methane conversion by removing 99% of the formed hydrogen, which is simultaneously compressed electrochemically up to 50 bar. A thermally balanced operation regime is achieved by coupling several thermo-chemical processes. Modelling of a small-scale (10 kg H2 per day) hydrogen plant reveals an overall energy efficiency of >87%. The results suggest that future declining electricity prices could make PMRs a competitive alternative for industrial-scale hydrogen plants integrating CO2 capture.
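
    The electrochemical compression step can be put in rough numbers with the Nernst relation (an ideal-gas, loss-free back-of-envelope sketch; the figures are illustrative and not taken from the paper):

```python
import math

# Minimum (reversible, isothermal) electrochemical work to compress H2
# from 1 bar to 50 bar at 800 °C. Two electrons are pumped per H2
# molecule, so the required cell voltage is E = R*T*ln(p2/p1) / (2*F).
R = 8.314         # gas constant, J/(mol K)
F = 96485.0       # Faraday constant, C/mol
T = 800 + 273.15  # operating temperature, K

p1, p2 = 1.0, 50.0
E = R * T * math.log(p2 / p1) / (2 * F)  # required cell voltage (V)
w = R * T * math.log(p2 / p1)            # ideal work per mol H2 (J/mol)
w_per_kg = w / 2.016e-3 / 3.6e6          # converted to kWh per kg H2

print(round(E, 3))         # -> ~0.18 V
print(round(w_per_kg, 2))  # -> ~4.8 kWh per kg H2 (ideal, loss-free)
```

    Real devices add ohmic and polarization losses on top of this thermodynamic floor, which is why the >87% plant-level efficiency reported above is notable.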

  20. Look at energy compression as an assist for high power rf production

    International Nuclear Information System (INIS)

    Birx, D.L.; Farkas, Z.D.; Wilson, P.B.

    1984-01-01

    The desire to construct electron linacs of higher and higher energies, coupled with the realities of available funding and real estate, has forced machine designers to reassess the limitations in both accelerator gradient (MeV/m) and energy. The gradients achieved in current radio-frequency (RF) linacs are sometimes set by electrical breakdown in the accelerating structure, but are in most cases determined by the RF power level available to drive the linac. In this paper we will not discuss RF power sources in general, but rather take a brief look at several energy compression schemes which might be of service in helping to make better use of the sources we employ. We will, however, diverge for a bit and discuss what the RF power requirements are. 12 references, 21 figures, 3 tables

  1. Catholic High Schools and Rural Academic Achievement.

    Science.gov (United States)

    Sander, William

    1997-01-01

    A study of national longitudinal data examined effects of rural Catholic high schools on mathematics achievement, high school graduation rates, and the likelihood that high school graduates attend college. Findings indicate that rural Catholic high schools had a positive effect on mathematics test scores and no effect on graduation rates or rates…

  2. Novel Use of a Pneumatic Compression Device for Haemostasis of Haemodialysis Fistula Access Catheterisation Sites

    Energy Technology Data Exchange (ETDEWEB)

    O’Reilly, Michael K., E-mail: moreilly1@mater.ie; Ryan, David; Sugrue, Gavin; Geoghegan, Tony; Lawler, Leo P.; Farrelly, Cormac T. [Mater Misericordiae University Hospital, Department of Interventional Radiology (Ireland)

    2016-12-15

    Purpose: Transradial pneumatic compression devices can be used to achieve haemostasis following radial artery puncture. This article describes a novel technique for acquiring haemostasis of arterio-venous haemodialysis fistula access sites, without the need for suture placement, using one such compression device. Materials and Methods: A retrospective review of fistulograms with or without angioplasty/thrombectomy in a single institution was performed. Twenty procedures performed on 12 patients who underwent percutaneous intervention of failing or thrombosed arterio-venous fistulas (AVF) involved 27 puncture sites. Haemostasis was achieved using a pneumatic compression device at all access sites. Procedure details including size of access sheath, heparin administration and complications were recorded. Results: Two diagnostic fistulograms, 14 fistulograms and angioplasties and four thrombectomies were performed via access sheaths with an average size (±SD) of 6 Fr (±1.12). IV unfractionated heparin was administered in 11 of 20 procedures. Haemostasis was achieved at 26 of 27 access sites following 15-20 min of compression using the pneumatic compression device. One case experienced limited bleeding from an inflow access site that was successfully treated with reinflation of the device for a further 5 min. No other complication was recorded. Conclusions: Haemostasis of arterio-venous haemodialysis fistula access sites can be safely and effectively achieved using a pneumatic compression device. This is a technically simple, safe and sutureless technique for acquiring haemostasis after AVF intervention.

  3. Single stock dynamics on high-frequency data: from a compressed coding perspective.

    Directory of Open Access Journals (Sweden)

    Hsieh Fushing

    High-frequency return, trading volume and transaction number are digitally coded via a nonparametric computing algorithm, called hierarchical factor segmentation (HFS), and are then coupled together to reveal single-stock dynamics without global state-space structural assumptions. The base-8 digital coding sequence, which is capable of revealing contrasting aggregation against sparsity of extreme events, is further compressed into a shortened sequence of state transitions. This compressed digital code sequence vividly demonstrates that the aggregation of large absolute returns is the primary driving force stimulating both the aggregation of large trading volumes and of large transaction numbers. The state of system-wise synchrony is manifested with very frequent recurrence in the stock dynamics. This data-driven dynamic mechanism is seen to vary correspondingly as the global market transits in and out of contraction-expansion cycles. These results not only elaborate the stock dynamics of interest more fully, but also contradict some classical theories in finance. Overall, this version of stock dynamics is potentially more coherent and realistic, especially as the current financial market is increasingly powered by high-frequency trading via computer algorithms rather than by individual investors.
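
    The base-8 coupling of the three digital code streams can be sketched as follows (fixed thresholds on invented data; the paper derives its binary codes with the HFS algorithm rather than simple cutoffs):

```python
# Threshold each of the three series (|return|, volume, transaction count)
# into a binary "extreme event" digit, then combine the three digits into
# one base-8 symbol per time bin. Thresholds and data are illustrative.

def base8_code(returns, volumes, trades, r_cut, v_cut, t_cut):
    symbols = []
    for r, v, t in zip(returns, volumes, trades):
        b_r = 1 if abs(r) > r_cut else 0   # extreme absolute return?
        b_v = 1 if v > v_cut else 0        # extreme volume?
        b_t = 1 if t > t_cut else 0        # extreme transaction count?
        symbols.append(4 * b_r + 2 * b_v + b_t)  # one symbol in 0..7
    return symbols

rets   = [0.001, -0.012, 0.015, 0.0005]
vols   = [900, 5200, 4800, 700]
trades = [40, 210, 260, 35]
print(base8_code(rets, vols, trades, r_cut=0.01, v_cut=2000, t_cut=100))
# -> [0, 7, 7, 0]: extreme returns, volumes and trade counts aggregate together
```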

  4. Advances in compressible turbulent mixing

    International Nuclear Information System (INIS)

    Dannevik, W.P.; Buckingham, A.C.; Leith, C.E.

    1992-01-01

    This volume includes some recent additions to original material prepared for the Princeton International Workshop on the Physics of Compressible Turbulent Mixing, held in 1988. Workshop participants were asked to emphasize the physics of the compressible mixing process rather than measurement techniques or computational methods. Actual experimental results and their meaning were given precedence over discussions of new diagnostic developments. Theoretical interpretations and understanding were stressed rather than the exposition of new analytical model developments or advances in numerical procedures. By design, compressibility influences on turbulent mixing were discussed--almost exclusively--from the perspective of supersonic flow field studies. The papers are arranged in three topical categories: Foundations, Vortical Domination, and Strongly Coupled Compressibility. The Foundations category is a collection of seminal studies that connect current study in compressible turbulent mixing with compressible, high-speed turbulent flow research that almost vanished about two decades ago. A number of contributions are included on flow instability initiation, evolution, and transition between the states of unstable flow onset through those descriptive of fully developed turbulence. The Vortical Domination category includes theoretical and experimental studies of coherent structures, vortex pairing, vortex-dynamics-influenced pressure focusing. In the Strongly Coupled Compressibility category the organizers included the high-speed turbulent flow investigations in which the interaction of shock waves could be considered an important source for production of new turbulence or for the enhancement of pre-existing turbulence. Individual papers are processed separately

  5. Advances in compressible turbulent mixing

    Energy Technology Data Exchange (ETDEWEB)

    Dannevik, W.P.; Buckingham, A.C.; Leith, C.E. [eds.

    1992-01-01

    This volume includes some recent additions to original material prepared for the Princeton International Workshop on the Physics of Compressible Turbulent Mixing, held in 1988. Workshop participants were asked to emphasize the physics of the compressible mixing process rather than measurement techniques or computational methods. Actual experimental results and their meaning were given precedence over discussions of new diagnostic developments. Theoretical interpretations and understanding were stressed rather than the exposition of new analytical model developments or advances in numerical procedures. By design, compressibility influences on turbulent mixing were discussed--almost exclusively--from the perspective of supersonic flow field studies. The papers are arranged in three topical categories: Foundations, Vortical Domination, and Strongly Coupled Compressibility. The Foundations category is a collection of seminal studies that connect current study in compressible turbulent mixing with compressible, high-speed turbulent flow research that almost vanished about two decades ago. A number of contributions are included on flow instability initiation, evolution, and transition between the states of unstable flow onset through those descriptive of fully developed turbulence. The Vortical Domination category includes theoretical and experimental studies of coherent structures, vortex pairing, vortex-dynamics-influenced pressure focusing. In the Strongly Coupled Compressibility category the organizers included the high-speed turbulent flow investigations in which the interaction of shock waves could be considered an important source for production of new turbulence or for the enhancement of pre-existing turbulence. Individual papers are processed separately.

  6. Efficient burst image compression using H.265/HEVC

    Science.gov (United States)

    Roodaki-Lavasani, Hoda; Lainema, Jani

    2014-02-01

    New imaging use cases are emerging as more powerful camera hardware enters consumer markets. One family of such use cases is based on capturing multiple pictures instead of just one when taking a photograph. That kind of camera operation allows, for example, selecting the most successful shot from a sequence of images, showing what happened right before or after the shot was taken, or combining the shots by computational means to improve either visible characteristics of the picture (such as dynamic range or focus) or its artistic aspects (e.g. by superimposing pictures on top of each other). Considering that photographic images are typically of high resolution and quality, and that such image bursts can consist of tens of individual pictures, an efficient compression algorithm is desired. However, traditional video coding approaches fail to provide the random-access properties these use cases require to achieve near-instantaneous access to the pictures in the coded sequence. That feature is critical to allow users to browse the pictures in an arbitrary order, or imaging algorithms to extract desired pictures from the sequence quickly. This paper proposes coding structures that provide such random-access properties while achieving coding efficiency superior to existing image coders. The results indicate that using the HEVC video codec with a single reference picture fixed for the whole sequence can achieve nearly as good compression as traditional IPPP coding structures. It is also shown that the selection of the reference frame can further improve the coding efficiency.
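
    The random-access advantage of a single fixed reference picture can be sketched with a toy decode-cost count (an illustration of the dependency structure only, not the paper's actual coding configurations):

```python
# To show an arbitrary frame k, a decoder must first decode every frame
# that frame k (transitively) references.

def decode_cost_ippp(k):
    """Frames decoded to access frame k when each frame references frame k-1."""
    return k + 1  # the whole chain 0..k

def decode_cost_fixed_ref(k):
    """Frames decoded to access frame k when every frame references frame 0."""
    return 1 if k == 0 else 2  # frame 0 plus frame k itself

burst_len = 30  # an illustrative burst of 30 shots
print(max(decode_cost_ippp(k) for k in range(burst_len)))       # -> 30
print(max(decode_cost_fixed_ref(k) for k in range(burst_len)))  # -> 2
```

    The surprising result reported above is that this near-instant random access costs almost nothing in compression efficiency compared to the IPPP chain.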

  7. Effect of compressive force on PEM fuel cell performance

    Science.gov (United States)

    MacDonald, Colin Stephen

    Polymer electrolyte membrane (PEM) fuel cells possess the potential, as a zero-emission power source, to replace the internal combustion engine as the primary option for transportation applications. Though there are a number of obstacles to vast PEM fuel cell commercialization, such as high cost and limited durability, there has been significant progress in the field towards achieving this goal. Experimental testing and analysis of fuel cell performance has been an important tool in this advancement. Experimental studies of the PEM fuel cell not only identify unfiltered performance response to manipulation of variables, but also aid in the advancement of fuel cell modelling by allowing for validation of computational schemes. The compressive force used to contain a fuel cell assembly can play a significant role in how effectively the cell functions, the most obvious example being to ensure proper sealing within the cell. Compression can have a considerable impact on cell performance beyond the sealing aspects: the force can affect the delivery of reactants and the electrochemical functions of the cell by altering the layers in the cell susceptible to this force. For these reasons an experimental study was undertaken, presented in this thesis, with specific focus placed on cell compression, in order to study its effect on reactant flow fields and performance response. The goal of the thesis was to develop a consistent and accurate general test procedure for the experimental analysis of a PEM fuel cell in order to analyse the effects of compression on performance. The factors potentially affecting cell performance, which were a function of compression, were identified as: (1) Sealing and surface contact; (2) Pressure drop across the flow channel; (3) Porosity of the GDL. Each factor was analysed independently in order to determine its individual contribution to changes in performance. An optimal degree of compression was identified for the cell configuration in

  8. Coefficient αcc in design value of concrete compressive strength

    Directory of Open Access Journals (Sweden)

    Goleš Danica

    2016-01-01

    Coefficient αcc introduces the effects of rate and duration of loading on the compressive strength of concrete. These effects may be partially or completely compensated by the increase in concrete strength over time. Selection of the value of this coefficient, in the recommended range between 0.8 and 1.0, is carried out through the National Annexes to Eurocode 2. This paper presents some considerations related to the introduction of this coefficient and the values adopted in some European countries. The article considers the effect of adopting the conservative value αcc=0.85 on the design value of the compressive and flexural resistance of a rectangular cross-section made of normal and high strength concrete. It analyzes the influence of different values of coefficient αcc on the area of reinforcement required to achieve the desired resistance of the cross-section.
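
    The role of αcc can be made concrete with the Eurocode 2 design-strength expression f_cd = αcc · f_ck / γ_C (EN 1992-1-1). A minimal sketch, using an illustrative C30/37 characteristic strength and the recommended partial factor γ_C = 1.5:

```python
# Design value of concrete compressive strength per Eurocode 2:
# f_cd = alpha_cc * f_ck / gamma_c. The concrete grade is an
# illustrative choice; gamma_c = 1.5 is the recommended value for
# persistent/transient design situations.

def design_strength(f_ck_mpa, alpha_cc=0.85, gamma_c=1.5):
    """Design compressive strength f_cd in MPa."""
    return alpha_cc * f_ck_mpa / gamma_c

for alpha_cc in (0.85, 1.0):
    f_cd = design_strength(30.0, alpha_cc)  # C30/37 concrete
    print(f"alpha_cc = {alpha_cc}: f_cd = {f_cd:.1f} MPa")
```

    For a C30/37 concrete this is the difference between 17.0 and 20.0 MPa, i.e. the 15% spread in sectional resistance that the choice of αcc propagates into the design.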

  9. View compensated compression of volume rendered images for remote visualization.

    Science.gov (United States)

    Lalgudi, Hariharan G; Marcellin, Michael W; Bilgin, Ali; Oh, Han; Nadar, Mariappan S

    2009-07-01

    Remote visualization of volumetric images has gained importance over the past few years in medical and industrial applications. Volume visualization is a computationally intensive process, often requiring hardware acceleration to achieve a real time viewing experience. One remote visualization model that can accomplish this would transmit rendered images from a server, based on viewpoint requests from a client. For constrained server-client bandwidth, an efficient compression scheme is vital for transmitting high quality rendered images. In this paper, we present a new view compensation scheme that utilizes the geometric relationship between viewpoints to exploit the correlation between successive rendered images. The proposed method obviates motion estimation between rendered images, enabling a significant reduction in the complexity of the compressor. Additionally, the view compensation scheme, in conjunction with JPEG2000, performs better than AVC, the state-of-the-art video compression standard.

  10. Theoretical x-ray absorption investigation of high pressure ice and compressed graphite

    International Nuclear Information System (INIS)

    Shaw, Dawn M; Tse, John S

    2007-01-01

    The x-ray absorption spectra (XAS) of high pressure ices II, VIII, and IX have been computed with the Car-Parrinello plane wave pseudopotential method. XAS for the intermediate structures obtained from uniaxial compression of hexagonal graphite along the c-axis are also studied. Whenever possible, comparisons to available experimental results are made. The reliability of the computational methods for the XAS of these structures is discussed.

  11. Effects of bandwidth, compression speed, and gain at high frequencies on preferences for amplified music.

    Science.gov (United States)

    Moore, Brian C J

    2012-09-01

    This article reviews a series of studies on the factors influencing sound quality preferences, mostly for jazz and classical music stimuli. The data were obtained using ratings of individual stimuli or using the method of paired comparisons. For normal-hearing participants, the highest ratings of sound quality were obtained when the reproduction bandwidth was wide (55 to 16000 Hz) and ripples in the frequency response were small (less than ± 5 dB). For hearing-impaired participants listening via a simulated five-channel compression hearing aid with gains set using the CAM2 fitting method, preferences for upper cutoff frequency varied across participants: Some preferred a 7.5- or 10-kHz upper cutoff frequency over a 5-kHz cutoff frequency, and some showed the opposite preference. Preferences for a higher upper cutoff frequency were associated with a shallow high-frequency slope of the audiogram. A subsequent study comparing the CAM2 and NAL-NL2 fitting methods, with gains slightly reduced for participants who were not experienced hearing aid users, showed a consistent preference for CAM2. Since the two methods differ mainly in the gain applied for frequencies above 4 kHz (CAM2 recommending higher gain than NAL-NL2), these results suggest that extending the upper cutoff frequency is beneficial. A system for reducing "overshoot" effects produced by compression gave small but significant benefits for sound quality of a percussion instrument (xylophone). For a high-input level (80 dB SPL), slow compression was preferred over fast compression.

  12. Componential Analysis of Analogical-Reasoning Performance of High and Low Achievers.

    Science.gov (United States)

    Armour-Thomas, Eleanor; Allen, Brenda A.

    1990-01-01

    Assessed analogical reasoning in high- and low-achieving students at the high school level and determined whether analogical reasoning was related to academic achievement in ninth grade students (N=54). Results indicated that high achievers performed better than low achievers on all types of analogical-reasoning processes. (Author/ABL)

  13. A study on the effect of nano silica on compressive strength of high volume fly ash mortars and concretes

    International Nuclear Information System (INIS)

    Shaikh, F.U.A.; Supit, S.W.M.; Sarker, P.K.

    2014-01-01

    Highlights: • The addition of NS compensates low early age compressive strength of HVFA system. • NS also contributes to later age compressive strength gain of HVFA system. • The XRD results confirm the reduction of CH in HVFA paste due to addition of NS. - Abstract: This paper presents the effect of nano silica (NS) on the compressive strength of mortars and concretes containing different high volume fly ash (HVFA) contents ranging from 40% to 70% (by weight) as partial replacement of cement. The compressive strength of mortars is measured at 7 and 28 days, and that of concretes is measured at 3, 7, 28, 56 and 90 days. The effects of NS on microstructure development and pozzolanic reaction of pastes containing the above HVFA contents are also studied through backscattered electron (BSE) imaging and X-ray diffraction (XRD) analysis. Results show that among NS contents ranging from 1% to 6%, cement mortar containing 2% NS exhibited the highest 7 and 28 day compressive strength. This NS content (2%) was then added to the HVFA mortars and concretes, and the results show that the addition of 2% NS improved the early age (7 day) compressive strength of mortars containing 40% and 50% fly ash by 5% and 7%, respectively. However, this improvement is not observed at fly ash contents beyond 50%. On the other hand, all HVFA mortars exhibited improvement in 28 day compressive strength due to the addition of 2% NS, and the most significant improvement is noticed in mortars containing more than 50% fly ash. In HVFA concretes, improvement of early age (3 day) compressive strength is also noticed due to the addition of 2% NS. The BSE and XRD analysis results also support the above findings.

  14. Metal Hydride Compression

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, Terry A. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Bowman, Robert [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Smith, Barton [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Anovitz, Lawrence [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Jensen, Craig [Hawaii Hydrogen Carriers LLC, Honolulu, HI (United States)

    2017-07-01

    Conventional hydrogen compressors often contribute over half of the cost of hydrogen stations, have poor reliability, and have insufficient flow rates for a mature FCEV market. Fatigue of their moving parts, including cracking of diaphragms and failure of seals, leads to breakdowns in conventional compressors and is exacerbated by the repeated starts and stops expected at fueling stations. Furthermore, the conventional lubrication of these compressors with oil is generally unacceptable at fueling stations due to potential fuel contamination. Metal hydride (MH) technology offers a very good alternative to both conventional (mechanical) and newly developed (electrochemical, ionic liquid piston) methods of hydrogen compression. Advantages of MH compression include simplicity in design and operation, absence of moving parts, compactness, safety and reliability, and the possibility of utilizing waste industrial heat to power the compressor. Beyond conventional H2 supplies from pipelines or tanker trucks, another attractive scenario is on-site generation, pressurization, and delivery of pure H2 at pressure (≥ 875 bar) for refueling vehicles at electrolysis, wind, or solar production facilities in locations that are too remote or widely distributed for cost-effective bulk transport. MH hydrogen compression utilizes a reversible heat-driven interaction of a hydride-forming metal alloy with hydrogen gas to form the MH phase and is a promising process for hydrogen energy applications [1,2]. To deliver hydrogen continuously, each stage of the compressor must consist of multiple MH beds with synchronized hydrogenation and dehydrogenation cycles. Multistage pressurization allows achievement of greater compression ratios using reduced temperature swings compared to single-stage compressors. The objectives of this project are to investigate and demonstrate on a laboratory scale a two-stage MH hydrogen (H2) gas compressor with a
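
    The heat-driven principle can be sketched with the van 't Hoff relation for the plateau pressure, ln(P_eq/P_0) = ΔH/(RT) − ΔS/R: absorbing hydrogen at a low temperature and desorbing it at a higher one delivers gas at a much higher equilibrium pressure. The enthalpy and entropy values below are roughly LaNi5-like but purely illustrative; real compressor alloys are tailored per stage.

```python
import math

R = 8.314  # J/(mol*K)

def plateau_pressure(T, dH=-30800.0, dS=-108.0, p_ref=1.0):
    """Van 't Hoff equilibrium (plateau) pressure in bar:
    ln(P/p_ref) = dH/(R*T) - dS/R, with dH, dS per mole of H2 for
    absorption (illustrative LaNi5-like values, not measured data)."""
    return p_ref * math.exp(dH / (R * T) - dS / R)

T_cold, T_hot = 298.0, 423.0          # absorb cool, desorb hot (K)
p_lo, p_hi = plateau_pressure(T_cold), plateau_pressure(T_hot)
print(f"absorb near {p_lo:.1f} bar, deliver near {p_hi:.1f} bar "
      f"(single-stage ratio ~{p_hi / p_lo:.0f}:1)")
```

    Chaining stages with alloys of successively higher plateau pressures is what lets a multistage design reach ≥ 875 bar with modest temperature swings.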

  15. Symmetric compression of 'laser greenhouse' targets by a few laser beams

    International Nuclear Information System (INIS)

    Gus'kov, Sergei Yu; Demchenko, N N; Rozanov, Vladislav B; Stepanov, R V; Zmitrenko, N V; Caruso, A; Strangio, C

    2003-01-01

    The possibility of efficient and symmetric compression of a target with a low-density structured absorber by a few laser beams is considered. An equation of state is proposed for a porous medium, which takes into account the special features of the absorption of high-power nanosecond laser pulses. The open version of this target is shown to allow the use of ordinary Gaussian beams, requiring no special profiling of the absorber surface. The conditions are defined under which such targets can be compressed efficiently by only two laser beams (or beam clusters). Simulations show that for a 2.1-MJ laser pulse, a seven-fold gain for the target under study is achieved. (special issue devoted to the 80th anniversary of academician N G Basov's birth)

  16. Experimental investigation on high temperature anisotropic compression properties of ceramic-fiber-reinforced SiO2 aerogel

    International Nuclear Information System (INIS)

    Shi, Duoqi; Sun, Yantao; Feng, Jian; Yang, Xiaoguang; Han, Shiwei; Mi, Chunhu; Jiang, Yonggang; Qi, Hongyu

    2013-01-01

    Compression tests were conducted on a ceramic-fiber-reinforced SiO2 aerogel at high temperature. Anisotropic mechanical behaviour was found: the in-plane Young's modulus is more than 10 times higher than the out-of-plane modulus, while the in-plane fracture strain is lower by a factor of about 100. The out-of-plane Young's modulus decreases with increasing temperature, but the in-plane modulus and fracture stress increase with temperature. The out-of-plane properties do not change with loading rate. Viscous flow at high temperature is found to cause in-plane shrinkage, changing both the in-plane and out-of-plane properties. Compression-induced densification of the aerogel matrix was also observed by scanning electron microscopy.

  17. Modelling for Fuel Optimal Control of a Variable Compression Engine

    OpenAIRE

    Nilsson, Ylva

    2007-01-01

    Variable compression engines are a means of meeting the demand for lower fuel consumption. A high compression ratio results in high engine efficiency, but also increases the knock tendency. On conventional engines with a fixed compression ratio, knock is avoided by retarding the ignition angle. The variable compression engine offers an extra dimension in knock control, since both the ignition angle and the compression ratio can be adjusted. The central question is thus for what combination of compression ra...
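
    The efficiency side of that trade-off is captured by the ideal air-standard Otto-cycle relation η = 1 − r^(1−γ). A quick sketch (a textbook idealization with illustrative ratios, not a model of any real engine):

```python
def otto_efficiency(r, gamma=1.4):
    """Ideal air-standard Otto-cycle efficiency: eta = 1 - r**(1 - gamma).
    Real engines fall well short of this, but the trend with the
    compression ratio r is the motivation for variable compression."""
    return 1.0 - r ** (1.0 - gamma)

for r in (8, 10, 12, 14):
    print(f"r = {r:2d}: ideal efficiency = {otto_efficiency(r):.1%}")
```

    The monotonic gain with r is why one wants the highest knock-free compression ratio at every operating point rather than a single fixed compromise.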

  18. GPU Lossless Hyperspectral Data Compression System for Space Applications

    Science.gov (United States)

    Keymeulen, Didier; Aranki, Nazeeh; Hopson, Ben; Kiely, Aaron; Klimesh, Matthew; Benkrid, Khaled

    2012-01-01

    On-board lossless hyperspectral data compression reduces data volume in order to meet NASA and DoD limited downlink capabilities. At JPL, a novel, adaptive and predictive technique for lossless compression of hyperspectral data, named the Fast Lossless (FL) algorithm, was recently developed. This technique uses an adaptive filtering method and achieves state-of-the-art performance in both compression effectiveness and low complexity. Because of its outstanding performance and suitability for real-time onboard hardware implementation, the FL compressor is being formalized as the emerging CCSDS Standard for Lossless Multispectral & Hyperspectral image compression. The FL compressor is well-suited for parallel hardware implementation. A GPU hardware implementation of FL was developed targeting the current state-of-the-art GPUs from NVIDIA. The GPU implementation on an NVIDIA GeForce GTX 580 achieves a throughput of 583.08 Mbits/sec (44.85 MSamples/sec), an acceleration of at least 6 times over a software implementation running on a 3.47 GHz single-core Intel Xeon processor. This paper describes the design and implementation of the FL algorithm on the GPU. The massively parallel implementation will provide a fast and practical real-time solution for airborne and space applications.
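
    The predictive idea behind such compressors can be sketched in a few lines: neighbouring spectral bands are strongly correlated, so residuals against a prediction from the previous band are small and cheap to entropy-code. The fixed previous-band predictor and the tiny sample values below are illustrative only; the actual FL/CCSDS predictor is adaptive.

```python
# Hedged sketch of predictive lossless coding for hyperspectral data:
# code small inter-band residuals instead of raw samples.

def residuals(prev_band, band):
    """Per-pixel residuals against a previous-band predictor."""
    return [b - p for p, b in zip(prev_band, band)]

band1 = [100, 102, 101, 105, 107]
band2 = [103, 104, 104, 108, 111]   # a correlated neighbouring band
res = residuals(band1, band2)
print("residuals:", res)            # small values -> low entropy
```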

  19. Efficient traveltime compression for 3D prestack Kirchhoff migration

    KAUST Repository

    Alkhalifah, Tariq

    2010-12-13

    Kirchhoff 3D prestack migration, as part of its execution, usually requires repeated access to a large traveltime table data base. Access to this data base implies either a memory intensive or I/O bounded solution to the storage problem. Proper compression of the traveltime table allows efficient 3D prestack migration without relying on the usually slow access to the computer hard drive. Such compression also allows for faster access to desirable parts of the traveltime table. Compression is applied to the traveltime field for each source location on the surface on a regular grid, using 3D Chebyshev polynomial or cosine transforms of the traveltime field represented in spherical coordinates or the celerity domain. We obtain practical compression levels up to and exceeding 20 to 1. In fact, because of the smaller traveltime table, we obtain exceptional traveltime extraction speed during migration that exceeds conventional methods. Additional features of the compression include better interpolation of traveltime tables and more stable estimates of amplitudes from traveltime curvatures. Further compression is achieved using bit encoding, by representing compression parameter values with fewer bits. © 2010 European Association of Geoscientists & Engineers.
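
    The transform-and-truncate idea can be illustrated in one dimension: a traveltime curve is smooth, so most of its cosine-transform energy sits in the first few coefficients. The pure-Python DCT below, the synthetic constant-velocity traveltime curve, and the 8:1 truncation level are all illustrative; the paper works with 3D Chebyshev/cosine transforms of full tables.

```python
import math

def dct2(x):
    """Unnormalized DCT-II of a sequence."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi * k * (n + 0.5) / N) for n in range(N))
            for k in range(N)]

def idct2(X):
    """Inverse of dct2 (DCT-III with 1/N and 2/N scaling)."""
    N = len(X)
    return [X[0] / N + 2.0 / N * sum(X[k] * math.cos(math.pi * k * (n + 0.5) / N)
                                     for k in range(1, N))
            for n in range(N)]

# Smooth 1D "traveltime" profile: a stand-in for one table column.
v, z = 2.0, 1.0                      # km/s velocity, km depth
x = [i * 0.05 for i in range(64)]    # offsets in km
t = [math.sqrt(xi * xi + z * z) / v for xi in x]

kept = 8                             # keep 8 of 64 coefficients -> 8:1
C = dct2(t)
C_trunc = C[:kept] + [0.0] * (len(C) - kept)
t_rec = idct2(C_trunc)
max_err = max(abs(a - b) for a, b in zip(t, t_rec))
print(f"8:1 truncation, max traveltime error = {max_err:.2e} s")
```

    Because the discarded coefficients carry almost no energy, the reconstruction error stays far below typical traveltime picking accuracy, which is what makes aggressive truncation practical.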

  20. Compressed Speech: Potential Application for Air Force Technical Training. Final Report, August 73-November 73.

    Science.gov (United States)

    Dailey, K. Anne

    Time-compressed speech (also called compressed speech, speeded speech, or accelerated speech) is an extension of the normal recording procedure for reproducing the spoken word. Compressed speech can be used to achieve dramatic reductions in listening time without significant loss in comprehension. The implications of such temporal reductions in…

  1. CoGI: Towards Compressing Genomes as an Image.

    Science.gov (United States)

    Xie, Xiaojing; Zhou, Shuigeng; Guan, Jihong

    2015-01-01

    Genomic science is now facing an explosive increase of data thanks to the fast development of sequencing technology. This situation poses serious challenges to genomic data storage and transfer. It is desirable to compress data to reduce storage and transfer cost, and thus to boost data distribution and utilization efficiency. Up to now, a number of algorithms/tools have been developed for compressing genomic sequences. Unlike the existing algorithms, most of which treat genomes as one-dimensional text strings and compress them based on dictionaries or probability models, this paper proposes a novel approach called CoGI (the abbreviation of Compressing Genomes as an Image) for genome compression, which transforms the genomic sequences to a two-dimensional binary image (or bitmap), then applies a rectangular partition coding algorithm to compress the binary image. CoGI can be used as either a reference-based compressor or a reference-free compressor. For the former, we develop two entropy-based algorithms to select a proper reference genome. Performance evaluation is conducted on various genomes. Experimental results show that the reference-based CoGI significantly outperforms two state-of-the-art reference-based genome compressors, GReEn and RLZ-opt, in both compression ratio and compression efficiency. It also achieves a comparable compression ratio but two orders of magnitude higher compression efficiency in comparison with XM--one state-of-the-art reference-free genome compressor. Furthermore, our approach performs much better than Gzip--a general-purpose and widely-used compressor--in both compression speed and compression ratio. So, CoGI can serve as an effective and practical genome compressor. The source code and other related documents of CoGI are available at: http://admis.fudan.edu.cn/projects/cogi.htm.
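
    The sequence-to-bitmap step can be sketched simply: each base maps to a short bit code and the bit stream is folded into a fixed-width binary image, which a 2D partition coder can then exploit. The 2-bit code and the row width below are illustrative choices, not necessarily those used by CoGI itself.

```python
# Hedged sketch of turning a genomic sequence into a binary image.

BASE_BITS = {"A": "00", "C": "01", "G": "10", "T": "11"}

def genome_to_bitmap(seq, width=8):
    """Fold the 2-bit-per-base stream into rows of a binary image."""
    bits = "".join(BASE_BITS[b] for b in seq)
    bits += "0" * (-len(bits) % width)          # zero-pad the last row
    return [bits[i:i + width] for i in range(0, len(bits), width)]

for row in genome_to_bitmap("ACGTACGTACGT", width=8):
    print(row)
```

    Repetitive sequence structure shows up as repeated or nearly repeated rows, which is exactly the 2D redundancy a rectangular partition coder targets.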

  2. Wavelet-based compression of pathological images for telemedicine applications

    Science.gov (United States)

    Chen, Chang W.; Jiang, Jianfei; Zheng, Zhiyong; Wu, Xue G.; Yu, Lun

    2000-05-01

    In this paper, we present the performance evaluation of wavelet-based coding techniques as applied to the compression of pathological images for use in an Internet-based telemedicine system. We first study how well suited wavelet-based coding is to the compression of pathological images, since these images often contain fine textures that are critical to the diagnosis of potential diseases. We compare the wavelet-based compression with the DCT-based JPEG compression in the DICOM standard for medical imaging applications. Both objective and subjective measures have been studied in the evaluation of compression performance. These studies were performed in close collaboration with expert pathologists, who conducted the evaluation of the compressed pathological images, and with the communication engineers and information scientists who designed the proposed telemedicine system. These performance evaluations have shown that wavelet-based coding is suitable for the compression of various pathological images and can be integrated well with Internet-based telemedicine systems. A prototype of the proposed telemedicine system has been developed in which wavelet-based coding is adopted for compression, to achieve bandwidth-efficient transmission and thereby speed up communication between the remote terminal and the central server of the telemedicine system.
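
    The wavelet pipeline evaluated here can be illustrated with the simplest member of the family: a one-level Haar transform followed by thresholding of small detail coefficients. This is a toy 1D sketch with made-up sample values; a real codec uses multi-level 2D transforms plus quantization and entropy coding.

```python
def haar_forward(x):
    """One level of the orthonormal 1D Haar transform (len(x) even)."""
    s = 2 ** -0.5
    avg = [s * (x[i] + x[i + 1]) for i in range(0, len(x), 2)]
    det = [s * (x[i] - x[i + 1]) for i in range(0, len(x), 2)]
    return avg, det

def haar_inverse(avg, det):
    """Exact inverse of haar_forward."""
    s = 2 ** -0.5
    out = []
    for a, d in zip(avg, det):
        out += [s * (a + d), s * (a - d)]
    return out

x = [10, 10, 10, 10, 50, 52, 49, 51]                 # toy scanline
avg, det = haar_forward(x)
det_c = [d if abs(d) > 2.0 else 0.0 for d in det]    # lossy threshold
x_rec = haar_inverse(avg, det_c)
err = max(abs(a - b) for a, b in zip(x, x_rec))
print(f"zeroed {sum(d == 0.0 for d in det_c)}/{len(det)} details, "
      f"max error {err:.2f}")
```

    Zeroed coefficients compress to almost nothing, while the reconstruction error stays bounded by the threshold; the diagnostic question for pathology images is whether fine textures survive that thresholding.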

  3. Explosive magnetic flux compression plate generators as fast high-energy power sources

    International Nuclear Information System (INIS)

    Caird, R.S.; Erickson, D.J.; Garn, W.B.; Fowler, C.M.

    1976-01-01

    A type of explosive-driven generator, called a plate generator, is described. It is capable of delivering electrical energies in the MJ range at TW power levels. Plane-wave-detonated explosive systems accelerate two large-area metal plates to high opposing velocities. An initial magnetic field is compressed and the flux transferred to an external load. The characteristics of the plate generator are described and compared with those of other types of generators. Methods of load matching are discussed. The results of several high-power experiments are also given.
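
    The amplification mechanism follows from flux conservation in a good conductor: the trapped flux Φ = B·A is (ideally) constant, so driving the plates together multiplies the field by the area ratio. A sketch with made-up numbers that ignores resistive and geometric losses:

```python
MU0 = 4e-7 * 3.141592653589793       # vacuum permeability, H/m

def compressed_field(b0, a0, af):
    """Ideal flux conservation: B0*A0 = Bf*Af, so Bf = B0*A0/Af."""
    return b0 * a0 / af

b0, a0, af = 5.0, 0.1, 0.001          # T, m^2, m^2 (illustrative)
bf = compressed_field(b0, a0, af)
gain = (bf ** 2 / (2 * MU0)) / (b0 ** 2 / (2 * MU0))
print(f"field {b0} T -> {bf} T; magnetic energy density x{gain:.0f}")
```

    The extra magnetic energy is not free: it is the work the explosively driven plates do against the magnetic pressure as the cavity closes.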

  4. High-speed video analysis improves the accuracy of spinal cord compression measurement in a mouse contusion model.

    Science.gov (United States)

    Fournely, Marion; Petit, Yvan; Wagnac, Éric; Laurin, Jérôme; Callot, Virginie; Arnoux, Pierre-Jean

    2018-01-01

    Animal models of spinal cord injuries aim to utilize controlled and reproducible conditions. However, a literature review reveals that mouse contusion studies using equivalent protocols may show large disparities in the observed impact force vs. cord compression relationship. The overall purpose of this study was to investigate possible sources of bias in these measurements. The specific objective was to improve spinal cord compression measurements using a video-based setup to detect the impactor-spinal cord time-to-contact. A force-controlled 30 kdyn unilateral contusion at the C4 vertebral level was performed in six mice with the Infinite Horizon impactor (IH). High-speed video was used to determine the time-to-contact between the impactor tip and the spinal cord and to compute the related displacement of the tip into the tissue: the spinal cord compression and the compression ratio. Delayed time-to-contact detection with the IH device led to an underestimation of the cord compression. Compression values indicated by the IH were 64% lower than those based on video analysis (0.33 mm vs. 0.88 mm). Consequently, the mean compression ratio derived from the device was underestimated when compared to the value derived from video analysis (22% vs. 61%). Default time-to-contact detection from the IH led to significant errors in spinal cord compression assessment. Accordingly, this may explain some of the reported data discrepancies in the literature. The proposed setup could be implemented by users of contusion devices to improve the quantitative description of the primary injury inflicted on the spinal cord. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. Academic attainment and the high school science experiences among high-achieving African American males

    Science.gov (United States)

    Trice, Rodney Nathaniel

    This study examines the educational experiences of high achieving African American males. More specifically, it analyzes the influences on their successful navigation through high school science. Through a series of interviews, observations, questionnaires, science portfolios, and review of existing data, the researcher attempted to obtain a deeper understanding of high achieving African American males and the limitations to their academic attainment and high school science experiences. The investigation is limited to ten high achieving African American male science students at Woodcrest High School. Woodcrest is situated at the cross section of a suburban and rural community located in the southeastern section of the United States. Although this investigation involves African American males, all of whom are successful in school, its findings should not be generalized to this or any other group of students. The research question that guided this study is: What are the limitations to academic attainment and the high school science experiences of high achieving African American males? The student participants expose how suspension and expulsion, special education placement, academic tracking, science instruction, and teacher expectation influence academic achievement. The role parents play, student self-concept, peer relationships, and student learning styles are also analyzed. Analysis of the collected data yielded three overarching themes: (1) unequal access to education, (2) maintenance of unfair educational structures, and (3) authentic characterizations of African American males. Often the policies and practices set in place by school officials aid in creating hurdles to academic achievement. These policies and practices are often formed without meaningful consideration of the unintended consequences that may affect different student populations, particularly the most vulnerable. The findings from this study expose that high achieving African American males face major

  6. Evaluation of a wavelet-based compression algorithm applied to the silicon drift detectors data of the ALICE experiment at CERN

    International Nuclear Information System (INIS)

    Falchieri, Davide; Gandolfi, Enzo; Masotti, Matteo

    2004-01-01

    This paper evaluates the performance of a wavelet-based compression algorithm applied to the data produced by the silicon drift detectors of the ALICE experiment at CERN. This compression algorithm is a general-purpose lossy technique; in other words, its application could prove useful on a wide range of other data-reduction problems. In particular, the design targets relevant for our wavelet-based compression algorithm are the following: a high compression coefficient, a reconstruction error as small as possible, and a very limited execution time. Interestingly, the results obtained are quite close to those achieved by the algorithm implemented in the first prototype of the chip CARLOS, the chip that will be used in the silicon drift detector readout chain.

  7. BENEFITS AND CHALLENGES OF VARIABLE COMPRESSION RATIO AT DIESEL ENGINES

    Directory of Open Access Journals (Sweden)

    Radivoje B Pešić

    2010-01-01

    The compression ratio strongly affects the working process and provides an exceptional degree of control over engine performance. In conventional internal combustion engines, the compression ratio is fixed and their performance is therefore a compromise between conflicting requirements. One fundamental problem is that drive units in vehicles must operate successfully at variable speeds and loads and in different ambient conditions. If a diesel engine has a fixed compression ratio, a minimal value must be chosen that can achieve reliable self-ignition when starting the engine in cold start conditions. In diesel engines, a variable compression ratio provides control of peak cylinder pressure and improves cold start ability and low load operation, enabling multi-fuel capability, increased fuel economy, and reduced emissions. This paper contains both theoretical and experimental investigation of the impact that an automatic variable compression ratio has on working process parameters in an experimental diesel engine. Alternative methods of implementing variable compression ratio are illustrated and critically examined.

  8. Performance of vapor compression systems with compressor oil flooding and regeneration

    Energy Technology Data Exchange (ETDEWEB)

    Bell, Ian H.; Groll, Eckhard A.; Braun, James E. [Purdue University, Department of Mechanical Engineering, 140 S. Martin Jischke Drive, West Lafayette, IN 47906 (United States)

    2011-01-15

    Vapor compression refrigeration technology has seen great improvement over the last several decades in terms of cycle efficiency through a concerted effort of manufacturers, regulators, and research engineers. As standard vapor compression systems approach practical limits, cycle modifications should be investigated to increase system efficiency and capacity. One possible means of increasing cycle efficiency is to flood the compressor with a large quantity of oil to achieve a quasi-isothermal compression process, in addition to using a regenerator to increase refrigerant subcooling. In theory, compressor flooding and regeneration can provide a significant increase in system efficiency over the standard vapor compression system. The effectiveness of compressor flooding and regeneration increases as the temperature lift of the system increases. Therefore, this technology is particularly well suited to the low evaporating temperatures and high ambient temperatures seen in supermarket refrigeration applications. While predicted increases in cycle efficiency are over 40% for supermarket refrigeration applications, this technology is still very beneficial for typical air-conditioning applications, for which improvements in cycle efficiency greater than 5% are predicted. It should be noted, though, that the beneficial effects of compressor flooding can only be realized if a regenerator is used to exchange heat between the refrigerant vapor exiting the evaporator and the liquid exiting the condenser. (author)
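
    The benefit of quasi-isothermal compression can be bounded with the ideal-gas, reversible work expressions for the two limiting processes. The pressure ratio, suction temperature, and γ below are illustrative; a real oil-flooded compressor lands somewhere between the two limits.

```python
import math

R = 8.314  # J/(mol*K)

def work_isothermal(T, pr):
    """Reversible isothermal compression work per mole of ideal gas (J)."""
    return R * T * math.log(pr)

def work_adiabatic(T1, pr, gamma=1.4):
    """Reversible adiabatic compression work per mole of ideal gas (J)."""
    k = gamma / (gamma - 1.0)
    return k * R * T1 * (pr ** (1.0 / k) - 1.0)

T, pr = 300.0, 5.0                 # illustrative suction state and ratio
w_iso, w_ad = work_isothermal(T, pr), work_adiabatic(T, pr)
print(f"isothermal {w_iso:.0f} J/mol vs adiabatic {w_ad:.0f} J/mol "
      f"({1 - w_iso / w_ad:.0%} less)")
```

    The gap between the two limits widens with pressure ratio, which is consistent with the paper's observation that flooding pays off most at high temperature lift.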

  9. Low and high achievers in math

    DEFF Research Database (Denmark)

    Overgaard, Steffen; Tonnesen, Pia Beck; Weng, Peter

    2016-01-01

    In this session we will present the results of the preliminary analysis of the qualitative and quantitative data, which can be used to enhance the teaching of low and high mathematics achievers so as to increase their mathematical knowledge and confidence....

  10. Nonlinear frequency compression: effects on sound quality ratings of speech and music.

    Science.gov (United States)

    Parsa, Vijay; Scollie, Susan; Glista, Danielle; Seelisch, Andreas

    2013-03-01

    Frequency lowering technologies offer an alternative amplification solution for severe to profound high frequency hearing losses. While frequency lowering technologies may improve audibility of high frequency sounds, the very nature of this processing can affect the perceived sound quality. This article reports the results of two studies that investigated the impact of a nonlinear frequency compression (NFC) algorithm on perceived sound quality. In the first study, the cutoff frequency and compression ratio parameters of the NFC algorithm were varied, and their effect on speech quality was measured subjectively with 12 normal hearing adults, 12 normal hearing children, 13 hearing impaired adults, and 9 hearing impaired children. In the second study, 12 normal hearing and 8 hearing impaired adult listeners rated the quality of speech in quiet, speech in noise, and music after processing with a different set of NFC parameters. Results showed that the cutoff frequency parameter had more impact on sound quality ratings than the compression ratio, and that the hearing impaired adults were more tolerant of increased frequency compression than normal hearing adults. No statistically significant differences were found in the sound quality ratings of speech-in-noise and music stimuli processed through various NFC settings by hearing impaired listeners. These findings suggest that there may be an acceptable range of NFC settings for hearing impaired individuals where sound quality is not adversely affected. These results may assist an audiologist in clinical NFC hearing aid fittings in achieving a balance between high frequency audibility and sound quality.
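
    The two NFC parameters studied here define a simple input-output frequency map: frequencies below the cutoff pass unchanged, while those above it are compressed. The log-domain form below is one common formulation and the cutoff/ratio values are illustrative; actual hearing-aid implementations differ in detail.

```python
def nfc_map(f_in, cf=2000.0, cr=2.0):
    """Hedged sketch of a nonlinear frequency compression map:
    below the cutoff cf the frequency is unchanged; above it the
    distance from cf is compressed by ratio cr in log-frequency."""
    if f_in <= cf:
        return f_in
    return cf * (f_in / cf) ** (1.0 / cr)

for f in (1000.0, 2000.0, 4000.0, 8000.0):
    print(f"{f:.0f} Hz -> {nfc_map(f):.0f} Hz")
```

    Lowering the cutoff moves more of the spectrum into the compressed region, which is consistent with the finding that the cutoff affects sound quality more than the compression ratio does.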

  11. Venous Leg Ulcers: Effectiveness of new compression therapy/moist ...

    African Journals Online (AJOL)

    (Cutimed Sorbact) and compression bandages (Comprilan, Tensoplast) in the initial oedema phase, followed by a compression stocking system delivering 40 mmHg (JOBST UlcerCARE). Due to their high stiffness characteristics, these compression products exert a high working pressure during walking and a comfortably ...

  12. Overlap of electron core states for very high compressions

    International Nuclear Information System (INIS)

    Straub, G.

    1985-01-01

    At normal density and for modest compressions, the electronic structure of a metal can be accurately described by treating the conduction electrons and their interactions with the usual methods of band theory. The core electrons remain essentially the same as for an isolated free atom and do not participate in the bonding forces responsible for creating a condensed phase. As the density increases, the core electrons begin to "see" one another as the overlap of the tails of wave functions can no longer be neglected. The electronic structure of the core electrons is responsible for an effective repulsive interaction that eventually becomes free-electron-like at very high compressions. The electronic structure of the interacting core electrons may be treated in a simple manner using the Atomic Surface Method (ASM). The ASM is a first-principles treatment of the electronic structure involving a rigorous integration of the Schroedinger equation within the atomic-sphere approximation. Solid-phase wave functions are constructed from isolated-atom wave functions, and the band width W_l and the center of gravity of the band C_l are obtained from simple formulas. The ASM can also utilize analytic forms of the atomic wave functions and thus provide direct functional dependence of various aspects of the electronic structure. Of particular use in understanding the behavior of the core electrons, the ASM provides the analytic density dependence of the band widths and positions. 8 refs., 2 figs., 1 tab.

  13. Local control and survival in spinal cord compression from lymphoma and myeloma

    International Nuclear Information System (INIS)

    Wallington, M.; Mendis, S.; Premawardhana, U.; Sanders, P.; Shahsavar-Haghighi, K.

    1997-01-01

    Background: Between 1979 and 1989, 48 cases of extradural spinal cord and cauda equina compression in patients with lymphoma (24) and myeloma (24) received local radiation therapy for control of cord compression. Twenty-five (52%) of the cases were treated by surgical decompression prior to irradiation. Thirty-five (73%) of the cases received chemotherapy following the diagnosis of spinal cord compression. Post-treatment outcome was assessed at a minimum follow-up of 24 months to determine the significant clinical and treatment factors following irradiation. Results: Seventeen (71%) of the lymphoma and 15 (63%) of the myeloma patients achieved local control, here defined as improvement to, or maintenance of, ambulation with minimal or no assistance for 3 months from the start of radiotherapy. At a median follow-up of 30 (2-98) months for the lymphoma and 10 (1-87) months for the myeloma patients, the results showed that survival following local radiation therapy for cord compression was independently influenced by the underlying disease type in favour of lymphoma compared to myeloma (P<0.01). The median duration of local control and survival figures were 23 and 48 months for the lymphomas compared to 4.5 and 10 months for the myeloma cases. Survival was also independently influenced by preservation of sphincter function at initial presentation (P<0.02) and the achievement of local control following treatment (P<0.01). Discussion: We conclude that while disease type independently impacts on outcome following treatment of spinal cord compression in lymphoma and myeloma, within both of these disease types the achievement of local control of spinal cord compression is an important management priority, for without local control survival may be adversely affected.

  14. Self-compression of femtosecond deep-ultraviolet pulses by filamentation in krypton.

    Science.gov (United States)

    Adachi, Shunsuke; Suzuki, Toshinori

    2017-05-15

    We demonstrate self-compression of deep-ultraviolet (DUV) pulses by filamentation in krypton. In contrast to self-compression in the near-infrared, that in the DUV is associated with a red-shifted sub-pulse appearing in the pulse temporal profile. The achieved pulse width of 15 fs is the shortest among demonstrated sub-mJ deep-ultraviolet pulses.

  15. CPAC: Energy-Efficient Data Collection through Adaptive Selection of Compression Algorithms for Sensor Networks

    Science.gov (United States)

    Lee, HyungJune; Kim, HyunSeok; Chang, Ik Joon

    2014-01-01

    We propose a technique to optimize the energy efficiency of data collection in sensor networks by exploiting selective data compression. To achieve this aim, we need to make optimal decisions regarding two aspects: (1) which sensor nodes should execute compression; and (2) which compression algorithm should be used by the selected sensor nodes. We formulate this problem as a set of binary integer programs, which provide an energy-optimal solution under a given latency constraint. Our simulation results show that the optimization algorithm significantly reduces the overall network-wide energy consumption for data collection. In an environment with a stationary sink collecting from stationary sensor nodes, the optimized data collection shows 47% energy savings compared to the state-of-the-art collection protocol (CTP). More importantly, we demonstrate that our optimized data collection provides the best performance in an intermittent network under high interference. In such networks, we found that selective compression for frequent packet retransmissions saves up to 55% energy compared to the best known protocol. PMID:24721763
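
The two decisions above (which nodes compress, and with which algorithm) can be sketched as a tiny exhaustive search over the same binary choices the paper's integer program encodes. The cost model, field names, and numbers below are hypothetical placeholders, not the authors' formulation:

```python
from itertools import product

def optimize_compression(nodes, algos, latency_budget):
    """Choose, per node, either no compression (None) or one algorithm,
    minimizing total energy subject to a latency budget.

    nodes: list of dicts with raw transmission energy cost 'tx_raw'.
    algos: dict name -> (compression_ratio, cpu_energy, cpu_latency).
    Brute force stands in for the binary integer program; all values
    here are illustrative, not measured costs.
    """
    choices = [None] + list(algos)
    best = (float("inf"), None)
    for assign in product(choices, repeat=len(nodes)):
        energy, latency = 0.0, 0.0
        for node, a in zip(nodes, assign):
            if a is None:
                energy += node["tx_raw"]          # send raw data
            else:
                ratio, e_cpu, l_cpu = algos[a]
                energy += node["tx_raw"] / ratio + e_cpu  # smaller payload + CPU cost
                latency = max(latency, l_cpu)     # compression delays delivery
        if latency <= latency_budget and energy < best[0]:
            best = (energy, assign)
    return best
```

The brute force is exponential in the number of nodes and only serves to show the structure of the decision; the paper solves the equivalent binary integer program at scale.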

  16. Flux compression generators as plasma compression power sources

    International Nuclear Information System (INIS)

    Fowler, C.M.; Caird, R.S.; Erickson, D.J.; Freeman, B.L.; Thomson, D.B.; Garn, W.B.

    1979-01-01

    A survey is made of applications where explosive-driven magnetic flux compression generators have been or can be used to directly power devices that produce dense plasmas. Representative examples are discussed that are specific to the theta pinch, the plasma gun, the dense plasma focus and the Z pinch. These examples are used to illustrate the high energy and power capabilities of explosive generators. An application employing a rocket-borne, generator-powered plasma gun emphasizes the size and weight potential of flux compression power supplies. Recent results from a local effort to drive a dense plasma focus are provided. Imploding liners are discussed in the context of both the theta and Z pinches.

  17. Effect of rice husk ash and fly ash on the compressive strength of high performance concrete

    Science.gov (United States)

    Van Lam, Tang; Bulgakov, Boris; Aleksandrova, Olga; Larsen, Oksana; Anh, Pham Ngoc

    2018-03-01

    The usage of industrial and agricultural wastes for building materials production plays an important role in improving the environment and the economy by preserving natural materials and land resources, reducing land, water and air pollution, and cutting the costs of organizing and storing waste. This study mainly focuses on mathematically modeling the dependence of the compressive strength of high performance concrete (HPC) at the ages of 3, 7 and 28 days on the amounts of rice husk ash (RHA) and fly ash (FA) added to the concrete mixtures, using a central composite rotatable design. The result of this study provides the second-order regression equation of the objective function, the images of the surface expression and the corresponding contours of the objective function of the regression equation, as well as the optimal points of HPC compressive strength. These objective functions, the compressive strength values of HPC at the ages of 3, 7 and 28 days, depend on two input variables: x1 (amount of RHA) and x2 (amount of FA). The Maple 13 program, solving the second-order regression equation, determines the optimum composition of the concrete mixture for obtaining high performance concrete and calculates the maximum value of the HPC compressive strength at the age of 28 days. The results give a maximum 28-day compressive strength of 76.716 MPa when RHA = 0.1251 and FA = 0.3119 by mass of Portland cement.
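
The second-order regression surface and its optimum can be reproduced in miniature: fit a quadratic in x1 and x2 by least squares, then locate the stationary point from the gradient. The data below are synthetic with a known optimum, not the paper's measurements:

```python
import numpy as np

def fit_quadratic_surface(x1, x2, y):
    """Fit y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
    by least squares and return (coefficients, stationary point)."""
    X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    b0, b1, b2, b11, b22, b12 = b
    # Stationary point: grad = 0  =>  [[2*b11, b12], [b12, 2*b22]] @ p = -[b1, b2]
    H = np.array([[2 * b11, b12], [b12, 2 * b22]])
    return b, np.linalg.solve(H, -np.array([b1, b2]))

# Synthetic response surface with a known maximum near (0.125, 0.31):
g1, g2 = np.meshgrid(np.linspace(0, 0.3, 7), np.linspace(0, 0.6, 7))
x1, x2 = g1.ravel(), g2.ravel()
y = 76.7 - 50 * (x1 - 0.125) ** 2 - 20 * (x2 - 0.31) ** 2
coef, opt = fit_quadratic_surface(x1, x2, y)
```

Because the synthetic data are exactly quadratic, the fitted stationary point recovers the planted optimum; with real strength measurements the same machinery yields the RHA/FA proportions reported above.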

  18. Compressive Sensing in Communication Systems

    DEFF Research Database (Denmark)

    Fyhn, Karsten

    2013-01-01

    The need for cheaper, smarter and more energy efficient wireless devices is greater now than ever. This thesis addresses this problem and concerns the application of the recently developed sampling theory of compressive sensing in communication systems. Compressive sensing is the merging of signal acquisition and compression. It allows for sampling a signal with a rate below the bound dictated by the celebrated Shannon-Nyquist sampling theorem. In some communication systems this necessary minimum sample rate, dictated by the Shannon-Nyquist sampling theorem, is so high it is at the limit of what … with using compressive sensing in communication systems. The main contribution of this thesis is two-fold: 1) a new compressive sensing hardware structure for spread spectrum signals, which is simpler than the current state-of-the-art, and 2) a range of algorithms for parameter estimation for the class…
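
The core idea of sub-Nyquist sampling can be demonstrated numerically: a sparse signal is recovered from fewer random measurements than its ambient length via orthogonal matching pursuit (OMP), one standard recovery algorithm (not necessarily the ones developed in the thesis). Dimensions and the signal are arbitrary illustrations:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 64, 32, 2                      # ambient length, measurements, sparsity
x = np.zeros(n)
x[5], x[40] = 1.5, -2.0                  # a k-sparse "signal"
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
y = A @ x                                # m < n compressive measurements

# Orthogonal matching pursuit: greedily pick atoms, re-fit, update residual.
support, residual = [], y.copy()
for _ in range(k):
    support.append(int(np.argmax(np.abs(A.T @ residual))))
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    residual = y - A[:, support] @ coef
x_hat = np.zeros(n)
x_hat[support] = coef
```

With only half as many samples as signal entries, the two nonzero components and their amplitudes are recovered exactly, which is the property compressive sensing receivers exploit.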

  19. Image Segmentation, Registration, Compression, and Matching

    Science.gov (United States)

    Yadegar, Jacob; Wei, Hai; Yadegar, Joseph; Ray, Nilanjan; Zabuawala, Sakina

    2011-01-01

    A novel computational framework was developed for 2D affine-invariant matching exploiting a parameter space. Named the affine invariant parameter space (AIPS), the technique can be applied to many image-processing and computer-vision problems, including image registration, template matching, and object tracking from image sequences. The AIPS is formed by the parameters in an affine combination of a set of feature points in the image plane. In cases where the entire image can be assumed to have undergone a single affine transformation, the new AIPS match metric and matching framework becomes very effective (compared with the state-of-the-art methods at the time of this reporting). No knowledge about scaling or any other transformation parameters needs to be known a priori to apply the AIPS framework. An automated suite of software tools has been created to provide accurate image segmentation (for data cleaning) and high-quality 2D image and 3D surface registration (for fusing multi-resolution terrain, image, and map data). These tools are capable of supporting existing GIS toolkits already in the marketplace, and will also be usable in a stand-alone fashion. The toolkit applies novel algorithmic approaches for image segmentation, feature extraction, and registration of 2D imagery and 3D surface data, which supports first-pass, batched, fully automatic feature extraction (for segmentation) and registration. A hierarchical and adaptive approach is taken for achieving automatic feature extraction, segmentation, and registration. Surface registration is the process of aligning two (or more) data sets to a common coordinate system, during which the transformation between their different coordinate systems is determined. Also developed here is a novel volumetric surface modeling and compression technique that provides both quality-guaranteed mesh surface approximations and compaction of the model sizes by efficiently coding the geometry and connectivity.

  20. Huffman-based code compression techniques for embedded processors

    KAUST Repository

    Bonny, Mohamed Talal

    2010-09-01

    The size of embedded software is increasing at a rapid pace. It is often challenging and time consuming to fit an amount of required software functionality within a given hardware resource budget. Code compression is a means to alleviate the problem by providing substantial savings in terms of code size. In this article we introduce a novel and efficient hardware-supported compression technique that is based on Huffman coding. Our technique reduces the size of the generated decoding table, which takes a large portion of the memory. It combines our previous techniques, the Instruction Splitting Technique and the Instruction Re-encoding Technique, into a new one called the Combined Compression Technique, to improve the final compression ratio by taking advantage of both previous techniques. The Instruction Splitting Technique is instruction set architecture (ISA)-independent. It splits the instructions into portions of varying size (called patterns) before Huffman coding is applied. This technique improves the final compression ratio by more than 20% compared to other known schemes based on Huffman coding. The average compression ratios achieved using this technique are 48% and 50% for ARM and MIPS, respectively. The Instruction Re-encoding Technique is ISA-dependent. It investigates the benefits of re-encoding unused bits (we call them re-encodable bits) in the instruction format for a specific application to improve the compression ratio. Re-encoding those bits can reduce the size of decoding tables by up to 40%. Using this technique, we improve the final compression ratios in comparison to the first technique to 46% and 45% for ARM and MIPS, respectively (including all incurred overhead). The Combined Compression Technique improves the compression ratio to 45% and 42% for ARM and MIPS, respectively. In our compression technique, we have conducted evaluations using a representative set of applications and have applied each technique to two major embedded processor architectures.
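
The pattern-frequency idea behind such schemes can be sketched generically: Huffman-code a stream of fixed-width instruction "patterns" and compare the coded size against the raw size. This is a minimal illustration of Huffman coding itself; the article's decoding-table reduction and re-encoding machinery are omitted, and the symbol stream below is made up:

```python
import heapq
from collections import Counter

def huffman_code_lengths(symbols):
    """Return a dict symbol -> Huffman code length for a symbol sequence
    (e.g. fixed-size instruction patterns after splitting)."""
    freq = Counter(symbols)
    if len(freq) == 1:
        return {next(iter(freq)): 1}
    # Heap entries: (frequency, tiebreaker, {symbol: depth-so-far}).
    heap = [(f, i, {s: 0}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, d1 = heapq.heappop(heap)
        f2, _, d2 = heapq.heappop(heap)
        merged = {s: depth + 1 for s, depth in {**d1, **d2}.items()}
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

def compression_ratio(symbols, bits_per_symbol):
    """Coded size / raw size; lower is better, matching the article's metric."""
    lengths = huffman_code_lengths(symbols)
    freq = Counter(symbols)
    coded_bits = sum(freq[s] * lengths[s] for s in freq)
    return coded_bits / (len(symbols) * bits_per_symbol)
```

Skewed pattern frequencies (a few patterns dominating the instruction stream) are exactly what drives the ~45% ratios reported: frequent patterns get short codes.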

  1. A renormalization group scaling analysis for compressible two-phase flow

    International Nuclear Information System (INIS)

    Chen, Y.; Deng, Y.; Glimm, J.; Li, G.; Zhang, Q.; Sharp, D.H.

    1993-01-01

    Computational solutions to the Rayleigh-Taylor fluid mixing problem, as modeled by the two-fluid two-dimensional Euler equations, are presented. Data from these solutions are analyzed from the point of view of Reynolds-averaged equations, using scaling laws derived from a renormalization group analysis. The computations, carried out with the front tracking method on an Intel iPSC/860, are highly resolved, and statistical convergence of ensemble averages is achieved. The computations are consistent with the experimentally observed growth rates for nearly incompressible flows. The dynamics of the interior portion of the mixing zone is simplified by the use of scaling variables. The size of the mixing zone suggests fixed-point behavior. The profiles of statistical quantities within the mixing zone exhibit self-similarity under fixed-point scaling to a limited degree. The effect of compressibility is also examined. It is found that, for even moderate compressibility, the growth rates fail to satisfy universal scaling, and moreover, increase significantly with increasing compressibility. The growth rates predicted from a renormalization group fixed-point model are in reasonable agreement with the results of the exact numerical simulations, even for flows outside of the incompressible limit.

  2. Performance evaluation of breast image compression techniques

    International Nuclear Information System (INIS)

    Anastassopoulos, G.; Lymberopoulos, D.; Panayiotakis, G.; Bezerianos, A.

    1994-01-01

    Novel diagnosis-oriented teleworking systems manipulate, store, and process medical data through real-time communication and conferencing schemes. One of the most important factors affecting the performance of these systems is image handling. Compression algorithms can be applied to the medical images in order to minimize: a) the volume of data to be stored in the database, b) the bandwidth demanded from the network, and c) the transmission costs, while maximizing the speed of data transmission. In this paper an estimation is made of all the factors of the process that affect the presentation of breast images, from the time the images are produced by a modality till the compressed images are stored or transmitted over a broadband network (e.g. B-ISDN). The images used were scanned images of the TOR(MAX) Leeds breast phantom, as well as typical breast images. A comparison of seven compression techniques has been made, based on objective criteria such as Mean Square Error (MSE), resolution, contrast, etc. The user can choose the appropriate compression ratio in order to achieve the desired image quality. (authors)

  3. Behaviour of venous flow rates in intermittent sequential pneumatic compression of the legs using different compression strengths

    International Nuclear Information System (INIS)

    Fassmann-Glaser, I.

    1984-01-01

    A study with 25 patients was performed in order to find out whether intermittent, sequential, pneumatic leg compression is of value in the preventive management of thrombosis due to its effect on the venous flow rates. For this purpose, xenon-133 was injected into one of the foot veins and the flow rate in each case determined for the distance between instep and groin using different compression strengths, with pressure being exerted on the ankle, calf and thigh. Increased flow rates were already measured at an average pressure value of 34.5 mmHg, while the maximum effect was achieved by exerting a pressure of 92.5 mmHg, which increased the flow rate by 366% as compared to the baseline value. The results point to a significant improvement of the venous flow rates due to intermittent, sequential, pneumatic leg compression and thus provide evidence of the value of this method in the prevention of venous stasis and thrombosis. (TRV)

  4. Least median of squares filtering of locally optimal point matches for compressible flow image registration

    International Nuclear Information System (INIS)

    Castillo, Edward; Guerrero, Thomas; Castillo, Richard; White, Benjamin; Rojo, Javier

    2012-01-01

    Compressible flow based image registration operates under the assumption that the mass of the imaged material is conserved from one image to the next. Depending on how the mass conservation assumption is modeled, the performance of existing compressible flow methods is limited by factors such as image quality, noise, large magnitude voxel displacements, and computational requirements. The Least Median of Squares Filtered Compressible Flow (LFC) method introduced here is based on a localized, nonlinear least squares, compressible flow model that describes the displacement of a single voxel that lends itself to a simple grid search (block matching) optimization strategy. Spatially inaccurate grid search point matches, corresponding to erroneous local minimizers of the nonlinear compressible flow model, are removed by a novel filtering approach based on least median of squares fitting and the forward search outlier detection method. The spatial accuracy of the method is measured using ten thoracic CT image sets and large samples of expert determined landmarks (available at www.dir-lab.com). The LFC method produces an average error within the intra-observer error on eight of the ten cases, indicating that the method is capable of achieving a high spatial accuracy for thoracic CT registration. (paper)
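
The outlier-rejection step can be sketched in one dimension: each candidate match proposes a displacement, the translation minimizing the median squared residual is kept, and matches outside a robust threshold are discarded. This is a simplification of the article's least-median-of-squares plus forward-search filter; the constants (1.4826, 2.5) are conventional robust-statistics choices, not taken from the paper:

```python
import statistics

def lmeds_filter(displacements, scale_mult=2.5):
    """Least-median-of-squares filter for 1-D candidate match displacements.

    Each match proposes a translation; keep the one minimizing the median
    squared residual, then drop matches beyond a robust threshold.
    """
    best_t, best_med = None, float("inf")
    for t in displacements:
        med = statistics.median((d - t) ** 2 for d in displacements)
        if med < best_med:
            best_t, best_med = t, med
    sigma = 1.4826 * best_med ** 0.5          # robust scale estimate
    return [d for d in displacements if abs(d - best_t) <= scale_mult * sigma]
```

Because the median is insensitive to up to half the data being wrong, a grossly mismatched voxel (a spurious local minimizer of the flow model) cannot drag the estimate the way a least-squares fit would.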

  5. Single-event transient imaging with an ultra-high-speed temporally compressive multi-aperture CMOS image sensor.

    Science.gov (United States)

    Mochizuki, Futa; Kagawa, Keiichiro; Okihara, Shin-ichiro; Seo, Min-Woong; Zhang, Bo; Takasawa, Taishi; Yasutomi, Keita; Kawahito, Shoji

    2016-02-22

    In the work described in this paper, an image reproduction scheme with an ultra-high-speed temporally compressive multi-aperture CMOS image sensor was demonstrated. The sensor captures an object by compressing a sequence of images with focal-plane temporally random-coded shutters, followed by reconstruction of time-resolved images. Because signals are modulated pixel-by-pixel during capturing, the maximum frame rate is defined only by the charge transfer speed and can thus be higher than those of conventional ultra-high-speed cameras. The frame rate and optical efficiency of the multi-aperture scheme are discussed. To demonstrate the proposed imaging method, a 5×3 multi-aperture image sensor was fabricated. The average rising and falling times of the shutters were 1.53 ns and 1.69 ns, respectively. The maximum skew among the shutters was 3 ns. The sensor observed plasma emission by compressing it to 15 frames, and a series of 32 images at 200 Mfps was reconstructed. In the experiment, by correcting disparities and considering temporal pixel responses, artifacts in the reconstructed images were reduced. An improvement in PSNR from 25.8 dB to 30.8 dB was confirmed in simulations.

  6. A Sparsity-Promoted Decomposition for Compressed Fault Diagnosis of Roller Bearings

    Directory of Open Access Journals (Sweden)

    Huaqing Wang

    2016-09-01

    The traditional approaches for condition monitoring of roller bearings are almost always applied under Shannon sampling theorem conditions, leading to a big-data problem. Compressed sensing (CS) theory provides a new solution to the big-data problem. However, the vibration signals are insufficiently sparse and it is difficult to achieve sparsity using conventional techniques, which impedes the application of CS theory. Therefore, it is of great significance to promote sparsity when applying CS theory to fault diagnosis of roller bearings. To increase the sparsity of vibration signals, a sparsity-promoted method, the tunable Q-factor wavelet transform, is utilized in this work to decompose the analyzed signals into transient impact components and high-oscillation components. The former become sparser than the raw signals, with noise eliminated, whereas the latter include the noise. Thus, the decomposed transient impact components replace the original signals for analysis. CS theory is applied to extract the fault features without complete reconstruction, which means that the reconstruction can be completed once the components at the frequencies of interest are detected, and the fault diagnosis can be achieved during the reconstruction procedure. The application cases prove that CS theory assisted by the tunable Q-factor wavelet transform can successfully extract the fault features from the compressed samples.

  7. Pulse Compression of Phase-matched High Harmonic Pulses from a Time-Delay Compensated Monochromator

    Directory of Open Access Journals (Sweden)

    Ito Motohiko

    2013-03-01

    Pulse compression of single 32.6-eV high harmonic pulses from a time-delay compensated monochromator was demonstrated down to 11±3 fs by compensating the pulse-front tilt. The photon flux was intensified up to 5.7×10^9 photons/s on target by implementing high harmonic generation under a phase-matching condition in a hollow fiber used for increasing the interaction length.

  8. Comparative data compression techniques and multi-compression results

    International Nuclear Information System (INIS)

    Hasan, M R; Ibrahimy, M I; Motakabber, S M A; Ferdaus, M M; Khan, M N H

    2013-01-01

    Data compression is very necessary in business data processing because of the cost savings it offers and the large volume of data manipulated in many business applications. It is a method or system for transmitting a digital image (i.e., an array of pixels) from a digital data source to a digital data receiver. The smaller the size of the data, the better the transmission speed and the greater the time savings. In communication, we always want to transmit data efficiently and free of noise. This paper provides some compression techniques for lossless text-type data and comparative results for multiple and single compression, which help to identify the better compression output and to develop compression algorithms.

  9. A test data compression scheme based on irrational numbers stored coding.

    Science.gov (United States)

    Wu, Hai-feng; Cheng, Yu-sheng; Zhan, Wen-fa; Cheng, Yi-fei; Wu, Qiong; Zhu, Shi-juan

    2014-01-01

    Testing has already become an important factor restricting the development of the integrated circuit industry. A new test data compression scheme, namely irrational numbers stored (INS) coding, is presented. To compress test data efficiently, the test data are converted into floating-point numbers and stored in the form of irrational numbers. An algorithm for converting a floating-point number to an irrational number precisely is given. Experimental results for some ISCAS 89 benchmarks show that the compression effect of the proposed scheme is better than that of coding methods such as FDR, AARLC, INDC, FAVLC, and VRL.

  10. Laser-pulse compression in a collisional plasma under weak-relativistic ponderomotive nonlinearity

    International Nuclear Information System (INIS)

    Singh, Mamta; Gupta, D. N.

    2016-01-01

    We present theory and numerical analysis which demonstrate laser-pulse compression in a collisional plasma under the weak-relativistic ponderomotive nonlinearity. Plasma equilibrium density is modified due to the ohmic heating of electrons, the collisions, and the weak relativistic-ponderomotive force during the interaction of a laser pulse with plasmas. First, within one-dimensional analysis, the longitudinal self-compression mechanism is discussed. Three-dimensional analysis (spatiotemporal) of laser pulse propagation is also investigated by coupling the self-compression with the self-focusing. In the regime in which the laser becomes self-focused due to the weak relativistic-ponderomotive nonlinearity, we provide results for enhanced pulse compression. The results show that the matched interplay between self-focusing and self-compression can improve significantly the temporal profile of the compressed pulse. Enhanced pulse compression can be achieved by optimizing and selecting the parameters such as collision frequency, ion-temperature, and laser intensity.

  12. Effectiveness of feedback with a smartwatch for high-quality chest compressions during adult cardiac arrest: A randomized controlled simulation study.

    Science.gov (United States)

    Ahn, Chiwon; Lee, Juncheol; Oh, Jaehoon; Song, Yeongtak; Chee, Youngjoon; Lim, Tae Ho; Kang, Hyunggoo; Shin, Hyungoo

    2017-01-01

    Previous studies have demonstrated the potential for using smartwatches with a built-in accelerometer as feedback devices for high-quality chest compression during cardiopulmonary resuscitation. However, to the best of our knowledge, no previous study has reported the effects of this feedback on chest compressions in action. A randomized, parallel controlled study of 40 senior medical students was conducted to examine the effect of chest compression feedback via a smartwatch during cardiopulmonary resuscitation of manikins. A feedback application was developed for the smartwatch, in which visual feedback was provided for chest compression depth and rate. Vibrations from the smartwatch were used to indicate the chest compression rate. The participants were randomly allocated to the intervention and control groups, and they performed chest compressions on manikins for 2 min continuously with or without feedback, respectively. The proportion of accurate chest compression depth (≥5 cm and ≤6 cm) was assessed as the primary outcome, and the chest compression depth, chest compression rate, and the proportion of complete chest decompression (≤1 cm of residual leaning) were recorded as secondary outcomes. The proportion of accurate chest compression depth in the intervention group was significantly higher than that in the control group (64.6±7.8% versus 43.1±28.3%; p = 0.02). The mean compression depth and rate and the proportion of complete chest decompressions did not differ significantly between the two groups (all p>0.05). Cardiopulmonary resuscitation feedback via a smartwatch could provide assistance with respect to the ideal range of chest compression depth, and this can easily be applied to patients with out-of-hospital arrest by rescuers who wear smartwatches.
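
The primary outcome above is a simple proportion over the guideline depth window and can be computed directly; the sample depths in the usage note are made up for illustration:

```python
def depth_accuracy(depths_cm, low=5.0, high=6.0):
    """Proportion of compressions within the guideline depth range
    (>= 5 cm and <= 6 cm), the study's primary outcome measure."""
    in_range = sum(1 for d in depths_cm if low <= d <= high)
    return in_range / len(depths_cm)
```

For example, for recorded depths [4.8, 5.2, 5.9, 6.3] cm the accuracy is 0.5; a feedback application would compute this over a sliding window of accelerometer-derived depths and display it to the rescuer.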

  13. Performance of a Discrete Wavelet Transform for Compressing Plasma Count Data and its Application to the Fast Plasma Investigation on NASA's Magnetospheric Multiscale Mission

    Science.gov (United States)

    Barrie, Alexander C.; Yeh, Penshu; Dorelli, John C.; Clark, George B.; Paterson, William R.; Adrian, Mark L.; Holland, Matthew P.; Lobell, James V.; Simpson, David G.; Pollock, Craig J.

    2015-01-01

    Plasma measurements in space are becoming increasingly faster, higher resolution, and distributed over multiple instruments. As raw data generation rates can exceed available data transfer bandwidth, data compression is becoming a critical design component. Data compression has been a staple of imaging instruments for years, but only recently have plasma measurement designers become interested in high performance data compression. Missions will often use a simple lossless compression technique yielding compression ratios of approximately 2:1; however, future missions may require compression ratios upwards of 10:1. This study aims to explore how a Discrete Wavelet Transform combined with a Bit Plane Encoder (DWT/BPE), implemented via a CCSDS standard, can be used effectively to compress count information common to plasma measurements to high compression ratios while maintaining little or no compression error. The compression ASIC used for the Fast Plasma Investigation (FPI) on board the Magnetospheric Multiscale mission (MMS) is used for this study. Plasma count data from multiple sources are examined: resampled data from previous missions, randomly generated data from distribution functions, and simulations of expected regimes. These are run through the compression routines with various parameters to yield the greatest possible compression ratio while maintaining little or no error; zero error indicates that fully lossless compression is obtained. Finally, recommendations are made for future missions as to what can be achieved when compressing plasma count data and how best to do so.
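
Why a wavelet transform compresses count data well can be seen with a single Haar decomposition level: for smooth count spectra, nearly all of the signal energy lands in the approximation band, leaving the detail band cheap to bit-plane encode. This is a pure-Python sketch with made-up counts; the flight ASIC implements the full multi-level CCSDS DWT/BPE pipeline, not this toy:

```python
def haar_step(x):
    """One level of the orthonormal Haar wavelet transform (even-length input)."""
    s = 2 ** -0.5
    approx = [s * (a + b) for a, b in zip(x[::2], x[1::2])]   # low-pass band
    detail = [s * (a - b) for a, b in zip(x[::2], x[1::2])]   # high-pass band
    return approx, detail

counts = [10, 11, 10, 12, 11, 10, 11, 10]   # smooth, hypothetical count spectrum
approx, detail = haar_step(counts)

energy = lambda v: sum(c * c for c in v)
# The transform is orthonormal, so energy is preserved, but it is
# concentrated in the approximation band:
print(energy(approx) / (energy(approx) + energy(detail)))
```

The near-zero detail coefficients are exactly what a bit plane encoder truncates or codes in very few bits, which is how high ratios are reached with little or no error on count data.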

  14. Behaviour of concrete under high confinement: study in triaxial compression and in triaxial extension at the mesoscopic scale

    International Nuclear Information System (INIS)

    Dupray, F.

    2008-12-01

    This Ph.D. thesis aims at characterising and modelling the mechanical behaviour of concrete under high confinement at the mesoscopic scale, the scale of the large aggregates and the cementitious matrix. The broader scope of this study is the understanding of concrete behaviour under dynamic loading: a dynamic impact can generate mean pressures around 1 GPa, but the characterisation of a material response in a homogeneous state of stress can only be achieved through quasi-static tests. The experiments conducted at the 3S-R Laboratory have underlined the importance of the aggregates in the triaxial response of concrete. Modelling concrete at the mesoscopic level, as a composite of an aggregate phase and a mortar phase, permits the aggregate effect to be represented. An experimental study of the behaviour of the mortar phase is performed: standard tests as well as hydrostatic and triaxial high-confinement tests are carried out. The parameters of a constitutive model that couples plasticity with a damage law are identified from these tests. This model is able to reproduce the nonlinear compaction of mortar, the damage behaviour under uniaxial tension or compression, and plasticity under high confinement. The biphasic model uses the finite element method with a cubic, regular mesh. A Monte Carlo method is used to place quasi-spherical aggregates that respect the particle size distribution of a reference concrete, and each element is assigned to either the mortar or the aggregate phase. Numerical simulations, whose parameters are identified on the mortar alone, are compared with the experimental tests on this concrete. The simulations reproduce the different phases observed in hydrostatic compression, the evolution of axial moduli under growing confinement, and the limit states experimentally observed under high confinement. The fracture aspect of numerical simulations is comparable with that of

  15. Compressive sensing for urban radar

    CERN Document Server

    Amin, Moeness

    2014-01-01

    With the emergence of compressive sensing and sparse signal reconstruction, approaches to urban radar have shifted toward relaxed constraints on signal sampling schemes in time and space, and toward effectively addressing the logistical difficulties of data acquisition. Traditionally, these challenges have hindered high-resolution imaging by restricting both bandwidth and aperture, and by imposing uniformity and bounds on sampling rates. Compressive Sensing for Urban Radar is the first book to focus on a hybrid of two key areas: compressive sensing and urban sensing. It explains how reliable imaging, tracki

  16. ITERATION FREE FRACTAL COMPRESSION USING GENETIC ALGORITHM FOR STILL COLOUR IMAGES

    Directory of Open Access Journals (Sweden)

    A.R. Nadira Banu Kamal

    2014-02-01

    Full Text Available The storage requirements for images can be excessive if true color and a high perceived image quality are desired. An RGB image may be viewed as a stack of three gray-scale images that, when fed into the red, green and blue inputs of a color monitor, produce a color image on the screen. The large size of many images leads to long, costly transmission times. Hence, an iteration-free fractal algorithm is proposed in this research paper to design an efficient search of the domain pools for colour image compression using a Genetic Algorithm (GA). The proposed methodology reduces the coding time and the intensive computation tasks. Parameters such as image quality, compression ratio and coding time are analyzed. It is observed that the proposed method achieves excellent performance in image quality with a reduction in storage space.

  17. Images compression in nuclear medicine

    International Nuclear Information System (INIS)

    Rebelo, M.S.; Furuie, S.S.; Moura, L.

    1992-01-01

    The performance of two methods for image compression in nuclear medicine was evaluated: the exact LZW method and the approximate Cosine Transform method. The results showed that the approximate method produced images of acceptable quality for visual analysis, with compression rates considerably higher than those of the exact method. (C.G.C.)
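For readers unfamiliar with the exact method above, LZW is a dictionary coder: it grows a table of previously seen byte sequences and emits integer codes, which pays off on the flat, repetitive regions typical of nuclear medicine images while losing nothing diagnostically. A minimal textbook sketch on illustrative toy data (not the study's images):

```python
def lzw_compress(data: bytes) -> list[int]:
    """Textbook LZW: grow a dictionary of byte sequences, emit integer codes."""
    table = {bytes([i]): i for i in range(256)}
    w, out = b"", []
    for c in data:
        wc = w + bytes([c])
        if wc in table:
            w = wc
        else:
            out.append(table[w])
            table[wc] = len(table)
            w = bytes([c])
    if w:
        out.append(table[w])
    return out

def lzw_decompress(codes: list[int]) -> bytes:
    table = {i: bytes([i]) for i in range(256)}
    w = table[codes[0]]
    out = [w]
    for k in codes[1:]:
        entry = table[k] if k in table else w + w[:1]   # cScSc corner case
        out.append(entry)
        table[len(table)] = w + entry[:1]
        w = entry
    return b"".join(out)

scan = b"ABABABAB" * 40   # repetitive, like flat regions of a scintigram
codes = lzw_compress(scan)
assert lzw_decompress(codes) == scan   # exact: no diagnostic information lost
print(len(scan), "bytes ->", len(codes), "codes")
```

The DCT-based method trades this exactness for much higher ratios by quantizing transform coefficients, which is why it was judged on visual quality.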

  18. Interleaved EPI diffusion imaging using SPIRiT-based reconstruction with virtual coil compression.

    Science.gov (United States)

    Dong, Zijing; Wang, Fuyixue; Ma, Xiaodong; Zhang, Zhe; Dai, Erpeng; Yuan, Chun; Guo, Hua

    2018-03-01

    To develop a novel diffusion imaging reconstruction framework based on iterative self-consistent parallel imaging reconstruction (SPIRiT) for multishot interleaved echo planar imaging (iEPI), with computation acceleration by virtual coil compression. As a general approach for autocalibrating parallel imaging, SPIRiT improves the performance of traditional generalized autocalibrating partially parallel acquisitions (GRAPPA) methods in that the formulation with self-consistency is better conditioned, suggesting SPIRiT to be a better candidate in k-space-based reconstruction. In this study, a general SPIRiT framework is adopted to incorporate both coil sensitivity and phase variation information as virtual coils and then is applied to 2D navigated iEPI diffusion imaging. To reduce the reconstruction time when using a large number of coils and shots, a novel shot-coil compression method is proposed for computation acceleration in Cartesian sampling. Simulations and in vivo experiments were conducted to evaluate the performance of the proposed method. Compared with the conventional coil compression, the shot-coil compression achieved higher compression rates with reduced errors. The simulation and in vivo experiments demonstrate that the SPIRiT-based reconstruction outperformed the existing method, realigned GRAPPA, and provided superior images with reduced artifacts. The SPIRiT-based reconstruction with virtual coil compression is a reliable method for high-resolution iEPI diffusion imaging. Magn Reson Med 79:1525-1531, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  19. Isentropic Compression of Argon

    International Nuclear Information System (INIS)

    Oona, H.; Solem, J.C.; Veeser, L.R.; Ekdahl, C.A.; Rodriquez, P.J.; Younger, S.M.; Lewis, W.; Turley, W.D.

    1997-01-01

    We are studying the transition of argon from an insulator to a conductor by compressing the frozen gas isentropically to pressures at which neighboring atomic orbitals overlap sufficiently to allow some electron motion between atoms. Argon and the other rare gases have closed electron shells and therefore remain monatomic, even when they solidify. Their simple structure makes it likely that any measured change in conductivity is due to changes in the atomic structure, not in molecular configuration. As the crystal is compressed, the band gap closes, allowing increased conductivity. We have begun research to determine the conductivity at high pressures, and it is our intention to determine the compression at which the crystal becomes a metal.

  20. Image Compression using Haar and Modified Haar Wavelet Transform

    Directory of Open Access Journals (Sweden)

    Mohannad Abid Shehab Ahmed

    2013-04-01

    Full Text Available Efficient image compression approaches can provide the best solutions to the recent growth of data-intensive and multimedia-based applications. As presented in many papers, Haar matrix-based methods and wavelet analysis can be used in various areas of image processing such as edge detection, preserving, smoothing or filtering. In this paper, color image compression analysis and synthesis based on Haar and modified Haar is presented. The standard Haar wavelet transform with N=2 is composed of a sequence of low-pass and high-pass filters, known as a filter bank. The vertical and horizontal Haar filters are composed to construct four 2-dimensional filters, which are applied directly to the image to speed up the implementation of the Haar wavelet transform. The modified Haar technique is studied and implemented for odd bases (N=3 and N=5) to generate many solution sets, which are tested using the energy function or a numerical method to get the optimum one. The Haar transform is simple, efficient in memory usage due to its high spread of zero values (it can exploit sparsity), and exactly reversible without the edge effects of the DCT (Discrete Cosine Transform). The implemented Matlab simulation results prove the effectiveness of DWT (Discrete Wavelet Transform) algorithms based on Haar and modified Haar techniques in attaining an efficient compression ratio (C.R.) and a higher peak signal-to-noise ratio (PSNR), with resulting images much smoother than standard JPEG, especially at high C.R. Finally, a comparison between the standard JPEG, Haar, and modified Haar techniques confirms the superior capability of modified Haar.
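As a concrete illustration of the standard N=2 filter bank described above, the following sketch builds the four 2-D subbands (LL, LH, HL, HH) from separable low-/high-pass Haar filters and checks exact reversibility and the high zero spread on a smooth synthetic image (the paper's modified odd-base variants are not reproduced here):

```python
import numpy as np

def haar2d(img):
    """One level of the standard 2-D Haar transform (N=2): low/high-pass
    Haar filtering along rows, then columns, yields the four subbands."""
    a = (img[:, 0::2] + img[:, 1::2]) / 2    # row low-pass
    d = (img[:, 0::2] - img[:, 1::2]) / 2    # row high-pass
    LL = (a[0::2] + a[1::2]) / 2
    LH = (a[0::2] - a[1::2]) / 2
    HL = (d[0::2] + d[1::2]) / 2
    HH = (d[0::2] - d[1::2]) / 2
    return LL, LH, HL, HH

def ihaar2d(LL, LH, HL, HH):
    a = np.empty((2 * LL.shape[0], LL.shape[1]))
    a[0::2], a[1::2] = LL + LH, LL - LH
    d = np.empty_like(a)
    d[0::2], d[1::2] = HL + HH, HL - HH
    img = np.empty((a.shape[0], 2 * a.shape[1]))
    img[:, 0::2], img[:, 1::2] = a + d, a - d
    return img

img = np.outer(np.linspace(0, 255, 64), np.ones(64))   # smooth gradient image
LL, LH, HL, HH = haar2d(img)

# Exactly reversible, and the detail subbands of a smooth image are ~zero:
# that "high zero spread" is what makes Haar coefficients cheap to store.
print("reconstruction exact:", np.allclose(ihaar2d(LL, LH, HL, HH), img))
print("max |HH|:", np.abs(HH).max())
```

Compression then keeps LL and entropy-codes the mostly zero detail subbands, which is where the C.R. gains come from.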

  1. Optimum SNR data compression in hardware using an Eigencoil array.

    Science.gov (United States)

    King, Scott B; Varosi, Steve M; Duensing, G Randy

    2010-05-01

    With the number of receivers available on clinical MRI systems now ranging from 8 to 32 channels, data compression methods are being explored to lessen the demands on the computer for data handling and processing. Although software-based methods of compression after reception lessen computational requirements, a hardware-based method before the receiver also reduces the number of receive channels required. An eight-channel Eigencoil array is constructed by placing a hardware radiofrequency signal combiner inline after preamplification, before the receiver system. The Eigencoil array produces signal-to-noise ratio (SNR) of an optimal reconstruction using a standard sum-of-squares reconstruction, with peripheral SNR gains of 30% over the standard array. The concept of "receiver channel reduction" or MRI data compression is demonstrated, with optimal SNR using only four channels, and with a three-channel Eigencoil, superior sum-of-squares SNR was achieved over the standard eight-channel array. A three-channel Eigencoil portion of a product neurovascular array confirms in vivo SNR performance and demonstrates parallel MRI up to R = 3. This SNR-preserving data compression method advantageously allows users of MRI systems with fewer receiver channels to achieve the SNR of higher-channel MRI systems. (c) 2010 Wiley-Liss, Inc.
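The hardware combiner in the Eigencoil approach effectively applies the dominant eigenvectors of the channel covariance before the receivers. A software analogue (simulated data and hypothetical dimensions; the actual Eigencoil weights come from noise-covariance measurements of the physical array) shows why a few eigen-channels can retain nearly all of the SNR:

```python
import numpy as np

rng = np.random.default_rng(1)
# Simulated 8-channel acquisition: a few underlying "modes" plus channel
# noise, mimicking the redundancy between physical coil elements.
n_ch, n_modes, n_samp = 8, 3, 2000
mixing = rng.normal(size=(n_ch, n_modes))
data = mixing @ rng.normal(size=(n_modes, n_samp))
data += 0.05 * rng.normal(size=(n_ch, n_samp))

# Eigendecomposition of the channel covariance gives the combiner weights;
# keeping the top-k eigenvectors compresses 8 channels to k virtual channels.
cov = data @ data.T / n_samp
w, v = np.linalg.eigh(cov)
order = np.argsort(w)[::-1]
combiner = v[:, order[:4]].T          # 4 x 8 "signal combiner" matrix
virtual = combiner @ data             # 4 virtual receive channels

retained = w[order[:4]].sum() / w.sum()
print(f"energy retained by 4 of 8 channels: {retained:.4f}")
```

Because the discarded eigen-channels carry almost pure noise, a sum-of-squares reconstruction on the virtual channels loses essentially no SNR, mirroring the paper's four-channel result.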

  2. Annual report on the high temperature triaxial compression device

    International Nuclear Information System (INIS)

    Williams, N.D.; Menk, P.; Tully, R.; Houston, W.N.

    1981-01-01

    The investigation of the environmental effects on the mechanical and engineering properties of deep-sea sediments was initiated on June 15, 1980. The task is divided into three categories: first, the design and fabrication of a High Temperature Triaxial Compression Device (HITT); second, an investigation of the mechanical and engineering properties of deep-sea sediments at temperatures ranging from 277 to 473 K; third, assistance in the development of constitutive relationships and an analytical model describing the temperature-dependent creep deformations of deep-sea sediments. The environmental conditions under which the soil specimens are to be tested are variations in temperature from 277 to 473 K. The corresponding water pressure will vary up to about 2.75 MPa, as required to prevent boiling of the water and assure saturation of the test specimens. Two groups of tests are to be performed: first, triaxial compression tests during which strength measurements and constant-head permeability determinations shall be made; second, constant-stress creep tests, during which axial and lateral strains shall be measured. In addition to the aforementioned variables, data shall also be acquired to incorporate the effects of consolidation history, strain rate, and heating rate. The bulk of the triaxial tests are to be performed undrained. The strength-measurement tests are to be constant-rate-of-strain tests and the creep tests constant-stress tests. The study of the mechanical properties of deep-sea sediments as a function of temperature is an integrated program

  3. Expandable image compression system: A modular approach

    International Nuclear Information System (INIS)

    Ho, B.K.T.; Chan, K.K.; Ishimitsu, Y.; Lo, S.C.; Huang, H.K.

    1987-01-01

    The full-frame bit allocation algorithm for radiological image compression developed in the authors' laboratory can achieve compression ratios as high as 30:1. The software development and clinical evaluation of this algorithm have been completed. It involves two stages of operation: a two-dimensional discrete cosine transform, and pixel quantization in the transform space with pixel depth kept accountable by a bit allocation table. The authors' design took an expandable modular approach based on the VME bus system, which has a maximum data transfer rate of 48 Mbytes per second, with a Motorola 68020 microprocessor as the master controller. The transform modules are based on advanced digital signal processor (DSP) chips microprogrammed to perform fast cosine transforms. Four DSPs built into a single-board transform module can process a 1K x 1K image in 1.7 seconds, and additional transform modules working in parallel can be added if even greater speed is desired. The flexibility inherent in the microcode extends the capabilities of the system to images of variable sizes; the design allows for a maximum image size of 2K x 2K.

  4. Fixed-Rate Compressed Floating-Point Arrays.

    Science.gov (United States)

    Lindstrom, Peter

    2014-12-01

    Current compression schemes for floating-point data commonly take fixed-precision values and compress them to a variable-length bit stream, complicating memory management and random access. We present a fixed-rate, near-lossless compression scheme that maps small blocks of 4^d values in d dimensions to a fixed, user-specified number of bits per block, thereby allowing read and write random access to compressed floating-point data at block granularity. Our approach is inspired by fixed-rate texture compression methods widely adopted in graphics hardware, but has been tailored to the high dynamic range and precision demands of scientific applications. Our compressor is based on a new, lifted, orthogonal block transform and embedded coding, allowing each per-block bit stream to be truncated at any point if desired, thus facilitating bit rate selection using a single compression scheme. To avoid compression or decompression upon every data access, we employ a software write-back cache of uncompressed blocks. Our compressor has been designed with computational simplicity and speed in mind to allow for the possibility of a hardware implementation, and uses only a small number of fixed-point arithmetic operations per compressed value. We demonstrate the viability and benefits of lossy compression in several applications, including visualization, quantitative data analysis, and numerical simulation.
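The key property of a fixed-rate scheme is that every block of 4^d values maps to the same number of bits, so compressed blocks can be indexed directly for random access. The toy quantizer below (a shared per-block scale plus fixed-width integers, far simpler than the lifted orthogonal transform and embedded coding of the actual compressor) demonstrates that property and the near-lossless behaviour:

```python
import numpy as np

BITS = 8   # fixed bits per value: every block compresses to the same size

def compress_block(block):
    """Toy fixed-rate coder: one shared scale per block plus BITS-bit
    integers, so each compressed block occupies a fixed number of bits."""
    scale = float(np.max(np.abs(block))) or 1.0
    q = np.round(block / scale * (2**(BITS - 1) - 1)).astype(np.int32)
    return scale, q

def decompress_block(scale, q):
    return q.astype(np.float64) / (2**(BITS - 1) - 1) * scale

rng = np.random.default_rng(2)
data = rng.normal(size=64).reshape(16, 4)    # blocks of 4**1 values (d = 1)
out = np.vstack([decompress_block(*compress_block(b)) for b in data])

# Near-lossless at 8 bits/value; random access is trivial because block i
# always starts at the same fixed bit offset.
rel_err = np.abs(out - data).max() / np.abs(data).max()
print("max relative error:", rel_err)
```

Embedded coding improves on this by ordering bits so the per-block stream can be cut at any point, giving a continuously tunable rate from one scheme.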

  5. Compressed Data Transmission Among Nodes in BigData

    OpenAIRE

    Thirunavukarasu B; Sudhahar V M; VasanthaKumar U; Dr Kalaikumaran T; Dr Karthik S

    2014-01-01

    Many organizations now deal with large amounts of data. Traditionally they used relational data, but nowadays they must also handle structured and semi-structured data. To work effectively, these organizations use virtualization, parallel processing, compression, etc., of which compression is the most effective. Transmitting high volumes of data usually incurs long transmission times. This compression of unstructured data is immediately done when the data is being trans...

  6. Channel coding/decoding alternatives for compressed TV data on advanced planetary missions.

    Science.gov (United States)

    Rice, R. F.

    1972-01-01

    The compatibility of channel coding/decoding schemes with a specific TV compressor developed for advanced planetary missions is considered. Under certain conditions, it is shown that compressed data can be transmitted at approximately the same rate as uncompressed data without any loss in quality. Thus, the full gains of data compression can be achieved in real-time transmission.

  7. Multiband and Lossless Compression of Hyperspectral Images

    Directory of Open Access Journals (Sweden)

    Raffaele Pizzolante

    2016-02-01

    Full Text Available Hyperspectral images are widely used in several real-life applications. In this paper, we investigate the compression of hyperspectral images by considering different aspects, including the optimization of the computational complexity in order to allow implementations on limited hardware (e.g., hyperspectral sensors). We present an approach that relies on a three-dimensional predictive structure. Our predictive structure, 3D-MBLP, uses one or more previous bands as references to exploit the redundancies along the third dimension. The achieved results are comparable with, and often better than, those of other state-of-the-art lossless compression techniques for hyperspectral images.
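The idea of using previous bands as references can be sketched as a simple least-squares inter-band predictor: only the (small) integer residuals need entropy coding, and adding the stored residual back to the rounded prediction makes decompression exact. The cube below is synthetic and the predictor is a plain affine fit, not the actual 3D-MBLP context model:

```python
import numpy as np

rng = np.random.default_rng(3)
# Toy hyperspectral cube: each band is a scaled/shifted version of one scene,
# so adjacent bands are highly correlated (integer sample values).
scene = rng.integers(0, 1000, size=(64, 64))
cube = np.stack([(scene * (1 + 0.01 * b) + 5 * b).astype(np.int64)
                 for b in range(8)])

def interband_residual(prev, cur):
    """Affine least-squares predictor from the previous band; storing the
    exact integer residual next to the rounded prediction is lossless."""
    A = np.vstack([prev.ravel(), np.ones(prev.size)]).T
    (a, c), *_ = np.linalg.lstsq(A, cur.ravel().astype(np.float64), rcond=None)
    pred = np.rint(a * prev + c).astype(np.int64)
    return cur - pred            # small residuals -> cheap entropy coding

residuals = [interband_residual(cube[b - 1], cube[b]) for b in range(1, 8)]
print("max |residual|:", max(int(np.abs(r).max()) for r in residuals),
      "vs max sample:", int(cube.max()))
```

The residuals span a few counts while the raw samples span ~1000, which is the redundancy along the spectral dimension that the scheme exploits.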

  8. 30 CFR 75.1730 - Compressed air; general; compressed air systems.

    Science.gov (United States)

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Compressed air; general; compressed air systems... Compressed air; general; compressed air systems. (a) All pressure vessels shall be constructed, installed... Safety and Health district office. (b) Compressors and compressed-air receivers shall be equipped with...

  9. Real time network traffic monitoring for wireless local area networks based on compressed sensing

    Science.gov (United States)

    Balouchestani, Mohammadreza

    2017-05-01

    A wireless local area network (WLAN) is an important type of wireless network that connects different wireless nodes in a local area. WLANs suffer from significant problems such as network load balancing, high energy consumption, and sampling load. This paper presents a new network traffic approach based on Compressed Sensing (CS) for improving the quality of WLANs. The proposed architecture reduces the Data Delay Probability (DDP) to 15%, a good record for WLANs, and increases Data Throughput (DT) by 22% and the Signal-to-Noise (S/N) ratio by 17%, providing a good foundation for establishing high-quality local area networks. This architecture enables continuous data acquisition and compression of WLAN signals and is suitable for a variety of other wireless networking applications. At the transmitter side of each wireless node, an analog-CS framework is applied at the sensing step, before the analog-to-digital converter, in order to generate a compressed version of the input signal. At the receiver side, a reconstruction algorithm is applied in order to reconstruct the original signals from the compressed signals with high probability and sufficient accuracy. The proposed algorithm outperforms existing algorithms by achieving a good level of Quality of Service (QoS), which allows the Bit Error Rate (BER) at each wireless node to be reduced by 15%.
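The transmitter/receiver split described above is the standard CS pattern: random linear measurements at the node, sparse recovery at the receiver. A minimal sketch with a random Gaussian sensing matrix and Orthogonal Matching Pursuit as the reconstruction algorithm (the abstract does not specify its solver, so OMP is an assumption here, and the signal is synthetic):

```python
import numpy as np

rng = np.random.default_rng(4)
n, m, k = 256, 128, 4                  # signal length, measurements, sparsity
x = np.zeros(n)
idx = rng.choice(n, size=k, replace=False)
x[idx] = rng.uniform(2.0, 3.0, size=k) * rng.choice([-1.0, 1.0], size=k)

Phi = rng.normal(size=(m, n)) / np.sqrt(m)   # random Gaussian sensing matrix
y = Phi @ x                                  # compressed measurements (2x fewer)

def omp(Phi, y, k):
    """Orthogonal Matching Pursuit: greedily pick the atom most correlated
    with the residual, then re-fit by least squares on the chosen support."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

x_hat = omp(Phi, y, k)
rel_err = np.linalg.norm(x_hat - x) / np.linalg.norm(x)
print("relative reconstruction error:", rel_err)
```

The node transmits only the m measurements, halving the sampling and transmission load; recovery succeeds with high probability because the signal is sparse.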

  10. High Temperature Uniaxial Compression and Stress-Relaxation Behavior of India-Specific RAFM Steel

    Science.gov (United States)

    Shah, Naimish S.; Sunil, Saurav; Sarkar, Apu

    2018-05-01

    India-specific reduced activation ferritic martensitic steel (INRAFM), a modified 9Cr-1Mo grade, has been developed by India as its own structural material for fabrication of the Indian Test Blanket Module (TBM) to be installed in the International Thermonuclear Experimental Reactor (ITER). An extensive study of the mechanical and physical properties of this material is currently ongoing for appraisal before it is put to use in the ITER. The high-temperature compression, stress-relaxation, and strain-rate change behavior of INRAFM steel have been investigated. Optical and scanning electron microscopic characterizations were carried out to observe the microstructural changes that occur during the uniaxial compressive deformation test. Comparable true plastic stress values at 300 °C and 500 °C, and a large drop in true plastic stress at 600 °C, were observed during the compression test. Stress-relaxation behavior was investigated at 500 °C, 550 °C, and 600 °C at a strain rate of 10^-3 s^-1. The creep properties of the steel at different temperatures were predicted from the stress-relaxation test. Norton's stress exponent (n) was found to decrease with increasing temperature. Using the Bird-Mukherjee-Dorn relationship, the temperature-compensated normalized strain rate was plotted against stress, and a stress exponent (n) value of 10.05 was obtained from the normalized plot. The strain-rate change test showed the strain rate sensitivity (m) increasing with test temperature, with low plastic stability (m ≈ 0.06) observed at 600 °C. The activation volume (V*) values were in the range of 100 to 300 b^3. By comparing the experimental values with the literature, the rate-controlling mechanisms in the thermally activated region of high temperature were found to be the nonconservative movement of jogged screw dislocations and the thermal breaking of attractive junctions.

  11. Compressed normalized block difference for object tracking

    Science.gov (United States)

    Gao, Yun; Zhang, Dengzhuo; Cai, Donglan; Zhou, Hao; Lan, Ge

    2018-04-01

    Feature extraction is very important for robust and real-time tracking, and compressive sensing provides technical support for real-time feature extraction. However, existing compressive trackers have all been based on compressed Haar-like features, and how to compress other, more discriminative high-dimensional features is worth researching. In this paper, a novel compressed normalized block difference (CNBD) feature is proposed. To resist noise effectively in the high-dimensional normalized pixel difference (NPD) feature, the normalized block difference feature extends the two pixels in the original NPD formula to two blocks. A CNBD feature is obtained by compressing a normalized block difference feature based on compressive sensing theory, with a sparse random Gaussian matrix as the measurement matrix. Comparative experiments with 7 trackers on 20 challenging sequences showed that the tracker based on the CNBD feature performs better than the other trackers, especially the FCT tracker based on compressed Haar-like features, in terms of AUC, SR and precision.
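The compression step of CNBD, like other compressive-tracking features, is a sparse random projection: a wide, mostly zero Gaussian matrix maps the high-dimensional feature to a short vector while approximately preserving distances (the Johnson-Lindenstrauss property). A sketch with hypothetical dimensions and a 1%-dense matrix (the feature values here are random stand-ins, not actual block differences):

```python
import numpy as np

rng = np.random.default_rng(5)
d_high, d_low, n = 10000, 400, 20      # feature dims before/after compression

# Sparse random Gaussian measurement matrix, as in compressive tracking:
# ~99% of entries are zero, so the projection is cheap enough for real time.
p = 0.01
mask = rng.random((d_low, d_high)) < p
R = np.where(mask, rng.normal(size=(d_low, d_high)), 0.0) / np.sqrt(p * d_low)

feats = rng.normal(size=(n, d_high))   # stand-ins for block-difference features
comp = feats @ R.T                     # compressed 400-dimensional features

# Johnson-Lindenstrauss-style check: pairwise distances survive compression,
# so a classifier can discriminate targets in the low-dimensional space.
ratio = (np.linalg.norm(comp[0] - comp[1]) /
         np.linalg.norm(feats[0] - feats[1]))
print("distance ratio after compression:", ratio)
```

Because the matrix is fixed and sparse, only the few nonzero entries per row need to be evaluated per frame, which is what makes this practical for real-time tracking.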

  12. Free-beam soliton self-compression in air

    Science.gov (United States)

    Voronin, A. A.; Mitrofanov, A. V.; Sidorov-Biryukov, D. A.; Fedotov, A. B.; Pugžlys, A.; Panchenko, V. Ya; Shumakova, V.; Ališauskas, S.; Baltuška, A.; Zheltikov, A. M.

    2018-02-01

    We identify a physical scenario whereby soliton transients generated in freely propagating laser beams within the regions of anomalous dispersion in air can be compressed as a part of their free-beam spatiotemporal evolution to yield few-cycle mid- and long-wavelength-infrared field waveforms, whose peak power is substantially higher than the peak power of the input pulses. We show that this free-beam soliton self-compression scenario does not require ionization or laser-induced filamentation, enabling high-throughput self-compression of mid- and long-wavelength-infrared laser pulses within a broad range of peak powers from tens of gigawatts up to the terawatt level. We also demonstrate that this method of pulse compression can be extended to long-range propagation, providing self-compression of high-peak-power laser pulses in atmospheric air within propagation ranges as long as hundreds of meters, suggesting new ways towards longer-range standoff detection and remote sensing.

  13. Study of CSR longitudinal bunch compression cavity

    International Nuclear Information System (INIS)

    Yin Dayu; Li Peng; Liu Yong; Xie Qingchun

    2009-01-01

    The scheme of a longitudinal bunch compression cavity for the Cooling Storage Ring (CSR) is an important issue. Plasma physics experiments require high-density heavy-ion beams and short pulsed bunches, which can be produced by non-adiabatic compression of the bunch: a fast compression with a 90-degree rotation in the longitudinal phase space. The phase-space rotation in fast compression is initiated by a fast jump of the RF voltage amplitude. For this purpose, the CSR longitudinal bunch compression cavity, loaded with FINEMET FT-1M, is studied and simulated with the MAFIA code. In this paper, the CSR longitudinal bunch compression cavity is simulated, and the initial bunch length of 238U72+ at 250 MeV/u will be compressed from 200 ns to 50 ns. The construction and RF properties of the cavity are also simulated and calculated with the MAFIA code. The operation frequency of the cavity is 1.15 MHz with a peak voltage of 80 kV, and the cavity can be used to compress heavy ions in the CSR. (authors)

  14. Stem compression reversibly reduces phloem transport in Pinus sylvestris trees.

    Science.gov (United States)

    Henriksson, Nils; Tarvainen, Lasse; Lim, Hyungwoo; Tor-Ngern, Pantana; Palmroth, Sari; Oren, Ram; Marshall, John; Näsholm, Torgny

    2015-10-01

    Manipulating tree belowground carbon (C) transport enables investigation of the ecological and physiological roles of tree roots and their associated mycorrhizal fungi, as well as a range of other soil organisms and processes. Girdling remains the most reliable method for manipulating this flux and it has been used in numerous studies. However, girdling is destructive and irreversible. Belowground C transport is mediated by phloem tissue, pressurized through the high osmotic potential resulting from its high content of soluble sugars. We speculated that phloem transport may be reversibly blocked through the application of an external pressure on tree stems. Thus, we here introduce a technique based on compression of the phloem, which interrupts belowground flow of assimilates, but allows trees to recover when the external pressure is removed. Metal clamps were wrapped around the stems and tightened to achieve a pressure theoretically sufficient to collapse the phloem tissue, thereby aiming to block transport. The compression's performance was tested in two field experiments: a (13)C canopy labelling study conducted on small Scots pine (Pinus sylvestris L.) trees [2-3 m tall, 3-7 cm diameter at breast height (DBH)] and a larger study involving mature pines (∼15 m tall, 15-25 cm DBH) where stem respiration, phloem and root carbohydrate contents, and soil CO2 efflux were measured. The compression's effectiveness was demonstrated by the successful blockage of (13)C transport. Stem compression doubled stem respiration above treatment, reduced soil CO2 efflux by 34% and reduced phloem sucrose content by 50% compared with control trees. Stem respiration and soil CO2 efflux returned to normal within 3 weeks after pressure release, and (13)C labelling revealed recovery of phloem function the following year. Thus, we show that belowground phloem C transport can be reduced by compression, and we also demonstrate that trees recover after treatment, resuming C

  15. Comparison of chest compression quality between the modified chest compression method with the use of smartphone application and the standardized traditional chest compression method during CPR.

    Science.gov (United States)

    Park, Sang-Sub

    2014-01-01

    The purpose of this study is to assess the difference in chest compression quality between a modified chest compression method using a smartphone application and the standardized traditional chest compression method. Of the 70 people who agreed to participate after completing the CPR curriculum, 64 took part (6 were absent). Participants were divided into a smartphone group (33 people), using the modified chest compression method, and a traditional group (31 people), using the standardized method. Both groups used the same manikins for practice and evaluation, and the smartphone group used applications running on the Android and iOS operating systems (OS) of two smartphone products (G, i). Measurements were conducted from September 25th to 26th, 2012, and data were analyzed with the SPSS WIN 12.0 program. Compression depth was more appropriate (p < 0.01) in the traditional group (53.77 mm) than in the smartphone group (48.35 mm), and the proportion of proper chest compressions was also higher (p < 0.05) in the traditional group (73.96%) than in the smartphone group (60.51%). Awareness of chest compression accuracy was higher (p < 0.001) in the traditional group (3.83 points) than in the smartphone group (2.32 points). In an additional single-question survey administered only to the smartphone group, the main objections to the modified method were the occurrence of hand-back pain in the rescuer (48.5%) and unstable posture (21.2%).

  16. Exploring High-Achieving Students' Images of Mathematicians

    Science.gov (United States)

    Aguilar, Mario Sánchez; Rosas, Alejandro; Zavaleta, Juan Gabriel Molina; Romo-Vázquez, Avenilde

    2016-01-01

    The aim of this study is to describe the images that a group of high-achieving Mexican students hold of mathematicians. For this investigation, we used a research method based on the Draw-A-Scientist Test (DAST) with a sample of 63 Mexican high school students. The group of students' pictorial and written descriptions of mathematicians assisted us…

  17. Performance evaluation of breast image compression techniques

    Energy Technology Data Exchange (ETDEWEB)

    Anastassopoulos, G; Lymberopoulos, D [Wire Communications Laboratory, Electrical Engineering Department, University of Patras, Greece (Greece); Panayiotakis, G; Bezerianos, A [Medical Physics Department, School of Medicine, University of Patras, Greece (Greece)

    1994-12-31

    Novel diagnosis-oriented teleworking systems manipulate, store, and process medical data through real-time communication and conferencing schemes. One of the most important factors affecting the performance of these systems is image handling. Compression algorithms can be applied to the medical images in order to minimize (a) the volume of data to be stored in the database, (b) the bandwidth demanded from the network, and (c) the transmission costs, and to maximize the speed of data transmission. In this paper, all the factors of the process that affect the presentation of breast images are estimated, from the time the images are produced by a modality until the compressed images are stored or transmitted over a broadband network (e.g., B-ISDN). The images used were scanned images of the TOR(MAX) Leeds breast phantom, as well as typical breast images. A comparison of seven compression techniques has been done, based on objective criteria such as Mean Square Error (MSE), resolution, contrast, etc. The user can choose the appropriate compression ratio in order to achieve the desired image quality. (authors). 12 refs, 4 figs.

  18. A Test Data Compression Scheme Based on Irrational Numbers Stored Coding

    Directory of Open Access Journals (Sweden)

    Hai-feng Wu

    2014-01-01

Full Text Available Testing has become an important factor restricting the development of the integrated circuit industry. A new test data compression scheme, namely irrational numbers stored (INS), is presented. To compress test data efficiently, the test data are converted into floating-point numbers and stored in the form of irrational numbers. An algorithm for precisely converting a floating-point number to an irrational number is given. Experimental results for some ISCAS 89 benchmarks show that the compression achieved by the proposed scheme is better than that of coding methods such as FDR, AARLC, INDC, FAVLC, and VRL.

  19. Efficient predictive algorithms for image compression

    CERN Document Server

    Rosário Lucas, Luís Filipe; Maciel de Faria, Sérgio Manuel; Morais Rodrigues, Nuno Miguel; Liberal Pagliari, Carla

    2017-01-01

    This book discusses efficient prediction techniques for the current state-of-the-art High Efficiency Video Coding (HEVC) standard, focusing on the compression of a wide range of video signals, such as 3D video, Light Fields and natural images. The authors begin with a review of the state-of-the-art predictive coding methods and compression technologies for both 2D and 3D multimedia contents, which provides a good starting point for new researchers in the field of image and video compression. New prediction techniques that go beyond the standardized compression technologies are then presented and discussed. In the context of 3D video, the authors describe a new predictive algorithm for the compression of depth maps, which combines intra-directional prediction, with flexible block partitioning and linear residue fitting. New approaches are described for the compression of Light Field and still images, which enforce sparsity constraints on linear models. The Locally Linear Embedding-based prediction method is in...

  20. Early predictors of high school mathematics achievement.

    Science.gov (United States)

    Siegler, Robert S; Duncan, Greg J; Davis-Kean, Pamela E; Duckworth, Kathryn; Claessens, Amy; Engel, Mimi; Susperreguy, Maria Ines; Chen, Meichu

    2012-07-01

    Identifying the types of mathematics content knowledge that are most predictive of students' long-term learning is essential for improving both theories of mathematical development and mathematics education. To identify these types of knowledge, we examined long-term predictors of high school students' knowledge of algebra and overall mathematics achievement. Analyses of large, nationally representative, longitudinal data sets from the United States and the United Kingdom revealed that elementary school students' knowledge of fractions and of division uniquely predicts those students' knowledge of algebra and overall mathematics achievement in high school, 5 or 6 years later, even after statistically controlling for other types of mathematical knowledge, general intellectual ability, working memory, and family income and education. Implications of these findings for understanding and improving mathematics learning are discussed.

  1. Addition of Audiovisual Feedback During Standard Compressions Is Associated with Improved Ability

    Directory of Open Access Journals (Sweden)

    Nicholas Asakawa

    2018-02-01

Full Text Available Introduction: A benefit of in-hospital cardiac arrest is the opportunity for rapid initiation of “high-quality” chest compressions as defined by current American Heart Association (AHA) adult guidelines: a depth of 2–2.4 inches, full chest recoil, a rate of 100–120 per minute, and minimal interruptions with a chest compression fraction (CCF) ≥ 60%. The goal of this study was to assess the effect of audiovisual feedback on the ability to maintain high-quality chest compressions per the 2015 updated guidelines. Methods: Ninety-eight participants were randomized into four groups. Participants were randomly assigned to perform chest compressions with or without use of audiovisual feedback (+/− AVF). Participants were further assigned to perform either standard compressions with a ventilation ratio of 30:2, to simulate cardiopulmonary resuscitation (CPR) without an advanced airway, or continuous chest compressions, to simulate CPR with an advanced airway. The primary outcome measured was the ability to maintain high-quality chest compressions as defined by the 2015 AHA guidelines. Results: Overall comparisons between continuous and standard chest compressions (n=98) showed no significant differences in chest compression dynamics (p's > 0.05). Overall comparisons between +/− AVF (n=98) were significant for differences in the average rate of compressions per minute (p = 0.0241) and the proportion of chest compressions within guideline rate recommendations (p = 0.0084). There was a significant difference in the proportion of high-quality chest compressions favoring AVF (p = 0.0399). Comparisons between chest compression strategy groups +/− AVF were significant for differences in compression dynamics favoring AVF (p's < 0.05). Conclusion: Overall, AVF is associated with greater ability to maintain high-quality chest compressions per the most recent AHA guidelines. Specifically, AVF was associated with a greater proportion of compressions within ideal rate with
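The four guideline criteria quoted above amount to simple threshold checks; a hypothetical helper (not an instrument used in the study) that tests one set of recorded compression metrics against the 2015 AHA targets:

```python
def meets_aha_guidelines(depth_in, rate_per_min, full_recoil, ccf):
    """True if all four 2015 AHA 'high-quality' criteria cited above are met.
    (Hypothetical helper; not part of the study's apparatus.)"""
    return (2.0 <= depth_in <= 2.4          # depth 2-2.4 inches
            and 100 <= rate_per_min <= 120  # rate 100-120 per minute
            and full_recoil                 # full chest recoil
            and ccf >= 0.60)                # chest compression fraction >= 60%

print(meets_aha_guidelines(2.2, 110, True, 0.75))   # True
print(meets_aha_guidelines(2.2, 130, True, 0.75))   # False: rate too high
```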

  2. Experimental investigation on high temperature anisotropic compression properties of ceramic-fiber-reinforced SiO{sub 2} aerogel

    Energy Technology Data Exchange (ETDEWEB)

    Shi, Duoqi; Sun, Yantao [School of Energy and Power Engineering, Beihang University, P.O. Box 405, Beijing 100191 (China); Feng, Jian [National Key Laboratory of Science and Technology on Advanced Ceramic Fibers and Composites, College of Aerospace Science and Engineering, National University of Defense Technology, Changsha 410073 (China); Yang, Xiaoguang, E-mail: yxg@buaa.edu.cn [School of Energy and Power Engineering, Beihang University, P.O. Box 405, Beijing 100191 (China); Han, Shiwei; Mi, Chunhu [School of Energy and Power Engineering, Beihang University, P.O. Box 405, Beijing 100191 (China); Jiang, Yonggang [National Key Laboratory of Science and Technology on Advanced Ceramic Fibers and Composites, College of Aerospace Science and Engineering, National University of Defense Technology, Changsha 410073 (China); Qi, Hongyu [School of Energy and Power Engineering, Beihang University, P.O. Box 405, Beijing 100191 (China)

    2013-11-15

Compression tests were conducted on a ceramic-fiber-reinforced SiO{sub 2} aerogel at high temperature. Anisotropic mechanical properties were found. The in-plane Young's modulus is more than 10 times higher than the out-of-plane modulus, but the in-plane fracture strain is lower by a factor of about 100. The out-of-plane Young's modulus decreases with increasing temperature, but the in-plane modulus and fracture stress increase with temperature. The out-of-plane properties do not change with loading rate. Viscous flow at high temperature was found to cause in-plane shrinkage, changing both the in-plane and out-of-plane properties. Compression-induced densification of the aerogel matrix was also found by scanning electron microscope analysis.

  3. Enhancing Perceived Quality of Compressed Images and Video with Anisotropic Diffusion and Fuzzy Filtering

    DEFF Research Database (Denmark)

    Nadernejad, Ehsan; Korhonen, Jari; Forchhammer, Søren

    2013-01-01

    and subjective results on JPEG compressed images, as well as MJPEG and H.264/AVC compressed video, indicate that the proposed algorithms employing directional and spatial fuzzy filters achieve better artifact reduction than other methods. In particular, robust improvements with H.264/AVC video have been gained...

  4. Achievement goals and perfectionism of high school students

    Directory of Open Access Journals (Sweden)

    Milojević Milica

    2009-01-01

Full Text Available This research investigates one of the most contemporary approaches to achievement motivation - Achievement Goal Theory, which uses the construct of achievement goals. The construct of achievement goals involves three types of achievement goals: mastery goals, performance-approach goals and performance-avoidance goals. The main goal of the research was to examine the correlation of perfectionism and its aspects with particular types of achievement goals. Also, the goal was to investigate gender differences regarding the achievement goals. The sample consisted of 200 senior-year high school participants. The following instruments were used: the Multi-dimensional scale of perfectionism (MSP) and the Test of achievement goals (TCP). The research results indicate significant positive correlations between: perfectionism with performance-approach goals and performance-avoidance goals; concern over mistakes and parental expectations with performance-approach goals and performance-avoidance goals; personal standards and organization with mastery goals and performance-approach goals; and parental criticism and doubts about action with performance-avoidance goals. A significant negative correlation was found between parental criticism and mastery goals. The results concerning the second goal indicate that the female subjects have higher average scores on mastery goals.

  5. High school achievement as a predictor for university performance

    Directory of Open Access Journals (Sweden)

    Meshkani Z

    2004-07-01

Full Text Available Background: The high-school grade point average (GPA-H) and the university entrance examination can predict university achievement. Purpose: To examine the predictive value of GPA-H for university GPA (GPA-U). Methods: In this cross-sectional study, the subjects were 240 medical students in the basic science phase of their medical education. Data were collected by a questionnaire consisting of questions measuring factual background variables and 10 Likert-type questions measuring attitude. Multiple regression analysis was used. Results: The analysis showed that students' high-school GPA was a better predictor of the educational achievement of medical students than rank on the university entrance exam, and students with a high GPA had not been on probation at all. Also, parents' education and occupation influence the students' attitudes toward their medical study. Conclusion: High-school GPA is a predictor of university GPA. This may warrant further investigation into the criteria of the medical university entrance exam. Keywords: UNIVERSITY ACHIEVEMENT, HIGH-SCHOOL GPA, UNIVERSITY SUCCESS, PREDICTOR

  6. Mechanical behavior of silicon carbide nanoparticles under uniaxial compression

    Energy Technology Data Exchange (ETDEWEB)

    He, Qiuxiang; Fei, Jing; Tang, Chao; Zhong, Jianxin; Meng, Lijun, E-mail: ljmeng@xtu.edu.cn [Xiangtan University, Hunan Key Laboratory for Micro-Nano Energy Materials and Devices, Faculty of School of Physics and Optoelectronics (China)

    2016-03-15

    The mechanical behavior of SiC nanoparticles under uniaxial compression was investigated using an atomic-level compression simulation technique. The results revealed that the mechanical deformation of SiC nanocrystals is highly dependent on compression orientation, particle size, and temperature. A structural transformation from the original zinc-blende to a rock-salt phase is identified for SiC nanoparticles compressed along the [001] direction at low temperature. However, the rock-salt phase is not observed for SiC nanoparticles compressed along the [110] and [111] directions irrespective of size and temperature. The high-pressure-generated rock-salt phase strongly affects the mechanical behavior of the nanoparticles, including their hardness and deformation process. The hardness of [001]-compressed nanoparticles decreases monotonically as their size increases, different from that of [110] and [111]-compressed nanoparticles, which reaches a maximal value at a critical size and then decreases. Additionally, a temperature-dependent mechanical response was observed for all simulated SiC nanoparticles regardless of compression orientation and size. Interestingly, the hardness of SiC nanocrystals with a diameter of 8 nm compressed in [001]-orientation undergoes a steep decrease at 0.1–200 K and then a gradual decline from 250 to 1500 K. This trend can be attributed to different deformation mechanisms related to phase transformation and dislocations. Our results will be useful for practical applications of SiC nanoparticles under high pressure.

  7. A Two-Stage Reconstruction Processor for Human Detection in Compressive Sensing CMOS Radar.

    Science.gov (United States)

    Tsao, Kuei-Chi; Lee, Ling; Chu, Ta-Shun; Huang, Yuan-Hao

    2018-04-05

Complementary metal-oxide-semiconductor (CMOS) radar has recently attracted much research attention because small and low-power CMOS devices are very suitable for deploying sensing nodes in a low-power wireless sensing system. This study focuses on the signal processing of a wireless CMOS impulse radar system that can detect humans and objects in the home-care internet-of-things sensing system. The challenges of low-power CMOS radar systems are the weakness of human signals and the high computational complexity of the target detection algorithm. The compressive sensing-based detection algorithm can relax the computational costs by avoiding the utilization of matched filters and reducing the analog-to-digital converter bandwidth requirement. The orthogonal matching pursuit (OMP) is one of the popular signal reconstruction algorithms for compressive sensing radar; however, its complexity is still very high because the high resolution of human respiration leads to high-dimensional signal reconstruction. Thus, this paper proposes a two-stage reconstruction algorithm for compressive sensing radar. The proposed algorithm not only has 75% lower complexity than the OMP algorithm but also achieves better positioning performance than the OMP algorithm, especially in noisy environments. This study also designed and implemented the algorithm on a Virtex-7 FPGA chip (Xilinx, San Jose, CA, USA). The proposed reconstruction processor can support the 256 × 13 real-time radar image display with a throughput of 28.2 frames per second.

  8. A Two-Stage Reconstruction Processor for Human Detection in Compressive Sensing CMOS Radar

    Directory of Open Access Journals (Sweden)

    Kuei-Chi Tsao

    2018-04-01

Full Text Available Complementary metal-oxide-semiconductor (CMOS) radar has recently attracted much research attention because small and low-power CMOS devices are very suitable for deploying sensing nodes in a low-power wireless sensing system. This study focuses on the signal processing of a wireless CMOS impulse radar system that can detect humans and objects in the home-care internet-of-things sensing system. The challenges of low-power CMOS radar systems are the weakness of human signals and the high computational complexity of the target detection algorithm. The compressive sensing-based detection algorithm can relax the computational costs by avoiding the utilization of matched filters and reducing the analog-to-digital converter bandwidth requirement. The orthogonal matching pursuit (OMP) is one of the popular signal reconstruction algorithms for compressive sensing radar; however, its complexity is still very high because the high resolution of human respiration leads to high-dimensional signal reconstruction. Thus, this paper proposes a two-stage reconstruction algorithm for compressive sensing radar. The proposed algorithm not only has 75% lower complexity than the OMP algorithm but also achieves better positioning performance than the OMP algorithm, especially in noisy environments. This study also designed and implemented the algorithm on a Virtex-7 FPGA chip (Xilinx, San Jose, CA, USA). The proposed reconstruction processor can support the 256 × 13 real-time radar image display with a throughput of 28.2 frames per second.

  9. SeqCompress: an algorithm for biological sequence compression.

    Science.gov (United States)

    Sardaraz, Muhammad; Tahir, Muhammad; Ikram, Ataul Aziz; Bajwa, Hassan

    2014-10-01

The growth of Next Generation Sequencing technologies presents significant research challenges, specifically the design of bioinformatics tools that handle massive amounts of data efficiently. The cost of storing biological sequence data has become a noticeable proportion of the total cost of its generation and analysis. In particular, the increase in the DNA sequencing rate is significantly outstripping the rate of increase in disk storage capacity, and may exceed available storage capacity. It is essential to develop algorithms that handle large data sets via better memory management. This article presents SeqCompress, a DNA sequence compression algorithm that copes with the space complexity of biological sequences. The algorithm is based on lossless data compression and uses a statistical model together with arithmetic coding to compress DNA sequences. The proposed algorithm is compared with recent specialized compression tools for biological sequences. Experimental results show that the proposed algorithm achieves better compression gain than the existing algorithms. Copyright © 2014 Elsevier Inc. All rights reserved.
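For context, a minimal baseline sketch (not the SeqCompress algorithm itself): packing each nucleotide into 2 bits gives a fixed 4:1 reduction over 1-byte-per-base ASCII storage, which statistical-model plus arithmetic-coding schemes like the one above then improve on:

```python
CODE = {"A": 0, "C": 1, "G": 2, "T": 3}
BASE = "ACGT"

def pack(seq):
    """Pack an ACGT string into bytes, 4 bases per byte."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        b = 0
        for j, ch in enumerate(seq[i:i + 4]):
            b |= CODE[ch] << (2 * j)
        out.append(b)
    return bytes(out), len(seq)

def unpack(data, n):
    """Inverse of pack."""
    return "".join(BASE[(data[i // 4] >> (2 * (i % 4))) & 3] for i in range(n))

packed, n = pack("ACGTACGTAA")
print(len(packed), unpack(packed, n))   # 3 bytes for 10 bases, round-trips exactly
```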

  10. Shock compression profiles in ceramics

    Energy Technology Data Exchange (ETDEWEB)

    Grady, D.E.; Moody, R.L.

    1996-03-01

An investigation of the shock compression properties of high-strength ceramics has been performed using controlled planar impact techniques. In a typical experimental configuration, a ceramic target disc is held stationary, and it is struck by plates of either a similar ceramic or a well-characterized metal. All tests were performed using either a single-stage propellant gun or a two-stage light-gas gun. Particle velocity histories were measured with laser velocity interferometry (VISAR) at the interface between the back of the target ceramic and a calibrated VISAR window material. Peak impact stresses achieved in these experiments range from about 3 to 70 GPa. Ceramics tested under shock impact loading include: Al{sub 2}O{sub 3}, AlN, B{sub 4}C, SiC, Si{sub 3}N{sub 4}, TiB{sub 2}, WC and ZrO{sub 2}. This report compiles the VISAR wave profiles and experimental impact parameters into a database useful for response model development, computational model validation studies, and independent assessment of the physics of dynamic deformation in high-strength, brittle solids.

  11. Medical image compression and its application to TDIS-FILE equipment

    International Nuclear Information System (INIS)

    Tsubura, Shin-ichi; Nishihara, Eitaro; Iwai, Shunsuke

    1990-01-01

    In order to compress medical images for filing and communication, we have developed a compression algorithm which compresses images with remarkable quality using a high-pass filtering method. Hardware for this compression algorithm was also developed and applied to TDIS (total digital imaging system)-FILE equipment. In the future, hardware based on this algorithm will be developed for various types of diagnostic equipment and PACS. This technique has the following characteristics: (1) significant reduction of artifacts; (2) acceptable quality for clinical evaluation at 15:1 to 20:1 compression ratio; and (3) high-speed processing and compact hardware. (author)

  12. Low complexity lossless compression of underwater sound recordings.

    Science.gov (United States)

    Johnson, Mark; Partan, Jim; Hurst, Tom

    2013-03-01

Autonomous listening devices are increasingly used to study vocal aquatic animals, and there is a constant need to record longer or with greater bandwidth, requiring efficient use of memory and battery power. Real-time compression of sound has the potential to extend recording durations and bandwidths at the expense of increased processing operations and therefore power consumption. Whereas lossy methods such as MP3 introduce undesirable artifacts, lossless compression algorithms (e.g., flac) guarantee exact data recovery. But these algorithms are relatively complex due to the wide variety of signals they are designed to compress. A simpler lossless algorithm is shown here to provide compression factors of three or more for underwater sound recordings over a range of noise environments. The compressor was evaluated using samples from drifting and animal-borne sound recorders with sampling rates of 16-240 kHz. It achieves >87% of the compression of more-complex methods but requires about 1/10 of the processing operations, resulting in less than 1 mW power consumption at a sampling rate of 192 kHz on a low-power microprocessor. The potential to triple recording duration with a minor increase in power consumption and no loss in sound quality may be especially valuable for battery-limited tags and robotic vehicles.
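A low-complexity lossless scheme of the kind described can be sketched as a first-order linear predictor (delta coding) followed by Rice coding of the residuals; the signal and parameter values below are illustrative, not the authors' measured figures:

```python
import math

def zigzag(v):
    """Map signed integers to unsigned: 0, -1, 1, -2, 2 -> 0, 1, 2, 3, 4."""
    return 2 * v if v >= 0 else -2 * v - 1

def rice_bits(residual, k):
    """Bits used by Rice coding with parameter k for one residual."""
    u = zigzag(residual)
    return (u >> k) + 1 + k          # unary quotient + stop bit + k remainder bits

def compressed_size_bits(samples, k=4):
    """First-order predictor: Rice-code differences of successive samples."""
    prev, total = 0, 0
    for s in samples:
        total += rice_bits(s - prev, k)
        prev = s
    return total

# A slowly varying 16-bit signal compresses well below 16 bits/sample:
sig = [int(1000 * math.sin(i / 20)) for i in range(1000)]
print(f"compressed/raw: {compressed_size_bits(sig) / (16 * len(sig)):.2f}")
```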

  13. Effect of In-Situ Curing on Compressive Strength of Reactive Powder Concrete

    Directory of Open Access Journals (Sweden)

    Bali Ika

    2016-01-01

Full Text Available A recent development of Reactive Powder Concrete (RPC) uses quartz powder as a stabilizing agent, with a powder-to-cement ratio of 30%, and steam curing in an autoclave at a temperature of 250ºC, which produced a high compressive strength of 180 MPa. One reason such RPC can be produced is the technique of steam curing in an autoclave in the laboratory. This study proposes in-situ curing methods so that curing can be applied in the field while achieving a reasonable compressive strength of RPC. The benchmarks in this study are laboratory curing methods: steam curing at 90°C for 8 hours (C1) and water curing for 28 days (C2). The in-situ curing methods are: covering with tarpaulins and flowing steam for 3 hours per day for 7 days (C3), covering with wet sacks for 28 days (C4), and covering with wet sacks for 28 days for specimens with unwashed sand as fine aggregate (C5). The comparison of compressive strengths showed that the compressive strength of RPC with in-situ steam curing (101.64 MPa) was close to that of RPC with laboratory steam curing, with a difference of 8.2%. In-situ wet curing, compared with laboratory water curing, had a difference of 3.4%. These results indicate that the proposed in-situ curing methods are reasonably good in terms of the compressive strength that can be achieved.

  14. The application of sparse linear prediction dictionary to compressive sensing in speech signals

    Directory of Open Access Journals (Sweden)

    YOU Hanxu

    2016-04-01

Full Text Available Applying compressive sensing (CS), which theoretically guarantees that signal sampling and signal compression can be achieved simultaneously, to audio and speech signal processing has been one of the most popular research topics in recent years. In this paper, the K-SVD algorithm was employed to learn a sparse linear prediction dictionary, regarded as the sparse basis of the underlying speech signals. Compressed signals were obtained by applying a random Gaussian matrix to sample the original speech frames. Orthogonal matching pursuit (OMP) and compressive sampling matching pursuit (CoSaMP) were adopted to recover the original signals from the compressed ones. A number of experiments were carried out to investigate the impact of speech frame length, compression ratio, sparse basis, and reconstruction algorithm on CS performance. Results show that a sparse linear prediction dictionary can improve the reconstruction of speech signals compared with a discrete cosine transform (DCT) matrix.
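A minimal OMP sketch, assuming a random Gaussian dictionary rather than the K-SVD-trained sparse linear prediction dictionary used in the paper:

```python
import numpy as np

def omp(A, y, sparsity):
    """Recover a sparse x with y ~= A @ x by orthogonal matching pursuit."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(sparsity):
        # pick the dictionary atom most correlated with the current residual
        idx = int(np.argmax(np.abs(A.T @ residual)))
        if idx not in support:
            support.append(idx)
        # least-squares fit on the selected support
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x[:] = 0.0
        x[support] = coef
        residual = y - A @ x
    return x

rng = np.random.default_rng(0)
m, n, k = 40, 100, 3
A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=0)            # unit-norm dictionary atoms
x_true = np.zeros(n)
x_true[[5, 17, 60]] = [1.5, -2.0, 0.8]    # 3-sparse coefficient vector
y = A @ x_true                            # compressed measurements
x_hat = omp(A, y, k)
# exact recovery is typical in the noiseless case
print(f"residual norm: {np.linalg.norm(y - A @ x_hat):.2e}")
```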

  15. Selecting a general-purpose data compression algorithm

    Science.gov (United States)

    Mathews, Gary Jason

    1995-01-01

The National Space Science Data Center's Common Data Format (CDF) is capable of storing many types of data, such as scalar data items, vectors, and multidimensional arrays of bytes, integers, or floating-point values. However, regardless of the dimensionality and data type, the data break down into a sequence of bytes that can be fed into a data compression function to reduce the amount of data without losing data integrity, thus remaining fully reconstructible. Because of the diversity of data types and high performance speed requirements, a general-purpose, fast, simple data compression algorithm is required to incorporate data compression into CDF. The questions to ask are how to evaluate and compare compression algorithms, and which compression algorithm meets all the requirements. The object of this paper is to address these questions and determine the most appropriate compression algorithm to use within the CDF data management package, an approach that would be applicable to other software packages with similar data compression needs.
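The kind of evaluation described above can be prototyped with general-purpose compressors from the Python standard library, trading compression ratio against speed on the same byte sequence (the payload here is a synthetic stand-in, not CDF data):

```python
import bz2
import lzma
import time
import zlib

data = b"sample scalar and vector data " * 2000   # highly redundant payload

for name, compress in [("zlib", zlib.compress),
                       ("bz2", bz2.compress),
                       ("lzma", lzma.compress)]:
    t0 = time.perf_counter()
    out = compress(data)
    ms = (time.perf_counter() - t0) * 1e3
    print(f"{name:5s} {len(out):6d} / {len(data)} bytes  {ms:.1f} ms")
```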

  16. Concurrent data compression and protection

    International Nuclear Information System (INIS)

    Saeed, M.

    2009-01-01

Data compression techniques involve transforming data of a given format, called the source message, into data of a smaller-sized format, called the codeword. The primary objective of data encryption is to ensure the security of data if it is intercepted by an eavesdropper. It transforms data of a given format, called plaintext, into another format, called ciphertext, using an encryption key or keys. Thus, combining the processes of compression and encryption must be done in this order, that is, compression followed by encryption, because all compression techniques rely heavily on the redundancies that are inherently part of regular text or speech. The aim of this research is to combine the process of compression (using an existing scheme) with a new encryption scheme that is compatible with the encoding scheme embedded in the encoder. The technique proposed by the authors is new, unique, and highly secure. The deployment of a 'sentinel marker' enhances the security of the proposed TR-One algorithm from 2^44 ciphertexts to 2^44 + 2^20 ciphertexts, thus imposing extra challenges on intruders. (author)
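The compress-then-encrypt ordering argued for above can be demonstrated directly; zlib and a toy XOR stream cipher stand in for the paper's schemes, which are not specified here (the toy cipher is NOT secure, it merely removes redundancy the way a real cipher's output does):

```python
import random
import zlib

def toy_stream_encrypt(data, seed):
    """XOR with a PRNG keystream; applying it twice with the same seed decrypts."""
    keystream = random.Random(seed).randbytes(len(data))
    return bytes(a ^ b for a, b in zip(data, keystream))

plaintext = b"redundant redundant redundant " * 500

good = toy_stream_encrypt(zlib.compress(plaintext), seed=42)  # compress -> encrypt
bad = zlib.compress(toy_stream_encrypt(plaintext, seed=42))   # encrypt -> compress

# compress-then-encrypt is tiny; pseudo-random ciphertext barely compresses
print(len(plaintext), len(good), len(bad))
```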

  17. Accelerated high-resolution photoacoustic tomography via compressed sensing

    Science.gov (United States)

    Arridge, Simon; Beard, Paul; Betcke, Marta; Cox, Ben; Huynh, Nam; Lucka, Felix; Ogunlade, Olumide; Zhang, Edward

    2016-12-01

Current 3D photoacoustic tomography (PAT) systems offer either high image quality or high frame rates but are not able to deliver high spatial and temporal resolution simultaneously, which limits their ability to image dynamic processes in living tissue (4D PAT). A particular example is the planar Fabry-Pérot (FP) photoacoustic scanner, which yields high-resolution 3D images but takes several minutes to sequentially map the incident photoacoustic field on the 2D sensor plane, point-by-point. However, as the spatio-temporal complexity of many absorbing tissue structures is rather low, the data recorded in such a conventional, regularly sampled fashion is often highly redundant. We demonstrate that combining model-based, variational image reconstruction methods using spatial sparsity constraints with the development of novel PAT acquisition systems capable of sub-sampling the acoustic wave field can dramatically increase the acquisition speed while maintaining a good spatial resolution: first, we describe and model two general spatial sub-sampling schemes. Then, we discuss how to implement them using the FP interferometer and demonstrate the potential of these novel compressed sensing PAT devices through simulated data from a realistic numerical phantom and through measured data from a dynamic experimental phantom as well as from in vivo experiments. Our results show that images with good spatial resolution and contrast can be obtained from highly sub-sampled PAT data if variational image reconstruction techniques that describe the tissue structures with suitable sparsity constraints are used. In particular, we examine the use of total variation (TV) regularization enhanced by Bregman iterations. These novel reconstruction strategies offer new opportunities to dramatically increase the acquisition speed of photoacoustic scanners that employ point-by-point sequential scanning, as well as reducing the channel count of parallelized schemes that use detector arrays.

  18. Drift compression and final focus systems for heavy ion inertial fusion

    Energy Technology Data Exchange (ETDEWEB)

    de Hoon, Michiel Jan Laurens [Univ. of California, Berkeley, CA (United States)

    2001-01-01

Longitudinal compression of space-charge-dominated beams can be achieved by imposing a head-to-tail velocity tilt on the beam. This tilt has to be carefully tailored, such that it is removed by the longitudinal space-charge repulsion by the time the beam reaches the end of the drift compression section. The transverse focusing lattice should be designed such that all parts of the beam stay approximately matched, while the beam smoothly expands transversely to the larger beam radius needed in the final focus system following drift compression. In this thesis, several drift compression systems were designed within these constraints, based on a given desired pulse shape at the end of drift compression. The occurrence of mismatches due to a rapidly increasing current was analyzed. In addition, the sensitivity of drift compression to errors in the initial velocity tilt and current profile was studied. These calculations were done using a new computer code that accurately calculates the longitudinal electric field in the space-charge-dominated regime.
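The head-to-tail velocity tilt can be illustrated with a toy kinematic sketch (space charge neglected, so unlike the thesis's designs the tilt is not removed as the bunch compresses; all numbers are assumed): the tail moves faster than the head, and the bunch length shrinks linearly along the drift:

```python
import numpy as np

z0 = np.linspace(0.0, 1.0, 201)         # initial positions within the bunch [m]
v0, tilt = 1.0e6, 0.05                  # mean velocity [m/s], 5% head-to-tail tilt
v = v0 * (1.0 + tilt * (1.0 - z0))      # tail (z0=0) moves faster than head (z0=1)
t_focus = 1.0 / (tilt * v0)             # time at which the tilt closes the bunch

for frac in (0.0, 0.5, 0.9):
    z = z0 + v * frac * t_focus         # ballistic drift
    print(f"after {frac:.0%} of the drift: bunch length = {z.max() - z.min():.2f} m")
```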

  19. Effectiveness of compressed sensing and transmission in wireless sensor networks for structural health monitoring

    Science.gov (United States)

    Fujiwara, Takahiro; Uchiito, Haruki; Tokairin, Tomoya; Kawai, Hiroyuki

    2017-04-01

Regarding Structural Health Monitoring (SHM) of seismic acceleration, Wireless Sensor Networks (WSN) are a promising tool for low-cost monitoring. Compressed sensing and transmission schemes have been drawing attention as a way to achieve effective data collection in WSN. In particular, SHM systems installing massive numbers of WSN nodes require efficient data transmission due to restricted communications capability. The dominant frequency band of seismic acceleration lies within 100 Hz or less. In addition, the response motions on upper floors of a structure are excited at a natural frequency, resulting in induced shaking in a specified narrow band. Focusing on these vibration characteristics of structures, we introduce data compression techniques for seismic acceleration monitoring in order to reduce the amount of transmitted data. We carry out a compressed sensing and transmission scheme based on band-pass filtering of the seismic acceleration data. The algorithm executes the discrete Fourier transform into the frequency domain and band-pass filtering for the compressed transmission. Assuming that the compressed data are transmitted through computer networks, the data are restored by the inverse Fourier transform at the receiving node. This paper discusses the evaluation of the compressed sensing of seismic acceleration by way of an average error. The results show that the average error was 0.06 or less for the horizontal acceleration when the acceleration was compressed to 1/32. In particular, the average error on the 4th floor achieved a small value of 0.02. These results indicate that the compressed sensing and transmission technique is effective in reducing the amount of data while maintaining a small average error.
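The band-limited compression scheme can be sketched as follows, with assumed parameters (the paper's exact band, sampling rate, and error metric may differ): keep only the low-frequency DFT bins, transmit them, and reconstruct with the inverse FFT:

```python
import numpy as np

fs, n = 1024, 4096                         # sampling rate [Hz], record length
t = np.arange(n) / fs
# synthetic "seismic" record: narrowband components well below 100 Hz, plus noise
x = (np.sin(2 * np.pi * 3.0 * t) + 0.5 * np.sin(2 * np.pi * 12.0 * t)
     + 0.05 * np.random.default_rng(1).standard_normal(n))

X = np.fft.rfft(x)
keep = len(X) // 32                        # transmit only 1/32 of the spectrum
X_tx = X[:keep]                            # low-frequency band retained

# receiving node: zero-pad the missing bins and invert
x_rec = np.fft.irfft(np.concatenate([X_tx, np.zeros(len(X) - keep)]), n)
avg_err = np.mean(np.abs(x - x_rec)) / np.max(np.abs(x))
print(f"average error at 1/32 compression: {avg_err:.3f}")
```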

  20. Lossy image compression for digital medical imaging systems

    Science.gov (United States)

    Wilhelm, Paul S.; Haynor, David R.; Kim, Yongmin; Nelson, Alan C.; Riskin, Eve A.

    1990-07-01

Image compression at rates of 10:1 or greater could make PACS much more responsive and economically attractive. This paper describes a protocol for subjective and objective evaluation of the fidelity of compressed/decompressed images to the originals and presents the results of its application to four representative and promising compression methods. The methods examined are predictive pruned tree-structured vector quantization, fractal compression, the discrete cosine transform with equal weighting of block bit allocation, and the discrete cosine transform with human visual system weighting of block bit allocation. Vector quantization is theoretically capable of producing the best compressed images, but has proven difficult to implement effectively. It has the advantage that it can reconstruct images quickly through a simple lookup table. Disadvantages are that codebook training is required, the method is computationally intensive, and achieving optimum performance would require prohibitively long vector dimensions. Fractal compression is a relatively new technique, but has produced satisfactory results while being computationally simple. It is fast at both image compression and image reconstruction. Discrete cosine transform techniques reproduce images well, but have traditionally been hampered by the need for intensive computing to compress and decompress images. A protocol was developed for side-by-side observer comparison of reconstructed images with originals. Three 1024 X 1024 CR (Computed Radiography) images and two 512 X 512 X-ray CT images were viewed at six bit rates (0.2, 0.4, 0.6, 0.9, 1.2, and 1.5 bpp for CR; 1.0, 1.3, 1.6, 1.9, 2.2, and 2.5 bpp for X-ray CT) by nine radiologists at the University of Washington Medical Center. The CR images were viewed on a Pixar II Megascan (2560 X 2048) monitor and the CT images on a Sony (1280 X 1024) monitor. The radiologists' subjective evaluations of image fidelity were compared to
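To make the DCT-based methods concrete, the following sketch compresses an image blockwise by zeroing all but the lowest-frequency DCT coefficients of each 8 x 8 block. This is a deliberate simplification (a fixed corner mask rather than the bit-allocation schemes evaluated in the paper), and the synthetic 64 x 64 image is an assumption:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis as an n x n matrix (rows are basis vectors)."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    mat = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    mat[0, :] = np.sqrt(1.0 / n)
    return mat

def roundtrip(image, block=8, keep=4):
    """Blockwise 2-D DCT; zero every coefficient whose index sum is >= keep
    (a crude stand-in for bit allocation), then invert the transform."""
    d = dct_matrix(block)
    mask = np.add.outer(np.arange(block), np.arange(block)) < keep
    out = np.empty_like(image, dtype=float)
    for i in range(0, image.shape[0], block):
        for j in range(0, image.shape[1], block):
            tile = image[i:i+block, j:j+block]
            coeffs = d @ tile @ d.T                          # forward 2-D DCT
            out[i:i+block, j:j+block] = d.T @ (coeffs * mask) @ d  # inverse
    return out

# Smooth synthetic 64 x 64 "radiograph": a gradient plus a Gaussian blob.
x, y = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
img = 100 * x + 50 * np.exp(-((x - 0.5) ** 2 + (y - 0.5) ** 2) / 0.02)
rec = roundtrip(img)
mse = np.mean((rec - img) ** 2)
```

Keeping the 10 lowest-frequency coefficients of 64 per block gives roughly 6:1 compression; on smooth image content the mean-square error stays small, which is why DCT methods reproduce radiological images well at these rates.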

  1. Compressibility of Ir-Os alloys under high pressure

    International Nuclear Information System (INIS)

    Yusenko, Kirill V.; Bykova, Elena; Bykov, Maxim; Gromilov, Sergey A.; Kurnosov, Alexander V.; Prescher, Clemens; Prakapenka, Vitali B.; Hanfland, Michael; Smaalen, Sander van; Margadonna, Serena; Dubrovinsky, Leonid S.

    2015-01-01

Highlights: • fcc- and hcp-Ir-Os alloys were prepared from single-source precursors. • Their atomic volumes, measured at ambient conditions using powder X-ray diffraction, follow a nearly linear dependence. • The compressibility of the alloys has been studied up to 30 GPa at room temperature in diamond anvil cells. • Their bulk moduli increase with increasing osmium content. - Abstract: Several fcc- and hcp-structured Ir-Os alloys were prepared from single-source precursors in a hydrogen atmosphere at 873 K. Their atomic volumes, measured at ambient conditions using powder X-ray diffraction, follow a nearly linear dependence as a function of composition. The alloys have been studied up to 30 GPa at room temperature by means of synchrotron-based X-ray powder diffraction in diamond anvil cells. Their bulk moduli increase with increasing osmium content and show a deviation from linearity. The bulk modulus of hcp-Ir0.20Os0.80 is identical to that of pure Os (411 GPa) within experimental error. Peculiarities in the fcc-Ir0.80Os0.20 compressibility curve indicate possible changes in its electronic properties at ∼20 GPa

  2. EBLAST: an efficient high-compression image transformation. 3. Application to Internet image and video transmission

    Science.gov (United States)

    Schmalz, Mark S.; Ritter, Gerhard X.; Caimi, Frank M.

    2001-12-01

A wide variety of digital image compression transforms developed for still imaging and broadcast video transmission are unsuitable for Internet video applications due to insufficient compression ratio, poor reconstruction fidelity, or excessive computational requirements. Examples include hierarchical transforms that require all, or a large portion of, a source image to reside in memory at one time, transforms that induce significant blocking effects at operationally salient compression ratios, and algorithms that require large amounts of floating-point computation. The latter constraint holds especially for video compression by small mobile imaging devices for transmission to, and decompression on, platforms such as palmtop computers or personal digital assistants (PDAs). As Internet video requirements for frame rate and resolution increase to produce more detailed, less discontinuous motion sequences, a new class of compression transforms will be needed, especially for small memory models and displays such as those found on PDAs. In this, the third paper in the series, we discuss the EBLAST compression transform and its application to Internet communication. Leading transforms for compression of Internet video and still imagery are reviewed and analyzed, including GIF, JPEG, AWIC (wavelet-based), wavelet packets, and SPIHT, whose performance is compared with EBLAST's. Performance analysis criteria include time and space complexity and the quality of the decompressed image, the latter determined by rate-distortion data obtained from a database of realistic test images. Discussion also includes issues such as robustness of the compressed format to channel noise. EBLAST has been shown to outperform JPEG and, unlike current wavelet compression transforms, supports fast implementation on embedded processors with small memory models.

  3. High Achievement in Mathematics Education in India: A Report from Mumbai

    Science.gov (United States)

    Raman, Manya

    2010-01-01

    This paper reports a study aimed at characterizing the conditions that lead to high achievement in mathematics in India. The study involved eight schools in the greater Mumbai region. The main result of the study is that the notion of high achievement itself is problematic, as reflected in the reports about mathematics achievement within and…

  4. Compressed optimization of device architectures

    Energy Technology Data Exchange (ETDEWEB)

    Frees, Adam [Univ. of Wisconsin, Madison, WI (United States). Dept. of Physics; Gamble, John King [Microsoft Research, Redmond, WA (United States). Quantum Architectures and Computation Group; Ward, Daniel Robert [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States). Center for Computing Research; Blume-Kohout, Robin J [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States). Center for Computing Research; Eriksson, M. A. [Univ. of Wisconsin, Madison, WI (United States). Dept. of Physics; Friesen, Mark [Univ. of Wisconsin, Madison, WI (United States). Dept. of Physics; Coppersmith, Susan N. [Univ. of Wisconsin, Madison, WI (United States). Dept. of Physics

    2014-09-01

Recent advances in nanotechnology have enabled researchers to control individual quantum mechanical objects with unprecedented accuracy, opening the door for both quantum and extreme-scale conventional computation applications. As these devices become more complex, designing for facility of control becomes a daunting and computationally infeasible task. Here, motivated by ideas from compressed sensing, we introduce a protocol for the Compressed Optimization of Device Architectures (CODA). It leads naturally to a metric for benchmarking and optimizing device designs, as well as an automatic device control protocol that reduces the operational complexity required to achieve a particular output. Because this protocol is both experimentally and computationally efficient, it is readily extensible to large systems. For this paper, we demonstrate both the benchmarking and device control protocol components of CODA through examples of realistic simulations of electrostatic quantum dot devices, which are currently being developed experimentally for quantum computation.

  5. Binary rf pulse compression experiment at SLAC

    International Nuclear Information System (INIS)

    Lavine, T.L.; Spalek, G.; Farkas, Z.D.; Menegat, A.; Miller, R.H.; Nantista, C.; Wilson, P.B.

    1990-06-01

Using rf pulse compression it will be possible to boost the 50- to 100-MW output expected from high-power microwave tubes operating in the 10- to 20-GHz frequency range to the 300- to 1000-MW level required by the next generation of high-gradient linacs for linear colliders. A high-power X-band three-stage binary rf pulse compressor has been implemented and operated at the Stanford Linear Accelerator Center (SLAC). In each of three successive stages, the rf pulse length is compressed by half and the peak power is approximately doubled. The experimental results presented here have been obtained at low-power (1-kW) and high-power (15-MW) input levels in initial testing with a TWT and a klystron. Rf pulses initially 770 nsec long have been compressed to 60 nsec. Peak power gains of 1.8 per stage, and 5.5 for three stages, have been measured. This corresponds to a peak power compression efficiency of about 90% per stage, or about 70% for three stages, consistent with the individual component losses. The principle of operation of a binary pulse compressor (BPC) is described in detail elsewhere. We recently have implemented and operated at SLAC a high-power (high-vacuum) three-stage X-band BPC. First results from the high-power three-stage BPC experiment are reported here
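The quoted efficiencies follow directly from the measured gains, since an ideal stage doubles the peak power when it halves the pulse. A quick arithmetic check using only the figures from the abstract:

```python
stages = 3
ideal_gain_per_stage = 2.0        # halving the pulse should double the power
measured_gain_per_stage = 1.8     # measured peak power gain per stage
measured_three_stage_gain = 5.5   # measured peak power gain over three stages

# Efficiency is the ratio of measured to ideal peak-power gain.
eff_per_stage = measured_gain_per_stage / ideal_gain_per_stage
eff_three_stages = measured_three_stage_gain / ideal_gain_per_stage ** stages

print(f"per-stage efficiency:   {eff_per_stage:.0%}")     # 90%
print(f"three-stage efficiency: {eff_three_stages:.1%}")  # 68.8%, about 70%
```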

  6. High Involvement Mothers of High Achieving Children: Potential Theoretical Explanations

    Science.gov (United States)

    Hunsaker, Scott L.

    2013-01-01

    In American society, parents who have high aspirations for the achievements of their children are often viewed by others in a negative light. Various pejoratives such as "pushy parent," "helicopter parent," "stage mother," and "soccer mom" are used in the common vernacular to describe these parents. Multiple…

  7. Compressibility characteristics of Sabak Bernam Marine Clay

    Science.gov (United States)

    Lat, D. C.; Ali, N.; Jais, I. B. M.; Baharom, B.; Yunus, N. Z. M.; Salleh, S. M.; Azmi, N. A. C.

    2018-04-01

This study is carried out to determine the geotechnical properties and compressibility characteristics of marine clay collected at Sabak Bernam. The compressibility characteristics of this soil are determined from a 1-D consolidation test and verified against existing correlations by other researchers. No literature has been found on the compressibility characteristics of Sabak Bernam Marine Clay. It is important to carry out this study since this type of marine clay covers a large coastal area of the west coast of Malaysia. This type of marine clay was found on the main road connecting Klang to Perak, and the road keeps experiencing undulation and uneven settlement, which jeopardises the safety of road users. The soil is indicated in the Generalised Soil Map of Peninsular Malaysia as a CLAY with alluvial soil on recent marine and riverine alluvium. Based on the British Standard Soil Classification and Plasticity Chart, the soil is classified as a CLAY with very high plasticity (CV). Results from laboratory tests on physical properties and compressibility parameters show that Sabak Bernam Marine Clay (SBMC) is highly compressible and has low permeability and poor drainage characteristics. The compressibility parameters obtained for SBMC are in good agreement with those reported by other researchers in the same field.
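To connect compressibility parameters such as these to an engineering outcome, the classical Terzaghi 1-D primary consolidation estimate can be sketched as follows. The formula is standard, but the parameter values below are hypothetical stand-ins for a highly compressible clay, not results from this study:

```python
from math import log10

def primary_settlement(Cc, e0, H, sigma0, dsigma):
    """Terzaghi 1-D primary consolidation settlement for a normally
    consolidated clay layer:  s = Cc*H/(1+e0) * log10((sigma0+dsigma)/sigma0).
    Cc: compression index, e0: initial void ratio, H: layer thickness (m),
    sigma0: initial effective stress, dsigma: stress increase (same units)."""
    return Cc * H / (1.0 + e0) * log10((sigma0 + dsigma) / sigma0)

# Illustrative (hypothetical) values for a highly compressible marine clay:
s = primary_settlement(Cc=0.8, e0=1.9, H=5.0, sigma0=50.0, dsigma=50.0)
print(f"estimated primary settlement: {s:.3f} m")
```

A high compression index combined with low permeability is precisely why such roads settle unevenly over decades: the magnitude of settlement is large and the consolidation that produces it is slow.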

  8. Exploring compression techniques for ROOT IO

    Science.gov (United States)

    Zhang, Z.; Bockelman, B.

    2017-10-01

ROOT provides a flexible format used throughout the HEP community. The number of use cases - from an archival data format to end-stage analysis - has required a number of tradeoffs to be exposed to the user. For example, a high "compression level" in the traditional DEFLATE algorithm will result in a smaller file (saving disk space) at the cost of slower decompression (costing CPU time when read). At the scale of the LHC experiments, poor design choices can result in terabytes of wasted space or wasted CPU time. We explore and attempt to quantify some of these tradeoffs. Specifically, we explore: the use of alternate compression algorithms to optimize for read performance; an alternate method of compressing individual events to allow efficient random access; and a new approach to whole-file compression. Quantitative results are given, as well as guidance on how to make compression decisions for different use cases.
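The compression-level tradeoff described above is easy to demonstrate with Python's zlib, which implements the same DEFLATE algorithm; the repetitive payload below is a synthetic stand-in for event data:

```python
import time
import zlib

# Synthetic, repetitive payload standing in for serialized event data.
payload = b"branch:energy=1.23;branch:px=0.45;branch:py=0.67;" * 20000

for level in (1, 6, 9):
    t0 = time.perf_counter()
    blob = zlib.compress(payload, level)
    elapsed = time.perf_counter() - t0
    assert zlib.decompress(blob) == payload          # lossless round trip
    print(f"level {level}: {len(blob):>8} bytes, {elapsed * 1e3:6.2f} ms")
```

Higher levels spend more CPU searching for longer back-references, so the output shrinks while compression time grows; the right choice depends on whether a file will be written once and read many times (favoring high levels) or the reverse.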

  9. Percutaneous micro-balloon compression for treatment of high risk idiopathic trigeminal neuralgia

    International Nuclear Information System (INIS)

    Zou Jianjun; Ma Yi; Wang Bin; Li Yanfeng; Huang Haitao; Li Fuyong

    2008-01-01

Objective: To evaluate the clinical effectiveness and complications of percutaneous micro-balloon compression (PMC) of the trigeminal ganglion for high-risk idiopathic trigeminal neuralgia. Methods: We retrospectively analyzed the clinical data of 3053 cases of idiopathic trigeminal neuralgia, of which 804 cases were high risk, who underwent PMC from Jan. 2001 to Dec. 2007 in our department. Results: 833 procedures were performed on these 804 patients. The immediate effective rate was 97.3%, with a recurrence rate of 6.8% and an ipsilateral paresthesia incidence of 3.7%; there was no keratohelcosis, approximately 2/3 of patients had masticatory muscle weakness, and the incidence of diplopia was 0.2%. Mean follow-up time was 36 months. Conclusions: The PMC procedure is very effective for idiopathic trigeminal neuralgia, especially in high-risk patients, and is especially preferable when the pain involves the first branch. (authors)

  10. An Analysis of Java Programming Behaviors, Affect, Perceptions, and Syntax Errors among Low-Achieving, Average, and High-Achieving Novice Programmers

    Science.gov (United States)

    Rodrigo, Ma. Mercedes T.; Andallaza, Thor Collin S.; Castro, Francisco Enrique Vicente G.; Armenta, Marc Lester V.; Dy, Thomas T.; Jadud, Matthew C.

    2013-01-01

    In this article we quantitatively and qualitatively analyze a sample of novice programmer compilation log data, exploring whether (or how) low-achieving, average, and high-achieving students vary in their grasp of these introductory concepts. High-achieving students self-reported having the easiest time learning the introductory programming…

  11. Image splitting and remapping method for radiological image compression

    Science.gov (United States)

    Lo, Shih-Chung B.; Shen, Ellen L.; Mun, Seong K.

    1990-07-01

    A new decomposition method using image splitting and gray-level remapping has been proposed for image compression, particularly for images with high contrast resolution. The effects of this method are especially evident in our radiological image compression study. In our experiments, we tested the impact of this decomposition method on image compression by employing it with two coding techniques on a set of clinically used CT images and several laser film digitized chest radiographs. One of the compression techniques used was full-frame bit-allocation in the discrete cosine transform domain, which has been proven to be an effective technique for radiological image compression. The other compression technique used was vector quantization with pruned tree-structured encoding, which through recent research has also been found to produce a low mean-square-error and a high compression ratio. The parameters we used in this study were mean-square-error and the bit rate required for the compressed file. In addition to these parameters, the difference between the original and reconstructed images will be presented so that the specific artifacts generated by both techniques can be discerned by visual perception.

  12. Measurement and Improvement the Quality of the Compressive Strength of Product Concrete

    Directory of Open Access Journals (Sweden)

    Zohair Hassan Abdullah

    2018-01-01

The research studied the production of concrete cubes according to the Iraqi design specification for concrete grade C20 (No. 52 of 1984). The samples were cubes of dimensions (150 × 150 × 150 mm), and the concrete mixing proportion was (1:2:4), cast on the casting floor. For the concrete to achieve the required strength with a confidence level of 100%, the compressive strength of 40 concrete cube samples, all made from the same concrete mix and aged 28 days, was tested in the labs of the Civil Department, Technical Institute of Babylon. These samples were classified using the acceptance tests adopted in the implementation of investment projects in the construction sector. The research aims, first, to measure the compressive strength of the concrete cubes, because a decrease or increase of the compressive strength relative to the design specification contributes to the failure of investment projects in the construction sector; units outside the specification are classified as damaged units. Second, it aims to study the improvement of the quality of the compressive strength of the concrete cubes. Results show that the proportion of damaged cubes is 0.00685, that the compressive strength achieved a confidence level of 99.5%, and that the concrete cubes were produced within an acceptable level of quality (3 sigma). The quality of the compressive strength was improved to a good level using advanced sigma levels. DOI: http://dx.doi.org/10.25130/tjes.24.2017.20
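The reported 3-sigma acceptance level can be related to an expected out-of-specification fraction under a normal-distribution assumption; this is a textbook calculation added for illustration, not part of the study's method:

```python
from math import erf, sqrt

def two_sided_defect_fraction(k):
    """Fraction of a centered normal process falling outside +/- k sigma."""
    return 1.0 - erf(k / sqrt(2.0))

for k in (2, 3, 4):
    print(f"{k} sigma: {two_sided_defect_fraction(k):.5f}")

# The observed damaged-cube proportion of 0.00685 lies between the
# 2-sigma (0.0455) and 3-sigma (0.0027) two-sided fractions.
```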

  13. Parallel Tensor Compression for Large-Scale Scientific Data.

    Energy Technology Data Exchange (ETDEWEB)

    Kolda, Tamara G. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Ballard, Grey [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Austin, Woody Nathan [Univ. of Texas, Austin, TX (United States)

    2015-10-01

As parallel computing trends towards the exascale, scientific data produced by high-fidelity simulations are growing increasingly massive. For instance, a simulation on a three-dimensional spatial grid with 512 points per dimension that tracks 64 variables per grid point for 128 time steps yields 8 TB of data. By viewing the data as a dense five-way tensor, we can compute a Tucker decomposition to find inherent low-dimensional multilinear structure, achieving compression ratios of up to 10000 on real-world data sets with negligible loss in accuracy. So that we can operate on such massive data, we present the first-ever distributed-memory parallel implementation for the Tucker decomposition, whose key computations correspond to parallel linear algebra operations, albeit with nonstandard data layouts. Our approach specifies a data distribution for tensors that avoids any tensor data redistribution, either locally or in parallel. We provide accompanying analysis of the computation and communication costs of the algorithms. To demonstrate the compression and accuracy of the method, we apply our approach to real-world data sets from combustion science simulations. We also provide detailed performance results, including parallel performance in both weak and strong scaling experiments.
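The core idea of Tucker compression (one factor matrix per mode plus a small core tensor) can be sketched with a serial truncated higher-order SVD in NumPy. This toy is not the paper's distributed implementation, and the tensor shapes, ranks, and synthetic data are assumptions:

```python
import numpy as np

def mode_product(tensor, matrix, mode):
    """Multiply `tensor` along `mode` by `matrix` (matrix rows index output)."""
    moved = np.moveaxis(tensor, mode, 0)
    return np.moveaxis(np.tensordot(matrix, moved, axes=1), 0, mode)

def hosvd(tensor, ranks):
    """Truncated higher-order SVD: leading singular vectors of each mode
    unfolding give the factor matrices; the core follows by projection."""
    factors = []
    for mode, r in enumerate(ranks):
        unfolding = np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)
        u, _, _ = np.linalg.svd(unfolding, full_matrices=False)
        factors.append(u[:, :r])
    core = tensor
    for mode, u in enumerate(factors):
        core = mode_product(core, u.T, mode)
    return core, factors

def reconstruct(core, factors):
    out = core
    for mode, u in enumerate(factors):
        out = mode_product(out, u, mode)
    return out

rng = np.random.default_rng(0)
# Synthetic low-rank "simulation field" (20 x 30 x 40) plus faint noise.
a, b, c = rng.normal(size=(20, 2)), rng.normal(size=(30, 2)), rng.normal(size=(40, 2))
t = np.einsum('ir,jr,kr->ijk', a, b, c) + 1e-3 * rng.normal(size=(20, 30, 40))

core, factors = hosvd(t, ranks=(2, 2, 2))
rel_err = np.linalg.norm(reconstruct(core, factors) - t) / np.linalg.norm(t)
ratio = t.size / (core.size + sum(f.size for f in factors))
```

When the data have genuine low-dimensional multilinear structure, as here, the stored core and factors are orders of magnitude smaller than the tensor while the relative reconstruction error stays near the noise floor.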

  14. Quasi-isentropic compression using compressed water flow generated by underwater electrical explosion of a wire array

    Science.gov (United States)

    Gurovich, V.; Virozub, A.; Rososhek, A.; Bland, S.; Spielman, R. B.; Krasik, Ya. E.

    2018-05-01

A major experimental research area in material equation-of-state studies today involves the use of off-Hugoniot measurements rather than shock experiments that give only Hugoniot data. There is a wide range of applications using quasi-isentropic compression of matter, including the direct measurement of the complete isentrope of materials in a single experiment and minimizing the heating of flyer plates for high-velocity shock measurements. We propose a novel approach to generating quasi-isentropic compression of matter. Using analytical modeling and hydrodynamic simulations, we show that a working fluid composed of compressed water, generated by the underwater electrical explosion of a planar wire array, might be used to efficiently drive the quasi-isentropic compression of a copper target to pressures of ~2 × 10^11 Pa without any complex target designs.

  15. Feasibility of high temporal resolution breast DCE-MRI using compressed sensing theory.

    Science.gov (United States)

    Wang, Haoyu; Miao, Yanwei; Zhou, Kun; Yu, Yanming; Bao, Shanglian; He, Qiang; Dai, Yongming; Xuan, Stephanie Y; Tarabishy, Bisher; Ye, Yongquan; Hu, Jiani

    2010-09-01

    To investigate the feasibility of high temporal resolution breast DCE-MRI using compressed sensing theory. Two experiments were designed to investigate the feasibility of using reference image based compressed sensing (RICS) technique in DCE-MRI of the breast. The first experiment examined the capability of RICS to faithfully reconstruct uptake curves using undersampled data sets extracted from fully sampled clinical breast DCE-MRI data. An average approach and an approach using motion estimation and motion compensation (ME/MC) were implemented to obtain reference images and to evaluate their efficacy in reducing motion related effects. The second experiment, an in vitro phantom study, tested the feasibility of RICS for improving temporal resolution without degrading the spatial resolution. For the uptake-curve reconstruction experiment, there was a high correlation between uptake curves reconstructed from fully sampled data by Fourier transform and from undersampled data by RICS, indicating high similarity between them. The mean Pearson correlation coefficients for RICS with the ME/MC approach and RICS with the average approach were 0.977 +/- 0.023 and 0.953 +/- 0.031, respectively. The comparisons of final reconstruction results between RICS with the average approach and RICS with the ME/MC approach suggested that the latter was superior to the former in reducing motion related effects. For the in vitro experiment, compared to the fully sampled method, RICS improved the temporal resolution by an acceleration factor of 10 without degrading the spatial resolution. The preliminary study demonstrates the feasibility of RICS for faithfully reconstructing uptake curves and improving temporal resolution of breast DCE-MRI without degrading the spatial resolution.

  16. Tribological properties of high-speed steel treated by compression plasma flow

    International Nuclear Information System (INIS)

    Cherenda, K.K.; Uglov, V.V.; Anishchik, V.M.; Stalmashonak, A.K.; Astashinski, V.M.

    2004-01-01

Full text: The investigation of the tribological properties of two high-speed steels, AISI M2 and AISI T1, treated by a nitrogen compression plasma flow was the main aim of this work. Two types of samples were investigated, before and after quenching. The plasma flow was produced in a magneto-plasma compressor. The impulse duration was ∼100 μs, the number of impulses varied in the range of 1-5, the nitrogen pressure in the chamber was 400-4000 Pa, and the energy absorbed by the sample was 2-10 J/cm² per impulse. Tribological properties were examined by means of a tribometer TAYl under conditions of dry friction. The Vickers microhardness was measured with a PMT3 hardness tester. X-ray diffraction analysis, Auger electron spectroscopy, scanning electron microscopy, and energy-dispersive microanalysis were used for sample characterization. Earlier investigations showed that the compression plasma flow is well suited to improving the tribological properties of iron and low-alloyed steels due to the formation of hardening nitrides in the near-surface layer. It was found that in the case of high-speed steels only the non-quenched samples had increased hardness after treatment. The latter can be explained by the formation of hardening nitrides, though the phase analysis did not clearly reveal their presence. The element composition confirmed the presence of nitrogen in the surface layer with a concentration of up to 30 at.%. The treatment of quenched samples almost always resulted in a hardness decrease due to the dissolution or partial dissolution of alloying-element carbides: M6C, MC, M23C6. The rate of carbide dissolution increased with the growth of the energy absorbed by the sample. The treated samples showed a lower value of the friction coefficient than the untreated ones. This can be explained by the formation of nitrogenous austenite, which was found by the phase analysis. At the same time the compression plasma flow strongly influenced surface

  17. A practical discrete-adjoint method for high-fidelity compressible turbulence simulations

    International Nuclear Information System (INIS)

    Vishnampet, Ramanathan; Bodony, Daniel J.; Freund, Jonathan B.

    2015-01-01

    Methods and computing hardware advances have enabled accurate predictions of complex compressible turbulence phenomena, such as the generation of jet noise that motivates the present effort. However, limited understanding of underlying physical mechanisms restricts the utility of such predictions since they do not, by themselves, indicate a route to design improvements. Gradient-based optimization using adjoints can circumvent the flow complexity to guide designs, though this is predicated on the availability of a sufficiently accurate solution of the forward and adjoint systems. These are challenging to obtain, since both the chaotic character of the turbulence and the typical use of discretizations near their resolution limits in order to efficiently represent its smaller scales will amplify any approximation errors made in the adjoint formulation. Formulating a practical exact adjoint that avoids such errors is especially challenging if it is to be compatible with state-of-the-art simulation methods used for the turbulent flow itself. Automatic differentiation (AD) can provide code to calculate a nominally exact adjoint, but existing general-purpose AD codes are inefficient to the point of being prohibitive for large-scale turbulence simulations. Here, we analyze the compressible flow equations as discretized using the same high-order workhorse methods used for many high-fidelity compressible turbulence simulations, and formulate a practical space–time discrete-adjoint method without changing the basic discretization. A key step is the definition of a particular discrete analog of the continuous norm that defines our cost functional; our selection leads directly to an efficient Runge–Kutta-like scheme, though it would be just first-order accurate if used outside the adjoint formulation for time integration, with finite-difference spatial operators for the adjoint system. Its computational cost only modestly exceeds that of the flow equations. We confirm that

  18. Effect of radiation losses on the compression of hydrogen by imploding solid liners

    International Nuclear Information System (INIS)

    Hussey, T.W.; Kiuttu, G.F.; Degnan, J.H.; Peterkin, R.E.; Smith, G.A.; Turchi, P.J.

    1996-01-01

Quasispherical solid liner implosions with little or no instability growth have been achieved experimentally. Applications for such implosions include the uniform, shock-free compression of an on-axis target. One proposed means of obtaining such compression is to inject a 1 eV hydrogen plasma working fluid between the liner and the target, and implode the liner around it. The high initial temperature ensures that the sound speed within the liner is always greater than the inner-surface implosion velocity of the liner, and the initial density is chosen so that the volume of the working fluid at peak compression is sufficiently large that perfectly spherical convergence of the liner is not required. One concern with such an approach is that energy losses associated with ionization and radiation will degrade the effective gamma of the compression. To isolate and, therefore, understand these effects, the authors have developed a simple zero-dimensional model of the liner implosion that accurately accounts for the shape and thickness of the liner as it implodes and compresses the working fluid. Based on simple considerations, they make a crude estimate of the range of initial densities of interest for this technique. They then observe that within this density range, for the temperatures of interest, the lines are strongly self-absorbed, so that the transport of radiation is dominated by bound-free and free-free processes

  19. Low-Complexity Lossless and Near-Lossless Data Compression Technique for Multispectral Imagery

    Science.gov (United States)

    Xie, Hua; Klimesh, Matthew A.

    2009-01-01

This work extends the lossless data compression technique described in "Fast Lossless Compression of Multispectral-Image Data" (NPO-42517), NASA Tech Briefs, Vol. 30, No. 8 (August 2006), page 26. The original technique was extended to include a near-lossless compression option, allowing substantially smaller compressed file sizes when a small amount of distortion can be tolerated. Near-lossless compression is obtained by including a quantization step prior to encoding of prediction residuals. The original technique uses lossless predictive compression and is designed for use on multispectral imagery. A lossless predictive data compression algorithm compresses a digitized signal one sample at a time as follows: First, a sample value is predicted from previously encoded samples. The difference between the actual sample value and the prediction is called the prediction residual. The prediction residual is encoded into the compressed file. The decompressor can form the same predicted sample and can decode the prediction residual from the compressed file, and so can reconstruct the original sample. A lossless predictive compression algorithm can generally be converted to a near-lossless compression algorithm by quantizing the prediction residuals prior to encoding them. In this case, since the reconstructed sample values will not be identical to the original sample values, the encoder must determine the values that will be reconstructed and use these values for predicting later sample values. The technique described here uses this method, starting with the original technique, to allow near-lossless compression. The extension to allow near-lossless compression adds the ability to achieve much more compression when small amounts of distortion are tolerable, while retaining the low complexity and good overall compression effectiveness of the original algorithm.
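The predict/residual/quantize loop described above can be written out in a few lines. This sketch uses the simplest possible predictor (the previous sample) purely for illustration; it is not the multispectral predictor of the original technique:

```python
def encode(samples, q):
    """Previous-sample predictor with uniform residual quantization.
    q = 1 gives lossless coding; larger q trades bounded distortion for
    smaller residual magnitudes (better entropy coding downstream)."""
    prediction = 0
    out = []
    for s in samples:
        residual = s - prediction
        quantized = round(residual / q)       # the only lossy step
        out.append(quantized)
        prediction += quantized * q           # track the *decoder's* state
    return out

def decode(residuals, q):
    """Rebuild samples by accumulating dequantized residuals."""
    value, out = 0, []
    for r in residuals:
        value += r * q
        out.append(value)
    return out

data = [10, 12, 13, 13, 40, 41, 39, 38]
enc = encode(data, q=2)
dec = decode(enc, q=2)
max_err = max(abs(a - b) for a, b in zip(data, dec))   # bounded by q // 2
```

Note that the encoder predicts from the value the decoder will reconstruct, not from the original sample; this is exactly the point made in the abstract, and it is what keeps the error bounded per sample rather than accumulating.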

  20. Management-oriented analysis of sediment yield time compression

    Science.gov (United States)

    Smetanova, Anna; Le Bissonnais, Yves; Raclot, Damien; Nunes, João P.; Licciardello, Feliciana; Le Bouteiller, Caroline; Latron, Jérôme; Rodríguez Caballero, Emilio; Mathys, Nicolle; Klotz, Sébastien; Mekki, Insaf; Gallart, Francesc; Solé Benet, Albert; Pérez Gallego, Nuria; Andrieux, Patrick; Moussa, Roger; Planchon, Olivier; Marisa Santos, Juliana; Alshihabi, Omran; Chikhaoui, Mohamed

    2016-04-01

The understanding of inter- and intra-annual variability of sediment yield is important for land use planning and management decisions for sustainable landscapes. It is of particular importance in regions where the annual sediment yield is often highly dependent on the occurrence of a few large events which produce the majority of sediments, such as in the Mediterranean. This phenomenon is referred to as time compression, and the relevance of its consideration grows with the increase in magnitude and frequency of extreme events due to climate change in many other regions. So far, time compression has been studied mainly on event datasets, which provide high resolution but demand analysis that is costly in terms of data amount, required data precision, and methods. In order to provide an alternative simplified approach, the monthly and yearly time compressions were evaluated in eight Mediterranean catchments (of the R-OSMed network), representing a wide range of Mediterranean landscapes. The annual sediment yield varied between 0 and ~27100 Mg·km-2·a-1, and the monthly sediment yield between 0 and ~11600 Mg·km-2·month-1. The catchments' sediment yield was unequally distributed at the inter- and intra-annual scale, and large differences were observed between the catchments. Two types of time compression were distinguished: (i) inter-annual (based on annual values) and (ii) intra-annual (based on monthly values). Four different rainfall-runoff-sediment yield time compression patterns were observed: (i) no time compression of rainfall, runoff, or sediment yield; (ii) low time compression of rainfall and runoff, but high compression of sediment yield; (iii) low compression of rainfall and high compression of runoff and sediment yield; and (iv) low, medium and high compression of rainfall, runoff and sediment yield. All four patterns were present at the inter-annual scale, while at the intra-annual scale only the latter two were present. This implies that high sediment yields occurred in

  1. Prospective Nonrandomized Trial of Manual Compression and Angio-Seal and Starclose Arterial Closure Devices in Common Femoral Punctures

    International Nuclear Information System (INIS)

    Ratnam, Lakshmi A.; Raja, Jowad; Munneke, Graham J.; Morgan, Robert A.; Belli, Anna-Maria

    2007-01-01

We compared the use of manual compression with the Angio-Seal and Starclose arterial closure devices for achieving hemostasis following common femoral artery (CFA) punctures, in order to evaluate safety and efficacy. A prospective, nonrandomized, single-center study was carried out on all patients undergoing CFA punctures over 1 year. Hemostasis was achieved using manual compression in 108 cases, Angio-Seal in 167 cases, and Starclose in 151 cases. Device-failure rates were low and not significantly different between the two groups (manual compression and closure devices; p = 0.8). Significantly more Starclose patients (11.9%) than Angio-Seal patients (2.4%) with successful initial deployment subsequently required additional manual compression to achieve hemostasis (p < 0.0001). A significant number of very thin patients failed to achieve hemostasis (p = 0.014). Major complications were seen in 2.9% of Angio-Seal, 1.9% of Starclose, and 3.7% of manual compression patients, with no significant difference demonstrated; major complications occurred in 4.7% of female patients compared to 1.3% of males (p = 0.0415). All three methods showed comparable safety and efficacy. Very thin patients are more likely to have failed hemostasis with the Starclose device, although this did not translate into an increased complication rate. There is a significantly increased risk of major puncture-site complications in women with peripheral vascular disease

  2. ADVANCED RECIPROCATING COMPRESSION TECHNOLOGY (ARCT)

    Energy Technology Data Exchange (ETDEWEB)

    Danny M. Deffenbaugh; Klaus Brun; Ralph E. Harris; J. Pete Harrell; Robert J. Mckee; J. Jeffrey Moore; Steven J. Svedeman; Anthony J. Smalley; Eugene L. Broerman; Robert A Hart; Marybeth G. Nored; Ryan S. Gernentz; Shane P. Siebenaler

    2005-12-01

The U.S. natural gas pipeline industry is facing the twin challenges of increased flexibility and capacity expansion. To meet these challenges, the industry requires improved choices in gas compression to address new construction and enhancement of the currently installed infrastructure. The current fleet of installed reciprocating compression is primarily slow-speed integral machines. Most new reciprocating compression is, and will be, large, high-speed separable units. The major challenges with the fleet of slow-speed integral machines are limited flexibility and a large range in performance. In an attempt to increase flexibility, many operators are choosing to single-act cylinders, which reduces reliability and integrity. While the best-performing units in the fleet exhibit thermal efficiencies between 90% and 92%, the low performers run as low as 50%, with the mean at about 80%. The major cause of this large disparity is installation losses in the pulsation control system. In the better performers, the losses are about evenly split between installation losses and valve losses. The major challenges for high-speed machines are cylinder nozzle pulsations, mechanical vibrations due to cylinder stretch, short valve life, and low thermal performance. To shift nozzle pulsation to higher orders, nozzles are shortened, and to dampen the amplitudes, orifices are added. The shortened nozzles result in mechanical coupling with the cylinder, thereby causing increased vibration due to the cylinder stretch mode. Valve life is even shorter than for slow-speed machines and can be on the order of a few months. The thermal efficiency is 10% to 15% lower than that of slow-speed equipment, with the best performers in the 75% to 80% range.
The goal of this advanced reciprocating compression program is to develop technology for both high-speed and low-speed compression that will expand unit flexibility, increase thermal efficiency, and increase reliability and integrity

  3. Patterns of neurovascular compression in patients with classic trigeminal neuralgia: A high-resolution MRI-based study

    International Nuclear Information System (INIS)

    Lorenzoni, José; David, Philippe; Levivier, Marc

    2012-01-01

Purpose: To describe the anatomical characteristics and patterns of neurovascular compression in patients suffering from classic trigeminal neuralgia (CTN), using high-resolution magnetic resonance imaging (MRI). Materials and methods: The anatomy of the trigeminal nerve, brain stem and the vascular structures related to this nerve was analysed in 100 consecutive patients treated with Gamma Knife radiosurgery for CTN between December 1999 and September 2004. MRI studies (T1, T1 enhanced and T2-SPIR) with simultaneous axial, coronal and sagittal visualization were dynamically assessed using the GammaPlan™ software. Three-dimensional reconstructions were also developed in some representative cases. Results: In 93 patients (93%), one or several vascular structures were in contact either with the trigeminal nerve or close to its origin in the pons. The superior cerebellar artery was involved in 71 cases (76%). Other vessels identified were the antero-inferior cerebellar artery, the basilar artery, the vertebral artery, and some venous structures. Vascular compression was found anywhere along the trigeminal nerve. The mean distance between the nerve compression and the origin of the nerve in the brainstem was 3.76 ± 2.9 mm (range 0–9.8 mm). In 39 patients (42%) the vascular compression was located proximally, and in 42 (45%) it was located distally. Nerve dislocation or distortion by the vessel was observed in 30 cases (32%). Conclusions: The findings of this study are similar to those reported in surgical and autopsy series. This non-invasive MRI-based approach could be useful for diagnostic and therapeutic decisions in CTN, and it could help to understand its pathogenesis.

  4. Combustion visualization and experimental study on spark induced compression ignition (SICI) in gasoline HCCI engines

    International Nuclear Information System (INIS)

    Wang Zhi; He Xu; Wang Jianxin; Shuai Shijin; Xu Fan; Yang Dongbo

    2010-01-01

Spark induced compression ignition (SICI) is a relatively new combustion control technology and a promising combustion mode for high-efficiency gasoline engines. SICI can be divided into two categories, SACI and SI-CI. This paper investigated the SICI combustion process using both combustion visualization and engine experiments. The ignition process of SICI was captured by high-speed photography in an optical engine with different compression ratios. The results show that SICI is a combustion mode combining partial flame propagation with main auto-ignition. The spark ignites the local mixture near the spark electrodes, and flame propagation occurs before the homogeneous mixture auto-ignites. The heat release from the centrally burned zone due to flame propagation increases the in-cylinder pressure and temperature, causing the unburned mixture to auto-ignite. The SICI combustion process can thus be divided into three stages: spark induction, flame propagation, and compression ignition. The SICI combustion mode differs from spark ignition (SI) knocking in its combustion and emission characteristics. Furthermore, three typical combustion modes, HCCI, SICI, and SI, were compared on a gasoline direct injection engine with a higher compression ratio and switchable cam profiles. The results show that SICI has a distinctive combustion characteristic of two-stage heat release with a lower pressure rise rate. The SICI combustion mode can be controlled by spark timing and EGR rate and utilized as an effective method for high-load extension in gasoline HCCI engines. A maximum IMEP of 0.82 MPa was achieved with relatively low NOx emissions and high thermal efficiency. The SICI combustion mode can be applied in the medium-high load region of high-efficiency gasoline engines.

  5. Combustion visualization and experimental study on spark induced compression ignition (SICI) in gasoline HCCI engines

    Energy Technology Data Exchange (ETDEWEB)

    Wang Zhi, E-mail: wangzhi@tsinghua.edu.c [State Key Laboratory of Automotive Safety and Energy, Tsinghua University, Beijing 100084 (China); He Xu; Wang Jianxin; Shuai Shijin; Xu Fan; Yang Dongbo [State Key Laboratory of Automotive Safety and Energy, Tsinghua University, Beijing 100084 (China)

    2010-05-15

Spark induced compression ignition (SICI) is a relatively new combustion control technology and a promising combustion mode for high-efficiency gasoline engines. SICI can be divided into two categories, SACI and SI-CI. This paper investigated the SICI combustion process using both combustion visualization and engine experiments. The ignition process of SICI was captured by high-speed photography in an optical engine with different compression ratios. The results show that SICI is a combustion mode combining partial flame propagation with main auto-ignition. The spark ignites the local mixture near the spark electrodes, and flame propagation occurs before the homogeneous mixture auto-ignites. The heat release from the centrally burned zone due to flame propagation increases the in-cylinder pressure and temperature, causing the unburned mixture to auto-ignite. The SICI combustion process can thus be divided into three stages: spark induction, flame propagation, and compression ignition. The SICI combustion mode differs from spark ignition (SI) knocking in its combustion and emission characteristics. Furthermore, three typical combustion modes, HCCI, SICI, and SI, were compared on a gasoline direct injection engine with a higher compression ratio and switchable cam profiles. The results show that SICI has a distinctive combustion characteristic of two-stage heat release with a lower pressure rise rate. The SICI combustion mode can be controlled by spark timing and EGR rate and utilized as an effective method for high-load extension in gasoline HCCI engines. A maximum IMEP of 0.82 MPa was achieved with relatively low NOx emissions and high thermal efficiency. The SICI combustion mode can be applied in the medium-high load region of high-efficiency gasoline engines.

  6. Experimental Investigation and Prediction of Compressive Strength of Ultra-High Performance Concrete Containing Supplementary Cementitious Materials

    Directory of Open Access Journals (Sweden)

    Jisong Zhang

    2017-01-01

Full Text Available Ultra-high performance concrete (UHPC) has superior mechanical properties and durability compared to normal strength concrete. However, its high cement content, high environmental impact, and high initial cost are regarded as disadvantages restricting its wider application. Incorporation of supplementary cementitious materials (SCMs) in UHPC is an effective way to reduce the amount of cement needed while improving sustainability and cost. This paper investigates the mechanical properties and microstructure of UHPC containing fly ash (FA) and silica fume (SF) with the aim of addressing this issue. The results indicate that, on the basis of 30% FA replacement, the incorporation of 10% and 20% SF showed equivalent or higher mechanical properties compared to the reference samples. The microstructure and pore volume of the UHPCs were also examined. Furthermore, to minimise the experimental workload of future studies, a prediction model was developed to predict the compressive strength of UHPC using artificial neural networks (ANNs). The results indicate that the developed ANN model has high accuracy and can be used for the prediction of the compressive strength of UHPC with these SCMs.
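An ANN regression model of the kind described in this record can be illustrated with a minimal sketch. Everything below is a hypothetical stand-in: the mix-design features, the synthetic "strength" data, and the small network shape are illustrative assumptions, not the paper's actual model or dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical mix-design features: [cement, fly ash, silica fume] fractions.
# Targets are synthetic placeholders, not measured UHPC strengths.
X = rng.uniform(0.0, 1.0, size=(200, 3))
y = 80 + 40 * X[:, 0] - 5 * X[:, 1] + 10 * X[:, 2] + rng.normal(0, 1, 200)

# Standardise the target for stable training.
y_mu, y_sd = y.mean(), y.std()
t = ((y - y_mu) / y_sd)[:, None]

# One-hidden-layer network (3 -> 8 -> 1) trained by full-batch gradient descent.
W1 = rng.normal(0, 0.5, (3, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.1
for _ in range(2000):
    H = np.tanh(X @ W1 + b1)           # hidden-layer activations
    out = H @ W2 + b2                  # linear output
    err = out - t
    gW2 = H.T @ err / len(X); gb2 = err.mean(0)
    dH = (err @ W2.T) * (1 - H ** 2)   # backprop through tanh
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

# Un-standardise predictions and score the fit.
pred = (np.tanh(X @ W1 + b1) @ W2 + b2).ravel() * y_sd + y_mu
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
```

In practice such a model would be validated on held-out mixes; fitting and scoring on the same data, as here, only demonstrates the mechanics.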

  7. Achieving High Reliability with People, Processes, and Technology.

    Science.gov (United States)

    Saunders, Candice L; Brennan, John A

    2017-01-01

    High reliability as a corporate value in healthcare can be achieved by meeting the "Quadruple Aim" of improving population health, reducing per capita costs, enhancing the patient experience, and improving provider wellness. This drive starts with the board of trustees, CEO, and other senior leaders who ingrain high reliability throughout the organization. At WellStar Health System, the board developed an ambitious goal to become a top-decile health system in safety and quality metrics. To achieve this goal, WellStar has embarked on a journey toward high reliability and has committed to Lean management practices consistent with the Institute for Healthcare Improvement's definition of a high-reliability organization (HRO): one that is committed to the prevention of failure, early identification and mitigation of failure, and redesign of processes based on identifiable failures. In the end, a successful HRO can provide safe, effective, patient- and family-centered, timely, efficient, and equitable care through a convergence of people, processes, and technology.

  8. Assessment of the impact of modeling axial compression on PET image reconstruction.

    Science.gov (United States)

    Belzunce, Martin A; Reader, Andrew J

    2017-10-01

frequencies. Modeling the axial compression also achieved a lower coefficient of variation, but with an increase in intervoxel correlations. The unmatched projector/backprojector achieved contrast values similar to those of the matched version at considerably lower reconstruction times, but at the cost of noisier images. For a line source scan, the reconstructions with modeling of the axial compression achieved similar resolution to the span 1 reconstructions. Axial compression applied to PET sinograms was found to have a negligible impact for span values lower than 7. For span values up to 21, the spatial resolution degradation due to axial compression can be almost completely compensated for by modeling this effect in the system matrix, at the expense of considerably longer processing times and higher intervoxel correlations, while retaining the storage benefit of compressed data. For even higher span values, the resolution loss cannot be completely compensated for, possibly due to an effective null space in the system. The use of an unmatched projector/backprojector proved to be a practical solution to compensate for the spatial resolution degradation at a reasonable computational cost, but can lead to noisier images. © 2017 The Authors. Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.

  9. Benign compression fractures of the spine: signal patterns

    International Nuclear Information System (INIS)

    Ryu, Kyung Nam; Choi, Woo Suk; Lee, Sun Wha; Lim, Jae Hoon

    1992-01-01

Fifteen patients with 38 compression fractures of the spine underwent magnetic resonance (MR) imaging. We retrospectively evaluated the MR images of these benign compression fractures. The MR images showed four patterns on T1-weighted images: normal signal (21), band-like low signal (8), low signal with preservation of the peripheral portion of the vertebral body (8), and diffuse low signal throughout the vertebral body (1). The low-signal portions changed to high signal intensity on T2-weighted images. In 7 of the 15 patients (11 compression fractures) there was a history of trauma; the remaining 8 patients (27 compression fractures) had no history of trauma. Benign compression fractures of the spine thus reveal variable signal intensities on MR imaging. These patterns may be useful in the interpretation of MR images of the spine

  10. The Statistical Analysis of Relation between Compressive and Tensile/Flexural Strength of High Performance Concrete

    Directory of Open Access Journals (Sweden)

    Kępniak M.

    2016-12-01

Full Text Available This paper addresses the tensile and flexural strength of HPC (high performance concrete). The aim of the paper is to analyse the efficiency of the models proposed in different codes. In particular, three design procedures are considered: ACI 318 [1], Eurocode 2 [2] and the Model Code 2010 [3]. The associations between the design tensile strength of concrete obtained from these three codes and the compressive strength are compared with experimental results of tensile and flexural strength using statistical tools. Experimental tensile strengths were obtained in the splitting test. Based on this comparison, conclusions are drawn regarding the fit between the design methods and the test data. The comparison shows that the tensile and flexural strength of HPC depend on more influential factors than compressive strength alone.
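The kind of code formulas being compared can be sketched for a single HPC grade. The expressions below are commonly cited SI forms of the EC2 mean tensile strength (for grades above C50/60) and the ACI 318 modulus of rupture; readers should consult the codes themselves for the exact provisions, symbols, and ranges of validity.

```python
import math

f_ck = 60.0          # characteristic cylinder compressive strength, MPa (example grade)
f_cm = f_ck + 8.0    # EC2 mean compressive strength, MPa

# Eurocode 2 mean tensile strength for grades above C50/60 (commonly cited form).
f_ctm_ec2 = 2.12 * math.log(1.0 + f_cm / 10.0)

# ACI 318 modulus of rupture (flexural strength), SI form, using f_ck as a
# stand-in for the specified strength f'c.
f_r_aci = 0.62 * math.sqrt(f_ck)
```

For this example grade the two formulas give roughly 4.4 MPa (EC2 tensile) and 4.8 MPa (ACI flexural); the paper's point is that such single-variable formulas cannot capture all the factors influencing HPC tensile behaviour.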

  11. Fast Detection of Compressively Sensed IR Targets Using Stochastically Trained Least Squares and Compressed Quadratic Correlation Filters

    KAUST Repository

    Millikan, Brian; Dutta, Aritra; Sun, Qiyu; Foroosh, Hassan

    2017-01-01

Target detection of potential threats at night can be performed with a costly high-resolution infrared focal plane array. Due to the compressibility of infrared image patches, the high-resolution requirement can be reduced while preserving target detection capability. For this reason, a compressive midwave infrared (MWIR) imager with a low-resolution focal plane array has been developed. As the most probable coefficient indices of the support set of the infrared image patches can be learned from training data, we develop stochastically trained least squares (STLS) for MWIR image reconstruction. Quadratic correlation filters (QCFs) have been shown to be effective for target detection, and there are several methods for designing a filter. Using the same measurement matrix as in STLS, we construct a compressed quadratic correlation filter (CQCF) employing filter designs for compressed infrared target detection. We apply the CQCF to the U.S. Army Night Vision and Electronic Sensors Directorate dataset. Numerical simulations show that the recognition performance of our algorithm matches that of standard full reconstruction methods, but at a fraction of the execution time.
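The core idea behind an STLS-style reconstruction, learning the most probable support of a patch's coefficients and then solving a least-squares problem restricted to that support, can be sketched as follows. The dimensions, the Gaussian measurement matrix, and the "learned" support indices are illustrative assumptions, not the authors' actual design.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 64, 24, 4                   # signal length, measurements, sparsity

# Sparse "image patch" coefficients on a support assumed learned from training.
support = np.array([3, 10, 27, 50])   # hypothetical learned support set
x = np.zeros(n)
x[support] = rng.normal(0, 1, k)

A = rng.normal(0, 1 / np.sqrt(m), (m, n))   # random measurement matrix
y = A @ x                                   # compressed measurements

# Least squares restricted to the learned support: the reconstruction reduces
# to a small overdetermined solve instead of a full sparse-recovery problem.
x_hat = np.zeros(n)
x_hat[support], *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
```

Because the measurements are noiseless and the restricted system is overdetermined and full rank, the restricted least-squares solve recovers the patch exactly; this is what makes the approach much faster than full reconstruction.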

  12. Fast Detection of Compressively Sensed IR Targets Using Stochastically Trained Least Squares and Compressed Quadratic Correlation Filters

    KAUST Repository

    Millikan, Brian

    2017-05-02

Target detection of potential threats at night can be performed with a costly high-resolution infrared focal plane array. Due to the compressibility of infrared image patches, the high-resolution requirement can be reduced while preserving target detection capability. For this reason, a compressive midwave infrared (MWIR) imager with a low-resolution focal plane array has been developed. As the most probable coefficient indices of the support set of the infrared image patches can be learned from training data, we develop stochastically trained least squares (STLS) for MWIR image reconstruction. Quadratic correlation filters (QCFs) have been shown to be effective for target detection, and there are several methods for designing a filter. Using the same measurement matrix as in STLS, we construct a compressed quadratic correlation filter (CQCF) employing filter designs for compressed infrared target detection. We apply the CQCF to the U.S. Army Night Vision and Electronic Sensors Directorate dataset. Numerical simulations show that the recognition performance of our algorithm matches that of standard full reconstruction methods, but at a fraction of the execution time.

  13. Cation-dependent anomalous compression of gallosilicate zeolites with CGS topology: A high-pressure synchrotron powder diffraction study

    International Nuclear Information System (INIS)

    Lee, Yongjae; Lee, Hyun-Hwi; Lee, Dong Ryeol; Kim, Sun Jin; Kao, Chi-chang

    2008-01-01

The high-pressure compression behaviour of three different cation forms of a gallosilicate zeolite with CGS topology has been investigated using in situ synchrotron X-ray powder diffraction and a diamond-anvil cell technique. Under hydrostatic conditions mediated by a nominally penetrating pressure-transmitting medium, unit-cell length and volume compression is modulated by different degrees of pressure-induced hydration and accompanying channel distortion. In a Na-exchanged CGS (Na₁₀Ga₁₀Si₂₂O₆₄·16H₂O), the unit-cell volume expands by ca. 0.6% upon applying hydrostatic pressure to 0.2 GPa, whereas in an as-synthesized K-form (K₁₀Ga₁₀Si₂₂O₆₄·5H₂O) this initial volume expansion is suppressed to ca. 0.1% at 0.16 GPa. In the early stage of hydrostatic compression below ∼1 GPa, a relative decrease in the ellipticity of the non-planar 10-rings is observed, which then reverts to a gradual increase in ellipticity at higher pressures above ∼1 GPa, implying a change in the compression mechanism. In a Sr-exchanged sample (Sr₅Ga₁₀Si₂₂O₆₄·19H₂O), on the other hand, no initial volume expansion is observed. Instead, a change in the slope of volume contraction is observed near 1.5 GPa, which leads to a 2-fold increase in the compressibility. This is interpreted as pressure-induced rearrangement of water molecules to facilitate further volume contraction at higher pressures. - Graphical abstract: Three different cation forms of gallosilicate CGS zeolites have been investigated using synchrotron X-ray powder diffraction and a diamond-anvil cell. Under hydrostatic conditions, unit-cell lengths and volume show anomalous compression behaviours depending on the non-framework cation type and initial hydration level, which implies different modes of pressure-induced hydration and channel distortion

  14. The compressed word problem for groups

    CERN Document Server

    Lohrey, Markus

    2014-01-01

The Compressed Word Problem for Groups provides a detailed exposition of known results on the compressed word problem, emphasizing efficient algorithms for the compressed word problem in various groups. The author presents the necessary background along with the most recent results on the compressed word problem to create a cohesive, self-contained book accessible to computer scientists as well as mathematicians. Readers will quickly reach the frontier of current research, which makes the book especially appealing for students looking for a currently active research topic at the intersection of group theory and computer science. The word problem, introduced in 1910 by Max Dehn, is one of the most important decision problems in group theory. For many groups, highly efficient algorithms for the word problem exist. In recent years, a new technique based on data compression has been developed to provide more efficient algorithms for word problems, by representing long words over group generators in a compres...
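The compression technique alluded to here represents long words by straight-line programs (SLPs): context-free grammars in which every rule derives exactly one word, so a word of length exponential in the grammar size can be stored and queried succinctly. A minimal illustration (the rule set is a toy example, not taken from the book):

```python
# A straight-line program: each rule is either a terminal symbol or a
# concatenation of two earlier rules. Doubling rules like X4 and X5 let an
# SLP of size n derive a word of length on the order of 2**n.
rules = {
    "X1": "a",            # terminal
    "X2": "b",            # terminal
    "X3": ("X1", "X2"),   # X3 -> ab
    "X4": ("X3", "X3"),   # X4 -> abab
    "X5": ("X4", "X4"),   # X5 -> abababab
}

def slp_length(sym, memo=None):
    # Length of the word derived from `sym`, computed without decompressing.
    memo = {} if memo is None else memo
    if sym in memo:
        return memo[sym]
    rhs = rules[sym]
    n = 1 if isinstance(rhs, str) else slp_length(rhs[0], memo) + slp_length(rhs[1], memo)
    memo[sym] = n
    return n

def slp_expand(sym):
    # Full expansion; only viable for small examples.
    rhs = rules[sym]
    return rhs if isinstance(rhs, str) else slp_expand(rhs[0]) + slp_expand(rhs[1])
```

The compressed word problem asks whether the word derived by such a grammar represents the identity of a group, ideally in time polynomial in the grammar size rather than in the (possibly exponential) word length.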

  15. Compressed air system audit in a chemical company

    Energy Technology Data Exchange (ETDEWEB)

    Radgen, P. [Fraunhofer ISI, Karlsruhe (Germany)

    2005-07-01

This paper describes the results achieved during a compressed air system audit at a chemical company in Switzerland. The aim of the audit, conducted in Muttenz at the site of Clariant Schweiz AG, was to analyse the installed compressed air system and its operation in order to identify energy and cost saving potentials. Because measurement equipment was already installed, it was not necessary to install a new meter; instead, the existing data were extracted from the control system and regrouped for the analysis. Aggregated data for 2003 and 2004 and a set of detailed data acquired in the course of one week were used for the analysis. The audit identified a number of measures to improve the compressed air system, but concluded that the saving potentials at this site are below average. The audit covered the compressors, the air treatment and the air distribution up to the production and storage buildings. The identified saving potential was quantified as about 300,000 kWh/a, or 13.3% of the compressed air energy demand. The cost savings were calculated to be around 41,852 Swiss francs. (orig.)
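As a quick consistency check on the reported figures, the savings of about 300,000 kWh/a at 13.3% of demand and 41,852 Swiss francs imply a total compressed-air energy demand of roughly 2.26 GWh/a and an average electricity price of about 0.14 CHF/kWh (derived values, not stated in the record):

```python
# Reported audit figures (from the abstract above).
savings_kwh = 300_000      # kWh per year
share = 0.133              # 13.3% of compressed-air energy demand
savings_chf = 41_852       # Swiss francs per year

total_demand_kwh = savings_kwh / share       # implied annual energy demand
implied_tariff = savings_chf / savings_kwh   # implied electricity price, CHF/kWh
```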

  16. Multispectral Image Compression Based on DSC Combined with CCSDS-IDC

    Directory of Open Access Journals (Sweden)

    Jin Li

    2014-01-01

Full Text Available A remote sensing multispectral image compression encoder requires low complexity, high robustness, and high performance because it usually works on a satellite where resources such as power, memory, and processing capacity are limited. For multispectral images, compression algorithms based on 3D transforms (like the 3D DWT and 3D DCT) are too complex to be implemented in space missions. In this paper, we propose a compression algorithm for multispectral images based on distributed source coding (DSC) combined with the image data compression (IDC) approach recommended by CCSDS, which has low complexity, high robustness, and high performance. First, each band is sparsely represented by the DWT to obtain wavelet coefficients. Then, the wavelet coefficients are encoded by a bit plane encoder (BPE). Finally, the BPE is merged with the Slepian-Wolf (SW) DSC strategy based on QC-LDPC codes in a deeply coupled way to remove the residual redundancy between adjacent bands. A series of multispectral images is used to test our algorithm. Experimental results show that the proposed DSC-CCSDS-based algorithm has better compression performance than traditional compression approaches.
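The per-band sparse representation step (a DWT followed by coefficient coding) can be illustrated with a one-level Haar transform and simple magnitude thresholding as a stand-in for the bit-plane encoder. This is an illustrative sketch on synthetic data, not the CCSDS-IDC BPE or the QC-LDPC machinery itself.

```python
import numpy as np

def haar1d(v):
    # One-level 1-D orthonormal Haar transform: averages then differences.
    a = (v[0::2] + v[1::2]) / np.sqrt(2)
    d = (v[0::2] - v[1::2]) / np.sqrt(2)
    return np.concatenate([a, d])

def ihaar1d(c):
    half = len(c) // 2
    a, d = c[:half], c[half:]
    v = np.empty(len(c))
    v[0::2] = (a + d) / np.sqrt(2)
    v[1::2] = (a - d) / np.sqrt(2)
    return v

def haar2d(img):
    # Separable 2-D transform: rows, then columns.
    rows = np.apply_along_axis(haar1d, 1, img)
    return np.apply_along_axis(haar1d, 0, rows)

def ihaar2d(c):
    # Invert in reverse order: columns, then rows.
    rows = np.apply_along_axis(ihaar1d, 0, c)
    return np.apply_along_axis(ihaar1d, 1, rows)

rng = np.random.default_rng(2)
# Smooth synthetic "band": integrated noise concentrates energy at low frequencies.
band = rng.normal(0, 1, (8, 8)).cumsum(axis=0).cumsum(axis=1)

coeffs = haar2d(band)
# Keep only the largest-magnitude quarter of the coefficients (a crude stand-in
# for progressive bit-plane coding of significant coefficients).
thresh = np.quantile(np.abs(coeffs), 0.75)
sparse = np.where(np.abs(coeffs) >= thresh, coeffs, 0.0)
recon = ihaar2d(sparse)
rel_err = np.linalg.norm(band - recon) / np.linalg.norm(band)
```

Because the band is smooth, most of its energy sits in the retained low-frequency coefficients, so the reconstruction error stays small even after discarding three quarters of the coefficients.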

  17. Multispectral image compression based on DSC combined with CCSDS-IDC.

    Science.gov (United States)

    Li, Jin; Xing, Fei; Sun, Ting; You, Zheng

    2014-01-01

A remote sensing multispectral image compression encoder requires low complexity, high robustness, and high performance because it usually works on a satellite where resources such as power, memory, and processing capacity are limited. For multispectral images, compression algorithms based on 3D transforms (like the 3D DWT and 3D DCT) are too complex to be implemented in space missions. In this paper, we propose a compression algorithm for multispectral images based on distributed source coding (DSC) combined with the image data compression (IDC) approach recommended by CCSDS, which has low complexity, high robustness, and high performance. First, each band is sparsely represented by the DWT to obtain wavelet coefficients. Then, the wavelet coefficients are encoded by a bit plane encoder (BPE). Finally, the BPE is merged with the Slepian-Wolf (SW) DSC strategy based on QC-LDPC codes in a deeply coupled way to remove the residual redundancy between adjacent bands. A series of multispectral images is used to test our algorithm. Experimental results show that the proposed DSC-CCSDS-based algorithm has better compression performance than traditional compression approaches.

  18. An Adaptive Joint Sparsity Recovery for Compressive Sensing Based EEG System

    Directory of Open Access Journals (Sweden)

    Hamza Djelouat

    2017-01-01

Full Text Available The last decade has witnessed tremendous efforts to shape Internet of things (IoT) platforms to be well suited for healthcare applications. These platforms comprise a network of wireless sensors that monitor several physical and physiological quantities. For instance, long-term monitoring of brain activity using wearable electroencephalogram (EEG) sensors is widely exploited in the clinical diagnosis of epileptic seizures and sleep disorders. However, the deployment of such platforms is challenged by high power consumption and system complexity. Energy efficiency can be achieved by exploiting efficient compression techniques such as compressive sensing (CS). CS is an emerging theory that enables compressed acquisition using well-designed sensing matrices. Moreover, system complexity can be reduced by using hardware-friendly structured sensing matrices. This paper quantifies the performance of CS-based multichannel EEG monitoring. In addition, it exploits the joint sparsity of multichannel EEG using the subspace pursuit (SP) algorithm as well as a designed sparsifying basis in order to improve reconstruction quality. Furthermore, the paper proposes a modification to the SP algorithm based on an adaptive selection approach to further improve the performance in terms of reconstruction quality, execution time, and robustness of the recovery process.
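A minimal single-channel version of the subspace pursuit recovery step can be sketched as follows. The problem sizes and Gaussian sensing matrix are illustrative assumptions; the paper's joint-sparsity, sparsifying-basis, and adaptive-selection extensions are not shown.

```python
import numpy as np

def subspace_pursuit(A, y, k, iters=10):
    # Minimal subspace pursuit (after Dai & Milenkovic) for k-sparse recovery.
    m, n = A.shape
    # Initial support: indices of the k largest correlations with y.
    support = np.argsort(np.abs(A.T @ y))[-k:]
    x = np.zeros(n)
    for _ in range(iters):
        # Residual of the best least-squares fit on the current support.
        coef = np.linalg.lstsq(A[:, support], y, rcond=None)[0]
        r = y - A[:, support] @ coef
        # Expand the support with k new candidates, then prune back to k.
        cand = np.union1d(support, np.argsort(np.abs(A.T @ r))[-k:])
        z, *_ = np.linalg.lstsq(A[:, cand], y, rcond=None)
        support = cand[np.argsort(np.abs(z))[-k:]]
        x = np.zeros(n)
        x[support], *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    return x

rng = np.random.default_rng(3)
n, m, k = 128, 64, 5                        # ambient dim, measurements, sparsity
idx = rng.choice(n, k, replace=False)
x_true = np.zeros(n)
x_true[idx] = rng.uniform(1.0, 2.0, k) * rng.choice([-1.0, 1.0], k)

A = rng.normal(0, 1 / np.sqrt(m), (m, n))   # Gaussian sensing matrix
y = A @ x_true                              # compressed "EEG" measurements
x_rec = subspace_pursuit(A, y, k)
rel_err = np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true)
```

With noiseless measurements and a well-conditioned random matrix, the pruned least-squares steps converge to the true support, after which the restricted solve is exact.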

  19. Crystal and Particle Engineering Strategies for Improving Powder Compression and Flow Properties to Enable Continuous Tablet Manufacturing by Direct Compression.

    Science.gov (United States)

    Chattoraj, Sayantan; Sun, Changquan Calvin

    2018-04-01

Continuous manufacturing of tablets has many advantages, including batch size flexibility, demand-adaptive scale-up or scale-down, consistent product quality, a small operational footprint, and increased manufacturing efficiency. Its simplicity makes direct compression the most suitable process for continuous tablet manufacturing. However, deficiencies in the powder flow and compression of active pharmaceutical ingredients (APIs) limit the range of drug loading that can routinely be considered for direct compression. For the widespread adoption of continuous direct compression, effective API engineering strategies to address powder flow and compression problems are needed. Appropriate implementation of these strategies would facilitate the design of high-quality, robust drug products, as stipulated by the Quality-by-Design framework. Here, several crystal and particle engineering strategies for improving powder flow and compression properties are summarized. The focus is on the underlying materials science, which is the foundation for effective API engineering to enable successful continuous manufacturing by the direct compression process. Copyright © 2018 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.

  20. Compressed gas fuel storage system

    Science.gov (United States)

    Wozniak, John J.; Tiller, Dale B.; Wienhold, Paul D.; Hildebrand, Richard J.

    2001-01-01

    A compressed gas vehicle fuel storage system comprised of a plurality of compressed gas pressure cells supported by shock-absorbing foam positioned within a shape-conforming container. The container is dimensioned relative to the compressed gas pressure cells whereby a radial air gap surrounds each compressed gas pressure cell. The radial air gap allows pressure-induced expansion of the pressure cells without resulting in the application of pressure to adjacent pressure cells or physical pressure to the container. The pressure cells are interconnected by a gas control assembly including a thermally activated pressure relief device, a manual safety shut-off valve, and means for connecting the fuel storage system to a vehicle power source and a refueling adapter. The gas control assembly is enclosed by a protective cover attached to the container. The system is attached to the vehicle with straps to enable the chassis to deform as intended in a high-speed collision.

  1. Compressed-air work is entering the field of high pressures.

    Science.gov (United States)

    Le Péchon, J Cl; Gourdon, G

    2010-01-01

Since 1850, compressed-air work has been used to prevent shafts or tunnels under construction from flooding. Until the 1980s, workers dug in compressed-air environments. Since the introduction of tunnel boring machines (TBMs), very little digging under pressure is needed. However, the wearing out of cutter-head tools requires inspection and repair, and compressed-air workers enter the pressurized working chamber only occasionally to perform such repairs. Pressures between 3.5 and 4.5 bar, which lie outside a reasonable range for air breathing, had been reached by 2002. Offshore deep-diving technology had to be adapted to TBM work. Several sites have used mixed gases: in Japan for deep shaft sinking (4.8 bar), in The Netherlands at the Western Scheldt tunnels (6.9 bar), in Russia for the St. Petersburg Metro (5.8 bar) and in the United States at Seattle (5.8 bar). Several tunnel projects are in progress that may involve higher pressures: Hallandsås (Sweden), with interventions in heliox saturation up to 13 bar, and Lake Mead (U.S.), with interventions to about 12 bar (2010). Research on TBMs and grouting technologies aims to reduce the need for hyperbaric work. Adapted international rules, expertise and services for saturation work, shuttles and trained personnel matching industrial requirements are the challenges.

  2. High-Performance Motion Estimation for Image Sensors with Video Compression

    Directory of Open Access Journals (Sweden)

    Weizhi Xu

    2015-08-01

    It is important to reduce the time cost of video compression for image sensors in video sensor networks. Motion estimation (ME) is the most time-consuming part of video compression. Previous work on ME exploited intra-frame data reuse in a reference frame to improve the time efficiency but neglected inter-frame data reuse. We propose a novel inter-frame data reuse scheme which can exploit both intra-frame and inter-frame data reuse for ME in video compression (VC-ME). Pixels of reconstructed frames are kept on-chip until they are used by the next current frame, to avoid off-chip memory access. On-chip buffers with smart schedules of data access are designed to perform the new data reuse scheme. Three levels of the proposed inter-frame data reuse scheme are presented and analyzed. They offer different trade-offs between off-chip bandwidth requirement and on-chip memory size. All three levels have better data reuse efficiency than their intra-frame counterparts, so off-chip memory traffic is reduced effectively. Compared with the traditional intra-frame data reuse scheme, the new inter-frame scheme reduces memory traffic by 50% for VC-ME.
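
    The record treats motion estimation as the bandwidth-dominant kernel but does not show the kernel itself. As a minimal illustration of what that kernel computes (not the paper's on-chip buffering scheme), here is a full-search block-matching sketch; block size, search radius and the SAD cost are common defaults, not values taken from the paper.

```python
import numpy as np

def full_search_me(cur, ref, block=8, radius=4):
    """Full-search block-matching motion estimation.

    For each block of the current frame, find the displacement within
    +/-radius in the reference frame minimising the sum of absolute
    differences (SAD). Every candidate read from `ref` is the kind of
    reference-frame access the data-reuse schemes try to keep on-chip.
    """
    h, w = cur.shape
    vectors = {}
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            blk = cur[by:by + block, bx:bx + block].astype(np.int32)
            best, best_sad = (0, 0), None
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + block > h or x + block > w:
                        continue  # candidate window outside the frame
                    cand = ref[y:y + block, x:x + block].astype(np.int32)
                    sad = int(np.abs(blk - cand).sum())
                    if best_sad is None or sad < best_sad:
                        best_sad, best = sad, (dy, dx)
            vectors[(by, bx)] = best
    return vectors
```

    The nested candidate loop rereads heavily overlapping reference windows, which is exactly why intra- and inter-frame reuse of the reference pixels pays off.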

  3. Compressive strength of concrete and mortar containing fly ash

    Science.gov (United States)

    Liskowitz, John W.; Wecharatana, Methi; Jaturapitakkul, Chai; Cerkanowicz, deceased, Anthony E.

    1997-01-01

    The present invention relates to concrete, mortar and other hardenable mixtures comprising cement and fly ash for use in construction. The invention includes a method for predicting the compressive strength of such a hardenable mixture, which is very important for planning a project. The invention also relates to hardenable mixtures comprising cement and fly ash which can achieve greater compressive strength than hardenable mixtures containing only cement over the time period relevant for construction. In a specific embodiment, a formula is provided that accurately predicts the compressive strength of concrete containing fly ash out to 180 days. In other specific examples, concrete and mortar containing about 15% to 25% fly ash as a replacement for cement, which are capable of meeting the design specifications required for building and highway construction, are provided. Such materials can thus significantly reduce construction costs.
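
    The record does not reproduce the prediction formula itself. Purely as an illustration of this kind of strength-vs-age prediction, the sketch below fits a generic logarithmic strength-gain model s(t) = a + b ln t to hypothetical cube-strength data and extrapolates to 180 days; the data, the model form and the fitted values are all assumptions, not the patented formula.

```python
import numpy as np

# Hypothetical cube-strength measurements (MPa) at early ages (days)
ages = np.array([3.0, 7.0, 14.0, 28.0])
strength = np.array([18.0, 26.0, 31.0, 36.0])

# Fit s(t) = a + b*ln(t) by least squares -- a common empirical
# strength-gain form, standing in for the patent's unpublished formula.
A = np.column_stack([np.ones_like(ages), np.log(ages)])
(a, b), *_ = np.linalg.lstsq(A, strength, rcond=None)

def predict(t_days):
    """Predicted compressive strength (MPa) at age t_days."""
    return a + b * np.log(t_days)

print(f"predicted 28-day strength: {predict(28.0):.1f} MPa")
print(f"extrapolated 180-day strength: {predict(180.0):.1f} MPa")
```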

  4. Music information retrieval in compressed audio files: a survey

    Science.gov (United States)

    Zampoglou, Markos; Malamos, Athanasios G.

    2014-07-01

    In this paper, we present an organized survey of the existing literature on music information retrieval systems in which descriptor features are extracted directly from the compressed audio files, without prior decompression to pulse-code modulation format. Avoiding the decompression step and utilizing the readily available compressed-domain information can significantly lighten the computational cost of a music information retrieval system, allowing application to large-scale music databases. We identify a number of systems relying on compressed-domain information and form a systematic classification of the features they extract, the retrieval tasks they tackle, and the degree to which they achieve an actual increase in the overall speed, as well as any resulting loss in accuracy. Finally, we discuss recent developments in the field, and the potential research directions they open toward ultra-fast, scalable systems.

  5. Antiproton compression and radial measurements

    CERN Document Server

    Andresen, G B; Bowe, P D; Bray, C C; Butler, E; Cesar, C L; Chapman, S; Charlton, M; Fajans, J; Fujiwara, M C; Funakoshi, R; Gill, D R; Hangst, J S; Hardy, W N; Hayano, R S; Hayden, M E; Humphries, A J; Hydomako, R; Jenkins, M J; Jorgensen, L V; Kurchaninov, L; Lambo, R; Madsen, N; Nolan, P; Olchanski, K; Olin, A; Page R D; Povilus, A; Pusa, P; Robicheaux, F; Sarid, E; Seif El Nasr, S; Silveira, D M; Storey, J W; Thompson, R I; Van der Werf, D P; Wurtele, J S; Yamazaki, Y

    2008-01-01

    Control of the radial profile of trapped antiproton clouds is critical to trapping antihydrogen. We report detailed measurements of the radial manipulation of antiproton clouds, including areal density compressions by factors as large as ten, achieved by manipulating spatially overlapped electron plasmas. We show detailed measurements of the near-axis antiproton radial profile, and its relation to that of the electron plasma. We also measure the outer radial profile by ejecting antiprotons to the trap wall using an octupole magnet.

  6. Analysis of the transient compressible vapor flow in heat pipes

    Science.gov (United States)

    Jang, J. H.; Faghri, A.; Chang, W. S.

    1989-01-01

    The transient compressible one-dimensional vapor flow dynamics in a heat pipe is modeled. The numerical results are obtained by using the implicit non-iterative Beam-Warming finite difference method. The model is tested for simulated heat pipe vapor flow and actual vapor flow in cylindrical heat pipes. A good comparison of the present transient results for the simulated heat pipe vapor flow with the previous results of a two-dimensional numerical model is achieved and the steady state results are in agreement with the existing experimental data. The transient behavior of the vapor flow under subsonic, sonic, and supersonic speeds and high mass flow rates is successfully predicted. The one-dimensional model also describes the vapor flow dynamics in cylindrical heat pipes at high temperatures.

  7. Analysis of the transient compressible vapor flow in heat pipe

    International Nuclear Information System (INIS)

    Jang, J.H.; Faghri, A.; Chang, W.S.

    1989-07-01

    The transient compressible one-dimensional vapor flow dynamics in a heat pipe is modeled. The numerical results are obtained by using the implicit non-iterative Beam-Warming finite difference method. The model is tested for simulated heat pipe vapor flow and actual vapor flow in cylindrical heat pipes. A good comparison of the present transient results for the simulated heat pipe vapor flow with the previous results of a two-dimensional numerical model is achieved and the steady state results are in agreement with the existing experimental data. The transient behavior of the vapor flow under subsonic, sonic, and supersonic speeds and high mass flow rates is successfully predicted. The one-dimensional model also describes the vapor flow dynamics in cylindrical heat pipes at high temperatures.

  8. Analysis of the transient compressible vapor flow in heat pipe

    Science.gov (United States)

    Jang, Jong Hoon; Faghri, Amir; Chang, Won Soon

    1989-01-01

    The transient compressible one-dimensional vapor flow dynamics in a heat pipe is modeled. The numerical results are obtained by using the implicit non-iterative Beam-Warming finite difference method. The model is tested for simulated heat pipe vapor flow and actual flow in cylindrical heat pipes. A good comparison of the present transient results for the simulated heat pipe vapor flow with the previous results of a two-dimensional numerical model is achieved and the steady state results are in agreement with the existing experimental data. The transient behavior of the vapor flow under subsonic, sonic, and supersonic speeds and high mass flow rates is successfully predicted. The one-dimensional model also describes the vapor flow dynamics in cylindrical heat pipes at high temperatures.

  9. Data Compression of Seismic Images by Neural Networks

    Directory of Open Access Journals (Sweden)

    Epping W. J. M.

    2006-11-01

    Neural networks with the multi-layered perceptron architecture were trained on an autoassociation task to compress 2D seismic data. Networks with linear transfer functions outperformed nonlinear neural nets with single or multiple hidden layers. This indicates that the correlational structure of the seismic data is predominantly linear. A compression factor of 5 to 7 can be achieved if a reconstruction error of 10% is allowed. The performance on new test data was similar to that achieved with the training data. The hidden units developed feature-detecting properties that resemble oriented line, edge and more complex feature detectors. The feature detectors of linear neural nets are near-orthogonal rotations of the principal eigenvectors of the Karhunen-Loève transformation.
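
    The finding that linear networks suffice has a well-known interpretation: the optimum of a linear autoencoder spans the top principal components (the Karhunen-Loève basis), so PCA performs the same compression. A sketch on synthetic, strongly correlated stand-in data, with the record's 5:1 compression factor and 10% error budget used only as reference points:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "seismic" samples with predominantly linear structure:
# a few latent factors mixed into 25-dimensional observations.
latent = rng.normal(size=(500, 4))
mixing = rng.normal(size=(4, 25))
data = latent @ mixing + 0.01 * rng.normal(size=(500, 25))

# PCA via SVD of the centered data: the top-k right singular vectors
# play the role of the linear autoencoder's hidden-unit weights.
mean = data.mean(axis=0)
centered = data - mean
_, _, vt = np.linalg.svd(centered, full_matrices=False)
k = 5                                # bottleneck width ("hidden units")
codes = centered @ vt[:k].T          # 25 -> 5: compression factor 5
recon = codes @ vt[:k] + mean        # linear reconstruction

rel_err = np.linalg.norm(data - recon) / np.linalg.norm(data)
print(f"compression factor {data.shape[1] / k:.0f}, "
      f"relative reconstruction error {100 * rel_err:.2f}%")
```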

  10. Hardware Implementation of Lossless Adaptive and Scalable Hyperspectral Data Compression for Space

    Science.gov (United States)

    Aranki, Nazeeh; Keymeulen, Didier; Bakhshi, Alireza; Klimesh, Matthew

    2009-01-01

    On-board lossless hyperspectral data compression reduces data volume in order to meet NASA and DoD limited downlink capabilities. The technique also improves signature extraction, object recognition and feature classification capabilities by providing exact reconstructed data on constrained downlink resources. At JPL a novel, adaptive and predictive technique for lossless compression of hyperspectral data was recently developed. This technique uses an adaptive filtering method and achieves a combination of low complexity and compression effectiveness that far exceeds state-of-the-art techniques currently in use. The JPL-developed 'Fast Lossless' algorithm requires no training data or other specific information about the nature of the spectral bands for a fixed instrument dynamic range. It is of low computational complexity and thus well-suited for implementation in hardware. A modified form of the algorithm that is better suited for data from pushbroom instruments is generally appropriate for flight implementation. A scalable field programmable gate array (FPGA) hardware implementation was developed. The FPGA implementation achieves a throughput performance of 58 Msamples/sec, which can be increased to over 100 Msamples/sec in a parallel implementation that uses twice the hardware resources. This paper describes the hardware implementation of the 'Modified Fast Lossless' compression algorithm on an FPGA. The FPGA implementation targets the current state-of-the-art FPGAs (Xilinx Virtex IV and V families) and compresses one sample every clock cycle to provide a fast and practical real-time solution for space applications.
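
    The abstract describes the approach only at a high level. The toy sketch below illustrates the general adaptive-predictive idea it names: a sign-LMS predictor whose state the decoder reproduces exactly, so only residuals need to be stored and no training data or side information is sent. The structure, order and step size are illustrative assumptions, not JPL's algorithm.

```python
def _predict(w, hist):
    # Integer prediction keeps encoder and decoder bit-exact.
    return round(sum(wi * hi for wi, hi in zip(w, hist)))

def _update(w, hist, e, mu):
    s = (e > 0) - (e < 0)  # sign-LMS: low-complexity, hardware-friendly
    return [wi + mu * s * hi for wi, hi in zip(w, hist)]

def encode(samples, order=3, mu=1e-4):
    """Adaptive linear prediction: emit only the integer residuals."""
    w, hist, res = [0.0] * order, [0] * order, []
    for x in samples:
        p = _predict(w, hist)
        res.append(x - p)
        w = _update(w, hist, x - p, mu)
        hist = [x] + hist[:-1]
    return res

def decode(residuals, order=3, mu=1e-4):
    """Run the identical adaptation, so reconstruction is lossless."""
    w, hist, out = [0.0] * order, [0] * order, []
    for e in residuals:
        p = _predict(w, hist)
        x = p + e
        out.append(x)
        w = _update(w, hist, e, mu)
        hist = [x] + hist[:-1]
    return out
```

    Because the decoder evolves the same predictor state from the same reconstructed samples, the round trip is exact; the residuals are then small enough to entropy-code compactly.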

  11. Envera Variable Compression Ratio Engine

    Energy Technology Data Exchange (ETDEWEB)

    Charles Mendler

    2011-03-15

    Aggressive engine downsizing, variable compression ratio and use of the Atkinson cycle are being combined to improve fuel economy by up to 40 percent relative to port fuel injected gasoline engines, while maintaining full engine power. Approach: Engine downsizing is viewed by US and foreign automobile manufacturers as one of the best options for improving fuel economy. While this strategy has already demonstrated a degree of success, downsizing and fuel economy gains are currently limited. With new variable compression ratio technology however, the degree of engine downsizing and fuel economy improvement can be greatly increased. A small variable compression ratio (VCR) engine has the potential to return significantly higher vehicle fuel economy while also providing high power. Affordability and potential for near term commercialization are key attributes of the Envera VCR engine. VCR Technology: To meet torque and power requirements, a smaller engine needs to do more work per stroke. This is typically accomplished by boosting the incoming charge with either a turbo or supercharger so that more energy is present in the cylinder per stroke to do the work. With current production engines the degree of engine boosting (which correlates to downsizing) is limited by detonation (combustion knock) at high boost levels. Additionally, the turbo or supercharger needs to be responsive and efficient while providing the needed boost. VCR technology eliminates the limitation of engine knock at high load levels by reducing the compression ratio to ~9:1 (or whatever level is appropriate) when high boost pressures are needed. By reducing the compression ratio during high load demand periods there is increased volume in the cylinder at top dead center (TDC), which allows more charge (or energy) to be present in the cylinder without increasing the peak pressure. Cylinder pressure is thus kept below the level at which the engine would begin to knock. When loads on the engine are low …
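
    The knock argument can be made concrete with the definition of compression ratio, CR = (Vd + Vc)/Vc: lowering CR enlarges the clearance volume Vc at TDC, leaving room for more boosted charge at the same peak pressure. A small sketch with an assumed per-cylinder displacement (the 12:1 and 9:1 settings are illustrative, not Envera's published geometry):

```python
def clearance_volume(displacement_cc, compression_ratio):
    # CR = (Vd + Vc) / Vc  =>  Vc = Vd / (CR - 1)
    return displacement_cc / (compression_ratio - 1)

vd = 400.0  # assumed per-cylinder displacement, cc
vc_high_cr = clearance_volume(vd, 12.0)  # efficiency setting
vc_low_cr = clearance_volume(vd, 9.0)    # knock-limited, high-boost setting

print(f"TDC volume grows from {vc_high_cr:.1f} cc to {vc_low_cr:.1f} cc "
      f"({vc_low_cr / vc_high_cr:.2f}x) when CR drops from 12:1 to 9:1")
```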

  12. Effects of polytetrafluoroethylene treatment and compression on gas diffusion layer microstructure using high-resolution X-ray computed tomography

    Science.gov (United States)

    Khajeh-Hosseini-Dalasm, Navvab; Sasabe, Takashi; Tokumasu, Takashi; Pasaogullari, Ugur

    2014-11-01

    The microstructure of a TGP-H-120 Toray paper gas diffusion layer (GDL) was investigated using a high-resolution X-ray computed tomography (CT) technique, with a resolution of 1.8 μm and a field of view (FOV) of ∼1.8 × 1.8 mm. The images obtained from the tomography scans were further post-processed, and image thresholding and binarization methodologies are presented. The validity of Otsu's thresholding method was examined. Detailed information on the bulk porosity and porosity distribution of the GDL at various polytetrafluoroethylene (PTFE) treatments and uniform/non-uniform compression pressures is provided. A sample holder was designed to investigate the effects of non-uniform compression pressure, which enabled regulating the compression pressure between 0 and 3 MPa in a gas channel/current collector rib configuration. The results show the heterogeneous and anisotropic microstructure of the GDL, the non-uniform distribution of PTFE, and significant microstructural change under uniform/non-uniform compression. These findings provide useful inputs for numerical models to include the effects of microstructural changes in the study of transport phenomena within the GDL and to increase the accuracy and predictability of cell performance.
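
    Otsu's method, whose validity the study examines, picks the histogram threshold that maximises the between-class variance, separating (here) pore from fibre voxels. A self-contained generic implementation for 8-bit grayscale, not the authors' exact pipeline:

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu's method for an integer image with values in 0..255.

    Returns the gray level that maximises the between-class variance
    sigma_b^2(t) = (mu_T * omega(t) - mu(t))^2 / (omega(t) * (1 - omega(t))).
    """
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()                 # gray-level probabilities
    omega = np.cumsum(p)                  # class-0 probability up to t
    mu = np.cumsum(p * np.arange(256))    # cumulative first moment
    mu_t = mu[-1]                         # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    sigma_b[~np.isfinite(sigma_b)] = 0.0  # mask empty classes
    return int(np.argmax(sigma_b))
```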

  13. HVS scheme for DICOM image compression: Design and comparative performance evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Prabhakar, B. [Biomedical and Engineering Division, Indian Institute of Technology Madras, Chennai 600036, Tamil Nadu (India)]. E-mail: prabhakarb@iitm.ac.in; Reddy, M. Ramasubba [Biomedical and Engineering Division, Indian Institute of Technology Madras, Chennai 600036, Tamil Nadu (India)

    2007-07-15

    Advanced digital imaging technology in the medical domain demands efficient and effective DICOM image compression for progressive image transmission and picture archival. Here a compression system, which incorporates sensitivities of the HVS coded with SPIHT quantization, is discussed. The weighting factors derived from the luminance CSF are used to transform the wavelet subband coefficients to reflect the characteristics of the HVS in the best possible manner. The Mannos et al. and Daly HVS models have been used and the results are compared. To evaluate the performance, the Eskicioglu chart metric is considered. Experiments were done on both monochrome and color DICOM images of MRI, CT, OT and CR, as well as natural and benchmark images. Images reconstructed through our technique showed improvement in visual quality and the Eskicioglu chart metric at the same compression ratios. Also, the Daly HVS model based compression shows better performance perceptually and quantitatively when compared to the Mannos et al. model. Further, the 'bior4.4' wavelet filter provides better results than the 'db9' filter for this compression system. The results give strong evidence that, under common boundary conditions, our technique achieves competitive visual quality, compression ratio and coding/decoding time when compared with JPEG2000 (Kakadu).
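
    The luminance CSF usually meant by the "Mannos" model is the Mannos-Sakrison function. The sketch below evaluates it and derives one weight per wavelet decomposition level; the subband-to-frequency mapping depends on viewing distance and display resolution, so the frequencies assigned here are placeholder assumptions, not the paper's calibration.

```python
import math

def csf_mannos(f_cpd):
    """Mannos-Sakrison luminance contrast sensitivity function.

    f_cpd is spatial frequency in cycles per degree; the curve peaks
    around 8 cpd, so mid-frequency subbands get the largest weights.
    """
    return 2.6 * (0.0192 + 0.114 * f_cpd) * math.exp(-(0.114 * f_cpd) ** 1.1)

# Assumed nominal radial frequency per wavelet level (finest level first);
# in practice these follow from viewing geometry, not fixed constants.
subband_freqs = {1: 16.0, 2: 8.0, 3: 4.0, 4: 2.0}
weights = {lvl: csf_mannos(f) for lvl, f in subband_freqs.items()}
print(weights)
```

    Multiplying each subband's coefficients by its weight before SPIHT quantization is the "transform to reflect HVS characteristics" step the abstract describes.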

  14. KungFQ: a simple and powerful approach to compress fastq files.

    Science.gov (United States)

    Grassi, Elena; Di Gregorio, Federico; Molineris, Ivan

    2012-01-01

    Nowadays storing data derived from deep sequencing experiments has become pivotal, and standard compression algorithms do not exploit their structure in a satisfying manner. A number of reference-based compression algorithms have been developed, but they are less adequate when approaching new species without fully sequenced genomes, or non-genomic data. We developed a tool that takes advantage of fastq characteristics and encodes them in a binary format optimized to be further compressed with standard tools (such as gzip or lzma). The algorithm is straightforward and does not need any external reference file; it scans the fastq only once and has a constant memory requirement. Moreover, we added the possibility to perform lossy compression, losing some of the original information (IDs and/or qualities) but resulting in smaller files; it is also possible to define a quality cutoff under which corresponding base calls are converted to N. We achieve compression ratios of 2.82 to 7.77 on various fastq files without losing information, and 5.37 to 8.77 when losing IDs, which are often not used in common analysis pipelines. In this paper, we compare the algorithm's performance with known tools, usually obtaining higher compression levels.
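
    The binary-encode-then-compress idea can be sketched in a few lines: pack ACGT calls at 2 bits per base, then hand the packed bytes to a standard compressor such as gzip. KungFQ's actual record layout, N handling and quality encoding are richer; this shows only the core idea.

```python
import gzip

BASE2BITS = {"A": 0, "C": 1, "G": 2, "T": 3}
BITS2BASE = "ACGT"

def pack(seq):
    """Pack an ACGT-only read into 2 bits per base (4 bases per byte)."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        chunk = seq[i:i + 4]
        b = 0
        for ch in chunk:
            b = (b << 2) | BASE2BITS[ch]
        b <<= 2 * (4 - len(chunk))  # left-pad the final partial byte
        out.append(b)
    return bytes(out)

def unpack(data, length):
    """Inverse of pack(); `length` trims the padding of the last byte."""
    bases = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            bases.append(BITS2BASE[(byte >> shift) & 3])
    return "".join(bases[:length])

seq = "ACGT" * 1000
packed = gzip.compress(pack(seq))
ratio = len(seq) / len(packed)
print(f"compression ratio vs. raw text: {ratio:.1f}")
```

    The packing alone gives 4:1 over one-byte-per-base text; gzip then exploits whatever redundancy remains in the packed stream, which is why the two stages compose well.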

  15. Macroscopic Expressions of Molecular Adiabatic Compressibility of Methyl and Ethyl Caprate under High Pressure and High Temperature

    Directory of Open Access Journals (Sweden)

    Fuxi Shi

    2014-01-01

    The molecular compressibility, a macroscopic quantity that reveals microcompressibility through the additivity of molecular constituents, is usually considered a fixed value for a given organic liquid. In this study, we introduced three calculated expressions of molecular adiabatic compressibility to demonstrate its pressure and temperature dependency. The first was developed from Wada's constant expression based on experimental data of density and sound velocity. Second, by introducing 2D fitting expressions and their partial derivatives with respect to pressure and temperature, the dependency of molecular compressibility was analyzed further, and a 3D fitting expression was obtained from the calculated data of the first. The third was derived by introducing pressure and temperature correction factors based on analogy to the Lennard-Jones potential function and the energy equipartition theorem. Over a wide range of temperatures (293 …), the pressure and temperature dependency of the molecular compressibility was certified.

  16. Image data compression in diagnostic imaging. International literature review and workflow recommendation

    International Nuclear Information System (INIS)

    Braunschweig, R.; Kaden, Ingmar; Schwarzer, J.; Sprengel, C.; Klose, K.

    2009-01-01

    Purpose: Today healthcare policy is based on effectiveness. Diagnostic imaging became a 'pace-setter' due to amazing technical developments (e.g. multislice CT), extensive data volumes, and especially the well-defined workflow-orientated scenarios on a local and (inter)national level. To make centralized networks efficient, image data compression has been regarded as the key to a simple and secure solution. In February 2008 specialized working groups of the DRG held a consensus conference. They designed recommended data compression techniques and ratios. Materials and methods: The purpose of our paper is an international review of the literature on compression technologies, different imaging procedures (e.g. DR, CT etc.) and targets (abdomen, etc.), and to combine recommendations for compression ratios and techniques with different workflows. The studies were assigned to 4 different levels (0-3) according to the evidence. 51 studies were assigned to the highest level 3. Results: We recommend a compression factor of 1:8 (excluding cranial scans, 1:5). For workflow reasons data compression should be based on the modalities (CT, etc.). PACS-based compression is currently possible but fails to maximize workflow benefits. Only the modality-based scenarios achieve all benefits.

  17. Image data compression in diagnostic imaging. International literature review and workflow recommendation

    Energy Technology Data Exchange (ETDEWEB)

    Braunschweig, R.; Kaden, Ingmar [Klinik fuer Bildgebende Diagnostik und Interventionsradiologie, BG-Kliniken Bergmannstrost Halle (Germany); Schwarzer, J.; Sprengel, C. [Dept. of Management Information System and Operations Research, Martin-Luther-Univ. Halle Wittenberg (Germany); Klose, K. [Medizinisches Zentrum fuer Radiologie, Philips-Univ. Marburg (Germany)

    2009-07-15

    Purpose: Today healthcare policy is based on effectiveness. Diagnostic imaging became a 'pace-setter' due to amazing technical developments (e.g. multislice CT), extensive data volumes, and especially the well-defined workflow-orientated scenarios on a local and (inter)national level. To make centralized networks efficient, image data compression has been regarded as the key to a simple and secure solution. In February 2008 specialized working groups of the DRG held a consensus conference. They designed recommended data compression techniques and ratios. Materials and methods: The purpose of our paper is an international review of the literature on compression technologies, different imaging procedures (e.g. DR, CT etc.) and targets (abdomen, etc.), and to combine recommendations for compression ratios and techniques with different workflows. The studies were assigned to 4 different levels (0-3) according to the evidence. 51 studies were assigned to the highest level 3. Results: We recommend a compression factor of 1:8 (excluding cranial scans, 1:5). For workflow reasons data compression should be based on the modalities (CT, etc.). PACS-based compression is currently possible but fails to maximize workflow benefits. Only the modality-based scenarios achieve all benefits.

  18. Relationships Between Achievement Emotions, Motivation and Language Learning Strategies of High, Mid and Low English Language Achievers

    Institute of Scientific and Technical Information of China (English)

    TAN; Jun-ming

    2017-01-01

    Overseas research has shown that achievement emotions have direct relationships with "achievement outcome" and "achievement activities". The purpose of the present study was to compare the relationships between achievement emotions, motivation, and language learning strategies of high, mid and low achievers in English language learning at an international university in a southern province in China. Quantitative data were collected through a questionnaire survey of 74 (16 males, 58 females) TESL major students. Results indicated that students in general experienced more positive than negative achievement emotions; were more intrinsically rather than extrinsically motivated to learn English; and quite frequently used a variety of learning strategies to overcome their learning difficulties. However, Year Four low-achievers experienced more negative achievement emotions. They seldom used metacognitive, affective and social learning strategies, and they had lower degrees of intrinsic motivation. Implications for institutional support for at-risk students are discussed.

  19. The effects of lossy compression on diagnostically relevant seizure information in EEG signals.

    Science.gov (United States)

    Higgins, G; McGinley, B; Faul, S; McEvoy, R P; Glavin, M; Marnane, W P; Jones, E

    2013-01-01

    This paper examines the effects of compression on EEG signals, in the context of automated detection of epileptic seizures. Specifically, it examines the use of lossy compression on EEG signals in order to reduce the amount of data which has to be transmitted or stored, while having as little impact as possible on the information in the signal relevant to diagnosing epileptic seizures. Two popular compression methods, JPEG2000 and SPIHT, were used. A range of compression levels was selected for both algorithms in order to compress the signals with varying degrees of loss. This compression was applied to the database of epileptiform data provided by the University of Freiburg, Germany. The real-time EEG analysis for event detection automated seizure detection system was used in place of a trained clinician for scoring the reconstructed data. Results demonstrate that compression by a factor of up to 120:1 can be achieved, with minimal loss in seizure detection performance as measured by the area under the receiver operating characteristic curve of the seizure detection system.

  20. Upgrade of the SLAC SLED II Pulse Compression System Based on Recent High Power Tests

    International Nuclear Information System (INIS)

    Vlieks, A.E.; Fowkes, W.R.; Loewen, R.J.; Tantawi, S.G.

    2011-01-01

    In the Next Linear Collider (NLC) it is expected that the high power rf components be able to handle peak power levels in excess of 400 MW. We present recent results of high power tests designed to investigate the RF breakdown limits of the X-band pulse compression system used at SLAC (SLED-II). Results of these tests show that both the TE01-TE10 mode converter and the 4-port hybrid have a maximum useful power limit of 220-250 MW. Based on these tests, modifications of these components have been undertaken to improve their peak field handling capability. Results of these modifications will be presented. As part of an international effort to develop a new 0.5-1.5 TeV electron-positron linear collider for the 21st century, SLAC has been working towards a design, referred to as 'The Next Linear Collider' (NLC), which will operate at 11.424 GHz and utilize 50-75 MW klystrons as rf power sources. One of the major challenges in this design, or any other design, is how to generate and efficiently transport extremely high rf power from a source to an accelerator structure. SLAC has been investigating various methods of 'pulse compressing' a relatively wide rf pulse (≥ 1 μs) from a klystron into a narrower, but more intense, pulse. Currently a SLED-II pulse compression scheme is being used at SLAC in the NLC Test Accelerator (NLCTA) and in the Accelerator Structures Test Area (ASTA) to provide high rf power for accelerator and component testing. In ASTA, a 1.05 μs pulse from a 50 MW klystron was successfully pulse compressed to 205 MW with a pulse width of 150 ns. Since operation in NLC will require generating and transporting rf power in excess of 400 MW it was decided to test the breakdown limits of the SLED-II rf components in ASTA with rf power up to the maximum available of 400 MW. This required the combining of power from two 50 MW klystrons and feeding the summed power into the SLED-II pulse compressor. Results from this experiment demonstrated that two of

  1. Lossless medical image compression using geometry-adaptive partitioning and least square-based prediction.

    Science.gov (United States)

    Song, Xiaoying; Huang, Qijun; Chang, Sheng; He, Jin; Wang, Hao

    2018-06-01

    To improve the compression rates for lossless compression of medical images, an efficient algorithm, based on irregular segmentation and region-based prediction, is proposed in this paper. Considering that the first step of a region-based compression algorithm is segmentation, this paper proposes a hybrid method combining geometry-adaptive partitioning and quadtree partitioning to achieve adaptive irregular segmentation of medical images. Then, least square (LS)-based predictors are adaptively designed for each region (regular subblock or irregular subregion). The proposed adaptive algorithm not only exploits the spatial correlation between pixels but also utilizes local structure similarity, resulting in efficient compression performance. Experimental results show that the average compression performance of the proposed algorithm is 10.48, 4.86, 3.58, and 0.10% better than that of JPEG 2000, CALIC, EDP, and JPEG-LS, respectively.
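
    The region-wise least-squares prediction step can be sketched as follows: over one region, fit predictor coefficients from causal neighbours (west, north and north-west here, an assumed context) and keep the residuals for entropy coding. This is a generic LS predictor, not the paper's exact context set or partitioning.

```python
import numpy as np

def ls_predict_region(img):
    """Least-squares prediction x[i,j] ~ a*W + b*N + c*NW over one region.

    Returns the fitted coefficients and the prediction residuals; in a
    full codec the residuals (not the pixels) would be entropy-coded.
    """
    h, w = img.shape
    rows, targets = [], []
    for i in range(1, h):
        for j in range(1, w):
            # causal neighbours: west, north, north-west
            rows.append([img[i, j - 1], img[i - 1, j], img[i - 1, j - 1]])
            targets.append(img[i, j])
    A = np.asarray(rows, dtype=float)
    y = np.asarray(targets, dtype=float)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    residuals = y - A @ coef
    return coef, residuals
```

    On smooth regions the residuals collapse towards zero, which is where the region-adaptive fit beats one fixed global predictor.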

  2. Efficient access of compressed data

    International Nuclear Information System (INIS)

    Eggers, S.J.; Shoshani, A.

    1980-06-01

    A compression technique is presented that allows a high degree of compression but requires only logarithmic access time. The technique is a constant suppression scheme, and is most applicable to stable databases whose distribution of constants is fairly clustered. Furthermore, the repeated use of the technique permits the suppression of a number of different constants. Of particular interest is the application of the constant suppression technique to databases whose composite key is made up of an incomplete cross product of several attribute domains. The scheme for compressing the full cross product composite key is well known. This paper, however, also handles the general, incomplete case by applying the constant suppression technique in conjunction with a composite key suppression scheme.
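
    The constant-suppression idea with logarithmic access time can be sketched directly: store only the exceptional (non-constant) values plus a sorted index of their positions, and answer point queries with a binary search. A minimal sketch; the paper's database-specific layout and multi-constant repetition are not reproduced.

```python
from bisect import bisect_right

class ConstantSuppressed:
    """Suppress occurrences of one dominant constant, keep O(log n) access.

    Only the exceptional values and their positions are stored; a lookup
    binary-searches the position index, so access time is logarithmic in
    the number of exceptions.
    """

    def __init__(self, values, constant):
        self.constant = constant
        self.n = len(values)
        self.positions = []   # sorted positions of exceptional values
        self.exceptions = []  # the values at those positions
        for i, v in enumerate(values):
            if v != constant:
                self.positions.append(i)
                self.exceptions.append(v)

    def __getitem__(self, i):
        if not 0 <= i < self.n:
            raise IndexError(i)
        k = bisect_right(self.positions, i) - 1  # binary search
        if k >= 0 and self.positions[k] == i:
            return self.exceptions[k]
        return self.constant
```

    Repeating the construction on the exception list with a second dominant constant mirrors the paper's repeated suppression of multiple constants.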

  3. On the electrical contact and long-term behavior of compression-type connections with conventional and high-temperature conductor ropes with low sag

    International Nuclear Information System (INIS)

    Hildmann, Christian

    2016-01-01

    In Germany and in Europe, the 'Energiewende' makes it necessary to transmit more electrical energy over existing overhead transmission lines. One possible technical solution to reach this aim is the use of high-temperature low-sag (HTLS) conductors. Compared to the common Aluminium Conductor Steel Reinforced (ACSR), HTLS conductors have higher rated currents and rated temperatures, so the electrical connections for HTLS conductors are stressed to higher temperatures too. These components are among the most important for the safe and reliable operation of an overhead transmission line. Besides other connection technologies, hexagonal compression connections with ordinary transmission line conductors have proven themselves for decades. The literature mostly reports empirical studies with electrical tests of compression connections. The electrical contact behaviour of these connections, i.e. the quality of the electrical contact after assembly, has been investigated insufficiently. This work presents and enhances an electrical model of compression connections, so that the electrical contact behaviour can be determined more accurately. Based on this, principal considerations on the current distribution in the compression connection and its influence on the connection resistance are presented. As a result of the theoretical and experimental work, recommendations for the design of hexagonal compression connections for transmission line conductors were developed. Furthermore, it is known from the functional principle of compression-type connections that the electrical contact behaviour can be influenced by their form fit, force fit and cold welding. In particular, the forces in compression connections have until now been calculated only by approximation. The known analytical calculations simplify the geometry and material behaviour and do not consider the correct mechanical load during assembly. For these reasons the joining process

  4. Influence of Eco-Friendly Mineral Additives on Early Age Compressive Strength and Temperature Development of High-Performance Concrete

    Science.gov (United States)

    Kaszynska, Maria; Skibicki, Szymon

    2017-12-01

    High-performance concrete (HPC), which contains increased amounts of both higher-grade cement and pozzolanic additives, generates more hydration heat than ordinary concrete. Prolonged periods of elevated temperature influence the rate of the hydration process, thereby affecting the development of early-age strength and subsequent mechanical properties. The purpose of the presented research is to determine the relationship between the kinetics of heat generation and the compressive strength of early-age high-performance concrete. All mixes were based on Portland Cement CEM I 52.5, with 7.5% to 15% of the cement mass replaced by silica fume or metakaolin. Two water/binder ratios characteristic of HPC, w/b = 0.2 and w/b = 0.3, were chosen. A superplasticizer was used to maintain a 20-50 mm slump. Compressive strength was determined at 8 h, 24 h, and 3, 7 and 28 days on 10x10x10 cm specimens cured in a calorimeter at a constant temperature of T = 20°C. The temperature inside the concrete was monitored continuously for 7 days. The study determined that the early-age strength (t < 24 h) of concrete with reactive mineral additives is lower than that of concrete without them. This is clearly visible for concretes with metakaolin, which had the lowest compressive strength in the early stages of hardening. The amount of superplasticizer significantly influenced the early-age compressive strength of the concrete. Concretes with additives reached the maximum temperature later than concretes without them.

  5. Sustained compression and healing of chronic venous ulcers.

    Science.gov (United States)

    Blair, S. D.; Wright, D. D.; Backhouse, C. M.; Riddle, E.; McCollum, C. N.

    1988-01-01

    STUDY OBJECTIVE--Comparison of four layer bandage system with traditional adhesive plaster bandaging in terms of (a) compression achieved and (b) healing of venous ulcers. DESIGN--Part of larger randomised trial of five different dressings. SETTING--Outpatient venous ulcer clinic in university hospital. PATIENTS--(a) Pressure exerted by both bandage systems was measured in the same 20 patients. (b) Healing with the four layer bandage was assessed in 148 legs in 126 consecutive patients (mean age 71 (SE 2); range 30-96) with chronic venous ulcers that had resisted treatment with traditional bandaging for a mean of 27.2 (SE 8) months. INTERVENTIONS--(a) Four layer bandage system or traditional adhesive plaster bandaging for pressure studies; (b) four layer bandaging applied weekly for studies of healing. END POINTS--(a) Comparison of pressures achieved at the ankle for up to one week; (b) complete healing within 12 weeks. MEASUREMENTS AND MAIN RESULTS--(a) Four layer bandage produced higher initial pressures at the ankle of 42.5 (SE 1) mm Hg compared with 29.8 (1.8) for the adhesive plaster (p less than 0.001; 95% confidence interval 18.5 to 6.9). Pressure was maintained for one week with the four layer bandage but fell to 10.4 (3.5) mm Hg at 24 hours with adhesive plaster bandaging. (b) After weekly bandaging with the four layer bandage 110 of 148 venous ulcers had healed completely within 12 (mean 6.3 (0.4)) weeks. CONCLUSION--Sustained compression of over 40 mm Hg achieved with a multilayer bandage results in rapid healing of chronic venous ulcers that have failed to heal in many months of compression at lower pressures with more conventional bandages. PMID:3144330

  6. The humeral origin of the brachioradialis muscle: an unusual site of high radial nerve compression.

    Science.gov (United States)

    Cherchel, A; Zirak, C; De Mey, A

    2013-11-01

    Radial nerve compression is seldom encountered in the upper arm, and most commonly described compression syndromes have their anatomical cause in the forearm. The teres major, the triceps muscle, the intermuscular septum region and the space between the brachialis and brachioradialis muscles have all been identified as radial nerve compression sites above the elbow. We describe the case of a 38-year-old male patient who presented with dorso-lateral forearm pain and paraesthesias without neurological deficit. Surgical exploration revealed radial nerve compression at the humeral origin of the brachioradialis muscle. Liberation of the nerve at this site was successful at relieving the symptoms. To our knowledge, this compression site has not been described in the literature. Copyright © 2013 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.

  7. Compression-based aggregation model for medical web services.

    Science.gov (United States)

    Al-Shammary, Dhiah; Khalil, Ibrahim

    2010-01-01

    Many organizations, such as hospitals, have adopted Cloud Web services for their network services to avoid investing heavily in computing infrastructure. SOAP (Simple Object Access Protocol), an XML-based protocol, is the basic communication protocol of Cloud Web services. Web services often suffer congestion and bottlenecks as a result of the high network traffic caused by the large XML overhead. At the same time, the massive load on Cloud Web services, in terms of the large volume of client requests, results in the same problem. In this paper, two XML-aware aggregation techniques based on compression concepts are proposed in order to aggregate medical Web messages and achieve higher message size reduction.
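The paper's specific aggregation algorithm is not detailed in this abstract, but the core idea can be illustrated: structurally similar SOAP/XML messages share most of their tag overhead, so bundling them into one payload before applying a generic compressor (zlib here; the function name and message layout are hypothetical) reduces size far more than compressing each message individually. A minimal sketch:

```python
import zlib

def aggregate_and_compress(messages):
    """Bundle similar SOAP/XML messages into one payload and compress it.

    The redundant tag structure shared across messages lets a generic
    compressor achieve a much higher reduction than compressing each
    message on its own.
    """
    payload = "\n".join(messages).encode("utf-8")
    return zlib.compress(payload, 9)

msgs = [
    '<soap:Envelope><soap:Body><GetRecord id="%d"/></soap:Body></soap:Envelope>' % i
    for i in range(100)
]
bundle = aggregate_and_compress(msgs)
individual = sum(len(zlib.compress(m.encode("utf-8"), 9)) for m in msgs)
print(len(bundle), individual)  # aggregated size vs. sum of per-message sizes
```

Decompressing the bundle and splitting on the separator recovers the original messages, so the aggregation step itself is lossless.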

  8. Dual compression is not an uncommon type of iliac vein compression syndrome.

    Science.gov (United States)

    Shi, Wan-Yin; Gu, Jian-Ping; Liu, Chang-Jian; Lou, Wen-Sheng; He, Xu

    2017-09-01

    Typical iliac vein compression syndrome (IVCS) is characterized by compression of the left common iliac vein (LCIV) by the overlying right common iliac artery (RCIA). We describe an underestimated type of IVCS with dual compression by the right and left common iliac arteries (LCIA) simultaneously. Thirty-one patients with IVCS were retrospectively included. All patients received trans-catheter venography and computed tomography (CT) examinations for diagnosing and evaluating IVCS. Late venography and reconstructed CT were used for evaluating the anatomical relationship among the LCIV, RCIA and LCIA. Imaging manifestations as well as demographic data were collected and evaluated by two experienced radiologists. Sole and dual compression were found in 32.3% (n = 10) and 67.7% (n = 21) of the 31 patients, respectively. No statistical differences existed between them in terms of age, gender, LCIV diameter at the maximum compression point, pressure gradient across the stenosis, or percentage of compression. On CT and venography, sole compression commonly presented as a longitudinal compression at the orifice of the LCIV, while dual compression usually presented as one of two types: a lengthy stenosis along the upper side of the LCIV, or a longitudinal compression near the orifice of the external iliac vein. The presence of dual compression was significantly correlated with a tortuous LCIA (p = 0.006). The left common iliac vein can thus present with dual compression, which has typical manifestations on late venography and CT.

  9. Compression and contact area of anterior strut grafts in spinal instrumentation: a biomechanical study.

    Science.gov (United States)

    Pizanis, Antonius; Holstein, Jörg H; Vossen, Felix; Burkhardt, Markus; Pohlemann, Tim

    2013-08-26

    Anterior bone grafts are used as struts to reconstruct the anterior column of the spine in kyphosis or following injury. An incomplete fusion can lead to later correction losses and compromise further healing. Despite the different stabilizing techniques that have evolved, from posterior or anterior fixating implants to combined anterior/posterior instrumentation, graft pseudarthrosis rates remain an important concern. Furthermore, the need for additional anterior implant fixation is still controversial. In this bench-top study, we focused on the graft-bone interface under various conditions, using two simulated spinal injury models and common surgical fixation techniques to investigate the effect of implant-mediated compression and contact on the anterior graft. Calf spines were stabilised with posterior internal fixators. Wooden blocks serving as substitutes for strut grafts were impacted using a "press-fit" technique, and pressure-sensitive films were placed at the interface between the vertebral bone and the graft to record the compression force and the contact area with the various stabilization techniques. Compression was achieved either with the posterior internal fixator alone or with an additional anterior implant. The importance of concomitant ligament damage was also considered using two simulated injury models: a pure-compression Magerl/AO fracture type A model and a rotation/translation fracture type C model. In type A injury models, 1 mm-oversized grafts for impaction grafting provided good compression and fair contact areas, both of which were markedly increased by the use of additional compressing anterior rods or by shortening the posterior fixator construct. Anterior instrumentation by itself had similar effects. For type C injuries, dramatic differences were observed between the techniques, as there was a net decrease in compression and inadequate contact on the graft in this model.
    Under these circumstances, both compression and the contact area on the graft could only

  10. Emittance control and RF bunch compression in the NSRRC photoinjector

    International Nuclear Information System (INIS)

    Lau, W.K.; Hung, S.B.; Lee, A.P.; Chou, C.S.; Huang, N.Y.

    2011-01-01

    The high-brightness photoinjector being constructed at the National Synchrotron Radiation Research Center is for testing new accelerator and light-source concepts. It uses the so-called split photoinjector configuration, in which a short solenoid magnet is used for emittance compensation. The UV drive-laser pulses are also shaped to produce uniform cylindrical bunches for further reduction of beam emittance. However, limited by the available power from our microwave power system, the nominal accelerating gradient in the S-band booster linac is set at 18 MV/m. A simulation study with PARMELA shows that the linac operating at this gradient fails to freeze the electron beam emittance at a low value. A background solenoid magnetic field is therefore applied for beam emittance control in the linac during acceleration. A satisfactory result that meets our preliminary goal has been achieved with a solenoid magnetic field strength of 0.1 T. RF bunch compression as a means to achieve the required beam brightness for high-gain free-electron laser experiments is also examined. A reduction of the bunch length to a few hundred femtoseconds can be obtained.

  11. MAP-MRF-Based Super-Resolution Reconstruction Approach for Coded Aperture Compressive Temporal Imaging

    Directory of Open Access Journals (Sweden)

    Tinghua Zhang

    2018-02-01

    Full Text Available Coded Aperture Compressive Temporal Imaging (CACTI) can afford low-cost temporal super-resolution (SR), but reconstruction quality is limited by noise and the compression ratio. To utilize inter-frame redundant information from multiple observations and sparsity in multi-transform domains, a robust reconstruction approach based on a maximum a posteriori probability and Markov random field (MAP-MRF) model for CACTI is proposed. The proposed approach adopts a weighted 3D neighbor system (WNS) and the coordinate descent method to perform joint estimation of model parameters and achieve robust super-resolution reconstruction. The proposed multi-reconstruction algorithm considers both total variation (TV) and the ℓ2,1 norm in the wavelet domain to address the minimization problem for compressive sensing, and solves it using an accelerated generalized alternating projection algorithm. The weighting coefficients for different regularizations and frames are resolved from the motion characteristics of pixels. The proposed approach can provide high visual quality in the foreground and background of a scene simultaneously and enhance the fidelity of the reconstruction results. Simulation results have verified the efficacy of the new optimization framework and the proposed reconstruction approach.
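The full MAP-MRF solver is beyond a short example, but the two regularizers named in the abstract can be stated concretely. The sketch below assumes anisotropic TV on a 2D image and an ℓ2,1 norm taken over the rows of a coefficient matrix; the exact groupings used in the paper may differ:

```python
import numpy as np

def total_variation(img):
    """Anisotropic TV: sum of absolute horizontal and vertical differences."""
    dx = np.abs(np.diff(img, axis=1)).sum()
    dy = np.abs(np.diff(img, axis=0)).sum()
    return dx + dy

def l21_norm(coeffs):
    """ℓ2,1 norm: the ℓ2 norm of each row, summed, which promotes row-sparsity."""
    return np.linalg.norm(coeffs, axis=1).sum()

img = np.array([[0.0, 1.0], [1.0, 1.0]])
print(total_variation(img))          # |1-0| + |1-1| + |1-0| + |1-1| = 2.0
C = np.array([[3.0, 4.0], [0.0, 0.0]])
print(l21_norm(C))                   # 5.0 + 0.0 = 5.0
```

Minimizing a weighted sum of such terms subject to the coded-aperture measurement constraint is what the accelerated alternating projection algorithm in the paper carries out.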

  12. Environmentally friendly drive for gas compression applications: enhanced design of high-speed induction motors

    Energy Technology Data Exchange (ETDEWEB)

    Rodrigues, Karina Velloso; Pradurat, Jean Francois; Mercier, Jean Charles [Institut National Polytechncique, Lorrain (France). Converteam Motors Div.; Truchot, Patrick [Nancy Universite (France). Equipe de Recherche sur les Processus Innovatifs (ERPI)

    2008-07-01

    Taking into account the key issues faced by gas compressor users, this paper aims to help optimize the choice of both the drive equipment and the driven equipment as a function of the cost of the whole installation life cycle. The design of the enhanced high-speed induction motor (MGV, Moteur à Grande Vitesse) represents a technological breakthrough for the industry: it allows direct coupling to the compressor without a gearbox, making the system more efficient and reliable. From both micro- and macro-economic viewpoints, the high-speed electric drive makes more efficient use of natural gas energy resources. This new technology, associated with the electric option, offers challenging and rewarding work to those responsible for the operation and maintenance of the compressor station. The electric option is not only conceptually viable but has a proven track record that justifies serious consideration as a reliable powering alternative. Once an operator becomes comfortable with the prospects of motor-driven compression, the analysis of machine options requires only a few new approaches to fairly evaluate the alternatives. The application of this reasoning in projects using compression units is especially opportune in view of the great variations in operational conditions and environmental issues. (author)

  13. Boiler: lossy compression of RNA-seq alignments using coverage vectors.

    Science.gov (United States)

    Pritt, Jacob; Langmead, Ben

    2016-09-19

    We describe Boiler, a new software tool for compressing and querying large collections of RNA-seq alignments. Boiler discards most per-read data, keeping only a genomic coverage vector plus a few empirical distributions summarizing the alignments. Since most per-read data is discarded, the storage footprint is often much smaller than that achieved by other compression tools. Despite this, the most relevant per-read data can be recovered; we show that Boiler compression has only a slight negative impact on results given by downstream tools for isoform assembly and quantification. Boiler also allows the user to pose fast and useful queries without decompressing the entire file. Boiler is free open source software available from github.com/jpritt/boiler. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
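Boiler's actual file format is not specified in this abstract, but the central idea, discarding per-read data in favor of a coverage vector, can be sketched. Coverage vectors are piecewise constant, so even simple run-length encoding stores them compactly (the function names below are illustrative, not Boiler's API):

```python
def reads_to_coverage(read_intervals, genome_len):
    """Collapse per-read alignments into a coverage vector (per-read data is lost)."""
    cov = [0] * genome_len
    for start, end in read_intervals:      # half-open interval [start, end)
        for i in range(start, end):
            cov[i] += 1
    return cov

def run_length_encode(cov):
    """Coverage is piecewise constant, so RLE yields a compact representation."""
    runs = []
    for v in cov:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return runs

reads = [(0, 5), (2, 7), (2, 7)]
cov = reads_to_coverage(reads, 10)
print(cov)                      # [1, 1, 3, 3, 3, 2, 2, 0, 0, 0]
print(run_length_encode(cov))   # [[1, 2], [3, 3], [2, 2], [0, 3]]
```

Note that the individual read intervals cannot be recovered from `cov`, which is exactly the lossy trade-off the tool makes in exchange for a small footprint.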

  14. In situ oxide dispersion strengthened tungsten alloys with high compressive strength and high strain-to-failure

    International Nuclear Information System (INIS)

    Huang, Lin; Jiang, Lin; Topping, Troy D.; Dai, Chen; Wang, Xin; Carpenter, Ryan; Haines, Christopher; Schoenung, Julie M.

    2017-01-01

    In this work a novel process methodology to concurrently improve the compressive strength (2078 MPa at a strain rate of 5 × 10⁻⁴ s⁻¹) and strain-to-failure (over 40%) of bulk tungsten materials has been described. The process involves the in situ formation of intragranular tungsten oxide nanoparticles, facilitated by the application of a pressure of 1 GPa at a low sintering temperature of 1200 °C during spark plasma sintering (SPS). The results show that the application of a high pressure of 1 GPa during SPS significantly accelerates the densification process. Concurrently, the second-phase oxide nanoparticles, with an average size of 108 nm and distributed within the interiors of the W grains, simultaneously provide strengthening and plasticity by inhibiting grain growth and by generating, blocking, and storing dislocations.

  15. Compressible turbulent flows: aspects of prediction and analysis

    Energy Technology Data Exchange (ETDEWEB)

    Friedrich, R. [TU Muenchen, Garching (Germany). Fachgebiet Stroemungsmechanik

    2007-03-15

    Compressible turbulent flows are an important element of high-speed flight. Boundary layers developing along the fuselage and wings of an aircraft and along engine compressor and turbine blades are compressible and mostly turbulent. The high-speed flow around rockets and through rocket nozzles involves compressible turbulence and flow separation. Turbulent mixing and combustion in scramjet engines is another example where compressibility dominates the flow physics. Although compressible turbulent flows have attracted researchers since the 1950s, they are not completely understood. Especially interactions between compressible turbulence and combustion lead to challenging, yet unsolved problems. Direct numerical simulation (DNS) and large-eddy simulation (LES) represent modern, powerful research tools which make it possible to mimic such flows in great detail and to analyze the underlying physical mechanisms, even those which cannot be accessed by experiment. The present lecture provides a short description of these tools and some of their numerical characteristics. It then describes DNS and LES results of fully developed channel and pipe flow and highlights effects of compressibility on the turbulence structure. The analysis of pressure fluctuations in such flows with isothermal cooled walls leads to the conclusion that the pressure-strain correlation tensor decreases in the wall layer and that the turbulence anisotropy increases, since the mean density falls off relative to the incompressible case. Similar increases in turbulence anisotropy due to compressibility are observed in inert and reacting temporal mixing layers. The nature of the pressure fluctuations is, however, two-faceted: while inert compressible mixing layers reveal wave-propagation effects in the pressure and density fluctuations, compressible reacting mixing layers seem to generate pressure fluctuations that are controlled by the time rate of change of heat release and mean density.

  16. Oil-free centrifugal hydrogen compression technology demonstration

    Energy Technology Data Exchange (ETDEWEB)

    Heshmat, Hooshang [Mohawk Innovative Technology Inc., Albany, NY (United States)

    2014-05-31

    One of the key elements in realizing a mature market for hydrogen vehicles is the deployment of a safe and efficient hydrogen production and delivery infrastructure on a scale that can compete economically with current fuels. The challenge, however, is that hydrogen, being the lightest and smallest of gases with a lower viscosity and density than natural gas, readily migrates through small spaces and is difficult to compress efficiently. While efficient and cost-effective compression technology is crucial to effective pipeline delivery of hydrogen, current compression methods rely on oil-lubricated positive displacement (PD) machines. PD compression technology is very costly and has poor reliability and durability, especially for components subjected to wear (e.g., valves, rider bands and piston rings), and it contaminates hydrogen with lubricating fluid. Even so-called "oil-free" machines use oil lubricants that migrate into and contaminate the gas path. Due to the poor reliability of PD compressors, current hydrogen producers often install duplicate units in order to maintain on-line times of 98-99%. Such machine redundancy adds substantially to system capital costs. As such, DOE deemed that low-capital-cost, reliable, efficient and oil-free advanced compressor technologies are needed. MiTi's solution is a completely oil-free, multi-stage, high-speed, centrifugal compressor designed for a flow capacity of 500,000 kg/day with a discharge pressure of 1200 psig. The design employs oil-free compliant foil bearings and seals to allow very high operating speeds, totally contamination-free operation, and long life and reliability. This design meets the DOE's performance targets and achieves an extremely aggressive specific power metric of 0.48 kW-hr/kg, providing significant improvements in reliability/durability, energy efficiency, sealing and freedom from contamination. The multi-stage compressor system concept has been validated through full scale
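As a plausibility check on the quoted figures (simple arithmetic on the numbers in the abstract, not additional data from the report), the specific power and flow capacity together imply an average electrical demand of about 10 MW for continuous operation:

```python
# Cross-check of the quoted compressor metrics (values taken from the abstract).
flow_kg_per_day = 500_000            # design flow capacity
specific_power_kwh_per_kg = 0.48     # quoted specific power metric

energy_kwh_per_day = flow_kg_per_day * specific_power_kwh_per_kg  # 240,000 kWh/day
avg_power_mw = energy_kwh_per_day / 24 / 1000                     # continuous operation
print(avg_power_mw)  # 10.0 MW average demand at full design flow
```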

  17. Variable Frame Rate and Length Analysis for Data Compression in Distributed Speech Recognition

    DEFF Research Database (Denmark)

    Kraljevski, Ivan; Tan, Zheng-Hua

    2014-01-01

    This paper addresses the issue of data compression in distributed speech recognition on the basis of a variable frame rate and length analysis method. The method first conducts frame selection by using a posteriori signal-to-noise ratio weighted energy distance to find the right time resolution … length for steady regions. The method is applied to scalable source coding in distributed speech recognition where the target bitrate is met by adjusting the frame rate. Speech recognition results show that the proposed approach outperforms other compression methods in terms of recognition accuracy … for noisy speech while achieving higher compression rates.
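The exact a posteriori SNR weighting is not given in this excerpt, so the sketch below only illustrates the general variable-frame-rate idea: a frame is emitted when the weighted energy distance accumulated since the last kept frame crosses a threshold, so rapidly changing speech receives a higher frame rate. The function name, scalar "energy" features, and unit weights are hypothetical simplifications:

```python
import numpy as np

def select_frames(frames, snr_weights, threshold=1.0):
    """Variable-frame-rate selection (illustrative): keep a frame only when
    the SNR-weighted energy distance accumulated since the last kept frame
    exceeds a threshold, giving fast-changing regions a higher frame rate."""
    kept = [0]
    acc = 0.0
    for i in range(1, len(frames)):
        dist = abs(frames[i] - frames[kept[-1]])   # distance to last kept frame
        acc += snr_weights[i] * dist
        if acc >= threshold:
            kept.append(i)
            acc = 0.0
    return kept

energies = np.array([1.0, 1.02, 1.05, 3.0, 3.1, 3.0, 1.0])
weights = np.ones_like(energies)                   # uniform SNR weighting
print(select_frames(energies, weights))            # -> [0, 3, 6]
```

Raising the threshold lowers the frame rate (and hence the bitrate), which is the knob the paper uses for scalable source coding.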

  18. Experimental validation of finite element analysis of human vertebral collapse under large compressive strains.

    Science.gov (United States)

    Hosseini, Hadi S; Clouthier, Allison L; Zysset, Philippe K

    2014-04-01

    Osteoporosis-related vertebral fractures represent a major health problem in elderly populations. Such fractures can often only be diagnosed after a substantial deformation history of the vertebral body. Therefore, it remains a challenge for clinicians to distinguish between stable and progressive, potentially harmful fractures. Accordingly, novel criteria for selection of the appropriate conservative or surgical treatment are urgently needed. Computed tomography-based finite element analysis is an increasingly accepted method to predict quasi-static vertebral strength and to follow up this small-strain property longitudinally in time. A recent development in constitutive modeling allows us to simulate strain localization and densification in trabecular bone under large compressive strains without mesh dependence. The aim of this work was to validate this recently developed constitutive model of trabecular bone for the prediction of strain localization and densification in the human vertebral body subjected to large compressive deformation. A custom-made stepwise loading device mounted in a high-resolution peripheral computed tomography system was used to describe the progressive collapse of 13 human vertebrae under axial compression. Continuum finite element analyses of the 13 compression tests were realized and the zones of high volumetric strain were compared with the experiments. A fair qualitative correspondence of the strain localization zone between the experiment and finite element analysis was achieved in 9 out of 13 tests, and significant correlations of the volumetric strains were obtained throughout the range of applied axial compression. Interestingly, the stepwise propagating localization zones in trabecular bone converged to the buckling locations in the cortical shell. While the adopted continuum finite element approach still suffers from several limitations, these encouraging preliminary results towards the prediction of extended vertebral

  19. Effect of Compression Garments on Physiological Responses After Uphill Running.

    Science.gov (United States)

    Struhár, Ivan; Kumstát, Michal; Králová, Dagmar Moc

    2018-03-01

    Limited practical recommendations related to wearing compression garments for athletes can be drawn from the literature at the present time. We aimed to identify the effects of compression garments on physiological and perceptual measures of performance and recovery after uphill running with different pressures and distributions of applied compression. In a random, double-blinded study, 10 trained male runners undertook three 8 km treadmill runs at a 6% elevation rate, at an intensity of 75% VO2max, while wearing low grade compression garments, medium grade compression garments, or high reverse grade compression garments. In all the trials, compression garments were worn during the 4 hours post run. Creatine kinase, measurements of muscle soreness, ankle strength of plantar/dorsal flexors and mean performance time were then measured. The best mean performance time was observed in the medium grade compression garments, with the time difference being: medium grade compression garments vs. high reverse grade compression garments. A positive trend in increasing peak torque of plantar flexion (60°·s⁻¹, 120°·s⁻¹) was found in the medium grade compression garments: a difference between 24 and 48 hours post run. The highest pain tolerance shift in the gastrocnemius muscle was in the medium grade compression garments, 24 hours post run, with the shift being +11.37% for the lateral head and 6.63% for the medial head. In conclusion, a beneficial trend in the promotion of running performance and decreasing muscle soreness within 24 hours post exercise was apparent in medium grade compression garments.

  20. Effect of Compression Garments on Physiological Responses After Uphill Running

    Directory of Open Access Journals (Sweden)

    Struhár Ivan

    2018-03-01

    Full Text Available Limited practical recommendations related to wearing compression garments for athletes can be drawn from the literature at the present time. We aimed to identify the effects of compression garments on physiological and perceptual measures of performance and recovery after uphill running with different pressure and distributions of applied compression. In a random, double blinded study, 10 trained male runners undertook three 8 km treadmill runs at a 6% elevation rate, with the intensity of 75% VO2max while wearing low, medium grade compression garments and high reverse grade compression. In all the trials, compression garments were worn during 4 hours post run. Creatine kinase, measurements of muscle soreness, ankle strength of plantar/dorsal flexors and mean performance time were then measured. The best mean performance time was observed in the medium grade compression garments with the time difference being: medium grade compression garments vs. high reverse grade compression garments. A positive trend in increasing peak torque of plantar flexion (60°·s⁻¹, 120°·s⁻¹) was found in the medium grade compression garments: a difference between 24 and 48 hours post run. The highest pain tolerance shift in the gastrocnemius muscle was the medium grade compression garments, 24 hour post run, with the shift being +11.37% for the lateral head and 6.63% for the medial head. In conclusion, a beneficial trend in the promotion of running performance and decreasing muscle soreness within 24 hour post exercise was apparent in medium grade compression garments.

  1. Quality Aware Compression of Electrocardiogram Using Principal Component Analysis.

    Science.gov (United States)

    Gupta, Rajarshi

    2016-05-01

    Electrocardiogram (ECG) compression finds wide application in various patient monitoring purposes. Quality control in ECG compression ensures reconstruction quality and its clinical acceptance for diagnostic decision making. In this paper, a quality aware compression method of single lead ECG is described using principal component analysis (PCA). After pre-processing, beat extraction and PCA decomposition, two independent quality criteria, namely, bit rate control (BRC) or error control (EC) criteria were set to select optimal principal components, eigenvectors and their quantization level to achieve desired bit rate or error measure. The selected principal components and eigenvectors were finally compressed using a modified delta and Huffman encoder. The algorithms were validated with 32 sets of MIT Arrhythmia data and 60 normal and 30 sets of diagnostic ECG data from PTB Diagnostic ECG data ptbdb, all at 1 kHz sampling. For BRC with a CR threshold of 40, an average Compression Ratio (CR), percentage root mean squared difference normalized (PRDN) and maximum absolute error (MAE) of 50.74, 16.22 and 0.243 mV respectively were obtained. For EC with an upper limit of 5 % PRDN and 0.1 mV MAE, the average CR, PRDN and MAE of 9.48, 4.13 and 0.049 mV respectively were obtained. For mitdb data 117, the reconstruction quality could be preserved up to CR of 68.96 by extending the BRC threshold. The proposed method yields better results than recently published works on quality controlled ECG compression.
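The quantization and Huffman stages are omitted here, but the component-selection step under the error-control (EC) criterion can be sketched: decompose a matrix of aligned beats with PCA (computed via the SVD) and keep the fewest components whose reconstruction satisfies a PRDN limit. Function and variable names are illustrative, not from the paper:

```python
import numpy as np

def pca_compress(beats, prdn_limit=5.0):
    """Keep the fewest principal components whose reconstruction meets a
    PRDN (percentage root mean squared difference, normalized) limit.
    Illustrative only: the published method also quantizes and
    entropy-codes the retained components and eigenvectors."""
    mean = beats.mean(axis=0)
    X = beats - mean
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    for k in range(1, len(S) + 1):
        recon = U[:, :k] * S[:k] @ Vt[:k] + mean
        prdn = 100 * np.linalg.norm(beats - recon) / np.linalg.norm(beats - beats.mean())
        if prdn <= prdn_limit:
            return k, recon
    return len(S), beats

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200)
beats = np.array([np.sin(2 * np.pi * t) + 0.01 * rng.standard_normal(200)
                  for _ in range(30)])        # 30 nearly identical synthetic beats
k, recon = pca_compress(beats)
print(k)   # a handful of components suffice for highly correlated beats
```

Because consecutive beats are highly correlated, very few components meet the error target, which is where the large compression ratios come from.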

  2. Data compression with applications to digital radiology

    International Nuclear Information System (INIS)

    Elnahas, S.E.

    1985-01-01

    The structure of arithmetic codes is defined in terms of source parsing trees. Theoretical derivations of algorithms for the construction of optimal and sub-optimal structures are presented. Software simulation results demonstrate how arithmetic coding outperforms variable-length to variable-length coding. Linear predictive coding is presented for the compression of digital diagnostic images from several imaging modalities, including computed tomography, nuclear medicine, ultrasound, and magnetic resonance imaging. The problem of designing optimal predictors is formulated and alternative solutions are discussed. The results indicate that noiseless compression factors between 1.7 and 7.4 can be achieved. With nonlinear predictive coding, noisy and noiseless compression techniques are combined in a novel way that may have a potential impact on picture archiving and communication systems in radiology. Adaptive fast discrete cosine transform coding systems are used as nonlinear block predictors, and optimal delta modulation systems are used as nonlinear sequential predictors. The off-line storage requirements for archiving diagnostic images are reasonably reduced by the nonlinear block predictive coding. The on-line performance, however, seems to be bounded by that of the linear systems. The subjective quality of imperfect image reproductions from the cosine transform coding is promising and prompts future research on the compression of diagnostic images by transform coding systems and the clinical evaluation of these systems.
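As a minimal illustration of noiseless predictive coding (using a simple previous-sample predictor rather than the optimal predictors designed in the thesis), the encoder stores small residuals that a subsequent entropy coder can compress well, and decoding is exact:

```python
import numpy as np

def predict_encode(row):
    """Previous-sample linear predictor: residuals are small for smooth data,
    so an entropy coder applied afterwards achieves the actual compression."""
    residuals = np.empty_like(row)
    residuals[0] = row[0]                  # first sample is sent as-is
    residuals[1:] = row[1:] - row[:-1]     # prediction errors
    return residuals

def predict_decode(residuals):
    return np.cumsum(residuals)            # exact inverse: noiseless coding

row = np.array([100, 102, 103, 103, 101, 99], dtype=np.int64)
res = predict_encode(row)
print(res)                                 # [100, 2, 1, 0, -2, -2]
print(np.array_equal(predict_decode(res), row))   # True
```

The residual distribution is sharply peaked around zero for smooth image rows, which is exactly what makes the quoted noiseless compression factors attainable once entropy coding is applied.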

  3. Survived ileocecal blowout from compressed air.

    Science.gov (United States)

    Weber, Marco; Kolbus, Frank; Dressler, Jan; Lessig, Rüdiger

    2011-03-01

    Industrial accidents in which compressed air enters the gastro-intestinal tract are often fatal. The pressures usually far exceed those used in medical applications such as colonoscopy and lead to extensive injuries of the intestines with high mortality. The case described in this report is of a 26-year-old man who was injured by compressed air that entered through the anus. He survived because of a prompt emergency operation. This case underlines the necessity of explicit instruction on the hazards of handling compressed-air devices to maintain safety at work. Further, our observations support the hypothesis that the mucosa is the most elastic layer of the intestinal wall.

  4. Modeling DPOAE input/output function compression: comparisons with hearing thresholds.

    Science.gov (United States)

    Bhagat, Shaum P

    2014-09-01

    Basilar membrane input/output (I/O) functions in mammalian animal models are characterized by linear and compressed segments when measured near the location corresponding to the characteristic frequency. A method of studying basilar membrane compression indirectly in humans involves measuring distortion-product otoacoustic emission (DPOAE) I/O functions. Previous research has linked compression estimates from behavioral growth-of-masking functions to hearing thresholds. The aim of this study was to compare compression estimates from DPOAE I/O functions and hearing thresholds at 1 and 2 kHz. A prospective correlational research design was used, and the relationship between DPOAE I/O function compression estimates and hearing thresholds was evaluated with Pearson product-moment correlations. Normal-hearing adults (n = 16) aged 22-42 yr were recruited. DPOAE I/O functions (L₂ = 45-70 dB SPL) and two-interval forced-choice hearing thresholds were measured in normal-hearing adults. A three-segment linear regression model applied to DPOAE I/O functions supplied estimates of compression thresholds, defined as breakpoints between linear and compressed segments, and the slopes of the compressed segments. A high correlation between DPOAE compression thresholds and hearing thresholds was observed at 2 kHz, but not at 1 kHz. Compression slopes also correlated highly with hearing thresholds only at 2 kHz. The derivation of cochlear compression estimates from DPOAE I/O functions provides a means to characterize basilar membrane mechanics in humans and elucidates the role of compression in tone detection in the 1-2 kHz frequency range. American Academy of Audiology.
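A single-breakpoint simplification of the segmented regression described above can be sketched as a grid search over candidate breakpoints, fitting a steep (linear) segment below and a shallow (compressed) segment above; the synthetic levels and slopes below are illustrative, not the study's data:

```python
import numpy as np

def fit_breakpoint(L2, dpoae):
    """Grid-search a single breakpoint separating a steep segment from a
    shallow (compressed) segment; returns (breakpoint, slope_linear,
    slope_compressed) minimizing total squared error of the two fits."""
    best = None
    for k in range(2, len(L2) - 1):  # at least 2 points per segment
        s1, i1 = np.polyfit(L2[:k], dpoae[:k], 1)
        s2, i2 = np.polyfit(L2[k:], dpoae[k:], 1)
        sse = (np.sum((np.polyval([s1, i1], L2[:k]) - dpoae[:k]) ** 2)
               + np.sum((np.polyval([s2, i2], L2[k:]) - dpoae[k:]) ** 2))
        if best is None or sse < best[0]:
            best = (sse, L2[k], s1, s2)
    _, bp, slope_linear, slope_compressed = best
    return bp, slope_linear, slope_compressed
```

A compression slope well below 1 dB/dB above the breakpoint is the signature of basilar-membrane compression that the study relates to hearing thresholds.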

  5. Dynamic Increase Factors for High Performance Concrete in Compression using Split Hopkinson Pressure Bar

    DEFF Research Database (Denmark)

    Riisgaard, Benjamin; Ngo, Tuan; Mendis, Priyan

    2007-01-01

    This paper provides dynamic increase factors (DIF) in compression for two different High Performance Concretes (HPC), 100 MPa and 160 MPa, respectively. In the experimental investigation, 2 different Split Hopkinson Pressure Bars were used in order to test over a wide range of strain rates, 100 s-1 to 700 s-1. The results are compared with the CEB Model Code, and the Split Hopkinson Pressure Bar technique is briefly described.

  6. DCS - A high flux beamline for time resolved dynamic compression science – Design highlights

    Energy Technology Data Exchange (ETDEWEB)

    Capatina, D., E-mail: capatina@aps.anl.gov; D’Amico, K., E-mail: kdamico@aps.anl.gov; Nudell, J., E-mail: jnudell@aps.anl.gov; Collins, J., E-mail: collins@aps.anl.gov; Schmidt, O., E-mail: oschmidt@aps.anl.gov [Advanced Photon Source, Argonne National Laboratory, 9700 S. Cass Avenue, Lemont, IL 60439 (United States)

    2016-07-27

    The Dynamic Compression Sector (DCS) beamline, a national user facility for time resolved dynamic compression science supported by the National Nuclear Security Administration (NNSA) of the Department of Energy (DOE), has recently completed construction and is being commissioned at Sector 35 of the Advanced Photon Source (APS) at Argonne National Laboratory (ANL). The beamline consists of a First Optics Enclosure (FOE) and four experimental enclosures. A Kirkpatrick–Baez focusing mirror system with 2.2 mrad incident angles in the FOE delivers pink beam to the experimental stations. A refocusing Kirkpatrick–Baez mirror system is situated in each of the two most downstream enclosures. Experiments can be conducted in either white, monochromatic, pink or monochromatic-reflected beam mode in any of the experimental stations by changing the position of two interlocked components in the FOE. The beamline Radiation Safety System (RSS) components have been designed to handle the continuous beam provided by two in-line revolver undulators with periods of 27 and 30 mm, at closed gap, 150 mA beam current, and passing through a power limiting aperture of 1.5 × 1.0 mm². A novel pink beam end station stop [1] is used to stop the continuous and focused pink beam which can achieve a peak heat flux of 105 kW/mm². A new millisecond shutter design [2] is used to deliver a quick pulse of beam to the sample, synchronized with the dynamic event, the microsecond shutter, and the storage ring clock.

  7. Development of ultra-lightweight slurries with high compressive strength for use in oil wells

    Energy Technology Data Exchange (ETDEWEB)

    Suzart, J. Walter P. [Halliburton Company, Houston, TX (United States); Farias, A.C. [PETROBRAS S.A., Rio de Janeiro, RJ (Brazil); Ribeiro, Danilo; Fernandes, Thiago; Santos, Reened [Halliburton Energy Services Aberdeen, Scotland (United Kingdom)

    2008-07-01

    Formations with low fracture gradients or depleted reservoirs often lead to difficult oil well cementing operations. Commonly employed cement slurries (14.0 to 15.8 lb/gal) generate an equivalent circulating density (ECD) higher than the fracture gradient and ultimately lead to formation damage, lost circulation and a decreased top of cement. Given the high price of oil, companies are investing in these and other wells that are difficult to exploit. Naturally, lightweight cement slurries are used to reduce the ECD (10.0 to 14.0 lb/gal), using additives to trap water and stabilize the slurry. However, when the density reaches 11.0 lb/gal, the increase in water content may cause a change in characteristics. The focus of this study is extreme cases where it is necessary to employ ultra-lightweight cement slurries (5.5 to 10.0 lb/gal). Foamed slurries have been widely used, and the objective is to offer an alternative by developing cement slurries containing incompressible microspheres, aiming for a density of 7.5 lb/gal as well as high compressive strength. Another benefit in contrast to preparing foamed cement slurries is that there is no requirement for special equipment in the field. Routine laboratory tests such as fluid-loss control, sedimentation, thickening time, free water, compressive strength, and rheology (at room and high temperatures) were performed. Thus, it was concluded that the proposed cement slurries can be used in oil wells. (author)

  8. MA-core loaded untuned RF compression cavity for HIRFL-CSR

    International Nuclear Information System (INIS)

    Mei Lirong; Xu Zhe; Yuan Youjin; Jin Peng; Bian Zhibin; Zhao Hongwei; Xia Jiawen

    2012-01-01

    To meet the requirements of high energy density physics and plasma physics research at HIRFL-CSR, the goal of achieving a higher accelerating gap voltage was proposed. Therefore, a magnetic alloy (MA)-core loaded radio frequency (RF) cavity that can provide a higher accelerating gap voltage compared to standard ferrite loaded cavities has been studied at IMP. In order to select the proper magnetic alloy material to load the RF compression cavity, measurements of four different kinds of sample MA-cores have been carried out. By testing the small cores, the core composition was selected to obtain the desired performance. According to the theoretical calculation and simulation, which show reasonable consistency for the MA-core loaded cavity, the desired performance can be achieved. Finally, calculations indicate that about 1000 kW of power will be needed to achieve the required 50 kV accelerating gap voltage.
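The quoted power figure is consistent with the standard gap-power relation P = V² / (2·R_sh) for an RF cavity. A sketch follows; note that the 1.25 kΩ shunt impedance is an assumed illustrative value chosen only to reproduce the abstract's numbers (MA-loaded cavities are low-Q and correspondingly low in shunt impedance), not a measured property of the IMP cavity:

```python
def cavity_drive_power_kw(gap_voltage_kv, shunt_impedance_ohm):
    """Estimate RF drive power from P = V^2 / (2 * R_sh).

    gap_voltage_kv: peak accelerating gap voltage in kV.
    shunt_impedance_ohm: cavity shunt impedance in ohms (assumed value
    for illustration; not taken from the paper).
    Returns power in kW.
    """
    v = gap_voltage_kv * 1e3          # kV -> V
    p_watts = v ** 2 / (2.0 * shunt_impedance_ohm)
    return p_watts / 1e3              # W -> kW
```

With an assumed 1.25 kΩ shunt impedance, a 50 kV gap voltage requires about 1000 kW, matching the estimate quoted above.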

  9. High pressure phase transitions and compressibilities of Er2Zr2O7 and Ho2Zr2O7

    Science.gov (United States)

    Zhang, F. X.; Lang, M.; Becker, U.; Ewing, R. C.; Lian, J.

    2008-01-01

    Phase stability and compressibility of rare earth zirconates with the defect-fluorite structure were investigated by in situ synchrotron x-ray diffraction. A sluggish defect-fluorite to a cotunnite-like phase transformation occurred at pressures of ~22 and ~30 GPa for Er2Zr2O7 and Ho2Zr2O7, respectively. Enhanced compressibility was found for the high pressure phase as a result of increasing cation coordination number and cation-anion bond length.

  11. Effects of flashlight guidance on chest compression performance in cardiopulmonary resuscitation in a noisy environment.

    Science.gov (United States)

    You, Je Sung; Chung, Sung Phil; Chang, Chul Ho; Park, Incheol; Lee, Hye Sun; Kim, SeungHo; Lee, Hahn Shick

    2013-08-01

    In real cardiopulmonary resuscitation (CPR), noise can arise from instructional voices and environmental sounds in places such as a battlefield and industrial and high-traffic areas. A feedback device using a flashing light was designed to overcome noise-induced stimulus saturation during CPR. This study was conducted to determine whether 'flashlight' guidance influences CPR performance in a simulated noisy setting. We recruited 30 senior medical students with no previous experience of using flashlight-guided CPR to participate in this prospective, simulation-based, crossover study. The experiment was conducted in a simulated noisy situation using a cardiac arrest model without ventilation. Noise such as patrol car and fire engine sirens was artificially generated. The flashlight guidance device emitted light pulses at the rate of 100 flashes/min. Participants also received instructions to achieve the desired rate of 100 compressions/min. CPR performances were recorded with a Resusci Anne mannequin with a computer skill-reporting system. There were significant differences between the control and flashlight groups in mean compression rate (MCR), MCR/min and visual analogue scale. However, there were no significant differences in correct compression depth, mean compression depth, correct hand position, and correctly released compression. The flashlight group constantly maintained the pace at the desired 100 compressions/min. Furthermore, the flashlight group had a tendency to keep the MCR constant, whereas the control group had a tendency to decrease it after 60 s. Flashlight-guided CPR is particularly advantageous for maintaining a desired MCR during hands-only CPR in noisy environments, where metronome pacing might not be clearly heard.

  12. Numerical investigation of a joint approach to thermal energy storage and compressed air energy storage in aquifers

    International Nuclear Information System (INIS)

    Guo, Chaobin; Zhang, Keni; Pan, Lehua; Cai, Zuansi; Li, Cai; Li, Yi

    2017-01-01

    Highlights: •One wellbore-reservoir numerical model was built to study the impact of ATES on CAESA. •With high injection temperature, the joint of ATES can improve CAESA performance. •The considerable utilization of geothermal occurs only at the beginning of operations. •Combination of CAESA and ATES can be achieved in common aquifers. -- Abstract: Different from conventional compressed air energy storage (CAES) systems, the advanced adiabatic compressed air energy storage (AA-CAES) system can store the compression heat which can be used to reheat air during the electricity generation stage. Thus, AA-CAES system can achieve a higher energy storage efficiency. Similar to the AA-CAES system, a compressed air energy storage in aquifers (CAESA) system, which is integrated with an aquifer thermal energy storage (ATES), could possibly achieve the same objective. In order to investigate the impact of ATES on the performance of CAESA, different injection air temperature schemes are designed and analyzed by using numerical simulations. Key parameters relative to energy recovery efficiencies of the different injection schemes, such as pressure distribution and temperature variation within the aquifers as well as energy flow rate in the injection well, are also investigated in this study. The simulations show that, although different injection schemes have a similar overall energy recovery efficiency (∼97%) as well as a thermal energy recovery efficiency (∼79.2%), the higher injection air temperature has a higher energy storage capability. Our results show the total energy storage for the injection air temperature at 80 °C is about 10% greater than the base model scheme at 40 °C. Sensitivity analysis reveals that the permeability of the reservoir boundary could have a significant impact on the system performance. However, other hydrodynamic and thermodynamic properties, such as the storage reservoir permeability, thermal conductivity, rock grain specific heat and rock

  13. Performance of target detection algorithm in compressive sensing miniature ultraspectral imaging compressed sensing system

    Science.gov (United States)

    Gedalin, Daniel; Oiknine, Yaniv; August, Isaac; Blumberg, Dan G.; Rotman, Stanley R.; Stern, Adrian

    2017-04-01

    Compressive sensing theory was proposed to deal with the high quantity of measurements demanded by traditional hyperspectral systems. Recently, a compressive spectral imaging technique dubbed compressive sensing miniature ultraspectral imaging (CS-MUSI) was presented. This system uses a voltage controlled liquid crystal device to create multiplexed hyperspectral cubes. We evaluate the utility of the data captured using the CS-MUSI system for the task of target detection. Specifically, we compare the performance of the matched filter target detection algorithm in traditional hyperspectral systems and in CS-MUSI multiplexed hyperspectral cubes. We found that the target detection algorithm performs similarly in both cases, despite the fact that the CS-MUSI data volume is up to an order of magnitude smaller than that of conventional hyperspectral cubes. Moreover, the target detection is approximately an order of magnitude faster on CS-MUSI data.
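The matched filter compared above is the classic spectral detector that projects each pixel onto the whitened target direction; a minimal sketch (the toy covariance and spectra below are illustrative, not CS-MUSI data):

```python
import numpy as np

def matched_filter_scores(pixels, target, mean, cov):
    """Spectral matched filter.

    pixels: (N, bands) array of pixel spectra.
    Score = (t - m)^T C^-1 (x - m) / (t - m)^T C^-1 (t - m),
    normalized so a pixel equal to the target spectrum scores 1.0.
    """
    Cinv = np.linalg.inv(cov)
    d = target - mean
    denom = d @ Cinv @ d
    return ((pixels - mean) @ Cinv @ d) / denom
```

Thresholding these scores yields the detection map; the study's point is that the same detector works comparably on the far smaller multiplexed CS-MUSI cubes.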

  14. An Enhanced Run-Length Encoding Compression Method for Telemetry Data

    Directory of Open Access Journals (Sweden)

    Shan Yanhu

    2017-09-01

    Full Text Available The telemetry data are essential in evaluating the performance of aircraft and diagnosing its failures. This work combines the oversampling technology with the run-length encoding compression algorithm with an error factor to further enhance the compression performance of telemetry data in a multichannel acquisition system. Compression of telemetry data is carried out using FPGAs. Pulse signals and vibration signals are used in the experiments. The proposed method is compared with two existing methods. The experimental results indicate that the compression ratio, precision, and distortion degree of the telemetry data are improved significantly compared with those obtained by the existing methods. The implementation and measurement of the proposed telemetry data compression method show its effectiveness when used in a high-precision high-capacity multichannel acquisition system.
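The idea of run-length encoding with an error factor can be sketched as merging consecutive samples whose deviation from the run's reference value stays within a tolerance; this is a hypothetical simplification for illustration (the oversampling stage and FPGA implementation described in the paper are omitted):

```python
def rle_encode_tolerant(samples, error_factor):
    """Run-length encode, extending a run while samples stay within
    +/- error_factor of the run's first value (near-lossless RLE)."""
    if not samples:
        return []
    runs = []
    ref, count = samples[0], 1
    for s in samples[1:]:
        if abs(s - ref) <= error_factor:
            count += 1
        else:
            runs.append((ref, count))
            ref, count = s, 1
    runs.append((ref, count))
    return runs

def rle_decode(runs):
    """Expand (value, count) pairs; reconstruction error is bounded by
    the error factor used at encode time."""
    out = []
    for value, count in runs:
        out.extend([value] * count)
    return out
```

With error_factor = 0, this degenerates to exact RLE; a nonzero factor trades bounded distortion for longer runs and a higher compression ratio, which is the trade-off the paper tunes.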

  15. Seeding magnetic fields for laser-driven flux compression in high-energy-density plasmas.

    Science.gov (United States)

    Gotchev, O V; Knauer, J P; Chang, P Y; Jang, N W; Shoup, M J; Meyerhofer, D D; Betti, R

    2009-04-01

    A compact, self-contained magnetic-seed-field generator (5 to 16 T) is the enabling technology for a novel laser-driven flux-compression scheme in laser-driven targets. A magnetized target is directly irradiated by a kilojoule or megajoule laser to compress the preseeded magnetic field to thousands of teslas. A fast (300 ns), 80 kA current pulse delivered by a portable pulsed-power system is discharged into a low-mass coil that surrounds the laser target. A >15 T target field has been demonstrated using a hot spot of a compressed target. This can lead to the ignition of massive shells imploded with low velocity, a way of reaching higher gains than is possible with conventional ICF.

  16. Focal-plane change triggered video compression for low-power vision sensor systems.

    Directory of Open Access Journals (Sweden)

    Yu M Chi

    Full Text Available Video sensors with embedded compression offer significant energy savings in transmission but incur energy losses in the complexity of the encoder. Energy efficient video compression architectures for CMOS image sensors with focal-plane change detection are presented and analyzed. The compression architectures use pixel-level computational circuits to minimize energy usage by selectively processing only pixels which generate significant temporal intensity changes. Using the temporal intensity change detection to gate the operation of a differential DCT based encoder achieves nearly identical image quality to traditional systems (4 dB decrease in PSNR) while reducing the amount of data that is processed by 67% and reducing overall power consumption by 51%. These typical energy savings, resulting from the sparsity of motion activity in the visual scene, demonstrate the utility of focal-plane change triggered compression to surveillance vision systems.
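The change-triggered gating described above can be sketched as selecting only the blocks whose temporal intensity change exceeds a threshold, and passing just those to the downstream encoder; the block size and threshold here are illustrative assumptions, not the paper's circuit parameters:

```python
import numpy as np

def change_triggered_blocks(prev_frame, frame, block=8, threshold=10):
    """Return (row, col) origins of blocks whose mean absolute temporal
    change exceeds the threshold; only these blocks would be handed to
    the differential DCT encoder, the rest are skipped to save energy."""
    h, w = frame.shape
    active = []
    for by in range(0, h, block):
        for bx in range(0, w, block):
            diff = np.abs(frame[by:by + block, bx:bx + block].astype(int)
                          - prev_frame[by:by + block, bx:bx + block].astype(int))
            if diff.mean() > threshold:
                active.append((by, bx))
    return active
```

In a mostly static surveillance scene, the active list is small, which is where the reported 67% data reduction comes from.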

  17. Regulation and drive system for high rep-rate magnetic-pulse compressors

    International Nuclear Information System (INIS)

    Birx, D.L.; Cook, E.G.; Hawkins, S.; Meyers, A.; Reginato, L.L.; Schmidt, J.A.; Smith, M.W.

    1982-01-01

    The essentially unlimited rep-rate capability of non-linear magnetic systems has imposed strict requirements on the drive system which initiates the pulse compression. An order of magnitude increase in the rep-rates achieved by the Advanced Test Accelerator (ATA) gas blown system is not difficult to achieve in the magnetic compressor. The added requirement of having a high degree of regulation at the higher rep-rates places strict requirements on the triggerable switch for charging and de-Qing. A novel feedback technique which applies the proper bias to a magnetic core by comparing a reference voltage to the charging voltage considerably eases the regulation required to achieve low jitter in magnetic compression. The performance of the high rep-rate charging and regulation systems will be described in the following pages

  18. Study on Relaxation Damage Properties of High Viscosity Asphalt Sand under Uniaxial Compression

    Directory of Open Access Journals (Sweden)

    Yazhen Sun

    2018-01-01

    Full Text Available Laboratory investigations of relaxation damage properties of high viscosity asphalt sand (HVAS by uniaxial compression tests and modified generalized Maxwell model (GMM to simulate viscoelastic characteristics coupling damage were carried out. A series of uniaxial compression relaxation tests were performed on HVAS specimens at different temperatures, loading rates, and constant levels of input strain. The results of the tests show that the peak point of relaxation modulus is highly influenced by the loading rate in the first half of an L-shaped curve, while the relaxation modulus is almost constant in the second half of the curve. It is suggested that for the HVAS relaxation tests, the temperature should be no less than −15°C. The GMM is used to determine the viscoelastic responses, the Weibull distribution function is used to characterize the damage of the HVAS and its evolution, and the modified GMM is a coupling of the two models. In this paper, the modified GMM is implemented through a secondary development with the USDFLD subroutine to analyze the relaxation damage process and improve the linear viscoelastic model in ABAQUS. Results show that the numerical method of coupling damage provides a better approximation of the test curve over almost the whole range. The results also show that the USDFLD subroutine can effectively predict the relaxation damage process of HVAS and can provide a theoretical support for crack control of asphalt pavements.
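The coupling of the generalized Maxwell model with Weibull-distributed damage described above can be sketched as a damage factor scaling the Prony-series relaxation modulus; the branch moduli, relaxation times, and Weibull parameters below are illustrative assumptions, not the paper's fitted HVAS values:

```python
import numpy as np

def relaxation_modulus(t, E_inf, branches):
    """Generalized Maxwell (Prony series) relaxation modulus:
    E(t) = E_inf + sum_i E_i * exp(-t / tau_i)."""
    E = np.full_like(t, E_inf, dtype=float)
    for E_i, tau_i in branches:
        E += E_i * np.exp(-t / tau_i)
    return E

def weibull_damage(t, eta, beta):
    """Weibull damage evolution D(t) = 1 - exp(-(t/eta)^beta),
    growing from 0 (undamaged) toward 1 (fully damaged)."""
    return 1.0 - np.exp(-(t / eta) ** beta)

def damaged_modulus(t, E_inf, branches, eta, beta):
    """Effective modulus of the coupled model: E_d(t) = (1 - D(t)) * E(t)."""
    return (1.0 - weibull_damage(t, eta, beta)) * relaxation_modulus(t, E_inf, branches)
```

The (1 - D) scaling is one common way to couple a damage variable to a viscoelastic modulus; the paper implements its coupling inside ABAQUS via the USDFLD subroutine rather than in closed form.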

  19. The Role of Principal Leadership in Achievement beyond Test Scores: An Examination of Leadership, Differentiated Curriculum and High-Achieving Students

    Science.gov (United States)

    Else, Danielle F.

    2013-01-01

    Though research has validated a link between principal leadership and student achievement, questions remain regarding the specific relationship between the principal and high-achieving learners. This association facilitates understanding of how curricular decisions are formed for high-ability learners. The study was conducted to examine the perceived…

  20. Study of Various Techniques for Improving Weak and Compressible Clay Soil under a High Earth Embankment

    Directory of Open Access Journals (Sweden)

    Zein A.K. M.

    2014-04-01

    Full Text Available This paper investigates the suitability of three soil improvement techniques for the construction of a high earth embankment on thick, weak, and highly compressible clay soil. The eastern approach embankment of Alhalfaya Bridge on the River Nile linking Khartoum North and Omdurman cities was chosen as a case study, and a comprehensive site investigation program was carried out to determine the properties of the subsurface soils. The study results showed that unless the subsurface soils are improved, they may fail or undergo excessively large settlements due to the embankment construction. Three ground improvement techniques based on the principles of the “staged construction method, SCM”, “vertical sand drain, VSD” and “sand compaction piles, SCP” of embankment foundation soil treatment are discussed and evaluated. Embankment design options based on applications of the above methods have been proposed for foundation treatment to adequately support embankment loads. A performance evaluation based on the improvement of soil properties achieved, the time required for construction, and comparative estimated costs was made to assess the effectiveness and expected overall performance of each method. Adoption of any of the soil improvement techniques considered depends mainly on the most critical and decisive factor governing the embankment design. Based on the overall performance for the embankment case studied, the vertical sand drain is considered the most appropriate improvement method, followed by the sand compaction piles technique, whereas the staged construction method showed the poorest overall performance.