WorldWideScience

Sample records for rdtc optimized compression

  1. Generalized massive optimal data compression

    Science.gov (United States)

    Alsing, Justin; Wandelt, Benjamin

    2018-05-01

    In this paper, we provide a general procedure for optimally compressing N data down to n summary statistics, where n is equal to the number of parameters of interest. We show that compression to the score function - the gradient of the log-likelihood with respect to the parameters - yields n compressed statistics that are optimal in the sense that they preserve the Fisher information content of the data. Our method generalizes earlier work on linear Karhunen-Loève compression for Gaussian data whilst recovering both lossless linear compression and quadratic estimation as special cases when they are optimal. We give a unified treatment that also includes the general non-Gaussian case as long as mild regularity conditions are satisfied, producing optimal non-linear summary statistics when appropriate. As a worked example, we derive explicitly the n optimal compressed statistics for Gaussian data in the general case where both the mean and covariance depend on the parameters.
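    The score-compression recipe above can be sketched for the simple special case of a Gaussian likelihood whose mean depends on the parameters but whose covariance is fixed; everything below (model, numbers) is illustrative, not the paper's worked example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: N = 100 data points with mean mu(theta) = theta0 + theta1 * x
# and fixed covariance C = sigma^2 * I.
N, sigma = 100, 0.5
x = np.linspace(0.0, 1.0, N)
theta_fid = np.array([1.0, 2.0])          # fiducial parameters
mu = theta_fid[0] + theta_fid[1] * x
d = mu + sigma * rng.normal(size=N)       # one mock data vector

# Score compression: t = (dmu/dtheta) C^{-1} (d - mu),
# giving n = 2 summaries for N = 100 data points.
dmu = np.stack([np.ones(N), x])           # shape (2, N): gradient of mu
Cinv = np.eye(N) / sigma**2
t = dmu @ Cinv @ (d - mu)

# The Fisher matrix of the summaries equals that of the full data set:
F = dmu @ Cinv @ dmu.T
print(t.shape, F.shape)                   # (2,) (2, 2)
```

The two numbers in `t` carry all the Fisher information the 100 data points hold about the two parameters, which is the sense of optimality claimed above.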

  2. Compressed optimization of device architectures

    Energy Technology Data Exchange (ETDEWEB)

    Frees, Adam [Univ. of Wisconsin, Madison, WI (United States). Dept. of Physics; Gamble, John King [Microsoft Research, Redmond, WA (United States). Quantum Architectures and Computation Group; Ward, Daniel Robert [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States). Center for Computing Research; Blume-Kohout, Robin J [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States). Center for Computing Research; Eriksson, M. A. [Univ. of Wisconsin, Madison, WI (United States). Dept. of Physics; Friesen, Mark [Univ. of Wisconsin, Madison, WI (United States). Dept. of Physics; Coppersmith, Susan N. [Univ. of Wisconsin, Madison, WI (United States). Dept. of Physics

    2014-09-01

    Recent advances in nanotechnology have enabled researchers to control individual quantum mechanical objects with unprecedented accuracy, opening the door for both quantum and extreme-scale conventional computation applications. As these devices become more complex, designing for facility of control becomes a daunting and computationally infeasible task. Here, motivated by ideas from compressed sensing, we introduce a protocol for the Compressed Optimization of Device Architectures (CODA). It leads naturally to a metric for benchmarking and optimizing device designs, as well as an automatic device control protocol that reduces the operational complexity required to achieve a particular output. Because this protocol is both experimentally and computationally efficient, it is readily extensible to large systems. For this paper, we demonstrate both the benchmarking and device control protocol components of CODA through examples of realistic simulations of electrostatic quantum dot devices, which are currently being developed experimentally for quantum computation.

  3. Cloud Optimized Image Format and Compression

    Science.gov (United States)

    Becker, P.; Plesea, L.; Maurer, T.

    2015-04-01

    Cloud-based image storage and processing requires a re-evaluation of formats and processing methods. For the true value of the massive volumes of earth observation data to be realized, the image data needs to be accessible from the cloud. Traditional file formats such as TIF and NITF were developed in the heyday of the desktop and assumed fast, low-latency file access. Other formats such as JPEG2000 provide streaming protocols for pixel data, but still require a server to have file access. These concepts no longer hold in cloud-based elastic storage and computation environments. This paper provides details of a newly evolving image storage format (MRF) and compression that is optimized for cloud environments. Although the cost of storage continues to fall for large data volumes, there is still significant value in compression. For imagery data to be used in analysis and to exploit the extended dynamic range of the new sensors, lossless or controlled lossy compression is of high value. Compression decreases the data volume stored and reduces the data transferred, but the reduced data size must be balanced against the CPU required to decompress. The paper also outlines a new compression algorithm (LERC) for imagery and elevation data that optimizes this balance. Advantages of the compression include a simple-to-implement algorithm that can be accessed efficiently using JavaScript. Combining this new cloud-based image storage format and compression will help resolve some of the challenges of big image data on the internet.
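    The idea of controlled lossy compression (quantize to a user-chosen error bound, then code the result losslessly) can be sketched in a few lines. Here zlib stands in for LERC's actual encoder, and the data are synthetic:

```python
import zlib
import numpy as np

rng = np.random.default_rng(1)

# Synthetic elevation-like tile: smooth structure plus small noise.
x = np.linspace(0, 4 * np.pi, 256)
data = (1000 * np.sin(x)[:, None] * np.cos(x)[None, :]
        + rng.normal(0, 1, (256, 256))).astype(np.float32)

max_error = 0.5  # user-chosen error bound, as in controlled lossy compression

# Quantize with step 2*max_error (so rounding error <= max_error),
# then apply a lossless coder (zlib as a stand-in for LERC's bit packing).
q = np.round(data / (2 * max_error)).astype(np.int32)
compressed = zlib.compress(q.tobytes(), level=6)
restored = q.astype(np.float32) * (2 * max_error)

assert np.max(np.abs(restored - data)) <= max_error
ratio = data.nbytes / len(compressed)
print(f"compression ratio: {ratio:.1f}x")
```

The balance mentioned above is visible here: a looser `max_error` raises the ratio but costs accuracy, and the decode step is a single cheap multiply.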

  4. Optimization of suspensions filtration with compressible cake

    Directory of Open Access Journals (Sweden)

    Janacova Dagmar

    2016-01-01

    This paper describes the filtration process for separating the reaction mixture after enzymatic hydrolysis of chromium tanning waste. Filtration of this mixture is complicated because the cake is compressible. Success depends strongly on a mathematical description of the filtration, on calculating optimal values of the pressure difference and the specific resistance of the filtration cake, and on temperature maintenance, which affects the viscosity. The mathematical model of filtration with a compressible cake was verified under laboratory conditions on a special filtration device developed in our department.
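    For context, a standard constant-pressure filtration model with a pressure-dependent specific cake resistance (a textbook compressible-cake description, not the authors' exact formulation; every parameter value below is invented) looks like this:

```python
import numpy as np

# Constant-pressure cake filtration (Ruth's equation) with a compressible
# cake whose specific resistance grows with pressure: alpha = alpha0 * dP**n.
mu = 1.0e-3        # filtrate viscosity, Pa.s
c = 10.0           # dry cake mass per unit filtrate volume, kg/m^3
A = 0.5            # filter area, m^2
Rm = 1.0e10        # filter-medium resistance, 1/m
alpha0, n = 1.0e9, 0.6   # compressibility law alpha = alpha0 * dP**n

def filtration_time(V, dP):
    """Time to collect filtrate volume V (m^3) at pressure difference dP (Pa)."""
    alpha = alpha0 * dP**n
    return (mu * alpha * c / (2 * A**2 * dP)) * V**2 + (mu * Rm / (A * dP)) * V

for dP in (0.5e5, 1.0e5, 2.0e5):
    print(f"dP = {dP / 1e5:.1f} bar -> t = {filtration_time(0.1, dP):.1f} s")
```

Because the cake term scales as dP^(n-1), a compressibility exponent n < 1 still rewards higher pressure in this simple model; the optimization in the paper involves the coupled effects (temperature, viscosity) that this sketch leaves out.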

  5. Optimized Projection Matrix for Compressive Sensing

    Directory of Open Access Journals (Sweden)

    Jianping Xu

    2010-01-01

    Compressive sensing (CS) is mainly concerned with low-coherence pairs, since the number of samples needed to recover the signal is proportional to the mutual coherence between the projection matrix and the sparsifying matrix. Most prior work on CS assumes the projection matrix to be a random matrix. In this paper, aiming at minimizing the mutual coherence, a method is proposed to optimize the projection matrix. The method is based on equiangular tight frame (ETF) design, because an ETF has minimum coherence. Since the problem cannot be solved exactly owing to its complexity, an alternating-minimization method is used to find a feasible solution. The optimally designed projection matrix can further reduce the number of samples necessary for recovery or improve the recovery accuracy. The proposed method demonstrates better performance than conventional optimization methods, which benefits both basis pursuit and orthogonal matching pursuit.
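    A minimal numpy sketch of this kind of projection-matrix optimization, in the spirit of ETF-based alternating minimization (the dimensions, shrinkage threshold, and iteration count are invented, and this is not the paper's exact algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)

def mutual_coherence(D):
    """Largest off-diagonal |inner product| between normalized columns."""
    Dn = D / np.linalg.norm(D, axis=0, keepdims=True)
    G = np.abs(Dn.T @ Dn)
    np.fill_diagonal(G, 0.0)
    return G.max()

m, p, k = 20, 40, 80                 # projected dim, ambient dim, atoms
Psi = rng.normal(size=(p, k))        # sparsifying matrix
Phi = rng.normal(size=(m, p))        # initial random projection

mu_before = mutual_coherence(Phi @ Psi)

t = 0.2                              # shrinkage threshold toward the ETF bound
for _ in range(50):
    D = Phi @ Psi
    Dn = D / np.linalg.norm(D, axis=0, keepdims=True)
    G = Dn.T @ Dn
    G = np.where(np.abs(G) > t, np.sign(G) * t, G)   # shrink large inner products
    np.fill_diagonal(G, 1.0)
    # Rank-m square root of the shrunk Gram matrix.
    w, V = np.linalg.eigh(G)
    idx = np.argsort(w)[::-1][:m]
    Dnew = (V[:, idx] * np.sqrt(np.clip(w[idx], 0, None))).T
    # Recover Phi from Dnew ~ Phi @ Psi by least squares.
    Phi = Dnew @ np.linalg.pinv(Psi)

mu_after = mutual_coherence(Phi @ Psi)
print(f"mutual coherence: {mu_before:.3f} -> {mu_after:.3f}")
```

The alternation is visible in the loop: one step pulls the Gram matrix toward an ETF, the other projects back onto matrices realizable as Phi @ Psi.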

  6. Optimization of Error-Bounded Lossy Compression for Hard-to-Compress HPC Data

    Energy Technology Data Exchange (ETDEWEB)

    Di, Sheng; Cappello, Franck

    2018-01-01

    Since today’s scientific applications are producing vast amounts of data, compressing them before storage/transmission is critical. Results of existing compressors show two types of HPC data sets: highly compressible and hard to compress. In this work, we carefully design and optimize the error-bounded lossy compression for hard-to-compress scientific data. We propose an optimized algorithm that can adaptively partition the HPC data into best-fit consecutive segments each having mutually close data values, such that the compression condition can be optimized. Another significant contribution is the optimization of shifting offset such that the XOR-leading-zero length between two consecutive unpredictable data points can be maximized. We finally devise an adaptive method to select the best-fit compressor at runtime for maximizing the compression factor. We evaluate our solution using 13 benchmarks based on real-world scientific problems, and we compare it with 9 other state-of-the-art compressors. Experiments show that our compressor can always guarantee the compression errors within the user-specified error bounds. Most importantly, our optimization can improve the compression factor effectively, by up to 49% for hard-to-compress data sets with similar compression/decompression time cost.
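    The XOR-leading-zero idea is easy to demonstrate: two nearby floating-point values share sign, exponent, and high mantissa bits, so their bitwise XOR starts with a long run of zeros, and shifting both values by a well-chosen offset can lengthen that run. A small self-contained illustration (not the paper's implementation):

```python
import struct

def xor_leading_zeros(a: float, b: float) -> int:
    """Number of leading zero bits in the IEEE-754 XOR of two doubles."""
    ia = struct.unpack('<Q', struct.pack('<d', a))[0]
    ib = struct.unpack('<Q', struct.pack('<d', b))[0]
    x = ia ^ ib
    return 64 if x == 0 else 64 - x.bit_length()

# Close values share a long bit prefix; distant values do not.
print(xor_leading_zeros(1.2345, 1.2346))   # many leading zeros
print(xor_leading_zeros(1.2345, 987.65))   # few leading zeros

# An offset can move two values into the same binade (same exponent),
# lengthening the shared prefix, which is the effect the paper optimizes:
print(xor_leading_zeros(0.99, 1.01), "->",
      xor_leading_zeros(0.99 + 1.11, 1.01 + 1.11))
```

A compressor only needs to store the differing suffix bits, so maximizing the shared prefix directly shrinks the encoding of each unpredictable point.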

  7. Optimizing compressive strength characteristics of hollow building

    African Journals Online (AJOL)

    eobe

    Keywords: hollow building blocks, granite dust, sand, partial replacement, compressive strength. 1. INTRODUCTION ... exposed to extreme climate. The physical ... Sridharan et al [13] conducted shear strength studies on soil-quarry dust.

  8. Optimized nonorthogonal transforms for image compression.

    Science.gov (United States)

    Guleryuz, O G; Orchard, M T

    1997-01-01

    The transform coding of images is analyzed from a common standpoint in order to generate a framework for the design of optimal transforms. It is argued that all transform coders are alike in the way they manipulate the data structure formed by transform coefficients. A general energy compaction measure is proposed to generate optimized transforms with desirable characteristics particularly suited to the simple transform coding operation of scalar quantization and entropy coding. It is shown that the optimal linear decoder (inverse transform) must be an optimal linear estimator, independent of the structure of the transform generating the coefficients. A formulation that sequentially optimizes the transforms is presented, and design equations and algorithms for its computation are provided. The properties of the resulting transform systems are investigated. In particular, it is shown that the resulting bases are nonorthogonal and complete, producing energy-compaction-optimized, decorrelated transform coefficients. Quantization issues related to nonorthogonal expansion coefficients are addressed with a simple, efficient algorithm. Two implementations are discussed, and image coding examples are given. It is shown that the proposed design framework results in systems with superior energy compaction properties and excellent coding results.
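    The claim that the optimal linear decoder is an estimator, not simply the transpose, can be illustrated with a toy nonorthogonal transform. The least-squares (pseudo-inverse) decoder below is a simple stand-in for the full linear-estimator design; the transform, dimensions, and step size are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# A nonorthogonal analysis transform: decoding with its transpose, as one
# would for an orthogonal basis, is badly suboptimal.
n = 8
T = rng.normal(size=(n, n)) + 2 * np.eye(n)    # nonorthogonal, well-conditioned

x = rng.normal(size=(n, 1000))                 # source vectors
step = 0.5
y = np.round(T @ x / step) * step              # scalar-quantized coefficients

x_transpose = T.T @ y                          # wrong for nonorthogonal T
x_pinv = np.linalg.pinv(T) @ y                 # least-squares linear decoder

mse_t = np.mean((x_transpose - x) ** 2)
mse_p = np.mean((x_pinv - x) ** 2)
print(f"transpose decode MSE: {mse_t:.3f}, least-squares decode MSE: {mse_p:.3f}")
```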

  9. Optimal control of compressible Navier-Stokes equations

    International Nuclear Information System (INIS)

    Ito, K.; Ravindran, S.S.

    1994-01-01

    Optimal control of viscous incompressible flows, governed by the incompressible Navier-Stokes equations, has been the subject of extensive study in recent years; see, e.g., [AT], [GHS], [IR], and [S]. In this paper we consider the optimal control of the compressible isentropic Navier-Stokes equations. We develop the weak variational formulation and discuss the existence and the necessary optimality condition characterizing the optimal control. A numerical method based on the mixed finite element method is also discussed to compute the control, and numerical results are presented.
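    For reference, the isentropic compressible Navier-Stokes system referred to here is usually written as follows (standard textbook form, not quoted from the paper):

```latex
\begin{align}
  &\rho_t + \nabla\cdot(\rho u) = 0, \\
  &(\rho u)_t + \nabla\cdot(\rho u \otimes u)
    - \mu \Delta u - (\lambda + \mu)\,\nabla(\nabla\cdot u) + \nabla p(\rho) = \rho f, \\
  &p(\rho) = a\,\rho^{\gamma}, \qquad \gamma \ge 1,
\end{align}
```

with density rho, velocity u, viscosity coefficients mu and lambda, and the isentropic pressure law closing the system; a distributed control typically enters through the body force f.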

  10. Thermoeconomic optimization of subcooled and superheated vapor compression refrigeration cycle

    International Nuclear Information System (INIS)

    Selbas, Resat; Kizilkan, Onder; Sencan, Arzu

    2006-01-01

    An exergy-based thermoeconomic optimization is applied to a subcooled and superheated vapor compression refrigeration system. The advantage of using the exergy method of thermoeconomic optimization is that the various elements of the system (condenser, evaporator, subcooling and superheating heat exchangers) can each be optimized on their own. The application consists of determining the optimum heat exchanger areas with the corresponding optimum subcooling and superheating temperatures. A cost function is specified for the optimum conditions. All calculations are made for three refrigerants: R22, R134a, and R407c. Thermodynamic properties of the refrigerants are formulated using the Artificial Neural Network methodology.
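    The kind of trade-off such a thermoeconomic optimization resolves can be caricatured in a few lines: capital cost grows with the subcooling temperature difference (more heat-exchanger area), while operating (exergy) cost falls with it. All coefficients and the cost shape below are invented; the paper's cost function is far more detailed:

```python
import numpy as np

# Illustrative trade-off: total cost = capital + operating, with
# capital ~ c_cap * dT and operating ~ c_op / dT (invented model).
c_cap, c_op = 12.0, 300.0   # $/yr per K, and $*K/yr

dT = np.linspace(0.5, 15.0, 500)          # candidate subcooling, K
cost = c_cap * dT + c_op / dT
dT_opt_numeric = dT[np.argmin(cost)]
dT_opt_closed = np.sqrt(c_op / c_cap)     # from d(cost)/d(dT) = 0

print(f"optimal subcooling: {dT_opt_numeric:.2f} K "
      f"(closed form {dT_opt_closed:.2f} K)")
```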

  11. Optimization of wavelet decomposition for image compression and feature preservation.

    Science.gov (United States)

    Lo, Shih-Chung B; Li, Huai; Freedman, Matthew T

    2003-09-01

    A neural-network-based framework has been developed to search for an optimal wavelet kernel that can be used for a specific image processing task. In this paper, a linear convolution neural network was employed to seek a wavelet that minimizes errors and maximizes compression efficiency for an image or a defined image pattern such as microcalcifications in mammograms and bone in computed tomography (CT) head images. We have used this method to evaluate the performance of tap-4 wavelets on mammograms, CTs, magnetic resonance images, and Lena images. We found that the Daubechies wavelet, or wavelets with similar filtering characteristics, can produce the highest compression efficiency with the smallest mean-square error for many image patterns, including general image textures as well as microcalcifications in digital mammograms. However, the Haar wavelet produces the best results on sharp edges and low-noise smooth areas. We also found that a special wavelet whose low-pass filter coefficients are 0.32252136, 0.85258927, 1.38458542, and -0.14548269 produces the best preservation outcomes in all tested microcalcification features, including the peak signal-to-noise ratio, the contrast, and the figure of merit in the wavelet lossy compression scheme. By analyzing the spectrum of the wavelet filters, we can relate the compression outcomes and feature-preservation characteristics to the choice of wavelet. This newly developed optimization approach can be generalized to other image analysis applications where a wavelet decomposition is employed.
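    A one-level orthogonal wavelet transform with the Daubechies tap-4 (db2) filter, one of the wavelets benchmarked above, is short to implement directly. The paper's special wavelet uses the low-pass taps quoted above together with its own synthesis filters and is not reproduced here; the signal and threshold below are invented:

```python
import numpy as np

# Daubechies tap-4 (db2) low-pass filter and its QMF high-pass partner.
s3 = np.sqrt(3.0)
h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2))
g = np.array([h[3], -h[2], h[1], -h[0]])

def dwt1(x):
    """One level of the periodized orthogonal DWT (approx, detail)."""
    n = len(x)
    a, d = np.zeros(n // 2), np.zeros(n // 2)
    for i in range(n // 2):
        for k in range(4):
            a[i] += h[k] * x[(2 * i + k) % n]
            d[i] += g[k] * x[(2 * i + k) % n]
    return a, d

def idwt1(a, d):
    n = 2 * len(a)
    x = np.zeros(n)
    for i in range(len(a)):
        for k in range(4):
            x[(2 * i + k) % n] += h[k] * a[i] + g[k] * d[i]
    return x

x = np.sin(np.linspace(0, 6 * np.pi, 64)) \
    + 0.05 * np.random.default_rng(0).normal(size=64)
a, d = dwt1(x)
assert np.allclose(idwt1(a, d), x)          # perfect reconstruction

# Simple lossy step: drop small detail coefficients, measure PSNR.
d_lossy = np.where(np.abs(d) > 0.05, d, 0.0)
xr = idwt1(a, d_lossy)
mse = np.mean((xr - x) ** 2)
psnr = 10 * np.log10(np.ptp(x) ** 2 / mse) if mse > 0 else np.inf
print(f"kept {np.count_nonzero(d_lossy)}/{len(d)} details, PSNR = {psnr:.1f} dB")
```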

  12. Optimal design of compressed air energy storage systems

    Energy Technology Data Exchange (ETDEWEB)

    Ahrens, F. W.; Sharma, A.; Ragsdell, K. M.

    1979-01-01

    Compressed air energy storage (CAES) power systems are currently being considered by various electric utilities for load-leveling applications. We develop models of CAES systems that employ natural underground aquifer formations, and present an optimal design methodology that demonstrates their economic viability. The approach is based upon a decomposition of the CAES plant and utility grid system into three partially decoupled subsystems. Numerical results are given for a plant employing the Galesville aquifer formation at Media, Illinois.

  13. Optimization of the multilinear compression function applied to calorimetry

    International Nuclear Information System (INIS)

    Cattaneo, Paolo Walter

    2002-01-01

    The energy dynamic range required by a calorimeter with high-speed readout may exceed the capability of existing ADCs. A solution is a dynamic compressor matching the energy span to the ADC range, such that it contributes at most a predefined amount to the calorimeter resolution. A multilinear compression function is the easiest to implement; it is therefore interesting to optimize the input-output relation and fix the break points.
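    A multilinear (piecewise-linear) compressor is straightforward to sketch. The break points and slopes below are invented; in practice they would be fixed by the resolution requirement described above, with steeper slopes (finer ADC resolution) at low energies:

```python
import numpy as np

# Piecewise-linear ("multilinear") compressor mapping a wide input energy
# range onto a limited ADC code range.
breaks = np.array([0.0, 100.0, 1000.0, 10000.0])   # input break points (MeV)
slopes = np.array([10.0, 1.0, 0.1])                # ADC counts per MeV, per segment

def compress(e):
    out = np.zeros_like(e, dtype=float)
    base_in = breaks[:-1]
    # ADC code at the start of each segment (cumulative counts so far).
    base_out = np.concatenate(([0.0], np.cumsum(slopes * np.diff(breaks))))[:-1]
    for i, (b, s) in enumerate(zip(base_in, slopes)):
        if i < len(slopes) - 1:
            sel = (e >= b) & (e < breaks[i + 1])
        else:
            sel = e >= b
        out[sel] = base_out[i] + s * (np.minimum(e[sel], breaks[-1]) - b)
    return out

e = np.array([50.0, 500.0, 5000.0])
codes = compress(e)
print(codes)   # monotonically increasing, with decreasing incremental resolution
```

With these illustrative settings the full 10 GeV span maps into 2800 counts, while a 50 MeV deposit still gets 10 counts/MeV of resolution, which is the point of the multilinear shape.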

  14. Optimizing beam transport in rapidly compressing beams on the neutralized drift compression experiment – II

    Directory of Open Access Journals (Sweden)

    Anton D. Stepanov

    2018-03-01

    The Neutralized Drift Compression Experiment-II (NDCX-II) is an induction linac that generates intense pulses of 1.2 MeV helium ions for heating matter to extreme conditions. Here, we present recent results on optimizing beam transport. The NDCX-II beamline includes a 1-m-long drift section downstream of the last transport solenoid, which is filled with charge-neutralizing plasma that enables rapid longitudinal compression of an intense ion beam against space-charge forces. The transport section on NDCX-II consists of 28 solenoids. Finding optimal field settings for a group of solenoids requires knowledge of the envelope parameters of the beam. Imaging the beam on the scintillator gives the radius of the beam, but the envelope angle is not measured directly. We demonstrate how the parameters of the beam envelope (radius, envelope angle, and emittance) can be reconstructed from a series of images taken by varying the B-field strengths of a solenoid upstream of the scintillator. We use this technique to evaluate emittance at several points in the NDCX-II beamline and for optimizing the trajectory of the beam at the entry of the plasma-filled drift section. Keywords: Charged-particle beams, Induction accelerators, Beam dynamics, Beam emittance, Ion beam diagnostics, PACS Codes: 41.75.-i, 41.85.Ja, 52.59.Sa, 52.59.Wd, 29.27.Eg
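    The envelope-reconstruction idea (beam size on the scintillator as a function of the scanned field strength determines radius, angle, and emittance) can be sketched with a thin-lens stand-in for the solenoid: the measured beam size squared is quadratic in the lens strength, so a parabola fit recovers the second moments. All beam parameters below are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

L = 1.0                                   # lens-to-screen drift, m
sig11, sig12, sig22 = 4e-6, -2e-6, 2e-6   # true second moments at the lens
eps_true = np.sqrt(sig11 * sig22 - sig12**2)

# Thin lens of strength q then drift L: x_screen = (1 - L q) x + L x',
# so sigma11 at the screen is quadratic in q.
q = np.linspace(0.5, 2.5, 15)             # scanned lens strengths, 1/m
r2 = (1 - L * q)**2 * sig11 + 2 * (1 - L * q) * L * sig12 + L**2 * sig22
r2_meas = r2 * (1 + 0.01 * rng.normal(size=q.size))   # 1% measurement noise

# Fit the parabola and invert for the moments and the emittance.
a, b, c = np.polyfit(q, r2_meas, 2)
s11 = a / L**2
s12 = (-b / (2 * L) - s11) / L
s22 = (c - s11 - 2 * L * s12) / L**2
eps_fit = np.sqrt(s11 * s22 - s12**2)
print(f"emittance: true {eps_true:.2e}, fitted {eps_fit:.2e}")
```

A solenoid scan works the same way in principle, with the thin-lens strength replaced by the solenoid's focusing function of B.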

  15. Confined compressive strength model of rock for drilling optimization

    Directory of Open Access Journals (Sweden)

    Xiangchao Shi

    2015-03-01

    The confined compressive strength (CCS) plays a vital role in drilling optimization. On the basis of Jizba's experimental results, a new CCS model considering the effects of porosity and the nonlinear characteristics with increasing confining pressure has been developed. Because the confining pressure plays a fundamental role in determining the CCS of bottom-hole rock, and because Terzaghi's effective stress principle is founded upon soil mechanics and is not suitable for calculating the confining pressure in rock mechanics, the double effective stress theory, which treats the porosity as a weighting factor of the formation pore pressure, is adopted in this study. The new CCS model, combined with the mechanical specific energy equation, is employed to optimize the drilling parameters in two practical wells located in the Sichuan Basin, China, and the calculated results show that they can be used to identify inefficient drilling situations in underbalanced drilling (UBD) and overbalanced drilling (OBD).
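    The mechanical specific energy equation paired with the CCS model is commonly written in Teale's field-unit form; a small sketch with invented drilling parameters:

```python
import numpy as np

# Teale's mechanical specific energy in field units:
#   MSE (psi) = WOB/A + 120*pi*N*T / (A*ROP)
# with WOB in lb, bit area A in in^2, N in rpm, torque T in ft-lb,
# and ROP in ft/hr.  All example numbers are invented.
def mechanical_specific_energy(wob_lb, area_in2, rpm, torque_ftlb, rop_ft_hr):
    return wob_lb / area_in2 + (120 * np.pi * rpm * torque_ftlb) / (area_in2 * rop_ft_hr)

mse = mechanical_specific_energy(
    wob_lb=30_000,
    area_in2=np.pi * (8.5 / 2) ** 2,   # 8.5-inch bit
    rpm=120,
    torque_ftlb=5_000,
    rop_ft_hr=60,
)
print(f"MSE = {mse:,.0f} psi")
# Rule of thumb: drilling is efficient when MSE is close to the rock's
# confined compressive strength; MSE much larger than CCS flags inefficiency,
# which is how the paper identifies poor UBD/OBD drilling intervals.
```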

  16. Rate-distortion optimization for compressive video sampling

    Science.gov (United States)

    Liu, Ying; Vijayanagar, Krishna R.; Kim, Joohee

    2014-05-01

    The recently introduced compressed sensing (CS) framework enables low complexity video acquisition via sub-Nyquist rate sampling. In practice, the resulting CS samples are quantized and indexed by finitely many bits (bit-depth) for transmission. In applications where the bit-budget for video transmission is constrained, rate-distortion optimization (RDO) is essential for quality video reconstruction. In this work, we develop a double-level RDO scheme for compressive video sampling, where frame-level RDO is performed by adaptively allocating the fixed bit-budget per frame to each video block based on block-sparsity, and block-level RDO is performed by modelling the block reconstruction peak-signal-to-noise ratio (PSNR) as a quadratic function of quantization bit-depth. The optimal bit-depth and the number of CS samples are then obtained by setting the first derivative of the function to zero. In the experimental studies the model parameters are initialized with a small set of training data, which are then updated with local information in the model testing stage. Simulation results presented herein show that the proposed double-level RDO significantly enhances the reconstruction quality for a bit-budget constrained CS video transmission system.
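    The block-level step (model PSNR as a quadratic in bit-depth, then zero the derivative of the rate-penalized objective) reduces to a one-line formula. The model coefficients below are invented, not fitted from training data as in the paper:

```python
import numpy as np

# Quadratic PSNR model for one block: PSNR(b) = a*b^2 + c*b + d, a < 0,
# penalized by the rate lam * m * b for m CS samples at bit-depth b.
a, c, d = -0.15, 4.0, 10.0     # concave PSNR model (invented)
m = 64                          # CS samples in this block
lam = 0.02                      # rate-distortion trade-off multiplier

# d/db [PSNR(b) - lam*m*b] = 2*a*b + c - lam*m = 0  ->  b* = (lam*m - c) / (2*a)
b_star = (lam * m - c) / (2 * a)
b_opt = int(np.clip(round(b_star), 1, 16))   # snap to a practical bit-depth

psnr = a * b_opt**2 + c * b_opt + d
print(f"optimal bit-depth: {b_opt}, modelled PSNR: {psnr:.1f} dB")
```

Frame-level RDO then repeats this per block while keeping the sum of m*b within the frame's bit-budget.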

  17. Modelling for Fuel Optimal Control of a Variable Compression Engine

    OpenAIRE

    Nilsson, Ylva

    2007-01-01

    Variable compression engines are a mean to meet the demand on lower fuel consumption. A high compression ratio results in high engine efficiency, but also increases the knock tendency. On conventional engines with fixed compression ratio, knock is avoided by retarding the ignition angle. The variable compression engine offers an extra dimension in knock control, since both ignition angle and compression ratio can be adjusted. The central question is thus for what combination of compression ra...

  18. Optimal Image Data Compression For Whole Slide Images

    Directory of Open Access Journals (Sweden)

    J. Isola

    2016-06-01

    Differences in WSI file sizes of scanned images deemed “visually lossless” were significant. If we set the Hamamatsu Nanozoomer .NDPI file size (using its default “jpeg80” quality) as 100%, the size of a “visually lossless” JPEG2000 file was only 15-20% of that. Comparisons to Aperio and 3D-Histech files (.svs and .mrxs, at their default settings) yielded similar results. A further optimization of JPEG2000 was done by treating empty slide area as a uniform white-grey surface, which could be maximally compressed. Using this algorithm, JPEG2000 file sizes were only half, or even less, of the original JPEG2000. Variation was due to the proportion of empty slide area on the scan. We anticipate that wavelet-based image compression methods, such as JPEG2000, have a significant advantage in saving storage costs of scanned whole slide images. In routine pathology laboratories applying WSI technology widely to their histology material, absolute cost savings can be substantial.
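    The empty-area optimization can be sketched generically: detect tiles containing no tissue and flatten them to a uniform value before compression. zlib stands in for JPEG2000 here, and the image, tile size, and thresholds are all invented:

```python
import zlib
import numpy as np

rng = np.random.default_rng(0)

tile = 64
# Noisy "empty glass" background with a darker "tissue" region.
img = rng.integers(225, 246, (512, 512), dtype=np.uint8)
img[128:320, 96:288] = rng.integers(60, 200, (192, 192), dtype=np.uint8)

def flatten_background(im, thr=220, fill=235):
    """Replace tiles with no tissue pixels by a uniform grey."""
    out = im.copy()
    for y in range(0, im.shape[0], tile):
        for x in range(0, im.shape[1], tile):
            t = out[y:y + tile, x:x + tile]
            if t.min() > thr:          # nothing darker than background here
                t[:] = fill
    return out

raw = len(zlib.compress(img.tobytes(), 6))
flat = len(zlib.compress(flatten_background(img).tobytes(), 6))
print(f"compressed size: {raw} -> {flat} bytes")
```

As in the abstract, the saving scales with the proportion of empty slide area, since only background tiles are touched.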

  19. Optimal context quantization in lossless compression of image data sequences

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Wu, X.; Andersen, Jakob Dahl

    2004-01-01

    In image compression, context-based entropy coding is commonly used. A critical issue for the performance of context-based image coding is how to resolve the conflict between the desire for large templates, which model high-order statistical dependency of the pixels, and the problem of context dilution due to insufficient sample statistics of a given input image. We consider the problem of finding the optimal quantizer Q that quantizes the K-dimensional causal context C_t = (X_{t-t1}, X_{t-t2}, ..., X_{t-tK}) of a source symbol X_t into one of a set of conditioning states. The optimality of context quantization is defined as the minimum static or minimum adaptive code length for a given data set. For a binary source alphabet, an optimal context quantizer can be computed exactly by a fast dynamic programming algorithm. Faster approximation solutions are also proposed. In the case of an m-ary source alphabet...
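    For a binary source, the dynamic-programming quantizer mentioned above can be sketched: sort contexts by their conditional probability of a 1 and partition them into M contiguous groups minimizing the total static code length. The per-context counts below are synthetic, and model cost is ignored:

```python
import numpy as np

rng = np.random.default_rng(0)

def entropy_bits(n0, n1):
    """Static code length n*H(n1/n) in bits for counts (n0, n1)."""
    n = n0 + n1
    if n == 0 or n0 == 0 or n1 == 0:
        return 0.0
    p = n1 / n
    return -n * (p * np.log2(p) + (1 - p) * np.log2(1 - p))

K = 32                         # raw contexts
counts = np.stack([rng.integers(1, 200, K), rng.integers(1, 200, K)], axis=1)
order = np.argsort(counts[:, 1] / counts.sum(axis=1))   # sort by p(1|context)
counts = counts[order]

def group_cost(i, j):          # merge sorted contexts i..j-1 into one state
    n0, n1 = counts[i:j, 0].sum(), counts[i:j, 1].sum()
    return entropy_bits(n0, n1)

M = 4                          # conditioning states
INF = float("inf")
dp = [[INF] * (K + 1) for _ in range(M + 1)]
dp[0][0] = 0.0
for m in range(1, M + 1):
    for j in range(m, K + 1):
        dp[m][j] = min(dp[m - 1][i] + group_cost(i, j) for i in range(m - 1, j))

print(f"{M}-state quantizer code length: {dp[M][K]:.0f} bits")
print(f"unquantized ({K} states): {sum(group_cost(i, i + 1) for i in range(K)):.0f} bits")
```

Merging contexts can only increase the data bits (entropy is concave), but it slashes the number of statistics to maintain, which is the dilution trade-off the abstract describes.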

  20. Compressive sensing using optimized sensing matrix for face verification

    Science.gov (United States)

    Oey, Endra; Jeffry; Wongso, Kelvin; Tommy

    2017-12-01

    Biometrics offers a solution to problems that arise with password-based data access, such as forgotten passwords and the difficulty of recalling many different ones. With biometrics, the physical characteristics of a person can be captured and used in the identification process. In this research, facial biometrics is used in the verification process to determine whether a user has the authority to access the data. It is chosen for its low implementation cost and reasonably accurate user identification. The face verification system adopted in this research uses the Compressive Sensing (CS) technique, which aims to reduce the dimension of, and encrypt, the facial test image by representing it as a sparse signal. The encrypted data can be reconstructed using a sparse coding algorithm. Two types of sparse coding, namely Orthogonal Matching Pursuit (OMP) and Iteratively Reweighted Least Squares (IRLS-ℓp), are compared in this face verification research. The reconstructed sparse signal is then compared, via the Euclidean norm, with the sparse signal of the user previously saved in the system to determine the validity of the facial test image. The accuracies obtained in this research are 99% for IRLS (verification time 4.917 s) and 96.33% for OMP (0.4046 s) with a non-optimized sensing matrix, and 99% for IRLS (13.4791 s) and 98.33% for OMP (3.1571 s) with an optimized sensing matrix.
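    Orthogonal Matching Pursuit, one of the two sparse-coding algorithms compared above, fits in a few lines; the sensing matrix and sparse signal below are synthetic, not face data:

```python
import numpy as np

rng = np.random.default_rng(0)

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedy k-sparse solution of y ~ A x."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    coef = np.array([])
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # atom most correlated with residual
        if j not in support:
            support.append(j)
        # Re-fit all selected atoms jointly (the "orthogonal" step).
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

# Synthetic check: random sensing matrix, 3-sparse signal.
m, n, k = 40, 100, 3
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = [3.0, -2.5, 4.0]
y = A @ x_true

x_hat = omp(A, y, k)
print("residual norm:", np.linalg.norm(y - A @ x_hat))
```

In the verification setting above, `x_hat` for a test image would be compared against the enrolled user's stored sparse signal via the Euclidean norm.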

  1. Extreme compression for extreme conditions: pilot study to identify optimal compression of CT images using MPEG-4 video compression.

    Science.gov (United States)

    Peterson, P Gabriel; Pak, Sung K; Nguyen, Binh; Jacobs, Genevieve; Folio, Les

    2012-12-01

    This study aims to evaluate the utility of compressed computed tomography (CT) studies (to expedite transmission) using Moving Picture Experts Group 4 (MPEG-4) movie formatting in combat hospitals when guiding major treatment regimens. This retrospective analysis was approved by the Walter Reed Army Medical Center institutional review board with a waiver for the informed consent requirement. Twenty-five CT chest, abdomen, and pelvis exams were converted from Digital Imaging and Communications in Medicine to MPEG-4 movie format at various compression ratios. Three board-certified radiologists reviewed various levels of compression on emergent CT findings on 25 combat casualties and compared with the interpretation of the original series. A Universal Trauma Window was selected at -200 HU level and 1,500 HU width, then compressed at three lossy levels. Sensitivities and specificities for each reviewer were calculated along with 95% confidence intervals using the method of generalized estimating equations. The compression ratios compared were 171:1, 86:1, and 41:1, with combined sensitivities of 90% (95% confidence interval, 79-95), 94% (87-97), and 100% (93-100), respectively. Combined specificities were 100% (85-100), 100% (85-100), and 96% (78-99), respectively. The introduction of CT in combat hospitals, with increasing detector counts and image data in recent military operations, has increased the need for effective teleradiology, mandating compression technology. Image compression is currently used to transmit images from combat hospitals to tertiary care centers with subspecialists, and our study demonstrates MPEG-4 technology as a reasonable means of achieving such compression.

  2. Mathematical model for the optimization of compressive strength

    African Journals Online (AJOL)

    ES Obe

    cement and sand either wholly or partially without adverse effect on the strength properties of the ... sandcrete block, compressive strength, laterite, Scheffe's theory. 1. Introduction ... that for the properties of a q-component mixture which ...

  3. Optimal chest compression rate in cardiopulmonary resuscitation: a prospective, randomized crossover study using a manikin model.

    Science.gov (United States)

    Lee, Seong Hwa; Ryu, Ji Ho; Min, Mun Ki; Kim, Yong In; Park, Maeng Real; Yeom, Seok Ran; Han, Sang Kyoon; Park, Seong Wook

    2016-08-01

    When performing cardiopulmonary resuscitation (CPR), the 2010 American Heart Association guidelines recommend a chest compression rate of at least 100 per minute, whereas the 2010 European Resuscitation Council guidelines recommend a rate of between 100 and 120 per minute. The aim of this study was to examine the rate of chest compression that fulfilled various quality indicators, thereby determining the optimal rate of compression. Thirty-two trainee emergency medical technicians and six paramedics were enrolled in this study. All participants had been trained in basic life support. Each participant performed 2 min of continuous compressions on a skill-reporter manikin, while listening to a metronome sound at rates of 100, 120, 140, and 160 beats/min, in random order. Mean compression depth, incomplete chest recoil, and the proportion of correctly performed chest compressions during the 2 min were measured and recorded. The rate of incomplete chest recoil was lower at compression rates of 100 and 120 per minute than at 160 per minute (P=0.001). The number of compressions that fulfilled the criteria for high-quality CPR at a rate of 120 per minute was significantly higher than at 100 per minute (P=0.016). The number of high-quality CPR compressions was highest at a compression rate of 120 per minute, and incomplete recoil increased with increasing compression rate. However, further studies are needed to confirm these results.

  4. Optimal Chest Compression Rate and Compression to Ventilation Ratio in Delivery Room Resuscitation: Evidence from Newborn Piglets and Neonatal Manikins

    OpenAIRE

    Solevåg, Anne Lee; Schmölzer, Georg M.

    2017-01-01

    Cardiopulmonary resuscitation (CPR) duration until return of spontaneous circulation (ROSC) influences survival and neurologic outcomes after delivery room (DR) CPR. High quality chest compressions (CC) improve cerebral and myocardial perfusion. Improved myocardial perfusion increases the likelihood of a faster ROSC. Thus, optimizing CC quality may improve outcomes both by preserving cerebral blood flow during CPR and by reducing the recovery time. CC quality is determined by rate, CC to vent...

  5. Optimization of binder, disintegrant and compression pressure for ...

    African Journals Online (AJOL)

    This was done by studying the contributions of variable factors of binder concentration, disintegrant concentration and compression pressure to tablet friability, hardness and disintegration time under factor combinations given by 23 factorial experimental designs. The effect of every factor was determined by finding the ...

  6. Optimization of compressive 4D-spatio-spectral snapshot imaging

    Science.gov (United States)

    Zhao, Xia; Feng, Weiyi; Lin, Lihua; Su, Wu; Xu, Guoqing

    2017-10-01

    In this paper, a modified 3D computational reconstruction method in the compressive 4D-spectro-volumetric snapshot imaging system is proposed for better sensing of the spectral information of 3D objects. In the design of the imaging system, a microlens array (MLA) is used to obtain a set of multi-view elemental images (EIs) of the 3D scenes. Then, these elemental images with one-dimensional spectral information and different perspectives are captured by the coded aperture snapshot spectral imager (CASSI), which can sense the spectral data cube onto a compressive 2D measurement image. Finally, the depth images of 3D objects at arbitrary depths, like a focal stack, are computed by inversely mapping the elemental images according to geometrical optics. With the spectral estimation algorithm, the spectral information of the 3D objects is also reconstructed. Using a shifted translation matrix, the contrast of the reconstruction result is further enhanced. Numerical simulation results verify the performance of the proposed method. The system can obtain both 3D spatial information and spectral data on 3D objects using only one single snapshot, which is valuable for agricultural harvesting robots and other 3D dynamic scenes.

  7. A novel signal compression method based on optimal ensemble empirical mode decomposition for bearing vibration signals

    Science.gov (United States)

    Guo, Wei; Tse, Peter W.

    2013-01-01

    Today, remote machine condition monitoring is popular due to the continuous advancement in wireless communication. Bearing is the most frequently and easily failed component in many rotating machines. To accurately identify the type of bearing fault, large amounts of vibration data need to be collected. However, the volume of transmitted data cannot be too high because the bandwidth of wireless communication is limited. To solve this problem, the data are usually compressed before transmitting to a remote maintenance center. This paper proposes a novel signal compression method that can substantially reduce the amount of data that need to be transmitted without sacrificing the accuracy of fault identification. The proposed signal compression method is based on ensemble empirical mode decomposition (EEMD), which is an effective method for adaptively decomposing the vibration signal into different bands of signal components, termed intrinsic mode functions (IMFs). An optimization method was designed to automatically select appropriate EEMD parameters for the analyzed signal, and in particular to select the appropriate level of the added white noise in the EEMD method. An index termed the relative root-mean-square error was used to evaluate the decomposition performances under different noise levels to find the optimal level. After applying the optimal EEMD method to a vibration signal, the IMF relating to the bearing fault can be extracted from the original vibration signal. Compressing this signal component obtains a much smaller proportion of data samples to be retained for transmission and further reconstruction. The proposed compression method were also compared with the popular wavelet compression method. 
Experimental results demonstrate that the optimization of EEMD parameters can automatically find appropriate EEMD parameters for the analyzed signals, and the IMF-based compression method provides a higher compression ratio, while retaining the bearing defect
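
The noise-level selection described above can be sketched as an arg-min search over candidate noise amplitudes. The sketch below is a minimal stand-in: a toy moving-average "decomposition" replaces a real EEMD library, and a relative root-mean-square error index compares the signal with its reconstruction. The function names, candidate levels, and the toy decomposition are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def relative_rmse(signal, reconstruction):
    """Relative root-mean-square error: RMS of the reconstruction error
    divided by the RMS of the signal (smaller is better)."""
    return np.sqrt(np.mean((signal - reconstruction) ** 2)) / np.sqrt(np.mean(signal ** 2))

def toy_eemd(signal, noise_level, n_trials=20, width=5, seed=0):
    """Stand-in for EEMD: for each trial, add white noise scaled by
    `noise_level`, split the noisy copy into a fast part and a slow
    (moving-average) part, and ensemble-average the two parts."""
    rng = np.random.default_rng(seed)
    kernel = np.ones(width) / width
    fast = np.zeros_like(signal)
    slow = np.zeros_like(signal)
    for _ in range(n_trials):
        noisy = signal + noise_level * signal.std() * rng.standard_normal(signal.size)
        smooth = np.convolve(noisy, kernel, mode="same")
        fast += noisy - smooth
        slow += smooth
    return fast / n_trials, slow / n_trials

def select_noise_level(signal, candidates=(0.1, 0.2, 0.4)):
    """Pick the added-noise level whose decomposition reconstructs the
    signal with the smallest relative RMSE, mirroring the paper's
    optimization step for the real EEMD parameters."""
    errors = {lvl: relative_rmse(signal, sum(toy_eemd(signal, lvl)))
              for lvl in candidates}
    return min(errors, key=errors.get), errors
```

In a real application the `toy_eemd` stub would be replaced by a full EEMD implementation; only the index and the arg-min selection loop carry over.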

  8. GTZ: a fast compression and cloud transmission tool optimized for FASTQ files.

    Science.gov (United States)

    Xing, Yuting; Li, Gen; Wang, Zhenguo; Feng, Bolun; Song, Zhuo; Wu, Chengkun

    2017-12-28

    The dramatic development of DNA sequencing technology is generating real big data, craving more storage and bandwidth. To speed up data sharing and bring data to computing resources faster and cheaper, it is necessary to develop a compression tool that can support efficient compression and transmission of sequencing data onto cloud storage. This paper presents GTZ, a compression and transmission tool optimized for FASTQ files. As a reference-free lossless FASTQ compressor, GTZ treats the different lines of FASTQ separately, utilizes adaptive context modelling to estimate their characteristic probabilities, and compresses data blocks with arithmetic coding. GTZ can also be used to compress multiple files or directories at once. Furthermore, as a tool for the cloud computing era, it is capable of saving compressed data locally or transmitting it directly into the cloud by choice. We evaluated the performance of GTZ on diverse FASTQ benchmarks. Results show that in most cases it outperforms many other tools in terms of compression ratio, speed and stability. GTZ enables efficient lossless FASTQ data compression and simultaneous data transmission onto the cloud. It emerges as a useful tool for NGS data storage and transmission in the cloud environment. GTZ is freely available online at: https://github.com/Genetalks/gtz.
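
GTZ's core layout idea, routing the header, base, and quality lines of each FASTQ record into separate streams with very different statistics, can be sketched as follows. This is a minimal illustration assuming zlib as a stand-in for GTZ's adaptive context model and arithmetic coder; the function names are hypothetical, not GTZ's API.

```python
import zlib

def split_streams(fastq_text):
    """Route each FASTQ record line to its own stream, as GTZ does:
    headers, bases, and quality strings have different statistics, so a
    model per stream estimates probabilities far better than one model
    over the interleaved file."""
    headers, seqs, quals = [], [], []
    lines = fastq_text.strip().split("\n")
    for i in range(0, len(lines), 4):
        headers.append(lines[i])
        seqs.append(lines[i + 1])
        quals.append(lines[i + 3])          # line i+2 is the '+' separator
    return ["\n".join(s) for s in (headers, seqs, quals)]

def compress_by_stream(fastq_text, level=9):
    """Compress each homogeneous stream independently; zlib here stands
    in for the context-modelling arithmetic coder, which is not
    reproduced in this sketch."""
    return [zlib.compress(s.encode(), level) for s in split_streams(fastq_text)]
```

On real data each stream would typically be fed to a coder tuned to its alphabet (4-letter bases, bounded quality scores), which is where the reported compression gains come from.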

  9. Optimization of composite sandwich cover panels subjected to compressive loadings

    Science.gov (United States)

    Cruz, Juan R.

    1991-01-01

    An analysis and design method is presented for the design of composite sandwich cover panels that includes transverse shear effects and damage tolerance considerations. This method is incorporated into a sandwich optimization computer program entitled SANDOP. As a demonstration of its capabilities, SANDOP is used in the present study to design optimized composite sandwich cover panels for transport aircraft wing applications. The results of this design study indicate that optimized composite sandwich cover panels have approximately the same structural efficiency as stiffened composite cover panels designed to satisfy individual constraints. The results also indicate that inplane stiffness requirements have a large effect on the weight of these composite sandwich cover panels at higher load levels. Increasing the maximum allowable strain and the upper percentage limit of the 0 degree and +/- 45 degree plies can yield significant weight savings. The results show that the structural efficiency of these optimized composite sandwich cover panels is relatively insensitive to changes in core density. Thus, core density should be chosen by criteria other than minimum weight (e.g., damage tolerance, ease of manufacture, etc.).

  10. Optimization of the compressive strength of five-component-concrete ...

    African Journals Online (AJOL)

    The paper presents the report of an investigation carried out to optimize some mechanical properties of a five-component-concrete mix. Mound soil (MS), randomly selected from some habitats of a common tropical species of termite from Iyeke-Ogba, Nigeria, was investigated as a fifth component in concrete. The work ...

  11. Optimal control of parametric oscillations of compressed flexible bars

    Science.gov (United States)

    Alesova, I. M.; Babadzanjanz, L. K.; Pototskaya, I. Yu.; Pupysheva, Yu. Yu.; Saakyan, A. T.

    2018-05-01

    In this paper, the problem of damping the oscillations of linear systems with piecewise-constant control is solved. The motion of the bar construction is reduced to the form described by Hill's differential equation using the Bubnov-Galerkin method. The switching moments of the one-sided control are calculated by the method of sequential linear programming. The elements of the fundamental matrix of Hill's equation are approximated by trigonometric series. Examples of the optimal control of the systems for various initial conditions and different numbers of control stages have been calculated. The corresponding phase trajectories and transient processes are presented.

  12. Optimized Bunch Compression System for the European XFEL

    CERN Document Server

    Limberg, Torsten; Brinkmann, Reinhard; Decking, Winfried; Dohlus, Martin; Flöettmann, Klaus; Kim, Yujong; Schneidmiller, Evgeny

    2005-01-01

    The European XFEL bunch compressor system has been optimized for greater flexibility in parameter space. Operation beyond the XFEL design parameters is discussed in two directions: achieving the maximum number of photons in a single pulse on the one hand, and reaching the peak current necessary for lasing with a pulse as short as possible on the other. Results of start-to-end calculations including 3D-CSR effects, space charge forces and the impact of wake fields demonstrate the potential of the XFEL for further improvement or, respectively, its safety margin for operation at design values.

  13. A theoretical global optimization method for vapor-compression refrigeration systems based on entransy theory

    International Nuclear Information System (INIS)

    Xu, Yun-Chao; Chen, Qun

    2013-01-01

    Vapor-compression refrigeration systems are among the essential energy conversion systems for humankind and nowadays consume huge amounts of energy. Many effective methods exist for optimizing the energy efficiency of these systems, but they rely mainly on engineering experience and computer simulations rather than theoretical analysis, owing to the complex and vague physical essence of the processes involved. We attempt to propose a theoretical global optimization method based on in-depth physical analysis of the involved physical processes, i.e. heat transfer analysis for the condenser and evaporator, by introducing the entransy theory, and thermodynamic analysis for the compressor and expansion valve. The integration of heat transfer and thermodynamic analyses forms the overall physical optimization model for the systems, describing the relation between all the unknown parameters and the known conditions, which makes theoretical global optimization possible. With the aid of mathematical conditional extremum solutions, an optimization equation group and the optimal configuration of all the unknown parameters are analytically obtained. Eventually, via the optimization of a typical vapor-compression refrigeration system under various working conditions to minimize the total heat transfer area of the heat exchangers, the validity and superiority of the newly proposed optimization method are proved. - Highlights: • A global optimization method for vapor-compression systems is proposed. • Integrating heat transfer and thermodynamic analyses forms the optimization model. • A mathematical relation between design parameters and requirements is derived. • Entransy dissipation is introduced into heat transfer analysis. • The validity of the method is proved via optimization of practical cases

  14. Optimization of multi-phase compressible lattice Boltzmann codes on massively parallel multi-core systems

    NARCIS (Netherlands)

    Biferale, L.; Mantovani, F.; Pivanti, M.; Pozzati, F.; Sbragaglia, M.; Schifano, S.F.; Toschi, F.; Tripiccione, R.

    2011-01-01

    We develop a Lattice Boltzmann code for computational fluid-dynamics and optimize it for massively parallel systems based on multi-core processors. Our code describes 2D multi-phase compressible flows. We analyze the performance bottlenecks that we find as we gradually expose a larger fraction of

  15. Least median of squares filtering of locally optimal point matches for compressible flow image registration

    International Nuclear Information System (INIS)

    Castillo, Edward; Guerrero, Thomas; Castillo, Richard; White, Benjamin; Rojo, Javier

    2012-01-01

    Compressible flow based image registration operates under the assumption that the mass of the imaged material is conserved from one image to the next. Depending on how the mass conservation assumption is modeled, the performance of existing compressible flow methods is limited by factors such as image quality, noise, large magnitude voxel displacements, and computational requirements. The Least Median of Squares Filtered Compressible Flow (LFC) method introduced here is based on a localized, nonlinear least squares, compressible flow model that describes the displacement of a single voxel and lends itself to a simple grid search (block matching) optimization strategy. Spatially inaccurate grid search point matches, corresponding to erroneous local minimizers of the nonlinear compressible flow model, are removed by a novel filtering approach based on least median of squares fitting and the forward search outlier detection method. The spatial accuracy of the method is measured using ten thoracic CT image sets and large samples of expert determined landmarks (available at www.dir-lab.com). The LFC method produces an average error within the intra-observer error on eight of the ten cases, indicating that the method is capable of achieving a high spatial accuracy for thoracic CT registration. (paper)
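
The least-median-of-squares filtering step can be sketched in a few lines. The sketch below simplifies the paper's local compressible-flow model to a single translation: random minimal samples are tried, the one minimizing the median squared residual is kept, and matches far from it (relative to a robust scale estimate) are flagged as outliers. The trial count, scale constant, and inlier threshold are illustrative assumptions.

```python
import numpy as np

def lmeds_filter(displacements, n_trials=200, inlier_factor=2.5, seed=0):
    """Least-median-of-squares filtering of candidate point matches.
    `displacements` is an (N, d) array of block-matching vectors; the
    'model' here is a single translation, a simplification of the
    localized compressible-flow model."""
    rng = np.random.default_rng(seed)
    d = np.asarray(displacements, dtype=float)
    best_med, best_center = np.inf, None
    for _ in range(n_trials):
        center = d[rng.integers(len(d))]          # minimal sample: one match
        resid = np.sum((d - center) ** 2, axis=1)
        med = np.median(resid)                    # LMedS criterion
        if med < best_med:
            best_med, best_center = med, center
    # robust scale estimate from the best median (Rousseeuw's 1.4826 factor)
    scale = 1.4826 * np.sqrt(best_med) + 1e-12
    resid = np.sqrt(np.sum((d - best_center) ** 2, axis=1))
    return resid <= inlier_factor * scale         # boolean inlier mask
```

Because the criterion is a median rather than a sum, up to half of the matches can be grossly wrong without corrupting the fit, which is what makes the approach attractive for erroneous local minimizers of block matching.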

  16. Visual Communications for Heterogeneous Networks/Visually Optimized Scalable Image Compression. Final Report for September 1, 1995 - February 28, 2002

    Energy Technology Data Exchange (ETDEWEB)

    Hemami, S. S.

    2003-06-03

    The authors developed image and video compression algorithms that provide scalability, reconstructibility, and network adaptivity, and developed compression and quantization strategies that are visually optimal at all bit rates. The goal of this research is to enable reliable "universal access" to visual communications over the National Information Infrastructure (NII). All users, regardless of their individual network connection bandwidths, qualities-of-service, or terminal capabilities, should have the ability to access still images, video clips, and multimedia information services, and to use interactive visual communications services. To do so requires special capabilities for image and video compression algorithms: scalability, reconstructibility, and network adaptivity. Scalability allows an information service to provide visual information at many rates, without requiring additional compression or storage after the stream has been compressed the first time. Reconstructibility allows reliable visual communications over an imperfect network. Network adaptivity permits real-time modification of compression parameters to adjust to changing network conditions. Furthermore, to optimize the efficiency of the compression algorithms, they should be visually optimal, where each bit expended reduces the visual distortion. Visual optimality is achieved first through extensive experimentation to quantify human sensitivity to supra-threshold compression artifacts, and then through incorporation of these experimental results into quantization strategies and compression algorithms.

  17. Optimization of poorly compactable drug tablets manufactured by direct compression using the mixture experimental design.

    Science.gov (United States)

    Martinello, Tiago; Kaneko, Telma Mary; Velasco, Maria Valéria Robles; Taqueda, Maria Elena Santos; Consiglieri, Vladi O

    2006-09-28

    The poor flowability and bad compressibility characteristics of paracetamol are well known. As a result, the production of paracetamol tablets is almost exclusively by wet granulation, a disadvantageous method when compared to direct compression. The development of a new tablet formulation is still based on a large number of experiments and often relies merely on the experience of the analyst. The purpose of this study was to apply experimental design methodology (DOE) to the development and optimization of tablet formulations containing high amounts of paracetamol (more than 70%) and manufactured by direct compression. Nineteen formulations, screened by DOE methodology, were produced with different proportions of Microcel 102, Kollydon VA 64, Flowlac, Kollydon CL 30, PEG 4000, Aerosil, and magnesium stearate. Tablet properties, except friability, were in accordance with the USP 28th ed. requirements. These results were used to generate plots for optimization, mainly for friability. The physical-chemical data found for the optimized formulation were very close to those from the regression analysis, demonstrating that the mixture design is a valuable tool for the research and development of new formulations.

  18. Multi-objective optimization of an underwater compressed air energy storage system using genetic algorithm

    International Nuclear Information System (INIS)

    Cheung, Brian C.; Carriveau, Rupp; Ting, David S.K.

    2014-01-01

    This paper presents the findings from a multi-objective genetic algorithm optimization study on the design parameters of an underwater compressed air energy storage (UWCAES) system. A 4 MWh UWCAES system was numerically simulated and its energy, exergy, and exergoeconomics were analysed. Optimal system configurations were determined that maximized the UWCAES system round-trip efficiency and operating profit, and minimized the cost rate of exergy destruction and capital expenditures. The optimal solutions obtained from the multi-objective optimization model formed a Pareto-optimal front, and a single preferred solution was selected using the pseudo-weight vector multi-criteria decision making approach. A sensitivity analysis was performed on interest rates to gauge their impact on preferred system designs. Results showed similar preferred system designs for all interest rates in the studied range. The round-trip efficiency and operating profit of the preferred system designs were approximately 68.5% and $53.5/cycle, respectively. The cost rate of the system increased with interest rates. - Highlights: • UWCAES system configurations were developed using multi-objective optimization. • System was optimized for energy efficiency, exergy, and exergoeconomics. • Pareto-optimal solution surfaces were developed at different interest rates. • Similar preferred system configurations were found at all interest rates studied
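
The pseudo-weight selection step used above can be sketched as follows, assuming Deb's pseudo-weight definition with all objectives cast as minimization (a maximized objective such as profit would first be negated). The front data and function names are illustrative.

```python
import numpy as np

def pseudo_weights(front):
    """Pseudo-weight vector of each point on a Pareto front (all
    objectives assumed minimized): the normalized distance of each
    objective from its worst value, renormalized to sum to 1 per point."""
    f = np.asarray(front, dtype=float)
    span = f.max(axis=0) - f.min(axis=0)
    raw = (f.max(axis=0) - f) / np.where(span == 0, 1, span)
    return raw / raw.sum(axis=1, keepdims=True)

def pick_preferred(front, target):
    """Select the index of the front point whose pseudo-weight vector
    best matches the decision maker's preference vector `target`."""
    w = pseudo_weights(front)
    return int(np.argmin(np.linalg.norm(w - np.asarray(target, dtype=float), axis=1)))
```

A preference of equal weights picks the "knee" of the trade-off, while a preference skewed toward one objective picks a solution near that objective's individual optimum.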

  19. Optimal erasure protection for scalably compressed video streams with limited retransmission.

    Science.gov (United States)

    Taubman, David; Thie, Johnson

    2005-08-01

    This paper shows how the priority encoding transmission (PET) framework may be leveraged to exploit both unequal error protection and limited retransmission for RD-optimized delivery of streaming media. Previous work on scalable media protection with PET has largely ignored the possibility of retransmission. Conversely, the PET framework has not been harnessed by the substantial body of previous work on RD optimized hybrid forward error correction/automatic repeat request schemes. We limit our attention to sources which can be modeled as independently compressed frames (e.g., video frames), where each element in the scalable representation of each frame can be transmitted in one or both of two transmission slots. An optimization algorithm determines the level of protection which should be assigned to each element in each slot, subject to transmission bandwidth constraints. To balance the protection assigned to elements which are being transmitted for the first time with those which are being retransmitted, the proposed algorithm formulates a collection of hypotheses concerning its own behavior in future transmission slots. We show how the PET framework allows for a decoupled optimization algorithm with only modest complexity. Experimental results obtained with Motion JPEG2000 compressed video demonstrate that substantial performance benefits can be obtained using the proposed framework.
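
The structure of the optimization above, choosing a protection level per element of a scalable stream under a bandwidth budget, can be illustrated with a toy exhaustive search. The sketch assumes a small made-up protection menu (cost and decode probability per level) and encodes the scalability constraint that layer i is useful only if all earlier layers decode; the PET coding, retransmission slots, and hypothesis mechanism of the paper are not reproduced.

```python
import itertools

def best_protection(values, costs, p_decode, budget):
    """Exhaustively find the protection assignment maximizing expected
    decoded value of a scalable stream. values[i] = RD benefit of layer
    i (useful only if layers 0..i all decode); costs[l] and p_decode[l]
    are the byte cost and decode probability of protection level l."""
    n = len(values)
    best = (0.0, None)
    for assign in itertools.product(range(len(costs)), repeat=n):
        if sum(costs[l] for l in assign) > budget:
            continue                      # violates the bandwidth budget
        ev, p_all = 0.0, 1.0
        for i, lvl in enumerate(assign):
            p_all *= p_decode[lvl]        # all layers up to i must decode
            ev += p_all * values[i]
        if ev > best[0]:
            best = (ev, assign)
    return best
```

Note how the optimum spends more protection on the base layer than on the enhancement layer: that is the unequal-error-protection behavior that PET makes efficient at scale.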

  20. Optimizing the Physical, Mechanical and Hygrothermal Performance of Compressed Earth Bricks

    Directory of Open Access Journals (Sweden)

    Esther Obonyo

    2011-03-01

    The paper is based on findings from research that assesses the potential for enhancing the performance of compressed earth bricks. A set of experiments was carried out to assess the potential for enhancing the bricks’ physical, mechanical and hygrothermal performance through the design of an optimal stabilization strategy. Three different types of bricks were fabricated: soil-cement, soil-cement-lime, and soil-cement-fiber. The different types of bricks did not exhibit significant differences in performance when assessed on the basis of porosity, density, water absorption, and compressive strength. However, upon exposure to elevated moisture and temperature conditions, the soil-cement-fiber bricks had the highest residual strength (87%). The soil-cement and soil-cement-lime bricks had residual strength values of 48.19% and 46.20%, respectively. These results suggest that, like any other cement-based material, compressed earth brick properties are affected by hydration-triggered chemical and structural changes occurring in the matrix that would be difficult to isolate using tests that focus on “bulk” changes. The discussion in this paper presents findings from a research effort directed at quantifying the specific changes through an analysis of the microstructure.

  1. DETERMINING OPTIMAL CUBE FOR 3D-DCT BASED VIDEO COMPRESSION FOR DIFFERENT MOTION LEVELS

    Directory of Open Access Journals (Sweden)

    J. Augustin Jacob

    2012-11-01

    This paper proposes a new three-dimensional discrete cosine transform (3D-DCT) based video compression algorithm that selects the optimal cube size based on the motion content of the video sequence. The motion content is determined by finding normalized pixel difference (NPD) values; by categorizing the cubes as “low” or “high” motion, a suitable cube size of either [16×16×8] or [8×8×8] is chosen instead of a fixed cube size. To evaluate the performance of the proposed algorithm, test sequences with different motion levels are chosen. By performing rate vs. distortion analysis, the achievable level of compression and the quality of the reconstructed video sequence are determined and compared against the fixed cube size algorithm. Peak signal to noise ratio (PSNR) is taken to measure the video quality. Experimental results show that varying the cube size with reference to the motion content of the video frames gives better performance in terms of compression ratio and video quality.
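
The motion-adaptive cube selection can be sketched directly. The abstract does not give the exact NPD formula or threshold, so the sketch below assumes a mean absolute inter-frame difference normalized by the 8-bit pixel range and a hypothetical decision threshold.

```python
import numpy as np

def normalized_pixel_difference(frames):
    """Mean absolute difference between consecutive frames, normalized
    to [0, 1] by the 8-bit pixel range. `frames` is a (T, H, W) array.
    One plausible reading of the paper's NPD measure."""
    f = np.asarray(frames, dtype=float)
    return np.mean(np.abs(np.diff(f, axis=0))) / 255.0

def choose_cube(frames, threshold=0.05):
    """Pick the 3D-DCT cube size from the motion level: a large spatial
    block for 'low' motion, a small one for 'high' motion, as in the
    proposed adaptive scheme (threshold value is illustrative)."""
    npd = normalized_pixel_difference(frames)
    return (8, 8, 8) if npd > threshold else (16, 16, 8)
```

Large, nearly static regions then share one big transform block (better energy compaction), while fast-moving regions get small blocks that avoid smearing motion across the temporal axis.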

  2. Integrated modeling for optimized regional transportation with compressed natural gas fuel

    Directory of Open Access Journals (Sweden)

    Hossam A. Gabbar

    2016-03-01

    Transportation represents major energy consumption, where fuel is the primary energy source. Recent developments in vehicle technology have revealed possible economic improvements from using natural gas as a fuel source instead of traditional gasoline. There are several fuel alternatives, such as electricity, which has shown potential for future long-term transportation. However, moving away from the current gasoline-dominated situation carries high costs compared with compressed natural gas vehicles. This paper presents a modeling and simulation methodology to optimize transportation performance based on a quantitative study of the risk-based performance of regional transportation. An emission estimation method is demonstrated and used to optimize transportation strategies based on life cycle costing. Different fuel supply scenarios are synthesized and evaluated, showing the strategic value of natural gas as a fuel supply.

  3. Optimal Chest Compression Rate and Compression to Ventilation Ratio in Delivery Room Resuscitation: Evidence from Newborn Piglets and Neonatal Manikins

    Science.gov (United States)

    Solevåg, Anne Lee; Schmölzer, Georg M.

    2017-01-01

    Cardiopulmonary resuscitation (CPR) duration until return of spontaneous circulation (ROSC) influences survival and neurologic outcomes after delivery room (DR) CPR. High quality chest compressions (CC) improve cerebral and myocardial perfusion. Improved myocardial perfusion increases the likelihood of a faster ROSC. Thus, optimizing CC quality may improve outcomes both by preserving cerebral blood flow during CPR and by reducing the recovery time. CC quality is determined by rate, CC to ventilation (C:V) ratio, and applied force, which are influenced by the CC provider. Thus, provider performance should be taken into account. Neonatal resuscitation guidelines recommend a 3:1 C:V ratio. CCs should be delivered at a rate of 90/min synchronized with ventilations at a rate of 30/min to achieve a total of 120 events/min. Despite a lack of scientific evidence supporting this, the investigation of alternative CC interventions in human neonates is ethically challenging. Also, the infrequent occurrence of extensive CPR measures in the DR makes randomized controlled trials difficult to perform. Thus, many biomechanical aspects of CC have been investigated in animal and manikin models. Despite mathematical and physiological rationales that higher rates and uninterrupted CC improve CPR hemodynamics, studies indicate that provider fatigue is more pronounced when CC are performed continuously compared to when a pause is inserted after every third CC as currently recommended. A higher rate (e.g., 120/min) is also more fatiguing, which affects CC quality. In post-transitional piglets with asphyxia-induced cardiac arrest, there was no benefit of performing continuous CC at a rate of 90/min. Not only the rate but also the duty cycle, i.e., the ratio of CC duration to total cycle time, is a known determinant of CC effectiveness. However, duty cycle cannot be controlled with manual CC. Mechanical/automated CC in neonatal CPR has not been explored, and feedback systems are under-investigated in this

  4. Massive optimal data compression and density estimation for scalable, likelihood-free inference in cosmology

    Science.gov (United States)

    Alsing, Justin; Wandelt, Benjamin; Feeney, Stephen

    2018-03-01

    Many statistical models in cosmology can be simulated forwards but have intractable likelihood functions. Likelihood-free inference methods allow us to perform Bayesian inference from these models using only forward simulations, free from any likelihood assumptions or approximations. Likelihood-free inference generically involves simulating mock data and comparing to the observed data; this comparison in data-space suffers from the curse of dimensionality and requires compression of the data to a small number of summary statistics to be tractable. In this paper we use massive asymptotically-optimal data compression to reduce the dimensionality of the data-space to just one number per parameter, providing a natural and optimal framework for summary statistic choice for likelihood-free inference. Secondly, we present the first cosmological application of Density Estimation Likelihood-Free Inference (DELFI), which learns a parameterized model for the joint distribution of data and parameters, yielding both the parameter posterior and the model evidence. This approach is conceptually simple, requires less tuning than traditional Approximate Bayesian Computation approaches to likelihood-free inference and can give high-fidelity posteriors from orders of magnitude fewer forward simulations. As an additional bonus, it enables parameter inference and Bayesian model comparison simultaneously. We demonstrate Density Estimation Likelihood-Free Inference with massive data compression on an analysis of the joint light-curve analysis supernova data, as a simple validation case study. We show that high-fidelity posterior inference is possible for full-scale cosmological data analyses with as few as ~10^4 simulations, with substantial scope for further improvement, demonstrating the scalability of likelihood-free inference to large and complex cosmological datasets.
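
The score compression that records 1 and 4 rely on has a closed form in the Gaussian case with a parameter-independent covariance: the data vector is compressed to t = (dμ/dθ)ᵀ C⁻¹ (d − μ), one number per parameter, which preserves the Fisher information. The sketch below is a minimal single-parameter illustration; the array shapes and names are our own.

```python
import numpy as np

def score_compress(data, mu, dmu_dtheta, cov):
    """Compress an N-dimensional data vector to one summary per
    parameter via the score of a Gaussian likelihood with
    parameter-independent covariance:
        t = (dmu/dtheta)^T C^{-1} (d - mu).
    `dmu_dtheta` has shape (n_params, N); returns shape (n_params,)."""
    cinv = np.linalg.inv(np.asarray(cov, dtype=float))
    resid = np.asarray(data, dtype=float) - np.asarray(mu, dtype=float)
    return np.atleast_2d(np.asarray(dmu_dtheta, dtype=float)) @ cinv @ resid
```

For a linear mean model μ(θ) = θ·x, dividing the summary by the Fisher information F = xᵀC⁻¹x recovers the maximum-likelihood estimate of θ, which is one way to see that no information about the parameter has been lost.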

  5. Compressed Biogas-Diesel Dual-Fuel Engine Optimization Study for Ultralow Emission

    Directory of Open Access Journals (Sweden)

    Hasan Koten

    2014-06-01

    The aim of this study is to find the optimum operating conditions for a diesel engine fueled with compressed biogas (CBG) and pilot diesel in dual-fuel mode. One-dimensional (1D) and three-dimensional (3D) computational fluid dynamics (CFD) codes and a multiobjective optimization code were employed to investigate the combustion performance and exhaust emissions of CBG-diesel dual-fuel operation in a diesel engine. In this paper, the 1D engine code and the multiobjective optimization code were coupled, and about 15000 cases were evaluated to define the proper boundary conditions. In addition, selected single-fuel (dodecane) and dual-fuel (CBG-diesel) combustion modes were modeled to compare engine performance and exhaust emission characteristics using the CFD code under various operating conditions. In the optimization study, the start of pilot diesel fuel injection, the CBG-diesel flow rate, and the engine speed were optimized, and selected cases were compared using the CFD code. CBG and diesel fuels were defined as leading reactants using user-defined code. The results showed that significantly lower NOx emissions were emitted under dual-fuel operation in all cases compared to single-fuel mode at all engine load conditions.

  6. Component optimization of dairy manure vermicompost, straw, and peat in seedling compressed substrates using simplex-centroid design.

    Science.gov (United States)

    Yang, Longyuan; Cao, Hongliang; Yuan, Qiaoxia; Luoa, Shuai; Liu, Zhigang

    2018-03-01

    Vermicomposting is a promising method for disposing of dairy manure, and using dairy manure vermicompost (DMV) to replace expensive peat is of high value in the application of seedling compressed substrates. In this research, three main components, DMV, straw, and peat, are combined in the compressed substrates, and the effects of the individual components and their optimal ratio on seedling production are significant. To address these issues, a simplex-centroid experimental mixture design is employed, and a cucumber seedling experiment is conducted to evaluate the compressed substrates. Results demonstrated that the mechanical strength and physicochemical properties of compressed substrates for cucumber seedlings can be well satisfied with a suitable mixture ratio of the components. Moreover, the optimal ratio of DMV, straw, and peat could be determined as 0.5917:0.1608:0.2475 when the weight coefficients of the three parameters (shoot length, root dry weight, and aboveground dry weight) were 1:1:1. For different purposes, the optimum ratio can be adjusted slightly on the basis of different weight coefficients. A compressed substrate is a lump with a certain mechanical strength, produced by applying mechanical pressure to the seedling substrate. It will not harm seedlings when they are bedded out, since the compressed substrate and seedling are bedded out together. However, vermicompost and agricultural waste components have not previously been used in compressed substrates for vegetable seedling production. Thus, it is important to understand the effect of the individual components on seedling production, and to determine their optimal ratio.
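
A simplex-centroid design for three mixture components, and the Scheffé canonical polynomial fitted to responses measured at its points, can be sketched as follows. The response values below are synthetic; the design points themselves are the standard simplex-centroid set (pure components, binary midpoints, overall centroid).

```python
import numpy as np

def scheffe_design_matrix(X):
    """Scheffé special-cubic terms for a 3-component mixture:
    x1, x2, x3, x1*x2, x1*x3, x2*x3, x1*x2*x3."""
    x1, x2, x3 = X[:, 0], X[:, 1], X[:, 2]
    return np.column_stack([x1, x2, x3, x1 * x2, x1 * x3, x2 * x3, x1 * x2 * x3])

# simplex-centroid design for three components (e.g., DMV, straw, peat)
design = np.array([
    [1, 0, 0], [0, 1, 0], [0, 0, 1],
    [0.5, 0.5, 0], [0.5, 0, 0.5], [0, 0.5, 0.5],
    [1 / 3, 1 / 3, 1 / 3],
], dtype=float)

def fit_mixture_model(responses):
    """Least-squares fit of the canonical polynomial to responses
    measured at the design points (e.g., a weighted seedling score)."""
    A = scheffe_design_matrix(design)
    coef, *_ = np.linalg.lstsq(A, np.asarray(responses, dtype=float), rcond=None)
    return coef

def predict(coef, mixture):
    """Predicted response at any mixture ratio summing to 1."""
    return scheffe_design_matrix(np.atleast_2d(mixture)) @ coef
```

With the fitted surface in hand, the optimal ratio reported in the paper corresponds to maximizing `predict` over the simplex for a chosen set of weight coefficients.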

  7. Linearly and nonlinearly optimized weighted essentially non-oscillatory methods for compressible turbulence

    Science.gov (United States)

    Taylor, Ellen Meredith

    Weighted essentially non-oscillatory (WENO) methods have been developed to simultaneously provide robust shock-capturing in compressible fluid flow and avoid excessive damping of fine-scale flow features such as turbulence. This is accomplished by constructing multiple candidate numerical stencils that adaptively combine so as to provide high order of accuracy and high bandwidth-resolving efficiency in continuous flow regions while averting instability-provoking interpolation across discontinuities. Under certain conditions in compressible turbulence, however, numerical dissipation remains unacceptably high even after optimization of the linear optimal stencil combination that dominates in smooth regions. The remaining nonlinear error arises from two primary sources: (i) the smoothness measurement that governs the application of adaptation away from the optimal stencil and (ii) the numerical properties of individual candidate stencils that govern numerical accuracy when adaptation engages. In this work, both of these sources are investigated, and corrective modifications to the WENO methodology are proposed and evaluated. Excessive nonlinear error due to the first source is alleviated through two separately considered procedures appended to the standard smoothness measurement technique that are designated the "relative smoothness limiter" and the "relative total variation limiter." In theory, appropriate values of their associated parameters should be insensitive to flow configuration, thereby sidestepping the prospect of costly parameter tuning; and this expectation of broad effectiveness is assessed in direct numerical simulations (DNS) of one-dimensional inviscid test problems, three-dimensional compressible isotropic turbulence of varying Reynolds and turbulent Mach numbers, and shock/isotropic-turbulence interaction (SITI). In the process, tools for efficiently comparing WENO adaptation behavior in smooth versus shock-containing regions are developed. The
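
The adaptation mechanism discussed above can be made concrete with the classical Jiang-Shu WENO5 weights. The smoothness indicators and linear weights below are the standard published ones; the `rs_limit` branch is only a sketch of the dissertation's "relative smoothness limiter" idea (revert to the linear optimal weights when all stencils are comparably smooth), with the threshold semantics assumed.

```python
import numpy as np

def weno5_weights(f, eps=1e-6, rs_limit=None):
    """Jiang-Shu WENO5 nonlinear weights for the 5-point stencil
    f = [f(i-2), ..., f(i+2)].  With `rs_limit` set, a relative
    smoothness limiter returns the linear optimal weights whenever
    max(beta) <= rs_limit * min(beta), i.e., no stencil stands out."""
    b = np.array([
        13 / 12 * (f[0] - 2 * f[1] + f[2]) ** 2 + 1 / 4 * (f[0] - 4 * f[1] + 3 * f[2]) ** 2,
        13 / 12 * (f[1] - 2 * f[2] + f[3]) ** 2 + 1 / 4 * (f[1] - f[3]) ** 2,
        13 / 12 * (f[2] - 2 * f[3] + f[4]) ** 2 + 1 / 4 * (3 * f[2] - 4 * f[3] + f[4]) ** 2,
    ])
    d = np.array([0.1, 0.6, 0.3])              # linear (optimal) weights
    if rs_limit is not None and b.max() <= rs_limit * (b.min() + eps):
        return d                               # limiter: keep optimal stencil
    a = d / (eps + b) ** 2                     # standard nonlinear adaptation
    return a / a.sum()
```

In smooth data the indicators are nearly equal and the weights sit at the optimal values; near a discontinuity the weight collapses onto the smooth stencil, which is exactly the adaptation whose residual dissipation the dissertation seeks to limit.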

  8. Vertical discretizations for compressible Euler equation atmospheric models giving optimal representation of normal modes

    International Nuclear Information System (INIS)

    Thuburn, J.; Woollings, T.J.

    2005-01-01

    Accurate representation of different kinds of wave motion is essential for numerical models of the atmosphere, but is sensitive to details of the discretization. In this paper, numerical dispersion relations are computed for different vertical discretizations of the compressible Euler equations and compared with the analytical dispersion relation. A height coordinate, an isentropic coordinate, and a terrain-following mass-based coordinate are considered, and, for each of these, different choices of prognostic variables and grid staggerings are considered. The discretizations are categorized according to whether their dispersion relations are optimal, are near optimal, have a single zero-frequency computational mode, or are problematic in other ways. Some general understanding of the factors that affect the numerical dispersion properties is obtained: heuristic arguments concerning the normal mode structures, and the amount of averaging and coarse differencing in the finite difference scheme, are shown to be useful guides to which configurations will be optimal; the number of degrees of freedom in the discretization is shown to be an accurate guide to the existence of computational modes; there is only minor sensitivity to whether the equations for thermodynamic variables are discretized in advective form or flux form; and an accurate representation of acoustic modes is found to be a prerequisite for accurate representation of inertia-gravity modes, which, in turn, is found to be a prerequisite for accurate representation of Rossby modes
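
The zero-frequency computational mode mentioned above is easy to exhibit in the simplest setting. The sketch below compares the semi-discrete dispersion relations of centered differences on unstaggered and staggered grids for a 1D acoustic system; it is a toy illustration of the paper's analysis, not one of its actual vertical discretizations.

```python
import numpy as np

def dispersion_unstaggered(k, dx, c=1.0):
    """Numerical frequency of centered differences on an unstaggered
    grid: omega = c*sin(k*dx)/dx.  Vanishes at the 2*dx wave, giving a
    spurious zero-frequency computational mode."""
    return c * np.sin(k * dx) / dx

def dispersion_staggered(k, dx, c=1.0):
    """Staggered grid: omega = 2*c*sin(k*dx/2)/dx, monotone in k up to
    the grid scale, so no stationary computational mode appears."""
    return 2 * c * np.sin(k * dx / 2) / dx
```

Both formulas approach the exact relation omega = c*k for well-resolved waves; they differ precisely at the grid-scale wavenumber, which is where the categorization of discretizations in the paper is decided.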

  9. Modeling and Optimization of Compressive Strength of Hollow Sandcrete Block with Rice Husk Ash Admixture

    Directory of Open Access Journals (Sweden)

    2016-11-01

    The paper presents the report of an investigation into the model development and optimization of the compressive strength of 55/45 to 70/30 cement/Rice Husk Ash (RHA) in hollow sandcrete block. The low cost and local availability of RHA, a pozzolanic material, call for exploitation. The study applies Scheffe's optimization approach to obtain a mathematical model of the form f(x1, x2, x3, x4), where the xi are proportions of the concrete components, viz: cement, RHA, sand and water. Scheffe's experimental design techniques are followed to mould various hollow block samples measuring 450mm x 225mm x 150mm, which are tested for 28-day strength. The task involved experimentation and design, applying the second order polynomial characterization process of the simplex lattice method. The model adequacy is checked using the control factors. Finally, software is prepared to handle the design computation process, taking the desired property of the mix and generating the optimal mix ratios. Conversely, any mix ratio can be specified and the attainable strength obtained.

  10. Film Cooling Optimization Using Numerical Computation of the Compressible Viscous Flow Equations and Simplex Algorithm

    Directory of Open Access Journals (Sweden)

    Ahmed M. Elsayed

    2013-01-01

    Film cooling is vital to gas turbine blades to protect them from high temperatures and hence high thermal stresses. In the current work, optimization of film cooling parameters on a flat plate is investigated numerically. The effects of film cooling parameters such as inlet velocity direction, lateral and forward diffusion angles, blowing ratio, and streamwise angle on the cooling effectiveness are studied, and optimum cooling parameters are selected. The numerical simulation of the coolant flow through the flat-plate hole system is carried out using the CFDRC package coupled with the simplex optimization algorithm to maximize overall film cooling effectiveness. An unstructured finite volume technique is used to solve the steady, three-dimensional, compressible Navier-Stokes equations. The results are compared with published numerical and experimental data for a cylindrical round simple hole and show good agreement. The results also indicate that the average overall film cooling effectiveness is enhanced by decreasing the streamwise angle at high blowing ratio and by increasing the lateral and forward diffusion angles. The optimum geometry of the cooling hole on a flat plate is determined, and numerical simulations of film cooling on an actual turbine blade are performed using the flat-plate optimal hole geometry.

  11. Thermal and economical optimization of air conditioning units with vapor compression refrigeration system

    Energy Technology Data Exchange (ETDEWEB)

    Sanaye, S.; Malekmohammadi, H.R. [Iran University of Science and Technology, Tehran (Iran). Dept. of Mechanical Engineering

    2004-09-01

    A new method for the thermal and economical optimum design of air conditioning units with a vapor compression refrigeration system is presented. Such a system includes a compressor, condenser, evaporator, and centrifugal and axial fans. Evaporator and condenser temperatures, their heating surface areas (frontal surface area and number of tubes), centrifugal and axial fan powers, and compressor power are among the design variables. The data provided by manufacturers for the fans (volume flow rate versus pressure drop) and for compressor power (as a function of evaporator and condenser temperatures) were used to choose these components directly from the data available to consumers. To study the performance of the system under various conditions and to implement the optimization procedure, a simulation program including all thermal and geometrical parameters was developed. The objective function for optimization was the total cost per unit cooling load of the system, including the capital investment for components as well as the required electricity cost. To find the system design parameters, this objective function was minimized by the method of Lagrange multipliers. The effects of changing the cooling load on the optimal design parameters were studied. (author)

  12. Thermal System Analysis and Optimization of Large-Scale Compressed Air Energy Storage (CAES)

    Directory of Open Access Journals (Sweden)

    Zhongguang Fu

    2015-08-01

    As an important solution to issues regarding peak load and renewable energy resources on grids, large-scale compressed air energy storage (CAES) power generation technology has recently become a popular research topic in the area of large-scale industrial energy storage. At present, the combination of high-expansion-ratio turbines with advanced gas turbine technology is an important breakthrough in energy storage technology. In this study, a new gas turbine power generation system is coupled with current CAES technology, and the thermodynamic cycle is optimized by calculating the parameters of the thermodynamic system. Results show that the thermal efficiency of the new system increases by at least 5% over that of the existing system.

  13. No Cost – Low Cost Compressed Air System Optimization in Industry

    Science.gov (United States)

    Dharma, A.; Budiarsa, N.; Watiniasih, N.; Antara, N. G.

    2018-04-01

    Energy conservation is a systematic, integrated effort to preserve energy sources and improve the efficiency of energy utilization without reducing the level of energy service. Energy conservation efforts are applied at all stages of utilization, from the energy resource to final use, by employing efficient technology and cultivating an energy-efficient lifestyle. The most common approach is to promote energy efficiency in industry at the point of end use and to overcome barriers to achieving such efficiency through system energy optimization programs. Experience shows that energy saving efforts usually focus only on replacing equipment rather than on overall system improvement. In this research, a framework for sustainable energy reduction in companies that have or have not implemented an energy management system (EnMS) is developed: a systematic technical approach to accurately evaluating a compressed-air system and its optimization potential through observation, measurement and verification of environmental conditions and processes. The physical quantities of the system, such as air flow, pressure and electrical power, measured at given times, are processed using comparative analysis methods. This system-level approach offers greater energy saving potential than a component approach, at no cost to the lowest cost (no cost - low cost). The evaluation of energy utilization and energy saving opportunities yields recommendations for increasing efficiency in industry, reducing CO2 emissions and improving environmental quality.

  14. Optimization of compressive strength in admixture-reinforced cement-based grouts

    Directory of Open Access Journals (Sweden)

    Sahin Zaimoglu, A.

    2007-12-01

    The Taguchi method was used in this study to optimize the unconfined (7-, 14- and 28-day) compressive strength of cement-based grouts with bentonite, fly ash and silica fume admixtures. The experiments were designed using an L16 orthogonal array in which the three factors considered were bentonite (0%, 0.5%, 1.0% and 3%), fly ash (10%, 20%, 30% and 40%) and silica fume (0%, 5%, 10% and 20%) content, as percentages by weight of solids. The experimental results, analyzed by ANOVA and the Taguchi method, showed that fly ash and silica fume content play a significant role in unconfined compressive strength. The optimum conditions were found to be 0% bentonite, 10% fly ash, 20% silica fume and 28 days of curing time, under which the maximum unconfined compressive strength reached was 17.1 MPa.
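The Taguchi analysis behind such a design typically converts replicated strength measurements into a larger-the-better signal-to-noise ratio and picks, for each factor, the level with the highest mean S/N. A minimal sketch with hypothetical runs and strengths (not the study's L16 data):

```python
import math

def sn_larger_better(ys):
    """Taguchi larger-the-better signal-to-noise ratio in dB:
    S/N = -10 * log10( (1/n) * sum 1/y_i^2 )."""
    return -10.0 * math.log10(sum(1.0 / y ** 2 for y in ys) / len(ys))

# Hypothetical replicated strengths (MPa) for four runs varying two
# factors, (fly-ash %, silica-fume %) -- illustrative values only.
runs = [((10, 0), [9.8, 10.1]), ((10, 20), [16.9, 17.3]),
        ((40, 0), [8.2, 8.0]), ((40, 20), [12.5, 12.8])]

def best_level(factor):
    """Pick the level of a factor with the highest mean S/N over its runs."""
    sn_by_level = {}
    for setting, ys in runs:
        sn_by_level.setdefault(setting[factor], []).append(sn_larger_better(ys))
    return max(sn_by_level,
               key=lambda lv: sum(sn_by_level[lv]) / len(sn_by_level[lv]))

print(best_level(0), best_level(1))  # 10 20
```

With these toy numbers the analysis favours 10% fly ash and 20% silica fume, mirroring the direction of the study's reported optimum.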

  15. Modelling and optimization of seawater desalination process using mechanical vapour compression

    Directory of Open Access Journals (Sweden)

    V.P. Kravchenko

    2016-09-01

    Under global climate change, the shortage of fresh water is becoming an urgent problem for a growing number of countries. One of the most promising technologies for desalting sea water is mechanical vapour compression (MVC), which provides low energy consumption through the heat pump principle. Aim: The aim of this research is to identify reserves for increasing the efficiency of desalination systems based on mechanical vapour compression by optimizing the scheme and parameters of installations with MVC. Materials and Methods: A new type of desalination installation is proposed whose main element is a latent-heat exchanger. Sea water, after preliminary heating in heat exchangers, enters the evaporator-condenser, where it receives most of its heat from the condensing steam. Part of the sea water evaporates, while the strong salt solution (brine) leaves the evaporator and, after cooling, is discharged back to the sea. The steam formed is compressed by the compressor and enters the condenser. An essential feature of this scheme is that condensation occurs at a higher temperature than evaporation; thanks to this, the heat released during condensation is used to evaporate sea water. In this way, this class of desalination installations implements the heat pump principle. Results: To achieve this goal, the following tasks were solved: the mathematical model of installations with MVC was modified and supplemented; the scheme of heat exchanger connections was modified; and the influence of the design parameters of the desalination installation on the cost of equipment and electric power was investigated. A detailed analysis of the main installation schemes and the mathematical model made it possible to define ways of decreasing energy consumption and the possible gains. The influence of two key parameters, the specific power of the compressor and the specific surface area of the evaporator-condenser, is also analyzed.

  16. Optimization of current waveform tailoring for magnetically driven isentropic compression experiments

    Energy Technology Data Exchange (ETDEWEB)

    Waisman, E. M.; Reisman, D. B.; Stoltzfus, B. S.; Stygar, W. A.; Cuneo, M. E.; Haill, T. A.; Davis, J.-P.; Brown, J. L.; Seagle, C. T. [Sandia National Laboratories, Albuquerque, New Mexico 87185 (United States); Spielman, R. B. [Idaho State University, Pocatello, Idaho 83201 (United States)

    2016-06-15

    The Thor pulsed power generator is being developed at Sandia National Laboratories. The design consists of up to 288 decoupled and transit time isolated capacitor-switch units, called “bricks,” that can be individually triggered to achieve a high degree of pulse tailoring for magnetically driven isentropic compression experiments (ICE) [D. B. Reisman et al., Phys. Rev. Spec. Top.–Accel. Beams 18, 090401 (2015)]. The connecting transmission lines are impedance matched to the bricks, allowing the capacitor energy to be efficiently delivered to an ICE strip-line load with peak pressures of over 100 GPa. Thor will drive experiments to explore equation of state, material strength, and phase transition properties of a wide variety of materials. We present an optimization process for producing tailored current pulses, a requirement for many material studies, on the Thor generator. This technique, which is unique to the novel “current-adder” architecture used by Thor, entirely avoids the iterative use of complex circuit models to converge to the desired electrical pulse. We begin with magnetohydrodynamic simulations for a given material to determine its time dependent pressure and thus the desired strip-line load current and voltage. Because the bricks are connected to a central power flow section through transit-time isolated coaxial cables of constant impedance, the brick forward-going pulses are independent of each other. We observe that the desired equivalent forward-going current driving the pulse must be equal to the sum of the individual brick forward-going currents. We find a set of optimal brick delay times by requiring that the L2 norm of the difference between the brick-sum current and the desired forward-going current be a minimum. We describe the optimization procedure for the Thor design and show results for various materials of interest.
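The delay-finding step described above, minimizing the L2 norm of the difference between the summed brick currents and the desired forward-going current, can be sketched as a coordinate descent over integer trigger delays. The brick waveform and target below are made up for illustration; the actual Thor procedure is more involved:

```python
def l2(a, b):
    """Euclidean (L2) distance between two sampled waveforms."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def shifted(pulse, delay, n):
    """One brick's forward-going current, delayed by `delay` samples."""
    return [pulse[t - delay] if 0 <= t - delay < len(pulse) else 0.0
            for t in range(n)]

def brick_sum(pulse, delays, n):
    """Sum of the independent brick forward-going currents."""
    out = [0.0] * n
    for d in delays:
        for t, v in enumerate(shifted(pulse, d, n)):
            out[t] += v
    return out

def optimize_delays(pulse, n_bricks, target, max_delay):
    """Coordinate descent on integer trigger delays, minimizing the L2
    norm of (brick-sum current - desired forward-going current)."""
    n = len(target)
    delays = [0] * n_bricks
    improved = True
    while improved:
        improved = False
        for i in range(n_bricks):
            best_d = delays[i]
            best_err = l2(brick_sum(pulse, delays, n), target)
            for d in range(max_delay + 1):
                trial = delays[:i] + [d] + delays[i + 1:]
                err = l2(brick_sum(pulse, trial, n), target)
                if err < best_err - 1e-12:
                    best_d, best_err = d, err
                    improved = True
            delays[i] = best_d
    return delays

# Made-up triangular brick pulse; the target is built from known delays,
# so the optimizer should recover an equivalent delay set.
pulse = [0.0, 1.0, 2.0, 1.0, 0.0]
target = brick_sum(pulse, [0, 3], 10)
delays = optimize_delays(pulse, 2, target, 5)
```

Because the brick pulses superpose linearly, the search decomposes per brick, which is what makes the simple coordinate descent work here.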

  17. Accelerated barrier optimization compressed sensing (ABOCS) for CT reconstruction with improved convergence

    International Nuclear Information System (INIS)

    Niu, Tianye; Fruhauf, Quentin; Petrongolo, Michael; Zhu, Lei; Ye, Xiaojing

    2014-01-01

    Recently, we proposed a new algorithm of accelerated barrier optimization compressed sensing (ABOCS) for iterative CT reconstruction. The previous implementation of ABOCS uses gradient projection (GP) with a Barzilai–Borwein (BB) step-size selection scheme (GP-BB) to search for the optimal solution. The algorithm does not converge stably due to its non-monotonic behavior. In this paper, we further improve the convergence of ABOCS using the unknown-parameter Nesterov (UPN) method and investigate the ABOCS reconstruction performance on clinical patient data. Comparison studies are carried out on reconstructions of computer simulation, a physical phantom and a head-and-neck patient. In all of these studies, the ABOCS results using UPN show more stable and faster convergence than those of the GP-BB method and a state-of-the-art Bregman-type method. As shown in the simulation study of the Shepp–Logan phantom, UPN achieves the same image quality as those of GP-BB and the Bregman-type methods, but reduces the iteration numbers by up to 50% and 90%, respectively. In the Catphan©600 phantom study, a high-quality image with relative reconstruction error (RRE) less than 3% compared to the full-view result is obtained using UPN with 17% projections (60 views). In the conventional filtered-backprojection reconstruction, the corresponding RRE is more than 15% on the same projection data. The superior performance of ABOCS with the UPN implementation is further demonstrated on the head-and-neck patient. Using 25% projections (91 views), the proposed method reduces the RRE from 21% as in the filtered backprojection (FBP) results to 7.3%. In conclusion, we propose UPN for ABOCS implementation. As compared to GP-BB and the Bregman-type methods, the new method significantly improves the convergence with higher stability and fewer iterations. (paper)
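The acceleration idea underlying such methods can be illustrated with a generic Nesterov scheme for a smooth, strongly convex objective. This is a textbook constant-momentum variant on a toy quadratic, not the paper's unknown-parameter Nesterov (UPN) method:

```python
def nesterov_strongly_convex(grad, x0, L, mu, iters):
    """Nesterov's accelerated gradient method for an L-smooth,
    mu-strongly convex objective: a gradient step of size 1/L plus a
    constant momentum beta = (sqrt(k) - 1) / (sqrt(k) + 1), k = L/mu."""
    kappa = L / mu
    beta = (kappa ** 0.5 - 1.0) / (kappa ** 0.5 + 1.0)
    x_prev = list(x0)
    x = list(x0)
    for _ in range(iters):
        # extrapolated (momentum) point, then gradient step from it
        y = [xi + beta * (xi - xp) for xi, xp in zip(x, x_prev)]
        g = grad(y)
        x_prev, x = x, [yi - gi / L for yi, gi in zip(y, g)]
    return x

# Minimize f(x) = (x0 - 3)^2 + 4*(x1 + 1)^2, for which L = 8, mu = 2.
grad = lambda x: [2.0 * (x[0] - 3.0), 8.0 * (x[1] + 1.0)]
x = nesterov_strongly_convex(grad, [0.0, 0.0], 8.0, 2.0, 200)
print(x)  # close to [3.0, -1.0]
```

The momentum term is what buys the faster, though non-monotonic, convergence that the abstract contrasts with plain gradient projection.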

  18. Image-Data Compression Using Edge-Optimizing Algorithm for WFA Inference.

    Science.gov (United States)

    Culik, Karel II; Kari, Jarkko

    1994-01-01

    Presents an inference algorithm that produces a weighted finite automaton (WFA) representing, in particular, the grayness functions of graytone images. The new inference algorithm produces a WFA with a relatively small number of edges. Image-data compression results, alone and in combination with wavelets, are discussed.

  19. Optimization of operating conditions in the early direct injection premixed charge compression ignition regime

    NARCIS (Netherlands)

    Boot, M.D.; Luijten, C.C.M.; Rijk, E.P.; Albrecht, B.A.; Baert, R.S.G.

    2009-01-01

    Early Direct Injection Premixed Charge Compression Ignition (EDI PCCI) is a widely researched combustion concept, which promises soot and CO2 emission levels of a spark-ignition (SI) and compression-ignition (CI) engine, respectively. Application of this concept to a conventional CI engine using a

  20. Telemedicine + OCT: toward design of optimized algorithms for high-quality compressed images

    Science.gov (United States)

    Mousavi, Mahta; Lurie, Kristen; Land, Julian; Javidi, Tara; Ellerbee, Audrey K.

    2014-03-01

    Telemedicine is an emerging technology that aims to provide clinical healthcare at a distance. Among its goals, the transfer of diagnostic images over telecommunication channels has been quite appealing to the medical community. When viewed as an adjunct to biomedical device hardware, one highly important consideration aside from the transfer rate and speed is the accuracy of the reconstructed image at the receiver end. Although optical coherence tomography (OCT) is an established imaging technique that is ripe for telemedicine, the effects of OCT data compression, which may be necessary on certain telemedicine platforms, have not received much attention in the literature. We investigate the performance and efficiency of several lossless and lossy compression techniques for OCT data and characterize their effectiveness with respect to achievable compression ratio, compression rate and preservation of image quality. We examine the effects of compression in the interferogram vs. A-scan domain as assessed with various objective and subjective metrics.
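The two headline metrics in such a study, achievable compression ratio and an objective quality measure such as PSNR, are straightforward to compute. The sketch below uses zlib as a stand-in lossless codec and toy data; it is not the paper's pipeline:

```python
import math
import zlib

def compression_ratio(data: bytes, level: int = 9) -> float:
    """Original-to-compressed size ratio for a lossless codec
    (zlib here as a stand-in for the codecs compared in the paper)."""
    return len(data) / len(zlib.compress(data, level))

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio (dB), a common objective metric for
    judging lossy reconstructions against the original."""
    mse = (sum((a - b) ** 2 for a, b in zip(original, reconstructed))
           / len(original))
    return float("inf") if mse == 0 else 10.0 * math.log10(peak * peak / mse)

# Toy stand-in for an A-scan: a repeating ramp compresses well losslessly.
data = bytes(i % 256 for i in range(4096))
print(compression_ratio(data) > 5.0)     # True
print(psnr([10, 20, 30], [10, 20, 30]))  # inf (lossless round trip)
```

Lossless schemes leave PSNR infinite but cap the ratio; lossy schemes trade the two, which is the tension the abstract's interferogram-vs-A-scan comparison probes.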

  1. Thermo-Economic Comparison and Parametric Optimizations among Two Compressed Air Energy Storage System Based on Kalina Cycle and ORC

    Directory of Open Access Journals (Sweden)

    Ruixiong Li

    2016-12-01

    The compressed air energy storage (CAES) system, considered as one method for peak shaving and load levelling in the electricity system, has excellent energy storage and utilization characteristics. However, the waste heat carried by the compressed air during the charge stage and by the exhaust gas during the discharge stage greatly restricts the efficient operation of the conventional CAES system. The Kalina cycle (KC) and organic Rankine cycle (ORC) have proven to be two worthwhile technologies for recovering residual heat in energy systems. To capture and reuse the waste heat from the CAES system, two systems (the CAES system combined with KC and with ORC, respectively) are proposed in this paper. A sensitivity analysis shows the effect of the compression ratio and the exhaust temperature on system performance: the KC-CAES system operates more efficiently than the ORC-CAES system at the same exhaust gas temperature, and a larger compression ratio leads to higher efficiency for the KC-CAES system than for the ORC-CAES system at constant exhaust gas temperature. In addition, an evolutionary multi-objective algorithm is applied to the thermodynamic and economic performances to find the optimal parameters of the two systems. The optimum results indicate that solutions with exergy efficiencies of around 59.74% and 53.56% are promising for practical designs of the KC-CAES and ORC-CAES systems, respectively.

  2. Energetic, Exergetic and Exergoeconomic Investigation and Optimization of an Auxiliary Cooling System (ACS) Equipped with a Compression Refrigerating System (CRS)

    Directory of Open Access Journals (Sweden)

    Omid Karimi Sadaghiyani

    2017-09-01

    The Heller main cooling tower, an air-cooled heat exchanger, is used in combined cycle power plants (CCPP) to reduce the condenser temperature. In extreme summer heat, the efficiency of the cooling tower drops, lessening the performance of the Steam Turbine Generation (STG) unit of the CCPP. Thus, an auxiliary cooling system (ACS) equipped with a compression refrigerating system (CRS) is linked with the Heller main cooling tower to improve plant performance: it increases the power generated by the STG unit by decreasing the temperature of the water returning from the cooling tower. In the first step, the ACS (as a heat exchanger) and the CRS were designed via ASPEN HTFS and EES code, respectively. To validate the results, the two systems were built and their experimentally obtained data compared with the ASPEN and EES results; good agreement was found. After that, exergetic and exergoeconomic analyses of the designed systems were carried out. Finally, the CRS was optimized via a Genetic Algorithm (GA). Multi-objective optimization increased the exergy efficiency (ε) from 14.23% to 36.12% and decreased the total cost rate (Ċ_System) from 378.2 $/h to 308.2 $/h.

  3. Optimization of the segmented method for optical compression and multiplexing system

    Science.gov (United States)

    Al Falou, Ayman

    2002-05-01

    Because of the constantly increasing demand for image exchange, and despite the ever increasing bandwidth of networks, compression and multiplexing of images are becoming inseparable from their generation and display. For high-resolution real-time motion pictures, electronic compression requires complex and time-consuming processing units. By contrast, owing to its inherently two-dimensional character, coherent optics is well suited to performing such processes, which are basically two-dimensional data handling in the Fourier domain. Additionally, the main limiting factor, the maximum frame rate, is vanishing thanks to recent improvements in spatial light modulator technology. The purpose of this communication is to benefit from recent optical correlation algorithms: the segmented filtering used to store multiple references in an optical filter of given space-bandwidth product can be applied to networks to compress and multiplex images in a given bandwidth channel.

  4. Parameters Determination of Yoshida Uemori Model Through Optimization Process of Cyclic Tension-Compression Test and V-Bending Springback

    Directory of Open Access Journals (Sweden)

    Serkan Toros

    In recent years, studies on enhancing the prediction capability of sheet metal forming simulations have increased remarkably. Among the models used in finite element simulations, the yield criteria and hardening models are of great importance for predicting formability and springback. The required model parameters are determined using several test results, i.e. tensile, compression and biaxial stretching (bulge) tests and cyclic tension-compression tests. In this study, the Yoshida-Uemori combined isotropic and kinematic hardening model is used to assess springback prediction performance. The model parameters are determined by optimizing finite element simulations of the cyclic test. In addition, the model parameters are also evaluated by optimizing both the cyclic and V-die bending simulations together. The springback angle predictions with model parameters obtained from the optimization of both cyclic and V-die bending simulations are found to reproduce the experimental results better than those obtained from the cyclic tests alone, although the cyclic-only simulation results are still close to the experimental results.

  5. Compressed collagen constructs with optimized mechanical properties and cell interactions for tissue engineering applications

    DEFF Research Database (Denmark)

    Ajalloueian, Fatemeh; Nikogeorgos, Nikolaos; Ajalloueian, Ali

    2018-01-01

    In this study, we introduce a simple, fast and reliable add-in to the technique of plastic compression (PC) to obtain collagen sheets with decreased fibrillar densities, offering improved cell interactions and mechanical properties. Collagen hydrogels with different initial concentrations…

  6. Compressive failure modes and parameter optimization of the trabecular structure of biomimetic fully integrated honeycomb plates.

    Science.gov (United States)

    Chen, Jinxiang; Tuo, Wanyong; Zhang, Xiaoming; He, Chenglin; Xie, Juan; Liu, Chang

    2016-12-01

    To develop lightweight biomimetic composite structures, the compressive failure and mechanical properties of fully integrated honeycomb plates were investigated experimentally and through the finite element method. The results indicated that fracturing of the fully integrated honeycomb plates primarily occurred in the core layer, including the sealing edge structure. The morphological failures can be classified into two types, dislocations and compactions, caused primarily by stress concentrations at the interfaces between the core layer and the upper and lower laminations, and secondarily by the disordered short-fiber distribution in the material. Although the fully integrated honeycomb plates manufactured in this experiment were imperfect, their mass-specific compressive strength was superior to that of similar biomimetic samples. The proposed bio-inspired structure therefore possesses good overall mechanical properties, and a range of parameters, such as the diameter of the transition arc, was defined for enhancing the design of fully integrated honeycomb plates and improving their compressive mechanical properties.

  7. Experimental optimization of a direct injection homogeneous charge compression ignition gasoline engine using split injections with fully automated microgenetic algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Canakci, M. [Kocaeli Univ., Izmit (Turkey); Reitz, R.D. [Wisconsin Univ., Dept. of Mechanical Engineering, Madison, WI (United States)

    2003-03-01

    Homogeneous charge compression ignition (HCCI) is receiving attention as a new low-emission engine concept, but little is known about the optimal operating conditions for this engine operation mode. Combustion under homogeneous, low-equivalence-ratio conditions results in modest-temperature combustion products containing very low concentrations of NOx and particulate matter (PM), while providing high thermal efficiency. However, this combustion mode can produce higher HC and CO emissions than conventional engines. An electronically controlled Caterpillar single-cylinder oil test engine (SCOTE), originally designed for heavy-duty diesel applications, was converted to an HCCI direct injection (DI) gasoline engine. The engine features an electronically controlled low-pressure direct injection gasoline (DI-G) injector with a 60 deg spray angle that is capable of multiple injections. The use of double injection was explored for emission control, and the engine was optimized using fully automated experiments and a microgenetic algorithm optimization code. The variables changed during the optimization include the intake air temperature, the start-of-injection timing and the split injection parameters (per cent mass of fuel in each injection, dwell between the pulses). The engine performance and emissions were determined at 700 r/min with a constant fuel flowrate at 10 MPa fuel injection pressure. The results show that significant emissions reductions are possible with the use of optimal injection strategies. (Author)
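A micro-genetic algorithm of the kind used above keeps a tiny population, relies on elitism and crossover rather than mutation, and restarts around the elite whenever the population converges. A minimal sketch on a toy two-variable objective (the engine optimization itself evaluated hardware experiments, not a formula; population size, generation count and the merit function here are all illustrative):

```python
import random

def micro_ga(fitness, bounds, pop_size=5, generations=40, seed=1):
    """Minimal micro-GA sketch: tiny population, elitism and uniform
    crossover only (no mutation), with a restart of fresh random
    individuals whenever the population converges on the elite."""
    rng = random.Random(seed)
    rand_ind = lambda: [rng.uniform(lo, hi) for lo, hi in bounds]
    pop = [rand_ind() for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) > fitness(best):
            best = pop[0][:]
        # restart once the micro-population has collapsed onto the elite
        spread = max(abs(a - b) for ind in pop[1:] for a, b in zip(ind, pop[0]))
        if spread < 1e-3:
            pop = [best[:]] + [rand_ind() for _ in range(pop_size - 1)]
            continue
        # keep the elite; fill the rest by uniform crossover with mates
        children = [pop[0][:]]
        while len(children) < pop_size:
            mate = max(rng.sample(pop, 2), key=fitness)
            children.append([a if rng.random() < 0.5 else b
                             for a, b in zip(pop[0], mate)])
        pop = children
    return best

# Toy merit function standing in for the (negated) emissions penalty;
# its maximum sits at (1, -2).
merit = lambda p: -((p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2)
best = micro_ga(merit, [(-5.0, 5.0), (-5.0, 5.0)])
```

The restart mechanism is what lets a five-member population explore a multi-dimensional space without a mutation operator.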

  8. Exergoeconomic optimization of an ammonia–water hybrid absorption–compression heat pump for heat supply in a spray-drying facility

    DEFF Research Database (Denmark)

    Jensen, Jonas Kjær; Markussen, Wiebke Brix; Reinholdt, Lars

    2015-01-01

    Spray-drying facilities are among the most energy intensive industrial processes. Using a heat pump to recover waste heat and replace gas combustion has the potential to attain both economic and emissions savings. In the case examined, a drying gas of ambient air is heated to 200 C, yielding a heat load of 6.1 MW. The exhaust air from the drying process is 80 C. The implementation of an ammonia–water hybrid absorption–compression heat pump to partly cover the heat load is investigated. A thermodynamic analysis is applied to determine optimal circulation ratios for a number of ammonia mass fractions and heat pump loads. An exergoeconomic optimization is applied to minimize the lifetime cost of the system. Technological limitations are imposed to constrain the solution to commercial components. The best possible implementation is identified in terms of heat load, ammonia mass fraction…

  9. Solidification/stabilization of ASR fly ash using Thiomer material: Optimization of compressive strength and heavy metals leaching.

    Science.gov (United States)

    Baek, Jin Woong; Choi, Angelo Earvin Sy; Park, Hung Suck

    2017-12-01

    Optimization studies of a novel and eco-friendly construction material, Thiomer, were conducted for the solidification/stabilization of automobile shredded residue (ASR) fly ash. A D-optimal mixture design was used to evaluate and optimize maximum compressive strength and heavy metals leaching by varying Thiomer (20-40 wt%), ASR fly ash (30-50 wt%) and sand (20-40 wt%). Analysis of variance was utilized to determine the level of significance of each process parameter and their interactions. The microstructure of the solidified materials, examined by field emission scanning electron microscopy and energy dispersive X-ray spectroscopy, confirmed that Thiomer successfully solidified the ASR fly ash, showing reduced pores and gaps in comparison with untreated ASR fly ash. X-ray diffraction showed that the material enclosing the ASR fly ash primarily contained sulfur-associated crystalline complexes. Results indicated that the optimal conditions of 30 wt% Thiomer, 30 wt% ASR fly ash and 40 wt% sand reached a compressive strength of 54.9 MPa. Under the optimum conditions, 0.0078 mg/L Pb, 0.0260 mg/L Cr, 0.0007 mg/L Cd, 0.0020 mg/L Cu, 0.1027 mg/L Fe, 0.0046 mg/L Ni and 0.0920 mg/L Zn were leached out, environmentally safe levels substantially lower than the Korean standard leaching requirements. The results also showed that Thiomer is superior to the commonly used Portland cement as a binding material, confirming its potential as an innovative approach to simultaneously produce durable concrete and satisfy strict environmental regulations on heavy metals leaching.

  10. Efficient Design and Optimization of a Flow Control System for Supersonic Mixed Compression Inlets, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — SynGenics Corporation proposes a program that unites mathematical and statistical processes, Response Surface Methodology, and multicriterial optimization methods to...

  11. A new optimization method using a compressed sensing inspired solver for real-time LDR-brachytherapy treatment planning

    Science.gov (United States)

    Guthier, C.; Aschenbrenner, K. P.; Buergy, D.; Ehmann, M.; Wenz, F.; Hesser, J. W.

    2015-03-01

    This work discusses a novel strategy for inverse planning in low dose rate brachytherapy. It applies the idea of compressed sensing to the problem of inverse treatment planning and a new solver for this formulation is developed. An inverse planning algorithm was developed incorporating brachytherapy dose calculation methods as recommended by AAPM TG-43. For optimization of the functional a new variant of a matching pursuit type solver is presented. The results are compared with current state-of-the-art inverse treatment planning algorithms by means of real prostate cancer patient data. The novel strategy outperforms the best state-of-the-art methods in speed, while achieving comparable quality. It is able to find solutions with comparable values for the objective function and it achieves these results within a few microseconds, being up to 542 times faster than competing state-of-the-art strategies, allowing real-time treatment planning. The sparse solution of inverse brachytherapy planning achieved with methods from compressed sensing is a new paradigm for optimization in medical physics. Through the sparsity of required needles and seeds identified by this method, the cost of intervention may be reduced.
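A matching pursuit solver of the kind adapted here greedily selects, at each iteration, the dictionary atom most correlated with the current residual and peels off its contribution. A generic sketch on toy data (not the paper's brachytherapy-specific variant; the dictionary and signal are made up):

```python
def matching_pursuit(dictionary, signal, n_iter):
    """Greedy matching pursuit: at each step pick the unit-norm atom most
    correlated with the residual and subtract its contribution."""
    residual = list(signal)
    coeffs = [0.0] * len(dictionary)
    for _ in range(n_iter):
        # inner products <residual, atom> for every atom
        corr = [sum(r * a for r, a in zip(residual, atom))
                for atom in dictionary]
        k = max(range(len(corr)), key=lambda i: abs(corr[i]))
        coeffs[k] += corr[k]
        residual = [r - corr[k] * a
                    for r, a in zip(residual, dictionary[k])]
    return coeffs, residual

# Toy dictionary: the standard basis in R^3 (trivially unit-norm atoms).
atoms = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
coeffs, residual = matching_pursuit(atoms, [2.0, 0.0, -1.0], n_iter=2)
print(coeffs)    # [2.0, 0.0, -1.0]
print(residual)  # [0.0, 0.0, 0.0]
```

The sparsity comes for free: only atoms actually selected acquire nonzero coefficients, mirroring the small set of needles and seeds identified by the planner.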

  12. A new optimization method using a compressed sensing inspired solver for real-time LDR-brachytherapy treatment planning

    International Nuclear Information System (INIS)

    Guthier, C; Aschenbrenner, K P; Buergy, D; Ehmann, M; Wenz, F; Hesser, J W

    2015-01-01

    This work discusses a novel strategy for inverse planning in low dose rate brachytherapy. It applies the idea of compressed sensing to the inverse treatment planning problem, and a new solver for this formulation is developed. An inverse planning algorithm was developed incorporating brachytherapy dose calculation methods as recommended by AAPM TG-43. For optimization of the functional, a new variant of a matching-pursuit-type solver is presented. The results are compared with current state-of-the-art inverse treatment planning algorithms on real prostate cancer patient data. The novel strategy outperforms the best state-of-the-art methods in speed while achieving comparable quality: it finds solutions with comparable objective function values within a few microseconds, up to 542 times faster than competing state-of-the-art strategies, allowing real-time treatment planning. The sparse solution of inverse brachytherapy planning achieved with methods from compressed sensing is a new paradigm for optimization in medical physics. Through the sparsity of required needles and seeds identified by this method, the cost of intervention may be reduced. (paper)

  13. Determination of composition of pozzolanic waste mixtures with optimized compressive strength

    Directory of Open Access Journals (Sweden)

    Nardi José Vidal

    2004-01-01

    The utilization of ceramic wastes with pozzolanic properties along with other compounds for obtaining new materials with cementing properties is an alternative for reducing environmental pollution. Market acceptance of these new products demands minimum mechanical properties appropriate to the intended use. For a variable range of compositional intervals, attempts were made to establish limiting incorporation proportions that assure the achievement of minimum pre-established mechanical strength values in the final product, in this case a minimum compressive strength of 3,000 kPa. A simultaneous association of other properties is also possible.

  14. Optimal operation strategies of compressed air energy storage (CAES) on electricity spot markets with fluctuating prices

    DEFF Research Database (Denmark)

    Lund, Henrik; Salgi, Georges; Elmegaard, Brian

    2009-01-01

    CAES plants can earn revenue on electricity spot markets by storing energy when electricity prices are low and producing electricity when prices are high. In order to make a profit on such markets, CAES plant operators have to identify proper strategies to decide when to sell and when to buy electricity. This paper describes three independent computer-based methodologies which may be used for identifying the optimal operation strategy for a given CAES plant, on a given spot market and in a given year. The optimal strategy is identified as the one which provides the best business-economic net earnings for the plant. In practice, CAES plants will not be able to achieve such optimal operation, since the fluctuations of spot market prices in the coming hours and days are not known. Consequently, two simple practical strategies have been identified and compared to the results of the optimal strategy. This comparison shows that...
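The buy-low/sell-high logic that such practical strategies build on can be sketched as a simple threshold dispatch rule (the prices, thresholds, capacity and the 0.75 round-trip efficiency below are illustrative assumptions, not figures from the paper):

```python
def caes_threshold_strategy(prices, buy_below, sell_above,
                            capacity_mwh, rate_mw, efficiency=0.75):
    """Simple practical CAES dispatch: compress (buy) when the hourly spot
    price is below buy_below, expand (sell) when it is above sell_above.
    Returns net earnings over the price series."""
    stored = 0.0
    earnings = 0.0
    for p in prices:
        if p < buy_below and stored < capacity_mwh:
            e = min(rate_mw, capacity_mwh - stored)  # MWh bought this hour
            stored += e
            earnings -= e * p                        # cost of electricity
        elif p > sell_above and stored > 0:
            e = min(rate_mw, stored)                 # MWh discharged
            stored -= e
            earnings += e * efficiency * p           # revenue after losses
    return earnings

prices = [10, 10, 100, 100]   # illustrative spot prices per MWh
profit = caes_threshold_strategy(prices, buy_below=20, sell_above=50,
                                 capacity_mwh=2, rate_mw=1)
```

An optimal strategy with perfect price foresight would bound the earnings such threshold rules can achieve, which is the comparison the paper makes.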

  15. Framework for Combined Diagnostics, Prognostics and Optimal Operation of a Subsea Gas Compression System

    OpenAIRE

    Verheyleweghen, Adriaen; Jaeschke, Johannes

    2017-01-01

    The efficient and safe operation of subsea gas and oil production systems places strict requirements on equipment reliability to avoid unplanned breakdowns and costly maintenance interventions. Because of this, condition monitoring is employed to assess the status of the system in real time. However, the condition of the system is usually not considered explicitly when finding the optimal operation strategy. Instead, operational constraints on flow rates, pressures etc., based on worst-case sce...

  16. Optimal sensor placement for control of a supersonic mixed-compression inlet with variable geometry

    Science.gov (United States)

    Moore, Kenneth Thomas

    A method of using fluid dynamics models to generate models usable for control design and analysis is investigated. The problem considered is control of the normal shock location in the VDC inlet, which is a mixed-compression, supersonic, variable-geometry inlet of a jet engine. A quasi-one-dimensional set of fluid equations incorporating bleed and moving walls is developed. An object-oriented environment is developed for simulation of flow systems under closed-loop control, and a public interface between the controller and fluid classes is defined. A linear model representing the dynamics of the VDC inlet is developed from the finite difference equations, and its eigenstructure is analyzed. The order of this model is reduced using the square-root balanced model reduction method to produce a reduced-order linear model suitable for control design and analysis tasks. A modification to this method that improves the accuracy of the reduced-order linear model for the purpose of sensor placement is presented and analyzed. The reduced-order linear model is used to develop a sensor placement method that quantifies, as a function of sensor location, the ability of a sensor to provide information on the variable of interest for control. This method is used to develop a sensor placement metric for the VDC inlet. The reduced-order linear model is also used to design a closed-loop control system for the shock position in the VDC inlet. The object-oriented simulation code is used to simulate the nonlinear fluid equations under closed-loop control.

  17. Control Optimization of a LHC 18 KW Cryoplant Warm Compression Station Using Dynamic Simulations

    CERN Document Server

    Bradu, B; Niculescu, S I

    2010-01-01

    This paper addresses the control optimization of a 4.5 K refrigerator used in the cryogenic system of the Large Hadron Collider (LHC) at CERN. First, the compressor station and the cold box were modeled and simulated under PROCOS (Process and Control Simulator), a simulation environment developed at CERN. Next, an appropriate parameter identification was performed on the simulator to obtain a simplified model of the system, used to design an Internal Model Control (IMC) scheme enhancing the regulation of the high pressure. Finally, a floating high-pressure control using cascade control is proposed to reduce operational costs.

  18. Optimization of a transition radiation detector for the compressed baryonic matter experiment

    Energy Technology Data Exchange (ETDEWEB)

    Arend, Andreas

    2014-07-01

    The Transition Radiation Detector (TRD) of the Compressed Baryonic Matter (CBM) experiment at FAIR has to provide electron-pion separation as well as charged-particle tracking. Within this work, thin and symmetric Multi-Wire Proportional Chambers (MWPCs) without an additional drift region were proposed. The proposed prototypes feature a foil-based entrance window to minimize the material budget and reduce the absorption probability of the generated TR photons. Based on this conceptual design, multiple prototypes were constructed and their performance is presented in this thesis. With the generation II and III prototypes, the geometries of the wire and cathode planes were determined to be 4+4 mm and 5+5 mm. Based on the results of a 2011 test beam campaign with these prototypes, new generation IV prototypes were manufactured and tested in a subsequent test beam campaign in 2012. Radiator prototypes were developed together with the MWPC prototypes: along with regular foil radiators, foam-based radiators made of polyethylene foam were used, as well as radiators in a sandwich design using different fiber materials confined between solid foam sheets. For the prototypes without drift region, simulations of the electrostatic and mechanical properties were performed. The GARFIELD software package was used to simulate the electric field and determine the resulting drift lines of the generated electrons. The mean gas amplification as a function of the gas used and the applied anode voltage was simulated, and the gas-gain homogeneity was verified. Since the thin foil-based entrance window deforms under pressure differences between the inside and outside of the MWPC, the variation of the gas gain with this deformation was simulated. The mechanical properties, focusing on the stability of the entrance window, were determined with a finite-element simulation.

  19. Optimization of a transition radiation detector for the compressed baryonic matter experiment

    International Nuclear Information System (INIS)

    Arend, Andreas

    2014-01-01

    The Transition Radiation Detector (TRD) of the Compressed Baryonic Matter (CBM) experiment at FAIR has to provide electron-pion separation as well as charged-particle tracking. Within this work, thin and symmetric Multi-Wire Proportional Chambers (MWPCs) without an additional drift region were proposed. The proposed prototypes feature a foil-based entrance window to minimize the material budget and reduce the absorption probability of the generated TR photons. Based on this conceptual design, multiple prototypes were constructed and their performance is presented in this thesis. With the generation II and III prototypes, the geometries of the wire and cathode planes were determined to be 4+4 mm and 5+5 mm. Based on the results of a 2011 test beam campaign with these prototypes, new generation IV prototypes were manufactured and tested in a subsequent test beam campaign in 2012. Radiator prototypes were developed together with the MWPC prototypes: along with regular foil radiators, foam-based radiators made of polyethylene foam were used, as well as radiators in a sandwich design using different fiber materials confined between solid foam sheets. For the prototypes without drift region, simulations of the electrostatic and mechanical properties were performed. The GARFIELD software package was used to simulate the electric field and determine the resulting drift lines of the generated electrons. The mean gas amplification as a function of the gas used and the applied anode voltage was simulated, and the gas-gain homogeneity was verified. Since the thin foil-based entrance window deforms under pressure differences between the inside and outside of the MWPC, the variation of the gas gain with this deformation was simulated. The mechanical properties, focusing on the stability of the entrance window, were determined with a finite-element simulation.

  20. The research of optimal selection method for wavelet packet basis in compressing the vibration signal of a rolling bearing in fans and pumps

    International Nuclear Information System (INIS)

    Hao, W; Jinji, G

    2012-01-01

    Compressing the vibration signal of a rolling bearing is important for wireless monitoring and remote diagnosis of the fans and pumps widely used in the petrochemical industry. In this paper, according to the characteristics of rolling-bearing vibration signals, a compression method based on the optimal selection of the wavelet packet basis is proposed. We analyze several main attributes of wavelet packet bases and their effect on compression of the rolling-bearing vibration signal under the wavelet packet transform at various compression ratios, and propose a method for precisely selecting a wavelet packet basis. From an actual signal, we conclude that an orthogonal wavelet packet basis with low vanishing moment should be used to compress rolling-bearing vibration signals in order to obtain an accurate energy proportion between the feature bands in the spectrum of the reconstructed signal. Among these low-vanishing-moment orthogonal wavelet packet bases, the 'coif' basis obtains the best signal-to-noise ratio at a given compression ratio because of its superior symmetry.
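The energy-compaction idea behind choosing a good orthogonal basis can be sketched with the plain Haar transform (a stand-in chosen for brevity; the paper selects among wavelet *packet* bases such as 'coif', which this numpy-only toy does not implement):

```python
import numpy as np

def haar_step(x):
    """One level of the orthonormal Haar transform (energy-preserving)."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
    return a, d

def compress_keep_largest(x, keep_ratio=0.1, levels=4):
    """Transform-based compression sketch: decompose the signal, zero all
    but the largest coefficients, and report the fraction of energy kept."""
    coeffs = []
    a = np.asarray(x, float)
    for _ in range(levels):
        a, d = haar_step(a)
        coeffs.append(d)
    coeffs.append(a)
    flat = np.concatenate(coeffs)
    k = max(1, int(keep_ratio * flat.size))
    idx = np.argsort(np.abs(flat))[:-k]    # indices of the smallest coeffs
    compressed = flat.copy()
    compressed[idx] = 0.0
    return np.sum(compressed**2) / np.sum(flat**2)

t = np.linspace(0, 2*np.pi, 256, endpoint=False)
retained = compress_keep_largest(np.sin(t))  # fraction of signal energy kept
```

For a smooth signal, keeping 10% of the coefficients retains almost all of the energy; a basis well matched to the signal concentrates energy in few coefficients, which is exactly what the basis-selection criterion in the paper is after.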

  1. [The optimization of chondromalacia patellae diagnosis by NMR tomography. The use of an apparatus for cartilage compression].

    Science.gov (United States)

    König, H; Dinkelaker, F; Wolf, K J

    1991-08-01

    The aim of this study was to improve the MRI diagnosis of chondromalacia patellae (CMP), with special reference to the early stages and accurate staging. For this purpose, the retropatellar cartilage was examined by MRI under compression in 21 patients and five normal controls, the compression being applied by means of a specially constructed device. Changes in cartilage thickness and signal intensity were evaluated quantitatively in FLASH and FISP sequences. Arthroscopy results were available for all patients, and cartilage biopsies had been obtained in 12. CMP stage I could be distinguished from normal cartilage by the reduction in cartilage thickness and the signal increase from the oedematous cartilage during compression. In CMP stages II/III, abnormal deposition of collagen type I protein could be demonstrated by its compressibility. In stages III and IV, the method does not add any significant additional information.

  2. An optimized compression algorithm for real-time ECG data transmission in wireless network of medical information systems.

    Science.gov (United States)

    Cho, Gyoun-Yon; Lee, Seo-Joon; Lee, Tae-Ro

    2015-01-01

    Recent medical information systems are moving towards real-time monitoring models that care for patients anytime and anywhere through ECG signals. However, wireless communication has limitations such as data distortion and limited bandwidth. To overcome these limitations, this research focuses on compression. Little work has been done on compression algorithms specialized for ECG data transmission in real-time monitoring wireless networks, and recent general-purpose algorithms are not well suited to ECG signals. This paper therefore presents an improved algorithm, EDLZW, for efficient ECG data transmission. Results showed that EDLZW achieved a compression ratio of 8.66, about four times better than other compression methods in wide use today.
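EDLZW itself is not reproduced here, but it extends the dictionary-coder family whose textbook member is LZW; a minimal LZW encoder shows why repetitive, quasi-periodic streams such as sampled ECG compress well (a byte-oriented toy, not the paper's algorithm):

```python
def lzw_compress(data: bytes) -> list[int]:
    """Textbook LZW: grow a dictionary of byte strings seen so far and
    emit one code per longest known prefix."""
    dictionary = {bytes([i]): i for i in range(256)}
    w = b""
    out = []
    for byte in data:
        wc = w + bytes([byte])
        if wc in dictionary:
            w = wc                          # extend the current phrase
        else:
            out.append(dictionary[w])       # emit code for known prefix
            dictionary[wc] = len(dictionary)  # learn the new phrase
            w = bytes([byte])
    if w:
        out.append(dictionary[w])
    return out

codes = lzw_compress(b"abab" * 100)  # repetitive stream: far fewer codes than bytes
```

Each emitted code stands for an ever-longer phrase, so periodic data collapses quickly; a specialized variant can additionally prime the dictionary with patterns typical of ECG waveforms.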

  3. Multi-objective optimization and exergoeconomic analysis of a combined cooling, heating and power based compressed air energy storage system

    International Nuclear Information System (INIS)

    Yao, Erren; Wang, Huanran; Wang, Ligang; Xi, Guang; Maréchal, François

    2017-01-01

    Highlights: • A novel tri-generation based compressed air energy storage system. • Trade-off between efficiency and cost to highlight the best compromise solution. • Components with the largest irreversibilities and improvement potential highlighted. - Abstract: Compressed air energy storage technologies can improve the supply capacity and stability of the electricity grid, particularly when fluctuating renewable energies are connected at scale, and incorporating combined cooling, heating and power systems into compressed air energy storage enables stable operation as well as efficient energy utilization. In this paper, a novel combined cooling, heating and power based compressed air energy storage system is proposed. The system combines a gas engine, supplemental heat exchangers and an ammonia-water absorption refrigeration system. The design trade-off between the thermodynamic and economic objectives, i.e., the overall exergy efficiency and the total specific cost of product, is investigated with an evolutionary multi-objective algorithm. It is found that, as the exergy efficiency increases, the total product unit cost is barely affected at first but rises substantially afterwards. The best trade-off solution has an overall exergy efficiency of 53.04% and a total product unit cost of 20.54 cent/kWh. The variation of the decision variables with exergy efficiency indicates that the compressor, the turbine and the heat exchanger preheating the turbine inlet air are the key equipment for cost-effectively pursuing a higher exergy efficiency. An exergoeconomic analysis also reveals that, for the best trade-off solution, the investment costs of the compressor and of the two heat exchangers recovering compression heat and heating up compressed air for expansion should be reduced (particularly the latter), while the thermodynamic performance of the gas engine needs to be improved.
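The efficiency/cost compromise described above is a classic Pareto trade-off; a minimal sketch of extracting the non-dominated set from candidate designs (the numbers below are placeholders, not the paper's results):

```python
def pareto_front(points):
    """Keep (efficiency, cost) pairs not dominated by any other point,
    i.e. no other design is at least as efficient AND at least as cheap."""
    return [(eff, cost) for eff, cost in points
            if not any(e2 >= eff and c2 <= cost and (e2, c2) != (eff, cost)
                       for e2, c2 in points)]

# Hypothetical (exergy efficiency %, product cost cent/kWh) candidates
designs = [(50.0, 20.0), (53.0, 21.0), (48.0, 25.0), (55.0, 30.0)]
front = pareto_front(designs)  # (48, 25) is dominated and drops out
```

Evolutionary multi-objective algorithms such as the one used in the paper approximate this front for expensive thermodynamic models; the "best trade-off" solution is then picked from the front, e.g. at the knee of the curve.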

  4. Comparison of transform coding methods with an optimal predictor for the data compression of digital elevation models

    Science.gov (United States)

    Lewis, Michael

    1994-01-01

    Statistical encoding techniques enable the reduction of the number of bits required to encode a set of symbols, and are derived from their probabilities. Huffman encoding is an example of statistical encoding that has been used for error-free data compression. The degree of compression given by Huffman encoding in this application can be improved by the use of prediction methods. These replace the set of elevations by a set of corrections that have a more advantageous probability distribution. In particular, the method of Lagrange Multipliers for minimization of the mean square error has been applied to local geometrical predictors. Using this technique, an 8-point predictor achieved about a 7 percent improvement over an existing simple triangular predictor.
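The predictor-plus-statistical-coding idea can be sketched numerically: fit a least-squares linear predictor from the two previous samples (the minimum-mean-square-error solution that the Lagrange-multiplier formulation yields) and compare the zeroth-order entropy of raw versus residual symbols. The "elevation profile" below is synthetic, not real DEM data:

```python
import numpy as np
from collections import Counter
from math import log2

def entropy(symbols):
    """Zeroth-order entropy in bits/symbol (lower bound for Huffman coding)."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum(c / n * log2(c / n) for c in counts.values())

# Synthetic smooth "elevation profile" (illustrative stand-in for a DEM row)
t = np.arange(2000)
dem = np.round(50 * np.sin(t / 80) + 20 * np.sin(t / 13)).astype(int)

# Least-squares optimal 2-point predictor: predict dem[i] from dem[i-1], dem[i-2]
X = np.stack([dem[1:-1], dem[:-2]], axis=1)
y = dem[2:]
a, *_ = np.linalg.lstsq(X, y, rcond=None)
residuals = np.round(y - X @ a).astype(int)

print(entropy(dem), entropy(residuals))  # residuals need fewer bits/symbol
```

The residuals cluster tightly around zero, so their probability distribution is far more skewed than that of the raw elevations, and Huffman coding of the residuals therefore uses fewer bits per sample.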

  5. Neonatal CPR: room at the top—a mathematical study of optimal chest compression frequency versus body size

    OpenAIRE

    Babbs, Charles F; Meyer, Andrew; Nadkarni, Vinay

    2009-01-01

    Objective: To explore in detail the expected magnitude of systemic perfusion pressure during standard CPR as a function of compression frequency for different sized people from neonate to adult. Method: A 7-compartment mathematical model of the human cardiopulmonary system—upgraded to include inertance of blood columns in the aorta and vena cavae—was exercised with parameters scaled to reflect changes in body weight from 1 to 70 kg. Results: Maximal systemic perfusion pressure occurs at chest...

  6. Optimizing pulse compressibility in completely all-fibered Ytterbium chirped pulse amplifiers for in vivo two photon laser scanning microscopy.

    Science.gov (United States)

    Fernández, A; Grüner-Nielsen, L; Andreana, M; Stadler, M; Kirchberger, S; Sturtzel, C; Distel, M; Zhu, L; Kautek, W; Leitgeb, R; Baltuska, A; Jespersen, K; Verhoef, A

    2017-08-01

    A simple and completely all-fiber Yb chirped pulse amplifier that uses a dispersion matched fiber stretcher and a spliced-on hollow core photonic bandgap fiber compressor is applied in nonlinear optical microscopy. This stretching-compression approach improves compressibility and helps to maximize the fluorescence signal in two-photon laser scanning microscopy as compared with approaches that use standard single mode fibers as stretcher. We also show that in femtosecond all-fiber systems, compensation of higher order dispersion terms is relevant even for pulses with relatively narrow bandwidths for applications relying on nonlinear optical effects. The completely all-fiber system was applied to image green fluorescent beads, a stained lily-of-the-valley root and rat-tail tendon. We also demonstrated in vivo imaging in zebrafish larvae, where we simultaneously measure second harmonic and fluorescence from two-photon excited red-fluorescent protein. Since the pulses are compressed in a fiber, this source is especially suited for upgrading existing laser scanning (confocal) microscopes with multiphoton imaging capabilities in space restricted settings or for incorporation in endoscope-based microscopy.

  7. Optimizing the Performance of a 50cc Compression Ignition Two-Stroke Engine Operating on Dimethyl Ether

    DEFF Research Database (Denmark)

    Hansen, Kim Rene; Dolriis, J.D.; Hansson, C.

    2011-01-01

    The paper describes the optimization of a 50cc crankcase scavenged two-stroke diesel engine operating on dimethyl ether (DME). The optimization is primarily done with respect to engine efficiency. The underlying idea behind the work is that the low weight, low internal friction and low engine...

  8. Optimization of diesel engine performances for a hybrid wind-diesel system with compressed air energy storage

    International Nuclear Information System (INIS)

    Ibrahim, H.; Younes, R.; Basbous, T.; Ilinca, A.; Dimitrova, M.

    2011-01-01

    Electricity supply in remote areas around the world is mostly provided by diesel generators. This relatively inefficient and expensive approach is responsible for 1.2 million tons of greenhouse gas (GHG) emissions in Canada annually. Low- and high-penetration wind-diesel hybrid systems (WDS) have been tested in order to reduce diesel consumption. We explore the re-engineering of current diesel power plants by introducing high-penetration wind systems together with compressed air energy storage (CAES), a viable alternative for raising the overall share of renewable energy and reducing the cost of electricity. In this paper, we present the operating principle of this hybrid system, its economic benefits and advantages, and a numerical model of each of its components. Moreover, we demonstrate the energy efficiency of the system, particularly the increase in engine performance and the reduction in fuel consumption, illustrated with the case of a village in northern Quebec. -- Highlights: → The wind-diesel compressed air storage system (WDCAS) has a very important commercial potential for remote areas. → The WDCAS is conceived as an adaptation of existing engines at the intake-system level. → A wind turbine and an air compression and storage system are added to the diesel plant. → This study demonstrates the potential of WDCAS to reduce fuel consumption and increase the efficiency of the diesel engine. → This study demonstrates that savings of up to 50% can be expected.

  9. Thermomechanical process optimization of U-10 wt% Mo – Part 1: high-temperature compressive properties and microstructure

    Energy Technology Data Exchange (ETDEWEB)

    Joshi, Vineet V., E-mail: vineet.joshi@pnnl.gov [Pacific Northwest National Laboratory, Richland, WA 99354 (United States); Nyberg, Eric A.; Lavender, Curt A.; Paxton, Dean [Pacific Northwest National Laboratory, Richland, WA 99354 (United States); Garmestani, Hamid [Georgia Institute of Technology, Atlanta, GA 30332 (United States); Burkes, Douglas E. [Pacific Northwest National Laboratory, Richland, WA 99354 (United States)

    2015-10-15

    Nuclear power research facilities require alternatives to existing highly enriched uranium alloy fuel. One option for a high-density metal fuel is uranium alloyed with 10 wt% molybdenum (U–10Mo). Fuel fabrication process development requires specific mechanical property data that, to date, has been unavailable. In this work, as-cast samples were compression tested at three strain rates over a temperature range of 400–800 °C to provide data for hot rolling and extrusion modeling. The results indicate that with increasing test temperature the U–10Mo flow stress decreases and becomes more sensitive to strain rate. In addition, above the eutectoid transformation temperature, the drop in flow stress is pronounced and shows strain-softening behavior, especially at lower strain rates. Room-temperature X-ray diffraction and scanning electron microscopy combined with energy-dispersive spectroscopy analysis of the as-cast and compression-tested samples were conducted. The analysis revealed that the as-cast samples and the samples tested below the eutectoid transformation temperature were predominantly γ phase with varying concentrations of molybdenum, whereas the ones tested above the eutectoid transformation temperature underwent significant homogenization.

  10. Optimization of the Starting by compressed air techniques; Optimizacion del Arranque en el sutiraje mediante tecnicas de aire comprimido

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2000-07-01

    High-pressure compressed air shots have begun to be used for coal caving in horizontal sublevel caving workings, as an alternative to explosives, since they do not constrain the winning cycles and cause less damage to the vein walls. Despite these advantages, the influence of the different parameters on the shot result is not well known. For this reason, a research project has been carried out to improve the high-pressure compressed air technique, extend the system's implementation and reduce winning costs in sublevel caving workings. The research work consisted of developing a numerical model and running reduced-scale and full-scale tests. The model describes the fragmentation of brittle material under dynamic loading and has been implemented in a code. The tests allow the influence of the different parameters to be studied and the numerical model to be validated. The main research results are, on the one hand, a numerical model that allows the best shot plan to be defined for the user's working conditions and, on the other hand, proof of the strong influence of the air volume on the disruptive strength. (Author)

  11. Study of the hoop fracture behaviour of nuclear fuel cladding from ring compression tests by means of non-linear optimization techniques

    Energy Technology Data Exchange (ETDEWEB)

    Gómez, F.J., E-mail: javier.gomez@amsimulation.com [Advanced Material Simulation, AMS, Bilbao (Spain); Martin Rengel, M.A., E-mail: mamartin.rengel@upm.es [E.T.S.I. Caminos, Canales y Puertos, Universidad Politécnica de Madrid, C/Professor Aranguren SN, E-28040 Madrid (Spain); Ruiz-Hervias, J.; Puerta, M.A. [E.T.S.I. Caminos, Canales y Puertos, Universidad Politécnica de Madrid, C/Professor Aranguren SN, E-28040 Madrid (Spain)

    2017-06-15

    In this work, the hoop fracture toughness of ZIRLO® fuel cladding is calculated as a function of three parameters: hydrogen concentration, temperature and displacement rate. To this end, pre-hydrided samples with nominal hydrogen concentrations of 0 (as-received), 150, 250, 500, 1200 and 2000 ppm were prepared. Hydrogen was precipitated as zirconium hydrides in the shape of platelets oriented along the hoop direction. Ring compression tests (RCTs) were conducted at three temperatures (20, 135 and 300 °C) and two displacement rates (0.5 and 100 mm/min). A new method is proposed in this paper which allows the fracture toughness to be determined from ring compression tests. The proposed method combines the experimental results, the cohesive crack model, finite element simulations, numerical calculations and non-linear optimization techniques. The parameters of the cohesive crack model were calculated by minimizing the difference between the experimental data and the numerical results, achieving an almost perfect fit to the experimental results. In addition, an estimation of the error in the calculated fracture toughness is also provided.
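A brute-force stand-in for the paper's non-linear optimization loop: a toy exponential softening curve plays the role of the finite-element RCT simulation, and a grid search picks the cohesive parameters that minimize the experiment/simulation misfit (real cohesive laws and FE runs are far more involved; every function and value here is illustrative):

```python
import numpy as np

def model_curve(d, ft, Gf):
    """Toy softening law standing in for the simulated load-displacement
    response; ft and Gf play the role of cohesive strength and fracture
    energy (assumed names, not the paper's notation)."""
    return ft * np.exp(-ft * d / Gf)

def fit(d, load, ft_grid, Gf_grid):
    """Pick the (ft, Gf) pair minimizing the squared misfit between the
    'experimental' curve and the model, grid-search style."""
    best = None
    for ft in ft_grid:
        for Gf in Gf_grid:
            err = np.sum((model_curve(d, ft, Gf) - load) ** 2)
            if best is None or err < best[0]:
                best = (err, ft, Gf)
    return best[1], best[2]

d = np.linspace(0, 0.1, 50)
load = model_curve(d, 600.0, 10.0)   # synthetic "experiment" with known answer
ft, Gf = fit(d, load, np.arange(400, 801, 50), np.arange(5, 16, 1))
```

A gradient-based or evolutionary optimizer replaces the grid search in practice, since each misfit evaluation requires a full finite-element simulation.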

  12. Optimization and influence of parameter affecting the compressive strength of geopolymer concrete containing recycled concrete aggregate: using full factorial design approach

    Science.gov (United States)

    Krishnan, Thulasirajan; Purushothaman, Revathi

    2017-07-01

    Several parameters influence the properties of geopolymer concrete containing recycled concrete aggregate as the coarse aggregate. In the present study, the vital parameters affecting the compressive strength of such concrete are analyzed by varying four parameters at two levels using a full factorial design in the statistical software Minitab® 17. The objective of the present work is to characterize the optimization, the main parameter effects, their interactions and the predicted response of the model generated using the factorial design. The parameters considered are molarity of sodium hydroxide (8 M and 12 M), curing time (6 h and 24 h), curing temperature (60 °C and 90 °C) and percentage of recycled concrete aggregate (0% and 100%). The results show that curing time, molarity of sodium hydroxide and curing temperature were the significant parameters, in that order, while the percentage of recycled concrete aggregate (RCA) was statistically insignificant in the production of geopolymer concrete; the RCA content thus had a negligible effect on compressive strength. The responses predicted by the generated model showed satisfactory agreement with the experimental data, with an R2 value of 97.70%. Geopolymer concrete comprising recycled concrete aggregate can therefore help address major social and environmental concerns such as the depletion of natural aggregate sources and the disposal of construction and demolition waste into landfill.
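A 2^4 full factorial of the kind described can be enumerated directly; the response function below is an invented linear stand-in for the measured compressive strength, so only the mechanics of the design, not the numbers, mirror the study:

```python
from itertools import product

# Two-level factors (levels mirror the paper's ranges; names are ours)
factors = {
    "molarity": (8, 12),     # M NaOH
    "cure_time": (6, 24),    # hours
    "cure_temp": (60, 90),   # deg C
    "rca": (0, 100),         # % recycled concrete aggregate
}

def toy_strength(molarity, cure_time, cure_temp, rca):
    """Stand-in response surface (illustrative, NOT the paper's data)."""
    return 20 + 0.8 * molarity + 0.5 * cure_time + 0.2 * cure_temp - 0.001 * rca

# Full factorial: all 2**4 = 16 runs
runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]
results = [(run, toy_strength(**run)) for run in runs]

def main_effect(name):
    """Mean response at the high level minus mean at the low level."""
    hi = [y for run, y in results if run[name] == max(factors[name])]
    lo = [y for run, y in results if run[name] == min(factors[name])]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

effects = {name: main_effect(name) for name in factors}
```

Ranking the absolute main effects reproduces the kind of significance ordering the paper reports; here the tiny `rca` effect plays the role of the statistically insignificant RCA percentage.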

  13. In search of optimal compression therapy for venous leg ulcers: a meta-analysis of studies comparing diverse [corrected] bandages with specifically designed stockings.

    Science.gov (United States)

    Amsler, Felix; Willenberg, Torsten; Blättler, Werner

    2009-09-01

    In search of an optimal compression therapy for venous leg ulcers, a systematic review and meta-analysis was performed of randomized controlled trials (RCTs) comparing compression systems based on stockings (MCS) with diverse bandages. RCTs were retrieved from six sources and reviewed independently. The primary endpoint, completion of healing within a defined time frame, and the secondary endpoints, time to healing and pain, were entered into a meta-analysis using the tools of the Cochrane Collaboration. Additional subjective endpoints were summarized. Eight RCTs (published 1985-2008) fulfilled the predefined criteria. Data presentation was adequate and showed moderate heterogeneity. The studies included 692 patients (21-178 per study, mean age 61 years, 56% women). Analyzed were 688 ulcerated legs, present for 1 week to 9 years, sizing 1 to 210 cm². The observation period ranged from 12 to 78 weeks. Patient and ulcer characteristics were evenly distributed in three studies, favored the stocking groups in four, and the bandage group in one. The pressure exerted by stockings and bandages was reported in seven and two studies, amounting to 31-56 and 27-49 mm Hg, respectively. The proportion of ulcers healed was significantly greater with stockings than with bandages (62.7% vs 46.6%); no study found bandages better than MCS. Pain was assessed in three studies (219 patients), revealing a significant advantage of stockings. Compression with stockings is thus superior to compression with bandages, has a positive impact on pain, and is easier to use.

  14. Optimization of combustion chamber geometry and operating conditions for compression ignition engine fueled with pre-blended gasoline-diesel fuel

    International Nuclear Information System (INIS)

    Lee, Seokhwon; Jeon, Joonho; Park, Sungwook

    2016-01-01

    Highlights: • Pre-blended gasoline-diesel fuel was used with a direct injection system. • The KIVA-CHEMKIN code modeled dual-fuel spray and combustion processes with a discrete multi-component model. • Combustion and emission characteristics of the pre-blended fuel were investigated at various fuel reactivities. • Optimization of the combustion chamber shape improved the combustion performance of the gasoline-diesel blended fuel engine. - Abstract: In this study, experiments and numerical simulations were used to improve the fuel efficiency of a compression ignition engine using gasoline-diesel blended fuel and an optimization technique. The blended fuel is directly injected into the cylinder at various blending ratios. Combustion and emission characteristics were investigated to explore the effects of the gasoline fraction in the blend. The present study showed that the advantages of gasoline-diesel blended fuel, high thermal efficiency and low emissions, were maximized using the numerical optimization method. The ignition delay and maximum pressure rise rate increased with the proportion of gasoline, while the combustion duration and the indicated mean effective pressure decreased. The homogeneity of the fuel-air mixture improved owing to the longer ignition delay. Soot emissions were reduced by up to 90% compared with conventional diesel. Nitrogen oxides emissions of the blended fuel increased slightly when the start of injection was retarded toward top dead center. For the numerical study, the KIVA-CHEMKIN multi-dimensional CFD code was used to model the combustion and emission characteristics of gasoline-diesel blended fuel. A micro genetic algorithm coupled with the KIVA-CHEMKIN code was used to optimize the combustion chamber shape and operating conditions to improve the combustion performance of the blended fuel engine. The optimized chamber geometry enhanced the fuel efficiency, for a level of nitrogen oxides

  15. DNABIT Compress - Genome compression algorithm.

    Science.gov (United States)

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-22

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences based on a novel algorithm of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for larger genomes. Significantly better compression results show that the "DNABIT Compress" algorithm is the best among the remaining compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves the running time of all previous DNA compression programs. Assigning binary bits (Unique BIT CODE) to fragments of the DNA sequence (exact repeats, reverse repeats) is also a unique concept introduced in this algorithm for the first time in DNA compression. This proposed new algorithm achieves a compression ratio as low as 1.58 bits/base, where the existing best methods could not achieve a ratio below 1.72 bits/base.
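The record above assigns bit codes to DNA segments. A minimal generic sketch of fixed 2-bits-per-base packing (not the DNABIT Compress bit codes themselves, which additionally assign codes to repeat fragments) shows where the ~2 bits/base baseline that the reported 1.58 bits/base improves upon comes from:

```python
CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}

def pack(seq):
    """Pack a DNA string into bytes, four bases per byte."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        group = seq[i:i + 4]
        byte = 0
        for base in group:
            byte = (byte << 2) | CODE[base]
        byte <<= 2 * (4 - len(group))   # left-align a final partial group
        out.append(byte)
    return bytes(out)

def bits_per_base(seq):
    return 8 * len(pack(seq)) / len(seq)

print(pack("ACGT"))                   # b'\x1b'  (00 01 10 11)
print(bits_per_base("ACGTACGTGGCC"))  # 2.0
```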

  16. Compression stockings

    Science.gov (United States)

    Call your health insurance or prescription plan: Find out if they pay for compression stockings. Ask if your durable medical equipment benefit pays for compression stockings. Get a prescription from your doctor. Find a medical equipment store where they can ...

  17. Wellhead compression

    Energy Technology Data Exchange (ETDEWEB)

    Harrington, Joe [Sertco Industries, Inc., Okemah, OK (United States); Vazquez, Daniel [Hoerbiger Service Latin America Inc., Deerfield Beach, FL (United States); Jacobs, Denis Richard [Hoerbiger do Brasil Industria de Equipamentos, Cajamar, SP (Brazil)

    2012-07-01

    Over time, all wells experience a natural decline in oil and gas production. In gas wells, the major problems are liquid loading and low downhole differential pressures, which negatively impact total gas production. As a form of artificial lift, wellhead compressors help reduce the tubing pressure, resulting in gas velocities above the critical velocity needed to surface water, oil, and condensate, regaining lost production and increasing recoverable reserves. Best results come from reservoirs with high porosity, high permeability, high initial flow rates, low decline rates, and high total cumulative production. In oil wells, excessive annulus gas pressure tends to inhibit both oil and gas production. Wellhead compression packages can provide a cost-effective solution to these problems by reducing the system pressure in the tubing or annulus, allowing for an immediate increase in production rates. Wells furthest from the gathering compressor typically benefit the most from wellhead compression due to system pressure drops. Downstream compressors also benefit from higher suction pressures, reducing overall compression horsepower requirements. Special care must be taken in selecting the best equipment for these applications. The successful implementation of wellhead compression from an economical standpoint hinges on the testing, installation, and operation of the equipment. Key challenges, suggested equipment features designed to combat those challenges, and successful case histories throughout Latin America are discussed below. (author)

  18. Experimental Study on Optimization of Absorber Configuration in Compression/Absorption Heat Pump with NH{sub 3}/H{sub 2}O Mixture

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Ji Young; Kim, Min Sung; Baik, Young Jin; Park, Seong Ryong; Chang, Ki Chang; Ra, Ho Sang [Korea Institute of Energy Research, Daejeon (Korea, Republic of); Kim, Yong Chan [Korea University, Seoul (Korea, Republic of)

    2011-03-15

    This research aims to develop a compression/absorption hybrid heat pump system using an NH{sub 3}/H{sub 2}O mixture as the working fluid. The heat pump cycle is based on a combination of compression and absorption cycles. The cycle consists of two-stage compressors, absorbers, a desuperheater, solution heat exchangers, a solution pump, a rectifier, and a liquid/vapor separator. The compression/absorption hybrid heat pump was designed to produce hot water above 90 °C using the high temperature glide during two-phase heat transfer. The distinct characteristics of the nonlinear temperature profile should be considered to maximize the performance of the absorber. In this study, the performance of the absorber was investigated depending on the capacity, shape, and arrangement of the plate heat exchangers with regard to the concentration and distribution at the inlet of the absorber.

  19. Speech Compression

    Directory of Open Access Journals (Sweden)

    Jerry D. Gibson

    2016-06-01

    Full Text Available Speech compression is a key technology underlying digital cellular communications, VoIP, voicemail, and voice response systems. We trace the evolution of speech coding based on the linear prediction model, highlight the key milestones in speech coding, and outline the structures of the most important speech coding standards. Current challenges, future research directions, fundamental limits on performance, and the critical open problem of speech coding for emergency first responders are all discussed.

  20. Perceptual Image Compression in Telemedicine

    Science.gov (United States)

    Watson, Andrew B.; Ahumada, Albert J., Jr.; Eckstein, Miguel; Null, Cynthia H. (Technical Monitor)

    1996-01-01

    The next era of space exploration, especially the "Mission to Planet Earth" will generate immense quantities of image data. For example, the Earth Observing System (EOS) is expected to generate in excess of one terabyte/day. NASA confronts a major technical challenge in managing this great flow of imagery: in collection, pre-processing, transmission to earth, archiving, and distribution to scientists at remote locations. Expected requirements in most of these areas clearly exceed current technology. Part of the solution to this problem lies in efficient image compression techniques. For much of this imagery, the ultimate consumer is the human eye. In this case image compression should be designed to match the visual capacities of the human observer. We have developed three techniques for optimizing image compression for the human viewer. The first consists of a formula, developed jointly with IBM and based on psychophysical measurements, that computes a DCT quantization matrix for any specified combination of viewing distance, display resolution, and display brightness. This DCT quantization matrix is used in most recent standards for digital image compression (JPEG, MPEG, CCITT H.261). The second technique optimizes the DCT quantization matrix for each individual image, based on the contents of the image. This is accomplished by means of a model of visual sensitivity to compression artifacts. The third technique extends the first two techniques to the realm of wavelet compression. Together these techniques will allow systematic perceptual optimization of image compression in NASA imaging systems. Many of the image management challenges faced by NASA are mirrored in the field of telemedicine. Here too there are severe demands for transmission and archiving of large image databases, and the imagery is ultimately used primarily by human observers, such as radiologists.
In this presentation I will describe some of our preliminary explorations of the applications
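The DCT quantization step that the first technique tunes can be sketched generically. The 2x2 "block" and quantizer values below are illustrative toy numbers, not the psychophysically derived luminance table the record describes:

```python
def quantize(block, qmatrix, scale=1.0):
    """Divide each DCT coefficient by its (scaled) quantizer and round;
    larger quantizers discard more of that spatial frequency."""
    return [[round(c / (q * scale)) for c, q in zip(brow, qrow)]
            for brow, qrow in zip(block, qmatrix)]

def dequantize(qblock, qmatrix, scale=1.0):
    """Invert quantization (up to the rounding loss)."""
    return [[v * q * scale for v, q in zip(vrow, qrow)]
            for vrow, qrow in zip(qblock, qmatrix)]

coeffs = [[100.0, 50.0], [24.0, 7.0]]   # toy DCT coefficients
qm = [[10, 25], [25, 40]]               # toy quantization matrix
q = quantize(coeffs, qm)                # [[10, 2], [1, 0]]
```

Viewing-condition optimization amounts to choosing `qmatrix` (and `scale`) so the rounding loss stays below the observer's visual threshold at each frequency.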

  1. TOPSIS-based parametric optimization of compression ignition engine performance and emission behavior with bael oil blends for different EGR and charge inlet temperature.

    Science.gov (United States)

    Muniappan, Krishnamoorthi; Rajalingam, Malayalamurthi

    2018-05-02

    The demand for higher fuel efficiency and lower exhaust emissions from diesel engines can be met through the fuel being used and the engine operating parameters. In the present work, the effects of engine speed (RPM), injection timing (IT), injection pressure (IP), and compression ratio (CR) on the performance and emission characteristics of a compression ignition (CI) engine were investigated. A ternary test fuel of 65% diesel + 25% bael oil + 10% diethyl ether (DEE) was used, and tests were conducted at different charge inlet temperatures (CIT) and exhaust gas recirculation (EGR) rates. All experiments were conducted at the tradeoff engine load of 75%. When operating the diesel engine with 320 K CIT, brake thermal efficiency (BTE) improved to 28.6%, and carbon monoxide (CO) and hydrocarbon (HC) emissions were reduced to 0.025% and 12.5 ppm at 18 CR. Oxides of nitrogen (NOx) were reduced to 240 ppm at 1500 rpm in the 30% EGR mode. The Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) is frequently used for multi-factor selection, and the gray correlation analysis method is used to study the uncertainty of such systems.
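The TOPSIS ranking named in the title can be illustrated with a minimal implementation. The decision matrix, weights, and benefit/cost flags passed to it below are hypothetical stand-ins, not the paper's measured BTE/CO/HC/NOx data:

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives (rows) against criteria (columns).
    benefit[j] is True when higher is better for criterion j."""
    ncrit = len(weights)
    # vector-normalize each column, then apply the weights
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(ncrit)]
    v = [[weights[j] * row[j] / norms[j] for j in range(ncrit)] for row in matrix]
    ideal = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    worst = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    # relative closeness to the ideal solution, in [0, 1]
    return [math.dist(row, worst) / (math.dist(row, ideal) + math.dist(row, worst))
            for row in v]

# two alternatives, one benefit criterion (e.g. BTE) and one cost (e.g. NOx)
scores = topsis([[2, 1], [1, 2]], [0.5, 0.5], [True, False])
```

The alternative with the highest closeness score is selected; `math.dist` requires Python 3.8+.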

  2. Free compression tube. Applications

    Science.gov (United States)

    Rusu, Ioan

    2012-11-01

    During flight, a vehicle's propulsion energy must overcome gravity, ensure the displacement of air masses along the vehicle trajectory, and cover both the energy lost to friction between the solid surface and the air and the kinetic energy imparted to air masses reflected by the impact with the flying vehicle. Flight optimization, by increasing speed and reducing fuel consumption, has directed research in the field of aerodynamics. Flying vehicle shapes obtained through wind tunnel studies optimize the impact with air masses and the airflow along the vehicle. Through energy balance studies for vehicles in flight, the author Ioan Rusu directed his research toward reducing the energy lost at vehicle impact with air masses. In this respect, as compared to classical solutions of shaping flight vehicle aerodynamic surfaces to reduce the impact and friction with air masses, Ioan Rusu invented a device he named the free compression tube for rockets, registered with the State Office for Inventions and Trademarks of Romania, OSIM, deposit f 2011 0352. Mounted in front of a flight vehicle, it significantly reduces the impact and friction of air masses with the vehicle body: the incoming air masses contact the air inside the free compression tube, and air-solid friction is eliminated, replaced by air-to-air friction.

  3. DNABIT Compress – Genome compression algorithm

    OpenAIRE

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-01

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, “DNABIT Compress”, for DNA sequences based on a novel algorithm of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our ...

  4. DNABIT Compress – Genome compression algorithm

    Science.gov (United States)

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-01

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, “DNABIT Compress”, for DNA sequences based on a novel algorithm of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for larger genomes. Significantly better compression results show that the “DNABIT Compress” algorithm is the best among the remaining compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves the running time of all previous DNA compression programs. Assigning binary bits (Unique BIT CODE) to fragments of the DNA sequence (exact repeats, reverse repeats) is also a unique concept introduced in this algorithm for the first time in DNA compression. This proposed new algorithm achieves a compression ratio as low as 1.58 bits/base, where the existing best methods could not achieve a ratio below 1.72 bits/base. PMID:21383923

  5. Recognizable or Not: Towards Image Semantic Quality Assessment for Compression

    Science.gov (United States)

    Liu, Dong; Wang, Dandan; Li, Houqiang

    2017-12-01

    Traditionally, image compression was optimized for the pixel-wise fidelity or the perceptual quality of the compressed images given a bit-rate budget. But recently, compressed images are more and more utilized for automatic semantic analysis tasks such as recognition and retrieval. For these tasks, we argue that the optimization target of compression is no longer perceptual quality, but the utility of the compressed images in the given automatic semantic analysis task. Accordingly, we propose to evaluate the quality of the compressed images neither at pixel level nor at perceptual level, but at semantic level. In this paper, we make preliminary efforts towards image semantic quality assessment (ISQA), focusing on the task of optical character recognition (OCR) from compressed images. We propose a full-reference ISQA measure by comparing the features extracted from text regions of original and compressed images. We then propose to integrate the ISQA measure into an image compression scheme. Experimental results show that our proposed ISQA measure is much better than PSNR and SSIM in evaluating the semantic quality of compressed images; accordingly, adopting our ISQA measure to optimize compression for OCR leads to significant bit-rate saving compared to using PSNR or SSIM. Moreover, we perform a subjective test on text recognition from compressed images, and observe that our ISQA measure has high consistency with subjective recognizability. Our work explores new dimensions in image quality assessment, and demonstrates a promising direction to achieve higher compression ratio for specific semantic analysis tasks.
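The full-reference idea, scoring the compressed image by how well the semantic content survives rather than by pixel fidelity, can be sketched with a toy stand-in. The paper compares OCR features extracted from text regions; here the comparison is simulated directly on recognized text strings with a generic sequence-similarity score, which is only an illustration of the full-reference structure:

```python
from difflib import SequenceMatcher

def semantic_quality(text_from_original, text_from_compressed):
    """Similarity of the text recovered from the two images, in [0, 1]."""
    return SequenceMatcher(None, text_from_original, text_from_compressed).ratio()

# two OCR misreadings of 'O' as '0' lower the score only slightly
q = semantic_quality("COMPRESSION RATIO 10:1", "C0MPRESSION RATI0 10:1")
```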

  6. Dynamic Relative Compression, Dynamic Partial Sums, and Substring Concatenation

    DEFF Research Database (Denmark)

    Bille, Philip; Christiansen, Anders Roy; Cording, Patrick Hagge

    2017-01-01

    Given a static reference string R and a source string S, a relative compression of S with respect to R is an encoding of S as a sequence of references to substrings of R. Relative compression schemes are a classic model of compression and have recently proved very successful for compressing highly-repetitive massive data sets such as genomes and web-data. We initiate the study of relative compression in a dynamic setting where the compressed source string S is subject to edit operations. The goal is to maintain the compressed representation compactly, while supporting edits and allowing efficient random access to the (uncompressed) source string. We present new data structures that achieve optimal time for updates and queries while using space linear in the size of the optimal relative compression, for nearly all combinations of parameters. We also present solutions for restricted and extended sets ...
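A quadratic-time sketch of the static encoding these records build on, greedily writing S as (position, length) references into R. The papers' contribution is maintaining such an encoding under edits with efficient data structures; none of that machinery is shown here:

```python
def relative_compress(R, S):
    """Greedy relative compression of S against reference R (O(|R|*|S|))."""
    phrases = []
    i = 0
    while i < len(S):
        # find the longest prefix of S[i:] occurring anywhere in R
        best_pos, best_len = -1, 0
        for j in range(len(R)):
            l = 0
            while i + l < len(S) and j + l < len(R) and R[j + l] == S[i + l]:
                l += 1
            if l > best_len:
                best_pos, best_len = j, l
        if best_len == 0:
            phrases.append((S[i], 0))   # literal fallback: symbol absent from R
            i += 1
        else:
            phrases.append((best_pos, best_len))
            i += best_len
    return phrases

def decompress(R, phrases):
    """Expand (position, length) references; length 0 marks a literal."""
    return "".join(R[p:p + n] if n else p for p, n in phrases)
```

The compressed size is the number of phrases, which is small exactly when S is highly similar to R, as with two genomes of the same species.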

  7. Dynamic Relative Compression, Dynamic Partial Sums, and Substring Concatenation

    DEFF Research Database (Denmark)

    Bille, Philip; Cording, Patrick Hagge; Gørtz, Inge Li

    2016-01-01

    Given a static reference string R and a source string S, a relative compression of S with respect to R is an encoding of S as a sequence of references to substrings of R. Relative compression schemes are a classic model of compression and have recently proved very successful for compressing highly-repetitive massive data sets such as genomes and web-data. We initiate the study of relative compression in a dynamic setting where the compressed source string S is subject to edit operations. The goal is to maintain the compressed representation compactly, while supporting edits and allowing efficient random access to the (uncompressed) source string. We present new data structures that achieve optimal time for updates and queries while using space linear in the size of the optimal relative compression, for nearly all combinations of parameters. We also present solutions for restricted and extended sets ...

  8. A checkpoint compression study for high-performance computing systems

    Energy Technology Data Exchange (ETDEWEB)

    Ibtesham, Dewan [Univ. of New Mexico, Albuquerque, NM (United States). Dept. of Computer Science; Ferreira, Kurt B. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States). Scalable System Software Dept.; Arnold, Dorian [Univ. of New Mexico, Albuquerque, NM (United States). Dept. of Computer Science

    2015-02-17

    As high-performance computing systems continue to increase in size and complexity, higher failure rates and increased overheads for checkpoint/restart (CR) protocols have raised concerns about the practical viability of CR protocols for future systems. Previously, compression has proven to be a viable approach for reducing checkpoint data volumes and, thereby, reducing CR protocol overhead, leading to improved application performance. In this article, we further explore compression-based CR optimization by exploring its baseline performance and scaling properties, evaluating whether improved compression algorithms might lead to even better application performance, and comparing checkpoint compression against and alongside other software- and hardware-based optimizations. The highlights of our results are: (1) compression is a very viable CR optimization; (2) generic, text-based compression algorithms appear to perform near optimally for checkpoint data compression, and faster compression algorithms will not lead to better application performance; (3) compression-based optimizations fare well against and alongside other software-based optimizations; and (4) while hardware-based optimizations outperform software-based ones, they are not as cost effective.
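The baseline measurement described, compression ratio versus time across levels, can be sketched with zlib on synthetic, partly redundant data. Real checkpoints come from HPC applications, so the buffer and the resulting numbers here are only illustrative:

```python
import time
import zlib

# Synthetic "checkpoint": zeroed pages plus a repeated structured region
checkpoint = (b"\x00" * 4096 + bytes(range(256)) * 16) * 64

for level in (1, 6, 9):
    t0 = time.perf_counter()
    comp = zlib.compress(checkpoint, level)
    dt = time.perf_counter() - t0
    print("level %d: %.1fx in %.2f ms"
          % (level, len(checkpoint) / len(comp), dt * 1e3))
```

Comparing the ratio gained against the time spent per level is the core of the article's finding that faster compressors do not automatically improve application performance.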

  9. Metal Hydride Compression

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, Terry A. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Bowman, Robert [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Smith, Barton [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Anovitz, Lawrence [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Jensen, Craig [Hawaii Hydrogen Carriers LLC, Honolulu, HI (United States)

    2017-07-01

    Conventional hydrogen compressors often contribute over half of the cost of hydrogen stations, have poor reliability, and have insufficient flow rates for a mature FCEV market. Fatigue of their moving parts, including cracking of diaphragms and failure of seals, leads to failure in conventional compressors and is exacerbated by the repeated starts and stops expected at fueling stations. Furthermore, the conventional lubrication of these compressors with oil is generally unacceptable at fueling stations due to potential fuel contamination. Metal hydride (MH) technology offers a very good alternative to both conventional (mechanical) and newly developed (electrochemical, ionic liquid piston) methods of hydrogen compression. Advantages of MH compression include simplicity in design and operation, absence of moving parts, compactness, safety and reliability, and the possibility of utilizing waste industrial heat to power the compressor. Beyond conventional H2 supplies of pipelines or tanker trucks, another attractive scenario is the on-site generation, pressurization, and delivery of pure H2 at pressure (≥ 875 bar) for refueling vehicles at electrolysis, wind, or solar production facilities in distributed locations that are too remote or widely distributed for cost-effective bulk transport. MH hydrogen compression utilizes a reversible heat-driven interaction of a hydride-forming metal alloy with hydrogen gas to form the MH phase and is a promising process for hydrogen energy applications [1,2]. To deliver hydrogen continuously, each stage of the compressor must consist of multiple MH beds with synchronized hydrogenation & dehydrogenation cycles. Multistage pressurization allows achievement of greater compression ratios using reduced temperature swings compared to single-stage compressors. The objectives of this project are to investigate and demonstrate on a laboratory scale a two-stage MH hydrogen (H2) gas compressor with a

  10. Energy control in industry ''case of SAP Olympic''. ''Pre- energy diagnosis of SAP Olympic: optimization of consumption of electricity and compressed air''

    International Nuclear Information System (INIS)

    Zemba, Ouamsibiri Ernest

    2007-01-01

    This document is a report of a training course for a graduate university degree in electronic engineering. It tackles the important question of the competitiveness of Burkina Faso companies in the UEMOA zone. The cost of energy is very high for these companies, a situation that affects the distribution of their products, so they must optimize their energy consumption. A company such as SAP Olympic is supplied with energy at a primary voltage of 15 kV and a contractual demand of 750 kWh. The company works 24 hours a day, Monday to Saturday. Its energy consumption rose, a trend accentuated by the lack of measuring devices, controls, adjustments, and monitoring, and by the age of the equipment and installations; all of this leads to overconsumption and thus to penalties for exceeding the contractual demand. It would therefore be necessary to draw up an electrical diagram, to install numerical analyzers of electricity consumption, and to subscribe to a higher power level in order to save maintenance time, improve production availability and the safety of equipment and staff, and avoid penalties for exceeding the contractual demand [fr

  11. Exploring compression techniques for ROOT IO

    Science.gov (United States)

    Zhang, Z.; Bockelman, B.

    2017-10-01

    ROOT provides a flexible format used throughout the HEP community. The number of use cases - from an archival data format to end-stage analysis - has required a number of tradeoffs to be exposed to the user. For example, a high “compression level” in the traditional DEFLATE algorithm will result in a smaller file (saving disk space) at the cost of slower decompression (costing CPU time when read). At the scale of the LHC experiments, poor design choices can result in terabytes of wasted space or wasted CPU time. We explore and attempt to quantify some of these tradeoffs. Specifically, we explore: the use of alternative compression algorithms to optimize for read performance; an alternative method of compressing individual events to allow efficient random access; and a new approach to whole-file compression. Quantitative results are given, as well as guidance on how to make compression decisions for different use cases.
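The per-event compression idea, trading some ratio for random access, can be sketched with zlib. The event payloads are synthetic and ROOT's actual basket/branch layout is not modeled:

```python
import zlib

events = [(b"event-%03d" % i) * 40 for i in range(100)]

whole = zlib.compress(b"".join(events), 6)          # one stream: best ratio
per_event = [zlib.compress(e, 6) for e in events]   # one stream per event

# random access: inflate a single event without touching the others
event42 = zlib.decompress(per_event[42])
```

Compressing everything as one stream lets matches span events and usually yields the smaller file, but reading event 42 then requires decompressing everything before it; per-event streams pay a per-stream overhead and lose cross-event matches in exchange for O(1)-stream access.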

  12. A Compressive Superresolution Display

    KAUST Repository

    Heide, Felix; Gregson, James; Wetzstein, Gordon; Raskar, Ramesh; Heidrich, Wolfgang

    2014-01-01

    In this paper, we introduce a new compressive display architecture for superresolution image presentation that exploits co-design of the optical device configuration and compressive computation. Our display allows for superresolution, HDR, or glasses-free 3D presentation.

  13. A Compressive Superresolution Display

    KAUST Repository

    Heide, Felix

    2014-06-22

    In this paper, we introduce a new compressive display architecture for superresolution image presentation that exploits co-design of the optical device configuration and compressive computation. Our display allows for superresolution, HDR, or glasses-free 3D presentation.

  14. Microbunching and RF Compression

    International Nuclear Information System (INIS)

    Venturini, M.; Migliorati, M.; Ronsivalle, C.; Ferrario, M.; Vaccarezza, C.

    2010-01-01

    Velocity bunching (or RF compression) represents a promising technique complementary to magnetic compression for achieving the high peak current required in the linac drivers for FELs. Here we report on recent progress aimed at characterizing RF compression from the point of view of the microbunching instability. We emphasize the development of a linear theory for the gain function of the instability and its validation against macroparticle simulations, which together represent a useful tool in the evaluation of compression schemes for FEL sources.

  15. Mining compressing sequential problems

    NARCIS (Netherlands)

    Hoang, T.L.; Mörchen, F.; Fradkin, D.; Calders, T.G.K.

    2012-01-01

    Compression based pattern mining has been successfully applied to many data mining tasks. We propose an approach based on the minimum description length principle to extract sequential patterns that compress a database of sequences well. We show that mining compressing patterns is NP-Hard and

  16. Compressing Aviation Data in XML Format

    Science.gov (United States)

    Patel, Hemil; Lau, Derek; Kulkarni, Deepak

    2003-01-01

    Design, operations, and maintenance activities in aviation involve analysis of a variety of aviation data. This data is typically in disparate formats, making it difficult to use with different software packages. Use of a self-describing and extensible standard called XML provides a solution to this interoperability problem. XML provides a standardized language for describing the contents of an information stream, performing the same kind of definitional role for Web content as a database schema performs for relational databases. XML data can be easily customized for display using Extensible Style Sheets (XSL). While the self-describing nature of XML makes it easy to reuse, it also increases the size of data significantly. Therefore, transferring a dataset in XML form can decrease throughput and increase data transfer time significantly. It also increases storage requirements significantly. A natural solution to the problem is to compress the data using a suitable algorithm and transfer it in the compressed form. We found that XML-specific compressors such as Xmill and XMLPPM generally outperform traditional compressors. However, optimal use of Xmill requires discovery of the optimal options to use while running Xmill. This, in turn, depends on the nature of the data used. Manual discovery of optimal settings can require an engineer to experiment for weeks. We have devised an XML compression advisory tool that can analyze sample data files and recommend which compression tool would work best for the data and the optimal settings to be used with an XML compression tool.
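The reason XML compresses so well, and why compression recovers most of the markup overhead, can be shown with a generic compressor on repetitive markup. The record schema below is made up, not an aviation data format, and zlib stands in for the XML-specific tools named above:

```python
import zlib

# Self-describing tags repeat for every record, inflating the raw size
records = "".join(
    "<record><id>%d</id><value>%.2f</value></record>" % (i, i * 3.14)
    for i in range(1000)
)
xml = ("<dataset>%s</dataset>" % records).encode()

comp = zlib.compress(xml, 9)
ratio = len(xml) / len(comp)
print(len(xml), len(comp), "%.1fx" % ratio)
```

XML-specific compressors such as Xmill go further by grouping values from the same element before compressing, which is exactly the kind of option whose best setting depends on the data.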

  17. Compression for radiological images

    Science.gov (United States)

    Wilson, Dennis L.

    1992-07-01

    The viewing of radiological images has peculiarities that must be taken into account in the design of a compression technique. The images may be manipulated on a workstation to change the contrast, to change the center of the brightness levels that are viewed, and even to invert the images. Because of the possible consequences of losing information in a medical application, bit-preserving compression is used for the images used for diagnosis. However, for archiving, the images may be compressed to 10% of their original size. A compression technique based on the Discrete Cosine Transform (DCT) takes the viewing factors into account by compressing the changes in the local brightness levels. The compression technique is a variation of the CCITT JPEG compression that suppresses the blocking of the DCT except in areas of very high contrast.

  18. Radiological Image Compression

    Science.gov (United States)

    Lo, Shih-Chung Benedict

    The movement toward digital images in radiology presents the problem of how to conveniently and economically store, retrieve, and transmit the volume of digital images. Basic research into image data compression is necessary in order to move from a film-based department to an efficient digital-based department. Digital data compression technology consists of two types of compression technique: error-free and irreversible. Error-free image compression is desired; however, present techniques can only achieve compression ratios of 1.5:1 to 3:1, depending upon the image characteristics. Irreversible image compression can achieve a much higher compression ratio; however, the image reconstructed from the compressed data shows some difference from the original image. This dissertation studies both error-free and irreversible image compression techniques. In particular, some modified error-free techniques have been tested, and the recommended strategies for various radiological images are discussed. A full-frame bit-allocation irreversible compression technique has been derived. A total of 76 images, including CT head and body, and radiographs digitized to 2048 x 2048, 1024 x 1024, and 512 x 512, have been used to test this algorithm. The normalized mean-square-error (NMSE) of the difference image, defined as the difference between the original and the image reconstructed from a given compression ratio, is used as a global measurement of the quality of the reconstructed image. The NMSEs of a total of 380 reconstructed and 380 difference images are measured and the results tabulated. Three complex compression methods are also suggested to compress images with special characteristics. Finally, various parameters which would affect the quality of the reconstructed images are discussed. A proposed hardware compression module is given in the last chapter.

  19. [Medical image compression: a review].

    Science.gov (United States)

    Noreña, Tatiana; Romero, Eduardo

    2013-01-01

    Modern medicine is an increasingly complex, evidence-based activity; it draws on information from multiple sources: medical record text, sound recordings, and images and videos generated by a large number of devices. Medical imaging is one of the most important sources of information, since it offers comprehensive support of medical procedures for diagnosis and follow-up. However, the amount of information generated by image-capturing devices quickly exceeds storage availability in radiology services, generating additional costs for devices with greater storage capacity. Besides, the current trend of developing applications for cloud computing has limitations: even though virtual storage is available from anywhere, connections are made through the internet. In these scenarios, the optimal use of information necessarily requires powerful compression algorithms adapted to the needs of medical activity. In this paper we present a review of compression techniques used for image storage, and a critical analysis of them from the point of view of their use in clinical settings.

  20. Insertion profiles of 4 headless compression screws.

    Science.gov (United States)

    Hart, Adam; Harvey, Edward J; Lefebvre, Louis-Philippe; Barthelat, Francois; Rabiei, Reza; Martineau, Paul A

    2013-09-01

    In practice, the surgeon must rely on screw position (insertion depth) and tactile feedback from the screwdriver (insertion torque) to gauge compression. In this study, we identified the relationship between interfragmentary compression and these 2 factors. The Acutrak Standard, Acutrak Mini, Synthes 3.0, and Herbert-Whipple implants were tested using a polyurethane foam scaphoid model. A specialized testing jig simultaneously measured compression force, insertion torque, and insertion depth at half-screw-turn intervals until failure occurred. The peak compression occurs at an insertion depth of -3.1 mm, -2.8 mm, 0.9 mm, and 1.5 mm for the Acutrak Mini, Acutrak Standard, Herbert-Whipple, and Synthes screws respectively (insertion depth is positive when the screw is proud above the bone and negative when buried). The compression and insertion torque at a depth of -2 mm were found to be 113 ± 18 N and 0.348 ± 0.052 Nm for the Acutrak Standard, 104 ± 15 N and 0.175 ± 0.008 Nm for the Acutrak Mini, 78 ± 9 N and 0.245 ± 0.006 Nm for the Herbert-Whipple, and 67 ± 2 N and 0.233 ± 0.010 Nm for the Synthes headless compression screws. All 4 screws generated a sizable amount of compression (> 60 N) over a wide range of insertion depths. The compression at the commonly recommended insertion depth of -2 mm was not significantly different between screws; thus, implant selection should not be based on compression profile alone. Conically shaped screws (Acutrak) generated their peak compression when they were fully buried in the foam, whereas the shanked screws (Synthes and Herbert-Whipple) reached peak compression before they were fully inserted. Because insertion torque correlated poorly with compression, surgeons should avoid using tactile judgment of torque as a proxy for compression. Knowledge of the insertion profile may improve our understanding of the implants, provide a better basis for comparing screws, and enable the surgeon to optimize compression.
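The reported poor torque-compression correlation is the kind of claim quantified with a plain Pearson coefficient. The helper below is generic, and the perfectly linear sample pairs fed to it are illustrative, not the study's torque-compression measurements:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# perfectly linear toy pairs give r = 1; real torque-compression data would not
r = pearson([0.1, 0.2, 0.3], [40.0, 80.0, 120.0])
```

An r near zero for measured (torque, compression) pairs is what would justify the advice not to use tactile torque as a proxy for compression.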

  1. Optimizing Compressive Strength Characteristics of Hollow Building ...

    African Journals Online (AJOL)

    A range of 0%, 10%, 15%, 20% and 25% sand replacement with quarry dust was used in the cement: sand mix ratios of 1:6 and 1:8 for molding the blocks of size 450mm x 225mm x 225mm.These blocks were produced by machine compaction under a pressure of 3N/mm2. Results indicate that for mix ratio of 1:6 at 28 days ...

  2. Characterization of spectral compression of OFDM symbols using optical time lenses

    DEFF Research Database (Denmark)

    Røge, Kasper Meldgaard; Guan, Pengyu; Kjøller, Niels-Kristian

    2015-01-01

We present a detailed investigation of a double-time-lens subsystem for spectral compression of OFDM symbols. We derive optimized parameter settings by simulations and experimental characterization. The required chirp for OFDM spectral compression is very large.

  3. Compressed sensing & sparse filtering

    CERN Document Server

    Carmi, Avishy Y; Godsill, Simon J

    2013-01-01

This book is aimed at presenting concepts, methods and algorithms able to cope with undersampled and limited data. One such trend that recently gained popularity and to some extent revolutionised signal processing is compressed sensing. Compressed sensing builds upon the observation that many signals in nature are nearly sparse (or compressible, as they are normally referred to) in some domain, and consequently they can be reconstructed to within high accuracy from far fewer observations than traditionally held to be necessary. Apart from compressed sensing this book contains other related app...

  4. Compression in Working Memory and Its Relationship With Fluid Intelligence.

    Science.gov (United States)

    Chekaf, Mustapha; Gauvrit, Nicolas; Guida, Alessandro; Mathy, Fabien

    2018-06-01

    Working memory has been shown to be strongly related to fluid intelligence; however, our goal is to shed further light on the process of information compression in working memory as a determining factor of fluid intelligence. Our main hypothesis was that compression in working memory is an excellent indicator for studying the relationship between working-memory capacity and fluid intelligence because both depend on the optimization of storage capacity. Compressibility of memoranda was estimated using an algorithmic complexity metric. The results showed that compressibility can be used to predict working-memory performance and that fluid intelligence is well predicted by the ability to compress information. We conclude that the ability to compress information in working memory is the reason why both manipulation and retention of information are linked to intelligence. This result offers a new concept of intelligence based on the idea that compression and intelligence are equivalent problems. Copyright © 2018 Cognitive Science Society, Inc.
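The abstract above estimates the compressibility of memoranda with an algorithmic complexity metric. That metric is not specified here; a common practical proxy (an assumption of this sketch, not the authors' method) is the size of a sequence under a general-purpose compressor: more patterned sequences compress to fewer bytes.

```python
import random
import zlib

def compress_ratio(seq: str) -> float:
    """Compressed size over original size; a lower ratio means the
    sequence is more compressible (a rough upper bound on complexity)."""
    raw = seq.encode("utf-8")
    return len(zlib.compress(raw, 9)) / len(raw)

random.seed(0)
regular = "ABAB" * 32                                    # highly patterned memoranda
scrambled = "".join(random.choice("ABCDEFGH") for _ in range(128))
assert compress_ratio(regular) < compress_ratio(scrambled)
```

A patterned list compresses far better than a scrambled one of the same length, which is the sense in which "compression in working memory" can be operationalized.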

  5. Error Resilient Video Compression Using Behavior Models

    Directory of Open Access Journals (Sweden)

    Jacco R. Taal

    2004-03-01

Full Text Available Wireless and Internet video applications are inherently subjected to bit errors and packet errors, respectively. This is especially so if constraints on the end-to-end compression and transmission latencies are imposed. Therefore, it is necessary to develop methods to optimize the video compression parameters and the rate allocation of these applications that take into account residual channel bit errors. In this paper, we study the behavior of a predictive (interframe video encoder and model the encoder's behavior using only the statistics of the original input data and of the underlying channel prone to bit errors. The resulting data-driven behavior models are then used to carry out group-of-pictures partitioning and to control the rate of the video encoder in such a way that the overall quality of the decoded video with compression and channel errors is optimized.

  6. Data compression with applications to digital radiology

    International Nuclear Information System (INIS)

    Elnahas, S.E.

    1985-01-01

The structure of arithmetic codes is defined in terms of source parsing trees. The theoretical derivations of algorithms for the construction of optimal and sub-optimal structures are presented. The software simulation results demonstrate how arithmetic coding outperforms variable-length to variable-length coding. Linear predictive coding is presented for the compression of digital diagnostic images from several imaging modalities including computed tomography, nuclear medicine, ultrasound, and magnetic resonance imaging. The problem of designing optimal predictors is formulated and alternative solutions are discussed. The results indicate that noiseless compression factors between 1.7 and 7.4 can be achieved. With nonlinear predictive coding, noisy and noiseless compression techniques are combined in a novel way that may have a potential impact on picture archiving and communication systems in radiology. Adaptive fast discrete cosine transform coding systems are used as nonlinear block predictors, and optimal delta modulation systems are used as nonlinear sequential predictors. The offline storage requirements for archiving diagnostic images are reasonably reduced by the nonlinear block predictive coding. The online performance, however, seems to be bounded by that of the linear systems. The subjective quality of imperfect image reproductions from the cosine transform coding is promising and prompts future research on the compression of diagnostic images by transform coding systems and the clinical evaluation of these systems.
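The linear predictive coding idea above can be sketched in a few lines: predict each sample from already-coded neighbors and keep only the residual, which is exactly invertible. This is a minimal first-order illustration, not the thesis's optimal predictor design.

```python
import numpy as np

def lpc_residuals(samples):
    """First-order linear prediction: predict each sample by its left
    neighbor and keep the residual. For smooth image rows the residuals
    are small and low-entropy, so they entropy-code well."""
    x = np.asarray(samples, dtype=np.int64)
    pred = np.concatenate(([0], x[:-1]))
    return x - pred

def lpc_reconstruct(residuals):
    # Exact inverse of the predictor: noiseless (lossless) coding.
    return np.cumsum(residuals)

row = [100, 101, 103, 102, 102, 105]
assert list(lpc_reconstruct(lpc_residuals(row))) == row
```

The compression factors quoted in the abstract come from entropy-coding such residuals; the predictor itself loses nothing.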

  7. Anisotropic Concrete Compressive Strength

    DEFF Research Database (Denmark)

    Gustenhoff Hansen, Søren; Jørgensen, Henrik Brøner; Hoang, Linh Cao

    2017-01-01

    When the load carrying capacity of existing concrete structures is (re-)assessed it is often based on compressive strength of cores drilled out from the structure. Existing studies show that the core compressive strength is anisotropic; i.e. it depends on whether the cores are drilled parallel...

  8. Experiments with automata compression

    NARCIS (Netherlands)

    Daciuk, J.; Yu, S; Daley, M; Eramian, M G

    2001-01-01

    Several compression methods of finite-state automata are presented and evaluated. Most compression methods used here are already described in the literature. However, their impact on the size of automata has not been described yet. We fill that gap, presenting results of experiments carried out on

  9. An efficient compression scheme for bitmap indices

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Kesheng; Otoo, Ekow J.; Shoshani, Arie

    2004-04-13

When using an out-of-core indexing method to answer a query, it is generally assumed that the I/O cost dominates the overall query response time. Because of this, most research on indexing methods concentrates on reducing the sizes of indices. For bitmap indices, compression has been used for this purpose. However, in most cases, operations on these compressed bitmaps, mostly bitwise logical operations such as AND, OR, and NOT, spend more time in CPU than in I/O. To speed up these operations, a number of specialized bitmap compression schemes have been developed; the best known of which is the byte-aligned bitmap code (BBC). They are usually faster in performing logical operations than the general purpose compression schemes, but the time spent in CPU still dominates the total query response time. To reduce the query response time, we designed a CPU-friendly scheme named the word-aligned hybrid (WAH) code. In this paper, we prove that the sizes of WAH compressed bitmap indices are about two words per row for a large range of attributes. This size is smaller than typical sizes of commonly used indices, such as a B-tree. Therefore, WAH compressed indices are appropriate not only for low cardinality attributes but also for high cardinality attributes. In the worst case, the time to operate on compressed bitmaps is proportional to the total size of the bitmaps involved. The total size of the bitmaps required to answer a query on one attribute is proportional to the number of hits. These indicate that WAH compressed bitmap indices are optimal. To verify their effectiveness, we generated bitmap indices for four different datasets and measured the response time of many range queries. Tests confirm that sizes of compressed bitmap indices are indeed smaller than B-tree indices, and query processing with WAH compressed indices is much faster than with BBC compressed indices, projection indices and B-tree indices. In addition, we also verified that the average query response time...
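The word-aligned idea described above can be sketched as follows. This is a simplified illustration of the WAH scheme (the 31-payload-bits-per-32-bit-word layout is taken from the abstract; the tuple representation is an assumption for readability, a real implementation packs everything into machine words):

```python
def wah_encode(bits, w=31):
    """Simplified word-aligned hybrid (WAH) coding. Each group of w bits
    becomes a literal word, unless it is all 0s or all 1s, in which case
    it is merged into a fill word that counts consecutive uniform groups."""
    bits = list(bits) + [0] * (-len(bits) % w)       # pad to whole groups
    words = []
    for i in range(0, len(bits), w):
        g = bits[i:i + w]
        if len(set(g)) == 1:                          # uniform group
            if words and words[-1][0] == "fill" and words[-1][1] == g[0]:
                words[-1] = ("fill", g[0], words[-1][2] + 1)
            else:
                words.append(("fill", g[0], 1))
        else:
            words.append(("lit", g))
    return words

def wah_decode(words, w=31):
    out = []
    for word in words:
        if word[0] == "fill":
            out.extend([word[1]] * (w * word[2]))
        else:
            out.extend(word[1])
    return out

bitmap = [0] * 200 + [1, 0, 1] + [1] * 60
coded = wah_encode(bitmap)
assert wah_decode(coded)[:len(bitmap)] == bitmap      # lossless up to padding
assert len(coded) < -(-len(bitmap) // 31)             # fewer words than raw groups
```

Because fill words cover arbitrarily long uniform runs in a single word, logical operations can skip over them wholesale, which is where the CPU-friendliness claimed in the abstract comes from.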

  10. The Optimal Volume Fraction in Percutaneous Vertebroplasty Evaluated by Pain Relief, Cement Dispersion, and Cement Leakage: A Prospective Cohort Study of 130 Patients with Painful Osteoporotic Vertebral Compression Fracture in the Thoracolumbar Vertebra.

    Science.gov (United States)

    Sun, Hai-Bo; Jing, Xiao-Shan; Liu, Yu-Zeng; Qi, Ming; Wang, Xin-Kuan; Hai, Yong

    2018-06-01

specificity of 60.00%. The incidence of favorable cement distribution was 74.62% (97/130), and VF% was identified as an independent protective factor (adjusted OR 1.185, 95% CI 1.067-1.317, P = 0.002). The area under the receiver-operating characteristic curve of VF% was 0.686 (95% CI 0.571-0.802, P = 0.001); the optimal cutoff of VF% for favorable cement distribution was 19.78%, with a sensitivity of 86.60% and a specificity of 51.50%. In osteoporotic vertebral compression fracture with mild/moderate fracture severity at the single thoracolumbar level, the intravertebral cement volume of 4-6 mL could relieve pain rapidly. The optimal VF% was 19.78%, which could achieve satisfactory cement distribution. With the increase of VF%, the incidence of cement leakage would also increase. Copyright © 2018 Elsevier Inc. All rights reserved.

  11. Multiband and Lossless Compression of Hyperspectral Images

    Directory of Open Access Journals (Sweden)

    Raffaele Pizzolante

    2016-02-01

Full Text Available Hyperspectral images are widely used in several real-life applications. In this paper, we investigate the compression of hyperspectral images by considering different aspects, including the optimization of the computational complexity in order to allow implementations on limited hardware (e.g., hyperspectral sensors). We present an approach that relies on a three-dimensional predictive structure. Our predictive structure, 3D-MBLP, uses one or more previous bands as references to exploit the redundancies along the third dimension. The achieved results are comparable to, and often better than, those of other state-of-the-art lossless compression techniques for hyperspectral images.
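The previous-band prediction idea can be illustrated with a toy stand-in for 3D-MBLP (the gain/offset least-squares predictor below is an assumption for illustration, not the paper's actual predictor): predict a band from a co-registered reference band, then store only the small integer residual.

```python
import numpy as np

def interband_predict(band, ref):
    """Predict one band from a previously decoded reference band via a
    least-squares gain/offset fit; the integer residual is what would be
    entropy-coded. band == pred + residual, so the scheme is lossless."""
    a, b = np.polyfit(ref.ravel().astype(float), band.ravel().astype(float), 1)
    pred = np.rint(a * ref + b).astype(np.int64)
    return pred, band - pred

rng = np.random.default_rng(7)
ref = rng.integers(0, 256, (8, 8)).astype(np.int64)          # reference band
band = np.rint(0.9 * ref + 12).astype(np.int64) + rng.integers(-2, 3, (8, 8))
pred, res = interband_predict(band, ref)
assert np.array_equal(pred + res, band)      # residual coding is lossless
assert np.abs(res).max() <= 5                # residuals stay small
```

Adjacent spectral bands are strongly correlated, so the residuals have far lower entropy than the raw band values, which is the redundancy the third dimension offers.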

  12. Bunch Compression Stability Dependence on RF Parameters

    CERN Document Server

    Limberg, T

    2005-01-01

In present designs for FEL's with high electron peak currents and short bunch lengths, higher harmonic RF systems are often used to optimize the final longitudinal charge distributions. This opens degrees of freedom for the choice of RF phases and amplitudes to achieve the necessary peak current with a reasonable longitudinal bunch shape. It had been found empirically that different working points result in different tolerances for phases and amplitudes. We give an analytical expression for the sensitivity of the compression factor on phase and amplitude jitter for a bunch compression scheme involving two RF systems and two magnetic chicanes, as well as numerical results for the case of the European XFEL.

  13. Compressive laser ranging.

    Science.gov (United States)

    Babbitt, Wm Randall; Barber, Zeb W; Renner, Christoffer

    2011-12-15

    Compressive sampling has been previously proposed as a technique for sampling radar returns and determining sparse range profiles with a reduced number of measurements compared to conventional techniques. By employing modulation on both transmission and reception, compressive sensing in ranging is extended to the direct measurement of range profiles without intermediate measurement of the return waveform. This compressive ranging approach enables the use of pseudorandom binary transmit waveforms and return modulation, along with low-bandwidth optical detectors to yield high-resolution ranging information. A proof-of-concept experiment is presented. With currently available compact, off-the-shelf electronics and photonics, such as high data rate binary pattern generators and high-bandwidth digital optical modulators, compressive laser ranging can readily achieve subcentimeter resolution in a compact, lightweight package.
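The sparse-recovery step underlying compressive ranging can be sketched with a generic compressed-sensing solver (this is an illustrative sketch with made-up sizes and seed, not the paper's optical hardware pipeline): a pseudorandom binary sensing matrix plays the role of the transmit/receive modulation, and orthogonal matching pursuit recovers the sparse range profile.

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal matching pursuit: greedily recover a k-sparse profile x
    from compressive measurements y = Phi @ x."""
    residual, support = y.astype(float), []
    coef = np.zeros(0)
    for _ in range(k):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        sub = Phi[:, support]
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
        residual = y - sub @ coef
    x = np.zeros(Phi.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(3)
n, m, k = 64, 32, 2                                  # range bins, measurements, reflectors
Phi = rng.choice([-1.0, 1.0], (m, n)) / np.sqrt(m)   # pseudorandom binary patterns
x_true = np.zeros(n)
x_true[[5, 40]] = [1.0, 0.7]                         # two sparse reflectors
x_hat = omp(Phi, Phi @ x_true, k)
# The greedy fit must explain the measurements better than the zero profile.
assert np.linalg.norm(Phi @ x_hat - Phi @ x_true) < np.linalg.norm(Phi @ x_true)
```

The point of the abstract is that m can be far smaller than n when the range profile is sparse, which is what lets low-bandwidth detectors deliver high-resolution ranging.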

  14. The theory of temporal compression of intense pulses in a metal vapor

    Energy Technology Data Exchange (ETDEWEB)

    Shaw, M.J.; Crane, J.K.

    1990-11-16

    We examine compression of near-resonant pulses in metal vapor in the nonlinear regime. Our calculations examine nonlinear effects on compression of optimally-chirped pulses of various fluences. In addition, we compare model predictions with experimental results for compression of 4 nsec Nd:YAG pumped dye pulses.

  15. Mixed raster content segmentation, compression, transmission

    CERN Document Server

    Pavlidis, George

    2017-01-01

    This book presents the main concepts in handling digital images of mixed content, traditionally referenced as mixed raster content (MRC), in two main parts. The first includes introductory chapters covering the scientific and technical background aspects, whereas the second presents a set of research and development approaches to tackle key issues in MRC segmentation, compression and transmission. The book starts with a review of color theory and the mechanism of color vision in humans. In turn, the second chapter reviews data coding and compression methods so as to set the background and demonstrate the complexity involved in dealing with MRC. Chapter three addresses the segmentation of images through an extensive literature review, which highlights the various approaches used to tackle MRC segmentation. The second part of the book focuses on the segmentation of color images for optimized compression, including multi-layered decomposition and representation of MRC and the processes that can be employed to op...

  16. Optical pulse compression

    International Nuclear Information System (INIS)

    Glass, A.J.

    1975-01-01

    The interest in using large lasers to achieve a very short and intense pulse for generating fusion plasma has provided a strong impetus to reexamine the possibilities of optical pulse compression at high energy. Pulse compression allows one to generate pulses of long duration (minimizing damage problems) and subsequently compress optical pulses to achieve the short pulse duration required for specific applications. The ideal device for carrying out this program has not been developed. Of the two approaches considered, the Gires--Tournois approach is limited by the fact that the bandwidth and compression are intimately related, so that the group delay dispersion times the square of the bandwidth is about unity for all simple Gires--Tournois interferometers. The Treacy grating pair does not suffer from this limitation, but is inefficient because diffraction generally occurs in several orders and is limited by the problem of optical damage to the grating surfaces themselves. Nonlinear and parametric processes were explored. Some pulse compression was achieved by these techniques; however, they are generally difficult to control and are not very efficient. (U.S.)

  17. Isentropic Compression of Argon

    International Nuclear Information System (INIS)

    Oona, H.; Solem, J.C.; Veeser, L.R.; Ekdahl, C.A.; Rodriquez, P.J.; Younger, S.M.; Lewis, W.; Turley, W.D.

    1997-01-01

We are studying the transition of argon from an insulator to a conductor by compressing the frozen gas isentropically to pressures at which neighboring atomic orbitals overlap sufficiently to allow some electron motion between atoms. Argon and the other rare gases have closed electron shells and therefore remain monatomic, even when they solidify. Their simple structure makes it likely that any measured change in conductivity is due to changes in the atomic structure, not in molecular configuration. As the crystal is compressed the band gap closes, allowing increased conductivity. We have begun research to determine the conductivity at high pressures, and it is our intention to determine the compression at which the crystal becomes a metal.

  18. Pulsed Compression Reactor

    Energy Technology Data Exchange (ETDEWEB)

    Roestenberg, T. [University of Twente, Enschede (Netherlands)

    2012-06-07

The advantages of the Pulsed Compression Reactor (PCR) over the internal combustion engine-type chemical reactors are briefly discussed. Over the last four years a project concerning the fundamentals of the PCR technology has been performed by the University of Twente, Enschede, Netherlands. In order to assess the feasibility of applying the PCR principle to the conversion of methane to syngas, several fundamental questions needed to be answered. Two important questions that relate to the applicability of the PCR for any process are: how large is the heat transfer rate from a rapidly compressed and expanded volume of gas, and how does this heat transfer rate compare to the energy contained in the compressed gas? And: can stable operation with a completely free piston, as intended with the PCR, be achieved?

  19. Medullary compression syndrome

    International Nuclear Information System (INIS)

    Barriga T, L.; Echegaray, A.; Zaharia, M.; Pinillos A, L.; Moscol, A.; Barriga T, O.; Heredia Z, A.

    1994-01-01

The authors made a retrospective study of 105 patients treated in the Radiotherapy Department of the National Institute of Neoplasmic Diseases from 1973 to 1992. The objective of this evaluation was to determine the influence of radiotherapy in patients with medullary compression syndrome in aspects concerning pain palliation and improvement of functional impairment. Treatment sheets of patients with medullary compression were reviewed: 32 of 39 patients (82%) came to the hospital on their own and continued walking after treatment; 8 of 66 patients (12%) who came in a wheelchair or were bedridden could mobilize on their own after treatment; 41 patients (64%) had partial alleviation of pain after treatment. Functional improvement was also observed in those who arrived on their own and whose status did not change. It is concluded that radiotherapy offers palliative benefit in patients with medullary compression syndrome. (authors). 20 refs., 5 figs., 6 tabs

  20. Medical Image Compression Based on Vector Quantization with Variable Block Sizes in Wavelet Domain

    OpenAIRE

    Jiang, Huiyan; Ma, Zhiyuan; Hu, Yang; Yang, Benqiang; Zhang, Libo

    2012-01-01

    An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnostic-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with vari...

  1. Graph Compression by BFS

    Directory of Open Access Journals (Sweden)

    Alberto Apostolico

    2009-08-01

Full Text Available The Web Graph is a large-scale graph that does not fit in main memory, so lossless compression methods have been proposed for it. This paper introduces a compression scheme that combines efficient storage with fast retrieval of the information in a node. The scheme exploits the properties of the Web Graph without assuming an ordering of the URLs, so that it may be applied to more general graphs. Tests on several datasets in common use achieve space savings of about 10% over existing methods.
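The locality idea behind traversal-based graph compression can be sketched as follows (an illustrative sketch, not Apostolico's actual scheme): relabel nodes in BFS order so that neighbors tend to receive nearby ids, then store each sorted adjacency list as a first id plus small gaps, which code cheaply.

```python
from collections import deque

def bfs_order(adj, root=0):
    """Visit order of a breadth-first traversal; used to relabel nodes so
    that adjacent nodes get numerically close identifiers."""
    order, seen, q = [], {root}, deque([root])
    while q:
        u = q.popleft()
        order.append(u)
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                q.append(v)
    return order

def gap_encode(adj):
    """Store each sorted adjacency list as its first id plus deltas."""
    return {u: [s[0]] + [b - a for a, b in zip(s, s[1:])]
            for u, s in ((u, sorted(n)) for u, n in adj.items())}

def gap_decode(gaps):
    out = {}
    for u, g in gaps.items():
        s = [g[0]]
        for d in g[1:]:
            s.append(s[-1] + d)
        out[u] = s
    return out

adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
assert bfs_order(adj) == [0, 1, 2, 3]
assert gap_decode(gap_encode(adj)) == {u: sorted(n) for u, n in adj.items()}
```

Gap encoding is the standard trick in Web-graph compression; a BFS relabeling supplies the locality that URL ordering would otherwise provide, which matches the abstract's claim of not assuming a URL ordering.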

  2. Compressible generalized Newtonian fluids

    Czech Academy of Sciences Publication Activity Database

    Málek, Josef; Rajagopal, K.R.

    2010-01-01

    Roč. 61, č. 6 (2010), s. 1097-1110 ISSN 0044-2275 Institutional research plan: CEZ:AV0Z20760514 Keywords : power law fluid * uniform temperature * compressible fluid Subject RIV: BJ - Thermodynamics Impact factor: 1.290, year: 2010

  3. Temporal compressive sensing systems

    Science.gov (United States)

    Reed, Bryan W.

    2017-12-12

    Methods and systems for temporal compressive sensing are disclosed, where within each of one or more sensor array data acquisition periods, one or more sensor array measurement datasets comprising distinct linear combinations of time slice data are acquired, and where mathematical reconstruction allows for calculation of accurate representations of the individual time slice datasets.

  4. Compression of Infrared images

    DEFF Research Database (Denmark)

    Mantel, Claire; Forchhammer, Søren

    2017-01-01

    best for bits-per-pixel rates below 1.4 bpp, while HEVC obtains best performance in the range 1.4 to 6.5 bpp. The compression performance is also evaluated based on maximum errors. These results also show that HEVC can achieve a precision of 1°C with an average of 1.3 bpp....

  5. Gas compression infrared generator

    International Nuclear Information System (INIS)

    Hug, W.F.

    1980-01-01

    A molecular gas is compressed in a quasi-adiabatic manner to produce pulsed radiation during each compressor cycle when the pressure and temperature are sufficiently high, and part of the energy is recovered during the expansion phase, as defined in U.S. Pat. No. 3,751,666; characterized by use of a cylinder with a reciprocating piston as a compressor

  6. Blind compressed sensing image reconstruction based on alternating direction method

    Science.gov (United States)

    Liu, Qinan; Guo, Shuxu

    2018-04-01

In order to solve the problem of how to reconstruct the original image when the sparse basis is unknown, this paper proposes an image reconstruction method based on a blind compressed sensing model. In this model, the image signal is regarded as the product of a sparse coefficient matrix and a dictionary matrix. Based on the existing blind compressed sensing theory, the optimal solution is found by an alternating minimization method. The proposed method addresses the difficulty of fixing a sparse basis in compressed sensing, suppresses noise, and improves the quality of the reconstructed image. This method ensures that the blind compressed sensing theory has a unique solution and can recover the original image signal from a complex environment with strong adaptability. The experimental results show that the image reconstruction algorithm based on blind compressed sensing proposed in this paper can recover high quality image signals under the condition of under-sampling.
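The alternating-minimization structure common to blind compressed sensing can be sketched on a toy factorization Y ≈ D X with sparse X and an unknown dictionary D (this is a generic illustration with assumed step sizes and penalties, not this paper's algorithm): alternate one soft-thresholded gradient step on the coefficients with a least-squares dictionary update.

```python
import numpy as np

def blind_cs(Y, k, iters=60, lam=0.05, seed=0):
    """Toy alternating minimization for Y ~= D @ X with sparse X and an
    unknown dictionary D. lam, iters and the ISTA step are illustrative."""
    rng = np.random.default_rng(seed)
    m, n = Y.shape
    D = rng.standard_normal((m, k))
    X = np.zeros((k, n))
    for _ in range(iters):
        # Sparse-coding step: one ISTA iteration (gradient + soft threshold).
        step = 1.0 / (np.linalg.norm(D, 2) ** 2 + 1e-9)
        Z = X - step * (D.T @ (D @ X - Y))
        X = np.sign(Z) * np.maximum(np.abs(Z) - lam * step, 0.0)
        # Dictionary step: least squares, then renormalize columns while
        # rescaling X so the product D @ X is unchanged.
        D = Y @ np.linalg.pinv(X)
        norms = np.maximum(np.linalg.norm(D, axis=0), 1e-9)
        D /= norms
        X *= norms[:, None]
    return D, X

rng = np.random.default_rng(1)
D0 = rng.standard_normal((8, 5))
X0 = np.zeros((5, 20))
X0[rng.integers(0, 5, 20), np.arange(20)] = rng.standard_normal(20)
Y = D0 @ X0
D, X = blind_cs(Y, k=5)
assert np.linalg.norm(D @ X - Y) < np.linalg.norm(Y)   # fit beats the zero model
```

Jointly estimating D and X is what distinguishes blind compressed sensing from classical compressed sensing, where the sparsifying basis is fixed in advance.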

  7. Effects on MR images compression in tissue classification quality

    International Nuclear Information System (INIS)

    Santalla, H; Meschino, G; Ballarin, V

    2007-01-01

It is known that image compression is required to optimize storage in memory. Moreover, transmission speed can be significantly improved. Lossless compression is used without controversy in medicine, though its benefits are limited. With lossy compression the image cannot be totally recovered; only an approximation is obtained. At this point the definition of 'quality' is essential. What do we understand by 'quality'? How can we evaluate a compressed image? Quality in images is an attribute with several definitions and interpretations, which ultimately depend on the subsequent use intended for them. This work proposes a quantitative analysis of quality for lossy compressed Magnetic Resonance (MR) images, and of its influence on automatic tissue classification performed with these images.

  8. Compressible Fluid Suspension Performance Testing

    National Research Council Canada - National Science Library

    Hoogterp, Francis

    2003-01-01

    ... compressible fluid suspension system that was designed and installed on the vehicle by DTI. The purpose of the tests was to evaluate the possible performance benefits of the compressible fluid suspension system...

  9. Transthoracic impedance for the monitoring of quality of manual chest compression during cardiopulmonary resuscitation.

    Science.gov (United States)

    Zhang, Hehua; Yang, Zhengfei; Huang, Zitong; Chen, Bihua; Zhang, Lei; Li, Heng; Wu, Baoming; Yu, Tao; Li, Yongqin

    2012-10-01

The quality of cardiopulmonary resuscitation (CPR), especially adequate compression depth, is associated with return of spontaneous circulation (ROSC) and is therefore recommended to be measured routinely. In the current study, we investigated the relationship between changes of transthoracic impedance (TTI) measured through the defibrillation electrodes, chest compression depth and coronary perfusion pressure (CPP) in a porcine model of cardiac arrest. In 14 male pigs weighing between 28 and 34 kg, ventricular fibrillation (VF) was electrically induced and untreated for 6 min. Animals were randomized to either optimal or suboptimal chest compression group. Optimal depth of manual compression in 7 pigs was defined as a decrease of 25% (50 mm) in anterior posterior diameter of the chest, while suboptimal compression was defined as 70% of the optimal depth (35 mm). After 2 min of chest compression, defibrillation was attempted with a 120-J rectilinear biphasic shock. There were no differences in baseline measurements between groups. All animals had ROSC after optimal compressions; this contrasted with suboptimal compressions, after which only 2 of the animals had ROSC (100% vs. 28.57%, p=0.021). The correlation coefficient was 0.89 between TTI amplitude and compression depth (p < 0.001), and TTI amplitude also correlated with compression depth and CPP in this porcine model of cardiac arrest. The TTI measured from defibrillator electrodes therefore has the potential to serve as an indicator to monitor the quality of chest compression and estimate CPP during CPR. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  10. LZ-Compressed String Dictionaries

    OpenAIRE

    Arz, Julian; Fischer, Johannes

    2013-01-01

    We show how to compress string dictionaries using the Lempel-Ziv (LZ78) data compression algorithm. Our approach is validated experimentally on dictionaries of up to 1.5 GB of uncompressed text. We achieve compression ratios often outperforming the existing alternatives, especially on dictionaries containing many repeated substrings. Our query times remain competitive.
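The LZ78 parse that underlies the dictionary structure above can be sketched in a few lines (the parse itself, not Arz and Fischer's query structure): the text is split into novel phrases, each encoded as a reference to an earlier phrase plus one new character.

```python
def lz78_encode(s):
    """LZ78: parse the text into novel phrases; emit (phrase_index, next_char)
    pairs, where index 0 denotes the empty phrase."""
    d, out, cur = {}, [], ""
    for ch in s:
        if cur + ch in d:
            cur += ch                        # extend the current known phrase
        else:
            out.append((d.get(cur, 0), ch))  # novel phrase: reference + new char
            d[cur + ch] = len(d) + 1
            cur = ""
    if cur:                                  # flush a trailing known phrase
        out.append((d[cur], ""))
    return out

def lz78_decode(pairs):
    phrases, out = [""], []
    for idx, ch in pairs:
        phrases.append(phrases[idx] + ch)
        out.append(phrases[-1])
    return "".join(out)

text = "abracadabra abracadabra"
assert lz78_decode(lz78_encode(text)) == text
```

Repeated substrings collapse into single phrase references, which is why the paper reports the best ratios on dictionaries with many repeated substrings.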

  11. Tree compression with top trees

    DEFF Research Database (Denmark)

    Bille, Philip; Gørtz, Inge Li; Landau, Gad M.

    2013-01-01

    We introduce a new compression scheme for labeled trees based on top trees [3]. Our compression scheme is the first to simultaneously take advantage of internal repeats in the tree (as opposed to the classical DAG compression that only exploits rooted subtree repeats) while also supporting fast...

  12. Tree compression with top trees

    DEFF Research Database (Denmark)

    Bille, Philip; Gørtz, Inge Li; Landau, Gad M.

    2015-01-01

    We introduce a new compression scheme for labeled trees based on top trees. Our compression scheme is the first to simultaneously take advantage of internal repeats in the tree (as opposed to the classical DAG compression that only exploits rooted subtree repeats) while also supporting fast...

  13. Low-Complexity Compression Algorithm for Hyperspectral Images Based on Distributed Source Coding

    Directory of Open Access Journals (Sweden)

    Yongjian Nian

    2013-01-01

Full Text Available A low-complexity compression algorithm for hyperspectral images based on distributed source coding (DSC is proposed in this paper. The proposed distributed compression algorithm can realize both lossless and lossy compression, which is implemented by applying a scalar quantization strategy to the original hyperspectral images followed by distributed lossless compression. A multilinear regression model is introduced for distributed lossless compression in order to improve the quality of side information. The optimal quantization step is determined according to the restriction of correct DSC decoding, which makes the proposed algorithm achieve near-lossless compression. Moreover, an effective rate distortion algorithm is introduced for the proposed algorithm to achieve a low bit rate. Experimental results show that the compression performance of the proposed algorithm is competitive with that of the state-of-the-art compression algorithms for hyperspectral images.

  14. Digital cinema video compression

    Science.gov (United States)

    Husak, Walter

    2003-05-01

    The Motion Picture Industry began a transition from film based distribution and projection to digital distribution and projection several years ago. Digital delivery and presentation offers the prospect to increase the quality of the theatrical experience for the audience, reduce distribution costs to the distributors, and create new business opportunities for the theater owners and the studios. Digital Cinema also presents an opportunity to provide increased flexibility and security of the movies for the content owners and the theater operators. Distribution of content via electronic means to theaters is unlike any of the traditional applications for video compression. The transition from film-based media to electronic media represents a paradigm shift in video compression techniques and applications that will be discussed in this paper.

  15. Fingerprints in compressed strings

    DEFF Research Database (Denmark)

    Bille, Philip; Gørtz, Inge Li; Cording, Patrick Hagge

    2017-01-01

    In this paper we show how to construct a data structure for a string S of size N compressed into a context-free grammar of size n that supports efficient Karp–Rabin fingerprint queries to any substring of S. That is, given indices i and j, the answer to a query is the fingerprint of the substring S......[i,j]. We present the first O(n) space data structures that answer fingerprint queries without decompressing any characters. For Straight Line Programs (SLP) we get O(log⁡N) query time, and for Linear SLPs (an SLP derivative that captures LZ78 compression and its variations) we get O(log⁡log⁡N) query time...

  16. Lossless medical image compression with a hybrid coder

    Science.gov (United States)

    Way, Jing-Dar; Cheng, Po-Yuen

    1998-10-01

The volume of medical image data is expected to increase dramatically in the next decade due to the large use of radiological images for medical diagnosis. The economics of distributing medical images dictate that data compression is essential. While lossy image compression exists, the medical image must be recorded and transmitted losslessly before it reaches the users, to avoid wrong diagnoses due to lost image data. Therefore, a low complexity, high performance lossless compression scheme that can approach the theoretic bound and operate in near real-time is needed. In this paper, we propose a hybrid image coder to compress the digitized medical image without any data loss. The hybrid coder consists of two key components: an embedded wavelet coder and a lossless run-length coder. In this system, the medical image is compressed with the lossy wavelet coder first, and the residual image between the original and the compressed ones is further compressed with the run-length coder. Several optimization schemes have been used in these coders to increase the coding performance. It is shown that the proposed algorithm achieves a higher compression ratio than run-length entropy coders such as arithmetic, Huffman and Lempel-Ziv coders.
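The lossy-stage-plus-lossless-residual structure of such a hybrid coder can be sketched with coarse quantization standing in for the embedded wavelet coder (an assumption for brevity; the point is only the two-stage decomposition and its exact invertibility):

```python
import numpy as np

def hybrid_encode(img, q=16):
    """Two-stage hybrid coding sketch: a lossy approximation plus the
    residual. Both parts would be entropy-coded separately; keeping the
    residual makes the overall scheme lossless."""
    lossy = (img // q) * q          # stage 1: lossy approximation (toy wavelet stand-in)
    residual = img - lossy          # stage 2: small-valued, highly compressible residual
    return lossy, residual

def hybrid_decode(lossy, residual):
    return lossy + residual         # exact, lossless reconstruction

img = np.array([[12, 200, 37], [90, 91, 255]], dtype=np.int64)
lossy, res = hybrid_encode(img)
assert np.array_equal(hybrid_decode(lossy, res), img)
assert 0 <= res.min() and res.max() < 16
```

The residual is confined to a narrow value range, so a run-length or entropy coder handles it cheaply, which is how the hybrid coder achieves lossless output at a high overall ratio.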

  17. Blind compressive sensing dynamic MRI

    Science.gov (United States)

    Lingala, Sajan Goud; Jacob, Mathews

    2013-01-01

    We propose a novel blind compressive sensing (BCS) framework to recover dynamic magnetic resonance images from undersampled measurements. This scheme models the dynamic signal as a sparse linear combination of temporal basis functions, chosen from a large dictionary. In contrast to classical compressed sensing, the BCS scheme simultaneously estimates the dictionary and the sparse coefficients from the undersampled measurements. Apart from the sparsity of the coefficients, the key difference between the BCS scheme and current low rank methods is the non-orthogonal nature of the dictionary basis functions. Since the number of degrees of freedom of the BCS model is smaller than that of the low-rank methods, it provides improved reconstructions at high acceleration rates. We formulate the reconstruction as a constrained optimization problem; the objective function is the linear combination of a data consistency term and a sparsity promoting ℓ1 prior on the coefficients. The Frobenius norm dictionary constraint is used to avoid scale ambiguity. We introduce a simple and efficient majorize-minimize algorithm, which decouples the original criterion into three simpler subproblems. An alternating minimization strategy is used, where we cycle through the minimization of the three simpler problems. This algorithm is seen to be considerably faster than approaches that alternate between sparse coding and dictionary estimation, as well as the extension of the K-SVD dictionary learning scheme. The use of the ℓ1 penalty and Frobenius norm dictionary constraint enables the attenuation of insignificant basis functions compared to the ℓ0 norm and column norm constraint assumed in most dictionary learning algorithms; this is especially important since the number of basis functions that can be reliably estimated is restricted by the available measurements. We also observe that the proposed scheme is more robust to local minima compared to the K-SVD method, which relies on greedy sparse coding.

  18. WSNs Microseismic Signal Subsection Compression Algorithm Based on Compressed Sensing

    Directory of Open Access Journals (Sweden)

    Zhouzhou Liu

    2015-01-01

    Full Text Available To address the low compression ratios and high communication energy consumption of wireless-network microseismic monitoring, this paper proposes a segmented compression algorithm based on the characteristics of microseismic signals and compressed sensing (CS) theory, applied during transmission. The algorithm segments the collected data according to the number of nonzero elements; by reducing the number of combinations of nonzero elements within each segment it improves the accuracy of signal reconstruction, while exploiting compressed sensing theory to achieve a high compression ratio. Experimental results show that, with the quantum chaos immune clone reconstruction (Q-CSDR) algorithm used for reconstruction, when the signal sparsity is greater than 40 and the signal is compressed at a compression ratio above 0.4, the mean square error is less than 0.01, prolonging the network lifetime by a factor of two.

  19. Compressed sensing electron tomography

    International Nuclear Information System (INIS)

    Leary, Rowan; Saghi, Zineb; Midgley, Paul A.; Holland, Daniel J.

    2013-01-01

    The recent mathematical concept of compressed sensing (CS) asserts that a small number of well-chosen measurements can suffice to reconstruct signals that are amenable to sparse or compressible representation. In addition to powerful theoretical results, the principles of CS are being exploited increasingly across a range of experiments to yield substantial performance gains relative to conventional approaches. In this work we describe the application of CS to electron tomography (ET) reconstruction and demonstrate the efficacy of CS–ET with several example studies. Artefacts present in conventional ET reconstructions such as streaking, blurring of object boundaries and elongation are markedly reduced, and robust reconstruction is shown to be possible from far fewer projections than are normally used. The CS–ET approach enables more reliable quantitative analysis of the reconstructions as well as novel 3D studies from extremely limited data. - Highlights: • Compressed sensing (CS) theory and its application to electron tomography (ET) is described. • The practical implementation of CS–ET is outlined and its efficacy demonstrated with examples. • High fidelity tomographic reconstruction is possible from a small number of images. • The CS–ET reconstructions can be more reliably segmented and analysed quantitatively. • CS–ET is applicable to different image content by choice of an appropriate sparsifying transform

  20. Velocity and Magnetic Compressions in FEL Drivers

    CERN Document Server

    Serafini, L

    2005-01-01

    We will compare merits and issues of these two techniques suitable for increasing the peak current of high brightness electron beams. The typical range of applicability is low energy for velocity bunching and middle to high energy for magnetic compression. Velocity bunching is free from CSR effects but requires very high RF stability (time jitters), as well as dedicated additional focusing and great care in the beam transport: it is very well understood theoretically and numerical simulations are pretty straightforward. Several experiments on velocity bunching have been performed in the past few years: none of them, nevertheless, used a photoinjector designed and optimized for that purpose. Magnetic compression is a much more consolidated technique: CSR effects and micro-bunch instabilities are its main drawbacks. There is a large operational experience with chicanes used as magnetic compressors and their theoretical understanding is quite deep, though numerical simulations of real devices are still cha...

  1. Isentropic compression with the SPHINX machine

    International Nuclear Information System (INIS)

    D'almeida, T; Lasalle, F.; Morell, A.; Grunenwald, J.; Zucchini, F.; Loyen, A.

    2013-01-01

    The SPHINX machine is a pulsed high-power generator (class 6 MA, 1 μs) that can be used in the framework of inertial fusion for isentropic compression experiments. The magnetic field created by the current pulse generates a quasi-isentropic compression of a metallic liner. In order to optimize this mode of operation, the current pulse is shaped by a device called the DLCM (Dynamic Load Current Multiplier), which both increases the amplitude of the current injected into the liner and shapes it. Some preliminary results concerning an aluminium liner are reported. The velocity of the liner's internal surface during implosion was measured by interferometry over a fairly long trajectory, and the results agree well with simulations based on the experimental value of the current delivered to the liner.

  2. Time-Space Topology Optimization

    DEFF Research Database (Denmark)

    Jensen, Jakob Søndergaard

    2008-01-01

    A method for space-time topology optimization is outlined. The space-time optimization strategy produces structures with optimized material distributions that vary in space and in time. The method is demonstrated for one-dimensional wave propagation in an elastic bar that has a time-dependent Young's modulus and is subjected to a transient load. In the example an optimized dynamic structure is demonstrated that compresses a propagating Gauss pulse.

  3. Compressive Transient Imaging

    KAUST Repository

    Sun, Qilin

    2017-04-01

    High resolution transient/3D imaging technology is of high interest in both scientific research and commercial applications. Currently, all transient imaging methods suffer from low resolution or time-consuming mechanical scanning. We propose a new method based on TCSPC and compressive sensing to achieve high resolution transient imaging with a capture process of only a few seconds. A picosecond laser sends a series of equal-interval pulses while the synchronized SPAD camera's detection gate window has a precise phase delay at each cycle. After capturing enough points, we are able to assemble the whole signal. By inserting a DMD device into the system, we can modulate all the frames of data using binary random patterns in order to later reconstruct a super resolution transient/3D image. Because the low fill factor of the SPAD sensor makes the compressive sensing scenario ill-conditioned, we designed and fabricated a diffractive microlens array. We propose a new CS reconstruction algorithm that simultaneously denoises measurements suffering from Poisson noise. Instead of a single SPAD sensor, we chose a SPAD array because it drastically reduces the required number of measurements and the reconstruction time; it is not easy to reconstruct a high resolution image with only a single sensor, whereas an array needs only to reconstruct small patches from a few measurements. In this thesis, we evaluated the reconstruction methods using both clean measurements and versions corrupted by Poisson noise. The results show how the integration over the layers influences the image quality, and our algorithm works well when the measurements suffer from non-trivial Poisson noise. It is a breakthrough in the areas of both transient imaging and compressive sensing.

  4. Fast Compressive Tracking.

    Science.gov (United States)

    Zhang, Kaihua; Zhang, Lei; Yang, Ming-Hsuan

    2014-10-01

    It is a challenging task to develop effective and efficient appearance models for robust object tracking due to factors such as pose variation, illumination change, occlusion, and motion blur. Existing online tracking algorithms often update models with samples from observations in recent frames. Although much success has been demonstrated, numerous issues remain to be addressed. First, while these adaptive appearance models are data-dependent, there is not a sufficient amount of data for online algorithms to learn at the outset. Second, online tracking algorithms often encounter the drift problem: as a result of self-taught learning, misaligned samples are likely to be added and degrade the appearance models. In this paper, we propose a simple yet effective and efficient tracking algorithm with an appearance model based on features extracted from a multiscale image feature space with a data-independent basis. The proposed appearance model employs non-adaptive random projections that preserve the structure of the image feature space of objects. A very sparse measurement matrix is constructed to efficiently extract the features for the appearance model. We compress sample images of the foreground target and the background using the same sparse measurement matrix. The tracking task is formulated as binary classification via a naive Bayes classifier with online update in the compressed domain. A coarse-to-fine search strategy is adopted to further reduce the computational complexity in the detection procedure. The proposed compressive tracking algorithm runs in real-time and performs favorably against state-of-the-art methods on challenging sequences in terms of efficiency, accuracy and robustness.
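    A minimal sketch of feature compression by a very sparse measurement matrix, in the spirit of the random projections described above (the dimensions, sparsity parameter, and function names below are illustrative assumptions, not the paper's implementation):

    ```python
    import random

    def sparse_measurement_matrix(m, n, s=3, seed=0):
        # Very sparse Achlioptas-style matrix: entries are +1 or -1 each with
        # probability 1/(2s), and 0 with probability 1 - 1/s, so most of the
        # multiply-adds in a projection can be skipped.
        rng = random.Random(seed)
        rows = []
        for _ in range(m):
            row = []
            for _ in range(n):
                u = rng.random()
                if u < 1 / (2 * s):
                    row.append(1)
                elif u < 1 / s:
                    row.append(-1)
                else:
                    row.append(0)
            rows.append(row)
        return rows

    def project(matrix, x):
        # Compressed feature vector y = R x, skipping zero entries.
        return [sum(r * v for r, v in zip(row, x) if r) for row in matrix]

    R = sparse_measurement_matrix(m=10, n=100)   # 100-D features -> 10 numbers
    x = [float(i % 7) for i in range(100)]
    y = project(R, x)
    assert len(y) == 10
    ```

    Because the matrix is data-independent and fixed, the same projection can be applied to foreground and background samples alike, which is what makes the compressed-domain classifier cheap to update online.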

  5. Chest compression rates and survival following out-of-hospital cardiac arrest.

    Science.gov (United States)

    Idris, Ahamed H; Guffey, Danielle; Pepe, Paul E; Brown, Siobhan P; Brooks, Steven C; Callaway, Clifton W; Christenson, Jim; Davis, Daniel P; Daya, Mohamud R; Gray, Randal; Kudenchuk, Peter J; Larsen, Jonathan; Lin, Steve; Menegazzi, James J; Sheehan, Kellie; Sopko, George; Stiell, Ian; Nichol, Graham; Aufderheide, Tom P

    2015-04-01

    Guidelines for cardiopulmonary resuscitation recommend a chest compression rate of at least 100 compressions/min. A recent clinical study reported optimal return of spontaneous circulation with rates between 100 and 120/min during cardiopulmonary resuscitation for out-of-hospital cardiac arrest. However, the relationship between compression rate and survival is still undetermined. Prospective, observational study. Data are from the Resuscitation Outcomes Consortium Prehospital Resuscitation IMpedance threshold device and Early versus Delayed analysis clinical trial. Adults with out-of-hospital cardiac arrest treated by emergency medical service providers. None. Data were abstracted from monitor-defibrillator recordings for the first five minutes of emergency medical service cardiopulmonary resuscitation. Multiple logistic regression assessed the odds ratio for survival by compression rate category, adjusting for covariates including compression fraction and depth, first rhythm, and study site. Compression rate data were available for 10,371 patients; 6,399 also had chest compression fraction and depth data. Age (mean±SD) was 67±16 years. Chest compression rate was 111±19 per minute, compression fraction was 0.70±0.17, and compression depth was 42±12 mm. Circulation was restored in 34%; 9% survived to hospital discharge. After adjustment for covariates without chest compression depth and fraction (n=10,371), a global test found no significant relationship between compression rate and survival (p=0.19). However, after adjustment for covariates including chest compression depth and fraction (n=6,399), the global test found a significant relationship between compression rate and survival (p=0.02), with the reference group (100-119 compressions/min) having the greatest likelihood for survival. After adjustment for chest compression fraction and depth, compression rates between 100 and 120 per minute were associated with the greatest survival to hospital discharge.
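    As a minimal, unadjusted illustration of the odds-ratio comparison underlying the regression analysis (the counts below are hypothetical and not taken from the study):

    ```python
    import math

    def odds_ratio(survived_a, died_a, survived_b, died_b):
        """Unadjusted odds ratio of survival in group A versus reference group B."""
        return (survived_a / died_a) / (survived_b / died_b)

    # Hypothetical counts for illustration only:
    # group A = a faster-rate band, reference B = 100-119 compressions/min.
    or_ab = odds_ratio(survived_a=70, died_a=930, survived_b=90, died_b=910)
    log_or = math.log(or_ab)   # the scale on which logistic regression works

    assert or_ab < 1.0   # survival odds below the reference band in this toy table
    ```

    The study's actual estimates differ because the regression adjusts each odds ratio for covariates such as compression fraction, depth, first rhythm, and site.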

  6. SeqCompress: an algorithm for biological sequence compression.

    Science.gov (United States)

    Sardaraz, Muhammad; Tahir, Muhammad; Ikram, Ataul Aziz; Bajwa, Hassan

    2014-10-01

    The growth of Next Generation Sequencing technologies presents significant research challenges, specifically in designing bioinformatics tools that handle massive amounts of data efficiently. Biological sequence data storage has become a noticeable proportion of the total cost of sequence generation and analysis. In particular, the increase in DNA sequencing rate is significantly outstripping the rate of increase in disk storage capacity, and may exceed available storage. It is essential to develop algorithms that handle large data sets via better memory management. This article presents a DNA sequence compression algorithm, SeqCompress, that copes with the space complexity of biological sequences. The algorithm is based on lossless data compression and uses a statistical model as well as arithmetic coding to compress DNA sequences. The proposed algorithm is compared with recent specialized compression tools for biological sequences. Experimental results show that the proposed algorithm achieves better compression gain than other existing algorithms. Copyright © 2014 Elsevier Inc. All rights reserved.
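    SeqCompress itself combines a statistical model with arithmetic coding; as a far simpler lossless baseline for pure A/C/G/T data, each base can be packed into 2 bits instead of 8 (an illustrative sketch, not the SeqCompress algorithm):

    ```python
    CODE = {'A': 0, 'C': 1, 'G': 2, 'T': 3}
    BASE = 'ACGT'

    def pack(seq):
        # Four bases per byte: a trivial lossless 2-bits/base baseline,
        # far weaker than model-based arithmetic coding but illustrative.
        out = bytearray()
        for i in range(0, len(seq), 4):
            chunk = seq[i:i + 4]
            b = 0
            for ch in chunk:
                b = (b << 2) | CODE[ch]
            b <<= 2 * (4 - len(chunk))   # left-align a short final chunk
            out.append(b)
        return bytes(out), len(seq)     # length needed to drop the padding

    def unpack(packed, n):
        seq = []
        for b in packed:
            for shift in (6, 4, 2, 0):
                seq.append(BASE[(b >> shift) & 3])
        return ''.join(seq[:n])

    s = "ACGTACGTTTGCA"
    packed, n = pack(s)
    assert unpack(packed, n) == s
    assert len(packed) < len(s)   # 2 bits per base instead of 8
    ```

    A statistical coder improves on this fixed 2-bit rate exactly when the sequence is not uniformly random, which is the redundancy SeqCompress exploits.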

  7. Comparative data compression techniques and multi-compression results

    International Nuclear Information System (INIS)

    Hasan, M R; Ibrahimy, M I; Motakabber, S M A; Ferdaus, M M; Khan, M N H

    2013-01-01

    Data compression is essential in business data processing because of the cost savings it offers and the large volumes of data manipulated in many business applications. It is a method or system for transmitting a digital image (i.e., an array of pixels) from a digital data source to a digital data receiver. The smaller the data, the better the transmission speed and the greater the time saved. In this communication, we always want to transmit data efficiently and noise-free. This paper presents several lossless compression techniques for text data, together with comparative results for single and multiple compression, which help to identify the better compression output and to develop compression algorithms.
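    The single- versus multi-compression comparison can be reproduced with Python's standard-library codecs (a sketch using made-up sample text, not the paper's data set):

    ```python
    import bz2
    import lzma
    import zlib

    text = b"Data compression saves storage and transmission time. " * 200

    # Compression ratio (original size / compressed size) per codec.
    ratios = {}
    for name, codec in (("zlib", zlib), ("bz2", bz2), ("lzma", lzma)):
        ratios[name] = len(text) / len(codec.compress(text))

    # All three codecs exploit the redundancy in the text:
    assert all(r > 1.0 for r in ratios.values())

    # Multi-compression: a second pass over already-compressed, near-random
    # output gains little and often expands it slightly.
    once = zlib.compress(text)
    twice = zlib.compress(once)
    assert len(twice) >= len(once) * 0.9
    ```

    This is the usual finding with multiple compression: once a stream is close to its entropy limit, a second general-purpose pass has no redundancy left to remove.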

  8. Analysis by compression

    DEFF Research Database (Denmark)

    Meredith, David

    MEL is a geometric music encoding language designed to allow musical objects to be encoded parsimoniously as sets of points in pitch-time space, generated by performing geometric transformations on component patterns. MEL has been implemented in Java and coupled with the SIATEC pattern discovery algorithm to allow compact encodings to be generated automatically from in extenso note lists. The MEL-SIATEC system is founded on the belief that music analysis and music perception can be modelled as the compression of in extenso descriptions of musical objects.

  9. Compressive Fatigue in Wood

    DEFF Research Database (Denmark)

    Clorius, Christian Odin; Pedersen, Martin Bo Uhre; Hoffmeyer, Preben

    1999-01-01

    An investigation of fatigue failure in wood subjected to load cycles in compression parallel to grain is presented. Small clear specimens of spruce are taken to failure in square-wave fatigue loading at a stress excitation level corresponding to 80% of the short term strength. Four frequencies ranging from 0.01 Hz to 10 Hz are used. The number of cycles to failure is found to be a poor measure of the fatigue performance of wood. Creep, maximum strain, stiffness and work are monitored throughout the fatigue tests. Accumulated creep is suggested identified with damage and a correlation...

  10. Compressive full waveform lidar

    Science.gov (United States)

    Yang, Weiyi; Ke, Jun

    2017-05-01

    To avoid high bandwidth detector, fast speed A/D converter, and large size memory disk, a compressive full waveform LIDAR system, which uses a temporally modulated laser instead of a pulsed laser, is studied in this paper. Full waveform data from NEON (National Ecological Observatory Network) are used. Random binary patterns are used to modulate the source. To achieve 0.15 m ranging resolution, a 100 MSPS A/D converter is assumed to make measurements. SPIRAL algorithm with canonical basis is employed when Poisson noise is considered in the low illuminated condition.

  11. Photon compression in cylinders

    International Nuclear Information System (INIS)

    Ensley, D.L.

    1977-01-01

    It has been shown theoretically that intense microwave radiation is absorbed non-classically by a newly enunciated mechanism when interacting with hydrogen plasma. Fields > 1 MG, λ > 1 mm are within this regime. The predicted absorption, approximately P_rf v_θ^e, has not yet been experimentally confirmed. The applications of such a coupling are many. If microwave bursts of approximately > 5 × 10^14 W, 5 ns can be generated, the net generation of power from pellet fusion as well as various military applications becomes feasible. The purpose, then, for considering gas-gun photon compression is to obtain the above experimental capability by converting the gas kinetic energy directly into microwave form. Energies of > 10^5 J cm^-2 and powers of > 10^13 W cm^-2 are potentially available for photon interaction experiments using presently available technology. The following topics are discussed: microwave modes in a finite cylinder, injection, compression, switchout operation, and system performance parameter scaling.

  12. Fingerprints in Compressed Strings

    DEFF Research Database (Denmark)

    Bille, Philip; Cording, Patrick Hagge; Gørtz, Inge Li

    2013-01-01

    The Karp-Rabin fingerprint of a string is a type of hash value that due to its strong properties has been used in many string algorithms. In this paper we show how to construct a data structure for a string S of size N compressed by a context-free grammar of size n that answers fingerprint queries. That is, given indices i and j, the answer to a query is the fingerprint of the substring S[i,j]. We present the first O(n) space data structures that answer fingerprint queries without decompressing any characters. For Straight Line Programs (SLP) we get O(log N) query time, and for Linear SLPs (an SLP derivative that captures LZ78 compression and its variations) we get O(log log N) query time. Hence, our data structures have the same time and space complexity as for random access in SLPs. We utilize the fingerprint data structures to solve the longest common extension problem in query time O(log N log ℓ) and O...
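    For an uncompressed string, the fingerprint of any substring can be answered in O(1) from precomputed prefix fingerprints; the data structure in the record above achieves the analogous query directly on the grammar-compressed string. A sketch of the uncompressed case (the base and modulus below are illustrative choices, normally drawn at random):

    ```python
    P = (1 << 61) - 1   # Mersenne prime modulus
    B = 256             # base; in practice chosen randomly to bound collisions

    def prefix_fingerprints(s):
        # phi[k] = Karp-Rabin fingerprint of the prefix s[0:k].
        phi = [0]
        for ch in s:
            phi.append((phi[-1] * B + ord(ch)) % P)
        return phi

    def substring_fingerprint(phi, pows, i, j):
        # Fingerprint of s[i:j] from two prefix fingerprints:
        #   phi(s[i:j]) = phi[j] - phi[i] * B^(j-i)   (mod P)
        return (phi[j] - phi[i] * pows[j - i]) % P

    s = "abracadabra"
    phi = prefix_fingerprints(s)
    pows = [pow(B, k, P) for k in range(len(s) + 1)]

    # Equal substrings get equal fingerprints: s[0:4] == s[7:11] == "abra".
    assert substring_fingerprint(phi, pows, 0, 4) == substring_fingerprint(phi, pows, 7, 11)
    ```

    The compressed-string setting is harder precisely because these prefix fingerprints cannot be tabulated: the paper's contribution is recovering the same query from the grammar in O(n) space.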

  13. Acceptable levels of digital image compression in chest radiology

    International Nuclear Information System (INIS)

    Smith, I.

    2000-01-01

    The introduction of picture archival and communication systems (PACS) and teleradiology has prompted an examination of techniques that optimize the storage capacity and speed of digital storage and distribution networks. The general acceptance of the move to replace conventional screen-film capture with computed radiography (CR) indicates that clinicians within the radiology community are willing to accept images that have been 'compressed'. The question to be answered, therefore, is what level of compression is acceptable. The purpose of the present study is to assess the ability of a group of imaging professionals to determine whether an image has been compressed. For this study a single mobile chest image, selected for the presence of subtle pathology in the form of a number of septal lines in both costophrenic angles, was compressed to levels of 10:1, 20:1 and 30:1. These images were randomly ordered and shown to the observers for interpretation. Analysis of the responses indicates that in general it was not possible to distinguish the original image from its compressed counterparts. Furthermore, a preference appeared to be shown for images that had undergone low levels of compression. This preference can most likely be attributed to the 'de-noising' effect of the compression algorithm at low levels. Copyright (1999) Blackwell Science Pty. Ltd

  14. Compressive sensing in medical imaging.

    Science.gov (United States)

    Graff, Christian G; Sidky, Emil Y

    2015-03-10

    The promise of compressive sensing, exploitation of compressibility to achieve high quality image reconstructions with less data, has attracted a great deal of attention in the medical imaging community. At the Compressed Sensing Incubator meeting held in April 2014 at OSA Headquarters in Washington, DC, presentations were given summarizing some of the research efforts ongoing in compressive sensing for x-ray computed tomography and magnetic resonance imaging systems. This article provides an expanded version of these presentations. Sparsity-exploiting reconstruction algorithms that have gained popularity in the medical imaging community are studied, and examples of clinical applications that could benefit from compressive sensing ideas are provided. The current and potential future impact of compressive sensing on the medical imaging field is discussed.

  15. Quantum autoencoders for efficient compression of quantum data

    Science.gov (United States)

    Romero, Jonathan; Olson, Jonathan P.; Aspuru-Guzik, Alan

    2017-12-01

    Classical autoencoders are neural networks that can learn efficient low-dimensional representations of data in higher-dimensional space. The task of an autoencoder is, given an input x, to map x to a lower dimensional point y such that x can likely be recovered from y. The structure of the underlying autoencoder network can be chosen to represent the data on a smaller dimension, effectively compressing the input. Inspired by this idea, we introduce the model of a quantum autoencoder to perform similar tasks on quantum data. The quantum autoencoder is trained to compress a particular data set of quantum states, where a classical compression algorithm cannot be employed. The parameters of the quantum autoencoder are trained using classical optimization algorithms. We show an example of a simple programmable circuit that can be trained as an efficient autoencoder. We apply our model in the context of quantum simulation to compress ground states of the Hubbard model and molecular Hamiltonians.

  16. Introduction to compressible fluid flow

    CERN Document Server

    Oosthuizen, Patrick H

    2013-01-01

    Introduction; The Equations of Steady One-Dimensional Compressible Flow; Some Fundamental Aspects of Compressible Flow; One-Dimensional Isentropic Flow; Normal Shock Waves; Oblique Shock Waves; Expansion Waves - Prandtl-Meyer Flow; Variable Area Flows; Adiabatic Flow with Friction; Flow with Heat Transfer; Linearized Analysis of Two-Dimensional Compressible Flows; Hypersonic and High-Temperature Flows; High-Temperature Gas Effects; Low-Density Flows; Bibliography; Appendices

  17. Mammographic compression in Asian women.

    Science.gov (United States)

    Lau, Susie; Abdul Aziz, Yang Faridah; Ng, Kwan Hoong

    2017-01-01

    To investigate: (1) the variability of mammographic compression parameters amongst Asian women; and (2) the effects of reducing compression force on image quality and mean glandular dose (MGD) in Asian women, based on a phantom study. We retrospectively collected 15818 raw digital mammograms from 3772 Asian women aged 35-80 years who underwent screening or diagnostic mammography between Jan 2012 and Dec 2014 at our center. The mammograms were processed using volumetric breast density (VBD) measurement software (Volpara) to assess compression force, compression pressure, compressed breast thickness (CBT), breast volume, VBD and MGD against breast contact area. The effects of reducing compression force on image quality and MGD were also evaluated based on measurements obtained from 105 Asian women, as well as using the RMI156 Mammographic Accreditation Phantom and polymethyl methacrylate (PMMA) slabs. Compression force, compression pressure, CBT, breast volume, VBD and MGD correlated significantly with breast contact area in Asian women. The median compression force should be about 8.1 daN compared to the current 12.0 daN. Decreasing compression force from 12.0 daN to 9.0 daN increased CBT by 3.3±1.4 mm, increased MGD by 6.2-11.0%, and caused no significant effects on image quality (p>0.05). The force-standardized protocol led to widely variable compression parameters in Asian women. Based on the phantom study, it is feasible to reduce compression force by up to 32.5% with minimal effects on image quality and MGD.

  18. Laser-pulse compression in a collisional plasma under weak-relativistic ponderomotive nonlinearity

    International Nuclear Information System (INIS)

    Singh, Mamta; Gupta, D. N.

    2016-01-01

    We present theory and numerical analysis which demonstrate laser-pulse compression in a collisional plasma under the weak-relativistic ponderomotive nonlinearity. Plasma equilibrium density is modified due to the ohmic heating of electrons, the collisions, and the weak relativistic-ponderomotive force during the interaction of a laser pulse with plasmas. First, within one-dimensional analysis, the longitudinal self-compression mechanism is discussed. Three-dimensional analysis (spatiotemporal) of laser pulse propagation is also investigated by coupling the self-compression with the self-focusing. In the regime in which the laser becomes self-focused due to the weak relativistic-ponderomotive nonlinearity, we provide results for enhanced pulse compression. The results show that the matched interplay between self-focusing and self-compression can improve significantly the temporal profile of the compressed pulse. Enhanced pulse compression can be achieved by optimizing and selecting the parameters such as collision frequency, ion-temperature, and laser intensity.

  19. Laser-pulse compression in a collisional plasma under weak-relativistic ponderomotive nonlinearity

    Energy Technology Data Exchange (ETDEWEB)

    Singh, Mamta; Gupta, D. N., E-mail: dngupta@physics.du.ac.in [Department of Physics and Astrophysics, North Campus, University of Delhi, Delhi 110 007 (India)

    2016-05-15

    We present theory and numerical analysis which demonstrate laser-pulse compression in a collisional plasma under the weak-relativistic ponderomotive nonlinearity. Plasma equilibrium density is modified due to the ohmic heating of electrons, the collisions, and the weak relativistic-ponderomotive force during the interaction of a laser pulse with plasmas. First, within one-dimensional analysis, the longitudinal self-compression mechanism is discussed. Three-dimensional analysis (spatiotemporal) of laser pulse propagation is also investigated by coupling the self-compression with the self-focusing. In the regime in which the laser becomes self-focused due to the weak relativistic-ponderomotive nonlinearity, we provide results for enhanced pulse compression. The results show that the matched interplay between self-focusing and self-compression can improve significantly the temporal profile of the compressed pulse. Enhanced pulse compression can be achieved by optimizing and selecting the parameters such as collision frequency, ion-temperature, and laser intensity.

  20. Data compression of scanned halftone images

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Jensen, Kim S.

    1994-01-01

    with the halftone grid, and converted to a gray level representation. A new digital description of (halftone) grids has been developed for this purpose. The gray level values are coded according to a scheme based on states derived from a segmentation of gray values. To enable real-time processing of high resolution scanner output, the coding has been parallelized and implemented on a transputer system. For comparison, the test image was coded using existing (lossless) methods, giving compression rates of 2-7. The best of these, a combination of predictive and binary arithmetic coding, was modified and optimized...
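    The predictive-plus-entropy-coding combination mentioned above can be sketched with left-neighbour (delta) prediction followed by a generic entropy stage; zlib stands in here for the binary arithmetic coder, so this is a simplified sketch rather than the paper's method:

    ```python
    import zlib

    def delta_encode(gray):
        # Predict each gray value by its left neighbour; code the error mod 256.
        prev = 0
        out = bytearray()
        for g in gray:
            out.append((g - prev) % 256)
            prev = g
        return bytes(out)

    def delta_decode(deltas):
        prev = 0
        out = bytearray()
        for d in deltas:
            prev = (prev + d) % 256
            out.append(prev)
        return bytes(out)

    # Smooth ramp: residuals are tiny, so the entropy stage compresses well.
    gray = bytes((i // 4) % 256 for i in range(4096))
    plain = zlib.compress(gray)
    predicted = zlib.compress(delta_encode(gray))

    assert delta_decode(delta_encode(gray)) == gray   # prediction stage is lossless
    assert len(predicted) < len(plain)                # prediction helps the coder
    ```

    The gain comes entirely from the prediction concentrating the symbol distribution near zero; the entropy coder then spends few bits on the common small residuals.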

  1. Adiabatic compression and radiative compression of magnetic fields

    International Nuclear Information System (INIS)

    Woods, C.H.

    1980-01-01

    Flux is conserved during mechanical compression of magnetic fields for both nonrelativistic and relativistic compressors. However, the relativistic compressor generates radiation, which can carry up to twice the energy content of the magnetic field compressed adiabatically. The radiation may be either confined or allowed to escape.

  2. Waves and compressible flow

    CERN Document Server

    Ockendon, Hilary

    2016-01-01

    Now in its second edition, this book continues to give readers a broad mathematical basis for modelling and understanding the wide range of wave phenomena encountered in modern applications.  New and expanded material includes topics such as elastoplastic waves and waves in plasmas, as well as new exercises.  Comprehensive collections of models are used to illustrate the underpinning mathematical methodologies, which include the basic ideas of the relevant partial differential equations, characteristics, ray theory, asymptotic analysis, dispersion, shock waves, and weak solutions. Although the main focus is on compressible fluid flow, the authors show how intimately gasdynamic waves are related to wave phenomena in many other areas of physical science.   Special emphasis is placed on the development of physical intuition to supplement and reinforce analytical thinking. Each chapter includes a complete set of carefully prepared exercises, making this a suitable textbook for students in applied mathematics, ...

  3. Numerical approach to solar ejector-compression refrigeration system

    Directory of Open Access Journals (Sweden)

    Zheng Hui-Fan

    2016-01-01

    Full Text Available A model was established for a solar ejector-compression refrigeration system. The influence of the generator temperature, the middle temperature, and the evaporator temperature on the performance of the refrigeration system was analyzed. An optimal generator temperature is found that maximizes the energy efficiency ratio and minimizes the power consumption.

  4. New Regenerative Cycle for Vapor Compression Refrigeration

    Energy Technology Data Exchange (ETDEWEB)

    Mark J. Bergander

    2005-08-29

The main objective of this project is to confirm on a well-instrumented prototype the theoretically derived claims of higher efficiency and coefficient of performance for geothermal heat pumps based on a new regenerative thermodynamic cycle, compared to existing technology. In order to demonstrate the improved performance of the prototype, it will be compared to published parameters of commercially available geothermal heat pumps manufactured by US and foreign companies. Other objectives are to optimize the design parameters and to determine the economic viability of the new technology. Background (as stated in the proposal): The proposed technology closely relates to the EERE mission by improving energy efficiency, bringing clean, reliable and affordable heating and cooling to residential and commercial buildings, and reducing greenhouse gas emissions. It can provide the same amount of heating and cooling with considerably less electrical energy and consequently has the potential to reduce our nation's dependence on foreign oil. The theoretical basis for the proposed thermodynamic cycle was previously developed and was originally called the dynamic equilibrium method. This theory considers the dynamic equations of state of the working fluid and proposes methods for modifying the T-S trajectories of adiabatic transformation by changing dynamic properties of the gas, such as flow rate, speed and acceleration. The substance of this proposal is a thermodynamic cycle characterized by the regenerative use of the potential energy of two-phase flow expansion, which in traditional systems is lost in expansion valves. The essential new features of the process are: (1) The application of two-step throttling of the working fluid and two-step compression of its vapor phase. (2) Use of a compressor as the initial compression step and a jet device as the second step, where throttling and compression are combined. (3) Controlled ratio of a working fluid at the first and

  5. Application specific compression : final report.

    Energy Technology Data Exchange (ETDEWEB)

    Melgaard, David Kennett; Byrne, Raymond Harry; Myers, Daniel S.; Harrison, Carol D.; Lee, David S.; Lewis, Phillip J.; Carlson, Jeffrey J.

    2008-12-01

    With the continuing development of more capable data gathering sensors, comes an increased demand on the bandwidth for transmitting larger quantities of data. To help counteract that trend, a study was undertaken to determine appropriate lossy data compression strategies for minimizing their impact on target detection and characterization. The survey of current compression techniques led us to the conclusion that wavelet compression was well suited for this purpose. Wavelet analysis essentially applies a low-pass and high-pass filter to the data, converting the data into the related coefficients that maintain spatial information as well as frequency information. Wavelet compression is achieved by zeroing the coefficients that pertain to the noise in the signal, i.e. the high frequency, low amplitude portion. This approach is well suited for our goal because it reduces the noise in the signal with only minimal impact on the larger, lower frequency target signatures. The resulting coefficients can then be encoded using lossless techniques with higher compression levels because of the lower entropy and significant number of zeros. No significant signal degradation or difficulties in target characterization or detection were observed or measured when wavelet compression was applied to simulated and real data, even when over 80% of the coefficients were zeroed. While the exact level of compression will be data set dependent, for the data sets we studied, compression factors over 10 were found to be satisfactory where conventional lossless techniques achieved levels of less than 3.
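The zeroing step described above can be sketched in a few lines. This is a generic single-level Haar wavelet illustration, not the report's coder; the test signal, noise level, and threshold are invented for the example, and numpy is assumed to be available:

```python
import numpy as np

def haar_forward(x):
    # One level of the Haar wavelet: low-pass (averages) and high-pass (details)
    avg = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    det = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return avg, det

def haar_inverse(avg, det):
    x = np.empty(avg.size * 2)
    x[0::2] = (avg + det) / np.sqrt(2.0)
    x[1::2] = (avg - det) / np.sqrt(2.0)
    return x

# Smooth "target-like" signature plus low-amplitude noise
t = np.linspace(0.0, 1.0, 1024)
signal = np.exp(-((t - 0.5) ** 2) / 0.01) \
    + 0.01 * np.random.default_rng(0).standard_normal(t.size)

avg, det = haar_forward(signal)
# Compression step: zero the small high-frequency coefficients (the noise)
threshold = 0.05
det_c = np.where(np.abs(det) < threshold, 0.0, det)

recon = haar_inverse(avg, det_c)
zeroed = np.mean(det_c == 0.0)
rmse = np.sqrt(np.mean((recon - signal) ** 2))
print(f"zeroed {zeroed:.0%} of detail coefficients, RMSE {rmse:.4f}")
```

The zeroed coefficients lower the entropy of the coefficient stream, which is what lets a subsequent lossless encoder reach higher compression factors.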

  6. Compressed Baryonic Matter of Astrophysics

    OpenAIRE

    Guo, Yanjun; Xu, Renxin

    2013-01-01

Baryonic matter in the core of a massive and evolved star is compressed significantly to form a supra-nuclear object, and compressed baryonic matter (CBM) is then produced after the supernova. The state of cold matter at a few times nuclear density is pedagogically reviewed, with significant attention paid to a possible quark-cluster state conjectured from an astrophysical point of view.

  7. Streaming Compression of Hexahedral Meshes

    Energy Technology Data Exchange (ETDEWEB)

    Isenburg, M; Courbet, C

    2010-02-03

We describe a method for streaming compression of hexahedral meshes. Given an interleaved stream of vertices and hexahedra, our coder incrementally compresses the mesh in the presented order. Our coder is extremely memory efficient when the input stream documents when vertices are referenced for the last time (i.e. when it contains topological finalization tags). Our coder then continuously releases and reuses data structures that no longer contribute to compressing the remainder of the stream. This means in practice that our coder has only a small fraction of the whole mesh in memory at any time. We can therefore compress very large meshes - even meshes that do not fit in memory. Compared to traditional, non-streaming approaches that load the entire mesh and globally reorder it during compression, our algorithm trades a less compact compressed representation for significant gains in speed, memory, and I/O efficiency. For example, on the 456k hexahedra 'blade' mesh, our coder is twice as fast and uses 88 times less memory (only 3.1 MB) with the compressed file increasing about 3% in size. We also present the first scheme for predictive compression of properties associated with hexahedral cells.

  8. Data Compression with Linear Algebra

    OpenAIRE

    Etler, David

    2015-01-01

    A presentation on the applications of linear algebra to image compression. Covers entropy, the discrete cosine transform, thresholding, quantization, and examples of images compressed with DCT. Given in Spring 2015 at Ocean County College as part of the honors program.

  9. Images compression in nuclear medicine

    International Nuclear Information System (INIS)

    Rebelo, M.S.; Furuie, S.S.; Moura, L.

    1992-01-01

The performance of two methods for image compression in nuclear medicine was evaluated: the exact LZW method and the approximate Cosine Transform method. The results show that the approximate method produced images of a quality acceptable for visual analysis, with compression rates considerably higher than those of the exact method. (C.G.C.)
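The approximate (Cosine Transform) approach can be illustrated by keeping only the largest-magnitude DCT coefficients of a signal. This is a generic sketch, not the paper's method; the synthetic signal and the number of retained coefficients are made up:

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis as an n x n matrix (rows = frequencies)
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    m = np.cos(np.pi * k * (2 * i + 1) / (2 * n))
    m[0] *= np.sqrt(1.0 / n)
    m[1:] *= np.sqrt(2.0 / n)
    return m

n = 256
t = np.arange(n)
# Smooth image-row-like signal: most energy sits in low frequencies
x = 100 + 50 * np.sin(2 * np.pi * t / n) + 20 * np.cos(6 * np.pi * t / n)

D = dct_matrix(n)
coeffs = D @ x

# "Approximate" compression: keep only the 16 largest-magnitude coefficients
keep = 16
idx = np.argsort(np.abs(coeffs))[-keep:]
compressed = np.zeros_like(coeffs)
compressed[idx] = coeffs[idx]

x_rec = D.T @ compressed            # inverse of an orthonormal transform
rel_err = np.linalg.norm(x_rec - x) / np.linalg.norm(x)
print(f"kept {keep}/{n} coefficients, relative error {rel_err:.4f}")
```

Discarding the small coefficients is what makes the method approximate (lossy) yet visually acceptable, in contrast to the exact, fully reversible LZW coding.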

  10. Compressive Sensing in Communication Systems

    DEFF Research Database (Denmark)

    Fyhn, Karsten

    2013-01-01

    . The need for cheaper, smarter and more energy efficient wireless devices is greater now than ever. This thesis addresses this problem and concerns the application of the recently developed sampling theory of compressive sensing in communication systems. Compressive sensing is the merging of signal...... acquisition and compression. It allows for sampling a signal with a rate below the bound dictated by the celebrated Shannon-Nyquist sampling theorem. In some communication systems this necessary minimum sample rate, dictated by the Shannon-Nyquist sampling theorem, is so high it is at the limit of what...... with using compressive sensing in communication systems. The main contribution of this thesis is two-fold: 1) a new compressive sensing hardware structure for spread spectrum signals, which is simpler than the current state-of-the-art, and 2) a range of algorithms for parameter estimation for the class...
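The core idea of compressive sensing — recovering a sparse signal from far fewer samples than the Shannon-Nyquist bound requires — can be demonstrated with a toy reconstruction. The sketch below uses Orthogonal Matching Pursuit, a standard CS recovery algorithm, not the hardware structure or algorithms the thesis proposes; all dimensions and the seed are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 128, 64, 4           # signal length, measurements (m << n), sparsity

# k-sparse signal and a random Gaussian sensing matrix
x = np.zeros(n)
support = rng.choice(n, k, replace=False)
x[support] = rng.uniform(1.0, 3.0, k) * rng.choice([-1, 1], k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x                      # the compressed measurements

# Orthogonal Matching Pursuit: greedily pick the column most correlated
# with the residual, then re-fit by least squares on the chosen support.
residual, chosen = y.copy(), []
for _ in range(k):
    chosen.append(int(np.argmax(np.abs(A.T @ residual))))
    sub = A[:, chosen]
    coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
    residual = y - sub @ coef

x_hat = np.zeros(n)
x_hat[chosen] = coef
err = np.linalg.norm(x_hat - x) / np.linalg.norm(x)
print(f"recovered support {sorted(chosen)}, relative error {err:.2e}")
```

With 64 random measurements of a 4-sparse length-128 signal, the greedy recovery is essentially exact, which is the sub-Nyquist sampling the abstract refers to.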

  11. Effect of Functional Nano Channel Structures Different Widths on Injection Molding and Compression Molding Replication Capabilities

    DEFF Research Database (Denmark)

    Calaon, M.; Tosello, G.; Garnaes, J.

    The present study investigates the capabilities of the two employed processes, injection molding (IM) and injection compression molding (ICM) on replicating different channel cross sections. Statistical design of experiment was adopted to optimize replication quality of produced polymer parts wit...

  12. Compression force and radiation dose in the Norwegian Breast Cancer Screening Program

    Energy Technology Data Exchange (ETDEWEB)

    Waade, Gunvor G.; Sanderud, Audun [Department of Life Sciences and Health, Faculty of Health Sciences, Oslo and Akershus University College of Applied Sciences, P.O. 4 St. Olavs Plass, 0130 Oslo (Norway); Hofvind, Solveig, E-mail: solveig.hofvind@kreftregisteret.no [Department of Life Sciences and Health, Faculty of Health Sciences, Oslo and Akershus University College of Applied Sciences, P.O. 4 St. Olavs Plass, 0130 Oslo (Norway); The Cancer Registry of Norway, P.O. 5313 Majorstuen, 0304 Oslo (Norway)

    2017-03-15

Highlights: • Compression force and radiation dose for 17 951 screening mammograms were analyzed. • Large variations in mean applied compression force between the breast centers. • Limited associations between compression force and radiation dose. - Abstract: Purpose: Compression force is used in mammography to reduce breast thickness and thereby decrease radiation dose and improve image quality. There are no evidence-based recommendations regarding the optimal compression force. We analyzed compression force and radiation dose between screening centers in the Norwegian Breast Cancer Screening Program (NBCSP), as a first step towards establishing evidence-based recommendations for compression force. Materials and methods: The study included information from 17 951 randomly selected screening examinations among women screened with equipment from four different vendors at fourteen breast centers in the NBCSP, January-March 2014. We analyzed the applied compression force and radiation dose used on craniocaudal (CC) and mediolateral-oblique (MLO) views of the left breast, by breast centers and vendors. Results: Mean compression force used in the screening program was 116 N (CC: 108 N, MLO: 125 N). The maximum difference in mean compression force between the centers was 63 N for CC and 57 N for MLO. Mean radiation dose for each image was 1.09 mGy (CC: 1.04 mGy, MLO: 1.14 mGy), varying from 0.55 mGy to 1.31 mGy between the centers. Compression force alone had a negligible impact on radiation dose (r² = 0.8%, p < 0.001). Conclusion: We observed substantial variations in mean compression forces between the breast centers. Breast characteristics and differences in automated exposure control between vendors might explain the low association between compression force and radiation dose. Further knowledge about different automated exposure controls and the impact of compression force on dose and image quality is needed to establish individualised and evidence

  13. Evaluation of mammogram compression efficiency

    International Nuclear Information System (INIS)

    Przelaskowski, A.; Surowski, P.; Kukula, A.

    2005-01-01

Lossy image coding significantly improves performance over lossless methods, but a reliable control of diagnostic accuracy regarding compressed images is necessary. The acceptable range of compression ratios must be safe with respect to as many objective criteria as possible. This study evaluates the compression efficiency of digital mammograms in both a numerically lossless (reversible) and a lossy (irreversible) manner. Effective compression methods and concepts were examined to increase archiving and telediagnosis performance. Lossless compression as a primary applicable tool for medical applications was verified on a set of 131 mammograms. Moreover, nine radiologists participated in the evaluation of lossy compression of mammograms. Subjective rating of diagnostically important features yielded a set of mean rates for each test image. The lesion detection test resulted in binary decision data analyzed statistically. The radiologists rated and interpreted malignant and benign lesions, representative pathology symptoms, and other structures susceptible to compression distortions contained in 22 original and 62 reconstructed mammograms. Test mammograms were collected in two radiology centers for three years and then selected according to diagnostic content suitable for an evaluation of compression effects. Lossless compression efficiency of the tested coders varied, but CALIC, JPEG-LS, and SPIHT performed the best. The evaluation of lossy compression effects affecting detection ability was based on ROC-like analysis. Assuming a two-sided significance level of p=0.05, the null hypothesis that lower bit rate reconstructions are as useful for diagnosis as the originals was false in sensitivity tests with 0.04 bpp mammograms. However, verification of the same hypothesis with 0.1 bpp reconstructions suggested their acceptance. Moreover, the 1 bpp reconstructions were rated very similarly to the original mammograms in the diagnostic quality evaluation test, but the

  14. Compression etiology in tendinopathy.

    Science.gov (United States)

    Almekinders, Louis C; Weinhold, Paul S; Maffulli, Nicola

    2003-10-01

Recent studies have emphasized that the etiology of tendinopathy is not as simple as was once thought. The etiology is likely to be multifactorial. Etiologic factors may include some of the traditional factors such as overuse, inflexibility, and equipment problems; however, other factors need to be considered as well, such as age-related tendon degeneration and biomechanical considerations as outlined in this article. More research is needed to determine the significance of stress-shielding and compression in tendinopathy. If they are confirmed to play a role, this finding may significantly alter our approach in both prevention and treatment through exercise therapy. The current biomechanical studies indicate that certain joint positions are more likely to place tensile stress on the area of the tendon commonly affected by tendinopathy. These joint positions seem to be different than the traditional positions for stretching exercises used for prevention and rehabilitation of tendinopathic conditions. Incorporation of different joint positions during stretching exercises may exert more uniform, controlled tensile stress on these affected areas of the tendon and avoid stress-shielding. These exercises may be able to better maintain the mechanical strength of that region of the tendon and thereby avoid injury. Alternatively, they could more uniformly stress a healing area of the tendon in a controlled manner, and thereby stimulate healing once an injury has occurred. Additional work will have to show whether a change in rehabilitation exercises is more efficacious than current techniques.

  15. Compressible Vortex Ring

    Science.gov (United States)

    Elavarasan, Ramasamy; Arakeri, Jayawant; Krothapalli, Anjaneyulu

    1999-11-01

The interaction of a high-speed vortex ring with a shock wave is one of the fundamental issues as it is a source of sound in supersonic jets. The complex flow field induced by the vortex alters the propagation of the shock wave greatly. In order to understand the process, a compressible vortex ring is studied in detail using Particle Image Velocimetry (PIV) and shadowgraphic techniques. The high-speed vortex ring is generated from a shock tube and the shock wave, which precedes the vortex, is reflected back by a plate and made to interact with the vortex. The shadowgraph images indicate that the reflected shock front is influenced by the non-uniform flow induced by the vortex and is decelerated while passing through the vortex. It appears that after the interaction the shock is "split" into two. The PIV measurements provided a clear picture of the evolution of the vortex at different time intervals. The centerline velocity traces show the maximum velocity to be around 350 m/s. The velocity field, unlike in incompressible rings, contains contributions from both the shock and the vortex ring. The velocity distribution across the vortex core, core diameter and circulation are also calculated from the PIV data.

  16. Advances in compressible turbulent mixing

    International Nuclear Information System (INIS)

    Dannevik, W.P.; Buckingham, A.C.; Leith, C.E.

    1992-01-01

    This volume includes some recent additions to original material prepared for the Princeton International Workshop on the Physics of Compressible Turbulent Mixing, held in 1988. Workshop participants were asked to emphasize the physics of the compressible mixing process rather than measurement techniques or computational methods. Actual experimental results and their meaning were given precedence over discussions of new diagnostic developments. Theoretical interpretations and understanding were stressed rather than the exposition of new analytical model developments or advances in numerical procedures. By design, compressibility influences on turbulent mixing were discussed--almost exclusively--from the perspective of supersonic flow field studies. The papers are arranged in three topical categories: Foundations, Vortical Domination, and Strongly Coupled Compressibility. The Foundations category is a collection of seminal studies that connect current study in compressible turbulent mixing with compressible, high-speed turbulent flow research that almost vanished about two decades ago. A number of contributions are included on flow instability initiation, evolution, and transition between the states of unstable flow onset through those descriptive of fully developed turbulence. The Vortical Domination category includes theoretical and experimental studies of coherent structures, vortex pairing, vortex-dynamics-influenced pressure focusing. In the Strongly Coupled Compressibility category the organizers included the high-speed turbulent flow investigations in which the interaction of shock waves could be considered an important source for production of new turbulence or for the enhancement of pre-existing turbulence. Individual papers are processed separately

  17. Mammography image compression using Wavelet

    International Nuclear Information System (INIS)

    Azuhar Ripin; Md Saion Salikin; Wan Hazlinda Ismail; Asmaliza Hashim; Norriza Md Isa

    2004-01-01

Image compression plays an important role in many applications like medical imaging, televideo conferencing, remote sensing, document and facsimile transmission, which depend on the efficient manipulation, storage, and transmission of binary, gray scale, or color images. In medical imaging applications such as Picture Archiving and Communication Systems (PACS), the image or image-stream size is too large and requires a large amount of storage space or high bandwidth for communication. Image compression techniques are divided into two categories, namely lossy and lossless data compression. The wavelet method used in this project is a lossless compression method. In this method, the exact original mammography image data can be recovered. In this project, mammography images are digitized by using a Vider Sierra Plus digitizer. The digitized images are compressed by using this wavelet image compression technique. Interactive Data Language (IDL) numerical and visualization software is used to perform all of the calculations, and to generate and display all of the compressed images. Results of this project are presented in this paper. (Author)

  18. Advances in compressible turbulent mixing

    Energy Technology Data Exchange (ETDEWEB)

    Dannevik, W.P.; Buckingham, A.C.; Leith, C.E. [eds.

    1992-01-01

    This volume includes some recent additions to original material prepared for the Princeton International Workshop on the Physics of Compressible Turbulent Mixing, held in 1988. Workshop participants were asked to emphasize the physics of the compressible mixing process rather than measurement techniques or computational methods. Actual experimental results and their meaning were given precedence over discussions of new diagnostic developments. Theoretical interpretations and understanding were stressed rather than the exposition of new analytical model developments or advances in numerical procedures. By design, compressibility influences on turbulent mixing were discussed--almost exclusively--from the perspective of supersonic flow field studies. The papers are arranged in three topical categories: Foundations, Vortical Domination, and Strongly Coupled Compressibility. The Foundations category is a collection of seminal studies that connect current study in compressible turbulent mixing with compressible, high-speed turbulent flow research that almost vanished about two decades ago. A number of contributions are included on flow instability initiation, evolution, and transition between the states of unstable flow onset through those descriptive of fully developed turbulence. The Vortical Domination category includes theoretical and experimental studies of coherent structures, vortex pairing, vortex-dynamics-influenced pressure focusing. In the Strongly Coupled Compressibility category the organizers included the high-speed turbulent flow investigations in which the interaction of shock waves could be considered an important source for production of new turbulence or for the enhancement of pre-existing turbulence. Individual papers are processed separately.

  19. Context-Aware Image Compression.

    Directory of Open Access Journals (Sweden)

    Jacky C K Chan

Full Text Available We describe a physics-based data compression method inspired by the photonic time stretch wherein information-rich portions of the data are dilated in a process that emulates the effect of group velocity dispersion on temporal signals. With this coding operation, the data can be downsampled at a lower rate than without it. In contrast to previous implementations of warped stretch compression, here the decoding can be performed without the need for phase recovery. We present rate-distortion analysis and show improvement in PSNR compared to compression via uniform downsampling.

  20. Compressive sensing for urban radar

    CERN Document Server

    Amin, Moeness

    2014-01-01

    With the emergence of compressive sensing and sparse signal reconstruction, approaches to urban radar have shifted toward relaxed constraints on signal sampling schemes in time and space, and to effectively address logistic difficulties in data acquisition. Traditionally, these challenges have hindered high resolution imaging by restricting both bandwidth and aperture, and by imposing uniformity and bounds on sampling rates.Compressive Sensing for Urban Radar is the first book to focus on a hybrid of two key areas: compressive sensing and urban sensing. It explains how reliable imaging, tracki

  1. Efficient Lossy Compression for Compressive Sensing Acquisition of Images in Compressive Sensing Imaging Systems

    Directory of Open Access Journals (Sweden)

    Xiangwei Li

    2014-12-01

Full Text Available Compressive Sensing Imaging (CSI) is a new framework for image acquisition, which enables the simultaneous acquisition and compression of a scene. Since the characteristics of Compressive Sensing (CS) acquisition are very different from traditional image acquisition, general image compression solutions may not work well. In this paper, we propose an efficient lossy compression solution for CS acquisition of images by considering the distinctive features of CSI. First, we design an adaptive compressive sensing acquisition method for images according to the sampling rate, which achieves better CS reconstruction quality for the acquired image. Second, we develop a universal quantization for the obtained CS measurements from CS acquisition without any a priori information about the captured image. Finally, we apply these two methods in the CSI system for efficient lossy compression of CS acquisition. Simulation results demonstrate that the proposed solution improves the rate-distortion performance by 0.4–2 dB compared with the current state-of-the-art, while maintaining a low computational complexity.
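The universal (signal-independent) quantization step can be illustrated with a plain uniform quantizer applied to stand-in measurements. This is a generic sketch, not the authors' scheme; the step sizes are arbitrary, and the well-known step²/12 high-rate distortion approximation is used as a sanity check:

```python
import numpy as np

def uniform_quantize(y, step):
    # Midtread uniform quantizer: the integer indices q are what would be
    # entropy-coded and transmitted; q * step is the dequantized value.
    q = np.round(y / step).astype(np.int64)
    return q, q * step

rng = np.random.default_rng(7)
y = rng.standard_normal(4096) * 5.0   # stand-in for CS measurements

results = {}
for step in (0.1, 0.5, 2.0):
    _, y_hat = uniform_quantize(y, step)
    results[step] = np.mean((y_hat - y) ** 2)
    # For a fine quantizer, MSE is approximately step**2 / 12
    print(f"step={step}: MSE={results[step]:.5f} (step^2/12={step**2 / 12:.5f})")
```

A coarser step means fewer distinct indices (lower rate) at the cost of higher distortion, which is exactly the rate-distortion trade-off the abstract evaluates.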

  2. Compressed gas fuel storage system

    Science.gov (United States)

    Wozniak, John J.; Tiller, Dale B.; Wienhold, Paul D.; Hildebrand, Richard J.

    2001-01-01

    A compressed gas vehicle fuel storage system comprised of a plurality of compressed gas pressure cells supported by shock-absorbing foam positioned within a shape-conforming container. The container is dimensioned relative to the compressed gas pressure cells whereby a radial air gap surrounds each compressed gas pressure cell. The radial air gap allows pressure-induced expansion of the pressure cells without resulting in the application of pressure to adjacent pressure cells or physical pressure to the container. The pressure cells are interconnected by a gas control assembly including a thermally activated pressure relief device, a manual safety shut-off valve, and means for connecting the fuel storage system to a vehicle power source and a refueling adapter. The gas control assembly is enclosed by a protective cover attached to the container. The system is attached to the vehicle with straps to enable the chassis to deform as intended in a high-speed collision.

  3. Compressed sensing for distributed systems

    CERN Document Server

    Coluccia, Giulio; Magli, Enrico

    2015-01-01

    This book presents a survey of the state-of-the art in the exciting and timely topic of compressed sensing for distributed systems. It has to be noted that, while compressed sensing has been studied for some time now, its distributed applications are relatively new. Remarkably, such applications are ideally suited to exploit all the benefits that compressed sensing can provide. The objective of this book is to provide the reader with a comprehensive survey of this topic, from the basic concepts to different classes of centralized and distributed reconstruction algorithms, as well as a comparison of these techniques. This book collects different contributions on these aspects. It presents the underlying theory in a complete and unified way for the first time, presenting various signal models and their use cases. It contains a theoretical part collecting latest results in rate-distortion analysis of distributed compressed sensing, as well as practical implementations of algorithms obtaining performance close to...

  4. Nonlinear compression of optical solitons

    Indian Academy of Sciences (India)

The governing equation for nonlinear pulse propagation is the nonlinear Schrödinger (NLS) equation [1]. Optical pulse compression finds important applications in optical fibres.

  5. Research on the compressive strength of a passenger vehicle roof

    Science.gov (United States)

    Zhao, Guanglei; Cao, Jianxiao; Liu, Tao; Yang, Na; Zhao, Hongguang

    2017-05-01

To study the compressive strength of a passenger vehicle roof, this paper simulates the static crush of the roof and analyzes the stress and deformation of the roof under load in accordance with the Roof Crush Resistance of Passenger Cars standard (GB26134-2010). It then optimizes the main load-bearing parts, pillar A, pillar B and the roof rail, during the static crush process. The result shows that the thickness of pillar A and the roof rail has a significant influence on the compressive strength of the roof, while that of pillar B has only a minor influence.

  6. Metal hydride hydrogen compression: recent advances and future prospects

    Science.gov (United States)

    Yartys, Volodymyr A.; Lototskyy, Mykhaylo; Linkov, Vladimir; Grant, David; Stuart, Alastair; Eriksen, Jon; Denys, Roman; Bowman, Robert C.

    2016-04-01

    Metal hydride (MH) thermal sorption compression is one of the more important applications of the MHs. The present paper reviews recent advances in the field based on the analysis of the fundamental principles of this technology. The performances when boosting hydrogen pressure, along with two- and three-step compression units, are analyzed. The paper includes also a theoretical modelling of a two-stage compressor aimed at describing the performance of the experimentally studied systems, their optimization and design of more advanced MH compressors. Business developments in the field are reviewed for the Norwegian company HYSTORSYS AS and the South African Institute for Advanced Materials Chemistry. Finally, future prospects are outlined presenting the role of the MH compression in the overall development of the hydrogen-driven energy systems. The work is based on the analysis of the development of the technology in Europe, USA and South Africa.
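The pressure boost of a single MH stage follows from the van 't Hoff relation between the hydride plateau pressure and temperature. The sketch below uses illustrative, made-up enthalpy and entropy values loosely typical of an AB5-type alloy — not data from the paper — to show how heating the bed multiplies the delivery pressure:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def plateau_pressure(dH, dS, T):
    # Van 't Hoff relation for the metal hydride plateau pressure,
    # referenced to P0 = 1 bar:  ln P = -dH/(R*T) + dS/R
    return math.exp(-dH / (R * T) + dS / R)

# Hypothetical desorption parameters (endothermic desorption)
dH = 30_000.0   # J/mol
dS = 108.0      # J/(mol K)

p_cold = plateau_pressure(dH, dS, 293.0)   # hydrogen absorbed at 20 C
p_hot = plateau_pressure(dH, dS, 423.0)    # released at 150 C
print(f"P(293 K) = {p_cold:.1f} bar, P(423 K) = {p_hot:.1f} bar, "
      f"single-stage ratio ~ {p_hot / p_cold:.0f}")
```

Chaining two or three alloys with staggered plateau pressures, as the review discusses, multiplies these single-stage ratios.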

  7. 29 CFR 1917.154 - Compressed air.

    Science.gov (United States)

    2010-07-01

    ... 29 Labor 7 2010-07-01 2010-07-01 false Compressed air. 1917.154 Section 1917.154 Labor Regulations...) MARINE TERMINALS Related Terminal Operations and Equipment § 1917.154 Compressed air. Employees shall be... this part during cleaning with compressed air. Compressed air used for cleaning shall not exceed a...

  8. Double-compression method for biomedical images

    Science.gov (United States)

    Antonenko, Yevhenii A.; Mustetsov, Timofey N.; Hamdi, Rami R.; Małecka-Massalska, Teresa; Orshubekov, Nurbek; DzierŻak, RóŻa; Uvaysova, Svetlana

    2017-08-01

This paper describes a double compression method (DCM) for biomedical images. A comparison of the compression factors achieved by JPEG, PNG and the developed DCM was carried out. The main purpose of the DCM is to compress medical images while preserving the key points that carry diagnostic information. To estimate the minimum compression factor, an analysis of the coding of a random-noise image is presented.
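The random-noise bound mentioned above can be demonstrated directly: a noise image is essentially incompressible, so it fixes the minimum compression factor a lossless coder can reach, while structured content compresses far better. A minimal sketch using zlib (not the authors' DCM), with invented 64x64 test "images":

```python
import zlib
import numpy as np

rng = np.random.default_rng(0)
n = 64 * 64

# Random-noise "image": near-incompressible, so it bounds the minimum
# compression factor achievable without loss.
noise = rng.integers(0, 256, n, dtype=np.uint8).tobytes()
# Smooth gradient "image": highly redundant, compresses well.
smooth = np.tile(np.arange(64, dtype=np.uint8), 64).tobytes()

cf_noise = len(noise) / len(zlib.compress(noise, 9))
cf_smooth = len(smooth) / len(zlib.compress(smooth, 9))
print(f"compression factor: noise {cf_noise:.2f}, smooth {cf_smooth:.2f}")
```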

  9. A biomechanical model of mammographic compressions.

    Science.gov (United States)

    Chung, J H; Rajagopal, V; Nielsen, P M F; Nash, M P

    2008-02-01

    A number of biomechanical models have been proposed to improve nonrigid registration techniques for multimodal breast image alignment. A deformable breast model may also be useful for overcoming difficulties in interpreting 2D X-ray projections (mammograms) of 3D volumes (breast tissues). If a deformable model could accurately predict the shape changes that breasts undergo during mammography, then the model could serve to localize suspicious masses (visible in mammograms) in the unloaded state, or in any other deformed state required for further investigations (such as biopsy or other medical imaging modalities). In this paper, we present a validation study that was conducted in order to develop a biomechanical model based on the well-established theory of continuum mechanics (finite elasticity theory with contact mechanics) and demonstrate its use for this application. Experimental studies using gel phantoms were conducted to test the accuracy in predicting mammographic-like deformations. The material properties of the gel phantom were estimated using a nonlinear optimization process, which minimized the errors between the experimental and the model-predicted surface data by adjusting the parameter associated with the neo-Hookean constitutive relation. Two compressions (the equivalent of cranio-caudal and medio-lateral mammograms) were performed on the phantom, and the corresponding deformations were recorded using a MRI scanner. Finite element simulations were performed to mimic the experiments using the estimated material properties with appropriate boundary conditions. The simulation results matched the experimental recordings of the deformed phantom, with a sub-millimeter root-mean-square error for each compression state. Having now validated our finite element model of breast compression, the next stage is to apply the model to clinical images.

  10. Evaluation of a new image compression technique

    International Nuclear Information System (INIS)

    Algra, P.R.; Kroon, H.M.; Noordveld, R.B.; DeValk, J.P.J.; Seeley, G.W.; Westerink, P.H.

    1988-01-01

    The authors present the evaluation of a new image compression technique, subband coding using vector quantization, on 44 CT examinations of the upper abdomen. Three independent radiologists reviewed the original images and compressed versions. The compression ratios used were 16:1 and 20:1. Receiver operating characteristic analysis showed no difference in diagnostic content between the originals and their compressed versions. Subjective visibility of anatomic structures was equal. Except for a few 20:1 compressed images, the observers could not distinguish compressed versions from original images. They conclude that subband coding using vector quantization is a valuable method for data compression in CT scans of the abdomen.

  11. Efficient transmission of compressed data for remote volume visualization.

    Science.gov (United States)

    Krishnan, Karthik; Marcellin, Michael W; Bilgin, Ali; Nadar, Mariappan S

    2006-09-01

    One of the goals of telemedicine is to enable remote visualization and browsing of medical volumes. There is a need to employ scalable compression schemes and efficient client-server models to obtain interactivity and an enhanced viewing experience. First, we present a scheme that uses JPEG2000 and JPIP (JPEG2000 Interactive Protocol) to transmit data in a multi-resolution and progressive fashion. The server exploits the spatial locality offered by the wavelet transform and packet indexing information to transmit, in so far as possible, compressed volume data relevant to the client's query. Once the client identifies its volume of interest (VOI), the volume is refined progressively within the VOI from an initial lossy to a final lossless representation. Contextual background information can also be made available, with quality fading away from the VOI. Second, we present a prioritization scheme that enables the client to progressively visualize scene content from a compressed file. In our specific example, the client is able to make requests to progressively receive data corresponding to any tissue type. The server is now capable of reordering the same compressed data file on the fly to serve data packets prioritized as per the client's request. Lastly, we describe the effect of compression parameters on compression ratio, decoding times and interactivity. We also present suggestions for optimizing JPEG2000 for remote volume visualization and volume browsing applications. The resulting system is ideally suited for client-server applications with the server maintaining the compressed volume data, to be browsed by a client under a low-bandwidth constraint.

  12. Word aligned bitmap compression method, data structure, and apparatus

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Kesheng; Shoshani, Arie; Otoo, Ekow

    2004-12-14

    The Word-Aligned Hybrid (WAH) bitmap compression method and data structure is a relatively efficient method for searching and performing logical, counting, and pattern location operations upon large datasets. The technique is comprised of a data structure and methods that are optimized for computational efficiency by using the WAH compression method, which typically takes advantage of the target computing system's native word length. WAH is particularly apropos to infrequently varying databases, including those found in the on-line analytical processing (OLAP) industry, due to the increased computational efficiency of the WAH compressed bitmap index. Some commercial database products already include some version of a bitmap index, which could possibly be replaced by the WAH bitmap compression techniques for potentially increased operation speed, as well as increased efficiencies in constructing compressed bitmaps. Combined together, this technique may be particularly useful for real-time business intelligence. Additional WAH applications may include scientific modeling, such as climate and combustion simulations, to minimize search time for analysis and subsequent data visualization.
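
    The word-aligned run-length idea above can be sketched in a few lines. The following is a simplified illustration, not the patented method itself: each 32-bit compressed word is either a literal carrying 31 raw bits, or a fill word encoding a run of identical all-zero or all-one 31-bit groups, so operations can proceed one machine word at a time.

```python
# Simplified WAH-style bitmap compression (illustrative sketch only).
# A compressed word is one of:
#   literal: MSB 0, low 31 bits hold raw bitmap bits
#   fill:    MSB 1, bit 30 = fill value, low 30 bits = run length in groups

def wah_compress(bits):
    """Compress a list of 0/1 bits into 32-bit WAH-style words."""
    words = []
    padded = bits + [0] * (-len(bits) % 31)  # pad to whole 31-bit groups
    for i in range(0, len(padded), 31):
        value = 0
        for b in padded[i:i + 31]:
            value = (value << 1) | b
        if value in (0, 0x7FFFFFFF):  # all-zero or all-one group
            fill = 1 if value else 0
            last = words[-1] if words else 0
            if (last >> 31) and (last >> 30) & 1 == fill and (last & 0x3FFFFFFF) < 0x3FFFFFFF:
                words[-1] = last + 1  # extend the current fill run
            else:
                words.append((1 << 31) | (fill << 30) | 1)  # new fill, length 1
        else:
            words.append(value)  # literal word
    return words

def wah_decompress(words, nbits):
    """Expand WAH-style words back into a list of bits."""
    bits = []
    for w in words:
        if w >> 31:  # fill word
            bits.extend([(w >> 30) & 1] * (31 * (w & 0x3FFFFFFF)))
        else:        # literal word, MSB-first payload
            bits.extend((w >> k) & 1 for k in range(30, -1, -1))
    return bits[:nbits]
```

    Because fills align to 31-bit group boundaries, bitwise AND/OR of two compressed bitmaps can proceed word by word without full decompression, which is where the speed of WAH-style indexes comes from.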

  14. Building indifferentiable compression functions from the PGV compression functions

    DEFF Research Database (Denmark)

    Gauravaram, P.; Bagheri, Nasour; Knudsen, Lars Ramkilde

    2016-01-01

    Preneel, Govaerts and Vandewalle (PGV) analysed the security of single-block-length block cipher based compression functions assuming that the underlying block cipher has no weaknesses. They showed that 12 out of 64 possible compression functions are collision and (second) preimage resistant. Black, Rogaway and Shrimpton formally proved this result in the ideal cipher model. However, in the indifferentiability security framework introduced by Maurer, Renner and Holenstein, all these 12 schemes are easily differentiable from a fixed input-length random oracle (FIL-RO) even when their underlying block…

  15. Compression of Probabilistic XML Documents

    Science.gov (United States)

    Veldman, Irma; de Keijzer, Ander; van Keulen, Maurice

    Database techniques to store, query and manipulate data that contains uncertainty receive increasing research interest. Such UDBMSs can be classified according to their underlying data model: relational, XML, or RDF. We focus on uncertain XML DBMSs, with the Probabilistic XML model (PXML) of [10,9] as a representative example. The size of a PXML document is obviously a factor in performance. There are PXML-specific techniques to reduce the size, such as a push-down mechanism that produces equivalent but more compact PXML documents. It can only be applied, however, where possibilities are dependent. For normal XML documents there also exist several techniques for compressing a document. Since Probabilistic XML is (a special form of) normal XML, it might benefit from these methods even more. In this paper, we show that existing compression mechanisms can be combined with PXML-specific compression techniques. We also show that the best compression rates are obtained by combining a PXML-specific technique with a rather simple generic DAG-compression technique.

  16. Plasma heating by adiabatic compression

    International Nuclear Information System (INIS)

    Ellis, R.A. Jr.

    1972-01-01

    These two lectures will cover the following three topics: (i) The application of adiabatic compression to toroidal devices is reviewed. The special case of adiabatic compression in tokamaks is considered in more detail, including a discussion of the equilibrium, scaling laws, and heating effects. (ii) The ATC (Adiabatic Toroidal Compressor) device which was completed in May 1972, is described in detail. Compression of a tokamak plasma across a static toroidal field is studied in this device. The device is designed to produce a pre-compression plasma with a major radius of 17 cm, toroidal field of 20 kG, and current of 90 kA. The compression leads to a plasma with major radius of 38 cm and minor radius of 10 cm. Scaling laws imply a density increase of a factor 6, temperature increase of a factor 3, and current increase of a factor 2.4. An additional feature of ATC is that it is a large tokamak which operates without a copper shell. (iii) Data which show that the expected MHD behavior is largely observed is presented and discussed. (U.S.)

  17. Concurrent data compression and protection

    International Nuclear Information System (INIS)

    Saeed, M.

    2009-01-01

    Data compression techniques involve transforming data of a given format, called the source message, to data of a smaller-sized format, called the codeword. The primary objective of data encryption is to ensure the security of data if it is intercepted by an eavesdropper. It transforms data of a given format, called plaintext, to another format, called ciphertext, using an encryption key or keys. Thus, when compression and encryption are combined, compression must precede encryption, because all compression techniques rely heavily on the redundancies that are inherently part of regular text or speech. The aim of this research is to combine the process of compression (using an existing scheme) with a new encryption scheme that is compatible with the encoding scheme embedded in the encoder. The technique proposed by the authors is novel and highly secure. The deployment of a 'sentinel marker' enhances the security of the proposed TR-One algorithm from 2^44 ciphertexts to 2^44 + 2^20 ciphertexts, thus imposing extra challenges on intruders. (author)

  18. Radiologic image compression -- A review

    International Nuclear Information System (INIS)

    Wong, S.; Huang, H.K.; Zaremba, L.; Gooden, D.

    1995-01-01

    The objective of radiologic image compression is to reduce the data volume of, and to achieve a low bit rate in, the digital representation of radiologic images without perceived loss of image quality. However, the demand for transmission bandwidth and storage space in the digital radiology environment, especially picture archiving and communication systems (PACS) and teleradiology, and the proliferating use of various imaging modalities, such as magnetic resonance imaging, computed tomography, ultrasonography, nuclear medicine, computed radiography, and digital subtraction angiography, continue to outstrip the capabilities of existing technologies. The availability of lossy coding techniques for clinical diagnoses further raises many complex legal and regulatory issues. This paper reviews the recent progress of lossless and lossy radiologic image compression and presents the legal challenges of using lossy compression of medical records. To do so, the authors first describe the fundamental concepts of radiologic imaging and digitization. Then, the authors examine current compression technology in the field of medical imaging and discuss important regulatory policies and legal questions facing the use of compression in this field. The authors conclude with a summary of future challenges and research directions. 170 refs.

  19. 30 CFR 75.1730 - Compressed air; general; compressed air systems.

    Science.gov (United States)

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Compressed air; general; compressed air systems... Compressed air; general; compressed air systems. (a) All pressure vessels shall be constructed, installed... Safety and Health district office. (b) Compressors and compressed-air receivers shall be equipped with...

  20. Optimum SNR data compression in hardware using an Eigencoil array.

    Science.gov (United States)

    King, Scott B; Varosi, Steve M; Duensing, G Randy

    2010-05-01

    With the number of receivers available on clinical MRI systems now ranging from 8 to 32 channels, data compression methods are being explored to lessen the demands on the computer for data handling and processing. Although software-based methods of compression after reception lessen computational requirements, a hardware-based method before the receiver also reduces the number of receive channels required. An eight-channel Eigencoil array is constructed by placing a hardware radiofrequency signal combiner inline after preamplification, before the receiver system. The Eigencoil array produces signal-to-noise ratio (SNR) of an optimal reconstruction using a standard sum-of-squares reconstruction, with peripheral SNR gains of 30% over the standard array. The concept of "receiver channel reduction" or MRI data compression is demonstrated, with optimal SNR using only four channels, and with a three-channel Eigencoil, superior sum-of-squares SNR was achieved over the standard eight-channel array. A three-channel Eigencoil portion of a product neurovascular array confirms in vivo SNR performance and demonstrates parallel MRI up to R = 3. This SNR-preserving data compression method advantageously allows users of MRI systems with fewer receiver channels to achieve the SNR of higher-channel MRI systems. (c) 2010 Wiley-Liss, Inc.
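
    A rough software analogue of the eigen-mode combination can be sketched with numpy. The paper implements this in RF hardware before the receivers; the code below is only an illustration of the underlying linear algebra, and the function names are ours: diagonalize the measured channel noise covariance and keep the leading eigen-modes.

```python
import numpy as np

def eigencoil_modes(noise_samples):
    """Eigen-decompose the channel noise covariance.
    noise_samples: (n_channels, n_samples) noise-only acquisition."""
    R = np.cov(noise_samples)            # (n_channels, n_channels)
    evals, evecs = np.linalg.eigh(R)     # eigh returns ascending order
    order = np.argsort(evals)[::-1]      # sort modes by decreasing noise power
    return evals[order], evecs[:, order]

def compress_channels(data, evecs, keep):
    """Combine n channels into `keep` eigen-mode channels (the hardware
    version applies these weights with an RF combiner after the preamps)."""
    return evecs[:, :keep].T @ data
```

    After this projection the mode channels are mutually noise-decorrelated, so a plain sum-of-squares reconstruction over a few modes can approach the SNR of the optimal full-array combination.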

  1. Image compression using moving average histogram and RBF network

    International Nuclear Information System (INIS)

    Khowaja, S.; Ismaili, I.A.

    2015-01-01

    Modernization and globalization have made multimedia technology one of the fastest-growing fields in recent times, but optimal use of bandwidth and storage remains a topic that attracts the research community. Considering that images have a lion's share in multimedia communication, efficient image compression techniques have become a basic need for optimal use of bandwidth and space. This paper proposes a novel method for image compression based on the fusion of a moving average histogram and an RBF (Radial Basis Function) network. The proposed technique employs the concept of reducing color intensity levels using a moving average histogram technique, followed by correction of the color intensity levels using RBF networks at the reconstruction phase. Existing methods have used low-resolution images for testing purposes, but the proposed method has been tested on various image resolutions to allow a clear assessment of the technique. The proposed method has been tested on 35 images of varying resolution and compared with existing algorithms in terms of CR (Compression Ratio), MSE (Mean Square Error), PSNR (Peak Signal to Noise Ratio), and computational complexity. The outcome shows that the proposed methodology is a better trade-off in terms of compression ratio, PSNR (which determines image quality), and computational complexity. (author)

  2. A Coded Aperture Compressive Imaging Array and Its Visual Detection and Tracking Algorithms for Surveillance Systems

    Directory of Open Access Journals (Sweden)

    Hanxiao Wu

    2012-10-01

    In this paper, we propose an application of a compressive imaging system to the problem of wide-area video surveillance. A parallel coded aperture compressive imaging system is proposed to relax the high-resolution coded mask requirements and facilitate the storage of the projection matrix. Random Gaussian, Toeplitz and binary phase coded masks are utilized to obtain the compressive sensing images. The corresponding motion target detection and tracking algorithms, operating directly on the compressive sampling images, are developed. A mixture of Gaussian distributions is applied in the compressive image space to model the background image and for foreground detection. For each motion target in the compressive sampling domain, a compressive feature dictionary spanned by target templates and noise templates is sparsely represented. An l1 optimization algorithm is used to solve for the sparse template coefficients. Experimental results demonstrate that a low-dimensional compressed imaging representation is sufficient to determine spatial motion targets. Compared with the random Gaussian and Toeplitz phase masks, motion detection algorithms using a random binary phase mask yield better detection results; however, the random Gaussian and Toeplitz phase masks achieve higher-resolution reconstructed images. Our tracking algorithm achieves a real-time speed that is up to 10 times faster than that of the l1 tracker without any optimization.
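
    The l1 step that such a tracking pipeline relies on can be sketched independently of the imaging hardware. Below is a minimal iterative soft-thresholding (ISTA) solver for min_x 0.5·||y − Dx||² + λ·||x||₁, where D would stack the target and noise templates; it is a generic illustration of the sparse-coding step, not the authors' implementation.

```python
import numpy as np

def ista(D, y, lam, n_iter=500):
    """Iterative soft-thresholding for the l1-regularized least squares
    problem min_x 0.5*||y - D x||^2 + lam*||x||_1."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = D.T @ (D @ x - y)              # gradient of the quadratic term
        z = x - g / L                      # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x
```

    In a tracker of this kind, the candidate whose template coefficients dominate the sparse solution would be selected; only the solver itself is shown here.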

  3. Rectal perforation by compressed air.

    Science.gov (United States)

    Park, Young Jin

    2017-07-01

    As the use of compressed air in industrial work has increased, so has the risk of associated pneumatic injury from its improper use. However, damage to the large intestine caused by compressed air is uncommon. Herein a case of pneumatic rupture of the rectum is described. The patient was admitted to the Emergency Room complaining of abdominal pain and distension. His colleague had triggered a compressed air nozzle over his buttock. On arrival, vital signs were stable but physical examination revealed peritoneal irritation and marked distension of the abdomen. Computed tomography showed a large volume of air in the peritoneal cavity and subcutaneous emphysema at the perineum. A rectal perforation was found at laparotomy and the Hartmann procedure was performed.

  4. Compact torus compression of microwaves

    International Nuclear Information System (INIS)

    Hewett, D.W.; Langdon, A.B.

    1985-01-01

    The possibility that a compact torus (CT) might be accelerated to large velocities has been suggested by Hartman and Hammer. If this is feasible, one application of these moving CTs might be to compress microwaves. The proposed mechanism is that a coaxial vacuum region in front of a CT is prefilled with a number of normal electromagnetic modes on which the CT impinges. A crucial assumption of this proposal is that the CT excludes the microwaves and therefore compresses them. Should the microwaves penetrate the CT, compression efficiency is diminished and significant CT heating results. MFE applications in the same parameter regime have found electromagnetic radiation capable of penetrating, heating, and driving currents. We report here a cursory investigation of rf penetration using a 1-D version of a direct implicit PIC code.

  5. Premixed autoignition in compressible turbulence

    Science.gov (United States)

    Konduri, Aditya; Kolla, Hemanth; Krisman, Alexander; Chen, Jacqueline

    2016-11-01

    Prediction of chemical ignition delay in an autoignition process is critical in combustion systems like compression ignition engines and gas turbines. Often, ignition delay times measured in simple homogeneous experiments or homogeneous calculations are not representative of actual autoignition processes in complex turbulent flows. This is due to the presence of turbulent mixing, which results in fluctuations in thermodynamic properties as well as chemical composition. In the present study the effect of fluctuations of thermodynamic variables on the ignition delay is quantified with direct numerical simulations of compressible isotropic turbulence. A premixed syngas-air mixture is used to remove the effects of inhomogeneity in the chemical composition. Preliminary results show a significant spatial variation in the ignition delay time. We analyze the topology of autoignition kernels and identify the influence of extreme events resulting from compressibility and intermittency. The dependence of ignition delay time on Reynolds and turbulent Mach numbers is also quantified. Supported by Basic Energy Sciences, Dept of Energy, United States.

  6. Lossless Compression of Broadcast Video

    DEFF Research Database (Denmark)

    Martins, Bo; Eriksen, N.; Faber, E.

    1998-01-01

    We investigate several techniques for lossless and near-lossless compression of broadcast video. The emphasis is placed on the emerging international standard for compression of continuous-tone still images, JPEG-LS, due to its excellent compression performance and moderate complexity. Except for one… cannot be expected to code losslessly at a rate of 125 Mbit/s. We investigate the rate and quality effects of quantization using standard JPEG-LS quantization and two new techniques: visual quantization and trellis quantization. Visual quantization is not part of baseline JPEG-LS, but is applicable in the framework of JPEG-LS. Visual tests show that this quantization technique gives much better quality than standard JPEG-LS quantization. Trellis quantization is a process by which the original image is altered in such a way as to make lossless JPEG-LS encoding more effective. For JPEG-LS and visual…

  7. Efficient access of compressed data

    International Nuclear Information System (INIS)

    Eggers, S.J.; Shoshani, A.

    1980-06-01

    A compression technique is presented that allows a high degree of compression but requires only logarithmic access time. The technique is a constant suppression scheme, and is most applicable to stable databases whose distribution of constants is fairly clustered. Furthermore, the repeated use of the technique permits the suppression of a multiple number of different constants. Of particular interest is the application of the constant suppression technique to databases whose composite key is made up of an incomplete cross product of several attribute domains. The scheme for compressing the full cross product composite key is well known. This paper, however, also handles the general, incomplete case by applying the constant suppression technique in conjunction with a composite key suppression scheme.
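
    The idea can be illustrated with a small sketch (ours, not the paper's exact data structure): runs of the suppressed constant are recorded as (start, length) pairs, and a positional lookup binary-searches the run index. With a precomputed prefix sum of run lengths the lookup is fully logarithmic; this sketch recomputes the sum for brevity.

```python
import bisect

def suppress_constant(values, constant):
    """Split `values` into the non-constant entries (in order) and a list
    of (start, length) runs where the suppressed constant occurred."""
    kept, runs = [], []
    i = 0
    while i < len(values):
        if values[i] == constant:
            j = i
            while j < len(values) and values[j] == constant:
                j += 1
            runs.append((i, j - i))
            i = j
        else:
            kept.append(values[i])
            i += 1
    return kept, runs

def lookup(kept, runs, constant, pos):
    """Value at original position `pos`, via binary search on run starts."""
    starts = [s for s, _ in runs]
    k = bisect.bisect_right(starts, pos) - 1
    if k >= 0:
        start, length = runs[k]
        if pos < start + length:
            return constant
        # positions suppressed before `pos` (a prefix-sum array would
        # replace this linear pass)
        removed = sum(length for _, length in runs[:k + 1])
    else:
        removed = 0
    return kept[pos - removed]
```

    Applying `suppress_constant` repeatedly with different constants gives the multiple-constant suppression mentioned above.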

  8. Compressibility of rotating black holes

    International Nuclear Information System (INIS)

    Dolan, Brian P.

    2011-01-01

    Interpreting the cosmological constant as a pressure, whose thermodynamically conjugate variable is a volume, modifies the first law of black hole thermodynamics. Properties of the resulting thermodynamic volume are investigated: the compressibility and the speed of sound of the black hole are derived in the case of nonpositive cosmological constant. The adiabatic compressibility vanishes for a nonrotating black hole and is maximal in the extremal case, comparable with, but still less than, that of a cold neutron star. A speed of sound v_s is associated with the adiabatic compressibility, which is equal to c for a nonrotating black hole and decreases as the angular momentum is increased. An extremal black hole has v_s² = 0.9c² when the cosmological constant vanishes, and more generally v_s is bounded below by c/√2.

  9. Compressive behavior of fine sand.

    Energy Technology Data Exchange (ETDEWEB)

    Martin, Bradley E. (Air Force Research Laboratory, Eglin, FL); Kabir, Md. E. (Purdue University, West Lafayette, IN); Song, Bo; Chen, Wayne (Purdue University, West Lafayette, IN)

    2010-04-01

    The compressive mechanical response of fine sand is experimentally investigated. The strain rate, initial density, stress state, and moisture level are systematically varied. A Kolsky bar was modified to obtain uniaxial and triaxial compressive response at high strain rates. A controlled loading pulse allows the specimen to acquire stress equilibrium and constant strain rates. The results show that the compressive response of the fine sand is not sensitive to strain rate under the loading conditions in this study, but is significantly dependent on the moisture content, initial density and lateral confinement. Partially saturated sand is more compliant than dry sand. Similar trends were reported in the quasi-static regime for experiments conducted at comparable specimen conditions. The sand becomes stiffer as initial density and/or confinement pressure increases. The sand particle size becomes smaller after hydrostatic pressure and smaller still after dynamic axial loading.

  10. Coding Strategies and Implementations of Compressive Sensing

    Science.gov (United States)

    Tsai, Tsung-Han

    …information from a noisy environment. Using engineering efforts to accomplish the same task usually requires multiple detectors, advanced computational algorithms, or artificial intelligence systems. Compressive acoustic sensing incorporates acoustic metamaterials in compressive sensing theory to emulate the abilities of sound localization and selective attention. This research investigates and optimizes the sensing capacity and the spatial sensitivity of the acoustic sensor. The well-modeled acoustic sensor allows localizing multiple speakers in both stationary and dynamic auditory scenes, and distinguishing mixed conversations from independent sources with a high audio recognition rate.

  11. Correlations between quality indexes of chest compression.

    Science.gov (United States)

    Zhang, Feng-Ling; Yan, Li; Huang, Su-Fang; Bai, Xiang-Jun

    2013-01-01

    Cardiopulmonary resuscitation (CPR) is a kind of emergency treatment for cardiopulmonary arrest, and chest compression is the most important and necessary part of CPR. The American Heart Association published the new Guidelines for Cardiopulmonary Resuscitation and Emergency Cardiovascular Care in 2010 and demanded better performance of chest compression practice, especially in compression depth and rate. The current study aimed to explore the relationships among quality indexes of chest compression and to identify the key points in chest compression training and practice. A total of 219 healthcare workers received chest compression training using the Laerdal ACLS advanced life support resuscitation model. The quality indexes of chest compression, including compression hands placement, compression rate, compression depth, and chest wall recoil, as well as self-reported fatigue time, were monitored by the Laerdal Computer Skills and Reporting System. The quality of chest compression was related to the gender of the compressor. The indexes in males, including self-reported fatigue time, the accuracy of compression depth, the compression rate, and the accuracy of compression rate, were higher than those in females. However, the accuracy of chest recoil was higher in females than in males. The quality indexes of chest compression were correlated with each other. The self-reported fatigue time was related to all the indexes except the compression rate. It is necessary to offer CPR training courses regularly. In clinical practice, it might be better to change the practitioner before fatigue, especially for female or weaker practitioners. In training projects, more attention should be paid to the control of compression rate, in order to delay fatigue, guarantee sufficient compression depth and improve the quality of chest compression.

  12. Optimization and Optimal Control

    CERN Document Server

    Chinchuluun, Altannar; Enkhbat, Rentsen; Tseveendorj, Ider

    2010-01-01

    During the last four decades there has been remarkable development in optimization and optimal control. Due to its wide variety of applications, many scientists and researchers have paid attention to the fields of optimization and optimal control. A huge number of new theoretical, algorithmic, and computational results have been observed in the last few years. This book gives the latest advances, and due to the rapid development of these fields, there are no other recent publications on the same topics. Key features: provides a collection of selected contributions giving a state-of-the-art account…

  13. Optimally Stopped Optimization

    Science.gov (United States)

    Vinci, Walter; Lidar, Daniel

    We combine the fields of heuristic optimization and optimal stopping. We propose a strategy for benchmarking randomized optimization algorithms that minimizes the expected total cost for obtaining a good solution with an optimal number of calls to the solver. To do so, rather than letting the objective function alone define a cost to be minimized, we introduce a further cost-per-call of the algorithm. We show that this problem can be formulated using optimal stopping theory. The expected cost is a flexible figure of merit for benchmarking probabilistic solvers that can be computed when the optimal solution is not known, and that avoids the biases and arbitrariness that affect other measures. The optimal stopping formulation of benchmarking directly leads to a real-time, optimal-utilization strategy for probabilistic optimizers with practical impact. We apply our formulation to benchmark the performance of a D-Wave 2X quantum annealer and the HFS solver, a specialized classical heuristic algorithm designed for low tree-width graphs. On a set of frustrated-loop instances with planted solutions defined on up to N = 1098 variables, the D-Wave device is between one and two orders of magnitude faster than the HFS solver.
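
    The flavor of the expected-cost formulation can be reproduced with a toy model (our illustration, not the authors' benchmark code): each solver call costs c and returns a random objective value, and the classic optimal-stopping rule accepts the first value below a threshold t chosen so that the expected one-call improvement E[max(t − X, 0)] equals c.

```python
import random

def optimal_threshold(sample, c):
    """Bisect for the threshold t with E[max(t - X, 0)] = c, estimated
    from an i.i.d. sample of solver outcomes."""
    lo, hi = min(sample), max(sample)
    for _ in range(60):
        t = (lo + hi) / 2
        gain = sum(max(t - x, 0.0) for x in sample) / len(sample)
        if gain < c:
            lo = t  # expected improvement too small: raise the threshold
        else:
            hi = t
    return (lo + hi) / 2

def run_until_accept(solver, t, c):
    """Call the solver until some value falls at or below t; return the
    realized total cost (best objective found plus c per call)."""
    calls, best = 0, float("inf")
    while best > t:
        best = min(best, solver())
        calls += 1
    return best + c * calls
```

    For a Uniform(0, 1) solver the rule gives t = √(2c), so with c = 0.05 roughly every third call is accepted.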

  14. Excessive chest compression rate is associated with insufficient compression depth in prehospital cardiac arrest

    NARCIS (Netherlands)

    Monsieurs, Koenraad G.; De Regge, Melissa; Vansteelandt, Kristof; De Smet, Jeroen; Annaert, Emmanuel; Lemoyne, Sabine; Kalmar, Alain F.; Calle, Paul A.

    2012-01-01

    Background and goal of study: The relationship between chest compression rate and compression depth is unknown. In order to characterise this relationship, we performed an observational study in prehospital cardiac arrest patients. We hypothesised that faster compressions are associated with decreased compression depth.

  15. The impact of chest compression rates on quality of chest compressions : a manikin study

    OpenAIRE

    Field, Richard A.; Soar, Jasmeet; Davies, Robin P.; Akhtar, Naheed; Perkins, Gavin D.

    2012-01-01

    Purpose: Chest compressions are often performed at a variable rate during cardiopulmonary resuscitation (CPR). The effect of compression rate on other chest compression quality variables (compression depth, duty-cycle, leaning, performance decay over time) is unknown. This randomised controlled cross-over manikin study examined the effect of different compression rates on the other chest compression quality variables. Methods: Twenty healthcare professionals performed two minutes of co…

  16. Lossless image data sequence compression using optimal context quantization

    DEFF Research Database (Denmark)

    Forchhammer, Søren; WU, Xiaolin; Andersen, Jakob Dahl

    2001-01-01

    Context based entropy coding often faces the conflict of a desire for large templates and the problem of context dilution. We consider the problem of finding the quantizer Q that quantizes the K-dimensional causal context Ci=(X(i-t1), X(i-t2), …, X(i-tK)) of a source symbol Xi into one of M...

  17. Optimization of compressive strength of zirconia based dental ...

    Indian Academy of Sciences (India)

    Administrator

    have been done on size, shape and types of filler particles to be incorporated into .... A magnetic stirrer was used to agitate the mixture for 3 days, resulting in a ... filled with the test composite and all air bubbles were excluded. A second piece ...

  18. Optimization of injection pressure for a compression ignition engine ...

    African Journals Online (AJOL)

    user

    injection and atomization and contributes to incomplete combustion, nozzle clogging, ... this non edible oil may be an appropriate substitute for diesel fuel. ... The effect of injector opening pressure on the performance of a jatropha oil fuelled ...

  19. optimizing compression zone of flanged hollow cored concrete

    African Journals Online (AJOL)

    eobe

    Equations were derived using double integration method to determine the moment of to determine the ... and stiffness-mass ratios and the desire to reduce the mass or weight .... derived by direct observation of test samples in the laboratory ...

  20. JPEG2000 COMPRESSION CODING USING HUMAN VISUAL SYSTEM MODEL

    Institute of Scientific and Technical Information of China (English)

    Xiao Jiang; Wu Chengke

    2005-01-01

In order to apply the Human Visual System (HVS) model to the JPEG2000 standard, several implementation alternatives are discussed and a new scheme of visual optimization is introduced that modifies the slope of rate-distortion. The novelty is that the method of visual weighting does not lift the coefficients in the wavelet domain, but is instead implemented through code stream organization. It retains all the features of Embedded Block Coding with Optimized Truncation (EBCOT), such as resolution progressiveness, good robustness against error bit spread and compatibility with lossless compression. Performing better than other methods, it keeps the shortest standard codestream and decompression time and supports VIsual Progressive (VIP) coding.

  1. Compressing Data Cube in Parallel OLAP Systems

    Directory of Open Access Journals (Sweden)

    Frank Dehne

    2007-03-01

This paper proposes an efficient algorithm to compress the cubes during parallel data cube generation. This low-overhead compression mechanism provides block-by-block and record-by-record compression using tuple difference coding techniques, thereby maximizing the compression ratio and minimizing the decompression penalty at run-time. The experimental results demonstrate that the typical compression ratio is about 30:1 without sacrificing running time. This paper also demonstrates that the compression method is suitable for the Hilbert Space Filling Curve, a mechanism widely used in multi-dimensional indexing.

  2. CEPRAM: Compression for Endurance in PCM RAM

    OpenAIRE

    González Alberquilla, Rodrigo; Castro Rodríguez, Fernando; Piñuel Moreno, Luis; Tirado Fernández, Francisco

    2017-01-01

    We deal with the endurance problem of Phase Change Memories (PCM) by proposing Compression for Endurance in PCM RAM (CEPRAM), a technique to elongate the lifespan of PCM-based main memory through compression. We introduce a total of three compression schemes based on already existent schemes, but targeting compression for PCM-based systems. We do a two-level evaluation. First, we quantify the performance of the compression, in terms of compressed size, bit-flips and how they are affected by e...

  3. Entropy, Coding and Data Compression

    Indian Academy of Sciences (India)

Entropy, Coding and Data Compression. S Natarajan. General Article, Resonance – Journal of Science Education, Volume 6, Issue 9, September 2001, pp 35-45. Permanent link: https://www.ias.ac.in/article/fulltext/reso/006/09/0035-0045

  4. Shock compression of synthetic opal

    International Nuclear Information System (INIS)

    Inoue, A; Okuno, M; Okudera, H; Mashimo, T; Omurzak, E; Katayama, S; Koyano, M

    2010-01-01

Structural change of synthetic opal under shock-wave compression up to 38.1 GPa has been investigated using SEM, X-ray diffraction (XRD), and infrared (IR) and Raman spectroscopies. The obtained information may indicate that the dehydration and polymerization of surface silanol due to high shock and residual temperatures are very important factors in the structural evolution of synthetic opal under shock compression. Synthetic opal loses its opalescence at shock pressures of 10.9 and 18.4 GPa. At 18.4 GPa, dehydration and polymerization of surface silanol and transformation of the network structure may occur simultaneously. The 4-membered rings of TO4 tetrahedra in as-synthesized opal may be relaxed to larger rings, such as 6-membered rings, by the high residual temperature. Therefore, the residual temperature may be significantly high even at 18.4 GPa of shock compression. At 23.9 GPa, the opal sample recovered its opalescence, the origin of which may be a layer structure formed by shock compression. Finally, the sample fuses at the very high residual temperature reached at 38.1 GPa and the structure approaches that of fused SiO2 glass. However, internal silanol groups still remain even at 38.1 GPa.

  5. Range Compressed Holographic Aperture Ladar

    Science.gov (United States)

    2017-06-01

The entropy saturation behavior of the estimator is analytically described. Simultaneous range-compression and aperture synthesis is experimentally... [report section titles: 2.1 Circular and Inverse-Circular HAL; 2.3 Single Aperture, Multi-λ Imaging; 2.4 Simultaneous Range...]

  6. Compression of Probabilistic XML documents

    NARCIS (Netherlands)

    Veldman, Irma

    2009-01-01

    Probabilistic XML (PXML) files resulting from data integration can become extremely large, which is undesired. For XML there are several techniques available to compress the document and since probabilistic XML is in fact (a special form of) XML, it might benefit from these methods even more. In

  7. Shock compression of synthetic opal

    Science.gov (United States)

    Inoue, A.; Okuno, M.; Okudera, H.; Mashimo, T.; Omurzak, E.; Katayama, S.; Koyano, M.

    2010-03-01

Structural change of synthetic opal under shock-wave compression up to 38.1 GPa has been investigated using SEM, X-ray diffraction (XRD), and infrared (IR) and Raman spectroscopies. The obtained information may indicate that the dehydration and polymerization of surface silanol due to high shock and residual temperatures are very important factors in the structural evolution of synthetic opal under shock compression. Synthetic opal loses its opalescence at shock pressures of 10.9 and 18.4 GPa. At 18.4 GPa, dehydration and polymerization of surface silanol and transformation of the network structure may occur simultaneously. The 4-membered rings of TO4 tetrahedra in as-synthesized opal may be relaxed to larger rings, such as 6-membered rings, by the high residual temperature. Therefore, the residual temperature may be significantly high even at 18.4 GPa of shock compression. At 23.9 GPa, the opal sample recovered its opalescence, the origin of which may be a layer structure formed by shock compression. Finally, the sample fuses at the very high residual temperature reached at 38.1 GPa and the structure approaches that of fused SiO2 glass. However, internal silanol groups still remain even at 38.1 GPa.

  8. Shock compression of synthetic opal

    Energy Technology Data Exchange (ETDEWEB)

    Inoue, A; Okuno, M; Okudera, H [Department of Earth Sciences, Kanazawa University Kanazawa, Ishikawa, 920-1192 (Japan); Mashimo, T; Omurzak, E [Shock Wave and Condensed Matter Research Center, Kumamoto University, Kumamoto, 860-8555 (Japan); Katayama, S; Koyano, M, E-mail: okuno@kenroku.kanazawa-u.ac.j [JAIST, Nomi, Ishikawa, 923-1297 (Japan)

    2010-03-01

Structural change of synthetic opal under shock-wave compression up to 38.1 GPa has been investigated using SEM, X-ray diffraction (XRD), and infrared (IR) and Raman spectroscopies. The obtained information may indicate that the dehydration and polymerization of surface silanol due to high shock and residual temperatures are very important factors in the structural evolution of synthetic opal under shock compression. Synthetic opal loses its opalescence at shock pressures of 10.9 and 18.4 GPa. At 18.4 GPa, dehydration and polymerization of surface silanol and transformation of the network structure may occur simultaneously. The 4-membered rings of TO4 tetrahedra in as-synthesized opal may be relaxed to larger rings, such as 6-membered rings, by the high residual temperature. Therefore, the residual temperature may be significantly high even at 18.4 GPa of shock compression. At 23.9 GPa, the opal sample recovered its opalescence, the origin of which may be a layer structure formed by shock compression. Finally, the sample fuses at the very high residual temperature reached at 38.1 GPa and the structure approaches that of fused SiO2 glass. However, internal silanol groups still remain even at 38.1 GPa.

  9. Large breast compressions: Observations and evaluation of simulations

    Energy Technology Data Exchange (ETDEWEB)

    Tanner, Christine; White, Mark; Guarino, Salvatore; Hall-Craggs, Margaret A.; Douek, Michael; Hawkes, David J. [Centre of Medical Image Computing, UCL, London WC1E 6BT, United Kingdom and Computer Vision Laboratory, ETH Zuerich, 8092 Zuerich (Switzerland); Centre of Medical Image Computing, UCL, London WC1E 6BT (United Kingdom); Department of Surgery, UCL, London W1P 7LD (United Kingdom); Department of Imaging, UCL Hospital, London NW1 2BU (United Kingdom); Department of Surgery, UCL, London W1P 7LD (United Kingdom); Centre of Medical Image Computing, UCL, London WC1E 6BT (United Kingdom)

    2011-02-15

    Purpose: Several methods have been proposed to simulate large breast compressions such as those occurring during x-ray mammography. However, the evaluation of these methods against real data is rare. The aim of this study is to learn more about the deformation behavior of breasts and to assess a simulation method. Methods: Magnetic resonance (MR) images of 11 breasts before and after applying a relatively large in vivo compression in the medial direction were acquired. Nonrigid registration was employed to study the deformation behavior. Optimal material properties for finite element modeling were determined and their prediction performance was assessed. The realism of simulated compressions was evaluated by comparing the breast shapes on simulated and real mammograms. Results: Following image registration, 19 breast compressions from 8 women were studied. An anisotropic deformation behavior, with a reduced elongation in the anterior-posterior direction and an increased stretch in the inferior-superior direction was observed. Using finite element simulations, the performance of isotropic and transverse isotropic material models to predict the displacement of internal landmarks was compared. Isotropic materials reduced the mean displacement error of the landmarks from 23.3 to 4.7 mm, on average, after optimizing material properties with respect to breast surface alignment and image similarity. Statistically significantly smaller errors were achieved with transverse isotropic materials (4.1 mm, P=0.0045). Homogeneous material models performed substantially worse (transverse isotropic: 5.5 mm; isotropic: 6.7 mm). Of the parameters varied, the amount of anisotropy had the greatest influence on the results. Optimal material properties varied less when grouped by patient rather than by compression magnitude (mean: 0.72 vs 1.44). Employing these optimal materials for simulating mammograms from ten MR breast images of a different cohort resulted in more realistic breast

  10. Large breast compressions: observations and evaluation of simulations.

    Science.gov (United States)

    Tanner, Christine; White, Mark; Guarino, Salvatore; Hall-Craggs, Margaret A; Douek, Michael; Hawkes, David J

    2011-02-01

    Several methods have been proposed to simulate large breast compressions such as those occurring during x-ray mammography. However, the evaluation of these methods against real data is rare. The aim of this study is to learn more about the deformation behavior of breasts and to assess a simulation method. Magnetic resonance (MR) images of 11 breasts before and after applying a relatively large in vivo compression in the medial direction were acquired. Nonrigid registration was employed to study the deformation behavior. Optimal material properties for finite element modeling were determined and their prediction performance was assessed. The realism of simulated compressions was evaluated by comparing the breast shapes on simulated and real mammograms. Following image registration, 19 breast compressions from 8 women were studied. An anisotropic deformation behavior, with a reduced elongation in the anterior-posterior direction and an increased stretch in the inferior-superior direction was observed. Using finite element simulations, the performance of isotropic and transverse isotropic material models to predict the displacement of internal landmarks was compared. Isotropic materials reduced the mean displacement error of the landmarks from 23.3 to 4.7 mm, on average, after optimizing material properties with respect to breast surface alignment and image similarity. Statistically significantly smaller errors were achieved with transverse isotropic materials (4.1 mm, P=0.0045). Homogeneous material models performed substantially worse (transverse isotropic: 5.5 mm; isotropic: 6.7 mm). Of the parameters varied, the amount of anisotropy had the greatest influence on the results. Optimal material properties varied less when grouped by patient rather than by compression magnitude (mean: 0.72 vs. 1.44). Employing these optimal materials for simulating mammograms from ten MR breast images of a different cohort resulted in more realistic breast shapes than when using

  11. Large breast compressions: Observations and evaluation of simulations

    International Nuclear Information System (INIS)

    Tanner, Christine; White, Mark; Guarino, Salvatore; Hall-Craggs, Margaret A.; Douek, Michael; Hawkes, David J.

    2011-01-01

    Purpose: Several methods have been proposed to simulate large breast compressions such as those occurring during x-ray mammography. However, the evaluation of these methods against real data is rare. The aim of this study is to learn more about the deformation behavior of breasts and to assess a simulation method. Methods: Magnetic resonance (MR) images of 11 breasts before and after applying a relatively large in vivo compression in the medial direction were acquired. Nonrigid registration was employed to study the deformation behavior. Optimal material properties for finite element modeling were determined and their prediction performance was assessed. The realism of simulated compressions was evaluated by comparing the breast shapes on simulated and real mammograms. Results: Following image registration, 19 breast compressions from 8 women were studied. An anisotropic deformation behavior, with a reduced elongation in the anterior-posterior direction and an increased stretch in the inferior-superior direction was observed. Using finite element simulations, the performance of isotropic and transverse isotropic material models to predict the displacement of internal landmarks was compared. Isotropic materials reduced the mean displacement error of the landmarks from 23.3 to 4.7 mm, on average, after optimizing material properties with respect to breast surface alignment and image similarity. Statistically significantly smaller errors were achieved with transverse isotropic materials (4.1 mm, P=0.0045). Homogeneous material models performed substantially worse (transverse isotropic: 5.5 mm; isotropic: 6.7 mm). Of the parameters varied, the amount of anisotropy had the greatest influence on the results. Optimal material properties varied less when grouped by patient rather than by compression magnitude (mean: 0.72 vs 1.44). Employing these optimal materials for simulating mammograms from ten MR breast images of a different cohort resulted in more realistic breast

  12. Effect of compressive force on PEM fuel cell performance

    Science.gov (United States)

    MacDonald, Colin Stephen

Polymer electrolyte membrane (PEM) fuel cells possess the potential, as a zero-emission power source, to replace the internal combustion engine as the primary option for transportation applications. Though there are a number of obstacles to widespread PEM fuel cell commercialization, such as high cost and limited durability, there has been significant progress in the field towards achieving this goal. Experimental testing and analysis of fuel cell performance has been an important tool in this advancement. Experimental studies of the PEM fuel cell not only identify unfiltered performance response to manipulation of variables, but also aid in the advancement of fuel cell modelling by allowing for validation of computational schemes. The compressive force used to contain a fuel cell assembly can play a significant role in how effectively the cell functions, the most obvious example being to ensure proper sealing within the cell. Compression can have a considerable impact on cell performance beyond the sealing aspects: the force can affect the ability to deliver reactants and the electrochemical functions of the cell by altering the layers in the cell susceptible to this force. For these reasons an experimental study was undertaken, presented in this thesis, with specific focus placed on cell compression, in order to study its effect on reactant flow fields and performance response. The goal of the thesis was to develop a consistent and accurate general test procedure for the experimental analysis of a PEM fuel cell in order to analyse the effects of compression on performance. The factors potentially affecting cell performance, which were a function of compression, were identified as: (1) sealing and surface contact; (2) pressure drop across the flow channel; (3) porosity of the GDL. Each factor was analysed independently in order to determine its individual contribution to changes in performance. An optimal degree of compression was identified for the cell configuration in

  13. Force balancing in mammographic compression

    International Nuclear Information System (INIS)

    Branderhorst, W.; Groot, J. E. de; Lier, M. G. J. T. B. van; Grimbergen, C. A.; Neeter, L. M. F. H.; Heeten, G. J. den; Neeleman, C.

    2016-01-01

    Purpose: In mammography, the height of the image receptor is adjusted to the patient before compressing the breast. An inadequate height setting can result in an imbalance between the forces applied by the image receptor and the paddle, causing the clamped breast to be pushed up or down relative to the body during compression. This leads to unnecessary stretching of the skin and other tissues around the breast, which can make the imaging procedure more painful for the patient. The goal of this study was to implement a method to measure and minimize the force imbalance, and to assess its feasibility as an objective and reproducible method of setting the image receptor height. Methods: A trial was conducted consisting of 13 craniocaudal mammographic compressions on a silicone breast phantom, each with the image receptor positioned at a different height. The image receptor height was varied over a range of 12 cm. In each compression, the force exerted by the compression paddle was increased up to 140 N in steps of 10 N. In addition to the paddle force, the authors measured the force exerted by the image receptor and the reaction force exerted on the patient body by the ground. The trial was repeated 8 times, with the phantom remounted at a slightly different orientation and position between the trials. Results: For a given paddle force, the obtained results showed that there is always exactly one image receptor height that leads to a balance of the forces on the breast. For the breast phantom, deviating from this specific height increased the force imbalance by 9.4 ± 1.9 N/cm (6.7%) for 140 N paddle force, and by 7.1 ± 1.6 N/cm (17.8%) for 40 N paddle force. The results also show that in situations where the force exerted by the image receptor is not measured, the craniocaudal force imbalance can still be determined by positioning the patient on a weighing scale and observing the changes in displayed weight during the procedure. Conclusions: In mammographic breast

  14. Designing Neutralized Drift Compression for Focusing of Intense Ion Beam Pulses in a Background Plasma

    International Nuclear Information System (INIS)

    Kaganovich, I.D.; Davidson, R.C.; Dorf, M.; Startsev, E.A.; Barnard, J.J.; Friedman, A.; Lee, E.P.; Lidia, S.M.; Logan, B.G.; Roy, P.K.; Seidl, P.A.; Welch, D.R.; Sefkow, A.B.

    2009-01-01

    Neutralized drift compression offers an effective method for particle beam focusing and current amplification. In neutralized drift compression, a linear radial and longitudinal velocity drift is applied to a beam pulse, so that the beam pulse compresses as it drifts in the drift-compression section. The beam intensity can increase more than a factor of 100 in both the radial and longitudinal directions, resulting in more than 10,000 times increase in the beam number density during this process. The self-electric and self-magnetic fields can prevent tight ballistic focusing and have to be neutralized by supplying neutralizing electrons. This paper presents a survey of the present theoretical understanding of the drift compression process and plasma neutralization of intense particle beams. The optimal configuration of focusing and neutralizing elements is discussed in this paper.
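The drift-compression mechanism described above, a linear velocity tilt applied along the pulse so that the tail catches up with the head as it drifts, can be illustrated with a toy one-dimensional ballistic sketch (all numbers invented; self-fields and plasma neutralization are ignored):

```python
# Toy 1-D sketch of drift compression: each slice of the pulse gets a
# velocity proportional to minus its position (tail faster, head slower),
# so after ballistic drift the slices converge and the pulse shortens.

def pulse_length(positions):
    return max(positions) - min(positions)

z0 = [i * 0.1 for i in range(-10, 11)]   # initial slice positions (m)
v = [-0.05 * z for z in z0]              # linear velocity tilt (1/s gradient)
t = 10.0                                 # drift time (s)
z1 = [z + vi * t for z, vi in zip(z0, v)]

assert pulse_length(z1) < pulse_length(z0)  # the pulse has compressed
```

With these invented numbers each position is scaled by (1 - 0.05 * t) = 0.5, so the pulse length halves; in the actual scheme the tilt and drift length are chosen so the focus occurs at the target, with neutralizing electrons suppressing the space-charge fields that would otherwise stop the compression.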

  15. Data compression techniques and the ACR-NEMA digital interface communications standard

    International Nuclear Information System (INIS)

    Zielonka, J.S.; Blume, H.; Hill, D.; Horil, S.C.; Lodwick, G.S.; Moore, J.; Murphy, L.L.; Wake, R.; Wallace, G.

    1987-01-01

Data compression offers the possibility of achieving high, effective information transfer rates between devices and of efficient utilization of digital storage devices in meeting department-wide archiving needs. Accordingly, the ACR-NEMA Digital Imaging and Communications Standards Committee established a Working Group to develop a means to incorporate the optimal use of a wide variety of current compression techniques while remaining compatible with the standard. This proposed method allows the use of public domain techniques, predetermined methods between devices already aware of the selected algorithm, and the ability for the originating device to specify algorithms and parameters prior to transmitting compressed data. Because of the latter capability, the technique has the potential for supporting many compression algorithms not yet developed or in common use. Both lossless and lossy methods can be implemented. In addition to a description of the overall structure of this proposal, several examples using current compression algorithms are given

  16. Adiabatic compression of ion rings

    International Nuclear Information System (INIS)

    Larrabee, D.A.; Lovelace, R.V.

    1982-01-01

A study has been made of the compression of collisionless ion rings in an increasing external magnetic field, B_e = ẑB_e(t), by numerically implementing a previously developed kinetic theory of ring compression. The theory is general in that there is no limitation on the ring geometry or the compression ratio, λ ≡ B_e(final)/B_e(initial) ≥ 1. However, the motion of a single particle in an equilibrium is assumed to be completely characterized by its energy H and canonical angular momentum P_θ, with the absence of a third constant of the motion. The present computational work assumes that plasma currents are negligible, as is appropriate for a low-temperature collisional plasma. For a variety of initial ring geometries and initial distribution functions (having a single value of P_θ), it is found that the parameters for "fat", small-aspect-ratio rings follow general scaling laws over a large range of compression ratios, 1 < λ < 10^3: the ring radius varies as λ^(-1/2); the average single-particle energy as λ^0.72; the root mean square energy spread as λ^1.1; and the total current as λ^0.79. The field reversal parameter is found to saturate at values typically between 2 and 3. For large compression ratios the current density is found to "hollow out". This hollowing tends to improve the interchange stability of an embedded low-β plasma. The implications of these scaling laws for fusion reactor systems are discussed

  17. Effect of compressibility on the hypervelocity penetration

    Science.gov (United States)

    Song, W. J.; Chen, X. W.; Chen, P.

    2018-02-01

    We further consider the effect of rod strength by employing the compressible penetration model to study the effect of compressibility on hypervelocity penetration. Meanwhile, we define different instances of penetration efficiency in various modified models and compare these penetration efficiencies to identify the effects of different factors in the compressible model. To systematically discuss the effect of compressibility in different metallic rod-target combinations, we construct three cases, i.e., the penetrations by the more compressible rod into the less compressible target, rod into the analogously compressible target, and the less compressible rod into the more compressible target. The effects of volumetric strain, internal energy, and strength on the penetration efficiency are analyzed simultaneously. It indicates that the compressibility of the rod and target increases the pressure at the rod/target interface. The more compressible rod/target has larger volumetric strain and higher internal energy. Both the larger volumetric strain and higher strength enhance the penetration or anti-penetration ability. On the other hand, the higher internal energy weakens the penetration or anti-penetration ability. The two trends conflict, but the volumetric strain dominates in the variation of the penetration efficiency, which would not approach the hydrodynamic limit if the rod and target are not analogously compressible. However, if the compressibility of the rod and target is analogous, it has little effect on the penetration efficiency.

  18. Bronchoscopic guidance of endovascular stenting limits airway compression.

    Science.gov (United States)

    Ebrahim, Mohammad; Hagood, James; Moore, John; El-Said, Howaida

    2015-04-01

    Bronchial compression as a result of pulmonary artery and aortic arch stenting may cause significant respiratory distress. We set out to limit airway narrowing by endovascular stenting, by using simultaneous flexible bronchoscopy and graduated balloon stent dilatation, or balloon angioplasty to determine maximum safe stent diameter. Between August 2010 and August 2013, patients with suspected airway compression by adjacent vascular structures, underwent CT or a 3D rotational angiogram to evaluate the relationship between the airway and the blood vessels. If these studies showed close proximity of the stenosed vessel and the airway, simultaneous bronchoscopy and graduated stent re-dilation or graduated balloon angioplasty were performed. Five simultaneous bronchoscopy and interventional catheterization procedures were performed in four patients. Median age/weight was 33 (range 9-49) months and 14 (range 7.6-24) kg, respectively. Three had hypoplastic left heart syndrome, and one had coarctation of the aorta (CoA). All had confirmed or suspected left main stem bronchial compression. In three procedures, serial balloon dilatation of a previously placed stent in the CoA was performed and bronchoscopy was used to determine the safest largest diameter. In the other two procedures, balloon testing with simultaneous bronchoscopy was performed to determine the stent size that would limit compression of the adjacent airway. In all cases, simultaneous bronchoscopy allowed selection of an ideal caliber of the stent that optimized vessel diameter while minimizing compression of the adjacent airway. In cases at risk for airway compromise, flexible bronchoscopy is a useful tool to guide endovascular stenting. Maximum safe stent diameter can be determined without risking catastrophic airway compression. © 2014 Wiley Periodicals, Inc.

  19. Photon level chemical classification using digital compressive detection

    International Nuclear Information System (INIS)

    Wilcox, David S.; Buzzard, Gregery T.; Lucier, Bradley J.; Wang Ping; Ben-Amotz, Dor

    2012-01-01

    Highlights: ► A new digital compressive detection strategy is developed. ► Chemical classification demonstrated using as few as ∼10 photons. ► Binary filters are optimal when taking few measurements. - Abstract: A key bottleneck to high-speed chemical analysis, including hyperspectral imaging and monitoring of dynamic chemical processes, is the time required to collect and analyze hyperspectral data. Here we describe, both theoretically and experimentally, a means of greatly speeding up the collection of such data using a new digital compressive detection strategy. Our results demonstrate that detecting as few as ∼10 Raman scattered photons (in as little time as ∼30 μs) can be sufficient to positively distinguish chemical species. This is achieved by measuring the Raman scattered light intensity transmitted through programmable binary optical filters designed to minimize the error in the chemical classification (or concentration) variables of interest. The theoretical results are implemented and validated using a digital compressive detection instrument that incorporates a 785 nm diode excitation laser, digital micromirror spatial light modulator, and photon counting photodiode detector. Samples consisting of pairs of liquids with different degrees of spectral overlap (including benzene/acetone and n-heptane/n-octane) are used to illustrate how the accuracy of the present digital compressive detection method depends on the correlation coefficients of the corresponding spectra. Comparisons of measured and predicted chemical classification score plots, as well as linear and non-linear discriminant analyses, demonstrate that this digital compressive detection strategy is Poisson photon noise limited and outperforms total least squares-based compressive detection with analog filters.
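The binary-filter idea above can be sketched numerically. This is a hypothetical toy with synthetic five-channel spectra and invented function names: the real instrument optimizes its programmable filters to minimize classification error under Poisson photon noise, whereas here the filter simply passes the channels where one reference spectrum exceeds the other, so a single summed-intensity measurement separates the two species.

```python
# Toy sketch of compressive detection with a binary spectral filter.

def make_binary_filter(spec_a, spec_b):
    """0/1 filter passing channels where species A outshines species B."""
    return [1 if a > b else 0 for a, b in zip(spec_a, spec_b)]

def measure(spectrum, filt):
    # One compressive measurement: total intensity through the filter.
    return sum(s * f for s, f in zip(spectrum, filt))

def classify(sample, spec_a, spec_b, filt):
    m = measure(sample, filt)
    ia, ib = measure(spec_a, filt), measure(spec_b, filt)
    # Assign the sample to whichever reference intensity is closer.
    return "A" if abs(m - ia) < abs(m - ib) else "B"

spec_a = [0.1, 0.8, 0.9, 0.2, 0.1]   # synthetic reference spectrum A
spec_b = [0.7, 0.2, 0.1, 0.6, 0.5]   # synthetic reference spectrum B
filt = make_binary_filter(spec_a, spec_b)
assert classify([0.12, 0.75, 0.85, 0.25, 0.12], spec_a, spec_b, filt) == "A"
```

The point of the compressive strategy is that only this one scalar intensity is ever detected, rather than the full spectrum, which is why classification can succeed with on the order of ten photons.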

  20. Quality Aware Compression of Electrocardiogram Using Principal Component Analysis.

    Science.gov (United States)

    Gupta, Rajarshi

    2016-05-01

    Electrocardiogram (ECG) compression finds wide application in various patient monitoring purposes. Quality control in ECG compression ensures reconstruction quality and its clinical acceptance for diagnostic decision making. In this paper, a quality aware compression method of single lead ECG is described using principal component analysis (PCA). After pre-processing, beat extraction and PCA decomposition, two independent quality criteria, namely, bit rate control (BRC) or error control (EC) criteria were set to select optimal principal components, eigenvectors and their quantization level to achieve desired bit rate or error measure. The selected principal components and eigenvectors were finally compressed using a modified delta and Huffman encoder. The algorithms were validated with 32 sets of MIT Arrhythmia data and 60 normal and 30 sets of diagnostic ECG data from PTB Diagnostic ECG data ptbdb, all at 1 kHz sampling. For BRC with a CR threshold of 40, an average Compression Ratio (CR), percentage root mean squared difference normalized (PRDN) and maximum absolute error (MAE) of 50.74, 16.22 and 0.243 mV respectively were obtained. For EC with an upper limit of 5 % PRDN and 0.1 mV MAE, the average CR, PRDN and MAE of 9.48, 4.13 and 0.049 mV respectively were obtained. For mitdb data 117, the reconstruction quality could be preserved up to CR of 68.96 by extending the BRC threshold. The proposed method yields better results than recently published works on quality controlled ECG compression.
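The error-control flavour of PCA-based beat compression can be sketched as follows. This is a minimal illustration assuming time-aligned beats in a NumPy array; the threshold, array shapes and function names are invented, and the paper's quantization, delta and Huffman coding stages are omitted. The idea is to keep the fewest principal components whose reconstruction meets an error target.

```python
# Hedged sketch of PCA beat compression with an error-control criterion.
import numpy as np

def compress_beats(beats, prd_limit=5.0):
    """beats: (n_beats, n_samples) array of time-aligned ECG beats."""
    mean = beats.mean(axis=0)
    centered = beats - mean
    # SVD of the centered beat matrix; rows of Vt are the eigenvectors
    # (principal directions) of the beat ensemble.
    U, S, Vt = np.linalg.svd(centered, full_matrices=False)
    for k in range(1, len(S) + 1):
        recon = mean + (U[:, :k] * S[:k]) @ Vt[:k]
        prd = 100 * np.linalg.norm(beats - recon) / np.linalg.norm(beats)
        if prd <= prd_limit:
            break
    # Store k score vectors and k eigenvectors instead of every sample.
    return mean, U[:, :k] * S[:k], Vt[:k]

def reconstruct(mean, scores, eigvecs):
    return mean + scores @ eigvecs
```

Because consecutive beats are highly redundant, k is typically far smaller than the number of samples per beat, which is where the compression ratio comes from; a bit-rate-control variant would instead pick k (and quantizer levels) to hit a target compression ratio.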

  1. Lossless compression of waveform data for efficient storage and transmission

    International Nuclear Information System (INIS)

    Stearns, S.D.; Tan, Li Zhe; Magotra, Neeraj

    1993-01-01

Compression of waveform data is significant in many engineering and research areas since it can reduce data storage and transmission bandwidth requirements. For example, seismic data are widely recorded and transmitted so that analysis can be performed on large amounts of data for numerous applications such as petroleum exploration, determination of the earth's core structure, seismic event detection and discrimination of underground nuclear explosions, etc. This paper describes a technique for lossless waveform data compression. The technique consists of two stages: the first stage is a modified form of linear prediction with discrete coefficients, and the second stage is bi-level sequence coding. The linear predictor generates an error or residue sequence in such a way that exact reconstruction of the original data sequence can be accomplished with a simple algorithm. The residue sequence is essentially white Gaussian with seismic or other similar waveform data. Bi-level sequence coding, in which two sample sizes are chosen and the residue sequence is encoded into subsequences that alternate from one level to the other, further compresses the residue sequence. The principal feature of the two-stage data compression algorithm is that it is lossless, that is, it allows exact, bit-for-bit recovery of the original data sequence. The performance of the lossless compression algorithm at each stage is analyzed. The advantages of using bi-level sequence coding in the second stage are its simplicity of implementation, its effectiveness on data with large amplitude variations, and its near-optimal performance in encoding Gaussian sequences. Applications of the two-stage technique to typical seismic data indicate that an average number of compressed bits per sample close to the lower bound is achievable in practical situations

  2. Subsurface Profile Mapping using 3-D Compressive Wave Imaging

    Directory of Open Access Journals (Sweden)

    Hazreek Z A M

    2017-01-01

Full Text Available Geotechnical site investigation for subsurface profile mapping is commonly performed to provide valuable data for the design and construction stages, based on conventional drilling techniques. From past experience, drilling techniques, particularly the borehole method, suffer from limitations: they are expensive, time consuming and offer limited data coverage. Hence, this study performs subsurface profile mapping using 3-D compressive wave imaging in order to minimize those constraints of the conventional method. Field measurement and data analysis of the compressive wave (p-wave, vp) were performed using a seismic refraction survey (ABEM Terraloc MK 8, a 7 kg sledgehammer and 24 vertical geophones) and OPTIM (SeisOpt@Picker & SeisOpt@2D) software, respectively. Then, the 3-D compressive wave distribution of the studied subsurface was obtained using analysis with SURFER software. Based on the 3-D compressive wave image analyzed, it was found that the subsurface profile studied consists of three main layers representing top soil (vp = 376 – 600 m/s), weathered material (vp = 900 – 2600 m/s) and bedrock (vp > 3000 m/s). The thickness of each layer varied from 0 – 2 m (first layer), 2 – 20 m (second layer) and 20 m and over (third layer). Moreover, groundwater (vp = 1400 – 1600 m/s) starts to be detected at 2.0 m depth from the ground surface. This study has demonstrated that geotechnical site investigation data related to subsurface profiling can be obtained using 3-D compressive wave imaging. Furthermore, 3-D compressive wave imaging is based on a non-destructive principle of ground exploration and is thus economical, less time consuming, offers large data coverage and is sustainable for the environment.
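
    As an illustration of how the reported velocity ranges map p-wave readings to layers, the following sketch encodes the thresholds quoted in the abstract (the handling of the unreported 600 – 900 m/s and 2600 – 3000 m/s gaps, and the sample profile values, are assumptions):

```python
def classify_layer(vp):
    """Map a p-wave velocity vp (m/s) to the layer classes reported in
    the study: top soil 376-600, weathered 900-2600, bedrock >3000."""
    if vp <= 600:
        return "top soil"
    if 900 <= vp <= 2600:
        return "weathered material"
    if vp > 3000:
        return "bedrock"
    return "unclassified"  # velocities falling in the unreported gaps

# Illustrative depth (m) -> vp (m/s) samples from an imaged column.
profile = {2.0: 450, 10.0: 1800, 25.0: 3400}
layers = {depth: classify_layer(vp) for depth, vp in profile.items()}
```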

  3. Effect of raw material ratios on the compressive strength of magnesium potassium phosphate chemically bonded ceramics

    International Nuclear Information System (INIS)

    Wang, Ai-juan; Yuan, Zhi-long; Zhang, Jiao; Liu, Lin-tao; Li, Jun-ming; Liu, Zheng

    2013-01-01

The compressive strength of magnesium potassium phosphate chemically bonded ceramics is important in the biomedical field. In this work, the compressive strength of magnesium potassium phosphate chemically bonded ceramics was investigated with different liquid-to-solid and MgO-to-KH2PO4 ratios. An X-ray diffractometer was applied to characterize the phase composition. The microstructure was imaged using a scanning electron microscope. The results showed that the compressive strength of the chemically bonded ceramics increased with decreasing liquid-to-solid ratio due to the change in the packing density and the crystallinity of the hydrated product. However, with increasing MgO-to-KH2PO4 weight ratio, the compressive strength first increased and then decreased. The low compressive strength at lower MgO-to-KH2PO4 ratios might be explained by the existence of the weak phase KH2PO4, whereas the low compressive strength at higher MgO-to-KH2PO4 ratios might be caused by a lack of the bonding phase in the hydrated product. Besides, the scanning electron microscope showed that the microstructures were different in these two cases. A colloidal structure appeared for the samples with lower liquid-to-solid and higher MgO-to-KH2PO4 ratios, possibly because of the existence of amorphous hydrated products. The optimization of both liquid-to-solid and MgO-to-KH2PO4 ratios is important to improve the compressive strength of magnesium potassium phosphate chemically bonded ceramics. - Highlights: • High packing density and amorphous hydrated phase improved the compressive strength. • Residual KH2PO4 and a poor bonding phase lowered the compressive strength. • MPCBC fabricated with optimized parameters had the highest compressive strength

  4. Flux compression generators as plasma compression power sources

    International Nuclear Information System (INIS)

    Fowler, C.M.; Caird, R.S.; Erickson, D.J.; Freeman, B.L.; Thomson, D.B.; Garn, W.B.

    1979-01-01

A survey is made of applications where explosive-driven magnetic flux compression generators have been or can be used to directly power devices that produce dense plasmas. Representative examples are discussed that are specific to the theta pinch, the plasma gun, the dense plasma focus and the Z pinch. These examples are used to illustrate the high energy and power capabilities of explosive generators. An application employing a rocket-borne, generator-powered plasma gun emphasizes the size and weight potential of flux compression power supplies. Recent results from a local effort to drive a dense plasma focus are provided. Imploding liners are discussed in the context of both the theta and Z pinches

  5. ADVANCED RECIPROCATING COMPRESSION TECHNOLOGY (ARCT)

    Energy Technology Data Exchange (ETDEWEB)

    Danny M. Deffenbaugh; Klaus Brun; Ralph E. Harris; J. Pete Harrell; Robert J. Mckee; J. Jeffrey Moore; Steven J. Svedeman; Anthony J. Smalley; Eugene L. Broerman; Robert A Hart; Marybeth G. Nored; Ryan S. Gernentz; Shane P. Siebenaler

    2005-12-01

The U.S. natural gas pipeline industry is facing the twin challenges of increased flexibility and capacity expansion. To meet these challenges, the industry requires improved choices in gas compression to address new construction and enhancement of the currently installed infrastructure. The current fleet of installed reciprocating compression is primarily slow-speed integral machines. Most new reciprocating compression units are, and will continue to be, large high-speed separable machines. The major challenges with the fleet of slow-speed integral machines are limited flexibility and a large range in performance. In an attempt to increase flexibility, many operators are choosing to single-act cylinders, which reduces reliability and integrity. While the best performing units in the fleet exhibit thermal efficiencies between 90% and 92%, the low performers run as low as 50%, with the mean at about 80%. The major cause of this large disparity is installation losses in the pulsation control system. In the better performers, the losses are about evenly split between installation losses and valve losses. The major challenges for high-speed machines are cylinder nozzle pulsations, mechanical vibrations due to cylinder stretch, short valve life, and low thermal performance. To shift nozzle pulsation to higher orders, nozzles are shortened, and to dampen the amplitudes, orifices are added. The shortened nozzles result in mechanical coupling with the cylinder, thereby causing increased vibration due to the cylinder stretch mode. Valve life is even shorter than for slow-speed machines and can be on the order of a few months. The thermal efficiency is 10% to 15% lower than slow-speed equipment, with the best performance in the 75% to 80% range.
The goal of this advanced reciprocating compression program is to develop the technology for both high speed and low speed compression that will expand unit flexibility, increase thermal efficiency, and increase reliability and integrity

  6. The task of control digital image compression

    OpenAIRE

TASHMANOV E.B.; MAMATOV M.S.

    2014-01-01

In this paper we consider the relationship between control tasks and the losses introduced by image compression. The main idea of this approach is to extract the structural lines of a simplified image and to further compress the selected data

  7. Discrete Wigner Function Reconstruction and Compressed Sensing

    OpenAIRE

    Zhang, Jia-Ning; Fang, Lei; Ge, Mo-Lin

    2011-01-01

A new reconstruction method for the Wigner function is reported for quantum tomography based on compressed sensing. By analogy with computed tomography, Wigner functions for some quantum states can be reconstructed with fewer measurements using this compressed-sensing-based method.

  8. Compressibility Analysis of the Tongue During Speech

    National Research Council Canada - National Science Library

    Unay, Devrim

    2001-01-01

    .... In this paper, 3D compression and expansion analysis of the tongue will be presented. Patterns of expansion and compression have been compared for different syllables and various repetitions of each syllable...

  9. Compressed normalized block difference for object tracking

    Science.gov (United States)

    Gao, Yun; Zhang, Dengzhuo; Cai, Donglan; Zhou, Hao; Lan, Ge

    2018-04-01

Feature extraction is very important for robust, real-time tracking. Compressive sensing provides technical support for real-time feature extraction. However, all existing compressive trackers are based on the compressed Haar-like feature, and how to compress other, more discriminative high-dimensional features is worth researching. In this paper, a novel compressed normalized block difference (CNBD) feature is proposed. To resist noise effectively in the high-dimensional normalized pixel difference (NPD) feature, the normalized block difference feature extends the two pixels in the original NPD formula to two blocks. A CNBD feature is then obtained by compressing the normalized block difference feature based on compressive sensing theory, with a sparse random Gaussian matrix as the measurement matrix. Comparative experiments with 7 trackers on 20 challenging sequences showed that the tracker based on the CNBD feature performs better than the other trackers, especially the FCT tracker based on the compressed Haar-like feature, in terms of AUC, SR and precision.
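
    The compression step described, projecting a high-dimensional feature through a sparse random Gaussian measurement matrix, can be sketched as follows (a generic illustration of compressive-sensing projection; the dimensions, density and function names are hypothetical, not taken from the paper):

```python
import numpy as np

def sparse_gaussian_matrix(m, n, density=0.1, seed=0):
    # Sparse random Gaussian measurement matrix: most entries are zero,
    # the rest drawn from N(0, 1). Such matrices satisfy the restricted
    # isometry property with high probability while being cheap to apply.
    rng = np.random.default_rng(seed)
    M = np.zeros((m, n))
    mask = rng.random((m, n)) < density
    M[mask] = rng.standard_normal(int(mask.sum()))
    return M

# Compress a (simulated) high-dimensional block-difference feature vector.
phi = sparse_gaussian_matrix(50, 5000)
feature = np.random.default_rng(1).standard_normal(5000)
compressed = phi @ feature   # 50-dimensional compressed feature
```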

  10. Formation of nanosecond SBS-compressed pulses for pumping an ultra-high power parametric amplifier

    Science.gov (United States)

    Kuz’min, A. A.; Kulagin, O. V.; Rodchenkov, V. I.

    2018-04-01

    Compression of pulsed Nd : glass laser radiation under stimulated Brillouin scattering (SBS) in perfluorooctane is investigated. Compression of 16-ns pulses at a beam diameter of 30 mm is implemented. The maximum compression coefficient is 28 in the optimal range of laser pulse energies from 2 to 4 J. The Stokes pulse power exceeds that of the initial laser pulse by a factor of about 11.5. The Stokes pulse jitter (fluctuations of the Stokes pulse exit time from the compressor) is studied. The rms spread of these fluctuations is found to be 0.85 ns.

  11. Compressed Sensing and Low-Rank Matrix Decomposition in Multisource Images Fusion

    Directory of Open Access Journals (Sweden)

    Kan Ren

    2014-01-01

Full Text Available We propose a novel super-resolution multisource image fusion scheme via compressive sensing and dictionary learning theory. Under the sparsity prior of image patches and the framework of compressive sensing theory, multisource image fusion is reduced to a signal recovery problem from the compressive measurements. Then, a set of multiscale dictionaries is learned from several groups of high-resolution sample image patches via a nonlinear optimization algorithm. Moreover, a new linear weights fusion rule is proposed to obtain the high-resolution image. Experiments were conducted to investigate the performance of the proposed method, and the results prove its superiority to its counterparts.

  12. On Normalized Compression Distance and Large Malware

    OpenAIRE

    Borbely, Rebecca Schuller

    2015-01-01

    Normalized Compression Distance (NCD) is a popular tool that uses compression algorithms to cluster and classify data in a wide range of applications. Existing discussions of NCD's theoretical merit rely on certain theoretical properties of compression algorithms. However, we demonstrate that many popular compression algorithms don't seem to satisfy these theoretical properties. We explore the relationship between some of these properties and file size, demonstrating that this theoretical pro...
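
    NCD itself is straightforward to reproduce with an off-the-shelf compressor; a common formulation, here using zlib as the compressor (the choice of real-world compressor is exactly what the paper shows can break the assumed theoretical properties on large inputs):

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    # NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)),
    # where C(s) is the compressed length of s under a real compressor.
    cx = len(zlib.compress(x))
    cy = len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

a = b"the quick brown fox jumps over the lazy dog " * 40
b = bytes(range(256)) * 8
```

    In the idealized theory NCD(x, x) should be near 0, but with real compressors (bounded windows, block limits) it drifts upward as inputs grow, which is the kind of deviation at issue for large malware samples.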

  13. Image quality (IQ) guided multispectral image compression

    Science.gov (United States)

    Zheng, Yufeng; Chen, Genshe; Wang, Zhonghai; Blasch, Erik

    2016-05-01

Image compression is necessary for data transportation, which saves both transfer time and storage space. In this paper, we focus our discussion on lossy compression. There are many standard image formats and corresponding compression algorithms, for example, JPEG (DCT -- discrete cosine transform), JPEG 2000 (DWT -- discrete wavelet transform), BPG (better portable graphics) and TIFF (LZW -- Lempel-Ziv-Welch). The image quality (IQ) of the decompressed image will be measured by numerical metrics such as root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and the structural similarity (SSIM) index. Given an image and a specified IQ, we investigate how to select a compression method and its parameters to achieve an expected compression. Our scenario consists of 3 steps. The first step is to compress a set of images of interest by varying parameters and compute their IQs for each compression method. The second step is to create several regression models per compression method after analyzing the IQ measurement versus compression parameter over a number of compressed images. The third step is to compress the given image with the specified IQ using the selected compression method (JPEG, JPEG2000, BPG, or TIFF) according to the regressed models. The IQ may be specified by a compression ratio (e.g., 100), in which case we select the compression method of the highest IQ (SSIM or PSNR). Or the IQ may be specified by an IQ metric (e.g., SSIM = 0.8, or PSNR = 50), in which case we select the compression method of the highest compression ratio. Our experiments on thermal (long-wave infrared) grayscale images showed very promising results.
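
    The IQ metrics driving the selection are standard; below is a minimal sketch of PSNR-based method selection (the RMSE/PSNR formulas are standard, while `best_method`, the candidate names and the noisy stand-in for a lossy round trip are hypothetical illustrations of step three, not the paper's regression models):

```python
import numpy as np

def psnr(orig, recon, peak=255.0):
    # PSNR (dB) from the root-mean-square error between two images.
    rmse = np.sqrt(np.mean((orig.astype(float) - recon.astype(float)) ** 2))
    return float("inf") if rmse == 0 else 20 * np.log10(peak / rmse)

def best_method(orig, candidates):
    # Among decompressed candidates, pick the method whose output
    # has the highest image quality for the given original.
    return max(candidates, key=lambda name: psnr(orig, candidates[name]))

rng = np.random.default_rng(0)
orig = rng.integers(0, 256, (32, 32))
candidates = {
    "method_a": orig.copy(),                                              # lossless round trip
    "method_b": np.clip(orig + rng.integers(-20, 21, (32, 32)), 0, 255),  # noisy round trip
}
```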

  14. Speech Data Compression using Vector Quantization

    OpenAIRE

    H. B. Kekre; Tanuja K. Sarode

    2008-01-01

Mostly, transforms are used for speech data compression; these are lossy algorithms. Such algorithms are tolerable for speech data compression since the loss in quality is not perceived by the human ear. However, vector quantization (VQ) has the potential to give more data compression while maintaining the same quality. In this paper we propose a speech data compression algorithm using the vector quantization technique. We have used the VQ algorithms LBG, KPE and FCG. The results table s...
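
    The LBG algorithm mentioned builds a VQ codebook by repeatedly splitting and refining codevectors; a compact sketch (a generic LBG implementation with multiplicative splitting, not the authors' code; the 2-D toy data stand in for speech feature vectors, and the codebook size is assumed to be a power of two):

```python
import numpy as np

def lbg_codebook(data, size, eps=1e-3, iters=20):
    # LBG: start from the global centroid, repeatedly split every
    # codevector by a +/- eps multiplicative perturbation, then refine
    # the doubled codebook with k-means-style iterations.
    codebook = data.mean(axis=0, keepdims=True)
    while len(codebook) < size:
        codebook = np.vstack([codebook * (1 + eps), codebook * (1 - eps)])
        for _ in range(iters):
            dists = ((data[:, None, :] - codebook[None]) ** 2).sum(-1)
            nearest = dists.argmin(1)
            for k in range(len(codebook)):
                if (nearest == k).any():
                    codebook[k] = data[nearest == k].mean(0)
    return codebook

# Two well-separated clusters of 2-D "speech feature" vectors.
rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0, 0.1, (50, 2)), rng.normal(5, 0.1, (50, 2))])
codebook = lbg_codebook(data, 2)
```

    Each input vector is then transmitted as the index of its nearest codevector, which is where the compression comes from.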

  15. Considerations and Algorithms for Compression of Sets

    DEFF Research Database (Denmark)

    Larsson, Jesper

We consider compression of unordered sets of distinct elements. After a discussion of the general problem, we focus on compressing sets of fixed-length bitstrings in the presence of statistical information. We survey techniques from previous work, suggesting some adjustments, and propose a novel...... compression algorithm that allows transparent incorporation of various estimates for probability distribution. Our experimental results allow the conclusion that set compression can benefit from incorporating statistics, using our method or variants of previously known techniques....

  16. A biological compression model and its applications.

    Science.gov (United States)

    Cao, Minh Duc; Dix, Trevor I; Allison, Lloyd

    2011-01-01

    A biological compression model, expert model, is presented which is superior to existing compression algorithms in both compression performance and speed. The model is able to compress whole eukaryotic genomes. Most importantly, the model provides a framework for knowledge discovery from biological data. It can be used for repeat element discovery, sequence alignment and phylogenetic analysis. We demonstrate that the model can handle statistically biased sequences and distantly related sequences where conventional knowledge discovery tools often fail.

  17. FRESCO: Referential compression of highly similar sequences.

    Science.gov (United States)

    Wandelt, Sebastian; Leser, Ulf

    2013-01-01

In many applications, sets of similar texts or sequences are of high importance. Prominent examples are revision histories of documents or genomic sequences. Modern high-throughput sequencing technologies are able to generate DNA sequences at an ever-increasing rate. In parallel to the decreasing experimental time and cost necessary to produce DNA sequences, computational requirements for analysis and storage of the sequences are steeply increasing. Compression is a key technology to deal with this challenge. Recently, referential compression schemes, storing only the differences between a to-be-compressed input and a known reference sequence, gained a lot of interest in this field. In this paper, we propose a general open-source framework to compress large amounts of biological sequence data called Framework for REferential Sequence COmpression (FRESCO). Our basic compression algorithm is shown to be one to two orders of magnitude faster than comparable related work, while achieving similar compression ratios. We also propose several techniques to further increase compression ratios, while still retaining the advantage in speed: 1) selecting a good reference sequence; and 2) rewriting a reference sequence to allow for better compression. In addition, we propose a new way of further boosting the compression ratios by applying referential compression to already referentially compressed files (second-order compression). This technique allows for compression ratios far beyond the state of the art, for instance, 4,000:1 and higher for human genomes. We evaluate our algorithms on a large data set from three different species (more than 1,000 genomes, more than 3 TB) and on a collection of versions of Wikipedia pages. Our results show that real-time compression of highly similar sequences at high compression ratios is possible on modern hardware.
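
    Referential compression stores only differences against a reference; a toy greedy encoder in this spirit (a simplified illustration built on a k-mer index, not FRESCO's algorithm; the periodic reference string and single-character insertion are contrived test data):

```python
def ref_compress(seq, ref, k=8):
    # Greedy referential encoder: anchor matches on k-mers shared with
    # the reference, extend each match as far as possible, and fall
    # back to single-character literals where the sequences diverge.
    index = {}
    for i in range(len(ref) - k + 1):
        index.setdefault(ref[i:i + k], i)
    ops, i = [], 0
    while i < len(seq):
        pos = index.get(seq[i:i + k])
        if pos is None:
            ops.append(("lit", seq[i]))
            i += 1
        else:
            length = k
            while (i + length < len(seq) and pos + length < len(ref)
                   and seq[i + length] == ref[pos + length]):
                length += 1
            ops.append(("ref", pos, length))
            i += length
    return ops

def ref_decompress(ops, ref):
    out = []
    for op in ops:
        out.append(op[1] if op[0] == "lit" else ref[op[1]:op[1] + op[2]])
    return "".join(out)

ref = "".join(chr(65 + (i * 7) % 26) for i in range(300))
seq = ref[:90] + "X" + ref[90:]   # "sequenced genome": one insertion
ops = ref_compress(seq, ref)
```

    A highly similar sequence collapses to a handful of (position, length) pairs, which is why ratios grow with similarity; reference selection, reference rewriting and second-order compression build on top of this basic idea.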

  18. Subjective evaluation of compressed image quality

    Science.gov (United States)

    Lee, Heesub; Rowberg, Alan H.; Frank, Mark S.; Choi, Hyung-Sik; Kim, Yongmin

    1992-05-01

Lossy data compression generates distortion or error in the reconstructed image, and the distortion becomes visible as the compression ratio increases. Even at the same compression ratio, the distortion appears differently depending on the compression method used. Because of the nonlinearity of the human visual system and of lossy data compression methods, we subjectively evaluated the quality of medical images compressed with two different methods: an intraframe and an interframe coding algorithm. The evaluated raw data were analyzed statistically to measure interrater reliability and the reliability of an individual reader. Also, analysis of variance was used to identify which compression method is statistically better, and the compression ratio from which the quality of a compressed image is evaluated as poorer than that of the original. Nine x-ray CT head images from three patients were used as test cases. Six radiologists participated in reading the 99 images (some were duplicates) compressed at four levels: original, 5:1, 10:1, and 15:1. The six readers agreed more than by chance alone and their agreement was statistically significant, but there were large variations among readers as well as within a reader. The displacement-estimated interframe coding algorithm gives significantly better quality than the 2-D block DCT at significance level 0.05. Also, 10:1 compressed images with the interframe coding algorithm do not show any significant differences from the original at level 0.05.

  19. H.264/AVC Video Compression on Smartphones

    Science.gov (United States)

    Sharabayko, M. P.; Markov, N. G.

    2017-01-01

    In this paper, we studied the usage of H.264/AVC video compression tools by the flagship smartphones. The results show that only a subset of tools is used, meaning that there is still a potential to achieve higher compression efficiency within the H.264/AVC standard, but the most advanced smartphones are already reaching the compression efficiency limit of H.264/AVC.

  20. Relationship between the edgewise compression strength of ...

    African Journals Online (AJOL)

    The results of this study were used to determine the linear regression constants in the Maltenfort model by correlating the measured board edgewise compression strength (ECT) with the predicted strength, using the paper components' compression strengths, measured with the short-span compression test (SCT) and the ...

  1. Magnetic pulse compression circuits for plasma devices

    Energy Technology Data Exchange (ETDEWEB)

    Georgescu, N; Zoita, V; Presura, R [Inst. of Physics and Technology of Radiation Devices, Bucharest (Romania)

    1997-12-31

Two magnetic pulse compression circuits (MPCC), for two different plasma devices, are presented. The first is a 20 J/pulse, 3-stage circuit designed to trigger a low pressure discharge. The circuit has 16-18 kV working voltage, and 200 nF in each stage. The saturable inductors are realized with toroidal 25 µm strip-wound cores, made of a Fe-Ni alloy, with 1.5 T saturation induction. The total magnetic volume is around 290 cm³. By using a 25 kV/1 A thyratron as a primary switch, the time compression is from 3.5 µs to 450 ns, in a short-circuit load. The second magnetic pulser is a 200 J/pulse circuit, designed to drive a high average power plasma focus soft X-ray source, for X-ray microlithography as the main application. The 3-stage pulser should supply a maximum load current of 100 kA with a rise-time of 250 - 300 ns. The maximum pulse voltage applied on the plasma discharge chamber is around 20 - 25 kV. The three saturable inductors in the circuit are made of toroidal strip-wound cores with METGLAS 2605 CO amorphous alloy as the magnetic material. The total, optimized mass of the magnetic material is 34 kg. The maximum repetition rate is limited at 100 Hz by the thyratron used in the first stage of the circuit, the driver supplying to the load about 20 kW average power. (author). 1 tab., 3 figs., 3 refs.

  2. Multichannel compressive sensing MRI using noiselet encoding.

    Directory of Open Access Journals (Sweden)

    Kamlesh Pawar

Full Text Available The incoherence between measurement and sparsifying transform matrices and the restricted isometry property (RIP) of the measurement matrix are two of the key factors in determining the performance of compressive sensing (CS). In CS-MRI, the randomly under-sampled Fourier matrix is used as the measurement matrix and the wavelet transform is usually used as the sparsifying transform matrix. However, the incoherence between the randomly under-sampled Fourier matrix and the wavelet matrix is not optimal, which can deteriorate the performance of CS-MRI. Using the mathematical result that noiselets are maximally incoherent with wavelets, this paper introduces the noiselet unitary bases as the measurement matrix to improve the incoherence and RIP in CS-MRI. Based on an empirical RIP analysis that compares the multichannel noiselet and multichannel Fourier measurement matrices in CS-MRI, we propose a multichannel compressive sensing (MCS) framework to take advantage of the multichannel data acquisition used in MRI scanners. Simulations are presented in the MCS framework to compare the performance of noiselet encoding reconstructions and Fourier encoding reconstructions at different acceleration factors. The comparisons indicate that the multichannel noiselet measurement matrix has better RIP than its Fourier counterpart, and that noiselet encoded MCS-MRI outperforms Fourier encoded MCS-MRI in preserving image resolution and can achieve higher acceleration factors. To demonstrate the feasibility of the proposed noiselet encoding scheme, a pulse sequence with tailored spatially selective RF excitation pulses was designed and implemented on a 3T scanner to acquire the data in the noiselet domain from a phantom and a human brain. The results indicate that noiselet encoding preserves image resolution better than Fourier encoding.

  3. Magnetic Compression Experiment at General Fusion with Simulation Results

    Science.gov (United States)

    Dunlea, Carl; Khalzov, Ivan; Hirose, Akira; Xiao, Chijin; Fusion Team, General

    2017-10-01

    The magnetic compression experiment at GF was a repetitive non-destructive test to study plasma physics applicable to Magnetic Target Fusion compression. A spheromak compact torus (CT) is formed with a co-axial gun into a containment region with an hour-glass shaped inner flux conserver, and an insulating outer wall. External coil currents keep the CT off the outer wall (levitation) and then rapidly compress it inwards. The optimal external coil configuration greatly improved both the levitated CT lifetime and the rate of shots with good compressional flux conservation. As confirmed by spectrometer data, the improved levitation field profile reduced plasma impurity levels by suppressing the interaction between plasma and the insulating outer wall during the formation process. We developed an energy and toroidal flux conserving finite element axisymmetric MHD code to study CT formation and compression. The Braginskii MHD equations with anisotropic heat conduction were implemented. To simulate plasma / insulating wall interaction, we couple the vacuum field solution in the insulating region to the full MHD solution in the remainder of the domain. We see good agreement between simulation and experiment results. Partly funded by NSERC and MITACS Accelerate.

  4. Relationship between the Compressive and Tensile Strength of Recycled Concrete

    International Nuclear Information System (INIS)

    El Dalati, R.; Haddad, S.; Matar, P.; Chehade, F.H

    2011-01-01

Concrete recycling consists of crushing the concrete obtained by demolishing old constructions and using the resulting small pieces as aggregates in new concrete compositions. The resulting aggregates are called recycled aggregates, and a new concrete mix containing a percentage of recycled aggregates is called recycled concrete. Our previous research indicated the optimal percentages of recycled aggregates to be used for different cases of recycled concrete, related to the nature of the original aggregates. All results have shown that the concrete compressive strength is significantly reduced when using recycled aggregates. In order to obtain realistic values of compressive strength, some tests have been carried out by adding a water-reducing plasticizer and a specified additional quantity of cement. The results have shown that for a limited range of plasticizer percentage, and a fixed value of additional cement, the compressive strength reaches a reasonable value. This paper treats the effect of using recycled aggregates on the tensile strength of concrete, where the concrete results from the special composition defined in our previous work. The aim is to determine the relationship between the compressive and tensile strength of recycled concrete. (author)

  5. Informational analysis for compressive sampling in radar imaging.

    Science.gov (United States)

    Zhang, Jingxiong; Yang, Ke

    2015-03-24

Compressive sampling or compressed sensing (CS) works on the assumption of the sparsity or compressibility of the underlying signal and relies on the trans-informational capability of the measurement matrix employed and the resultant measurements. It operates with optimization-based algorithms for signal reconstruction and is thus able to complete data compression while acquiring data, leading to sub-Nyquist sampling strategies that promote efficiency in data acquisition while ensuring certain accuracy criteria. Information theory provides a framework complementary to classic CS theory for analyzing information mechanisms and for determining the necessary number of measurements in a CS environment, such as CS-radar, a radar sensor conceptualized or designed with CS principles and techniques. Despite increasing awareness of information-theoretic perspectives on CS-radar, reported research has been rare. This paper seeks to bridge the gap in the interdisciplinary area of CS, radar and information theory by analyzing information flows in CS-radar from sparse scenes to measurements and determining sub-Nyquist sampling rates necessary for scene reconstruction within certain distortion thresholds, given differing scene sparsity and average per-sample signal-to-noise ratios (SNRs). Simulated studies were performed to complement and validate the information-theoretic analysis. The combined strategy proposed in this paper is valuable for information-theoretic oriented CS-radar system analysis and performance evaluation.

  6. Using autoencoders for mammogram compression.

    Science.gov (United States)

    Tan, Chun Chet; Eswaran, Chikkannan

    2011-02-01

This paper presents the results obtained for medical image compression using autoencoder neural networks. Since mammograms (medical images) are usually large, training autoencoders becomes extremely tedious and difficult if the whole image is used for training. We show in this paper that autoencoders can be trained successfully by using image patches instead of the whole image. The compression performances of different types of autoencoders are compared based on two parameters, namely mean square error and structural similarity index. The experimental results show that the autoencoder which does not use Restricted Boltzmann Machine pre-training yields better results than those which use this pre-training method.
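
    Patch-based training as described can be illustrated with a tiny one-hidden-layer autoencoder in plain NumPy (a didactic sketch: the random patch data stand in for mammogram patches, and the 64→16→64 architecture, learning rate and epoch count are arbitrary assumptions, not the paper's networks):

```python
import numpy as np

rng = np.random.default_rng(0)
patches = rng.random((256, 64))   # stand-ins for 256 flattened 8x8 image patches

# One-hidden-layer autoencoder, 64 -> 16 -> 64, trained by full-batch
# gradient descent on mean squared error (no pre-training).
W1 = rng.standard_normal((64, 16)) * 0.1
W2 = rng.standard_normal((16, 64)) * 0.1
lr, losses = 0.01, []
for epoch in range(200):
    code = np.tanh(patches @ W1)               # 16-dim compressed code per patch
    recon = code @ W2                          # linear reconstruction
    err = recon - patches
    losses.append(float(np.mean(err ** 2)))
    grad_W2 = code.T @ err / len(patches)
    grad_code = err @ W2.T * (1 - code ** 2)   # backprop through tanh
    grad_W1 = patches.T @ grad_code / len(patches)
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1
```

    The 16-dimensional code is the compressed representation; reconstruction error per patch corresponds to the mean-square-error criterion used in the comparison.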

  7. Culture: copying, compression, and conventionality.

    Science.gov (United States)

    Tamariz, Mónica; Kirby, Simon

    2015-01-01

    Through cultural transmission, repeated learning by new individuals transforms cultural information, which tends to become increasingly compressible (Kirby, Cornish, & Smith, ; Smith, Tamariz, & Kirby, ). Existing diffusion chain studies include in their design two processes that could be responsible for this tendency: learning (storing patterns in memory) and reproducing (producing the patterns again). This paper manipulates the presence of learning in a simple iterated drawing design experiment. We find that learning seems to be the causal factor behind the increase in compressibility observed in the transmitted information, while reproducing is a source of random heritable innovations. Only a theory invoking these two aspects of cultural learning will be able to explain human culture's fundamental balance between stability and innovation. Copyright © 2014 Cognitive Science Society, Inc.

  8. Instability of ties in compression

    DEFF Research Database (Denmark)

    Buch-Hansen, Thomas Cornelius

    2013-01-01

Masonry cavity walls are loaded by wind pressure and vertical load from upper floors. These loads result in bending moments and compression forces in the ties connecting the outer and the inner wall in a cavity wall. Large cavity walls are furthermore loaded by differential movements from...... the temperature gradient between the outer and the inner wall, which results in a critical increase of the bending moments in the ties. Since the ties are loaded by combined compression and moment forces, the load-bearing capacity is derived from instability equilibrium equations. Most of them are iterative, since...... exact instability solutions are complex to derive, not to mention the extra complexity of introducing dimensional instability from the temperature gradients. Using an inverse variable substitution and comparing an exact theory with an analytical instability solution, a method to design tie

  9. Diagnostic imaging of compression neuropathy

    International Nuclear Information System (INIS)

    Weishaupt, D.; Andreisek, G.

    2007-01-01

    Compression-induced neuropathy of peripheral nerves can cause severe pain of the foot and ankle. Early diagnosis is important to institute prompt treatment and to minimize potential injury. Although clinical examination combined with electrophysiological studies remain the cornerstone of the diagnostic work-up, in certain cases, imaging may provide key information with regard to the exact anatomic location of the lesion or aid in narrowing the differential diagnosis. In other patients with peripheral neuropathies of the foot and ankle, imaging may establish the etiology of the condition and provide information crucial for management and/or surgical planning. MR imaging and ultrasound provide direct visualization of the nerve and surrounding abnormalities. Bony abnormalities contributing to nerve compression are best assessed by radiographs and CT. Knowledge of the anatomy, the etiology, typical clinical findings, and imaging features of peripheral neuropathies affecting the peripheral nerves of the foot and ankle will allow for a more confident diagnosis. (orig.) [de

  10. Particle Engineering of Excipients for Direct Compression: Understanding the Role of Material Properties.

    Science.gov (United States)

    Mangal, Sharad; Meiser, Felix; Morton, David; Larson, Ian

    2015-01-01

    Tablets represent the preferred and most commonly dispensed pharmaceutical dosage form for administering active pharmaceutical ingredients (APIs). Minimizing the cost of goods and improving manufacturing output efficiency has motivated companies to use direct compression as a preferred method of tablet manufacturing. Excipients dictate the success of direct compression, notably by optimizing powder formulation compactability and flow, thus there has been a surge in creating excipients specifically designed to meet these needs for direct compression. Greater scientific understanding of tablet manufacturing coupled with effective application of the principles of material science and particle engineering has resulted in a number of improved direct compression excipients. Despite this, significant practical disadvantages of direct compression remain relative to granulation, and this is partly due to the limitations of direct compression excipients. For instance, in formulating high-dose APIs, a much higher level of excipient is required relative to wet or dry granulation and so tablets are much bigger. Creating excipients to enable direct compression of high-dose APIs requires the knowledge of the relationship between fundamental material properties and excipient functionalities. In this paper, we review the current understanding of the relationship between fundamental material properties and excipient functionality for direct compression.

  11. A Proxy Architecture to Enhance the Performance of WAP 2.0 by Data Compression

    Directory of Open Access Journals (Sweden)

    Yin Zhanping

    2005-01-01

    Full Text Available This paper presents a novel proxy architecture for wireless application protocol (WAP) 2.0 employing an advanced data compression scheme. Though optional in WAP 2.0, a proxy can isolate the wireless from the wired domain to prevent error propagation and to eliminate wireless session delays (WSD) by enabling long-lived connections between the proxy and wireless terminals. The proposed data compression scheme combines content compression with robust header compression (ROHC), which minimizes the air-interface traffic and thus significantly reduces the wireless access time. By using content compression at the transport layer, it also enables TLS tunneling, which overcomes the end-to-end security problem in WAP 1.x. Performance evaluations show that while WAP 1.x is optimized for narrowband wireless channels, WAP 2.0 utilizing TCP/IP outperforms WAP 1.x over wideband wireless channels even without compression. The proposed data compression scheme reduces the wireless access time of WAP 2.0 by over 45% in CDMA2000 1XRTT channels, and in low-speed IS-95 channels it substantially reduces access time to give performance comparable to WAP 1.x. The performance enhancement is contributed mainly by the reply content compression, with ROHC offering further enhancements.
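The transport-layer content compression described above can be approximated with a general-purpose compressor. A minimal sketch, using Python's gzip as a stand-in for the paper's scheme; the sample reply is invented:

```python
import gzip

# Illustrative XHTML-MP-style reply such as a WAP 2.0 proxy might relay;
# repetitive markup is exactly what content compression exploits.
reply = (b"<html><head><title>news</title></head><body>"
         + b"<p>headline item</p>" * 50
         + b"</body></html>")

compressed = gzip.compress(reply)
print(len(reply), len(compressed))  # the compressed reply is far smaller
```

Fewer bytes over the air interface translate directly into shorter wireless access time on narrowband links, which is the effect the paper quantifies.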

  13. Modification Design of Petrol Engine for Alternative Fueling using Compressed Natural Gas

    Directory of Open Access Journals (Sweden)

    Eliezer Uchechukwu Okeke

    2013-04-01

    Full Text Available This paper is on the modification design of a petrol engine for alternative fuelling using compressed natural gas (CNG). It provides an analytical background to the modification design process. A petrol engine, the Honda CR-V 2.0 auto, which has a compression ratio of 9.8, was selected as a case study. In order for this petrol engine to run on CNG, its compression ratio had to be increased. An optimal compression ratio of 11.97 was computed using the standard temperature-specific volume relationship for an isentropic compression process. This computation of the compression ratio is based on an inlet air temperature of 30°C (representative of tropical ambient conditions) and a pre-combustion temperature of 540°C (corresponding to the auto-ignition temperature of CNG). Using this value of the compression ratio, a dimensional modification quantity of 1.803 mm was obtained using simple geometric relationships. This value of 1.803 mm is needed to increase the length of the connecting rod, increase the compression height of the piston, or reduce the sealing plate's thickness. After the modification process, a CNG engine with an air-standard efficiency of 62.7% (a 4.67% increase over the petrol engine), capable of a maximum power of 83.6 kW at 6500 rpm, was obtained.
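The quoted numbers can be re-derived from the isentropic relation T2/T1 = r^(γ-1) and the air-standard Otto efficiency η = 1 - r^(1-γ). A hedged check, assuming γ = 1.4 for air (the paper's exact 11.97 presumably reflects slightly different property values):

```python
gamma = 1.4                 # ratio of specific heats for air (assumed)
T1 = 30 + 273.15            # inlet air temperature, K
T2 = 540 + 273.15           # CNG auto-ignition temperature, K

r = (T2 / T1) ** (1.0 / (gamma - 1.0))   # from T2/T1 = r**(gamma - 1)
eta = 1.0 - r ** (1.0 - gamma)           # air-standard Otto efficiency

print(round(r, 2), round(100 * eta, 1))  # about 11.8 and 62.7
```

With γ = 1.4 this gives r ≈ 11.8 and η ≈ 62.7%, consistent with the efficiency the abstract reports.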

  14. Compressed air energy storage system

    Science.gov (United States)

    Ahrens, Frederick W.; Kartsounes, George T.

    1981-01-01

    An internal combustion reciprocating engine is operable as a compressor during slack demand periods utilizing excess power from a power grid to charge air into an air storage reservoir and as an expander during peak demand periods to feed power into the power grid utilizing air obtained from the air storage reservoir together with combustible fuel. Preferably the internal combustion reciprocating engine is operated at high pressure and a low pressure turbine and compressor are also employed for air compression and power generation.

  15. Compressing spatio-temporal trajectories

    DEFF Research Database (Denmark)

    Gudmundsson, Joachim; Katajainen, Jyrki; Merrick, Damian

    2009-01-01

such that the most common spatio-temporal queries can still be answered approximately after the compression has taken place. In the process, we develop an implementation of the Douglas–Peucker path-simplification algorithm which works efficiently even in the case where the polygonal path given as input is allowed...... to self-intersect. For a polygonal path of size n, the processing time is O(n log^k n) for k=2 or k=3, depending on the type of simplification....
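For reference, the classical Douglas–Peucker recursion the paper builds on can be sketched as follows; the tolerance and sample path are invented, and this plain form lacks the self-intersection handling that is the paper's contribution:

```python
# Recursive Douglas-Peucker: keep the point farthest from the chord if it
# exceeds eps, then simplify the two halves independently.
def douglas_peucker(points, eps):
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    dx, dy = x2 - x1, y2 - y1
    norm = (dx * dx + dy * dy) ** 0.5 or 1.0
    dmax, idx = 0.0, 0
    for i in range(1, len(points) - 1):
        px, py = points[i]
        d = abs(dy * (px - x1) - dx * (py - y1)) / norm  # distance to chord
        if d > dmax:
            dmax, idx = d, i
    if dmax <= eps:
        return [points[0], points[-1]]
    left = douglas_peucker(points[: idx + 1], eps)
    right = douglas_peucker(points[idx:], eps)
    return left[:-1] + right  # drop duplicated split point

path = [(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7), (6, 8.1), (7, 9)]
print(douglas_peucker(path, 1.0))
```

The simplified path keeps the endpoints and only the interior vertices that deviate from the local chord by more than the tolerance.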

  16. [Compression treatment for burned skin].

    Science.gov (United States)

    Jaafar, Fadhel; Lassoued, Mohamed A; Sahnoun, Mahdi; Sfar, Souad; Cheikhrouhou, Morched

    2012-02-01

    The regularity of a compressive knit is defined as its ability to perform its function on burnt skin. This property is essential to avoid rejection of the material or toxicity problems. Aim: to make knits biocompatible with severely burnt human skin. We fabricated knits of elastic material. To ensure good adhesion to the skin, the elastic material was knitted with a tight loop. The length of yarn absorbed per stitch and the raw material were changed with each sample. The physical properties of each sample were measured and compared. Surface modifications were made to these samples by impregnation with microcapsules based on jojoba oil. The knits are compressive, elastic in all directions, light, thin, comfortable, and washable for hygiene reasons; after washing they recover their compressive properties. The jojoba oil microcapsules hydrate the burnt human skin. This moisturizer contributes to the firmness of the wound and gives flexibility to the skin. The compressive knits are biocompatible with burnt skin. The mixture of natural and synthetic fibers is irreplaceable in terms of comfort and regularity.

  17. Compressibility effects on turbulent mixing

    Science.gov (United States)

    Panickacheril John, John; Donzis, Diego

    2016-11-01

    We investigate the effect of compressibility on passive scalar mixing in isotropic turbulence, with a focus on the fundamental mechanisms responsible for such effects, using a large Direct Numerical Simulation (DNS) database. The database includes simulations with Taylor Reynolds number (Rλ) up to 100, turbulent Mach number (Mt) between 0.1 and 0.6, and Schmidt number (Sc) from 0.5 to 1.0. We present several measures of mixing efficiency on different canonical flows to robustly identify compressibility effects. We found that, as in shear layers, mixing is reduced as Mach number increases. However, the data also reveal a non-monotonic trend with Mt. To assess directly the effect of dilatational motions we also present results with both dilatational and solenoidal forcing. Analysis suggests that a small fraction of dilatational forcing decreases mixing time at higher Mt. Scalar spectra collapse when normalized by Batchelor variables, which suggests that a compressive mechanism similar to Batchelor mixing in incompressible flows might be responsible for better mixing at high Mt and with dilatational forcing compared to pure solenoidal mixing. We also present results on scalar budgets, in particular on production and dissipation. Support from NSF is gratefully acknowledged.

  18. Image compression of bone images

    International Nuclear Information System (INIS)

    Hayrapetian, A.; Kangarloo, H.; Chan, K.K.; Ho, B.; Huang, H.K.

    1989-01-01

    This paper reports a receiver operating characteristic (ROC) experiment conducted to compare the diagnostic performance of a compressed bone image with the original. The compression was done on custom hardware that implements an algorithm based on full-frame cosine transform. The compression ratio in this study is approximately 10:1, which was decided after a pilot experiment. The image set consisted of 45 hand images, including normal images and images containing osteomalacia and osteitis fibrosa. Each image was digitized with a laser film scanner to 2,048 x 2,048 x 8 bits. Six observers, all board-certified radiologists, participated in the experiment. For each ROC session, an independent ROC curve was constructed and the area under that curve calculated. The image set was randomized for each session, as was the order for viewing the original and reconstructed images. Analysis of variance was used to analyze the data and derive statistically significant results. The preliminary results indicate that the diagnostic quality of the reconstructed image is comparable to that of the original image

  19. Compressing DNA sequence databases with coil

    Directory of Open Access Journals (Sweden)

    Hendy Michael D

    2008-05-01

    Full Text Available Abstract Background Publicly available DNA sequence databases such as GenBank are large, and are growing at an exponential rate. The sheer volume of data being dealt with presents serious storage and data communications problems. Currently, sequence data is usually kept in large "flat files," which are then compressed using standard Lempel-Ziv (gzip) compression – an approach which rarely achieves good compression ratios. While much research has been done on compressing individual DNA sequences, surprisingly little has focused on the compression of entire databases of such sequences. In this study we introduce the sequence database compression software coil. Results We have designed and implemented a portable software package, coil, for compressing and decompressing DNA sequence databases based on the idea of edit-tree coding. coil is geared towards achieving high compression ratios at the expense of execution time and memory usage during compression – the compression time represents a "one-off investment" whose cost is quickly amortised if the resulting compressed file is transmitted many times. Decompression requires little memory and is extremely fast. We demonstrate a 5% improvement in compression ratio over state-of-the-art general-purpose compression tools for a large GenBank database file containing Expressed Sequence Tag (EST) data. Finally, coil can efficiently encode incremental additions to a sequence database. Conclusion coil presents a compelling alternative to conventional compression of flat files for the storage and distribution of DNA sequence databases having a narrow distribution of sequence lengths, such as EST data. Increasing compression levels for databases having a wide distribution of sequence lengths is a direction for future work.
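The point that general-purpose Lempel-Ziv tools "rarely achieve good compression ratios" on DNA is easy to illustrate: a 4-letter alphabet already fits in 2 bits per base, a floor that a naive bit-packer reaches exactly while gzip typically does not. A sketch with synthetic data (this is not coil's edit-tree coding):

```python
import gzip
import random

random.seed(7)
seq = "".join(random.choice("ACGT") for _ in range(20000)).encode()

gz = gzip.compress(seq, 9)

# Naive 2-bit packing: the information-theoretic floor for uniform ACGT.
code = {65: 0, 67: 1, 71: 2, 84: 3}  # byte values of A, C, G, T
packed = bytearray()
for i in range(0, len(seq), 4):
    b = 0
    for base in seq[i:i + 4]:
        b = (b << 2) | code[base]
    packed.append(b)

print(len(seq), len(gz), len(packed))
```

On random sequence, gzip lands somewhat above the 2-bit floor because its match encoding is no cheaper than literals; specialized DNA compressors exploit biological redundancy the generic tool cannot see.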

  20. A review on the recent development of solar absorption and vapour compression based hybrid air conditioning with low temperature storage

    Directory of Open Access Journals (Sweden)

    Noor D. N.

    2016-01-01

    Full Text Available Conventional air conditioners, or vapour compression systems, are main contributors to energy consumption in modern buildings. There are common environmental issues emanating from vapour compression systems, such as greenhouse gas emissions and heat wastage. These problems can be reduced by adding solar energy components to the vapour compression system. However, the intermittent input of daily solar radiation is the main issue with solar energy systems. This paper presents recent studies on hybrid air conditioning systems. In addition, the basic vapour compression system and the components involved in solar air conditioning systems are discussed. The introduction of low-temperature storage can be an attractive and economically improved solution, enabling different modes of operating strategies. Yet very few studies have examined optimal operating strategies for the hybrid system. Finally, the findings of this review will help suggest optimization of solar absorption and vapour compression based hybrid air conditioning systems for future work, considering both economic and environmental factors.

  1. CPAC: Energy-Efficient Data Collection through Adaptive Selection of Compression Algorithms for Sensor Networks

    Science.gov (United States)

    Lee, HyungJune; Kim, HyunSeok; Chang, Ik Joon

    2014-01-01

    We propose a technique to optimize the energy efficiency of data collection in sensor networks by exploiting selective data compression. To achieve this aim, we need to make optimal decisions regarding two aspects: (1) which sensor nodes should execute compression; and (2) which compression algorithm should be used by the selected sensor nodes. We formulate this problem as binary integer programs, which provide an energy-optimal solution under the given latency constraint. Our simulation results show that the optimization algorithm significantly reduces the overall network-wide energy consumption for data collection. In an environment where a stationary sink collects data from stationary sensor nodes, the optimized data collection shows 47% energy savings compared to the state-of-the-art collection protocol (CTP). More importantly, we demonstrate that our optimized data collection provides the best performance in an intermittent network under high interference. In such networks, we found that the selective compression for frequent packet retransmissions saves up to 55% energy compared to the best known protocol. PMID:24721763
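The per-node decision in (1) can be made concrete with a toy brute-force version of the binary program: each node either sends raw data or compresses first, and the objective is minimum total energy under a latency budget. All numbers are invented for illustration; the paper solves the real instance as a binary integer program:

```python
from itertools import product

# (tx energy if raw, tx energy if compressed + CPU cost, added latency if compressed)
nodes = [
    (10.0, 6.5, 2.0),
    (8.0, 7.0, 1.5),
    (12.0, 7.5, 2.5),
]
LATENCY_BUDGET = 4.0

best = None
for choice in product((0, 1), repeat=len(nodes)):  # 1 = node compresses
    energy = sum(comp if c else raw for (raw, comp, _), c in zip(nodes, choice))
    latency = sum(lat for (_, _, lat), c in zip(nodes, choice) if c)
    if latency <= LATENCY_BUDGET and (best is None or energy < best[0]):
        best = (energy, choice)

print(best)  # minimum-energy feasible assignment
```

Here the optimum compresses at nodes 2 and 3 only; compressing everywhere would violate the latency budget, which is exactly the trade-off the formulation captures.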

  2. Applications of wavelet-based compression to multidimensional earth science data

    Energy Technology Data Exchange (ETDEWEB)

    Bradley, J.N.; Brislawn, C.M.

    1993-01-01

    A data compression algorithm involving vector quantization (VQ) and the discrete wavelet transform (DWT) is applied to two different types of multidimensional digital earth-science data. The algorithm (WVQ) is optimized for each particular application through an optimization procedure that assigns VQ parameters to the wavelet transform subbands subject to constraints on compression ratio and encoding complexity. Preliminary results of compressing global ocean model data generated on a Thinking Machines CM-200 supercomputer are presented. The WVQ scheme is used in both a predictive and a nonpredictive mode. Parameters generated by the optimization algorithm are reported, as are signal-to-noise ratio (SNR) measurements of actual quantized data. The problem of extrapolating hydrodynamic variables across the continental landmasses in order to compute the DWT on a rectangular grid is discussed. Results are also presented for compressing Landsat TM 7-band data using the WVQ scheme. The formulation of the optimization problem is presented along with SNR measurements of actual quantized data. Postprocessing applications are considered in which the seven spectral bands are clustered into 256 clusters using a k-means algorithm and analyzed using the Los Alamos multispectral data analysis program, SPECTRUM, both before and after being compressed using the WVQ program.
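The SNR figure of merit reported here is simply signal power over quantization-error power, in decibels. A self-contained toy, with a scalar uniform quantizer standing in for the actual vector quantization of wavelet subbands:

```python
import math

# 8-bit uniform quantization of a full-scale sine; SNR = 10*log10(P_signal/P_error).
signal = [math.sin(0.01 * i) for i in range(10000)]
step = 2.0 / 256                                  # quantizer step over [-1, 1]
quantized = [round(x / step) * step for x in signal]

num = sum(x * x for x in signal)                  # signal energy
err = sum((x - q) ** 2 for x, q in zip(signal, quantized))  # error energy
snr_db = 10 * math.log10(num / err)
print(round(snr_db, 1))
```

For an 8-bit quantizer this lands near the textbook 6 dB-per-bit figure (~50 dB); the WVQ optimizer's job is to spend such bits unevenly across subbands under the rate constraint.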

  4. Envera Variable Compression Ratio Engine

    Energy Technology Data Exchange (ETDEWEB)

    Charles Mendler

    2011-03-15

    Aggressive engine downsizing, variable compression ratio and use of the Atkinson cycle are being combined to improve fuel economy by up to 40 percent relative to port fuel injected gasoline engines, while maintaining full engine power. Approach: Engine downsizing is viewed by US and foreign automobile manufacturers as one of the best options for improving fuel economy. While this strategy has already demonstrated a degree of success, downsizing and fuel economy gains are currently limited. With new variable compression ratio technology, however, the degree of engine downsizing and fuel economy improvement can be greatly increased. A small variable compression ratio (VCR) engine has the potential to return significantly higher vehicle fuel economy while also providing high power. Affordability and potential for near-term commercialization are key attributes of the Envera VCR engine. VCR Technology: To meet torque and power requirements, a smaller engine needs to do more work per stroke. This is typically accomplished by boosting the incoming charge with either a turbo or supercharger so that more energy is present in the cylinder per stroke to do the work. With current production engines the degree of engine boosting (which correlates to downsizing) is limited by detonation (combustion knock) at high boost levels. Additionally, the turbo or supercharger needs to be responsive and efficient while providing the needed boost. VCR technology eliminates the limitation of engine knock at high load levels by reducing the compression ratio to approximately 9:1 (or whatever level is appropriate) when high boost pressures are needed. By reducing the compression ratio during high load demand periods there is increased volume in the cylinder at top dead center (TDC), which allows more charge (or energy) to be present in the cylinder without increasing the peak pressure. Cylinder pressure is thus kept below the level at which the engine would begin to knock. When loads on the engine are low

  5. JPEG and wavelet compression of ophthalmic images

    Science.gov (United States)

    Eikelboom, Robert H.; Yogesan, Kanagasingam; Constable, Ian J.; Barry, Christopher J.

    1999-05-01

    This study was designed to determine the degree and method of digital image compression that produces ophthalmic images of sufficient quality for transmission and diagnosis. The photographs of 15 subjects, which included eyes with normal, subtle and distinct pathologies, were digitized to produce 1.54 MB images and compressed to five different levels. Image quality was assessed in three ways: (i) objectively, by calculating the RMS error between the uncompressed and compressed images; (ii) semi-subjectively, by assessing the visibility of blood vessels; and (iii) subjectively, by asking a number of experienced observers to assess the images for quality and clinical interpretation. Results showed that, as a function of compressed image size, wavelet-compressed images produced less RMS error than JPEG-compressed images. Blood vessel branching could be observed to a greater extent after wavelet compression than after JPEG compression; for a given image size, wavelet compression produced better images than JPEG compression. Overall, it was shown that images had to be compressed to below 2.5 percent for JPEG and 1.7 percent for wavelet compression before fine detail was lost, or before image quality was too poor to make a reliable diagnosis.

  6. Curvelet-based compressive sensing for InSAR raw data

    Science.gov (United States)

    Costa, Marcello G.; da Silva Pinho, Marcelo; Fernandes, David

    2015-10-01

    The aim of this work is to evaluate the compression performance of SAR raw data for interferometry applications, collected by airborne platforms from BRADAR (the Brazilian SAR system operating in X and P bands), using a new approach based on compressive sensing (CS) to achieve effective recovery with good phase preservation. For this framework a real-time capability is desirable, where the collected data can be compressed to reduce onboard storage and the bandwidth required for transmission. In CS theory, sparse unknown signals can be recovered from a small number of random or pseudo-random measurements by sparsity-promoting nonlinear recovery algorithms; the original signal representation can therefore be significantly reduced. To achieve a sparse representation of the SAR signal, a curvelet transform was applied. The curvelets constitute a directional frame, which allows an optimal sparse representation of objects with discontinuities along smooth curves, as observed in raw data, and provides advanced denoising optimization. For the tests, a scene of 8192 x 2048 samples in range and azimuth was available in X-band with 2 m resolution. The sparse representation was compressed using low-dimension measurement matrices in each curvelet subband. An iterative CS reconstruction method based on IST (iterative soft/shrinkage thresholding) was then adjusted to recover the curvelet coefficients and hence the original signal. To evaluate the compression performance, the compression ratio (CR) and signal-to-noise ratio (SNR) were computed; and because interferometry applications require higher reconstruction accuracy, phase parameters such as the standard deviation of the phase (PSD) and the mean phase error (MPE) were also computed. Moreover, in the image domain, a single-look complex image was generated to evaluate the compression effects. All results were computed in terms of sparsity analysis to provide efficient compression and quality of recovery appropriate for InSAR applications.
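The IST recovery mentioned above alternates a gradient step on the data fit with soft-thresholding. A generic sketch on a synthetic sparse vector, with a random Gaussian matrix standing in for the per-subband measurement matrices (NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 200, 80, 5                            # signal size, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)    # random measurement matrix
y = A @ x_true                                  # compressed measurements

step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1 / Lipschitz constant
lam = 0.01                                      # sparsity weight
x = np.zeros(n)
for _ in range(500):
    r = x + step * A.T @ (y - A @ x)            # gradient step on ||y - Ax||^2
    x = np.sign(r) * np.maximum(np.abs(r) - step * lam, 0.0)  # soft threshold

print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))    # relative error
```

With many more measurements than nonzeros the iteration recovers the sparse vector to within the small bias introduced by the threshold weight.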

  7. Approximate equiangular tight frames for compressed sensing and CDMA applications

    Science.gov (United States)

    Tsiligianni, Evaggelia; Kondi, Lisimachos P.; Katsaggelos, Aggelos K.

    2017-12-01

    Performance guarantees for recovery algorithms employed in sparse representations and compressed sensing highlight the importance of incoherence. Optimal bounds of incoherence are attained by equiangular unit norm tight frames (ETFs). Although ETFs are important in many applications, they do not exist for all dimensions, and their construction has been proven extremely difficult. In this paper, we construct frames that are close to ETFs. According to results from frame and graph theory, the existence of an ETF depends on the existence of its signature matrix, that is, a symmetric matrix with certain structure and a spectrum consisting of two distinct eigenvalues. We view the construction of a signature matrix as an inverse eigenvalue problem and propose a method that produces frames of any dimensions that are close to ETFs. Due to the achieved equiangularity property, the frames so obtained can be employed as spreading sequences in synchronous code-division multiple access (s-CDMA) systems, besides compressed sensing.
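The incoherence optimum referred to here is the Welch bound, μ ≥ sqrt((n-m)/(m(n-1))), attained exactly by ETFs. A quick check that a random unit-norm frame sits above it (NumPy assumed; the dimensions are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 16, 40                       # ambient dimension, number of frame vectors
F = rng.standard_normal((m, n))
F /= np.linalg.norm(F, axis=0)      # unit-norm columns

G = np.abs(F.T @ F)                 # magnitudes of pairwise inner products
np.fill_diagonal(G, 0.0)
coherence = G.max()                 # mutual coherence of the frame

welch = np.sqrt((n - m) / (m * (n - 1)))  # lower bound for any unit-norm frame
print(round(coherence, 3), round(welch, 3))
```

The gap between the random frame's coherence and the Welch bound is exactly what the paper's near-ETF constructions aim to close.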

  8. Investigations on response time of magnetorheological elastomer under compression mode

    Science.gov (United States)

    Zhu, Mi; Yu, Miao; Qi, Song; Fu, Jie

    2018-05-01

    For efficient fast control of a vibration system with a magnetorheological elastomer (MRE)-based smart device, the response time of the MRE material is the key parameter which directly affects the control performance of the vibration system. For a step coil current excitation, this paper proposes a Maxwell behavior model with time constant λ to describe the normal force response of MRE, and the response time of MRE was extracted through separation of the coil response time. Besides, the transient responses of MRE under compression mode were experimentally investigated, and the effects of (i) applied current, (ii) particle distribution and (iii) compressive strain on the response time of MRE were addressed. The results revealed that the three factors can affect the response characteristic of MRE quite significantly. Besides its intrinsic importance for response evaluation and the effective design of MRE devices, this study may contribute to the optimal design of controllers for MRE control systems.
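A Maxwell-type model with time constant λ gives a first-order step response, so the response time can be read off at the usual 63.2% criterion t = λ. A sketch with invented values (the paper's λ comes from fitting the measured normal force after separating the coil's own response time):

```python
import math

lam = 0.08           # s, assumed time constant of the MRE normal-force response
F_ss = 50.0          # N, assumed steady-state normal force

def force(t):
    """First-order step response F(t) = F_ss * (1 - exp(-t/lam))."""
    return F_ss * (1.0 - math.exp(-t / lam))

# At t = lam the response has reached 1 - 1/e of its final value.
print(round(force(lam) / F_ss, 3))
```

Fitting this form to the measured step response, after subtracting the coil's electrical lag, yields λ and hence the material response time.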

  9. The Diagonal Compression Field Method using Circular Fans

    DEFF Research Database (Denmark)

    Hansen, Thomas

    2005-01-01

    This paper presents a new design method, which is a modification of the diagonal compression field method, the modification consisting of the introduction of circular fan stress fields. The traditional method does not allow changes of the concrete compression direction throughout a given beam...... if equilibrium is strictly required. This is conservative, since it is not possible fully to utilize the concrete strength in regions with low shear stresses. The larger inclination (the smaller -value) of the uniaxial concrete stress the more transverse shear reinforcement is needed; hence it would be optimal...... if the -value for a given beam could be set to a low value in regions with high shear stresses and thereafter increased in regions with low shear stresses. Thus the shear reinforcement would be reduced and the concrete strength would be utilized in a better way. In the paper it is shown how circular fan stress...

  10. Grid-free compressive beamforming

    DEFF Research Database (Denmark)

    Xenaki, Angeliki; Gerstoft, Peter

    2015-01-01

    sparsity on a continuous optimization variable. The DOA estimation problem with infinitely many unknowns, i.e., source locations and amplitudes, is solved over a few optimization variables with semidefinite programming. The grid-free CS reconstruction provides high-resolution imaging even with non...

  11. Magnetic compression into Brillouin flow

    International Nuclear Information System (INIS)

    Becker, R.

    1977-01-01

    The trajectories of beam edge electrons are calculated in the transition region between an electrostatic gun and an increasing magnetic field for various field shapes, transition lengths, and cathode fluxes, assuming that the resultant beam is of Brillouin flow type. The results give a good physical interpretation of the axial gradient of the magnetic field being responsible for the amount of magnetic compression and also for the proper injection conditions. It therefore becomes possible to predict, from the known characteristics of any fairly laminar electrostatic gun, the necessary axial gradient of the magnetic field and the axial position of the gun with respect to the field build-up. (orig.)

  12. Antiproton compression and radial measurements

    CERN Document Server

    Andresen, G B; Bowe, P D; Bray, C C; Butler, E; Cesar, C L; Chapman, S; Charlton, M; Fajans, J; Fujiwara, M C; Funakoshi, R; Gill, D R; Hangst, J S; Hardy, W N; Hayano, R S; Hayden, M E; Humphries, A J; Hydomako, R; Jenkins, M J; Jorgensen, L V; Kurchaninov, L; Lambo, R; Madsen, N; Nolan, P; Olchanski, K; Olin, A; Page R D; Povilus, A; Pusa, P; Robicheaux, F; Sarid, E; Seif El Nasr, S; Silveira, D M; Storey, J W; Thompson, R I; Van der Werf, D P; Wurtele, J S; Yamazaki, Y

    2008-01-01

    Control of the radial profile of trapped antiproton clouds is critical to trapping antihydrogen. We report detailed measurements of the radial manipulation of antiproton clouds, including areal density compressions by factors as large as ten, achieved by manipulating spatially overlapped electron plasmas. We show detailed measurements of the near-axis antiproton radial profile, and its relation to that of the electron plasma. We also measure the outer radial profile by ejecting antiprotons to the trap wall using an octupole magnet.

  13. Capillary waves of compressible fluids

    International Nuclear Information System (INIS)

    Falk, Kerstin; Mecke, Klaus

    2011-01-01

    The interplay of thermal noise and molecular forces is responsible for surprising features of liquids on sub-micrometer lengths, in particular at interfaces. Not only does the surface tension depend on the size of an applied distortion, and nanoscopic thin liquid films dewet faster than would be expected from hydrodynamics, but the dispersion relation of capillary waves also differs at the nanoscale from the familiar macroscopic behavior. Starting with the stochastic Navier-Stokes equation we study the coupling of capillary waves to acoustic surface waves, which is possible in compressible fluids. We find propagating 'acoustic-capillary waves' at nanometer wavelengths where in incompressible fluids capillary waves are overdamped.

  14. Shock compression of diamond crystal

    OpenAIRE

    Kondo, Ken-ichi; Ahrens, Thomas J.

    1983-01-01

    Two shock wave experiments employing inclined mirrors have been carried out to determine the Hugoniot elastic limit (HEL), final shock state at 191 and 217 GPa, and the post-shock state of diamond crystal, which is shock-compressed along the intermediate direction between the and crystallographic axes. The HEL wave has a velocity of 19.9 ± 0.3 mm/µsec and an amplitude of 63 ± 28 GPa. An alternate interpretation of the inclined wedge mirror streak record suggests a ramp precursor wave and th...

  15. Offshore compression system design for low cost and high reliability

    Energy Technology Data Exchange (ETDEWEB)

    Castro, Carlos J. Rocha de O.; Carrijo Neto, Antonio Dias; Cordeiro, Alexandre Franca [Chemtech Engineering Services and Software Ltd., Rio de Janeiro, RJ (Brazil). Special Projects Div.], Emails: antonio.carrijo@chemtech.com.br, carlos.rocha@chemtech.com.br, alexandre.cordeiro@chemtech.com.br

    2010-07-01

    In offshore oil fields, the oil streams coming from the wells usually contain significant amounts of gas. This gas is separated at low pressure and has to be compressed to the export pipeline pressure, usually a high pressure, so as to reduce the required diameter of the pipelines. In the past these gases were flared, but nowadays there is increasing pressure to improve the energy efficiency of oil rigs and to make use of this gaseous fraction. The most expensive parts of this kind of plant are the compression and power generation systems, the second being a strong function of the first, because the compressors are the largest power consumers. For this reason, the optimization of the compression system in terms of efficiency and cost is decisive for plant profit. Plant availability also has a strong influence on profit, especially in gas fields, where the products have a relatively low added value compared to oil. Hence the third design variable of the compression system becomes reliability: the higher the reliability, the larger the plant production. The main way to improve the reliability of a compression system is to use multiple compression trains in parallel, in a 2x50% or 3x50% configuration, with one train on stand-by. Such configurations are possible and have advantages and disadvantages, but their main side effect is increased cost. This is common offshore practice, but it does not always significantly improve plant availability, depending on the upstream process system. A series arrangement, together with a critical evaluation of the overall system, can in some cases provide a cheaper system with equal or better performance. This paper shows a case study of the procedure to evaluate a compression system design that improves reliability without extreme cost increase, balancing the number of equipment items, the series or parallel arrangement, and the driver selection. Two case studies will be
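    The availability arithmetic behind such parallel-train choices can be sketched with a k-out-of-n model. A minimal illustration, assuming independent trains and a hypothetical single-train availability of 0.95 (not a figure from the paper):

```python
from math import comb

def k_of_n_availability(a: float, k: int, n: int) -> float:
    """Probability that at least k of n independent trains
    (each with availability a) are running."""
    return sum(comb(n, i) * a**i * (1 - a)**(n - i) for i in range(k, n + 1))

a = 0.95  # assumed availability of a single compression train

single = k_of_n_availability(a, 1, 1)        # 1x100%: the one train must run
two_of_three = k_of_n_availability(a, 2, 3)  # 3x50%: any two of three must run

print(f"1x100%: {single:.4f}")   # 0.9500
print(f"3x50% : {two_of_three:.4f}")   # 0.9928
```

The stand-by train raises system availability well above that of a single train, at the cost of an extra machine, which is exactly the cost/reliability trade-off the paper evaluates.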

  16. Colon Targeted Guar Gum Compression Coated Tablets of Flurbiprofen: Formulation, Development, and Pharmacokinetics

    Directory of Open Access Journals (Sweden)

    Sateesh Kumar Vemula

    2013-01-01

    Full Text Available The rationale of the present study is to formulate flurbiprofen colon targeted compression coated tablets using guar gum to improve the therapeutic efficacy by increasing drug levels in the colon, and also to reduce the side effects in the upper gastrointestinal tract. A direct compression method was used to prepare flurbiprofen core tablets, which were then compression coated with guar gum. The tablets were optimized with the support of in vitro dissolution studies, and the result was further confirmed by pharmacokinetic studies. The optimized formulation (F4) showed almost complete drug release in the colon (99.86%) within 24 h without drug loss in the initial lag period of 5 h (only 6.84% drug release was observed during this period). The pharmacokinetic estimations proved the capability of guar gum compression coated tablets to achieve colon targeting. The Cmax of colon targeted tablets was 11956.15 ng/mL at a Tmax of 10 h, whereas it was 15677.52 ng/mL at 3 h for the immediate release tablets. The areas under the curve for the immediate release and compression coated tablets were 40385.78 and 78214.50 ng-h/mL and the mean residence times were 3.49 and 10.78 h, respectively. In conclusion, formulation of guar gum compression coated tablets was appropriate for colon targeting of flurbiprofen.

  17. Colon Targeted Guar Gum Compression Coated Tablets of Flurbiprofen: Formulation, Development, and Pharmacokinetics

    Science.gov (United States)

    Bontha, Vijaya Kumar

    2013-01-01

    The rationale of the present study is to formulate flurbiprofen colon targeted compression coated tablets using guar gum to improve the therapeutic efficacy by increasing drug levels in the colon, and also to reduce the side effects in the upper gastrointestinal tract. A direct compression method was used to prepare flurbiprofen core tablets, which were then compression coated with guar gum. The tablets were optimized with the support of in vitro dissolution studies, and the result was further confirmed by pharmacokinetic studies. The optimized formulation (F4) showed almost complete drug release in the colon (99.86%) within 24 h without drug loss in the initial lag period of 5 h (only 6.84% drug release was observed during this period). The pharmacokinetic estimations proved the capability of guar gum compression coated tablets to achieve colon targeting. The Cmax of colon targeted tablets was 11956.15 ng/mL at a Tmax of 10 h, whereas it was 15677.52 ng/mL at 3 h for the immediate release tablets. The areas under the curve for the immediate release and compression coated tablets were 40385.78 and 78214.50 ng-h/mL and the mean residence times were 3.49 and 10.78 h, respectively. In conclusion, formulation of guar gum compression coated tablets was appropriate for colon targeting of flurbiprofen. PMID:24260738
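    The summary metrics quoted in these records (AUC and mean residence time) are conventionally obtained from concentration-time data by the trapezoidal rule, with MRT = AUMC / AUC. A minimal sketch using hypothetical data points, not the study's raw data:

```python
def trapezoid(x, y):
    """Area under y(x) by the trapezoidal rule."""
    return sum((x[i + 1] - x[i]) * (y[i] + y[i + 1]) / 2 for i in range(len(x) - 1))

def auc_and_mrt(t, c):
    """AUC of the concentration curve; MRT as AUMC / AUC."""
    auc = trapezoid(t, c)
    aumc = trapezoid(t, [ti * ci for ti, ci in zip(t, c)])  # first-moment curve
    return auc, aumc / auc

# hypothetical concentration-time points (h, ng/mL), for illustration only
t = [0, 1, 3, 5, 8, 12, 24]
c = [0, 800, 1500, 1200, 700, 300, 50]
auc, mrt = auc_and_mrt(t, c)
```

A longer MRT with the same dose, as reported for the compression coated tablets, indicates that the drug is retained and released later, consistent with colon targeting.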

  18. Numerical study of the effects of carbon felt electrode compression in all-vanadium redox flow batteries

    International Nuclear Information System (INIS)

    Oh, Kyeongmin; Won, Seongyeon; Ju, Hyunchul

    2015-01-01

    Highlights: • The effects of electrode compression on VRFB are examined. • The electronic conductivity is improved when the compression is increased. • The kinetic losses are similar regardless of the electrode compression level. • The vanadium distribution is more uniform within highly compressed electrode. - Abstract: The porous carbon felt electrode is one of the major components of all-vanadium redox flow batteries (VRFBs). These electrodes are necessarily compressed during stack assembly to prevent liquid electrolyte leakage and diminish the interfacial contact resistance among VRFB stack components. The porous structure and properties of carbon felt electrodes have a considerable influence on the electrochemical reactions, transport features, and cell performance. Thus, a numerical study was performed herein to investigate the effects of electrode compression on the charge and discharge behavior of VRFBs. A three-dimensional, transient VRFB model developed in a previous study was employed to simulate VRFBs under two degrees of electrode compression (10% vs. 20%). The effects of electrode compression were precisely evaluated by analysis of the solid/electrolyte potential profiles, transfer current density, and vanadium concentration distributions, as well as the overall charge and discharge performance. The model predictions highlight the beneficial impact of electrode compression; the electronic conductivity of the carbon felt electrode is the main parameter improved by electrode compression, leading to reduction in ohmic loss through the electrodes. In contrast, the kinetics of the redox reactions and transport of vanadium species are not significantly altered by the degree of electrode compression (10% to 20%). This study enhances the understanding of electrode compression effects and demonstrates that the present VRFB model is a valuable tool for determining the optimal design and compression of carbon felt electrodes in VRFBs.

  19. Energy Preserved Sampling for Compressed Sensing MRI

    Directory of Open Access Journals (Sweden)

    Yudong Zhang

    2014-01-01

    Full Text Available The sampling patterns, cost functions, and reconstruction algorithms play important roles in optimizing compressed sensing magnetic resonance imaging (CS-MRI). Simple random sampling patterns do not take into account the energy distribution in k-space and result in suboptimal reconstruction of MR images. Therefore, a variety of variable density (VD) based sampling patterns have been developed. To further improve on these, we propose a novel energy preserving sampling (ePRESS) method. In addition, we improve the cost function by introducing phase correction and a region of support matrix, and we propose an iterative thresholding algorithm (ITA) to solve the improved cost function. We evaluate the proposed ePRESS sampling method, improved cost function, and ITA reconstruction algorithm on a 2D digital phantom and 2D in vivo MR brains of healthy volunteers. These assessments demonstrate that the proposed ePRESS method performs better than VD, POWER, and BKO; the improved cost function achieves better reconstruction quality than the conventional cost function; and ITA is faster than SISTA and is competitive with FISTA in terms of computation time.
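    The idea behind variable-density sampling can be illustrated with a toy 1-D mask whose keep-probability decays away from the k-space center, where most image energy lives. This is a generic VD sketch, not the ePRESS method itself; the polynomial decay law and target fraction are assumptions:

```python
import random

def vd_mask(n, target_frac, power=4.0, seed=0):
    """1-D variable-density sampling mask: the probability of keeping a
    phase-encode line decays polynomially with distance from k-space center."""
    rng = random.Random(seed)
    center = (n - 1) / 2
    mask = []
    for i in range(n):
        d = abs(i - center) / center  # 0 at center, 1 at the edge
        # normalized so the mean keep-probability is near target_frac
        p = min(1.0, target_frac * (power + 1) * (1 - d) ** power)
        mask.append(1 if rng.random() < p else 0)
    return mask

mask = vd_mask(256, target_frac=0.3)
print(sum(mask) / len(mask))  # roughly the target sampling fraction
```

The central lines are kept almost surely while the periphery is sampled sparsely, which is the property that simple uniform random masks lack.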

  20. Accelerated Compressed Sensing Based CT Image Reconstruction.

    Science.gov (United States)

    Hashemi, SayedMasoud; Beheshti, Soosan; Gill, Patrick R; Paul, Narinder S; Cobbold, Richard S C

    2015-01-01

    In X-ray computed tomography (CT) an important objective is to reduce the radiation dose without significantly degrading the image quality. Compressed sensing (CS) enables the radiation dose to be reduced by producing diagnostic images from a limited number of projections. However, conventional CS-based algorithms are computationally intensive and time-consuming. We propose a new algorithm that accelerates the CS-based reconstruction by using a fast pseudopolar Fourier based Radon transform and rebinning the diverging fan beams to parallel beams. The reconstruction process is analyzed using a maximum a posteriori approach, which is transformed into a weighted CS problem. The weights involved in the proposed model are calculated based on the statistical characteristics of the reconstruction process, which is formulated in terms of the measurement noise and rebinning interpolation error. Therefore, the proposed method not only accelerates the reconstruction, but also removes the rebinning and interpolation errors. Simulation results are shown for phantoms and a patient. For example, a 512 × 512 Shepp-Logan phantom when reconstructed from 128 rebinned projections using a conventional CS method had 10% error, whereas with the proposed method the reconstruction error was less than 1%. Moreover, computation times of less than 30 sec were obtained using a standard desktop computer without numerical optimization.

  1. Accelerated Compressed Sensing Based CT Image Reconstruction

    Directory of Open Access Journals (Sweden)

    SayedMasoud Hashemi

    2015-01-01

    Full Text Available In X-ray computed tomography (CT) an important objective is to reduce the radiation dose without significantly degrading the image quality. Compressed sensing (CS) enables the radiation dose to be reduced by producing diagnostic images from a limited number of projections. However, conventional CS-based algorithms are computationally intensive and time-consuming. We propose a new algorithm that accelerates the CS-based reconstruction by using a fast pseudopolar Fourier based Radon transform and rebinning the diverging fan beams to parallel beams. The reconstruction process is analyzed using a maximum a posteriori approach, which is transformed into a weighted CS problem. The weights involved in the proposed model are calculated based on the statistical characteristics of the reconstruction process, which is formulated in terms of the measurement noise and rebinning interpolation error. Therefore, the proposed method not only accelerates the reconstruction, but also removes the rebinning and interpolation errors. Simulation results are shown for phantoms and a patient. For example, a 512 × 512 Shepp-Logan phantom when reconstructed from 128 rebinned projections using a conventional CS method had 10% error, whereas with the proposed method the reconstruction error was less than 1%. Moreover, computation times of less than 30 sec were obtained using a standard desktop computer without numerical optimization.
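    Weighted CS problems of the kind described above are commonly attacked with iterative soft-thresholding. The following is a generic ISTA sketch on a toy sparse-recovery problem, not the authors' accelerated pseudopolar algorithm; the matrix, penalty, and step size are illustrative assumptions:

```python
def matvec(A, x):
    """Dense matrix-vector product over plain lists."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def ista(A, b, lam, step, iters=2000):
    """ISTA for min 0.5*||Ax - b||^2 + lam*||x||_1:
    gradient step on the quadratic term, then soft-thresholding."""
    n = len(A[0])
    x = [0.0] * n
    At = [list(col) for col in zip(*A)]  # transpose of A
    for _ in range(iters):
        r = [ri - bi for ri, bi in zip(matvec(A, x), b)]  # residual Ax - b
        g = matvec(At, r)                                  # gradient A^T r
        x = [xi - step * gi for xi, gi in zip(x, g)]
        # soft-threshold: shrink toward zero by step*lam
        x = [max(abs(v) - step * lam, 0.0) * (1 if v >= 0 else -1) for v in x]
    return x

# toy underdetermined system whose minimum-l1 solution is sparse: [2, 0, 1]
A = [[1.0, 0.0, 1.0],
     [0.0, 1.0, 1.0]]
b = [3.0, 1.0]
x = ista(A, b, lam=0.01, step=0.3)
```

The step size must stay below the reciprocal of the largest eigenvalue of AᵀA (here 3) for convergence; FISTA adds a momentum term on top of exactly this iteration.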

  2. Economic Modeling of Compressed Air Energy Storage

    Directory of Open Access Journals (Sweden)

    Rui Bo

    2013-04-01

    Full Text Available Due to the variable nature of wind resources, the increasing penetration level of wind power will have a significant impact on the operation and planning of the electric power system. Energy storage systems are considered an effective way to compensate for the variability of wind generation. This paper presents a detailed production cost simulation model to evaluate the economic value of compressed air energy storage (CAES) in systems with large-scale wind power generation. The co-optimization of energy and ancillary services markets is implemented in order to analyze the impacts of CAES, not only on energy supply, but also on system operating reserves. Both hourly and 5-minute simulations are considered to capture the economic performance of CAES in the day-ahead (DA) and real-time (RT) markets. The generalized network flow formulation is used to model the characteristics of CAES in detail. The proposed model is applied on a modified IEEE 24-bus reliability test system. The numerical example shows that besides the economic benefits gained through energy arbitrage in the DA market, CAES can also generate significant profits by providing reserves, compensating for wind forecast errors and intra-hour fluctuation, and participating in the RT market.
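    The energy-arbitrage component of the CAES value can be illustrated with a toy daily dispatch: charge in the cheapest hours, discharge in the most expensive ones. This deliberately ignores the CAES-specific fuel use, reserve revenues, and network constraints the paper models; the price series and round-trip efficiency are assumptions:

```python
def daily_arbitrage(prices, k, round_trip_eff):
    """Charge 1 MWh in each of the k cheapest hours and discharge 1 MWh in
    each of the k most expensive ones; revenue is discounted by efficiency."""
    ordered = sorted(prices)
    cost = sum(ordered[:k])                     # energy bought off-peak
    revenue = round_trip_eff * sum(ordered[-k:])  # energy sold on-peak
    return revenue - cost

# hypothetical day-ahead hourly prices ($/MWh), for illustration only
prices = [20, 18, 15, 14, 16, 22, 35, 50, 55, 48, 40, 38,
          36, 34, 33, 35, 45, 60, 65, 58, 44, 30, 25, 21]
profit = daily_arbitrage(prices, k=4, round_trip_eff=0.75)
```

Arbitrage alone is profitable only when the peak/off-peak spread exceeds the efficiency loss, which is why the paper's co-optimized reserve revenues matter for the overall CAES business case.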

  3. Compressed sensing in imaging mass spectrometry

    International Nuclear Information System (INIS)

    Bartels, Andreas; Dülk, Patrick; Trede, Dennis; Alexandrov, Theodore; Maaß, Peter

    2013-01-01

    Imaging mass spectrometry (IMS) is a technique of analytical chemistry for spatially resolved, label-free and multipurpose analysis of biological samples that is able to detect the spatial distribution of hundreds of molecules in one experiment. The hyperspectral IMS data is typically generated by a mass spectrometer analyzing the surface of the sample. In this paper, we propose a compressed sensing approach to IMS which potentially allows for faster data acquisition by collecting only a part of the pixels in the hyperspectral image and reconstructing the full image from this data. We present an integrative approach that performs peak picking in spectra and denoising of m/z-images simultaneously, whereas state-of-the-art data analysis methods solve these problems separately. We provide a proof of the robustness of the recovery of both the spectra and individual channels of the hyperspectral image and propose an algorithm, based on proximal mappings, to solve our optimization problem. The paper concludes with numerical reconstruction results for an IMS dataset of a rat brain coronal section. (paper)

  4. Efficient JPEG 2000 Image Compression Scheme for Multihop Wireless Networks

    Directory of Open Access Journals (Sweden)

    Halim Sghaier

    2011-08-01

    Full Text Available When using wireless sensor networks for real-time data transmission, some critical points should be considered. Restricted computational power, reduced memory, narrow bandwidth and limited energy supply impose strong constraints on sensor nodes. Therefore, maximizing network lifetime and minimizing energy consumption are always optimization goals. To overcome the computation and energy limitation of individual sensor nodes during image transmission, an energy efficient image transport scheme is proposed, taking advantage of the JPEG2000 still image compression standard, using MATLAB and C from Jasper. JPEG2000 provides a practical set of features, not necessarily available in previous standards. These features were achieved using two techniques: the discrete wavelet transform (DWT) and embedded block coding with optimized truncation (EBCOT). Performance of the proposed image transport scheme is investigated with respect to image quality and energy consumption. Simulation results are presented and show that the proposed scheme optimizes network lifetime and significantly reduces the amount of required memory by analyzing the functional influence of each parameter of this distributed image compression algorithm.
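    The DWT at the heart of JPEG2000 can be illustrated with the simplest wavelet, the Haar transform; JPEG2000 itself uses the CDF 5/3 and 9/7 filter banks, so this is only a structural sketch of the subband split:

```python
def haar_1d(signal):
    """One level of the 1-D Haar wavelet transform: pairwise averages
    (low-pass subband) followed by pairwise half-differences (high-pass)."""
    lo = [(a + b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
    hi = [(a - b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
    return lo + hi

def inverse_haar_1d(coeffs):
    """Perfect reconstruction from the two subbands."""
    half = len(coeffs) // 2
    lo, hi = coeffs[:half], coeffs[half:]
    out = []
    for a, d in zip(lo, hi):
        out += [a + d, a - d]
    return out

x = [9, 7, 3, 5, 6, 10, 2, 6]
y = haar_1d(x)  # [8.0, 4.0, 8.0, 4.0, 1.0, -1.0, -2.0, -2.0]
assert inverse_haar_1d(y) == x
```

Most of the signal energy lands in the low-pass half, which is why quantizing or truncating the high-pass coefficients (as EBCOT's rate control does, far more cleverly) compresses well.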

  5. Atomic effect algebras with compression bases

    International Nuclear Information System (INIS)

    Caragheorgheopol, Dan; Tkadlec, Josef

    2011-01-01

    Compression base effect algebras were recently introduced by Gudder [Demonstr. Math. 39, 43 (2006)]. They generalize sequential effect algebras [Rep. Math. Phys. 49, 87 (2002)] and compressible effect algebras [Rep. Math. Phys. 54, 93 (2004)]. The present paper focuses on atomic compression base effect algebras and the consequences of atoms being foci (so-called projections) of the compressions in the compression base. Part of our work generalizes results obtained in atomic sequential effect algebras by Tkadlec [Int. J. Theor. Phys. 47, 185 (2008)]. The notion of projection-atomicity is introduced and studied, and several conditions that force a compression base effect algebra or the set of its projections to be Boolean are found. Finally, we apply some of these results to sequential effect algebras and strengthen a previously established result concerning a sufficient condition for them to be Boolean.

  6. Compressibility, turbulence and high speed flow

    CERN Document Server

    Gatski, Thomas B

    2013-01-01

    Compressibility, Turbulence and High Speed Flow introduces the reader to the field of compressible turbulence and compressible turbulent flows across a broad speed range, through a unique complementary treatment of both the theoretical foundations and the measurement and analysis tools currently used. The book provides the reader with the necessary background and current trends in the theoretical and experimental aspects of compressible turbulent flows and compressible turbulence. Detailed derivations of the pertinent equations describing the motion of such turbulent flows are provided, and an extensive discussion of the various approaches used in predicting both free shear and wall bounded flows is presented. Experimental measurement techniques common to the compressible flow regime are introduced, with particular emphasis on the unique challenges presented by high speed flows. Both experimental and numerical simulation work is supplied throughout to provide the reader with an overall perspective of current tre...

  7. Medical image compression based on vector quantization with variable block sizes in wavelet domain.

    Science.gov (United States)

    Jiang, Huiyan; Ma, Zhiyuan; Hu, Yang; Yang, Benqiang; Zhang, Libo

    2012-01-01

    An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnostic-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size was implemented. In the novel vector quantization method, the local fractal dimension (LFD) was used to analyze the local complexity of each wavelet coefficient subband. Then an optimal quadtree method was employed to partition each wavelet coefficient subband into subblocks of several sizes. After that, a modified K-means approach based on an energy function was used in the codebook training phase. Finally, vector quantization coding was implemented in the different types of subblocks. In order to verify the effectiveness of the proposed algorithm, JPEG, JPEG2000, and a fractal coding approach were chosen as comparison algorithms. Experimental results show that the proposed method can improve the compression performance and can achieve a balance between the compression ratio and the image visual quality.

  8. Medical Image Compression Based on Vector Quantization with Variable Block Sizes in Wavelet Domain

    Directory of Open Access Journals (Sweden)

    Huiyan Jiang

    2012-01-01

    Full Text Available An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnostic-related information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size was implemented. In the novel vector quantization method, the local fractal dimension (LFD) was used to analyze the local complexity of each wavelet coefficient subband. Then an optimal quadtree method was employed to partition each wavelet coefficient subband into subblocks of several sizes. After that, a modified K-means approach based on an energy function was used in the codebook training phase. Finally, vector quantization coding was implemented in the different types of subblocks. In order to verify the effectiveness of the proposed algorithm, JPEG, JPEG2000, and a fractal coding approach were chosen as comparison algorithms. Experimental results show that the proposed method can improve the compression performance and can achieve a balance between the compression ratio and the image visual quality.
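    The codebook-training step can be sketched with plain k-means; the paper's modified, energy-function-based variant and the quadtree partitioning are not reproduced here, and the 2-D toy data stands in for wavelet-coefficient blocks:

```python
import random

def train_codebook(vectors, k, iters=20, seed=0):
    """Plain k-means codebook training for vector quantization."""
    rng = random.Random(seed)
    codebook = rng.sample(vectors, k)
    for _ in range(iters):
        # assignment step: each vector goes to its nearest codeword
        clusters = [[] for _ in range(k)]
        for v in vectors:
            j = min(range(k),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(v, codebook[i])))
            clusters[j].append(v)
        # update step: move each codeword to its cluster centroid
        for i, cl in enumerate(clusters):
            if cl:
                codebook[i] = tuple(sum(c) / len(cl) for c in zip(*cl))
    return codebook

def quantize(v, codebook):
    """Index of the nearest codeword (what actually gets transmitted)."""
    return min(range(len(codebook)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(v, codebook[i])))

# two well-separated clusters of 2-D "blocks"
data = [(0.0, 0.1), (0.1, 0.0), (0.0, 0.0), (5.0, 5.1), (5.1, 5.0), (5.0, 5.0)]
cb = train_codebook(data, k=2)
```

Compression comes from storing only the codeword index per block, so the rate is log2(k) bits per block regardless of the block's dimension.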

  9. Wavelet transform-vector quantization compression of supercomputer ocean model simulation output

    Energy Technology Data Exchange (ETDEWEB)

    Bradley, J N; Brislawn, C M

    1992-11-12

    We describe a new procedure for efficient compression of digital information for storage and transmission purposes. The algorithm involves a discrete wavelet transform subband decomposition of the data set, followed by vector quantization of the wavelet transform coefficients using application-specific vector quantizers. The new vector quantizer design procedure optimizes the assignment of both memory resources and vector dimensions to the transform subbands by minimizing an exponential rate-distortion functional subject to constraints on both overall bit-rate and encoder complexity. The wavelet-vector quantization method, which originates in digital image compression, is applicable to the compression of other multidimensional data sets possessing some degree of smoothness. In this paper we discuss the use of this technique for compressing the output of supercomputer simulations of global climate models. The data presented here comes from Semtner-Chervin global ocean models run at the National Center for Atmospheric Research and at the Los Alamos Advanced Computing Laboratory.

  10. SPECTRUM analysis of multispectral imagery in conjunction with wavelet/KLT data compression

    Energy Technology Data Exchange (ETDEWEB)

    Bradley, J.N.; Brislawn, C.M.

    1993-12-01

    The data analysis program, SPECTRUM, is used for fusion, visualization, and classification of multi-spectral imagery. The raw data used in this study is Landsat Thematic Mapper (TM) 7-channel imagery, with 8 bits of dynamic range per channel. To facilitate data transmission and storage, a compression algorithm is proposed based on spatial wavelet transform coding and KLT decomposition of interchannel spectral vectors, followed by adaptive optimal multiband scalar quantization. The performance of SPECTRUM clustering and visualization is evaluated on compressed multispectral data. 8-bit visualizations of 56-bit data show little visible distortion at 50:1 compression and graceful degradation at higher compression ratios. Two TM images were processed in this experiment: a 1024 x 1024-pixel scene of the region surrounding the Chernobyl power plant, taken a few months before the reactor malfunction, and a 2048 x 2048 image of Moscow and surrounding countryside.
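    The KLT decorrelation of interchannel spectral vectors can be illustrated in the two-channel case, where the transform reduces to a rotation onto the eigenvectors of the 2x2 covariance matrix. A minimal sketch with hypothetical channel data (the real pipeline works on 7-channel TM vectors):

```python
import math

def klt_2ch(vectors):
    """Karhunen-Loeve transform for 2-channel spectral vectors:
    mean-center, then rotate onto the eigenvectors of the covariance."""
    n = len(vectors)
    mx = sum(v[0] for v in vectors) / n
    my = sum(v[1] for v in vectors) / n
    cxx = sum((v[0] - mx) ** 2 for v in vectors) / n
    cyy = sum((v[1] - my) ** 2 for v in vectors) / n
    cxy = sum((v[0] - mx) * (v[1] - my) for v in vectors) / n
    # rotation angle that diagonalizes the 2x2 covariance matrix
    theta = 0.5 * math.atan2(2 * cxy, cxx - cyy)
    c, s = math.cos(theta), math.sin(theta)
    return [(c * (v[0] - mx) + s * (v[1] - my),
             -s * (v[0] - mx) + c * (v[1] - my)) for v in vectors]

# correlated two-channel samples (channel 2 roughly 2x channel 1 plus noise)
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 8.0), (5.0, 9.9)]
out = klt_2ch(data)
```

After the rotation the cross-channel covariance vanishes and the energy concentrates in the first component, which is what lets the subsequent scalar quantizers spend bits where the variance is.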

  11. MR diagnosis of retropatellar chondral lesions under compression. A comparison with histological findings

    Energy Technology Data Exchange (ETDEWEB)

    Andresen, R. [Dept. of Radiology, Div. of Radiodiagnostics, Steglitz Medical Centre, Free Univ. of Berlin (Germany); Radmer, S. [Dept. of Radiology and Nuclear Medicine, Behring Municipal Hospital, Academic Teaching Hospital, Free Univ. of Berlin (Germany); Koenig, H. [Dept. of Radiology, Div. of Radiodiagnostics, Steglitz Medical Centre, Free Univ. of Berlin (Germany); Banzer, D. [Dept. of Radiology and Nuclear Medicine, Behring Municipal Hospital, Academic Teaching Hospital, Free Univ. of Berlin (Germany); Wolf, K.J. [Dept. of Radiology, Div. of Radiodiagnostics, Steglitz Medical Centre, Free Univ. of Berlin (Germany)

    1996-01-01

    Purpose: The aim of the study was to improve the diagnosis of chondromalacia patellae (CMP) by MR imaging under defined compression of the retropatellar cartilage, using a specially designed knee compressor. The results were compared with histological findings to obtain an MR classification of CMP. Method: MR imaging was performed in in vitro studies of 25 knees from cadavers to investigate the effects of compression on the retropatellar articular cartilage. The results were verified by subsequent histological evaluations. Results: There was a significant difference in cartilage thickness reduction and signal intensity behaviour under compression according to the stage of CMP. Conclusion: Based on the decrease in cartilage thickness, the signal intensity behaviour under compression, and the cartilage morphology, the studies permitted an MR classification of CMP into stages I-IV in line with the histological findings. Healthy cartilage was clearly distinguished, a finding which may optimize CMP diagnosis. (orig.).

  12. Loss of interface pressure in various compression bandage systems over seven days.

    Science.gov (United States)

    Protz, Kerstin; Heyer, Kristina; Verheyen-Cronau, Ida; Augustin, Matthias

    2014-01-01

    Manufacturers' instructions for multi-component compression bandage systems state that these products can remain in place for up to 7 days during the therapy of venous leg ulcers. This implies that the required pressure is sustained during this time. The present study investigated the persistence of pressure of compression systems over 7 days. All 6 compression systems available in Germany at the time of the trial were tested on 35 volunteers without signs of venous leg disease. Bandaging with short-stretch bandages was included for comparison. Pressure was measured using PicoPress®. Initially, all products showed a sufficient resting pressure of 40 mm Hg as checked with a pressure monitor; with the exception of one system, the pressure fell by at least 23.8%, the maximum being 47.5%, over the period of 7 days. The currently available compression systems are not fit to maintain the required pressure. Optimized products need to be developed.

  13. Excessive chest compression rate is associated with insufficient compression depth in prehospital cardiac arrest.

    Science.gov (United States)

    Monsieurs, Koenraad G; De Regge, Melissa; Vansteelandt, Kristof; De Smet, Jeroen; Annaert, Emmanuel; Lemoyne, Sabine; Kalmar, Alain F; Calle, Paul A

    2012-11-01

    BACKGROUND AND GOAL OF STUDY: The relationship between chest compression rate and compression depth is unknown. In order to characterise this relationship, we performed an observational study in prehospital cardiac arrest patients. We hypothesised that faster compressions are associated with decreased depth. In patients undergoing prehospital cardiopulmonary resuscitation by health care professionals, chest compression rate and depth were recorded using an accelerometer (E-series monitor-defibrillator, Zoll, U.S.A.). Compression depth was compared across rate categories (below, within and above the 80-120/min range). A difference in compression depth ≥0.5 cm was considered clinically significant. Mixed models with repeated measurements of chest compression depth and rate (level 1) nested within patients (level 2) were used, with compression rate as a continuous and as a categorical predictor of depth. Results are reported as means and standard error (SE). One hundred and thirty-three consecutive patients were analysed (213,409 compressions). In 77 out of 133 (58%) patients a statistically significant lower depth was observed for rates >120/min compared to rates 80-120/min; in 40 out of 133 (30%) this difference was also clinically significant. The mixed models predicted that the deepest compression (4.5 cm) occurred at a rate of 86/min, with progressively lower compression depths at higher rates; rates >145/min were predicted to result in insufficient depth. Compression depth for rates 80-120/min was on average 4.5 cm (SE 0.06) compared to 4.1 cm (SE 0.06) for compressions >120/min (mean difference 0.4 cm, statistically significant). Higher compression rates were thus associated with lower compression depths, and avoiding excessive compression rates may lead to more compressions of sufficient depth. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  14. Dual compression is not an uncommon type of iliac vein compression syndrome.

    Science.gov (United States)

    Shi, Wan-Yin; Gu, Jian-Ping; Liu, Chang-Jian; Lou, Wen-Sheng; He, Xu

    2017-09-01

    Typical iliac vein compression syndrome (IVCS) is characterized by compression of the left common iliac vein (LCIV) by the overlying right common iliac artery (RCIA). We describe an underestimated type of IVCS with dual compression by the right and left common iliac arteries (LCIA) simultaneously. Thirty-one patients with IVCS were retrospectively included. All patients received trans-catheter venography and computed tomography (CT) examinations for diagnosing and evaluating IVCS. Late venography and reconstructed CT were used for evaluating the anatomical relationship among the LCIV, RCIA and LCIA. Imaging manifestations as well as demographic data were collected and evaluated by two experienced radiologists. Sole and dual compression were found in 32.3% (n = 10) and 67.7% (n = 21) of the 31 patients, respectively. No statistical differences existed between them in terms of age, gender, LCIV diameter at the point of maximum compression, pressure gradient across the stenosis, or percentage compression level. On CT and venography, sole compression commonly appeared as a longitudinal compression at the orifice of the LCIV, whereas dual compression usually presented in one of two patterns: a lengthy stenosis along the upper side of the LCIV, or a longitudinal compression near the orifice of the external iliac vein. The presence of dual compression was significantly correlated with a tortuous LCIA (p = 0.006). The left common iliac vein can thus present with dual compression. This type of compression has typical manifestations on late venography and CT.

  15. How Wage Compression Affects Job Turnover

    OpenAIRE

    Heyman, Fredrik

    2008-01-01

    I use Swedish establishment-level panel data to test Bertola and Rogerson’s (1997) hypothesis of a positive relation between the degree of wage compression and job reallocation. Results indicate that the effect of wage compression on job turnover is positive and significant in the manufacturing sector. The wage compression effect is stronger on job destruction than on job creation, consistent with downward wage rigidity. Further results include a strong positive relationship between the fract...

  16. Compressed Air Production Using Vehicle Suspension

    OpenAIRE

    Ninad Arun Malpure; Sanket Nandlal Bhansali

    2015-01-01

    Abstract Generally, compressed air is produced using different types of air compressors, which consume a lot of electric energy and are noisy. In this paper an innovative idea is put forth for the production of compressed air using the movement of the vehicle suspension, which is normally wasted. The conversion of the force energy into compressed air is carried out by a mechanism which consists of the vehicle suspension system, a hydraulic cylinder, a non-return valve, an air compressor and an air receiver. We are co...

  17. Subband Coding Methods for Seismic Data Compression

    Science.gov (United States)

    Kiely, A.; Pollara, F.

    1995-01-01

    This paper presents a study of seismic data compression techniques and a compression algorithm based on subband coding. The compression technique described could be used as a progressive transmission system, where successive refinements of the data can be requested by the user. This allows seismologists to first examine a coarse version of waveforms with minimal usage of the channel and then decide where refinements are required. Rate-distortion performance results are presented and comparisons are made with two block transform methods.
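    The progressive-transmission idea above (send a coarse waveform first, then refinements on request) can be illustrated with one level of subband analysis. The sketch below is a minimal stand-in using a Haar filter pair, assuming NumPy and an even-length trace; the paper's actual filter banks are not specified here:

```python
import numpy as np

def haar_subbands(signal):
    """One level of Haar analysis: split a trace into a coarse (low-pass)
    band and a detail (high-pass) band. Transmitting the coarse band
    first gives the coarse-then-refine behaviour described above."""
    s = np.asarray(signal, dtype=float)
    coarse = (s[0::2] + s[1::2]) / np.sqrt(2)
    detail = (s[0::2] - s[1::2]) / np.sqrt(2)
    return coarse, detail

def haar_reconstruct(coarse, detail):
    """Inverse transform; with detail zeroed it yields the coarse preview."""
    out = np.empty(coarse.size * 2)
    out[0::2] = (coarse + detail) / np.sqrt(2)
    out[1::2] = (coarse - detail) / np.sqrt(2)
    return out
```

    Reconstructing from the coarse band alone (zero detail) gives the pairwise-averaged preview a seismologist would inspect before requesting the refinement band.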

  18. Compressibility of the protein-water interface

    Science.gov (United States)

    Persson, Filip; Halle, Bertil

    2018-06-01

    The compressibility of a protein relates to its stability, flexibility, and hydrophobic interactions, but the measurement, interpretation, and computation of this important thermodynamic parameter present technical and conceptual challenges. Here, we present a theoretical analysis of protein compressibility and apply it to molecular dynamics simulations of four globular proteins. Using additively weighted Voronoi tessellation, we decompose the solution compressibility into contributions from the protein and its hydration shells. We find that positively cross-correlated protein-water volume fluctuations account for more than half of the protein compressibility that governs the protein's pressure response, while the self correlations correspond to small (˜0.7%) fluctuations of the protein volume. The self compressibility is nearly the same as for ice, whereas the total protein compressibility, including cross correlations, is ˜45% of the bulk-water value. Taking the inhomogeneous solvent density into account, we decompose the experimentally accessible protein partial compressibility into intrinsic, hydration, and molecular exchange contributions and show how they can be computed with good statistical accuracy despite the dominant bulk-water contribution. The exchange contribution describes how the protein solution responds to an applied pressure by redistributing water molecules from lower to higher density; it is negligibly small for native proteins, but potentially important for non-native states. Because the hydration shell is an open system, the conventional closed-system compressibility definitions yield a pseudo-compressibility. We define an intrinsic shell compressibility, unaffected by occupation number fluctuations, and show that it approaches the bulk-water value exponentially with a decay "length" of one shell, less than the bulk-water compressibility correlation length. 
In the first hydration shell, the intrinsic compressibility is 25%-30% lower than in
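    The fluctuation decomposition sketched in this abstract rests on the standard statistical-mechanics relation between isothermal compressibility and volume fluctuations. The identity below is textbook material, not a formula quoted from the paper, with the system volume split as V = V_P + V_W into protein and water parts:

```latex
\kappa_T = \frac{\langle \delta V^2 \rangle}{k_B T\,\langle V \rangle},
\qquad
\langle \delta V^2 \rangle
  = \langle \delta V_P^2 \rangle                 % protein self term
  + 2\,\langle \delta V_P\,\delta V_W \rangle    % protein-water cross term
  + \langle \delta V_W^2 \rangle                 % water self term
```

    The cross term is the positively correlated protein-water contribution that the authors find accounts for more than half of the protein compressibility governing the pressure response.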

  19. Eccentric crank variable compression ratio mechanism

    Science.gov (United States)

    Lawrence, Keith Edward [Kobe, JP; Moser, William Elliott [Peoria, IL; Roozenboom, Stephan Donald [Washington, IL; Knox, Kevin Jay [Peoria, IL

    2008-05-13

    A variable compression ratio mechanism for an internal combustion engine that has an engine block and a crankshaft is disclosed. The variable compression ratio mechanism has a plurality of eccentric disks configured to support the crankshaft. Each of the plurality of eccentric disks has at least one cylindrical portion annularly surrounded by the engine block. The variable compression ratio mechanism also has at least one actuator configured to rotate the plurality of eccentric disks.

  20. Computer calculations of compressibility of natural gas

    Energy Technology Data Exchange (ETDEWEB)

    Abou-Kassem, J.H.; Mattar, L.; Dranchuk, P.M.

    An alternative method for the calculation of pseudo reduced compressibility of natural gas is presented. The method is incorporated into the routines by adding a single FORTRAN statement before the RETURN statement. The method is suitable for computer and hand-held calculator applications. It produces the same reduced compressibility as other available methods but is computationally superior. Tabular definitions of coefficients and comparisons of predicted pseudo reduced compressibility using different methods are presented, along with appended FORTRAN subroutines. 7 refs., 2 tabs.
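    The quantity being computed is the pseudo-reduced compressibility, conventionally c_pr = 1/p_pr - (1/Z)(dZ/dp_pr). The sketch below is a hedged illustration of that relation in Python rather than the paper's appended FORTRAN subroutines; the linear Z-factor model is hypothetical, and a real application would plug in a correlation such as Dranchuk-Abou-Kassem:

```python
def pseudo_reduced_compressibility(z_func, p_pr, dp=1e-5):
    """c_pr = 1/p_pr - (1/Z) * dZ/dp_pr, with the Z-factor derivative
    evaluated by a central difference."""
    z = z_func(p_pr)
    dz_dp = (z_func(p_pr + dp) - z_func(p_pr - dp)) / (2.0 * dp)
    return 1.0 / p_pr - dz_dp / z

# Hypothetical linear Z-factor model, purely for illustration
z_linear = lambda p_pr: 1.0 - 0.05 * p_pr
c_pr = pseudo_reduced_compressibility(z_linear, 2.0)
```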

  1. Compressibility of the protein-water interface.

    Science.gov (United States)

    Persson, Filip; Halle, Bertil

    2018-06-07

    The compressibility of a protein relates to its stability, flexibility, and hydrophobic interactions, but the measurement, interpretation, and computation of this important thermodynamic parameter present technical and conceptual challenges. Here, we present a theoretical analysis of protein compressibility and apply it to molecular dynamics simulations of four globular proteins. Using additively weighted Voronoi tessellation, we decompose the solution compressibility into contributions from the protein and its hydration shells. We find that positively cross-correlated protein-water volume fluctuations account for more than half of the protein compressibility that governs the protein's pressure response, while the self correlations correspond to small (∼0.7%) fluctuations of the protein volume. The self compressibility is nearly the same as for ice, whereas the total protein compressibility, including cross correlations, is ∼45% of the bulk-water value. Taking the inhomogeneous solvent density into account, we decompose the experimentally accessible protein partial compressibility into intrinsic, hydration, and molecular exchange contributions and show how they can be computed with good statistical accuracy despite the dominant bulk-water contribution. The exchange contribution describes how the protein solution responds to an applied pressure by redistributing water molecules from lower to higher density; it is negligibly small for native proteins, but potentially important for non-native states. Because the hydration shell is an open system, the conventional closed-system compressibility definitions yield a pseudo-compressibility. We define an intrinsic shell compressibility, unaffected by occupation number fluctuations, and show that it approaches the bulk-water value exponentially with a decay "length" of one shell, less than the bulk-water compressibility correlation length. In the first hydration shell, the intrinsic compressibility is 25%-30% lower than

  2. Thermal compression modulus of polarized neutron matter

    International Nuclear Information System (INIS)

    Abd-Alla, M.

    1990-05-01

    We applied the equation of state for pure polarized neutron matter at finite temperature, derived previously, to calculate the compression modulus. The compression modulus of pure neutron matter at zero temperature is very large, reflecting the stiffness of the equation of state, and it shows only a slight temperature dependence. Introducing the spin excess parameter into the equation of state calculations is important because it has a significant effect on the compression modulus. (author). 25 refs, 2 tabs

  3. Cosmological Particle Data Compression in Practice

    Science.gov (United States)

    Zeyen, M.; Ahrens, J.; Hagen, H.; Heitmann, K.; Habib, S.

    2017-12-01

    In cosmological simulations, trillions of particles are handled and several terabytes of unstructured particle data are generated in each time step. Transferring this data directly from memory to disk in an uncompressed way results in a massive load on I/O and storage systems. Hence, one goal of domain scientists is to compress the data before storing it to disk while minimizing the loss of information. To prevent reading back uncompressed data from disk, this can be done in an in-situ process. Since the simulation continuously generates data, the available time for the compression of one time step is limited. Therefore, the evaluation of compression techniques has shifted from focusing only on compression rates to including run-times and scalability. In recent years several compression techniques for cosmological data have become available. These techniques can be either lossy or lossless. For both cases, this study aims to evaluate and compare the state-of-the-art compression techniques for unstructured particle data. This study focuses on the techniques available in the Blosc framework with its multi-threading support, the XZ Utils toolkit with the LZMA algorithm that achieves high compression rates, and the widespread FPZIP and ZFP methods for lossy compression. For the investigated compression techniques, quantitative performance indicators such as compression rates, run-time/throughput, and reconstruction errors are measured. Based on these factors, this study offers a comprehensive analysis of the individual techniques and discusses their applicability for in-situ compression. In addition, domain-specific measures are evaluated on the reconstructed data sets, and the relative error rates and statistical properties are analyzed and compared. Based on this study, future challenges and directions in the compression of unstructured cosmological particle data are identified.

  4. Assessment of myocardial bridge by cardiac CT: Intracoronary transluminal attenuation gradient derived from diastolic phase predicts systolic compression

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Meng Meng; Zhang, Yang; Li, Yue Hua; Li, Wen Bin; Li, Ming Hua; Zhang, Jiayin [Institute of Diagnostic and Interventional Radiology, Shanghai Jiao Tong University Affiliated Sixth People' s Hospital, Shangha (China)

    2017-08-01

    To study the predictive value of transluminal attenuation gradient (TAG) derived from diastolic phase of coronary computed tomography angiography (CCTA) for identifying systolic compression of myocardial bridge (MB). Consecutive patients diagnosed with MB based on CCTA findings and without obstructive coronary artery disease were retrospectively enrolled. In total, 143 patients with 144 MBs were included in the study. Patients were classified into three groups: without systolic compression, with systolic compression < 50%, and with systolic compression ≥ 50%. TAG was defined as the linear regression coefficient between intraluminal attenuation in Hounsfield units (HU) and length from the vessel ostium. Other indices such as the length and depth of the MB were also recorded. TAG was the lowest in MB patients with systolic compression ≥ 50% (-19.9 ± 8.7 HU/10 mm). Receiver operating characteristic curve analysis was performed to determine the optimal cutoff values for identifying systolic compression ≥ 50%. The result indicated an optimal cutoff value of TAG as -18.8 HU/10 mm (area under curve = 0.778, p < 0.001), which yielded higher sensitivity, specificity, positive predictive value, negative predictive value, and diagnostic accuracy (54.1, 80.5, 72.8, and 75.0%, respectively). In addition, the TAG of MB with diastolic compression was significantly lower than the TAG of MB without diastolic compression (-21.4 ± 4.8 HU/10 mm vs. -12.7 ± 8 HU/10 mm, p < 0.001). TAG was a better predictor of MB with systolic compression ≥ 50%, compared to the length or depth of the MB. The TAG of MB with persistent diastolic compression was significantly lower than the TAG without diastolic compression.
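    The study defines TAG as the linear regression coefficient between intraluminal attenuation (HU) and distance from the vessel ostium, reported per 10 mm. A minimal sketch of that computation, assuming NumPy; the sample values and variable names are invented for illustration:

```python
import numpy as np

def transluminal_attenuation_gradient(distance_mm, attenuation_hu):
    """TAG: least-squares slope of intraluminal attenuation (HU) against
    distance from the vessel ostium (mm), scaled to HU per 10 mm."""
    slope, _intercept = np.polyfit(distance_mm, attenuation_hu, 1)
    return slope * 10.0

# Hypothetical attenuation samples along a coronary segment
distance = np.array([0.0, 10.0, 20.0, 30.0, 40.0])
hu = np.array([400.0, 380.0, 360.0, 340.0, 320.0])
tag = transluminal_attenuation_gradient(distance, hu)  # -20.0 HU/10 mm
```

    A steeply negative TAG (attenuation dropping quickly along the vessel) is what the study associates with systolic compression of 50% or more.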

  5. Compressed Data Structures for Range Searching

    DEFF Research Database (Denmark)

    Bille, Philip; Gørtz, Inge Li; Vind, Søren Juhl

    2015-01-01

    matrices and web graphs. Our contribution is twofold. First, we show how to compress geometric repetitions that may appear in standard range searching data structures (such as K-D trees, Quad trees, Range trees, R-trees, Priority R-trees, and K-D-B trees), and how to implement subsequent range queries...... on the compressed representation with only a constant factor overhead. Secondly, we present a compression scheme that efficiently identifies geometric repetitions in point sets, and produces a hierarchical clustering of the point sets, which combined with the first result leads to a compressed representation...

  6. Energy Conservation In Compressed Air Systems

    International Nuclear Information System (INIS)

    Yusuf, I.Y.; Dewu, B.B.M.

    2004-01-01

    Compressed air is an essential utility that accounts for a substantial part of the electricity consumption (bill) in most industrial plants. Although the general saying that air is free of charge is not true for compressed air, the utility's cost is not accorded the importance due to it by most industries. The paper will show that the cost of 1 unit of energy in the form of compressed air is at least 5 times the cost of the electricity (energy input) required to produce it. The paper will also provide energy conservation tips for compressed air systems

  7. Study of CSR longitudinal bunch compression cavity

    International Nuclear Information System (INIS)

    Yin Dayu; Li Peng; Liu Yong; Xie Qingchun

    2009-01-01

    The scheme of the longitudinal bunch compression cavity for the Cooling Storage Ring (CSR) is an important issue. Plasma physics experiments require a high density heavy ion beam and a short pulsed bunch, which can be produced by non-adiabatic compression of the bunch, implemented as a fast compression with a 90 degree rotation in the longitudinal phase space. The phase space rotation in fast compression is initiated by a fast jump of the RF-voltage amplitude. For this purpose, the CSR longitudinal bunch compression cavity, loaded with FINEMET-FT-1M, is studied and simulated with the MAFIA code. In this paper, the CSR longitudinal bunch compression cavity is simulated, and the initial bunch length of 238U72+ at 250 MeV/u will be compressed from 200 ns to 50 ns. The construction and RF properties of the CSR longitudinal bunch compression cavity are also simulated and calculated with the MAFIA code. The operation frequency of the cavity is 1.15 MHz with a peak voltage of 80 kV, and the cavity can be used to compress heavy ions in the CSR. (authors)

  8. Memory hierarchy using row-based compression

    Science.gov (United States)

    Loh, Gabriel H.; O'Connor, James M.

    2016-10-25

    A system includes a first memory and a device coupleable to the first memory. The device includes a second memory to cache data from the first memory. The second memory includes a plurality of rows, each row including a corresponding set of compressed data blocks of non-uniform sizes and a corresponding set of tag blocks. Each tag block represents a corresponding compressed data block of the row. The device further includes decompression logic to decompress data blocks accessed from the second memory. The device further includes compression logic to compress data blocks to be stored in the second memory.
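    As a software analogy of the row organization in this patent abstract (compressed data blocks of non-uniform size plus per-block tags), the sketch below packs variable-length compressed blocks back-to-back in a row buffer and keeps a tag recording each block's offset and length. zlib stands in for the unspecified hardware compression logic, and all names are invented for illustration:

```python
import zlib

class CompressedRow:
    """One cache row: compressed blocks stored back-to-back, with a tag
    per block recording where its compressed bytes live."""

    def __init__(self):
        self.storage = bytearray()
        self.tags = {}  # block id -> (offset, compressed length)

    def store(self, block_id, data: bytes):
        compressed = zlib.compress(data)  # stand-in for hardware compression logic
        self.tags[block_id] = (len(self.storage), len(compressed))
        self.storage += compressed

    def load(self, block_id) -> bytes:
        offset, length = self.tags[block_id]
        return zlib.decompress(bytes(self.storage[offset:offset + length]))

row = CompressedRow()
row.store(0, b"A" * 64)         # highly compressible -> small block
row.store(1, bytes(range(64)))  # less compressible -> larger block
```

    The non-uniform block sizes are visible in the tags: the two 64-byte inputs occupy very different amounts of row storage.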

  9. Comparing biological networks via graph compression

    Directory of Open Access Journals (Sweden)

    Hayashida Morihiro

    2010-09-01

    Full Text Available Abstract Background Comparison of various kinds of biological data is one of the main problems in bioinformatics and systems biology. Data compression methods have been applied to comparison of large sequence data and protein structure data. Since it is still difficult to compare global structures of large biological networks, it is reasonable to try to apply data compression methods to comparison of biological networks. In existing compression methods, the uniqueness of compression results is not guaranteed because there is some ambiguity in selection of overlapping edges. Results This paper proposes novel efficient methods, CompressEdge and CompressVertices, for comparing large biological networks. In the proposed methods, an original network structure is compressed by iteratively contracting identical edges and sets of connected edges. Then, the similarity of two networks is measured by a compression ratio of the concatenated networks. The proposed methods are applied to comparison of metabolic networks of several organisms, H. sapiens, M. musculus, A. thaliana, D. melanogaster, C. elegans, E. coli, S. cerevisiae, and B. subtilis, and are compared with an existing method. These results suggest that our methods can efficiently measure the similarities between metabolic networks. Conclusions Our proposed algorithms, which compress node-labeled networks, are useful for measuring the similarity of large biological networks.
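    The similarity measure described above, a compression ratio computed on the concatenated networks, can be sketched as follows. This is not the paper's CompressEdge or CompressVertices procedure, which contracts identical edges; zlib serves only as a generic stand-in compressor, and the edge lists are invented:

```python
import zlib

def compressed_size(data: bytes) -> int:
    return len(zlib.compress(data, 9))

def network_similarity(edges_a, edges_b):
    """Compression ratio of the concatenated edge lists: lower values
    mean the two networks share more structure, since the concatenation
    compresses better than the two parts taken separately."""
    a = "\n".join(sorted(edges_a)).encode()
    b = "\n".join(sorted(edges_b)).encode()
    return compressed_size(a + b) / (compressed_size(a) + compressed_size(b))

net1 = ["A-B", "B-C", "C-D", "D-E"]
net2 = ["A-B", "B-C", "C-D", "D-F"]  # differs from net1 in one edge
ratio = network_similarity(net1, net2)
```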

  10. Compression therapy after ankle fracture surgery

    DEFF Research Database (Denmark)

    Winge, R; Bayer, L; Gottlieb, H

    2017-01-01

    PURPOSE: The main purpose of this systematic review was to investigate the effect of compression treatment on the perioperative course of ankle fractures and describe its effect on edema, pain, ankle joint mobility, wound healing complication, length of stay (LOS) and time to surgery (TTS). The aim...... undergoing surgery, testing either intermittent pneumatic compression, compression bandage and/or compression stocking and reporting its effect on edema, pain, ankle joint mobility, wound healing complication, LOS and TTS. To conclude on data a narrative synthesis was performed. RESULTS: The review included...

  11. Compressed Sensing with Rank Deficient Dictionaries

    DEFF Research Database (Denmark)

    Hansen, Thomas Lundgaard; Johansen, Daniel Højrup; Jørgensen, Peter Bjørn

    2012-01-01

    In compressed sensing it is generally assumed that the dictionary matrix constitutes a (possibly overcomplete) basis of the signal space. In this paper we consider dictionaries that do not span the signal space, i.e. rank deficient dictionaries. We show that in this case the signal-to-noise ratio...... (SNR) in the compressed samples can be increased by selecting the rows of the measurement matrix from the column space of the dictionary. As an example application of compressed sensing with a rank deficient dictionary, we present a case study of compressed sensing applied to the Coarse Acquisition (C...
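    The mechanism behind the claimed SNR gain can be sketched numerically: measurement rows built from the column space of the rank-deficient dictionary annihilate the noise component orthogonal to that subspace, while generic Gaussian rows pick it up. All dimensions and variable names below are invented for illustration, assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
m, k, n_meas = 32, 8, 12  # signal dim, dictionary atoms (k < m: rank deficient), measurements

D = rng.standard_normal((m, k))  # dictionary spanning only a k-dim subspace

# Split a noise vector into components inside and outside col(D)
raw = 0.1 * rng.standard_normal(m)
coeffs, *_ = np.linalg.lstsq(D, raw, rcond=None)
n_in = D @ coeffs    # noise the measurements cannot avoid
n_out = raw - n_in   # noise orthogonal to the dictionary's range

# Measurement rows drawn from the column space of D, per the abstract's idea
Phi_sub = rng.standard_normal((n_meas, k)) @ D.T
# Generic i.i.d. Gaussian rows, for contrast
Phi_rand = rng.standard_normal((n_meas, m))

# Subspace-aligned rows annihilate the out-of-subspace noise entirely
out_power_sub = float(np.sum((Phi_sub @ n_out) ** 2))
out_power_rand = float(np.sum((Phi_rand @ n_out) ** 2))
```

    Since the signal of interest lies in col(D), suppressing the orthogonal noise raises the SNR of the compressed samples without touching the signal content.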

  12. Less is More: Bigger Data from Compressive Measurements

    Energy Technology Data Exchange (ETDEWEB)

    Stevens, Andrew; Browning, Nigel D.

    2017-07-01

    Compressive sensing approaches are beginning to take hold in (scanning) transmission electron microscopy (S/TEM) [1,2,3]. Compressive sensing is a mathematical theory about acquiring signals in a compressed form (measurements) and the probability of recovering the original signal by solving an inverse problem [4]. The inverse problem is underdetermined (more unknowns than measurements), so it is not obvious that recovery is possible. Compression is achieved by taking inner products of the signal with measurement weight vectors. Both Gaussian random weights and Bernoulli (0,1) random weights form a large class of measurement vectors for which recovery is possible. The measurements can also be designed through an optimization process. The key insight for electron microscopists is that compressive sensing can be used to increase acquisition speed and reduce dose. Building on work initially developed for optical cameras, this new paradigm will allow electron microscopists to solve more problems in the engineering and life sciences. We will be collecting orders of magnitude more data than previously possible, because we will have increased temporal/spatial/spectral sampling rates and will be able to interrogate larger classes of samples that were previously too beam sensitive to survive the experiment. For example, consider an in-situ experiment that takes 1 minute. With traditional sensing, we might collect 5 images per second for a total of 300 images. With compressive sensing, each of those 300 images can be expanded into 10 more images, making the collection rate 50 images per second and the decompressed data a total of 3000 images [3]. But what are the implications, in terms of data, of this new methodology? Acquisition of compressed data will require downstream reconstruction to be useful. The reconstructed data will be much larger than traditional data, and we will need space to store the reconstructions during
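    The measurement model described above (inner products with Gaussian random weight vectors) and the downstream reconstruction step can be sketched in a few lines. The sparse test signal, dimensions, and the greedy solver are all illustrative assumptions, not the authors' pipeline; orthogonal matching pursuit stands in for the l1 or Bayesian reconstructions used in practice:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 256, 64  # signal length, number of compressive measurements (4x fewer)

# A sparse test signal standing in for a compressible line scan
x = np.zeros(n)
x[[10, 50, 200]] = [3.0, -2.0, 1.5]

# Each measurement is an inner product with a Gaussian random weight vector
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x  # compressed acquisition: 64 numbers instead of 256

def omp(A, y, sparsity):
    """Orthogonal matching pursuit: a simple greedy solver for the
    underdetermined inverse problem."""
    residual, support = y.copy(), []
    for _ in range(sparsity):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coeffs
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coeffs
    return x_hat

x_rec = omp(A, y, 3)  # downstream reconstruction recovers the signal
```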

  13. Compressive failure with interacting cracks

    International Nuclear Information System (INIS)

    Yang Guoping; Liu Xila

    1993-01-01

    The failure processes in concrete and other brittle materials are the results of the propagation, coalescence and interaction of many preexisting microcracks or voids. To understand the real behaviour of brittle materials, it is necessary to bridge the gap from the relatively mature single-crack behaviour to stochastically distributed imperfections, that is, to connect the crack propagation and interaction of the microscopic mechanism with the macroscopic parameters of brittle materials. Brittle failure in compression was studied theoretically by Horii and Nemat-Nasser (1986), who obtained a closed-form solution for a preexisting flaw or certain special regular flaws. Zaitsev and Wittmann (1981) published a paper on crack propagation in compression in so-called numerical concrete, but they did not take account of the interaction among the microcracks. As for modelling the influence of crack interaction on fracture parameters, many studies have also been reported. Researchers examining the ratios of SIFs with and without the interaction have found amplifying or shielding effects of crack interaction, depending on the relative positions of the microcracks. The present paper attempts to simulate the whole failure process of a brittle specimen in compression, including the complicated coupling effects between the interaction and propagation of randomly distributed or other typical microcrack configurations, step by step. The lengths, orientations and positions of the microcracks are all taken as random variables. The crack interaction among many preexisting random microcracks is evaluated with the help of a simple interaction matrix (Yang and Liu, 1991). For the subcritically stable propagation of microcracks in mixed mode fracture, the well-known maximum hoop stress criterion is adopted to compute branching lengths and directions at each tip of the crack

  14. General purpose graphic processing unit implementation of adaptive pulse compression algorithms

    Science.gov (United States)

    Cai, Jingxiao; Zhang, Yan

    2017-07-01

    This study introduces a practical approach to implementing real-time signal processing algorithms for general surveillance radar based on NVIDIA graphics processing units (GPUs). The pulse compression algorithms are implemented using compute unified device architecture (CUDA) libraries such as the CUDA basic linear algebra subroutines and the CUDA fast Fourier transform library, which are adopted from open source libraries and optimized for NVIDIA GPUs. For more advanced, adaptive processing algorithms such as adaptive pulse compression, customized kernel optimization is needed, and this is investigated here. A statistical optimization approach is developed for this purpose that requires little knowledge of the physical configurations of the kernels. It was found that the kernel optimization approach can significantly improve performance. Benchmark performance is compared with CPU performance in terms of processing acceleration. The proposed implementation framework can be used in various radar systems including ground-based phased array radar, airborne sense-and-avoid radar, and aerospace surveillance radar.

  15. Fragment separator momentum compression schemes

    Energy Technology Data Exchange (ETDEWEB)

    Bandura, Laura, E-mail: bandura@anl.gov [Facility for Rare Isotope Beams (FRIB), 1 Cyclotron, East Lansing, MI 48824-1321 (United States); National Superconducting Cyclotron Lab, Michigan State University, 1 Cyclotron, East Lansing, MI 48824-1321 (United States); Erdelyi, Bela [Argonne National Laboratory, Argonne, IL 60439 (United States); Northern Illinois University, DeKalb, IL 60115 (United States); Hausmann, Marc [Facility for Rare Isotope Beams (FRIB), 1 Cyclotron, East Lansing, MI 48824-1321 (United States); Kubo, Toshiyuki [RIKEN Nishina Center, RIKEN, Wako (Japan); Nolen, Jerry [Argonne National Laboratory, Argonne, IL 60439 (United States); Portillo, Mauricio [Facility for Rare Isotope Beams (FRIB), 1 Cyclotron, East Lansing, MI 48824-1321 (United States); Sherrill, Bradley M. [National Superconducting Cyclotron Lab, Michigan State University, 1 Cyclotron, East Lansing, MI 48824-1321 (United States)

    2011-07-21

    We present a scheme to use a fragment separator and profiled energy degraders to transfer longitudinal phase space into transverse phase space while maintaining achromatic beam transport. The first order beam optics theory of the method is presented and the consequent enlargement of the transverse phase space is discussed. An interesting consequence of the technique is that the first order mass resolving power of the system is determined by the first dispersive section up to the energy degrader, independent of whether or not momentum compression is used. The fragment separator at the Facility for Rare Isotope Beams is a specific application of this technique and is described along with simulations by the code COSY INFINITY.

  16. Fragment separator momentum compression schemes

    International Nuclear Information System (INIS)

    Bandura, Laura; Erdelyi, Bela; Hausmann, Marc; Kubo, Toshiyuki; Nolen, Jerry; Portillo, Mauricio; Sherrill, Bradley M.

    2011-01-01

    We present a scheme to use a fragment separator and profiled energy degraders to transfer longitudinal phase space into transverse phase space while maintaining achromatic beam transport. The first order beam optics theory of the method is presented and the consequent enlargement of the transverse phase space is discussed. An interesting consequence of the technique is that the first order mass resolving power of the system is determined by the first dispersive section up to the energy degrader, independent of whether or not momentum compression is used. The fragment separator at the Facility for Rare Isotope Beams is a specific application of this technique and is described along with simulations by the code COSY INFINITY.

  17. Lossless Compression of Digital Images

    DEFF Research Database (Denmark)

    Martins, Bo

    Presently, tree coders are the best bi-level image coders. The current ISO standard, JBIG, is a good example. By organising code length calculations properly, a vast number of possible models (trees) can be investigated within reasonable time prior to generating code. A number of general-purpose coders...... version that is substantially faster than its precursors and brings it close to the multi-pass coders in compression performance. Handprinted characters are of unequal complexity; recent work by Singer and Tishby demonstrates that utilizing the physiological process of writing one can synthesize cursive....... The feature vector of a bitmap initially constitutes a lossy representation of the contour(s) of the bitmap. The initial feature space is usually too large but can be reduced automatically by use of a predictive code length or predictive error criterion.

  18. Compressive creep of silicon nitride

    International Nuclear Information System (INIS)

    Silva, C.R.M. da; Melo, F.C.L. de; Cairo, C.A.; Piorino Neto, F.

    1990-01-01

    Silicon nitride samples were formed by a pressureless sintering process, using neodymium oxide, and a mixture of neodymium oxide and yttrium oxide, as sintering aids. The short-term compressive creep behaviour was evaluated over a stress range of 50-300 MPa and a temperature range of 1200-1350 °C. Post-sintering heat treatments in nitrogen with a stepwise decremental variation of temperature were performed on some samples, and microstructural analysis by X-ray diffraction and transmission electron microscopy showed that the secondary crystalline phases which form from the remnant glass depend upon the composition and percentage of additives. Stress exponent values near unity were obtained for materials with low glass content, suggesting grain-boundary diffusion accommodation processes. Cavitation thereby becomes prevalent with increasing stress and temperature and a decreasing degree of crystallization of the grain boundary phase. (author) [pt

  19. Right brachial angiography with compression

    International Nuclear Information System (INIS)

    Ruggiero, G.; Dalbuono, S.; Tampieri, D.

    1982-01-01

    A technique for performing right brachial angiography by compressing the right anterior-inferior part of the neck is proposed, as a way of studying the left carotid circulation without puncturing the left carotid artery. Success was obtained in about 75% of cases and depends mainly on the anatomical nature of the innominate artery. When the technique is successful, both left carotid arteries in the neck and their intracranial branches can be satisfactorily visualized. In some cases visualization of the left vertebral artery was also obtained. Attention is also drawn to the increased diagnostic possibilities of studying the vessels in the neck with a greater dilution of the contrast medium. (orig.)

  20. Shock compression of geological materials

    International Nuclear Information System (INIS)

    Kirk, S; Braithwaite, C; Williamson, D; Jardine, A

    2014-01-01

    Understanding the shock compression of geological materials is important for many applications, particularly in the mining industry. During blast mining, the response to shock loading determines the wave propagation speed and the resulting fragmentation of the rock. The present work has studied the Hugoniot of two geological materials: Lake Quarry Granite and Gosford Sandstone. For samples of these materials, the composition was characterised in detail. The Hugoniot of Lake Quarry Granite was predicted from this information, as the material is fully dense, and was found to be in good agreement with the measured Hugoniot. Gosford Sandstone is porous and undergoes compaction during shock loading. Such behaviour is similar to that of other granular materials, and we show how it can be described using a P-α compaction model.

  1. Modeling Compressed Turbulence with BHR

    Science.gov (United States)

    Israel, Daniel

    2011-11-01

    Turbulence undergoing compression or expansion occurs in systems ranging from internal combustion engines to supernovae. One common feature in many of these systems is the presence of multiple reacting species. Direct numerical simulation data is available for the single-fluid, low turbulent Mach number case. Wu, et al. (1985) compared their DNS results to several Reynolds-averaged Navier-Stokes models. They also proposed a three-equation k - ɛ - τ model, in conjunction with a Reynolds-stress model. Subsequent researchers have proposed alternative corrections to the standard k - ɛ formulation. Here we investigate three variants of the BHR model (Besnard, 1992). BHR is a model for multi-species variable-density turbulence. The three variants are the linear eddy-viscosity, algebraic-stress, and full Reynolds-stress formulations. We then examine the predictions of the model for the fluctuating density field for the case of variable-density turbulence.

  2. Nuclear transmutation by flux compression

    International Nuclear Information System (INIS)

    Seifritz, W.

    2001-01-01

    A new idea for the transmutation of minor actinides and long- (and even short-) lived fission products is presented. It is based on the property of neutron flux compression in nuclear (fast and/or thermal) reactors possessing spatially non-stationary critical masses. An advantage factor for the burn-up fluence of the elements to be transmuted on the order of 100 or more is obtainable compared with the classical way of transmutation. Three typical examples of such transmuters (a subcritical ring reactor with a rotating reflector, a subcritical ring reactor with a rotating spallation source, the so-called ''pulsed energy amplifier'', and a fast burn-wave reactor) are presented and analysed with regard to this purpose. (orig.) [de

  3. New thermodynamical systems. Alternative of compression-absorption; Nouveaux systemes thermodynamiques. Alternative de la compression-absorption

    Energy Technology Data Exchange (ETDEWEB)

    Feidt, M.; Brunin, O.; Lottin, O.; Vidal, J.F. [Universite Henri Poincare Nancy, 54 - Vandoeuvre-les-Nancy (France); Hivet, B. [Electricite de France, 77 - Moret sur Loing (France)

    1996-12-31

    This paper describes a five-year joint research effort carried out by Electricite de France (EdF) and the ESPE group of the LEMTA on compression-absorption heat pumps. It shows how a thermodynamical model of the machinery, completed with precise exchanger-reactor models, makes it possible to simulate, dimension and eventually optimize the system. A small-power prototype has been tested and the first results are analyzed with the help of the models. A real-scale experiment at industrial sites is expected in the future. (J.S.) 20 refs.

  4. Development of 1D Liner Compression Code for IDL

    Science.gov (United States)

    Shimazu, Akihisa; Slough, John; Pancotti, Anthony

    2015-11-01

    A 1D liner compression code has been developed to model liner implosion dynamics in the Inductively Driven Liner Experiment (IDL), where an FRC plasmoid is compressed via inductively driven metal liners. The driver circuit, magnetic field, Joule heating, and liner dynamics calculations are performed in sequence at each time step to couple these effects in the code. To obtain more realistic magnetic-field results for a given drive-coil geometry, 2D and 3D effects are incorporated into the 1D field calculation through a correction-factor table-lookup approach. The commercial low-frequency electromagnetic field solver ANSYS Maxwell 3D is used to solve the magnetic-field profile for the static liner condition at various liner radii in order to derive correction factors for the 1D field calculation in the code. The liner dynamics results from the code are verified to be in good agreement with results from the commercial explicit dynamics solver ANSYS Explicit Dynamics and with a previous liner experiment. The developed code is used to optimize the capacitor bank and driver coil design for better energy transfer and coupling. FRC gain calculations are also performed using the liner compression data from the code for the conceptual design of a reactor-sized system for fusion energy gains.

  5. Edge compression techniques for visualization of dense directed graphs.

    Science.gov (United States)

    Dwyer, Tim; Henry Riche, Nathalie; Marriott, Kim; Mears, Christopher

    2013-12-01

    We explore the effectiveness of visualizing dense directed graphs by replacing individual edges with edges connected to 'modules' (groups of nodes) such that the new edges imply aggregate connectivity. We only consider techniques that offer a lossless compression: that is, where the entire graph can still be read from the compressed version. The techniques considered are: a simple grouping of nodes with identical neighbor sets; Modular Decomposition, which permits internal structure in modules and allows them to be nested; and Power Graph Analysis, which further allows edges to cross module boundaries. These techniques all have the same goal, to compress the set of edges that need to be rendered to fully convey connectivity, but each successive relaxation of the module definition permits fewer edges to be drawn in the rendered graph. Each successive technique also, we hypothesize, requires a higher degree of mental effort to interpret. We test this hypothetical trade-off with two studies involving human participants. For Power Graph Analysis we propose a novel optimal technique based on constraint programming. This enables us to explore the parameter space for the technique more precisely than could be achieved with a heuristic. Although applicable to many domains, we are motivated by, and discuss in particular, the application to software dependency analysis.
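The first of the three techniques, grouping nodes with identical neighbor sets, is simple enough to sketch. The Python fragment below is an illustrative reimplementation (the function and variable names are ours, not the authors'):

```python
from collections import defaultdict

def modules_by_identical_neighbors(edges):
    """Group nodes of a directed graph whose in- and out-neighbor sets are
    identical. Each group can be drawn as one 'module' with aggregate edges;
    the grouping is lossless because every original edge is implied by an
    aggregate edge to the module."""
    outs, ins, nodes = defaultdict(set), defaultdict(set), set()
    for u, v in edges:
        outs[u].add(v)
        ins[v].add(u)
        nodes.update((u, v))
    groups = defaultdict(list)
    for n in sorted(nodes):
        # nodes sharing both neighbor sets fall into the same group
        groups[(frozenset(outs[n]), frozenset(ins[n]))].append(n)
    return sorted(groups.values())
```

For example, in the bipartite graph a→c, a→d, b→c, b→d, the nodes {a, b} and {c, d} collapse into two modules joined by a single aggregate edge.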

  6. Compressible Convection Experiment using Xenon Gas in a Centrifuge

    Science.gov (United States)

    Menaut, R.; Alboussiere, T.; Corre, Y.; Huguet, L.; Labrosse, S.; Deguen, R.; Moulin, M.

    2017-12-01

    We present here an experiment specially designed to study compressible convection in the lab. For significant compressible-convection effects, the parameters of the experiment have to be optimized: we use xenon gas in a cubic cell. This cell is placed in a centrifuge to artificially increase the apparent gravity and is heated from below. With these choices, we are able to reach a dissipation number close to the Earth's outer-core value. We will present our results for different heating fluxes and rotation rates. We succeeded in observing an adiabatic gradient of 3 K/cm in the cell. Studies of pressure and temperature fluctuations lead us to think that, at high heating flux, convection takes the form of a single roll in the cell. Moreover, these fluctuations show that the flow is geostrophic owing to the high rotation speed. This important role of rotation, via Coriolis-force effects, in our experimental setup led us to develop a 2D quasi-geostrophic compressible model in the anelastic liquid approximation. We test this model numerically with the finite-element solver FreeFem++ and compare its results with our experimental data. In conclusion, we will present our plans for the next experiment, in which the cubic cell will be replaced by an annular cell. We will discuss the new effects expected from this geometry, such as Rossby waves and zonal flows.

  7. Real-time video compressing under DSP/BIOS

    Science.gov (United States)

    Chen, Qiu-ping; Li, Gui-ju

    2009-10-01

    This paper presents real-time MPEG-4 Simple Profile video compression based on a DSP processor. The video-compression framework is built around a TMS320C6416 microprocessor, a TDS510 simulator and a PC. It uses the embedded real-time operating system DSP/BIOS and its API functions to build periodic functions, tasks, interrupts, etc., realizing real-time video compression. To address data transfer within the system, and based on the architecture of the C64x DSP, double buffering and the EDMA data-transfer controller are used to move data from external to internal memory, so that data transfer and processing take place at the same time; architecture-level optimizations are used to improve the software pipeline. The system uses DSP/BIOS for multi-thread scheduling and achieves high-speed transfer of large amounts of data. Experimental results show the encoder can encode 768*576, 25 frame/s video images in real time.
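The double-buffer ("ping-pong") pattern described in the abstract can be illustrated sequentially in Python. This is a conceptual stand-in only: on the C64x the EDMA transfer into the inactive buffer genuinely overlaps the processing of the active one, whereas here the two steps merely alternate.

```python
def double_buffered(source_chunks, process):
    """Process a stream with two alternating buffers: while one buffer is
    being processed, the next chunk is 'transferred' into the other.
    Sequential stand-in for the EDMA-driven overlap on the DSP."""
    buffers = [None, None]
    results = []
    active = 0
    it = iter(source_chunks)
    buffers[active] = next(it, None)       # prime the first buffer
    while buffers[active] is not None:
        # start the "transfer" into the inactive buffer
        buffers[1 - active] = next(it, None)
        # process the active buffer while the transfer is "in flight"
        results.append(process(buffers[active]))
        active = 1 - active                # swap buffers (ping-pong)
    return results
```

For example, `double_buffered([b'ab', b'cd'], lambda c: c.upper())` processes each chunk while the next one is staged in the other buffer.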

  8. A New Approach for Fingerprint Image Compression

    Energy Technology Data Exchange (ETDEWEB)

    Mazieres, Bertrand

    1997-12-01

    The FBI has been collecting fingerprint cards since 1924 and now has over 200 million of them. Digitized with 8 bits of grayscale resolution at 500 dots per inch, this amounts to about 2000 terabytes of information. Moreover, without any compression, transmitting a 10 MB card over a 9600 baud connection takes about 3 hours. Hence we need compression, and compression as close to lossless as possible: all fingerprint details must be kept. Lossless compression usually gives a compression ratio no better than 2:1, which is not sufficient. Compressing these images with the JPEG standard leads to artefacts which appear even at low compression rates. The FBI therefore chose, in 1993, a compression scheme based on a wavelet transform followed by scalar quantization and entropy coding: the so-called WSQ. This scheme achieves compression ratios of 20:1 without any perceptible loss of quality. The FBI publication specifies only a decoder, which means that many parameters of the encoding process can be changed: the type of analysis/reconstruction filters, the way the bit allocation is made, and the number of Huffman tables used for the entropy coding. The first encoder used 9/7 filters for the wavelet transform and performed the bit allocation under a high-rate assumption. Since the transform produces 64 subbands, quite a few bands receive only a few bits even at an archival-quality compression rate of 0.75 bit/pixel. Thus, after a brief overview of the standard, we discuss a new approach to the bit allocation that is better grounded in theory. We then address some implementation aspects, particularly the new entropy coder and the features that allow applications other than fingerprint image compression. Finally, we compare the performance of the new encoder with that of the first encoder.
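WSQ itself uses 9/7 biorthogonal filters, a 64-subband decomposition and Huffman entropy coding; as a toy illustration of the transform-then-scalar-quantization pipeline, here is a one-level sketch using the much simpler Haar filter (our own example, not the FBI codec):

```python
def haar_step(signal):
    """One level of an (unnormalized) Haar wavelet transform: the averages
    approximate the signal, the details carry what quantization discards."""
    avg = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    det = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return avg, det

def quantize(coeffs, step):
    """Uniform scalar quantization; an entropy coder would follow this stage."""
    return [round(c / step) for c in coeffs]
```

On a smooth fingerprint-like row the detail coefficients are small, so they quantize to few distinct symbols and compress well; the bit-allocation question the paper studies is how to divide a bit budget among such subbands.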

  9. Compressive Strength of Cometary Surfaces Derived from Radar Observations

    Science.gov (United States)

    ElShafie, A.; Heggy, E.

    2014-12-01

    Landing on a comet nucleus and probing it, mechanically using harpoons, penetrometers and drills, and electromagnetically using low-frequency radar waves, is a complex task that will be tackled by the Rosetta mission for Comet 67P/Churyumov-Gerasimenko. The mechanical properties (i.e. density, porosity and compressive strength) and the electrical properties (i.e. the real and imaginary parts of the dielectric constant) of the comet nucleus constrain both the mechanical and electromagnetic probing capabilities of Rosetta, as well as the choice of landing site, the safety of the landing, and subsurface data interpretation. During landing, the sounding radar data collected by Rosetta's CONSERT experiment can be used to probe the comet's upper regolith layer by assessing its dielectric properties, which are then inverted to retrieve the surface mechanical properties. These observations can help characterize the mechanical properties of the landing site, which will optimize the operation of the anchor system. In this effort, we correlate the mechanical and electrical properties of cometary analogs to each other and derive an empirical model that can be used to retrieve density, porosity and compressive strength from the dielectric properties of the upper regolith inverted from CONSERT observations during the landing phase. In our approach we consider snow a viable cometary material analog due to its low density and porous nature. We therefore used compressive strength and dielectric constant measurements conducted on snow at a temperature of 250 K over a density range of 0.4-0.9 g/cm3 to investigate the relation between compressive strength and dielectric constant over a cometary-relevant density range. Our results suggest that compressive strength increases linearly as a function of the dielectric constant over the density range mentioned above. The minimum and maximum compressive strength of 0.5 and 4.5 MPa corresponded to a
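An empirical linear model of the kind described above, compressive strength as a linear function of the dielectric constant, can be fitted by ordinary least squares. The data points below are purely illustrative placeholders (the paper's snow measurements are not reproduced here), chosen only to span the 0.5-4.5 MPa range the abstract mentions:

```python
def fit_line(xs, ys):
    """Ordinary least squares for sigma = a * eps + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Illustrative (eps, sigma[MPa]) pairs, NOT measured values from the paper
eps = [1.8, 2.2, 2.6, 3.0]
sigma = [0.5, 1.8, 3.1, 4.4]
slope, intercept = fit_line(eps, sigma)

def strength_from_dielectric(e):
    """Invert a CONSERT-style dielectric estimate to a strength estimate."""
    return slope * e + intercept
```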

  10. Compression of Short Text on Embedded Systems

    DEFF Research Database (Denmark)

    Rein, S.; Gühmann, C.; Fitzek, Frank

    2006-01-01

    The paper details a scheme for lossless compression of a short data series larger than 50 bytes. The method uses arithmetic coding and context modelling with a low-complexity data model. A data model that takes 32 kBytes of RAM already cuts the data size in half. The compression scheme just takes...

  11. Recoil Experiments Using a Compressed Air Cannon

    Science.gov (United States)

    Taylor, Brett

    2006-01-01

    Ping-Pong vacuum cannons, potato guns, and compressed air cannons are popular and dramatic demonstrations for lecture and lab. Students enjoy them for the spectacle, but they can also be used effectively to teach physics. Recently we have used a student-built compressed air cannon as a laboratory activity to investigate impulse, conservation of…

  12. Rupture of esophagus by compressed air.

    Science.gov (United States)

    Wu, Jie; Tan, Yuyong; Huo, Jirong

    2016-11-01

    Currently, beverages containing compressed air, such as cola and champagne, are widely used in our daily life. Opening the bottle improperly, usually with the teeth, can lead to injury, and even to rupture of the esophagus. This letter to the editor describes a case of esophageal rupture caused by compressed air.

  13. MP3 compression of Doppler ultrasound signals.

    Science.gov (United States)

    Poepping, Tamie L; Gill, Jeremy; Fenster, Aaron; Holdsworth, David W

    2003-01-01

    The effect of lossy MP3 compression on spectral parameters derived from Doppler ultrasound (US) signals was investigated. Compression was tested on signals acquired from two sources: (1) phase quadrature and (2) stereo audio directional output. A total of eleven 10-s acquisitions of Doppler US signal were collected from each source at three sites in a flow phantom. Doppler signals were digitized at 44.1 kHz and compressed using four grades of MP3 compression (in kilobits per second, kbps; compression ratios in brackets): 1400 kbps (uncompressed), 128 kbps (11:1), 64 kbps (22:1) and 32 kbps (44:1). Doppler spectra were characterized by peak velocity, mean velocity, spectral width, integrated power and ratio of spectral power between negative and positive velocities. The results suggest that MP3 compression of digital Doppler US signals is feasible at 128 kbps, with a resulting 11:1 compression ratio, without compromising clinically relevant information. Higher compression ratios led to significant differences for both signal sources when compared with the uncompressed signals. Copyright 2003 World Federation for Ultrasound in Medicine & Biology

  14. Normalized compression distance of multisets with applications

    NARCIS (Netherlands)

    Cohen, A.R.; Vitányi, P.M.B.

    Pairwise normalized compression distance (NCD) is a parameter-free, feature-free, alignment-free, similarity metric based on compression. We propose an NCD of multisets that is also metric. Previously, attempts to obtain such an NCD failed. For classification purposes it is superior to the pairwise
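The pairwise NCD that the multiset version generalizes has a standard compressor-based form, NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)), where C(z) is the compressed length of z. It can be sketched with zlib as the compressor (an illustration of the pairwise metric only, not the authors' multiset construction):

```python
import zlib

def C(data: bytes) -> int:
    """Compressed length of data, standing in for Kolmogorov complexity."""
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Pairwise normalized compression distance: near 0 for very similar
    inputs, near 1 for inputs that share no compressible structure."""
    cx, cy = C(x), C(y)
    return (C(x + y) - min(cx, cy)) / max(cx, cy)
```

Because zlib is a real, imperfect compressor, `ncd(x, x)` is small but not exactly 0, and values can slightly exceed 1 for incompressible inputs; both artifacts are well known for compressor-based NCD.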

  15. Spectral Compressive Sensing with Polar Interpolation

    DEFF Research Database (Denmark)

    Fyhn, Karsten; Dadkhahi, Hamid; F. Duarte, Marco

    2013-01-01

    . In this paper, we introduce a greedy recovery algorithm that leverages a band-exclusion function and a polar interpolation function to address these two issues in spectral compressive sensing. Our algorithm is geared towards line spectral estimation from compressive measurements and outperforms most existing...

  16. Compression and fast retrieval of SNP data.

    Science.gov (United States)

    Sambo, Francesco; Di Camillo, Barbara; Toffolo, Gianna; Cobelli, Claudio

    2014-11-01

    The increasing interest in rare genetic variants and epistatic genetic effects on complex phenotypic traits is currently pushing genome-wide association study design towards datasets of increasing size, both in the number of studied subjects and in the number of genotyped single nucleotide polymorphisms (SNPs). This, in turn, is leading to a compelling need for new methods for compression and fast retrieval of SNP data. We present a novel algorithm and file format for compressing and retrieving SNP data, specifically designed for large-scale association studies. Our algorithm is based on two main ideas: (i) compress linkage disequilibrium blocks in terms of differences with a reference SNP and (ii) compress reference SNPs exploiting information on their call rate and minor allele frequency. Tested on two SNP datasets and compared with several state-of-the-art software tools, our compression algorithm is shown to be competitive in terms of compression rate and to outperform all tools in terms of time to load compressed data. Our compression and decompression algorithms are implemented in a C++ library, are released under the GNU General Public License and are freely downloadable from http://www.dei.unipd.it/~sambofra/snpack.html. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
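Idea (i) above, storing each SNP in a linkage-disequilibrium block as its differences from a reference SNP, can be sketched as follows. This is a simplified illustration of the general approach, not the SNPack implementation:

```python
def delta_encode_block(block):
    """Encode an LD block (a list of genotype vectors over the same subjects)
    as (reference, diffs): each non-reference SNP keeps only the positions
    where it differs from the reference. In high-LD blocks these lists are
    short, which is where the compression comes from."""
    ref = block[0]
    diffs = [
        [(i, g) for i, (g, r) in enumerate(zip(snp, ref)) if g != r]
        for snp in block[1:]
    ]
    return ref, diffs

def delta_decode_block(ref, diffs):
    """Exact inverse of delta_encode_block."""
    out = [list(ref)]
    for d in diffs:
        snp = list(ref)
        for i, g in d:
            snp[i] = g
        out.append(snp)
    return out
```

Fast retrieval follows from the same layout: a single SNP can be reconstructed from the reference plus its own (short) diff list without touching the rest of the block.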

  17. Dynamic compression and sound quality of music

    NARCIS (Netherlands)

    Lieshout, van R.A.J.M.; Wagenaars, W.M.; Houtsma, A.J.M.; Stikvoort, E.F.

    1984-01-01

    Amplitude compression is often used to match the dynamic range of music to a particular playback situation in order to ensure, e.g., continuous audibility in a noisy environment or unobtrusiveness if the music is intended as a quiet background. Since amplitude compression is a nonlinear process,

  18. Subjective evaluation of dynamic compression in music

    NARCIS (Netherlands)

    Wagenaars, W.M.; Houtsma, A.J.M.; Lieshout, van R.A.J.M.

    1986-01-01

    Amplitude compression is often used to match the dynamic range of music to a particular playback situation so as to ensure continuous audibility in a noisy environment. Since amplitude compression is a nonlinear process, it is potentially very damaging to sound quality. Three physical parameters of

  19. Mammography parameters: compression, dose, and discomfort

    International Nuclear Information System (INIS)

    Blanco, S.; Di Risio, C.; Andisco, D.; Rojas, R.R.; Rojas, R.M.

    2017-01-01

    Objective: To confirm the importance of compression in mammography and relate it to the discomfort expressed by patients. Materials and methods: Two samples of 402 and 268 mammograms were obtained from two diagnostic centres that use the same mammographic equipment but different compression techniques. The patient age range was 21 to 50 years. (authors) [es

  20. Hardware compression using common portions of data

    Science.gov (United States)

    Chang, Jichuan; Viswanathan, Krishnamurthy

    2015-03-24

    Methods and devices are provided for data compression. Data compression can include receiving a plurality of data chunks, sampling at least some of the plurality of data chunks, extracting a common portion from a number of the plurality of data chunks based on the sampling, and storing a remainder of the plurality of data chunks in memory.

  1. Diagnostic value of MRI for nerve root compression due to lumbar canal stenosis. Clinical and anatomic study

    International Nuclear Information System (INIS)

    Seki, Michihiro; Kikuchi, Shinichi; Kageyama, Kazuhiro; Katakura, Toshihiko; Suzuki, Kenji

    1995-01-01

    Magnetic resonance imaging (MRI) was undertaken in 26 patients with surgically proven nerve root compression due to lumbar canal stenosis. The findings on coronal images were compared with those of selective radiculography to assess the diagnostic ability of MRI to determine the site of nerve root compression. Intermission and partial defect, which reflect nerve root compression, were seen in only 5 (19.2%) of 26 nerve roots on MRI, as compared with 20 (76.9%) on radiculography. Thus it was difficult to diagnose nerve root compression due to lumbar canal stenosis with MRI alone. Furthermore, the optimum angle of coronal views was determined in 13 cadavers. Para-sagittal views were found to be optimal for observing the entire course of the nerve root. Three-dimensional MRI was found to have the potential to diagnose nerve root compression in the intervertebral foramen and the distal part of the intervertebral foramen. (N.K.)

  2. Effect Of RPC Compositions On: Compressive Strength and Absorption

    Directory of Open Access Journals (Sweden)

    Ahmed Sultan Ali

    2016-03-01

    Full Text Available Concrete is a critical material for the construction of infrastructure facilities throughout the world. A new material known as Reactive Powder Concrete (RPC), sometimes called Ultra-High Performance Concrete (UHPC), is becoming available that differs significantly from traditional concretes. It is an ultra-high-strength, high-ductility composite material with advanced mechanical properties. It consists of a special concrete whose microstructure is optimized by precise gradation of all particles in the mix to yield maximum density. Different RPC mixes were used in the experimental investigation of the present study to examine the mechanical properties of RPC, including compressive strength, density and absorption. The main variables used in the production of the different RPC mixes of the present research are three, namely, the type of pozzolanic admixture (metakaolin, micro silica, and silica fume), the type of fibers (steel and polypropylene), and the volume fraction of fibers (1.0, 1.5, and 2.0%). The experimental results indicated that RPC mixes with silica fume gave the highest values of compressive strength and density and the lowest value of absorption in comparison with RPC using micro silica or metakaolin, with metakaolin ranking third in such comparisons. The RPC mixes used in the present investigation gave compressive strengths ranging between 164 and 195 MPa. It was also found that the use of steel fibers at a high volume fraction (2%) in an RPC mix increases the compressive strength by 8% and the density of the concrete by 2.5%, and reduces its absorption by 13%, unlike an RPC mix using polypropylene fibers at a lower volume fraction.

  3. Optimisation algorithms for ECG data compression.

    Science.gov (United States)

    Haugland, D; Heber, J G; Husøy, J H

    1997-07-01

    The use of exact optimisation algorithms for compressing digital electrocardiograms (ECGs) is demonstrated. As opposed to traditional time-domain methods, which use heuristics to select a small subset of representative signal samples, the problem of selecting the subset is formulated in rigorous mathematical terms. This approach makes it possible to derive algorithms guaranteeing the smallest possible reconstruction error when a bounded selection of signal samples is interpolated. The proposed model resembles well-known network models and is solved by a cubic dynamic programming algorithm. When applied to standard test problems, the algorithm produces a compressed representation for which the distortion is about one-half of that obtained by traditional time-domain compression techniques at reasonable compression ratios. This illustrates that, in terms of the accuracy of decoded signals, existing time-domain heuristics for ECG compression may be far from what is theoretically achievable. The paper is an attempt to bridge this gap.
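The flavour of such an exact formulation can be sketched as a dynamic program that chooses m samples (keeping the endpoints) so as to minimise the squared error of linear-interpolation reconstruction. This is our own simplified model for illustration, not the paper's network formulation or its cubic algorithm:

```python
def best_samples(x, m):
    """Pick m sample indices (always keeping both endpoints) minimising the
    total squared error when the signal is rebuilt by linear interpolation
    between the kept samples. Exact, unlike greedy time-domain heuristics."""
    n = len(x)

    def seg(i, j):
        # squared error of interpolating x[i+1..j-1] between x[i] and x[j]
        e = 0.0
        for t in range(i + 1, j):
            interp = x[i] + (x[j] - x[i]) * (t - i) / (j - i)
            e += (x[t] - interp) ** 2
        return e

    INF = float('inf')
    dp = [[INF] * (m + 1) for _ in range(n)]    # dp[j][k]: best error keeping
    prev = [[-1] * (m + 1) for _ in range(n)]   # k samples, last one at j
    dp[0][1] = 0.0
    for j in range(1, n):
        for k in range(2, m + 1):
            for i in range(j):
                if dp[i][k - 1] < INF:
                    c = dp[i][k - 1] + seg(i, j)
                    if c < dp[j][k]:
                        dp[j][k], prev[j][k] = c, i
    # backtrack the chosen indices from the last sample
    idx, j, k = [], n - 1, m
    while j >= 0 and k > 0:
        idx.append(j)
        j, k = prev[j][k], k - 1
    return idx[::-1], dp[n - 1][m]
```

For a signal that is piecewise linear, the optimum keeps exactly the breakpoints and achieves zero reconstruction error, which no fixed heuristic is guaranteed to do.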

  4. Mathematical transforms and image compression: A review

    Directory of Open Access Journals (Sweden)

    Satish K. Singh

    2010-07-01

    Full Text Available It is well known that images, often used in a variety of computer and other scientific and engineering applications, are difficult to store and transmit due to their sizes. One possible solution to overcome this problem is to use an efficient digital image compression technique where an image is viewed as a matrix and then the operations are performed on the matrix. All the contemporary digital image compression systems use various mathematical transforms for compression. The compression performance is closely related to the performance by these mathematical transforms in terms of energy compaction and spatial frequency isolation by exploiting inter-pixel redundancies present in the image data. Through this paper, a comprehensive literature survey has been carried out and the pros and cons of various transform-based image compression models have also been discussed.
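Energy compaction, the property the survey highlights, is easy to demonstrate with a naive DCT-II: for a smooth signal, almost all of the energy collects in the first few coefficients, so the remainder can be coarsely quantized or discarded (a toy sketch, not any particular codec):

```python
import math

def dct2(x):
    """Naive O(N^2) DCT-II (unnormalized). For smooth inputs the energy
    concentrates in the low-frequency coefficients: energy compaction."""
    N = len(x)
    return [
        sum(x[n] * math.cos(math.pi * k * (2 * n + 1) / (2 * N)) for n in range(N))
        for k in range(N)
    ]

c = dct2(list(range(8)))                 # a smooth ramp signal
energy = [v * v for v in c]
# fraction of the total energy carried by just the first two coefficients
compaction = sum(energy[:2]) / sum(energy)
```

For the ramp above, the first two of eight coefficients carry well over 95% of the energy, which is exactly the inter-pixel redundancy that transform coders exploit.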

  5. Sudden viscous dissipation in compressing plasma turbulence

    Science.gov (United States)

    Davidovits, Seth; Fisch, Nathaniel

    2015-11-01

    Compression of a turbulent plasma or fluid can cause amplification of the turbulent kinetic energy, if the compression is fast compared to the turnover and viscous dissipation times of the turbulent eddies. The consideration of compressing turbulent flows in inviscid fluids has been motivated by the suggestion that amplification of turbulent kinetic energy occurred on experiments at the Weizmann Institute of Science Z-Pinch. We demonstrate a sudden viscous dissipation mechanism whereby this amplified turbulent kinetic energy is rapidly converted into thermal energy, which further increases the temperature, feeding back to further enhance the dissipation. Application of this mechanism in compression experiments may be advantageous, if the plasma can be kept comparatively cold during much of the compression, reducing radiation and conduction losses, until the plasma suddenly becomes hot. This work was supported by DOE through contract 67350-9960 (Prime # DOE DE-NA0001836) and by the DTRA.

  6. Stress analysis of shear/compression test

    International Nuclear Information System (INIS)

    Nishijima, S.; Okada, T.; Ueno, S.

    1997-01-01

    Stress analysis has been performed on glass-fiber-reinforced plastics (GFRP) subjected to combined shear and compression stresses by means of the finite element method. Two types of experimental set-up were analyzed, the parallel and the series method, in which the specimen is compressed by tilted jigs that make it possible to apply the combined stresses to the specimen. A modified Tsai-Hill criterion was employed to judge failure under the combined stresses, that is, the shear strength under compressive stress. Different failure envelopes were obtained for the two set-ups. In the parallel system the shear strength first increased with compressive stress and then decreased. In the series system, on the contrary, the shear strength decreased monotonically with compressive stress. The difference is caused by the different stress distributions due to the different constraint conditions. The basic parameters that control failure under the combined stresses will be discussed.

  7. Interactive computer graphics applications for compressible aerodynamics

    Science.gov (United States)

    Benson, Thomas J.

    1994-01-01

    Three computer applications have been developed to solve inviscid compressible fluids problems using interactive computer graphics. The first application is a compressible flow calculator which solves for isentropic flow, normal shocks, and oblique shocks or centered expansions produced by two dimensional ramps. The second application couples the solutions generated by the first application to a more graphical presentation of the results to produce a desk top simulator of three compressible flow problems: 1) flow past a single compression ramp; 2) flow past two ramps in series; and 3) flow past two opposed ramps. The third application extends the results of the second to produce a design tool which solves for the flow through supersonic external or mixed compression inlets. The applications were originally developed to run on SGI or IBM workstations running GL graphics. They are currently being extended to solve additional types of flow problems and modified to operate on any X-based workstation.

  8. Pareto-optimal alloys

    DEFF Research Database (Denmark)

    Bligaard, Thomas; Johannesson, Gisli Holmar; Ruban, Andrei

    2003-01-01

    Large databases that can be used in the search for new materials with specific properties remain an elusive goal in materials science. The problem is complicated by the fact that the optimal material for a given application is usually a compromise between a number of materials properties and the cost. In this letter we present a database consisting of the lattice parameters, bulk moduli, and heats of formation for over 64 000 ordered metallic alloys, which has been established by direct first-principles density-functional-theory calculations. Furthermore, we use a concept from economic theory, the Pareto-optimal set, to determine optimal alloy solutions for the compromise between low compressibility, high stability, and cost.
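The Pareto-optimal set mentioned above is straightforward to compute for a finite candidate list. The sketch below treats every objective as something to minimise (e.g. compressibility and cost, with stability entered negated); the tuples are illustrative examples, not entries from the database:

```python
def dominates(q, p):
    """q dominates p if q is no worse in every objective and strictly
    better in at least one (all objectives to be minimised)."""
    return all(a <= b for a, b in zip(q, p)) and any(a < b for a, b in zip(q, p))

def pareto_front(points):
    """Keep exactly the points that no other point dominates: the set of
    non-trivial compromises between the objectives."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

A point like (3, 3) is excluded if some (2, 2) exists that is better on both axes, while the extremes (1, 5) and (5, 1) survive as legitimate trade-offs.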

  9. Low power design of wireless endoscopy compression/communication architecture

    Directory of Open Access Journals (Sweden)

    Zitouni Abdelkrim

    2018-05-01

    Full Text Available A wireless endoscopy capsule is an effective device for the examination of digestive diseases. Several performance criteria (silicon area, dissipated power, image quality, computational time, etc.) need to be studied in depth. In this paper, our interest is the optimization of these criteria. The proposed methodology is based on exploiting the advantages of the DCT and DWT transforms by combining them into a single architecture. For arithmetic operations, the MCLA technique is used. The architecture also integrates a CABAC entropy coder that supports all binarization schemes. An AMBA/I2C architecture is developed to ensure optimized communication. Comparisons of the proposed architecture with the most popular methods described in related work show efficient results in terms of dissipated power, hardware cost, and computation speed. Keywords: Wireless endoscopy capsule, DCT/DWT image compression, CABAC entropy coder, AMBA/I2C multi-bus architecture

  10. The impact of chest compression rates on quality of chest compressions - a manikin study.

    Science.gov (United States)

    Field, Richard A; Soar, Jasmeet; Davies, Robin P; Akhtar, Naheed; Perkins, Gavin D

    2012-03-01

    Chest compressions are often performed at a variable rate during cardiopulmonary resuscitation (CPR). The effect of compression rate on other chest compression quality variables (compression depth, duty-cycle, leaning, performance decay over time) is unknown. This randomised controlled cross-over manikin study examined the effect of different compression rates on the other chest compression quality variables. Twenty healthcare professionals performed 2 min of continuous compressions on an instrumented manikin at rates of 80, 100, 120, 140 and 160 min(-1) in a random order. An electronic metronome was used to guide compression rate. Compression data were analysed by repeated measures ANOVA and are presented as mean (SD). Non-parametric data was analysed by Friedman test. At faster compression rates there were significant improvements in the number of compressions delivered (160(2) at 80 min(-1) vs. 312(13) compressions at 160 min(-1), P<0.001); and compression duty-cycle (43(6)% at 80 min(-1) vs. 50(7)% at 160 min(-1), P<0.001). This was at the cost of a significant reduction in compression depth (39.5(10)mm at 80 min(-1) vs. 34.5(11)mm at 160 min(-1), P<0.001); and earlier decay in compression quality (median decay point 120 s at 80 min(-1) vs. 40s at 160 min(-1), P<0.001). Additionally not all participants achieved the target rate (100% at 80 min(-1) vs. 70% at 160 min(-1)). Rates above 120 min(-1) had the greatest impact on reducing chest compression quality. For Guidelines 2005 trained rescuers, a chest compression rate of 100-120 min(-1) for 2 min is feasible whilst maintaining adequate chest compression quality in terms of depth, duty-cycle, leaning, and decay in compression performance. Further studies are needed to assess the impact of the Guidelines 2010 recommendation for deeper and faster chest compressions. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  11. An investigation on compression strength analysis of commercial aluminium tube to aluminium 2025 tube plate by using TIG welding process

    Energy Technology Data Exchange (ETDEWEB)

    Kannan, S., E-mail: kannan.dgl201127@gmail.com [Department of Mechanical Engineering and Mining Machinery Engineering, Indian Institute of Technology (ISM), Dhanbad, Jharkhand, India, 826004 (India); Senthil Kumaran, S., E-mail: sskumaran@ymail.com [Research and Development Center, Department of Mechanical Engineering, RVS Educational Trust' s Group of Institutions, RVS School of Engineering and Technology, Dindigul, Tamilnadu, India, 624005 (India); Kumaraswamidhas, L.A., E-mail: lakdhas1978@gmail.com [Department of Mechanical Engineering and Mining Machinery Engineering, Indian School of Mines University, Dhanbad, Jharkhand, India, 826004 (India)

    2016-05-05

    In the present study, tungsten inert gas (TIG) welding was applied to weld dissimilar materials and to assess the mechanical and metallurgical properties of a tube-to-tube-plate joint made of commercial aluminium and Al 2025, respectively, using a zirconiated tungsten electrode with aluminium ER 2219 filler material. In total, twenty-five pieces were subjected to compression strength and hardness testing to evaluate the optimal joint strength. Three optimization techniques were used in this experiment. A Taguchi L{sub 25} orthogonal array was used to identify the process parameters that most influence the joint strength. The ANOVA method was applied to both compression strength and hardness to calculate the percentage contribution of each process parameter. A genetic algorithm was used to validate the results obtained from both the experimental and the optimization values. The microstructural study depicted the characterization of the welding joints between the tube and the tube plate. A radiograph test was conducted to verify that the welds were free of defects, and no flaws were found during the welding process. The compression strength and hardness measured at the optimal joint strength of the welded sample were about 174.846 MPa and 131.364 Hv, respectively. - Highlights: • Commercial Al tube and Al 2025 tube plate successfully welded by TIG welding. • Compression strength and hardness values confirm the optimal joint strength. • The maximum compression and hardness were achieved at various input parameters.

  12. An investigation on compression strength analysis of commercial aluminium tube to aluminium 2025 tube plate by using TIG welding process

    International Nuclear Information System (INIS)

    Kannan, S.; Senthil Kumaran, S.; Kumaraswamidhas, L.A.

    2016-01-01

    In the present study, tungsten inert gas (TIG) welding was applied to weld dissimilar materials and to assess the mechanical and metallurgical properties of a tube-to-tube-plate joint made of commercial aluminium and Al 2025, respectively, using a zirconiated tungsten electrode with aluminium ER 2219 filler material. In total, twenty-five pieces were subjected to compression strength and hardness testing to evaluate the optimal joint strength. Three optimization techniques were used in this experiment. A Taguchi L_25 orthogonal array was used to identify the process parameters that most influence the joint strength. The ANOVA method was applied to both compression strength and hardness to calculate the percentage contribution of each process parameter. A genetic algorithm was used to validate the results obtained from both the experimental and the optimization values. The microstructural study depicted the characterization of the welding joints between the tube and the tube plate. A radiograph test was conducted to verify that the welds were free of defects, and no flaws were found during the welding process. The compression strength and hardness measured at the optimal joint strength of the welded sample were about 174.846 MPa and 131.364 Hv, respectively. - Highlights: • Commercial Al tube and Al 2025 tube plate successfully welded by TIG welding. • Compression strength and hardness values confirm the optimal joint strength. • The maximum compression and hardness were achieved at various input parameters.

  13. Optimization and industry new frontiers

    CERN Document Server

    Korotkikh, Victor

    2003-01-01

    Optimization from Human Genes to Cutting Edge Technologies. The challenges faced by industry today are so complex that they can only be solved through the help and participation of optimization experts. For example, many industries in e-commerce, finance, medicine, and engineering face several computational challenges due to the massive data sets that arise in their applications. Some of the challenges include extended memory algorithms and data structures, new programming environments, software systems, cryptographic protocols, storage devices, data compression, mathematical and statistical methods for knowledge mining, and information visualization. With advances in computer and information systems technologies, and many interdisciplinary efforts, many of the "data avalanche challenges" are beginning to be addressed. Optimization is the most crucial component in these efforts. Nowadays, the main task of optimization is to investigate the cutting edge frontiers of these technologies and systems ...

  14. Isostatic compression of buffer blocks. Middle scale

    International Nuclear Information System (INIS)

    Ritola, J.; Pyy, E.

    2012-01-01

    Manufacturing of buffer components using isostatic compression method has been studied in small scale in 2008 (Laaksonen 2010). These tests included manufacturing of buffer blocks using different bentonite materials and different compression pressures. Isostatic mould technology was also tested, along with different methods to fill the mould, such as vibration and partial vacuum, as well as a stepwise compression of the blocks. The development of manufacturing techniques has continued with small-scale (30 %) blocks (diameter 600 mm) in 2009. This was done in a separate project: Isostatic compression, manufacturing and testing of small scale (D = 600 mm) buffer blocks. The research on the isostatic compression method continued in 2010 in a project aimed to test and examine the isostatic manufacturing process of buffer blocks at 70 % scale (block diameter 1200 to 1300 mm), and the aim was to continue in 2011 with full-scale blocks (diameter 1700 mm). A total of nine bentonite blocks were manufactured at 70 % scale, of which four were ring-shaped and the rest were cylindrical. It is currently not possible to manufacture full-scale blocks, because there is no sufficiently large isostatic press available. However, such a compression unit is expected to be possible to use in the near future. The test results of bentonite blocks, produced with an isostatic pressing method at different presses and at different sizes, suggest that the technical characteristics, for example bulk density and strength values, are somewhat independent of the size of the block, and that the blocks have fairly homogenous characteristics. Water content and compression pressure are the two most important properties determining the characteristics of the compressed blocks. By adjusting these two properties it is fairly easy to produce blocks at a desired density. The commonly used compression pressure in the manufacturing of bentonite blocks is 100 MPa, which compresses bentonite to approximately

  15. Isostatic compression of buffer blocks. Middle scale

    Energy Technology Data Exchange (ETDEWEB)

    Ritola, J.; Pyy, E. [VTT Technical Research Centre of Finland, Espoo (Finland)

    2012-01-15

    Manufacturing of buffer components using isostatic compression method has been studied in small scale in 2008 (Laaksonen 2010). These tests included manufacturing of buffer blocks using different bentonite materials and different compression pressures. Isostatic mould technology was also tested, along with different methods to fill the mould, such as vibration and partial vacuum, as well as a stepwise compression of the blocks. The development of manufacturing techniques has continued with small-scale (30 %) blocks (diameter 600 mm) in 2009. This was done in a separate project: Isostatic compression, manufacturing and testing of small scale (D = 600 mm) buffer blocks. The research on the isostatic compression method continued in 2010 in a project aimed to test and examine the isostatic manufacturing process of buffer blocks at 70 % scale (block diameter 1200 to 1300 mm), and the aim was to continue in 2011 with full-scale blocks (diameter 1700 mm). A total of nine bentonite blocks were manufactured at 70 % scale, of which four were ring-shaped and the rest were cylindrical. It is currently not possible to manufacture full-scale blocks, because there is no sufficiently large isostatic press available. However, such a compression unit is expected to be possible to use in the near future. The test results of bentonite blocks, produced with an isostatic pressing method at different presses and at different sizes, suggest that the technical characteristics, for example bulk density and strength values, are somewhat independent of the size of the block, and that the blocks have fairly homogenous characteristics. Water content and compression pressure are the two most important properties determining the characteristics of the compressed blocks. By adjusting these two properties it is fairly easy to produce blocks at a desired density. The commonly used compression pressure in the manufacturing of bentonite blocks is 100 MPa, which compresses bentonite to approximately

  16. Fast lossless compression via cascading Bloom filters.

    Science.gov (United States)

    Rozov, Roye; Shamir, Ron; Halperin, Eran

    2014-01-01

    Data from large Next Generation Sequencing (NGS) experiments present challenges both in terms of costs associated with storage and in time required for file transfer. It is sometimes possible to store only a summary relevant to particular applications, but generally it is desirable to keep all information needed to revisit experimental results in the future. Thus, the need for efficient lossless compression methods for NGS reads arises. It has been shown that NGS-specific compression schemes can improve results over generic compression methods, such as the Lempel-Ziv algorithm, Burrows-Wheeler transform, or Arithmetic Coding. When a reference genome is available, effective compression can be achieved by first aligning the reads to the reference genome, and then encoding each read using the alignment position combined with the differences in the read relative to the reference. These reference-based methods have been shown to compress better than reference-free schemes, but the alignment step they require demands several hours of CPU time on a typical dataset, whereas reference-free methods can usually compress in minutes. We present a new approach that achieves highly efficient compression by using a reference genome, but completely circumvents the need for alignment, affording a great reduction in the time needed to compress. In contrast to reference-based methods that first align reads to the genome, we hash all reads into Bloom filters to encode, and decode by querying the same Bloom filters using read-length subsequences of the reference genome. Further compression is achieved by using a cascade of such filters. Our method, called BARCODE, runs an order of magnitude faster than reference-based methods, while compressing an order of magnitude better than reference-free methods, over a broad range of sequencing coverage. In high coverage (50-100 fold), compared to the best tested compressors, BARCODE saves 80-90% of the running time while only increasing space
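    The alignment-free encode/decode idea described above can be sketched in a few lines of Python: store reads in a Bloom filter, then "decode" by querying the filter with every read-length window of the reference. This is a toy illustration of the principle only, not the BARCODE implementation; the `BloomFilter` class, its parameters, and the toy sequences are our own.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash functions over a fixed-size bit array."""
    def __init__(self, size=1 << 16, k=4):
        self.size, self.k = size, k
        self.bits = bytearray(size // 8)

    def _positions(self, item):
        # Derive k deterministic bit positions from salted SHA-256 digests.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.size

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

# Toy reference genome and reads (read length 5)
reference = "ACGTACGGTTACGT"
reads = ["ACGTA", "GGTTA", "TACGT"]

bf = BloomFilter()
for r in reads:
    bf.add(r)          # "encoding": insert every read into the filter

# "Decoding": slide a read-length window over the reference and query the filter
L = 5
recovered = {reference[i:i + L] for i in range(len(reference) - L + 1)
             if reference[i:i + L] in bf}
print(sorted(recovered))
```

A Bloom filter has no false negatives, so every stored read that occurs in the reference is recovered; the cascade of additional filters in the paper exists to weed out the occasional false positive.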

  17. Improvement of a thermoelectric and vapour compression hybrid refrigerator

    International Nuclear Information System (INIS)

    Astrain, D.; Martínez, A.; Rodríguez, A.

    2012-01-01

    This paper presents the improvement in the performance of a domestic hybrid refrigerator that combines vapour compression technology for the cooler and freezer compartments, and thermoelectric technology for a new compartment. The heat emitted by the Peltier modules is discharged into the freezer compartment, forming a cascade refrigeration system. This configuration leads to a significant improvement in the coefficient of performance. Thus, the electric power consumption of the modules and the refrigerator decreases by 95% and 20% respectively, with respect to those attained with a cascade refrigeration system connected with the cooler compartment. The optimization process is based on a computational model that simulates the behaviour of the whole refrigerator. Two prototypes have been built and tested. Experimental results indicate that the temperature of the new compartment is easily set at any value between 0 and −4 °C, the oscillation of this temperature is always lower than 0.4 °C, and the electric power consumption is low enough to place this hybrid refrigerator in energy efficiency class A, according to European rules and regulations. - Highlights: ► Optimization of a vapour compression and thermoelectric hybrid refrigerator. ► Two prototypes built and tested. Computational model for the whole refrigerator. ► Electric power consumption of the modules and the refrigerator 95% and 20% lower. ► New compartment refrigerated with thermoelectric technology. ► Inner temperature adjustable from 0 to −4 °C. Oscillations lower than ±0.2 °C.

  18. The Compressed Baryonic Matter Experiment at FAIR

    Directory of Open Access Journals (Sweden)

    Heuser J.M.

    2011-04-01

    The Compressed Baryonic Matter (CBM) experiment is being planned at the international research centre FAIR, now being realized next to the GSI laboratory in Darmstadt, Germany. Its physics programme addresses the QCD phase diagram in the region of highest net baryon densities. Of particular interest are the expected first-order phase transition from partonic to hadronic matter, ending in a critical point, and modifications of hadron properties in the dense medium as a signal of chiral symmetry restoration. Laid out as a fixed-target experiment at the synchrotrons SIS-100/SIS-300, which provide magnetic bending power of 100 and 300 Tm, the CBM detector will record both proton-nucleus and nucleus-nucleus collisions at beam energies up to 45A GeV. Hadronic, leptonic and photonic observables have to be measured with large acceptance. The nuclear interaction rates will reach up to 10 MHz to measure extremely rare probes like charm near threshold. Two versions of the experiment are being studied, optimized for either electron-hadron or muon identification, combined with silicon detector based charged-particle tracking and micro-vertex detection. The research programme will start at SIS-100 with ion beams between 2 and 11A GeV, and protons up to energies of 29 GeV, using the HADES detector and an initial configuration of the CBM experiment. The CBM physics requires the development of novel detector systems, trigger and data acquisition concepts as well as innovative real-time reconstruction techniques. Progress with feasibility studies of the experiment and the development of its detector systems is discussed.

  19. Compression of a mixed antiproton and electron non-neutral plasma to high densities

    Science.gov (United States)

    Aghion, Stefano; Amsler, Claude; Bonomi, Germano; Brusa, Roberto S.; Caccia, Massimo; Caravita, Ruggero; Castelli, Fabrizio; Cerchiari, Giovanni; Comparat, Daniel; Consolati, Giovanni; Demetrio, Andrea; Di Noto, Lea; Doser, Michael; Evans, Craig; Fanì, Mattia; Ferragut, Rafael; Fesel, Julian; Fontana, Andrea; Gerber, Sebastian; Giammarchi, Marco; Gligorova, Angela; Guatieri, Francesco; Haider, Stefan; Hinterberger, Alexander; Holmestad, Helga; Kellerbauer, Alban; Khalidova, Olga; Krasnický, Daniel; Lagomarsino, Vittorio; Lansonneur, Pierre; Lebrun, Patrice; Malbrunot, Chloé; Mariazzi, Sebastiano; Marton, Johann; Matveev, Victor; Mazzotta, Zeudi; Müller, Simon R.; Nebbia, Giancarlo; Nedelec, Patrick; Oberthaler, Markus; Pacifico, Nicola; Pagano, Davide; Penasa, Luca; Petracek, Vojtech; Prelz, Francesco; Prevedelli, Marco; Rienaecker, Benjamin; Robert, Jacques; Røhne, Ole M.; Rotondi, Alberto; Sandaker, Heidi; Santoro, Romualdo; Smestad, Lillian; Sorrentino, Fiodor; Testera, Gemma; Tietje, Ingmari C.; Widmann, Eberhard; Yzombard, Pauline; Zimmer, Christian; Zmeskal, Johann; Zurlo, Nicola; Antonello, Massimiliano

    2018-04-01

    We describe a multi-step "rotating wall" compression of a mixed cold antiproton-electron non-neutral plasma in a 4.46 T Penning-Malmberg trap developed in the context of the AEḡIS experiment at CERN. Such traps are routinely used for the preparation of cold antiprotons suitable for antihydrogen production. A tenfold antiproton radius compression has been achieved, with a minimum antiproton radius of only 0.17 mm. We describe the experimental conditions necessary to perform such a compression: minimizing the tails of the electron density distribution is paramount to ensure that the antiproton density distribution follows that of the electrons. Such electron density tails are remnants of rotating wall compression and in many cases can remain unnoticed. We observe that the compression dynamics for a pure electron plasma behaves the same way as that of a mixed antiproton and electron plasma. Thanks to this optimized compression method and the high single shot antiproton catching efficiency, we observe for the first time cold and dense non-neutral antiproton plasmas with particle densities n ≥ 10¹³ m⁻³, which pave the way for an efficient pulsed antihydrogen production in AEḡIS.

  20. Influence of bottom ash of palm oil on compressive strength of concrete

    Science.gov (United States)

    Saputra, Andika Ade Indra; Basyaruddin, Laksono, Muhamad Hasby; Muntaha, Mohamad

    2017-11-01

    The technological development of concrete demands innovation regarding alternative materials, such as bottom ash of palm oil, as part of the effort to improve quality and minimize reliance on currently used raw materials. Bottom ash, a domestic waste stemming from palm oil cultivation in East Kalimantan, contains silica. Similar to cement in texture and size, bottom ash can be mixed into concrete, where its silica could help increase the compressive strength. This research was conducted by comparing normal concrete with concrete in which bottom ash partially replaced cement. The bottom ash used in this research had to pass a #200 sieve. The compositions tested involved the following cement-to-bottom-ash ratios: 100%:0%, 90%:10%, 85%:15% and 80%:20%. Designed for the same target compressive strength (fc' 25 MPa), the compressive strength of the concrete was tested at the ages of 7, 14, and 28 days. The results show that the addition of bottom ash influenced the workability of the concrete but did not significantly influence its compressive strength. Based on the compressive strength tests, the optimal compressive strength was obtained from the mixture of 100% cement and 0% bottom ash.

  1. Micro-Mechanical Analysis About Kink Band in Carbon Fiber/Epoxy Composites Under Longitudinal Compression

    Science.gov (United States)

    Zhang, Mi; Guan, Zhidong; Wang, Xiaodong; Du, Shanyi

    2017-10-01

    Kink band formation is a typical phenomenon for composites under longitudinal compression. In this paper, theoretical analysis and finite element simulation were conducted to analyze the kink angle as well as the compressive strength of composites. The kink angle was considered to be an important characteristic throughout the longitudinal compression process. Three factors, including the plastic matrix, initial fiber misalignment, and rotation due to loading, were considered in the theoretical analysis. Besides, the relationship between kink angle and fiber volume fraction was improved and optimized by theoretical derivation. In addition, finite element models considering stochastic fiber strength and a Drucker-Prager constitutive model for the matrix were built in ABAQUS to analyze the kink band formation process, which corresponded with the experimental results. Through simulation, the loading and failure procedure can be divided into three stages: an elastic stage, a softening stage, and a fiber break stage. The simulation also shows that the kink band is a result of fiber misalignment and matrix plasticity. Different values of initial fiber misalignment angle, wavelength and fiber volume fraction were considered to explore their effects on compressive strength and kink angle. Results show that compressive strength increases with decreasing initial fiber misalignment angle, decreasing initial fiber misalignment wavelength, and increasing fiber volume fraction, while the kink angle decreases in these situations. An orthogonal array in statistics was also built to distinguish the degree of effect of these factors. It indicates that the initial fiber misalignment angle has the largest impact on compressive strength and kink angle.

  2. The Formation and Evolution of Shear Bands in Plane Strain Compressed Nickel-Base Superalloy

    Directory of Open Access Journals (Sweden)

    Bin Tang

    2018-02-01

    The formation and evolution of shear bands in Inconel 718 nickel-base superalloy under plane strain compression was investigated in the present work. It is found that the propagation of shear bands under plane strain compression is more intense in comparison with conventional uniaxial compression. The morphology of the shear bands was identified to generally fall into two categories: an “S” shape at severe conditions (low temperatures and high strain rates) and an “X” shape at mild conditions (high temperatures and low strain rates). However, uniform deformation at the mesoscale without shear bands was also obtained by compressing at 1050 °C/0.001 s−1. By using the finite element method (FEM), the formation mechanism of the shear bands in the present study was explored for the special deformation mode of plane strain compression. Furthermore, the effect of processing parameters, i.e., strain rate and temperature, on the morphology and evolution of shear bands was discussed following a phenomenological approach. The plane strain compression attempt in the present work yields important information for processing parameter optimization and failure prediction under plane strain loading conditions of the Inconel 718 superalloy.

  3. KungFQ: a simple and powerful approach to compress fastq files.

    Science.gov (United States)

    Grassi, Elena; Di Gregorio, Federico; Molineris, Ivan

    2012-01-01

    Nowadays storing data derived from deep sequencing experiments has become pivotal, and standard compression algorithms do not exploit their structure in a satisfying manner. A number of reference-based compression algorithms have been developed, but they are less adequate when approaching new species without fully sequenced genomes or nongenomic data. We developed a tool that takes advantage of fastq characteristics and encodes them in a binary format optimized to be further compressed with standard tools (such as gzip or lzma). The algorithm is straightforward and does not need any external reference file; it scans the fastq only once and has a constant memory requirement. Moreover, we added the possibility to perform lossy compression, losing some of the original information (IDs and/or qualities) but resulting in smaller files; it is also possible to define a quality cutoff under which corresponding base calls are converted to N. We achieve 2.82 to 7.77 compression ratios on various fastq files without losing information and 5.37 to 8.77 losing IDs, which are often not used in common analysis pipelines. In this paper, we compare the algorithm's performance with known tools, usually obtaining higher compression levels.
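    The quality-cutoff transform described above, where base calls whose quality falls below a threshold are converted to N, can be sketched in Python. This is a minimal illustration assuming Phred+33 quality encoding; the function name and defaults are ours, not KungFQ's actual interface.

```python
def mask_low_quality(seq, qual, cutoff=20, offset=33):
    """Replace bases whose Phred quality is below `cutoff` with 'N'.

    `qual` is the fastq quality string; Phred score = ord(char) - offset.
    """
    return "".join(
        "N" if ord(q) - offset < cutoff else b
        for b, q in zip(seq, qual)
    )

seq = "ACGTACGT"
qual = "IIII!!II"   # 'I' = Q40 (high quality), '!' = Q0 (lowest)
print(mask_low_quality(seq, qual))   # -> ACGTNNGT
```

Masking the unreliable calls this way makes the sequence stream more repetitive, which in turn helps the downstream generic compressor (gzip or lzma) achieve a better ratio.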

  4. A real-time ECG data compression and transmission algorithm for an e-health device.

    Science.gov (United States)

    Lee, SangJoon; Kim, Jungkuk; Lee, Myoungho

    2011-09-01

    This paper introduces a real-time data compression and transmission algorithm between e-health terminals for a periodic ECG signal. The proposed algorithm consists of five compression procedures and four reconstruction procedures. In order to evaluate the performance of the proposed algorithm, the algorithm was applied to all 48 recordings of the MIT-BIH arrhythmia database, and the compression ratio (CR), percent root mean square difference (PRD), percent root mean square difference normalized (PRDN), rms, SNR, and quality score (QS) values were obtained. The result showed that the CR was 27.9:1 and the PRD was 2.93 on average for all 48 data instances with a 15% window size. In addition, the performance of the algorithm was compared to those of similar algorithms introduced recently by others. It was found that the proposed algorithm showed clearly superior performance in all 48 data instances at a compression ratio lower than 15:1, whereas it showed similar or slightly inferior PRD performance for a data compression ratio higher than 20:1. In light of the fact that the similarity with the original data becomes meaningless when the PRD is higher than 2, the proposed algorithm shows significantly better performance compared to the performance levels of other algorithms. Moreover, because the algorithm can compress and transmit data in real time, it can serve as an optimal biosignal data transmission method for limited bandwidth communication between e-health devices.
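    The CR and PRD figures quoted above follow from the standard definitions: CR is the ratio of original to compressed size, and PRD is the root-mean-square reconstruction error normalized by the signal energy, expressed in percent. A minimal sketch (variable names and the toy signal are ours, not from the paper):

```python
import math

def compression_ratio(original_size, compressed_size):
    """CR = original size / compressed size (e.g. 27.9 means 27.9:1)."""
    return original_size / compressed_size

def prd(original, reconstructed):
    """Percent root-mean-square difference between original and reconstruction."""
    num = sum((x - y) ** 2 for x, y in zip(original, reconstructed))
    den = sum(x ** 2 for x in original)
    return 100.0 * math.sqrt(num / den)

x = [1.0, 2.0, 3.0, 4.0]       # toy "original" samples
xr = [1.1, 1.9, 3.0, 4.1]      # toy "reconstructed" samples
print(round(prd(x, xr), 2))                 # -> 3.16
print(compression_ratio(27.9, 1.0))         # -> 27.9, a 27.9:1 ratio as reported
```

The paper's observation that similarity becomes meaningless above PRD = 2 is why a scheme with CR 27.9:1 at PRD 2.93 is compared against others at matched compression ratios rather than on PRD alone.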

  5. The influence of double nested layer waviness on compression strength of carbon fiber composite materials

    International Nuclear Information System (INIS)

    Khan, Z.M.

    1997-01-01

    As advanced composite materials having superior physical and mechanical properties are developed, optimization of their production processes is eagerly being sought. One of the most common defects in the production of structural composites is layer waviness. Layer waviness is more pronounced in thick-section flat and cylindrical laminates that are extensively used in missile casings, submersibles, and space platforms. Layer waviness undulates entire layers of a multidirectional laminate in the through-the-thickness direction, leading to gross deterioration of its compression strength. This research investigates the influence of multiple layer waviness in a double nest formation on the compression strength of a composite laminate. Different wave fractions of wavy 0° layers were fabricated in an IM7/8551-7 carbon-epoxy composite laminate on a steel mold using a single-step fabrication procedure. The laminate was cured in a heated press according to a specific curing cycle. Static compression testing was performed using a NASA short block compression fixture on an MTS servo-hydraulic machine. The purpose of these tests was to determine the effects of multiple layer wave regions on the compression strength of the laminate. The experimental and analytical results revealed a reduction in compression strength with increasing fraction of wavy 0° layers up to about 35%, which became more pronounced when the fraction exceeded 35%. This analysis indicated that the percentage of wavy 0° layers may be used to estimate the reduction in compression strength of a composite laminate under restricted conditions. (author)

  6. Shock compression profiles in ceramics

    Energy Technology Data Exchange (ETDEWEB)

    Grady, D.E.; Moody, R.L.

    1996-03-01

    An investigation of the shock compression properties of high-strength ceramics has been performed using controlled planar impact techniques. In a typical experimental configuration, a ceramic target disc is held stationary, and it is struck by plates of either a similar ceramic or by plates of a well-characterized metal. All tests were performed using either a single-stage propellant gun or a two-stage light-gas gun. Particle velocity histories were measured with laser velocity interferometry (VISAR) at the interface between the back of the target ceramic and a calibrated VISAR window material. Peak impact stresses achieved in these experiments range from about 3 to 70 GPa. Ceramics tested under shock impact loading include: Al{sub 2}O{sub 3}, AlN, B{sub 4}C, SiC, Si{sub 3}N{sub 4}, TiB{sub 2}, WC and ZrO{sub 2}. This report compiles the VISAR wave profiles and experimental impact parameters into a database useful for response model development, computational model validation studies, and independent assessment of the physics of dynamic deformation of high-strength, brittle solids.

  7. Rapid reconnection in compressible plasma

    International Nuclear Information System (INIS)

    Heyn, M.F.; Semenov, V.S.

    1996-01-01

    A study of the set-up, propagation, and interaction of non-linear and linear magnetohydrodynamic waves driven by magnetic reconnection is presented. The source term of the waves generated by magnetic reconnection is obtained explicitly in terms of the initial background conditions and the local reconnection electric field. The non-linear solution of the problem, found earlier, serves as a basis for the formulation and extensive investigation of the corresponding linear initial-boundary value problem of compressible magnetohydrodynamics. In plane geometry, the Green's function of the problem is obtained and its properties are discussed. For the numerical evaluation it turns out that a specific choice of the integration contour in the complex plane of phase velocities is much more effective than the convolution with the real Green's function. Many complex effects like intrinsic wave coupling, anisotropic propagation characteristics, and generation of surface and side wave modes in a finite beta plasma are retained in this analysis. copyright 1996 American Institute of Physics

  8. The Compressed Baryonic Matter experiment

    Directory of Open Access Journals (Sweden)

    Seddiki Sélim

    2014-04-01

    The Compressed Baryonic Matter (CBM) experiment is a next-generation fixed-target detector which will operate at the future Facility for Antiproton and Ion Research (FAIR) in Darmstadt. The goal of this experiment is to explore the QCD phase diagram in the region of high net baryon densities using high-energy nucleus-nucleus collisions. Its research program includes the study of the equation-of-state of nuclear matter at high baryon densities, the search for the deconfinement and chiral phase transitions, and the search for the QCD critical point. The CBM detector is designed to measure both bulk observables with a large acceptance and rare diagnostic probes such as charm particles, multi-strange hyperons, and low mass vector mesons in their di-leptonic decay. The physics program of CBM will be summarized, followed by an overview of the detector concept, a selection of the expected physics performance, and the status of preparation of the experiment.

  9. Compression of seismic data: filter banks and extended transforms, synthesis and adaptation; Compression de donnees sismiques: bancs de filtres et transformees etendues, synthese et adaptation

    Energy Technology Data Exchange (ETDEWEB)

    Duval, L.

    2000-11-01

    Wavelet and wavelet packet transforms are the most commonly used algorithms for seismic data compression. Wavelet coefficients are generally quantized and encoded by classical entropy coding techniques. We first propose in this work a compression algorithm based on the wavelet transform. The wavelet transform is used together with a zero-tree type coding, applied here to seismic data for the first time. Classical wavelet transforms nevertheless yield a rather rigid approach, since it is often desirable to adapt the transform stage to the properties of each type of signal. We thus propose a second algorithm using, instead of wavelets, a set of so-called 'extended transforms'. These transforms, originating from filter bank theory, are parameterized. Classical examples are Malvar's Lapped Orthogonal Transforms (LOT) or the Generalized Lapped Orthogonal Transforms (GenLOT) of de Queiroz et al. We propose several optimization criteria to build 'extended transforms' adapted to the properties of seismic signals. We further show that these transforms can be used with the same zero-tree type coding technique as used with wavelets. Both proposed algorithms provide exact choice of the compression rate, block-wise compression (in the case of extended transforms), and partial decompression for quality control or visualization. Performances are tested on a set of actual seismic data. They are evaluated for several quality measures. We also compare them to other seismic compression algorithms. (author)

  10. Composite Techniques Based Color Image Compression

    Directory of Open Access Journals (Sweden)

    Zainab Ibrahim Abood

    2017-03-01

    Full Text Available Compression of color images is now necessary for transmission and storage in databases, since color gives a pleasing and natural appearance to any object. Three composite-technique-based color image compression schemes are therefore implemented to achieve images with high compression, no loss of the original image, better performance and good image quality. These techniques are the composite stationary wavelet technique (S), the composite wavelet technique (W) and the composite multi-wavelet technique (M). For the high energy sub-band of the 3rd level of each composite transform in each composite technique, the compression parameters are calculated. The best composite transform among the 27 types is the three-level multi-wavelet transform (MMM) in the M technique, which has the highest values of energy (En) and compression ratio (CR) and the lowest values of bits per pixel (bpp), time (T) and rate distortion R(D). Also, the values of the compression parameters of the color image are nearly the same as the average values of the compression parameters of the three bands of the same image.
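
Two of the figures of merit named above, compression ratio (CR) and bits per pixel (bpp), follow directly from the sizes involved. A short sketch with hypothetical image dimensions and byte counts:

```python
def compression_metrics(original_bytes, compressed_bytes, width, height):
    """CR = original size / compressed size; bpp = compressed bits per pixel."""
    cr = original_bytes / compressed_bytes
    bpp = compressed_bytes * 8 / (width * height)
    return cr, bpp

# A hypothetical 256x256 8-bit grayscale image (65536 bytes) compressed to 8192 bytes:
cr, bpp = compression_metrics(65536, 8192, 256, 256)
# cr = 8.0, bpp = 1.0
```

Higher CR and lower bpp are better, which matches the ranking criterion used in the abstract.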

  11. Tokamak plasma variations under rapid compression

    International Nuclear Information System (INIS)

    Holmes, J.A.; Peng, Y.K.M.; Lynch, S.J.

    1980-04-01

    Changes in plasmas undergoing large, rapid compressions are examined numerically over the following range of aspect ratios A: 3 ≥ A ≥ 1.5 for major-radius compressions of circular, elliptical, and D-shaped cross sections; and 3 ≤ A ≤ 6 for minor-radius compressions of circular and D-shaped cross sections. The numerical approach combines the computation of fixed-boundary MHD equilibria with single-fluid, flux-surface-averaged energy balance, particle balance, and magnetic flux diffusion equations. It is found that the dependences of plasma current I_p and poloidal beta β̄_p on the compression ratio C differ significantly in major-radius compressions from those proposed by Furth and Yoshikawa. The present interpretation is that compression to small A dramatically increases the plasma current, which lowers β̄_p and makes the plasma more paramagnetic. Despite large values of toroidal beta β̄_T (≥ 30% with q_axis ≈ 1, q_edge ≈ 3), this tends to concentrate more toroidal flux near the magnetic axis, which means that a reduced minor radius is required to preserve the continuity of the toroidal flux function F at the plasma edge. Minor-radius compressions to large aspect ratio agree well with the Furth-Yoshikawa scaling laws.

  12. Compression experiments on the TOSKA tokamak

    International Nuclear Information System (INIS)

    Cima, G.; McGuire, K.M.; Robinson, D.C.; Wootton, A.J.

    1980-10-01

    Results from minor radius compression experiments on a tokamak plasma in TOSCA are reported. The compression is achieved by increasing the toroidal field up to twice its initial value in 200μs. Measurements show that particles and magnetic flux are conserved. When the initial energy confinement time is comparable with the compression time, energy gains are greater than for an adiabatic change of state. The total beta value increases. Central beta values approximately 3% are measured when a small major radius compression is superimposed on a minor radius compression. Magnetic field fluctuations are affected: both the amplitude and period decrease. Starting from low energy confinement times, approximately 200μs, increases in confinement times up to approximately 1 ms are measured. The increase in plasma energy results from a large reduction in the power losses during the compression. When the initial energy confinement time is much longer than the compression time, the parameter changes are those expected for an adiabatic change of state. (author)

  13. Highly Efficient Compression Algorithms for Multichannel EEG.

    Science.gov (United States)

    Shaw, Laxmi; Rahman, Daleef; Routray, Aurobinda

    2018-05-01

    The difficulty associated with processing and understanding the high dimensionality of electroencephalogram (EEG) data requires developing efficient and robust compression algorithms. In this paper, different lossless compression techniques of single and multichannel EEG data, including Huffman coding, arithmetic coding, Markov predictor, linear predictor, context-based error modeling, multivariate autoregression (MVAR), and a low complexity bivariate model have been examined and their performances have been compared. Furthermore, a high compression algorithm named general MVAR and a modified context-based error modeling for multichannel EEG have been proposed. The resulting compression algorithm produces a higher relative compression ratio of 70.64% on average compared with the existing methods, and in some cases, it goes up to 83.06%. The proposed methods are designed to compress a large amount of multichannel EEG data efficiently so that the data storage and transmission bandwidth can be effectively used. These methods have been validated using several experimental multichannel EEG recordings of different subjects and publicly available standard databases. The satisfactory parametric measures of these methods, namely percent-root-mean square distortion, peak signal-to-noise ratio, root-mean-square error, and cross correlation, show their superiority over the state-of-the-art compression methods.
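
Huffman coding, the first of the lossless techniques compared above, can be sketched in a few lines using only the standard library. This is a generic implementation on toy quantized samples, not the paper's multichannel EEG pipeline; the sample values are hypothetical.

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a Huffman code table (symbol -> bitstring) from a symbol stream."""
    freq = Counter(symbols)
    if len(freq) == 1:  # degenerate single-symbol stream
        return {next(iter(freq)): "0"}
    # Heap entries: (count, unique tiebreaker, partial code table).
    heap = [(n, i, {s: ""}) for i, (s, n) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        n1, _, c1 = heapq.heappop(heap)
        n2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c1.items()}
        merged.update({s: "1" + b for s, b in c2.items()})
        heapq.heappush(heap, (n1 + n2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

# Toy quantized EEG samples; frequent values receive shorter codes.
samples = [0, 0, 0, 0, 1, 1, -1, 2]
table = huffman_code(samples)
bits = sum(len(table[s]) for s in samples)  # compressed length in bits
```

Here the 8 samples cost 14 bits instead of the 16 bits a fixed 2-bit code would need, which is the basic gain all the entropy coders in the comparison build on.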

  14. ERGC: an efficient referential genome compression algorithm.

    Science.gov (United States)

    Saha, Subrata; Rajasekaran, Sanguthevar

    2015-11-01

    Genome sequencing has become faster and more affordable. Consequently, the number of available complete genomic sequences is increasing rapidly. As a result, the cost to store, process, analyze and transmit the data is becoming a bottleneck for research and future medical applications. So, the need for devising efficient data compression and data reduction techniques for biological sequencing data is growing by the day. Although there exists a number of standard data compression algorithms, they are not efficient in compressing biological data. These generic algorithms do not exploit some inherent properties of the sequencing data while compressing. To exploit statistical and information-theoretic properties of genomic sequences, we need specialized compression algorithms. Five different next-generation sequencing data compression problems have been identified and studied in the literature. We propose a novel algorithm for one of these problems known as reference-based genome compression. We have done extensive experiments using five real sequencing datasets. The results on real genomes show that our proposed algorithm is indeed competitive and performs better than the best known algorithms for this problem. It achieves compression ratios that are better than those of the currently best performing algorithms. The time to compress and decompress the whole genome is also very promising. The implementations are freely available for non-commercial purposes. They can be downloaded from http://engr.uconn.edu/∼rajasek/ERGC.zip. rajasek@engr.uconn.edu. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
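
The core idea of reference-based genome compression, storing only where the target sequence deviates from a shared reference, can be sketched as follows. This is a toy substitution-only model with hypothetical sequences; real tools such as the one described must also handle insertions, deletions and metadata.

```python
def ref_compress(target, reference):
    """Store only (position, base) pairs where target differs from reference.

    Assumes equal-length sequences (substitutions only).
    """
    return [(i, t) for i, (t, r) in enumerate(zip(target, reference)) if t != r]

def ref_decompress(diffs, reference):
    """Rebuild the target by patching the reference with the stored diffs."""
    seq = list(reference)
    for i, base in diffs:
        seq[i] = base
    return "".join(seq)

ref = "ACGTACGTACGT"
tgt = "ACGAACGTACCT"
diffs = ref_compress(tgt, ref)  # [(3, 'A'), (10, 'C')]
assert ref_decompress(diffs, ref) == tgt
```

Because related genomes are highly similar, the diff list is tiny compared with the full sequence, which is why this family of methods achieves much better ratios than generic compressors.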

  15. Survived ileocecal blowout from compressed air.

    Science.gov (United States)

    Weber, Marco; Kolbus, Frank; Dressler, Jan; Lessig, Rüdiger

    2011-03-01

    Industrial accidents in which compressed air enters the gastro-intestinal tract often end fatally. The pressures usually far exceed those used in medical applications such as colonoscopy and lead to vast injuries of the intestines with high mortality. The case described in this report is of a 26-year-old man who was harmed by compressed air that entered through the anus. He survived because of a prompt emergency operation. This case underlines the necessity of explicit instruction on the hazards of handling compressed-air devices to maintain safety at work. Further, our observations support the hypothesis that the mucosa is the most elastic layer of the intestine wall.

  16. Radial and axial compression of pure electron plasma

    International Nuclear Information System (INIS)

    Park, Y.; Soga, Y.; Mihara, Y.; Takeda, M.; Kamada, K.

    2013-01-01

    Experimental studies are carried out on compression of the density distribution of a pure electron plasma confined in a Malmberg-Penning Trap in Kanazawa University. More than six times increase of the on-axis density is observed under application of an external rotating electric field that couples to low-order Trivelpiece-Gould modes. Axial compression of the density distribution with the axial length of a factor of two is achieved by controlling the confining potential at both ends of the plasma. Substantial increase of the axial kinetic energy is observed during the axial compression. (author)

  17. Plant for compacting compressible radioactive waste

    International Nuclear Information System (INIS)

    Baatz, H.; Rittscher, D.; Lueer, H.J.; Ambros, R.

    1983-01-01

    The waste is filled into auxiliary barrels made of sheet steel and compressed with the auxiliary barrels into steel jackets. These can be stacked in storage barrels. A hydraulic press is included in the plant, which has a horizontal compression chamber and a horizontal pressure piston, which works against a counter bearing slider. There is a filling and emptying device for the pressure chamber behind the counter bearing slider. The auxiliary barrels can be introduced into the compression chamber by the filling and emptying device. The pressure piston also pushes out the steel jackets formed, so that they are taken to the filling and emptying device. (orig./HP) [de

  18. Compressed Gas Safety for Experimental Fusion Facilities

    Energy Technology Data Exchange (ETDEWEB)

    Lee C. Cadwallader

    2004-09-01

    Experimental fusion facilities present a variety of hazards to the operators and staff. There are unique or specialized hazards, including magnetic fields, cryogens, radio frequency emissions, and vacuum reservoirs. There are also more general industrial hazards, such as the wide variety of electrical power, pressurized air, and cooling water systems in use, as well as crane and hoist loads, work at height, and the handling of compressed gas cylinders. This paper outlines the projectile hazard associated with compressed gas cylinders and methods of treatment to provide for compressed gas safety. This information should be of interest to personnel at both magnetic and inertial fusion experiments.

  19. Logarithmic compression methods for spectral data

    Science.gov (United States)

    Dunham, Mark E.

    2003-01-01

    A method is provided for logarithmic compression, transmission, and expansion of spectral data. A log Gabor transformation is made of incoming time series data to output spectral phase and logarithmic magnitude values. The output phase and logarithmic magnitude values are compressed by selecting only magnitude values above a selected threshold and corresponding phase values to transmit compressed phase and logarithmic magnitude values. A reverse log Gabor transformation is then performed on the transmitted phase and logarithmic magnitude values to output transmitted time series data to a user.
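
The keep-above-threshold scheme described above can be sketched with a naive DFT standing in for the log Gabor transform: compute the spectrum, transmit only (bin, log-magnitude, phase) triples for bins above a magnitude threshold, and invert on the receiving end. This is an illustrative simplification of the transform stage, with hypothetical signal and threshold.

```python
import cmath
import math

def dft(x):
    """Naive discrete Fourier transform (O(n^2); fine for a demo)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def compress_spectrum(x, threshold):
    """Keep (bin, log-magnitude, phase) only for bins above the threshold."""
    kept = []
    for k, c in enumerate(dft(x)):
        mag = abs(c)
        if mag > threshold:
            kept.append((k, math.log(mag), cmath.phase(c)))
    return kept

def expand(kept, n):
    """Rebuild the spectrum from the kept values and inverse-transform."""
    spec = [0j] * n
    for k, logmag, ph in kept:
        spec[k] = math.exp(logmag) * cmath.exp(1j * ph)
    return [sum(spec[k] * cmath.exp(2j * math.pi * k * t / n)
                for k in range(n)).real / n for t in range(n)]

n = 32
x = [math.cos(2 * math.pi * 4 * t / n) for t in range(n)]  # single tone
kept = compress_spectrum(x, threshold=1.0)                 # only 2 bins survive
y = expand(kept, n)                                        # near-exact recovery
```

For a sparse spectrum, only a handful of triples need to be transmitted, which is the compression mechanism the patent-style abstract describes.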

  20. How compressible is recombinant battery separator mat?

    Energy Technology Data Exchange (ETDEWEB)

    Pendry, C. [Hollingsworth and Vose, Postlip Mills Winchcombe (United Kingdom)

    1999-03-01

    In the past few years, the recombinant battery separator mat (RBSM) for valve-regulated lead/acid (VRLA) batteries has become the focus of much attention. Compression, and the ability of microglass separators to maintain a level of 'springiness', have helped reduce premature capacity loss. As higher compressions are reached, we need to determine what, if any, damage can be caused during the assembly process. This paper reviews the findings when RBSM materials, with different surface areas, are compressed under forces up to 500 kPa in the dry state. (orig.)

  1. Physics Based Modeling of Compressible Turbulence

    Science.gov (United States)

    2016-11-07

    Final report (AFRL-AFOSR-VA-TR-2016-0345, 09/13/2016) on the AFOSR project FA9550-11-1-0111, "Physics based modeling of compressible turbulence", by Parviz Moin, Leland Stanford Junior University, CA. The period of performance began June 15, 2011.

  2. Combined Sparsifying Transforms for Compressive Image Fusion

    Directory of Open Access Journals (Sweden)

    ZHAO, L.

    2013-11-01

    Full Text Available In this paper, we present a new compressive image fusion method based on combined sparsifying transforms. First, the framework of compressive image fusion is introduced briefly. Then, combined sparsifying transforms are presented to enhance the sparsity of images. Finally, a reconstruction algorithm based on the nonlinear conjugate gradient is presented to get the fused image. The simulations demonstrate that by using the combined sparsifying transforms better results can be achieved in terms of both the subjective visual effect and the objective evaluation indexes than using only a single sparsifying transform for compressive image fusion.

  3. Evolution Of Nonlinear Waves in Compressing Plasma

    International Nuclear Information System (INIS)

    Schmit, P.F.; Dodin, I.Y.; Fisch, N.J.

    2011-01-01

    Through particle-in-cell simulations, the evolution of nonlinear plasma waves is examined in one-dimensional collisionless plasma undergoing mechanical compression. Unlike linear waves, whose wavelength decreases proportionally to the system length L(t), nonlinear waves, such as solitary electron holes, conserve their characteristic size Δ during slow compression. This leads to a substantially stronger adiabatic amplification as well as rapid collisionless damping when L approaches Δ. On the other hand, cessation of compression halts the wave evolution, yielding a stable mode.

  4. Compressive Load Resistance Characteristics of Rice Grain

    OpenAIRE

    Sumpun Chaitep; Chaiy R. Metha Pathawee; Pipatpong Watanawanyoo

    2008-01-01

    An investigation was made to observe the compressive load properties of rice grain, both rough rice and brown rice. Six rice varieties (indica and japonica) were examined at a moisture content of 10-12%. Compressive loads, with reference to a principal axis normal to the thickness of the grain, were applied at selected inclined angles of 0°, 15°, 30°, 45°, 60° and 70°. The result showed the compressive load resistance of rice grain based on its characteristic of yield s...

  5. Evolution Of Nonlinear Waves in Compressing Plasma

    Energy Technology Data Exchange (ETDEWEB)

    P.F. Schmit, I.Y. Dodin, and N.J. Fisch

    2011-05-27

    Through particle-in-cell simulations, the evolution of nonlinear plasma waves is examined in one-dimensional collisionless plasma undergoing mechanical compression. Unlike linear waves, whose wavelength decreases proportionally to the system length L(t), nonlinear waves, such as solitary electron holes, conserve their characteristic size Δ during slow compression. This leads to a substantially stronger adiabatic amplification as well as rapid collisionless damping when L approaches Δ. On the other hand, cessation of compression halts the wave evolution, yielding a stable mode.

  6. Delivery of compression therapy for venous leg ulcers.

    Science.gov (United States)

    Zarchi, Kian; Jemec, Gregor B E

    2014-07-01

    Despite the documented effect of compression therapy in clinical studies and its widespread prescription, treatment of venous leg ulcers is often prolonged and recurrence rates are high. Data on the compression therapy actually provided are limited. To assess whether home care nurses achieve adequate subbandage pressure when treating patients with venous leg ulcers and the factors that predict the ability to achieve optimal pressure. We performed a cross-sectional study from March 1, 2011, through March 31, 2012, in home care centers in 2 Danish municipalities. Sixty-eight home care nurses who managed wounds in their everyday practice were included. Participant-masked measurements of subbandage pressure achieved with an elastic, long-stretch, single-component bandage; an inelastic, short-stretch, single-component bandage; and a multilayer, 2-component bandage, as well as the association between achievement of optimal pressure and years in the profession, attendance at wound care educational programs, previous work experience, and confidence in bandaging ability. A substantial variation in the exerted pressure was found: subbandage pressures ranged from 11 mm Hg exerted by an inelastic bandage to 80 mm Hg exerted by a 2-component bandage. The optimal subbandage pressure range, defined as 30 to 50 mm Hg, was achieved by 39 of 62 nurses (63%) applying the 2-component bandage, 28 of 68 nurses (41%) applying the elastic bandage, and 27 of 68 nurses (40%) applying the inelastic bandage. More than half the nurses applying the inelastic (38 [56%]) and elastic (36 [53%]) bandages obtained pressures less than 30 mm Hg. At best, only 17 of 62 nurses (27%) using the 2-component bandage achieved subbandage pressure within the range they aimed for. In this study, none of the investigated factors was associated with the ability to apply a bandage with optimal pressure. 
This study demonstrates the difficulty of achieving the desired subbandage pressure and indicates that a substantial proportion of
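
The success rates reported in the abstract can be reproduced from the stated counts with a short calculation (counts taken directly from the text above):

```python
# Nurses achieving the optimal 30-50 mmHg range, per bandage type (achieved, total).
results = {"2-component": (39, 62), "elastic": (28, 68), "inelastic": (27, 68)}
percent = {name: round(100 * achieved / total)
           for name, (achieved, total) in results.items()}
# percent == {"2-component": 63, "elastic": 41, "inelastic": 40}
```

The rounded percentages match the 63%, 41% and 40% figures quoted in the abstract.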

  7. Studies of a new multi-layer compression bandage for the treatment of venous ulceration.

    Science.gov (United States)

    Scriven, J M; Bello, M; Taylor, L E; Wood, A J; London, N J

    2000-03-01

    This study aimed to develop an alternative graduated compression bandage for the treatment of venous leg ulcers. Alternative bandage components were identified and assessed for optimal performance as a graduated multi-layer compression bandage. Subsequently the physical characteristics and clinical efficacy of the optimal bandage combination was prospectively examined. Ten healthy limbs were used to develop the optimal combination and 20 limbs with venous ulceration to compare the physical properties of the two bandage types. Subsequently 42 consecutive ulcerated limbs were prospectively treated to examine the efficacy of the new bandage combination. The new combination produced graduated median (range) sub-bandage pressures (mmHg) as follows: ankle 59 (42-100), calf 36 (27-67) and knee 35 (16-67). Over a seven-day period this combination maintained a comparable level of compression with the Charing Cross system, and achieved an overall healing rate at one year of 88%. The described combination should be brought to the attention of healthcare professionals treating venous ulcers as a possible alternative to other forms of multi-layer graduated compression bandages pending prospective, randomised clinical trials.

  8. Beam steering performance of compressed Luneburg lens based on transformation optics

    Science.gov (United States)

    Gao, Ju; Wang, Cong; Zhang, Kuang; Hao, Yang; Wu, Qun

    2018-06-01

    In this paper, two types of compressed Luneburg lenses based on transformation optics are investigated and simulated using two different sources, namely, waveguides and dipoles, which represent plane and spherical wave sources, respectively. We determined that the largest beam steering angle and the related feed point are intrinsic characteristics of a certain type of compressed Luneburg lens, and that the optimized distance between the feed and lens, gain enhancement, and side-lobe suppression are related to the type of source. Based on our results, we anticipate that these lenses will prove useful in various future antenna applications.

  9. Analytical model for super-compression of multi-structured pellet

    International Nuclear Information System (INIS)

    Yabe, T.; Niu, K.

    1975-09-01

    We present a one-dimensional analytical model which can be applied to the super-compression of the multi-structured pellet. The main result shows that the time dependence of the input power E for the optimal compression is given by E ∝ (1 − t/t_s)^(−3(G+1)/2G), where G = (ρ₁/ρ₂)^(1/4), ρ₁ and ρ₂ are the densities of the D-T fuel and the high-Z material respectively, and t_s is the characteristic time interval. (auth.)
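
The stated power law is easy to evaluate numerically. The sketch below returns only the dimensionless shape factor (the proportionality constant is omitted), and the density values used are illustrative, not from the paper:

```python
def input_power_shape(t, t_s, rho_fuel, rho_shell):
    """Shape of E(t) ∝ (1 - t/t_s)^(-3(G+1)/(2G)) with G = (rho_fuel/rho_shell)^(1/4)."""
    G = (rho_fuel / rho_shell) ** 0.25
    return (1.0 - t / t_s) ** (-3.0 * (G + 1.0) / (2.0 * G))

# With illustrative densities rho_fuel=1, rho_shell=16 (so G = 0.5, exponent = -4.5),
# the required power rises steeply as t approaches the characteristic time t_s:
assert input_power_shape(0.0, 1.0, 1.0, 16.0) == 1.0
assert input_power_shape(0.9, 1.0, 1.0, 16.0) > input_power_shape(0.5, 1.0, 1.0, 16.0)
```

The divergence as t → t_s reflects the ever-faster drive needed to sustain optimal compression near the end of the pulse.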

  10. Compression and archiving of digital images

    International Nuclear Information System (INIS)

    Huang, H.K.

    1988-01-01

    This paper describes the application of a full-frame bit-allocation image compression technique to a hierarchical digital image archiving system consisting of magnetic disks, optical disks and an optical disk library. The digital archiving system without the compression has been in clinical operation in the Pediatric Radiology for more than half a year. The database in the system consists of all pediatric inpatients, including all images from computed radiography, digitized x-ray films, CT, MR, and US. The rate of image accumulation is approximately 1,900 megabytes per week. The hardware design of the compression module is based on a Motorola 68020 microprocessor, a VME bus, a 16 megabyte image buffer memory board, and three Motorola 56001 digital signal processing chips on a VME board for performing the two-dimensional cosine transform and the quantization. The clinical evaluation of the compression module with the image archiving system is expected to be in February 1988
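
The transform-and-quantize stage named in the abstract (a two-dimensional cosine transform followed by quantization) can be sketched as follows. The 8x8 block and the quantization step are illustrative; the paper's full-frame bit-allocation strategy is not reproduced here.

```python
import math

def dct_2d(block):
    """Naive 2-D DCT-II (O(N^4); fine for an 8x8 demo block)."""
    n = len(block)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = sum(block[x][y]
                    * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                    * math.cos((2 * y + 1) * v * math.pi / (2 * n))
                    for x in range(n) for y in range(n))
            cu = math.sqrt(1 / n) if u == 0 else math.sqrt(2 / n)
            cv = math.sqrt(1 / n) if v == 0 else math.sqrt(2 / n)
            out[u][v] = cu * cv * s
    return out

def quantize(coeffs, step):
    """Uniform quantization: most high-frequency coefficients collapse to zero."""
    return [[round(c / step) for c in row] for row in coeffs]

block = [[10 + x + y for y in range(8)] for x in range(8)]  # smooth 8x8 tile
q = quantize(dct_2d(block), step=4.0)
nonzero = sum(1 for row in q for c in row if c != 0)        # few survivors
```

For this smooth ramp, all mixed-frequency coefficients (both indices nonzero) quantize to zero, leaving only the first row and column, which is what makes the subsequent entropy coding effective.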

  11. Pulse power applications of flux compression generators

    International Nuclear Information System (INIS)

    Fowler, C.M.; Caird, R.S.; Erickson, D.J.; Freeman, B.L.

    1981-01-01

    Characteristics are presented for two different types of explosive driven flux compression generators and a megavolt pulse transformer. Status reports are given for rail gun and plasma focus programs for which the generators serve as power sources

  12. Compressibility, turbulence and high speed flow

    CERN Document Server

    Gatski, Thomas B

    2009-01-01

    This book introduces the reader to the field of compressible turbulence and compressible turbulent flows across a broad speed range through a unique complementary treatment of both the theoretical foundations and the measurement and analysis tools currently used. For the computation of turbulent compressible flows, current methods of averaging and filtering are presented so that the reader is exposed to a consistent development of applicable equation sets for both the mean or resolved fields as well as the transport equations for the turbulent stress field. For the measurement of turbulent compressible flows, current techniques ranging from hot-wire anemometry to PIV are evaluated and limitations assessed. Characterizing dynamic features of free shear flows, including jets, mixing layers and wakes, and wall-bounded flows, including shock-turbulence and shock boundary-layer interactions, obtained from computations, experiments and simulations are discussed. Key features: * Describes prediction methodologies in...

  13. Wavelets: Applications to Image Compression-II

    Indian Academy of Sciences (India)

    Wavelets: Applications to Image Compression-II. Sachin P... Discusses the successful application of wavelets in image compression, including soft thresholding of wavelet coefficients... [8] http://www.jpeg.org — official site of the Joint Photographic Experts Group.

  14. Efficiency of Compressed Air Energy Storage

    DEFF Research Database (Denmark)

    Elmegaard, Brian; Brix, Wiebke

    2011-01-01

    The simplest type of a Compressed Air Energy Storage (CAES) facility would be an adiabatic process consisting only of a compressor, a storage and a turbine, compressing air into a container when storing and expanding when producing. This type of CAES would be adiabatic and would, if the machines were reversible, have a storage efficiency of 100%. However, due to the specific capacity of the storage and the construction materials, the air is in practice cooled during and after compression, making the CAES process diabatic. The cooling involves exergy losses and thus lowers the efficiency of the storage significantly. The efficiency of CAES as an electricity storage may be defined in several ways; we discuss these and find that the exergetic efficiencies of compression, storage and production together determine the efficiency of CAES. In the paper we find that the efficiency of the practical CAES...
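
Under the definition adopted above, the overall storage efficiency is the product of the stage efficiencies. A minimal sketch with hypothetical stage values (the paper's actual figures are not reproduced here):

```python
def caes_round_trip_efficiency(eta_compression, eta_storage, eta_production):
    """Overall exergetic efficiency as the product of the three stage efficiencies."""
    return eta_compression * eta_storage * eta_production

# Illustrative (hypothetical) stage efficiencies:
eta = caes_round_trip_efficiency(0.85, 0.95, 0.80)
# eta = 0.646
```

The multiplicative structure makes clear why the compression heat loss dominates: a poor compression-stage efficiency caps the round-trip efficiency no matter how good the other stages are.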

  15. Compression Behavior of High Performance Polymeric Fibers

    National Research Council Canada - National Science Library

    Kumar, Satish

    2003-01-01

    Hydrogen bonding has proven to be effective in improving the compressive strength of rigid-rod polymeric fibers without resulting in a decrease in tensile strength while covalent crosslinking results in brittle fibers...

  16. Large Eddy Simulation for Compressible Flows

    CERN Document Server

    Garnier, E; Sagaut, P

    2009-01-01

    Large Eddy Simulation (LES) of compressible flows is still a widely unexplored area of research. The authors, whose books are considered the most relevant monographs in this field, provide the reader with a comprehensive state-of-the-art presentation of the available LES theory and application. This book is a sequel to "Large Eddy Simulation for Incompressible Flows", as most of the research on LES for compressible flows is based on variable density extensions of models, methods and paradigms that were developed within the incompressible flow framework. The book addresses both the fundamentals and the practical industrial applications of LES in order to point out gaps in the theoretical framework as well as to bridge the gap between LES research and the growing need to use it in engineering modeling. After introducing the fundamentals on compressible turbulence and the LES governing equations, the mathematical framework for the filtering paradigm of LES for compressible flow equations is established. Instead ...

  17. Embedment of Chlorpheniramine Maleate in Directly Compressed ...

    African Journals Online (AJOL)

    chlorpheniramine maleate (CPM) from its matrix tablets prepared by direct compression. Methods: Different ratios of compritol and kollidon SR (containing 50 % matrix component) in 1:1, 1:2, ... Magnesium stearate and hydrochloric acid were.

  18. FRC translation into a compression coil

    International Nuclear Information System (INIS)

    Chrien, R.E.

    1985-01-01

    Several features of the problem of FRC translation into a compression coil are considered. First, the magnitude of the guide field is calculated and found to exceed that which would be applied to a flux conserver. Second, energy conservation is applied to FRC translation from a flux conserver into a compression coil. It is found that a significant temperature decrease is required for translation to be energetically possible. The temperature change depends on the external inductance in the compression circuit. An analogous case is that of a compression region composed of a compound magnet; in this case the temperature change depends on the ratio of inner and outer coil radii. Finally, the kinematics of intermediate translation states are calculated using an 'abrupt transition' model. It is found, in this model, that the FRC must overcome a potential hill during translation, which requires a small initial velocity.

  19. Adiabatic Liquid Piston Compressed Air Energy Storage

    DEFF Research Database (Denmark)

    Petersen, Tage; Elmegaard, Brian; Pedersen, Allan Schrøder

    the system. The compression leads to a significant increase in temperature, and the heat generated is dumped into the ambient. This energy loss results in a low efficiency of the system, and when expanding the air, the expansion leads to a temperature drop reducing the mechanical output of the expansion…), but no such units are in operation at present. The CAES system investigated in this project uses a different approach to avoid compression heat loss. The system uses a pre-compressed pressure vessel full of air. A liquid is pumped into the bottom of the vessel when charging and the same liquid is withdrawn through…-CAES system is significantly higher than existing CAES systems due to a low or nearly absent compression heat loss. Furthermore, pumps/turbines, which use a liquid as a medium, are more efficient than air/gas compressors/turbines. In addition, the demand for fuel during expansion does not occur. •The energy

  20. Seneca Compressed Air Energy Storage (CAES) Project

    Energy Technology Data Exchange (ETDEWEB)

    None

    2012-11-30

    This document provides specifications for the process air compressor for a compressed air storage project, requests a budgetary quote, and provides supporting information, including compressor data, site specific data, water analysis, and Seneca CAES value drivers.

  1. Compressive multi-mode superresolution display

    KAUST Repository

    Heide, Felix

    2014-01-01

    Compressive displays are an emerging technology exploring the co-design of new optical device configurations and compressive computation. Previously, research has shown how to improve the dynamic range of displays and facilitate high-quality light field or glasses-free 3D image synthesis. In this paper, we introduce a new multi-mode compressive display architecture that supports switching between 3D and high dynamic range (HDR) modes as well as a new super-resolution mode. The proposed hardware consists of readily-available components and is driven by a novel splitting algorithm that computes the pixel states from a target high-resolution image. In effect, the display pixels present a compressed representation of the target image that is perceived as a single, high resolution image. © 2014 Optical Society of America.

  2. Compressive properties of sandwiches with functionally graded ...

    Indian Academy of Sciences (India)

    319–328. © Indian Academy of Sciences. Compressive properties... Mechanical Engineering, National Institute of Technology Karnataka, Surathkal, India... spheres) which might aid in building FG composites is not explored...

  3. The Distinction of Hot Herbal Compress, Hot Compress, and Topical Diclofenac as Myofascial Pain Syndrome Treatment.

    Science.gov (United States)

    Boonruab, Jurairat; Nimpitakpong, Netraya; Damjuti, Watchara

    2018-01-01

    This randomized controlled trial aimed to investigate the distinctness after treatment among hot herbal compress, hot compress, and topical diclofenac. The registrants were equally divided into groups and received the different treatments including hot herbal compress, hot compress, and topical diclofenac group, which served as the control group. After treatment courses, Visual Analog Scale and 36-Item Short Form Health survey were, respectively, used to establish the level of pain intensity and quality of life. In addition, cervical range of motion and pressure pain threshold were also examined to identify the motional effects. All treatments showed significantly decreased level of pain intensity and increased cervical range of motion, while the intervention groups exhibited extraordinary capability compared with the topical diclofenac group in pressure pain threshold and quality of life. In summary, hot herbal compress holds promise to be an efficacious treatment parallel to hot compress and topical diclofenac.

  4. High-quality compressive ghost imaging

    Science.gov (United States)

    Huang, Heyan; Zhou, Cheng; Tian, Tian; Liu, Dongqi; Song, Lijun

    2018-04-01

    We propose a high-quality compressive ghost imaging method based on projected Landweber regularization and a guided filter, which effectively reduces undersampling noise and improves resolution. In our scheme, the original object is reconstructed by decomposing the compressive reconstruction process into regularization and denoising steps instead of solving a single minimization problem. Simulation and experimental results show that our method obtains high ghost imaging quality in terms of PSNR and visual observation.
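
    The regularization half of such a scheme can be illustrated with a plain projected Landweber iteration. This is a minimal sketch, not the authors' implementation: the matrix A stands in for the measurement patterns, the guided-filter denoising step is omitted, and all names and sizes are illustrative.

```python
import numpy as np

def projected_landweber(A, y, n_iter=500, step=None):
    """Recover x >= 0 from y = A @ x via projected Landweber iterations."""
    if step is None:
        # Convergent step size requires 0 < step < 2 / ||A||^2 (spectral norm)
        step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + step * A.T @ (y - A @ x)   # Landweber (gradient) update
        x = np.maximum(x, 0.0)             # projection onto the nonnegative orthant
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((64, 32))          # stand-in measurement patterns
x_true = np.abs(rng.standard_normal(32))   # nonnegative synthetic "object"
y = A @ x_true                             # bucket-detector measurements
x_hat = projected_landweber(A, y)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))  # small relative error
```

    In the paper's scheme a denoising step (the guided filter) would be interleaved with these regularization updates rather than run as a single loop.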

  5. Bond graph modeling of centrifugal compression systems

    OpenAIRE

    Uddin, Nur; Gravdahl, Jan Tommy

    2015-01-01

    A novel approach to model unsteady fluid dynamics in a compressor network by using a bond graph is presented. The model is intended in particular for compressor control system development. First, we develop a bond graph model of a single compression system. Bond graph modeling offers a different perspective to previous work by modeling the compression system based on energy flow instead of fluid dynamics. Analyzing the bond graph model explains the energy flow during compressor surge. Two pri...

  6. Prechamber Compression-Ignition Engine Performance

    Science.gov (United States)

    Moore, Charles S; Collins, John H , Jr

    1938-01-01

    Single-cylinder compression-ignition engine tests were made to investigate the performance characteristics of prechamber type of cylinder head. Certain fundamental variables influencing engine performance -- clearance distribution, size, shape, and direction of the passage connecting the cylinder and prechamber, shape of prechamber, cylinder clearance, compression ratio, and boosting -- were independently tested. Results of motoring and of power tests, including several typical indicator cards, are presented.

  7. NRGC: a novel referential genome compression algorithm.

    Science.gov (United States)

    Saha, Subrata; Rajasekaran, Sanguthevar

    2016-11-15

    Next-generation sequencing techniques produce millions to billions of short reads. The procedure is not only very cost effective but can also be done in a laboratory environment. State-of-the-art sequence assemblers then construct the whole genomic sequence from these reads, and current computing technology makes it possible to build genomic sequences from billions of reads at minimal cost and time. As a consequence, we have seen an explosion of biological sequences in recent times. In turn, the cost of storing the sequences in physical memory or transmitting them over the internet is becoming a major bottleneck for research and future medical applications. Data compression techniques are one of the most important remedies in this context: we need data compression algorithms that can exploit the inherent structure of biological sequences. Although standard data compression algorithms are prevalent, they are not suitable for compressing biological sequencing data effectively. In this article, we propose a novel referential genome compression algorithm (NRGC) to effectively and efficiently compress genomic sequences. We have done rigorous experiments to evaluate NRGC on a set of real human genomes. The results show that our algorithm is indeed an effective genome compression algorithm that performs better than the best-known algorithms in most cases. Compression and decompression times are also very impressive. The implementations are freely available for non-commercial purposes and can be downloaded from: http://www.engr.uconn.edu/~rajasek/NRGC.zip CONTACT: rajasek@engr.uconn.edu.
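
    The core idea of referential compression can be sketched in a few lines: represent the target sequence as copy operations against the reference plus occasional literal bases. This toy greedy matcher only illustrates the principle; NRGC's actual matching, placement, and encoding are far more sophisticated, and every name below is hypothetical.

```python
def ref_compress(reference, target, min_match=4):
    """Greedily encode target as ('copy', pos, len) / ('lit', base) ops vs reference."""
    ops, i = [], 0
    while i < len(target):
        best_len, best_pos = 0, -1
        for j in range(len(reference)):   # O(n*m) toy search; real tools build an index
            k = 0
            while (i + k < len(target) and j + k < len(reference)
                   and target[i + k] == reference[j + k]):
                k += 1
            if k > best_len:
                best_len, best_pos = k, j
        if best_len >= min_match:
            ops.append(('copy', best_pos, best_len))
            i += best_len
        else:
            ops.append(('lit', target[i]))
            i += 1
    return ops

def ref_decompress(reference, ops):
    out = []
    for op in ops:
        if op[0] == 'copy':
            _, pos, length = op
            out.append(reference[pos:pos + length])
        else:
            out.append(op[1])
    return ''.join(out)

ref = "ACGTACGTGGATCCAGT"
tgt = "ACGTACGTTTGGATCCA"
ops = ref_compress(ref, tgt)
assert ref_decompress(ref, ops) == tgt   # lossless round trip
```

    Because genomes of the same species are highly similar, most of the target collapses into a few copy operations, which is why referential schemes outperform general-purpose compressors here.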

  8. Signal compression in radar using FPGA

    OpenAIRE

    Escamilla Hemández, Enrique; Kravchenko, Víctor; Ponomaryov, Volodymyr; Duchen Sánchez, Gonzalo; Hernández Sánchez, David

    2010-01-01

    We present the hardware implementation of real-time radar processing procedures using a simple, fast technique based on an FPGA (Field Programmable Gate Array) architecture. This processing includes different window procedures during pulse compression in synthetic aperture radar (SAR). The radar signal compression is realized using a matched filter together with classical and novel window functions, where we focus on solutions that minimize sidelobe levels. The proposed architecture expl...

  9. Compressive sensing based algorithms for electronic defence

    CERN Document Server

    Mishra, Amit Kumar

    2017-01-01

    This book details some of the major developments in the implementation of compressive sensing in radio applications for electronic defense and warfare communication use. It provides a comprehensive background to the subject and at the same time describes some novel algorithms. It also investigates application value and performance-related parameters of compressive sensing in scenarios such as direction finding, spectrum monitoring, detection, and classification.

  10. Does the quality of chest compressions deteriorate when the chest compression rate is above 120/min?

    Science.gov (United States)

    Lee, Soo Hoon; Kim, Kyuseok; Lee, Jae Hyuk; Kim, Taeyun; Kang, Changwoo; Park, Chanjong; Kim, Joonghee; Jo, You Hwan; Rhee, Joong Eui; Kim, Dong Hoon

    2014-08-01

    The quality of chest compressions, along with defibrillation, is the cornerstone of cardiopulmonary resuscitation (CPR), which is known to improve the outcome of cardiac arrest. We aimed to investigate the relationship between the compression rate and other CPR quality parameters, including compression depth and recoil. A conventional CPR training for lay rescuers was performed 2 weeks before the 'CPR contest'. CPR Anytime training kits were distributed to the participants for self-training in their own time. The participants were tested for two-person CPR in pairs. Quantitative and qualitative data regarding the quality of CPR were collected from a standardised checklist and SkillReporter, and compared by compression rate. A total of 161 teams consisting of 322 students (116 men and 206 women) participated in the CPR contest. The mean depth and rate of chest compression were 49.0±8.2 mm and 110.2±10.2/min. Significantly deeper chest compressions were noted at rates over 120/min than at any other rate (47.0±7.4, 48.8±8.4, 52.3±6.7 mm, p=0.008), and chest compression depth was proportional to chest compression rate (r=0.206). The quality of chest compression, including compression depth and chest recoil, varied with the compression rate. Further evaluation of the upper limit of the chest compression rate is needed to ensure complete chest wall recoil while maintaining an adequate chest compression depth.

  11. Compressibility characteristics of Sabak Bernam Marine Clay

    Science.gov (United States)

    Lat, D. C.; Ali, N.; Jais, I. B. M.; Baharom, B.; Yunus, N. Z. M.; Salleh, S. M.; Azmi, N. A. C.

    2018-04-01

    This study was carried out to determine the geotechnical properties and compressibility characteristics of marine clay collected at Sabak Bernam. The compressibility characteristics of this soil were determined from a 1-D consolidation test and verified against existing correlations by other researchers. No literature has been found on the compressibility characteristics of Sabak Bernam Marine Clay, and it is important to carry out this study since this type of marine clay covers a large coastal area of west-coast Malaysia. The clay was found on the main road connecting Klang to Perak, where the road keeps experiencing undulation and uneven settlement that jeopardise the safety of road users. The soil is indicated in the Generalised Soil Map of Peninsular Malaysia as a CLAY with alluvial soil on recent marine and riverine alluvium. Based on the British Standard Soil Classification and Plasticity Chart, the soil is classified as a CLAY with very high plasticity (CV). Results from laboratory tests on physical properties and compressibility parameters show that Sabak Bernam Marine Clay (SBMC) is highly compressible and has low permeability and poor drainage characteristics. The compressibility parameters obtained for SBMC are in good agreement with those reported by other researchers in the same field.

  12. A review on compressed pattern matching

    Directory of Open Access Journals (Sweden)

    Surya Prakash Mishra

    2016-09-01

    Full Text Available Compressed pattern matching (CPM) refers to the task of locating all the occurrences of a pattern (or set of patterns) inside the body of compressed text. In this type of matching, the pattern may or may not be compressed. CPM is very useful in handling large volumes of data, especially over a network. It has many applications, for example in computational biology, where it is useful for finding similar trends in DNA sequences, as well as in intrusion detection over networks and big data analytics. Various solutions have been provided by researchers where the pattern is matched directly over uncompressed text; such solutions require a lot of space and time when handling big data. Many researchers have proposed efficient solutions for compression, but very few exist for pattern matching over compressed text. Considering the future trend, where data size is increasing exponentially day by day, CPM has become a desirable task. This paper presents a critical review of recent techniques for compressed pattern matching. The covered techniques include word-based Huffman codes, word-based tagged codes, and wavelet-tree-based indexing. We present a comparative analysis of all the techniques mentioned above and highlight their advantages and disadvantages.
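
    The appeal of word-based CPM schemes is that the pattern can be encoded once and matched directly against the code stream, without ever decompressing the text. A minimal sketch with integer word ids (real schemes use variable-length Huffman or tagged byte codes; all names here are illustrative):

```python
def build_codes(words):
    """Assign each distinct word a small integer code."""
    codes = {}
    for w in words:
        codes.setdefault(w, len(codes))
    return codes

def compress(words, codes):
    return [codes[w] for w in words]

def search_compressed(compressed, pattern_words, codes):
    """Return word positions where the pattern occurs, matching codes only."""
    try:
        p = [codes[w] for w in pattern_words]
    except KeyError:
        return []   # a pattern word never occurs in the text at all
    n, m = len(compressed), len(p)
    return [i for i in range(n - m + 1) if compressed[i:i + m] == p]

text = "the quick fox saw the quick dog".split()
codes = build_codes(text)
ctext = compress(text, codes)
print(search_compressed(ctext, ["the", "quick"], codes))  # [0, 4]
```

    Note the free filtering step: if any pattern word is absent from the vocabulary, the search terminates immediately, which is one reason compressed-domain matching can beat scanning the plain text.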

  13. Efficient predictive algorithms for image compression

    CERN Document Server

    Rosário Lucas, Luís Filipe; Maciel de Faria, Sérgio Manuel; Morais Rodrigues, Nuno Miguel; Liberal Pagliari, Carla

    2017-01-01

    This book discusses efficient prediction techniques for the current state-of-the-art High Efficiency Video Coding (HEVC) standard, focusing on the compression of a wide range of video signals, such as 3D video, Light Fields and natural images. The authors begin with a review of the state-of-the-art predictive coding methods and compression technologies for both 2D and 3D multimedia contents, which provides a good starting point for new researchers in the field of image and video compression. New prediction techniques that go beyond the standardized compression technologies are then presented and discussed. In the context of 3D video, the authors describe a new predictive algorithm for the compression of depth maps, which combines intra-directional prediction, with flexible block partitioning and linear residue fitting. New approaches are described for the compression of Light Field and still images, which enforce sparsity constraints on linear models. The Locally Linear Embedding-based prediction method is in...

  14. Compressing bitmap indexes for faster search operations

    International Nuclear Information System (INIS)

    Wu, Kesheng; Otoo, Ekow J.; Shoshani, Arie

    2002-01-01

    In this paper, we study the effects of compression on bitmap indexes. The main operations on the bitmaps during query processing are bitwise logical operations such as AND, OR, NOT, etc. Using the general purpose compression schemes, such as gzip, the logical operations on the compressed bitmaps are much slower than on the uncompressed bitmaps. Specialized compression schemes, like the byte-aligned bitmap code(BBC), are usually faster in performing logical operations than the general purpose schemes, but in many cases they are still orders of magnitude slower than the uncompressed scheme. To make the compressed bitmap indexes operate more efficiently, we designed a CPU-friendly scheme which we refer to as the word-aligned hybrid code (WAH). Tests on both synthetic and real application data show that the new scheme significantly outperforms well-known compression schemes at a modest increase in storage space. Compared to BBC, a scheme well-known for its operational efficiency, WAH performs logical operations about 12 times faster and uses only 60 percent more space. Compared to the uncompressed scheme, in most test cases WAH is faster while still using less space. We further verified with additional tests that the improvement in logical operation speed translates to similar improvement in query processing speed
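
    The reason run-length-style schemes such as WAH keep logical operations fast can be sketched at the bit level: store each bitmap as runs of identical bits and AND the runs directly, touching far fewer items than the raw bits. WAH proper works on 31-bit machine words with literal and fill words; this bit-level sketch (equal-length bitmaps assumed, all names illustrative) only shows the principle.

```python
def rle(bits):
    """Run-length encode a bit list as (bit_value, run_length) pairs."""
    runs, i = [], 0
    while i < len(bits):
        j = i
        while j < len(bits) and bits[j] == bits[i]:
            j += 1
        runs.append((bits[i], j - i))
        i = j
    return runs

def rle_and(a, b):
    """Bitwise AND of two equal-length run-length-encoded bitmaps."""
    out = []
    ai, bi = 0, 0
    (av, al), (bv, bl) = a[0], b[0]
    while True:
        n = min(al, bl)            # length of the overlapping run segment
        v = av & bv
        if out and out[-1][0] == v:
            out[-1] = (v, out[-1][1] + n)   # merge adjacent equal runs
        else:
            out.append((v, n))
        al -= n; bl -= n
        if al == 0:
            ai += 1
            if ai == len(a): break
            av, al = a[ai]
        if bl == 0:
            bi += 1
            if bi == len(b): break
            bv, bl = b[bi]
    return out

x = rle([1, 1, 1, 1, 0, 0, 1, 1])
y = rle([1, 0, 0, 0, 0, 0, 1, 1])
print(rle_and(x, y))  # [(1, 1), (0, 5), (1, 2)]
```

    On sparse bitmap indexes the run lists are far shorter than the bitmaps, so the AND cost scales with the number of runs rather than the number of bits.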

  16. HCCI Engine Optimization and Control

    Energy Technology Data Exchange (ETDEWEB)

    Rolf D. Reitz

    2005-09-30

    The goal of this project was to develop methods to optimize and control Homogeneous-Charge Compression Ignition (HCCI) engines, with emphasis on diesel-fueled engines. HCCI offers the potential of nearly eliminating IC engine NOx and particulate emissions at reduced cost over Compression Ignition Direct Injection engines (CIDI) by controlling pollutant emissions in-cylinder. The project was initiated in January, 2002, and the present report is the final report for work conducted on the project through December 31, 2004. Periodic progress has also been reported at bi-annual working group meetings held at USCAR, Detroit, MI, and at the Sandia National Laboratories. Copies of these presentation materials are available on CD-ROM, as distributed by the Sandia National Labs. In addition, progress has been documented in DOE Advanced Combustion Engine R&D Annual Progress Reports for FY 2002, 2003 and 2004. These reports are included as the Appendices in this Final report.

  17. Nonlinear optimization

    CERN Document Server

    Ruszczynski, Andrzej

    2011-01-01

    Optimization is one of the most important areas of modern applied mathematics, with applications in fields from engineering and economics to finance, statistics, management science, and medicine. While many books have addressed its various aspects, Nonlinear Optimization is the first comprehensive treatment that will allow graduate students and researchers to understand its modern ideas, principles, and methods within a reasonable time, but without sacrificing mathematical precision. Andrzej Ruszczynski, a leading expert in the optimization of nonlinear stochastic systems, integrates the theory and the methods of nonlinear optimization in a unified, clear, and mathematically rigorous fashion, with detailed and easy-to-follow proofs illustrated by numerous examples and figures. The book covers convex analysis, the theory of optimality conditions, duality theory, and numerical methods for solving unconstrained and constrained optimization problems. It addresses not only classical material but also modern top...

  18. Shearlets and Optimally Sparse Approximations

    DEFF Research Database (Denmark)

    Kutyniok, Gitta; Lemvig, Jakob; Lim, Wang-Q

    2012-01-01

    Multivariate functions are typically governed by anisotropic features such as edges in images or shock fronts in solutions of transport-dominated equations. One major goal both for the purpose of compression as well as for an efficient analysis is the provision of optimally sparse approximations...... optimally sparse approximations of this model class in 2D as well as 3D. Even more, in contrast to all other directional representation systems, a theory for compactly supported shearlet frames was derived which moreover also satisfy this optimality benchmark. This chapter shall serve as an introduction...... to and a survey about sparse approximations of cartoon-like images by band-limited and also compactly supported shearlet frames as well as a reference for the state-of-the-art of this research field....

  19. Website Optimization

    CERN Document Server

    King, Andrew

    2008-01-01

    Remember when an optimized website was one that merely didn't take all day to appear? Times have changed. Today, website optimization can spell the difference between enterprise success and failure, and it takes a lot more know-how to achieve success. This book is a comprehensive guide to the tips, techniques, secrets, standards, and methods of website optimization. From increasing site traffic to maximizing leads, from revving up responsiveness to increasing navigability, from prospect retention to closing more sales, the world of 21st century website optimization is explored, exemplified a

  20. Multiobjective optimization for design of multifunctional sandwich panel heat pipes with micro-architected truss cores

    International Nuclear Information System (INIS)

    Roper, Christopher S.

    2011-01-01

    A micro-architected multifunctional structure, a sandwich panel heat pipe with a micro-scale truss core and arterial wick, is modeled and optimized. To characterize multiple functionalities, objective equations are formulated for density, compressive modulus, compressive strength, and maximum heat flux. Multiobjective optimization is used to determine the Pareto-optimal design surfaces, which consist of hundreds of individually optimized designs. The Pareto-optimal surfaces for different working fluids (water, ethanol, and perfluoro(methylcyclohexane)) as well as different micro-scale truss core materials (metal, ceramic, and polymer) are determined and compared. Examination of the Pareto fronts allows comparison of the trade-offs between density, compressive stiffness, compressive strength, and maximum heat flux in the design of multifunctional sandwich panel heat pipes with micro-scale truss cores. Heat fluxes up to 3.0 MW/m² are predicted for silicon carbide truss core heat pipes with water as the working fluid.
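
    Computationally, a Pareto-optimal surface of this kind is just the set of non-dominated designs. A minimal sketch with random stand-in designs (three objectives, all minimized; none of the data relates to the paper's physical models):

```python
import random

def dominates(a, b):
    """a dominates b if it is <= in every objective and strictly < in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(designs):
    """Keep only the designs no other design dominates."""
    return [d for d in designs
            if not any(dominates(other, d) for other in designs if other != d)]

random.seed(1)
# Hypothetical (density, compliance, 1/heat_flux) triples, all to be minimized
designs = [(random.random(), random.random(), random.random()) for _ in range(200)]
front = pareto_front(designs)
assert front                                       # the front is never empty
assert all(not dominates(a, b) for a in front for b in front if a != b)
```

    In practice objectives like stiffness and heat flux are maximized; negating them (or inverting, as with 1/heat_flux above) turns the problem into the pure minimization form this sketch assumes.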

  1. Using the Sadakane Compressed Suffix Tree to Solve the All-Pairs Suffix-Prefix Problem

    Directory of Open Access Journals (Sweden)

    Maan Haj Rachid

    2014-01-01

    Full Text Available The all-pairs suffix-prefix matching problem is a basic problem in string processing. It has an application in the de novo genome assembly task, which is one of the major bioinformatics problems. Due to the large size of the input data, it is crucial to use fast and space efficient solutions. In this paper, we present a space-economical solution to this problem using the generalized Sadakane compressed suffix tree. Furthermore, we present a parallel algorithm to provide more speed for shared memory computers. Our sequential and parallel algorithms are optimized by exploiting features of the Sadakane compressed index data structure. Experimental results show that our solution based on the Sadakane’s compressed index consumes significantly less space than the ones based on noncompressed data structures like the suffix tree and the enhanced suffix array. Our experimental results show that our parallel algorithm is efficient and scales well with increasing number of processors.
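
    For reference, the problem being solved has a simple quadratic-time statement: for every ordered pair of strings, find the length of the longest suffix of one that is a prefix of the other. The compressed-suffix-tree algorithm exists precisely to avoid this brute-force scan; the sketch below (illustrative names and data) is only useful for checking results at small scale.

```python
def longest_suffix_prefix(a, b):
    """Length of the longest suffix of a that equals a prefix of b."""
    for k in range(min(len(a), len(b)), 0, -1):
        if a[-k:] == b[:k]:
            return k
    return 0

def all_pairs(strings):
    """Naive all-pairs suffix-prefix table, keyed by ordered index pairs."""
    return {(i, j): longest_suffix_prefix(strings[i], strings[j])
            for i in range(len(strings)) for j in range(len(strings)) if i != j}

reads = ["ACGTAC", "TACGGA", "GGACGT"]
ov = all_pairs(reads)
print(ov[(0, 1)], ov[(1, 2)], ov[(2, 0)])  # 3 3 4
```

    In genome assembly these overlap lengths become the edge weights of the overlap graph, which is why computing them quickly and in little space matters at scale.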

  2. Adaptive bit plane quadtree-based block truncation coding for image compression

    Science.gov (United States)

    Li, Shenda; Wang, Jin; Zhu, Qing

    2018-04-01

    Block truncation coding (BTC) is a fast image compression technique applied in the spatial domain. Traditional BTC and its variants mainly focus on reducing computational complexity for low-bit-rate compression, at the cost of lower quality of decoded images, especially for images with rich texture. To solve this problem, in this paper, a quadtree-based block truncation coding algorithm combined with adaptive bit plane transmission is proposed. First, the direction of the edge in each block is detected using the Sobel operator. For blocks of minimal size, an adaptive bit plane is utilized to optimize the BTC, depending on the MSE loss when encoded by absolute moment block truncation coding (AMBTC). Extensive experimental results show that our method gains 0.85 dB PSNR on average compared to other state-of-the-art BTC variants, making it desirable for real-time image compression applications.
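
    The AMBTC baseline that the proposed method builds on is simple to state: each block is reduced to a bitmap plus two reconstruction levels, the means of the pixels below and above the block mean. A minimal sketch (illustrative data, not the paper's quadtree or bit-plane logic):

```python
import numpy as np

def ambtc_encode(block):
    """Encode one block as (bitmap, low_level, high_level)."""
    m = block.mean()
    bitmap = block >= m
    hi = block[bitmap].mean()                       # mean of the high group
    lo = block[~bitmap].mean() if (~bitmap).any() else hi
    return bitmap, lo, hi

def ambtc_decode(bitmap, lo, hi):
    """Reconstruct the block from the bitmap and the two levels."""
    return np.where(bitmap, hi, lo)

block = np.array([[10, 12, 200, 210],
                  [11, 13, 205, 215],
                  [12, 10, 198, 202],
                  [ 9, 11, 201, 208]], dtype=float)
bitmap, lo, hi = ambtc_encode(block)
rec = ambtc_decode(bitmap, lo, hi)
print(round(lo, 2), round(hi, 2))  # 11.0 204.88
```

    A 4x4 block thus costs 16 bits of bitmap plus two levels, and the reconstruction preserves the block mean and first absolute moment, which is the property AMBTC is named for.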

  3. 30 CFR 57.13020 - Use of compressed air.

    Science.gov (United States)

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Use of compressed air. 57.13020 Section 57... MINE SAFETY AND HEALTH SAFETY AND HEALTH STANDARDS-UNDERGROUND METAL AND NONMETAL MINES Compressed Air and Boilers § 57.13020 Use of compressed air. At no time shall compressed air be directed toward a...

  4. 30 CFR 56.13020 - Use of compressed air.

    Science.gov (United States)

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Use of compressed air. 56.13020 Section 56... MINE SAFETY AND HEALTH SAFETY AND HEALTH STANDARDS-SURFACE METAL AND NONMETAL MINES Compressed Air and Boilers § 56.13020 Use of compressed air. At no time shall compressed air be directed toward a person...

  5. 30 CFR 77.412 - Compressed air systems.

    Science.gov (United States)

    2010-07-01

    ... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Compressed air systems. 77.412 Section 77.412... for Mechanical Equipment § 77.412 Compressed air systems. (a) Compressors and compressed-air receivers... involving the pressure system of compressors, receivers, or compressed-air-powered equipment shall not be...

  6. Optimality Conditions in Vector Optimization

    CERN Document Server

    Jiménez, Manuel Arana; Lizana, Antonio Rufián

    2011-01-01

    Vector optimization is continuously needed in several science fields, particularly in economy, business, engineering, physics and mathematics. The evolution of these fields depends, in part, on the improvements in vector optimization in mathematical programming. The aim of this Ebook is to present the latest developments in vector optimization. The contributions have been written by some of the most eminent researchers in this field of mathematical programming. The Ebook is considered essential for researchers and students in this field.

  7. Digitized hand-wrist radiographs: comparison of subjective and software-derived image quality at various compression ratios.

    Science.gov (United States)

    McCord, Layne K; Scarfe, William C; Naylor, Rachel H; Scheetz, James P; Silveira, Anibal; Gillespie, Kevin R

    2007-05-01

    The objectives of this study were to assess the effect of JPEG 2000 compression of hand-wrist radiographs on observers' qualitative ratings of image quality and to compare those ratings with a software-derived quantitative image quality index. Fifteen hand-wrist radiographs were digitized and saved as TIFF and JPEG 2000 images at 4 levels of compression (20:1, 40:1, 60:1, and 80:1). The images, including rereads, were viewed by 13 orthodontic residents who rated image quality on a scale of 1 to 5. A quantitative analysis was also performed using readily available software based on the human visual system (Image Quality Measure Computer Program, version 6.2, Mitre, Bedford, Mass). ANOVA was used to determine the optimal compression level. By the quantitative indexes, the JPEG 2000 images had lower quality at all compression ratios than the original TIFF images, and there was excellent correlation (R² > 0.92) between the qualitative and quantitative indexes. Image Quality Measure indexes are more sensitive than subjective assessments in quantifying image degradation with compression, so there is potential for this software-based quantitative method in determining the optimal compression ratio for any image without the use of subjective raters.
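
    The study's quantitative index is a human-visual-system-based measure; as a stand-in, the simplest objective quality measure, PSNR, illustrates how a compressed image is scored against the original. The arrays below are synthetic, not radiographic data.

```python
import numpy as np

def psnr(original, compressed, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images of equal shape."""
    mse = np.mean((original.astype(float) - compressed.astype(float)) ** 2)
    if mse == 0:
        return float('inf')   # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64))
# Simulate mild compression degradation as additive noise (sigma = 5)
noisy = np.clip(img + rng.normal(0, 5, size=img.shape), 0, 255)
print(round(psnr(img, noisy), 1))  # roughly 34 dB for sigma = 5
```

    An objective index like this can be computed for every compression ratio automatically, which is exactly the role the Image Quality Measure software plays in the study.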

  8. Medicinsk Optimering

    DEFF Research Database (Denmark)

    Birkholm, Klavs

    2010-01-01

    A study of the use of medication to optimize concentration, memory, and emotional tone, followed by ethical reflections and recommendations to the political system.

  9. Structural optimization

    CERN Document Server

    MacBain, Keith M

    2009-01-01

    Intends to supplement the engineer's box of analysis and design tools making optimization as commonplace as the finite element method in the engineering workplace. This title introduces structural optimization and the methods of nonlinear programming such as Lagrange multipliers, Kuhn-Tucker conditions, and calculus of variations.

  10. Adiabatic liquid piston compressed air energy storage

    Energy Technology Data Exchange (ETDEWEB)

    Petersen, Tage [Danish Technological Institute, Aarhus (Denmark); Elmegaard, B. [Technical Univ. of Denmark. DTU Mechanical Engineering, Kgs. Lyngby (Denmark); Schroeder Pedersen, A. [Technical Univ. of Denmark. DTU Energy Conversion, Risoe Campus, Roskilde (Denmark)

    2013-01-15

    This project investigates the potential of a Compressed Air Energy Storage (CAES) system. CAES systems are used to store mechanical energy in the form of compressed air. The systems use electricity to drive the compressor at times of low electricity demand, with the purpose of converting the mechanical energy back into electricity at times of high demand. Two such systems are currently in operation: one in Germany (Huntorf) and one in the USA (McIntosh, Alabama). In both cases, an underground cavern is used as a pressure vessel for the storage of the compressed air. Both systems are in the range of 100 MW electrical power output with several hours of production stored as compressed air. In this range, enormous volumes are required, which makes underground caverns the only economical way to design the pressure vessel. Both systems use axial turbine compressors to compress air when charging the system. The compression leads to a significant increase in temperature, and the heat generated is dumped into the ambient. This energy loss results in a low efficiency of the system, and when the air is expanded, the resulting temperature drop reduces the mechanical output of the expansion turbines. To overcome this, fuel is burned to heat up the air prior to expansion, and the fuel consumption causes a significant cost for the storage. Several suggestions have been made to store the compression heat for later use during expansion and thereby avoid the use of fuel (so-called adiabatic CAES units), but no such units are in operation at present. The CAES system investigated in this project uses a different approach to avoid the compression heat loss. The system uses a pre-compressed pressure vessel full of air; a liquid is pumped into the bottom of the vessel when charging, and the same liquid is withdrawn through a turbine when discharging. The liquid thus effectively works as a piston compressing the gas in the vessel, hence the name 'liquid piston'.

  11. Topology Optimization

    DEFF Research Database (Denmark)

    A. Kristensen, Anders Schmidt; Damkilde, Lars

    2007-01-01

    A way to solve the initial design problem, namely finding a form, is so-called topology optimization. The idea is to define a design region and an amount of material. The loads and supports are also defined, and the algorithm finds the optimal material distribution. The objective function dictates the form, and the designer can choose e.g. maximum stiffness, maximum allowable stresses or maximum lowest eigenfrequency. The result of the topology optimization is a relatively coarse map of material layout. This design can be transferred to a CAD system, given the necessary geometric refinements, and then remeshed and reanalysed in order to ensure that the design requirements are met correctly. The output of standard topology optimization seldom has well-defined, sharp contours, leaving the designer with a tedious interpretation, which often results in less optimal structures. In the paper

  12. Dispositional Optimism

    Science.gov (United States)

    Carver, Charles S.; Scheier, Michael F.

    2014-01-01

    Optimism is a cognitive construct (expectancies regarding future outcomes) that also relates to motivation: optimistic people exert effort, whereas pessimistic people disengage from effort. Study of optimism began largely in health contexts, finding positive associations between optimism and markers of better psychological and physical health. Physical health effects likely occur through differences in both health-promoting behaviors and physiological concomitants of coping. Recently, the scientific study of optimism has extended to the realm of social relations: new evidence indicates that optimists have better social connections, partly because they work harder at them. In this review, we examine the myriad ways this trait can benefit an individual, and our current understanding of the biological basis of optimism. PMID:24630971

  13. Compressed-sensing application - Pre-stack kirchhoff migration

    KAUST Repository

    Aldawood, Ali; Hoteit, Ibrahim; Alkhalifah, Tariq Ali

    2013-01-01

    Least-squares migration is a linearized form of waveform inversion that aims to enhance the spatial resolution of the subsurface reflectivity distribution and reduce the migration artifacts due to limited recording aperture, coarse sampling of sources and receivers, and low subsurface illumination. Least-squares migration, however, due to the nature of its minimization process, tends to produce smoothed and dispersed versions of the reflectivity of the subsurface. Assuming that the subsurface reflectivity distribution is sparse, we propose the addition of a non-quadratic L1-norm penalty term on the model space in the objective function. This aims to preserve the sparse nature of the subsurface reflectivity series and enhance resolution. We further use a compressed-sensing algorithm to solve the linear system, which utilizes the sparsity assumption to produce highly resolved migrated images. Thus, the Kirchhoff migration implementation is formulated as a Basis Pursuit denoise (BPDN) problem to obtain the sparse reflectivity model. Applications on synthetic data show that reflectivity models obtained using this compressed-sensing algorithm are highly accurate with optimal resolution.
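
    A BPDN-style sparse recovery problem of the kind described can be solved with iterative soft thresholding (ISTA). The sketch below uses a random matrix in place of the Kirchhoff modeling operator and a synthetic sparse reflectivity; all names and parameters are illustrative, not the authors' solver.

```python
import numpy as np

def ista(A, y, lam=2.0, n_iter=2000):
    """Minimize ||A @ x - y||^2 / 2 + lam * ||x||_1 by iterative soft thresholding."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - A.T @ (A @ x - y) / L          # gradient step on the quadratic term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((40, 100))             # stand-in for the modeling operator
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [1.5, -2.0, 1.0]         # synthetic sparse "reflectivity"
y = A @ x_true                                 # underdetermined measurements
x_hat = ista(A, y)
print(sorted(np.argsort(-np.abs(x_hat))[:3].tolist()))  # positions of the recovered spikes
```

    The L1 penalty is what keeps the recovered model spiky rather than smoothed and dispersed, mirroring the role of the sparsity constraint in the migration formulation.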

  14. Advanced and standardized evaluation of neurovascular compression syndromes

    Science.gov (United States)

    Hastreiter, Peter; Vega Higuera, Fernando; Tomandl, Bernd; Fahlbusch, Rudolf; Naraghi, Ramin

    2004-05-01

    Neurovascular compression syndromes, caused by contact between vascular structures and the root entry or exit zone of cranial nerves, are associated with different neurological diseases (trigeminal neuralgia, hemifacial spasm, vertigo, glossopharyngeal neuralgia) and show a relation to essential arterial hypertension. As presented previously, semi-automatic segmentation and 3D visualization of strongly T2-weighted MR volumes has proven to be an effective strategy for a better spatial understanding prior to operative microvascular decompression. After explicit segmentation of coarse structures, the tiny target nerves and vessels contained in the area of cerebrospinal fluid are segmented implicitly using direct volume rendering. However, with this strategy the delineation of vessels in the vicinity of the brainstem, and of those at the border of the segmented CSF subvolume, remains critical. Therefore, we suggest registration with MR angiography and introduce consecutive fusion after semi-automatic labeling of the vascular information. Additionally, we present an approach for automatic 3D visualization and video generation based on predefined flight paths. Thereby, a standardized evaluation of the fused image data is supported and the visualization results are optimally prepared for intraoperative application. Overall, our new strategy contributes to a significantly improved 3D representation and evaluation of neurovascular compression syndromes. Its value for diagnosis and surgery is demonstrated with various clinical examples.

  15. Magnetic resonance image compression using scalar-vector quantization

    Science.gov (United States)

    Mohsenian, Nader; Shahri, Homayoun

    1995-12-01

    A new coding scheme based on the scalar-vector quantizer (SVQ) is developed for compression of medical images. SVQ is a fixed-rate encoder and its rate-distortion performance is close to that of optimal entropy-constrained scalar quantizers (ECSQs) for memoryless sources. The use of a fixed-rate quantizer is expected to eliminate some of the complexity issues of using variable-length scalar quantizers. When transmission of images over noisy channels is considered, our coding scheme does not suffer from error propagation which is typical of coding schemes which use variable-length codes. For a set of magnetic resonance (MR) images, coding results obtained from SVQ and ECSQ at low bit-rates are indistinguishable. Furthermore, our encoded images are perceptually indistinguishable from the original, when displayed on a monitor. This makes our SVQ based coder an attractive compression scheme for picture archiving and communication systems (PACS), currently under consideration for an all digital radiology environment in hospitals, where reliable transmission, storage, and high fidelity reconstruction of images are desired.
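    The SVQ construction itself is not detailed in the abstract. Its simpler relative, a fixed-rate uniform scalar quantizer, can be sketched as follows (bit depth, range, and values are illustrative, not taken from the paper):

```python
def quantize(x, xmin, xmax, bits):
    """Fixed-rate uniform scalar quantizer: map x to one of 2**bits bin indices."""
    levels = 2 ** bits
    step = (xmax - xmin) / levels
    return min(int((x - xmin) / step), levels - 1)  # clamp the top edge

def dequantize(idx, xmin, xmax, bits):
    """Reconstruct at the bin midpoint."""
    step = (xmax - xmin) / 2 ** bits
    return xmin + (idx + 0.5) * step
```

    Every sample costs exactly `bits` bits, which is the fixed-rate property the abstract highlights: a single corrupted index affects only one sample, so errors cannot propagate the way they can in variable-length codes. The reconstruction error is bounded by half a bin width.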

  16. Design Issues of the Pre-Compression Rings of Iter

    Science.gov (United States)

    Knaster, J.; Baker, W.; Bettinali, L.; Jong, C.; Mallick, K.; Nardi, C.; Rajainmaki, H.; Rossi, P.; Semeraro, L.

    2010-04-01

    The pre-compression system is the keystone of ITER. A centripetal force of ˜30 MN will be applied under cryogenic conditions at the top and bottom of each TF coil. It will prevent the `breathing effect' caused by the bursting forces occurring during plasma operation, which would otherwise affect the machine design life of 30000 cycles. Different alternatives have been studied over the years. Two major design requirements limit the engineering possibilities: 1) the limited available space and 2) the need to hamper eddy currents flowing in the structures. Six unidirectionally wound glass-fibre composite rings (˜5 m diameter and ˜300 mm cross section) are the final design choice. The rings will withstand the maximum hoop stresses during machine operation. The present paper summarizes the pre-compression ring R&D carried out over several years. In particular, we address the composite choice and mechanical characterization, the assessment of creep and stress-relaxation phenomena, sub-sized ring testing, and the optimal ring fabrication processes that have led to the present final design.

  17. Compressed Natural Gas Technology for Alternative Fuel Power Plants

    Science.gov (United States)

    Pujotomo, Isworo

    2018-02-01

    Gas has great potential to be converted into electrical energy. Indonesia holds natural gas reserves sufficient for up to 50 years into the future, but optimization of converting that gas into electricity is low, and it is unable to compete with coal. Gas converted directly into electricity yields low electrical efficiency (25%), and the raw material is more expensive than coal. Much of the steam from gas turbines is wasted, hence the need to utilize the exhaust gas from gas turbine units. Combined-cycle technology (gas and steam power plants) is a solution for improving electrical efficiency. Among thermal units, the combined-cycle power plant has a high electrical efficiency (45%). A weakness of current gas and steam power plants is that peak load is still covered using fuel oil. Compressed natural gas (CNG) technology may be used to accommodate the gas with little land use. CNG is stored under great pressure, up to 250 bar, in contrast to gas converted directly into electricity in a power plant at only 27 bar. The stored CNG is then used as fuel to cover the peak load. The CNG conversion system at the power plant generally only uses compressed gas at this greater pressure and requires a small land footprint.

  18. Cyclic compression maintains viability and induces chondrogenesis of human mesenchymal stem cells in fibrin gel scaffolds.

    Science.gov (United States)

    Pelaez, Daniel; Huang, Chun-Yuh Charles; Cheung, Herman S

    2009-01-01

    Mechanical loading has long been shown to modulate cartilage-specific extracellular matrix synthesis. With joint motion, cartilage can experience mechanical loading in the form of compressive, tensile or shearing load, and hydrostatic pressure. Recent studies have demonstrated the capacity of unconfined cyclic compression to induce chondrogenic differentiation of human mesenchymal stem cell (hMSC) in agarose culture. However, the use of a nonbiodegradable material such as agarose limits the applicability of these constructs. Of the possible biocompatible materials available for tissue engineering, fibrin is a natural regenerative scaffold, which possesses several desired characteristics including a controllable degradation rate and low immunogenicity. The objective of the present study was to determine the capability of fibrin gels for supporting chondrogenesis of hMSCs under cyclic compression. To optimize the system, three concentrations of fibrin gel (40, 60, and 80 mg/mL) and three different stimulus frequencies (0.1, 0.5, and 1.0 Hz) were used to examine the effects of cyclic compression on viability, proliferation and chondrogenic differentiation of hMSCs. Our results show that cyclic compression (10% strain) at frequencies >0.5 Hz and gel concentration of 40 mg/mL fibrinogen appears to maintain cellular viability within scaffolds. Similarly, variations in gel component concentration and stimulus frequency can be modified such that a significant chondrogenic response can be achieved by hMSC in fibrin constructs after 8 h of compression spread out over 2 days. This study demonstrates the suitability of fibrin gel for supporting the cyclic compression-induced chondrogenesis of mesenchymal stem cells.

  19. Parallelization of one image compression method. Wavelet, Transform, Vector Quantization and Huffman Coding

    International Nuclear Information System (INIS)

    Moravie, Philippe

    1997-01-01

    Today, in the digitized satellite image domain, the need for high dimensions increases considerably. To transmit or store such images (more than 6000 by 6000 pixels), we need to reduce their data volume, and so we have to use real-time image compression techniques. The large amount of computation required by image compression algorithms prohibits the use of common sequential processors, in favor of parallel computers. The study presented here deals with the parallelization of a very efficient image compression scheme based on three techniques: Wavelet Transform (WT), Vector Quantization (VQ) and Entropic Coding (EC). First, we studied and implemented the parallelism of each algorithm, in order to determine the architectural characteristics needed for real-time image compression. Then, we defined eight parallel architectures: 3 for the Mallat algorithm (WT), 3 for Tree-Structured Vector Quantization (VQ) and 2 for Huffman Coding (EC). As our system has to be multi-purpose, we chose 3 global architectures from all of the 3x3x2 systems available. Because, for technological reasons, real-time is not reached in every case (for all the compression parameter combinations), we also defined and evaluated two algorithmic optimizations: fixed-point precision and merging entropic coding into vector quantization. As a result, we defined a new multi-purpose multi-SMIMD parallel machine able to compress digitized satellite images in real time. The question of the best-suited architecture for real-time image compression was answered by presenting 3 parallel machines, among which one is multi-purpose, embedded, and might be used for other applications on board. (author) [fr]
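    The Huffman stage of such a scheme can be sketched with a textbook heap-based code construction (a generic sequential version, not the authors' parallel implementation):

```python
import heapq
from collections import Counter

def huffman_codes(data):
    """Build a Huffman code table (symbol -> bit string) from frequencies."""
    freq = Counter(data)
    if len(freq) == 1:                       # degenerate single-symbol input
        return {next(iter(freq)): "0"}
    # heap entries: [count, tie-break index, partial code table]
    heap = [[n, i, {s: ""}] for i, (s, n) in enumerate(freq.items())]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)             # two least-frequent subtrees
        hi = heapq.heappop(heap)
        table = {s: "0" + c for s, c in lo[2].items()}
        table.update({s: "1" + c for s, c in hi[2].items()})
        heapq.heappush(heap, [lo[0] + hi[0], lo[1], table])
    return heap[0][2]

codes = huffman_codes("aaabbc")
```

    Frequent symbols receive shorter bit strings, so the total encoded length drops below a fixed-length encoding; this is the entropic-coding step that the paper merges into the vector-quantization stage as one of its optimizations.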

  20. A comparative experimental study on engine operating on premixed charge compression ignition and compression ignition mode

    Directory of Open Access Journals (Sweden)

    Bhiogade Girish E.

    2017-01-01

    New combustion concepts have recently been developed with the purpose of tackling the problem of the high emission levels of traditional direct-injection Diesel engines. A good example is premixed charge compression ignition combustion, a strategy in which early injection is used, causing a burning process in which the fuel burns in the premixed condition. In compression ignition engines, soot (particulate matter) and NOx emissions remain a largely unsolved issue. Premixed charge compression ignition is one of the most promising solutions to combine the advantages of both spark ignition and compression ignition combustion modes. It gives thermal efficiency close to that of compression ignition engines and simultaneously resolves the associated issues of high NOx and particulate matter. Premixing the air and preparing the fuel is the challenging part of achieving premixed charge compression ignition combustion. In the present experimental study a diesel vaporizer is used to achieve premixed charge compression ignition combustion. The vaporized diesel fuel was mixed with air to form a premixed charge and inducted into the cylinder during the intake stroke. Low diesel volatility remains the main obstacle in preparing the premixed air-fuel mixture. Exhaust gas re-circulation can be used to control the rate of heat release. The objective of this study is to reduce exhaust emission levels while maintaining thermal efficiency close to that of the compression ignition engine.

  1. Modeling the mechanical and compression properties of polyamide/elastane knitted fabrics used in compression sportswear

    NARCIS (Netherlands)

    Maqsood, Muhammad

    2016-01-01

    A compression sportswear fabric should have excellent stretch and recovery properties in order to improve the performance of the sportsman. The objective of this study was to investigate the effect of elastane linear density and loop length on the stretch, recovery, and compression properties of the

  2. Effect of evaporator temperature on vapor compression refrigeration system

    Directory of Open Access Journals (Sweden)

    Abdullah A.A.A. Al-Rashed

    2011-12-01

    This paper presents a comparative evaluation of R600a (isobutane), R290 (propane), R134a, R22, R410A, and R32 for an optimized finned-tube evaporator, and analyzes the effect of the evaporator on the system coefficient of performance (COP). Results concerning the response of a refrigeration system simulation software to an increase in the amount of oil flowing with the refrigerant are presented. It is shown that there are optima of the apparent superheat value for which either the exchanged heat or the refrigeration coefficient of performance (COP) is maximized; consequently, it is not possible to optimize both the refrigeration COP and the evaporator effect simultaneously. The obtained evaporator optimization results were incorporated in a conventional analysis of the vapor compression system. For a theoretical cycle analysis without accounting for evaporator effects, the COP spread for the studied refrigerants was as high as 11.7%. For cycle simulations including evaporator effects, the COP of R290 was better than that of R22 by up to 3.5%, while the remaining refrigerants performed approximately within a 2% COP band of the R22 baseline for the two condensing temperatures considered.

  3. Compression Characteristics of Solid Wastes as Backfill Materials

    OpenAIRE

    Meng Li; Jixiong Zhang; Rui Gao

    2016-01-01

    A self-made large-diameter compression steel chamber and a SANS material testing machine were chosen to perform a series of compression tests in order to fully understand the compression characteristics of differently graded filling gangue samples. The relationship between the stress-deformation modulus and stress-compression degree was analyzed comparatively. The results showed that, during compression, the deformation modulus of gangue grew linearly with stress, the overall relationship bet...

  4. Developing a dynamic control system for mine compressed air networks

    OpenAIRE

    Van Heerden, S.W.; Pelzer, R.; Marais, J.H.

    2014-01-01

    Mines in general, make use of compressed air systems for daily operational activities. Compressed air on mines is traditionally distributed via compressed air ring networks where multiple shafts are supplied with compressed air from an integral system. These compressed air networks make use of a number of compressors feeding the ring from various locations in the network. While these mines have sophisticated control systems to control these compressors, they are not dynamic systems. Compresso...

  5. Space charge effects and aberrations on electron pulse compression in a spherical electrostatic capacitor.

    Science.gov (United States)

    Yu, Lei; Li, Haibo; Wan, Weishi; Wei, Zheng; Grzelakowski, Krzysztof P; Tromp, Rudolf M; Tang, Wen-Xin

    2017-12-01

    The effects of space charge, aberrations and relativity on temporal compression are investigated for a compact spherical electrostatic capacitor (α-SDA). By employing the three-dimensional (3D) field simulation and the 3D space charge model based on numerical General Particle Tracer and SIMION, we map the compression efficiency for a wide range of initial beam size and single-pulse electron number and determine the optimum conditions of electron pulses for the most effective compression. The results demonstrate that both space charge effects and aberrations prevent the compression of electron pulses into the sub-ps region if the electron number and the beam size are not properly optimized. Our results suggest that α-SDA is an effective compression approach for electron pulses under the optimum conditions. It may serve as a potential key component in designing future time-resolved electron sources for electron diffraction and spectroscopy experiments. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. Research on compressive sensing reconstruction algorithm based on total variation model

    Science.gov (United States)

    Gao, Yu-xuan; Sun, Huayan; Zhang, Tinghua; Du, Lin

    2017-12-01

    Compressed sensing breaks through the limit of the Nyquist sampling theorem and provides a strong theoretical foundation for carrying out compressive sampling of image signals. In imaging procedures using compressed sensing theory, not only is the required storage space reduced, but the demand on detector resolution is also greatly relaxed. By exploiting the sparsity of the image signal and solving the mathematical model of inverse reconstruction, super-resolution imaging can be realized. The reconstruction algorithm is the most critical part of compressive sensing and largely determines the accuracy of the reconstructed image. A reconstruction algorithm based on the total variation (TV) model is well suited to the compressive reconstruction of two-dimensional images, and better edge information can be obtained. To verify the performance of the algorithm, the reconstruction results of the TV-based algorithm are simulated and analyzed under different coding modes to verify the stability of the algorithm, and typical reconstruction algorithms are compared in the same coding mode. On the basis of the minimum total variation algorithm, an augmented Lagrangian function term is added and the optimal value is solved by the alternating direction method. Experimental results show that, compared with traditional classical TV-based algorithms, the proposed reconstruction algorithm has great advantages and can quickly and accurately recover the target image at low measurement rates.
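    The TV model can be illustrated with a minimal 1-D sketch: gradient descent on a smoothed total-variation objective. This is not the paper's augmented-Lagrangian/alternating-direction solver; the signal, smoothing parameter, and step size are invented for illustration.

```python
import math

def tv_denoise(y, lam=0.3, eps=1e-2, step=0.05, iters=4000):
    """Gradient descent on the smoothed 1-D TV model
    0.5 * sum_i (x_i - y_i)^2 + lam * sum_i sqrt((x_{i+1} - x_i)^2 + eps)."""
    x = list(y)
    n = len(x)
    for _ in range(iters):
        g = [xi - yi for xi, yi in zip(x, y)]     # data-fidelity gradient
        for i in range(n - 1):
            d = x[i + 1] - x[i]
            w = lam * d / math.sqrt(d * d + eps)  # smoothed-TV gradient term
            g[i] -= w
            g[i + 1] += w
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x

# step signal corrupted by deterministic alternating "noise"
clean = [0.0] * 8 + [4.0] * 8
noisy = [c + (0.5 if i % 2 else -0.5) for i, c in enumerate(clean)]
denoised = tv_denoise(noisy)
```

    TV regularization suppresses the small oscillations while largely preserving the big jump, which is why the abstract credits the TV model with good edge recovery.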

  7. Results of subscale MTF compression experiments

    Science.gov (United States)

    Howard, Stephen; Mossman, A.; Donaldson, M.; Fusion Team, General

    2016-10-01

    In magnetized target fusion (MTF) a magnetized plasma torus is compressed in a time shorter than its own energy confinement time, thereby heating to fusion conditions. Understanding plasma behavior and scaling laws is needed to advance toward a reactor-scale demonstration. General Fusion is conducting a sequence of subscale experiments of compact toroid (CT) plasmas being compressed by chemically driven implosion of an aluminum liner, providing data on several key questions. CT plasmas are formed by a coaxial Marshall gun, with magnetic fields supported by internal plasma currents and eddy currents in the wall. Configurations that have been compressed so far include decaying and sustained spheromaks and an ST that is formed into a pre-existing toroidal field. Diagnostics measure B, ne, visible and x-ray emission, Ti and Te. Before compression the CT has an energy of 10kJ magnetic, 1 kJ thermal, with Te of 100 - 200 eV, ne 5x1020 m-3. Plasma was stable during a compression factor R0/R >3 on best shots. A reactor scale demonstration would require 10x higher initial B and ne but similar Te. Liner improvements have minimized ripple, tearing and ejection of micro-debris. Plasma facing surfaces have included plasma-sprayed tungsten, bare Cu and Al, and gettering with Ti and Li.

  8. Lightweight SIP/SDP compression scheme (LSSCS)

    Science.gov (United States)

    Wu, Jian J.; Demetrescu, Cristian

    2001-10-01

    In UMTS, new IP-based services with tight delay constraints, such as IP multimedia and interactive services, will be deployed over the W-CDMA air interface. To integrate wireline and wireless IP services, the 3GPP standards forum adopted the Session Initiation Protocol (SIP) as the call control protocol for UMTS Release 5, which will implement next-generation, all-IP networks for real-time QoS services. In its current form the SIP protocol is not suitable for wireless transmission due to its large message size, which will either need a big radio pipe for transmission or take far longer to transmit than the current GSM Call Control (CC) message sequence. In this paper we present a novel compression algorithm called the Lightweight SIP/SDP Compression Scheme (LSSCS), which acts at the SIP application layer and therefore removes the information redundancy before it is sent to the network and transport layers. A binary octet-aligned header is added to the compressed SIP/SDP message before sending it to the network layer. The receiver uses this binary header, as well as the pre-cached information, to regenerate the original SIP/SDP message. The key features of the LSSCS compression scheme are presented in this paper along with implementation examples. It is shown that this compression algorithm makes SIP transmission efficient over the radio interface without losing SIP generality and flexibility.
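    LSSCS itself relies on pre-cached context at the application layer and is not described here in enough detail to reproduce. As a rough illustration of how much textual redundancy a SIP message carries, a generic deflate pass over a sample INVITE (message text adapted from RFC 3261-style examples; this is not the LSSCS algorithm):

```python
import zlib

# a representative SIP INVITE request (RFC 3261-style example text)
sip_invite = (
    "INVITE sip:bob@biloxi.example.com SIP/2.0\r\n"
    "Via: SIP/2.0/UDP pc33.atlanta.example.com;branch=z9hG4bK776asdhds\r\n"
    "Max-Forwards: 70\r\n"
    "To: Bob <sip:bob@biloxi.example.com>\r\n"
    "From: Alice <sip:alice@atlanta.example.com>;tag=1928301774\r\n"
    "Call-ID: a84b4c76e66710@pc33.atlanta.example.com\r\n"
    "CSeq: 314159 INVITE\r\n"
    "Contact: <sip:alice@pc33.atlanta.example.com>\r\n"
    "Content-Type: application/sdp\r\n"
    "Content-Length: 0\r\n"
    "\r\n"
).encode()

compressed = zlib.compress(sip_invite, 9)   # lossless deflate round-trip
```

    Repeated header names and host strings compress well even with a generic codec; an application-layer scheme like LSSCS does better still by caching the static parts at both ends and transmitting only the differences.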

  9. Compressed Air/Vacuum Transportation Techniques

    Science.gov (United States)

    Guha, Shyamal

    2011-03-01

    The general theory of compressed air/vacuum transportation will be presented. In this transportation, a vehicle (such as an automobile or a rail car) is powered either by compressed air or by air at near-vacuum pressure. Four versions of such transportation are feasible. In all versions, a `c-shaped' plastic or ceramic pipe lies buried a few inches under the ground surface. This pipe carries compressed air or air at near-vacuum pressure. In type I transportation, a vehicle draws compressed air (or vacuum) from this buried pipe. Using a turbine or a reciprocating air cylinder, mechanical power is generated from the compressed air (or from the vacuum). This mechanical power, transferred to the wheels of an automobile (or a rail car), drives the vehicle. In type II-IV transportation techniques, a horizontal force is generated inside the plastic (or ceramic) pipe. A set of vertical and horizontal steel bars is used to transmit this force to the automobile on the road (or to a rail car on the rail track). The proposed transportation system has the following merits: it is virtually accident-free, highly energy-efficient, pollution-free, and it will not contribute to carbon dioxide emissions. Some developmental work on this transportation will be needed before it can be used by the traveling public. The entire transportation system could be computer-controlled.

  10. Iris Recognition: The Consequences of Image Compression

    Directory of Open Access Journals (Sweden)

    Bishop, Daniel A.

    2010-01-01

    Iris recognition for human identification is one of the most accurate biometrics, and its employment is expanding globally. The use of portable iris systems, particularly in law enforcement applications, is growing. In many of these applications, the portable device may be required to transmit an iris image or template over a narrow-bandwidth communication channel. Typically, a full resolution image (e.g., VGA) is desired to ensure sufficient pixels across the iris to be confident of accurate recognition results. To minimize the time to transmit a large amount of data over a narrow-bandwidth communication channel, image compression can be used to reduce the file size of the iris image. In other applications, such as the Registered Traveler program, an entire iris image is stored on a smart card, but only 4 kB is allowed for the iris image. For this type of application, image compression is also the solution. This paper investigates the effects of image compression on recognition system performance using a commercial version of the Daugman iris2pi algorithm along with JPEG-2000 compression, and links these to image quality. Using the ICE 2005 iris database, we find that even in the face of significant compression, recognition performance is minimally affected.

  11. Iris Recognition: The Consequences of Image Compression

    Science.gov (United States)

    Ives, Robert W.; Bishop, Daniel A.; Du, Yingzi; Belcher, Craig

    2010-12-01

    Iris recognition for human identification is one of the most accurate biometrics, and its employment is expanding globally. The use of portable iris systems, particularly in law enforcement applications, is growing. In many of these applications, the portable device may be required to transmit an iris image or template over a narrow-bandwidth communication channel. Typically, a full resolution image (e.g., VGA) is desired to ensure sufficient pixels across the iris to be confident of accurate recognition results. To minimize the time to transmit a large amount of data over a narrow-bandwidth communication channel, image compression can be used to reduce the file size of the iris image. In other applications, such as the Registered Traveler program, an entire iris image is stored on a smart card, but only 4 kB is allowed for the iris image. For this type of application, image compression is also the solution. This paper investigates the effects of image compression on recognition system performance using a commercial version of the Daugman iris2pi algorithm along with JPEG-2000 compression, and links these to image quality. Using the ICE 2005 iris database, we find that even in the face of significant compression, recognition performance is minimally affected.

  12. Wave energy devices with compressible volumes.

    Science.gov (United States)

    Kurniawan, Adi; Greaves, Deborah; Chaplin, John

    2014-12-08

    We present an analysis of wave energy devices with air-filled compressible submerged volumes, where variability of volume is achieved by means of a horizontal surface free to move up and down relative to the body. An analysis of bodies without power take-off (PTO) systems is first presented to demonstrate the positive effects a compressible volume could have on the body response. Subsequently, two compressible device variations are analysed. In the first variation, the compressible volume is connected to a fixed volume via an air turbine for PTO. In the second variation, a water column separates the compressible volume from another volume, which is fitted with an air turbine open to the atmosphere. Both floating and bottom-fixed axisymmetric configurations are considered, and linear analysis is employed throughout. Advantages and disadvantages of each device are examined in detail. Some configurations with displaced volumes less than 2000 m³ and with constant turbine coefficients are shown to be capable of achieving 80% of the theoretical maximum absorbed power over a wave period range of about 4 s.

  13. Lagrangian statistics in compressible isotropic homogeneous turbulence

    Science.gov (United States)

    Yang, Yantao; Wang, Jianchun; Shi, Yipeng; Chen, Shiyi

    2011-11-01

    In this work we conducted a Direct Numerical Simulation (DNS) of forced compressible isotropic homogeneous turbulence and investigated the flow statistics from the Lagrangian point of view, namely, the statistics are computed following the passive tracer trajectories. The numerical method combined the Eulerian field solver developed by Wang et al. (2010, J. Comp. Phys., 229, 5257-5279) with a Lagrangian module for tracking the tracers and recording the data. The Lagrangian probability density functions (p.d.f.'s) have then been calculated for both kinetic and thermodynamic quantities. In order to isolate the shearing part from the compressing part of the flow, we employed the Helmholtz decomposition to decompose the flow field (mainly the velocity field) into solenoidal and compressive parts. The solenoidal part was compared with the incompressible case, while the compressibility effect showed up in the compressive part. The Lagrangian structure functions and cross-correlations between various quantities will also be discussed. This work was supported in part by China's Turbulence Program under Grant No. 2009CB724101.
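    The defining property of the Helmholtz decomposition used above, zero divergence of the solenoidal part and zero curl of the compressive part, can be checked on a toy periodic 2-D grid with central differences. This is purely illustrative (grid size and potentials invented), not the spectral decomposition applied to DNS fields.

```python
import math

N = 16  # periodic grid size

def dx(f):
    """Central difference along the first index, periodic wrap."""
    return [[(f[(i + 1) % N][j] - f[(i - 1) % N][j]) / 2.0 for j in range(N)]
            for i in range(N)]

def dy(f):
    """Central difference along the second index, periodic wrap."""
    return [[(f[i][(j + 1) % N] - f[i][(j - 1) % N]) / 2.0 for j in range(N)]
            for i in range(N)]

# arbitrary smooth scalar potentials on the grid
psi = [[math.sin(2 * math.pi * i / N) * math.cos(2 * math.pi * j / N)
        for j in range(N)] for i in range(N)]
phi = [[math.cos(2 * math.pi * (i + 2 * j) / N)
        for j in range(N)] for i in range(N)]

# solenoidal part from a streamfunction: u = d(psi)/dy, v = -d(psi)/dx
us, vs = dy(psi), [[-g for g in row] for row in dx(psi)]
# compressive part from a potential: u = d(phi)/dx, v = d(phi)/dy
uc, vc = dx(phi), dy(phi)

div_s = [[a + b for a, b in zip(r1, r2)] for r1, r2 in zip(dx(us), dy(vs))]
curl_c = [[a - b for a, b in zip(r1, r2)] for r1, r2 in zip(dx(vc), dy(uc))]
```

    Because central differences commute, the discrete divergence of the streamfunction field and the discrete curl of the gradient field vanish to rounding error, which is exactly the separation of "shearing" from "compressing" motion the abstract exploits.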

  14. The compressed word problem for groups

    CERN Document Server

    Lohrey, Markus

    2014-01-01

    The Compressed Word Problem for Groups provides a detailed exposition of known results on the compressed word problem, emphasizing efficient algorithms for the compressed word problem in various groups. The author presents the necessary background along with the most recent results on the compressed word problem to create a cohesive self-contained book accessible to computer scientists as well as mathematicians. Readers will quickly reach the frontier of current research, which makes the book especially appealing for students looking for a currently active research topic at the intersection of group theory and computer science. The word problem, introduced in 1910 by Max Dehn, is one of the most important decision problems in group theory. For many groups, highly efficient algorithms for the word problem exist. In recent years, a new technique based on data compression has been developed to provide more efficient algorithms for word problems, by representing long words over group generators in a compres...

  15. Experimental investigation and empirical modelling of FDM process for compressive strength improvement

    Directory of Open Access Journals (Sweden)

    Anoop K. Sood

    2012-01-01

    Fused deposition modelling (FDM) is gaining a distinct advantage in manufacturing industries because of its ability to manufacture parts with complex shapes without any tooling requirement or human interface. The properties of FDM-built parts exhibit a high dependence on process parameters and can be improved by setting the parameters at suitable levels. The anisotropic and brittle nature of the built part makes it important to study the effect of process parameters on the resistance to compressive loading, in order to enhance the service life of functional parts. Hence, the present work focuses on an extensive study to understand the effect of five important parameters, namely layer thickness, part build orientation, raster angle, raster width and air gap, on the compressive stress of test specimens. The study not only provides insight into the complex dependency of compressive stress on process parameters but also develops a statistically validated predictive equation. The equation is used to find the optimal parameter setting through quantum-behaved particle swarm optimization (QPSO). As the FDM process is a highly complex one and the process parameters influence the responses in a nonlinear manner, compressive stress is also predicted using an artificial neural network (ANN) and compared with the predictive equation.
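    The quantum-behaved variant (QPSO) used in the paper is not specified in the abstract; the classical particle swarm mechanism it builds on can be sketched as follows, minimizing a toy sphere function (all parameters and the objective are illustrative):

```python
import random

def pso(f, bounds, n_particles=20, iters=300, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Classical particle swarm optimization: minimize f over box bounds."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                   # per-particle best position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=pbest_val.__getitem__)
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm-wide best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# toy usage: minimize the 2-D sphere function over [-5, 5]^2
best, best_val = pso(lambda p: sum(v * v for v in p), [(-5.0, 5.0)] * 2)
```

    In the paper's setting, `f` would be the statistically validated predictive equation for compressive stress (negated for maximization) and `bounds` the admissible ranges of the five process parameters.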

  16. Microarray BASICA: Background Adjustment, Segmentation, Image Compression and Analysis of Microarray Images

    Directory of Open Access Journals (Sweden)

    Jianping Hua

    2004-01-01

    This paper presents microarray BASICA: an integrated image processing tool for background adjustment, segmentation, image compression, and analysis of cDNA microarray images. BASICA uses a fast Mann-Whitney test-based algorithm to segment cDNA microarray images, and performs postprocessing to eliminate segmentation irregularities. The segmentation results, along with the foreground and background intensities obtained with the background adjustment, are then used for independent compression of the foreground and background. We introduce a new distortion measurement for cDNA microarray image compression and devise a coding scheme by modifying the embedded block coding with optimized truncation (EBCOT) algorithm (Taubman, 2000) to achieve optimal rate-distortion performance in lossy coding while still maintaining outstanding lossless compression performance. Experimental results show that the bit rate required to ensure sufficiently accurate gene expression measurement varies and depends on the quality of the cDNA microarray images. For homogeneously hybridized cDNA microarray images, BASICA is able to provide, at a bit rate as low as 5 bpp, gene expression data that are 99% in agreement with those of the original 32 bpp images.
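    The Mann-Whitney statistic underlying the segmentation step can be sketched directly from its pairwise definition (a generic O(n*m) version; BASICA's fast algorithm is not reproduced here):

```python
def mann_whitney_u(a, b):
    """Mann-Whitney U statistic: number of pairs (x from a, y from b)
    with x > y, counting ties as one half."""
    u = 0.0
    for x in a:
        for y in b:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u
```

    A value of U far from `len(a) * len(b) / 2` indicates the two pixel samples come from different intensity populations, which is the decision a rank-test-based segmenter makes when separating spot foreground from background.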

  17. Structural changes in latosols of the cerrado region: II - soil compressive behavior and modeling of additional compaction

    Directory of Open Access Journals (Sweden)

    Eduardo da Costa Severiano

    2011-06-01

    Full Text Available Currently in Brazil, as in other parts of the world, there is great concern over the increase in degraded agricultural soil, which is mostly related to the occurrence of soil compaction. Although soil texture is recognized as a very important component of soil compressive behavior, there are few studies that quantify its influence on the structural changes of Latosols in the Brazilian Cerrado region. This study aimed to evaluate structural changes and the compressive behavior of Latosols in Rio Verde, Goiás, through the modeling of additional soil compaction. The study was carried out using five Latosols with very different textures, under different soil compaction levels. Water retention curves, soil compression curves and bearing capacity models were determined from undisturbed samples collected from the B horizons. Results indicated that clayey and very clayey Latosols were more susceptible to compression than medium-textured soils. Soil compression curves at density values associated with edaphic functions were used to determine the beneficial pressure (σb), i.e., the pressure with optimal water retention, and the critical pressure (σcrMAC), i.e., the pressure at which macroporosity falls below critical levels. These pressure values were higher than the preconsolidation pressure (σp) and are therefore characterized as additional compaction. Based on the compressive behavior of these Latosols, it can be concluded that the combination of preconsolidation pressure, beneficial pressure and critical pressure allows a better understanding of the compression processes of Latosols.
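Load-bearing capacity models of the kind fitted in such studies commonly take the form log10(σp) = a + b·θ, relating preconsolidation pressure σp to water content θ. A minimal least-squares fit of that form, on entirely hypothetical data, might look like this (the model form is the common convention in the soil-mechanics literature, not necessarily the exact one used by these authors):

```python
import math

def fit_bearing_capacity(theta, sigma_p):
    """Least-squares fit of log10(sigma_p) = a + b * theta.

    theta    : water contents (e.g. m3/m3)
    sigma_p  : preconsolidation pressures (e.g. kPa)
    Returns the intercept a and slope b of the log-linear model.
    """
    n = len(theta)
    y = [math.log10(s) for s in sigma_p]
    mx, my = sum(theta) / n, sum(y) / n
    b = sum((x - mx) * (yy - my) for x, yy in zip(theta, y)) / \
        sum((x - mx) ** 2 for x in theta)
    a = my - b * mx
    return a, b

def predict_sigma_p(a, b, theta):
    """Predicted preconsolidation pressure at water content theta."""
    return 10 ** (a + b * theta)
```

Comparing a field pressure against `predict_sigma_p(...)` is how one would flag loading beyond σp, i.e. additional compaction.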

  18. Compression method of anastomosis of large intestines by implants with memory of shape: alternative to traditional sutures

    Directory of Open Access Journals (Sweden)

    F. Sh. Aliev

    2015-01-01

    Full Text Available Research objective. To prove experimentally the possibility of forming compression colonic anastomoses using nickel-titanium devices, in comparison with traditional methods of anastomosis. Materials and methods. In experimental studies, the quality of compression anastomoses of the colon was compared with that of sutured and stapled anastomoses. Three experimental groups of mongrel dogs were formed: in the 1st series (n = 30), compression anastomoses with nickel-titanium implants were formed; in the 2nd (n = 25), circular stapled anastomoses; in the 3rd (n = 25), ligature anastomoses by the Mateshuk–Lambert method. In the experiment, the physical durability, elasticity, biological tightness and morphogenesis of the colonic anastomoses were studied. Results. The optimal sizes of the compression devices are 32 × 18 and 28 × 15 mm with a wire diameter of 2.2 mm; the winding compression force was 740 ± 180 g/mm2. The compression suture has higher physical durability compared to the stapled (W = –33.0; p < 0.05) and sutured (W = –28.0; p < 0.05) anastomoses, higher elasticity (p < 0.05) at all test times, and biological tightness from day 3 (p < 0.001) after surgery. The regularities of morphogenesis of the colonic anastomoses were divided into 4 periods of regeneration of the intestinal suture. Conclusion. The experimental data obtained on the use of compression anastomoses of the colon with nickel-titanium devices are convincing arguments for their clinical application.
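The W statistics reported above are consistent with a rank-based test such as the Wilcoxon signed-rank test. As a sketch of how such a statistic is computed (under one common convention, the sum of signed ranks of the paired differences; the abstract does not specify the exact variant, and the data below are hypothetical):

```python
def wilcoxon_w(x, y):
    """Signed-rank W for paired samples x, y.

    Ranks the nonzero differences y - x by absolute value (ties get the
    average rank) and returns the sum of ranks signed by the difference.
    """
    diffs = [b - a for a, b in zip(x, y) if b != a]
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        j = i
        while j < len(order) and abs(diffs[order[j]]) == abs(diffs[order[i]]):
            j += 1
        avg = (i + j + 1) / 2.0          # average of ranks i+1 .. j
        for k in range(i, j):
            ranks[order[k]] = avg
        i = j
    return sum(ranks[i] if diffs[i] > 0 else -ranks[i]
               for i in range(len(diffs)))
```

A strongly negative W, as in the abstract, would indicate that the second member of each pair (e.g. the stapled suture's burst strength) is systematically lower.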

  19. Compression evaluation of surgery video recordings retaining diagnostic credibility (compression evaluation of surgery video)

    Science.gov (United States)

    Duplaga, M.; Leszczuk, M. I.; Papir, Z.; Przelaskowski, A.

    2008-12-01

    Wider dissemination of medical digital video libraries is affected by two correlated factors: resource-effective content compression and the diagnostic credibility it directly influences. It has been proved that it is possible to meet these contradictory requirements halfway for long-lasting, low-motion surgery recordings at compression ratios close to 100 (bronchoscopic procedures were the case study investigated). The main supporting assumption is that the content can be compressed as far as clinicians are unable to sense a loss of video diagnostic fidelity (visually lossless compression). Different market codecs were inspected by means of combined subjective and objective tests of their usability in medical video libraries. The subjective tests involved a panel of clinicians who had to classify compressed bronchoscopic video content according to its quality under the bubble sort algorithm. For the objective tests, two metrics (a hybrid vector measure and Hosaka plots) were calculated frame by frame and averaged over the whole sequence.
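Bubble-sort ranking as used in such subjective tests relies only on pairwise judgments, which suits quality assessment where a clinician can say which of two clips looks better but not assign absolute scores. A minimal sketch (the comparator callback standing in for a human judgment is an assumption for illustration):

```python
def bubble_rank(items, prefer):
    """Order items best-first by repeated pairwise judgments.

    `prefer(a, b)` returns True when a should rank above b; in a
    subjective test this would be one clinician decision per comparison.
    """
    ranked = list(items)
    n = len(ranked)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):
            if prefer(ranked[j + 1], ranked[j]):
                ranked[j], ranked[j + 1] = ranked[j + 1], ranked[j]
                swapped = True
        if not swapped:                  # already ordered: stop early
            break
    return ranked
```

The early-exit also minimises the number of judgments a panelist must make once the ordering stabilises.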

  20. Two divergent paths: compression vs. non-compression in deep venous thrombosis and post-thrombotic syndrome

    Directory of Open Access Journals (Sweden)

    Eduardo Simões Da Matta

    Full Text Available Use of compression therapy to reduce the incidence of post-thrombotic syndrome among patients with deep venous thrombosis is a controversial subject, and there is no consensus on the use of elastic versus inelastic compression, or on the levels and duration of compression. Inelastic devices, with their higher static stiffness index, combine a relatively low and comfortable pressure at rest with a standing pressure strong enough to restore the “valve mechanism” generated by plantar flexion and dorsiflexion of the foot. Since the static stiffness index depends on the rigidity of the compression system and the muscle strength within the bandaged area, improvement of muscle mass through muscle-strengthening programs and endurance training should be encouraged. In the acute phase of deep venous thrombosis, anticoagulation combined with inelastic compression therapy can reduce the extension of the thrombus. Notwithstanding, prospective studies evaluating the effectiveness of inelastic therapy in deep venous thrombosis and post-thrombotic syndrome are needed.
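The static stiffness index mentioned above is conventionally computed as the rise in sub-bandage interface pressure from the supine to the standing position, with values above roughly 10 mmHg usually read as a stiff (inelastic) system. A trivial sketch (the 10 mmHg cut-off is the commonly cited convention, taken here as an assumption rather than from this abstract):

```python
def static_stiffness_index(p_standing_mmhg, p_supine_mmhg):
    """Static stiffness index (SSI) in mmHg: the increase in interface
    pressure when the patient moves from lying to standing."""
    return p_standing_mmhg - p_supine_mmhg

def classify_compression(ssi_mmhg, threshold=10.0):
    """Label a compression system by its SSI (threshold is the commonly
    used ~10 mmHg convention, assumed here for illustration)."""
    return "inelastic" if ssi_mmhg > threshold else "elastic"
```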