Sample records for maximum compression ratio

  1. Knock-Limited Performance of Triptane and Xylidines Blended with 28-R Aviation Fuel at High Compression Ratios and Maximum-Economy Spark Setting

    Held, Louis F.; Pritchard, Ernest I.


    An investigation was conducted to evaluate the possibilities of utilizing the high-performance characteristics of triptane and xylidines blended with 28-R fuel in order to increase fuel economy by the use of high compression ratios and maximum-economy spark setting. Full-scale single-cylinder knock tests were run with 20 deg B.T.C. and maximum-economy spark settings at compression ratios of 6.9, 8.0, and 10.0, and with two inlet-air temperatures. The fuels tested consisted of triptane, four triptane blends and one xylidines blend with 28-R, and 28-R fuel alone. Indicated specific fuel consumption at lean mixtures was decreased approximately 17 percent at a compression ratio of 10.0 and maximum-economy spark setting, as compared to that obtained with a compression ratio of 6.9 and normal spark setting. When compression ratio was increased from 6.9 to 10.0 at an inlet-air temperature of 150 F, normal spark setting, and a fuel-air ratio of 0.065, 55-percent triptane was required with 28-R fuel to maintain the knock-limited brake power level obtained with 28-R fuel at a compression ratio of 6.9. Brake specific fuel consumption was decreased 17.5 percent at a compression ratio of 10.0 relative to that obtained at a compression ratio of 6.9. Approximately similar results were noted at an inlet-air temperature of 250 F. For concentrations up through at least 20 percent, triptane can be more efficiently used at normal than at maximum-economy spark setting to maintain a constant knock-limited power output over the range of compression ratios tested.

  2. Compression Ratio Adjuster

    Akkerman, J. W.


    New mechanism alters the compression ratio of an internal-combustion engine according to load so that the engine operates at top fuel efficiency. Ordinary gasoline, diesel, and gas engines with their fixed compression ratios are inefficient at partial load and at low-speed full load. Mechanism ensures engines operate as efficiently under these conditions as they do at high load and high speed.
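
    For reference, the geometric compression ratio that such mechanisms adjust is the ratio of total cylinder volume at bottom dead center to clearance volume at top dead center. A minimal Python sketch under that standard definition (the volume figures below are illustrative only):

      def compression_ratio(swept_volume_cc, clearance_volume_cc):
          """Geometric compression ratio: (swept + clearance) / clearance."""
          return (swept_volume_cc + clearance_volume_cc) / clearance_volume_cc

      # Example: a 500 cc cylinder whose clearance volume is reduced from
      # 62.5 cc to 29.4 cc, raising the compression ratio from 9:1 to ~18:1
      # (the span quoted for the Envera mechanism in record 3 below).
      print(compression_ratio(500.0, 62.5))   # 9.0
      print(compression_ratio(500.0, 29.4))   # ~18.0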

  3. Envera Variable Compression Ratio Engine

    Charles Mendler


    ...the compression ratio can be raised (to as much as 18:1), providing high engine efficiency. It is important to recognize that for a well-designed VCR engine, cylinder pressure does not need to be higher than found in current production turbocharged engines. As such, there is no need for a stronger crankcase, bearings, and other load-bearing parts within the VCR engine. The Envera VCR mechanism uses an eccentric carrier approach to adjust engine compression ratio. The crankshaft main bearings are mounted in this eccentric carrier or 'crankshaft cradle', and pivoting the eccentric carrier 30 degrees adjusts the compression ratio from 9:1 to 18:1. The eccentric carrier is made up of a casting that provides rigid support for the main bearings, and removable upper bearing caps. Oil feed to the main bearings transits through the bearing cap fastener sockets. The eccentric carrier design was chosen for its low cost and rigid support of the main bearings. A control shaft and connecting links are used to pivot the eccentric carrier. The control shaft mechanism features compression ratio lock-up at the minimum and maximum compression ratio settings. The control shaft method of pivoting the eccentric carrier was selected due to its lock-up capability. The control shaft can be rotated by a hydraulic actuator or an electric motor. The engine shown in Figures 3 and 4 has a hydraulic actuator that was developed under the current program. In-line 4-cylinder engines are significantly less expensive than V engines because an entire cylinder head can be eliminated. The cost savings from eliminating cylinders and an entire cylinder head will notably offset the added cost of the VCR and supercharging. Replacing V6 and V8 engines with in-line VCR 4-cylinder engines will provide high fuel economy at low cost. Numerous enabling technologies exist which have the potential to increase engine efficiency. The greatest efficiency gains are realized when the right combination of advanced and new...

  4. The maximum force in a column under constant speed compression

    Kuzkin, Vitaly A


    Dynamic buckling of an elastic column under compression at constant speed is investigated assuming first-mode buckling. Two cases are considered: (i) an imperfect column (Hoff's statement), and (ii) a perfect column having an initial lateral deflection. The range of parameters where the maximum load supported by a column exceeds the Euler static force is determined. In this range, the maximum load is represented as a function of the compression rate, slenderness ratio, and imperfection/initial deflection. Considering the results, we answer the following question: "How slowly should the column be compressed in order to measure the static load-bearing capacity?" This question is important for the proper setup of laboratory experiments and computer simulations of buckling. Additionally, it is shown that the behavior of a perfect column having an initial deflection differs significantly from the behavior of an imperfect column. In particular, the dependence of the maximum force on the compression rate is non-monotonic...
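
    For context, the Euler static force referred to above is the classical first-mode critical load of a pinned elastic column; the standard result (not reproduced from the paper) is

        P_E = \frac{\pi^2 E I}{L^2},

    where E is Young's modulus, I the second moment of area of the cross-section, and L the column length. The paper quantifies how far the dynamic maximum force can exceed P_E as a function of compression rate, slenderness ratio, and imperfection or initial deflection.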

  5. Determination of Optimum Compression Ratio: A Tribological Aspect

    L. Yüksek


    Internal combustion engines are the primary energy conversion machines both in industry and transportation. Modern technologies are being implemented in engines to fulfill today's low fuel consumption demand. Friction energy consumed by the rubbing parts of the engines is becoming an important parameter for higher fuel efficiency. The rate of friction loss is primarily affected by sliding speed and the load acting upon rubbing surfaces. Compression ratio is the main parameter that increases the peak cylinder pressure and hence the normal load on components. The aim of this study is to investigate the effect of compression ratio on the total friction loss of a diesel engine. A variable compression ratio diesel engine was operated at four different compression ratios: 12.96, 15.59, 18.03, and 20.17. Brake power and speed were kept constant at predefined values while measuring the in-cylinder pressure. Friction mean effective pressure (FMEP) data were obtained from the in-cylinder pressure curves for each compression ratio. The ratio of friction power to indicated power of the engine increased from 22.83% to 37.06% as the compression ratio was varied from 12.96 to 20.17. Considering the thermal efficiency, FMEP, and maximum in-cylinder pressure, the optimum compression ratio interval of the test engine was determined as 18.8-19.6.
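
    A minimal sketch of how the FMEP-based quantities in this record relate, assuming the usual mean-effective-pressure definitions (function names and values are illustrative, not from the paper):

      def fmep(imep_bar, bmep_bar):
          """Friction mean effective pressure: indicated minus brake MEP."""
          return imep_bar - bmep_bar

      def friction_to_indicated_ratio(imep_bar, bmep_bar):
          """At fixed speed, power is proportional to MEP, so the ratio of
          friction power to indicated power reduces to FMEP / IMEP."""
          return fmep(imep_bar, bmep_bar) / imep_bar

      # Illustrative numbers only: a ratio near 0.23, as reported at the
      # lowest compression ratio tested.
      print(friction_to_indicated_ratio(8.0, 6.17))  # ~0.229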

  6. Experimental study on prediction model for maximum rebound ratio

    LEI Wei-dong; TENG Jun; A. HEFNY; ZHAO Jian; GUAN Jiong


    The proposed prediction model for estimating the maximum rebound ratio was applied to a field explosion test, the Mandai test in Singapore. The estimated possible maximum peak particle velocities (PPVs) were compared with the field records. Three of the four available field-recorded PPVs lie below the estimated possible maximum values, as expected, while the fourth available field-recorded PPV lies close to and a bit higher than the estimated maximum possible PPV. The comparison results show that the predicted PPVs from the proposed prediction model for the maximum rebound ratio match the field-recorded PPVs better than those from two empirical formulae. The very good agreement between the estimated and field-recorded values validates the proposed prediction model for estimating PPV in a rock mass with a set of joints due to application of a two-dimensional compressional wave at the boundary of a tunnel or a borehole.

  7. High precision Hugoniot measurements of D2 near maximum compression

    Benage, John; Knudson, Marcus; Desjarlais, Michael


    The Hugoniot response of liquid deuterium has been widely studied due to its general importance and to the significant discrepancy in the inferred shock response obtained from early experiments. With improvements in dynamic compression platforms and experimental standards these results have converged and show general agreement with several equation of state (EOS) models, including quantum molecular dynamics (QMD) calculations within the Generalized Gradient Approximation (GGA). This approach to modeling the EOS has also proven quite successful for other materials and is rapidly becoming a standard approach. However, small differences remain among predictions obtained using different local and semi-local density functionals; these small differences show up in the deuterium Hugoniot at ~ 30-40 GPa near the region of maximum compression. Here we present experimental results focusing on that region of the Hugoniot and take advantage of advancements in the platform and standards, resulting in data with significantly higher precision than that obtained in previous studies. These new data may prove to distinguish between the subtle differences predicted by the various density functionals. Results of these experiments will be presented along with comparison to various QMD calculations. Sandia National Laboratories is a multi-program laboratory operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin company, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  8. Effects of compression ratio on the combustion characteristics of a homogeneous charge compression ignition engine

    SONG Ruizhi; HU Tiegang; ZHOU Longbao; LIU Shenghua; LI Wei


    The effects of compression ratio on the combustion characteristics of a homogeneous charge compression ignition (HCCI) engine were studied experimentally on a modified TY1100 single-cylinder engine fueled with dimethyl ether (DME). The results show that the DME HCCI engine can work stably and can realize zero nitrogen oxide (NOx) emission and smokeless combustion at compression ratios of both 10.7 and 14. The combustion process has obvious two-stage combustion characteristics at ε = 10.7 (ε refers to compression ratio), and the combustion beginning point is decided by the compression temperature, which varies very little with the engine load; at ε = 14, with the increase in the compression temperature, the combustion beginning point is closely related to the engine load (concentration of mixture), and it moves forward versus crank angle with the increase in the engine load; the combustion durations are shortened with the increase in the engine load under both compression ratios.

  9. Effects of errors in velocity tilt on maximum longitudinal compression during neutralized drift compression of intense beam pulses: I. general description

    Kaganovich, Igor D. [Plasma Physics Laboratory, Princeton University, Princeton, NJ 08543 (United States); Massidda, Scott; Startsev, Edward A.; Davidson, Ronald C. [Plasma Physics Laboratory, Princeton University, Princeton, NJ 08543 (United States); Vay, Jean-Luc [Lawrence Berkeley National Laboratory, 1 Cyclotron Road, Berkeley, CA 94720 (United States); Friedman, Alex [Lawrence Livermore National Laboratory, 7000 East Avenue, Livermore, CA 94550 (United States)


    Neutralized drift compression offers an effective means for particle beam pulse compression and current amplification. In neutralized drift compression, a linear longitudinal velocity tilt (head-to-tail gradient) is applied to the non-relativistic beam pulse, so that the beam pulse compresses as it drifts in the focusing section. The beam current can increase by more than a factor of 100 in the longitudinal direction. We have performed an analytical study of how errors in the velocity tilt acquired by the beam in the induction bunching module limit the maximum longitudinal compression. It is found that the compression ratio is determined by the relative errors in the velocity tilt. That is, one-percent errors may limit the compression to a factor of one hundred. However, a part of the beam pulse where the errors are small may compress to much higher values, which are determined by the initial thermal spread of the beam pulse. It is also shown that sharp jumps in the compressed current density profile can be produced due to overlaying of different parts of the pulse near the focal plane. Examples of slowly varying and rapidly varying errors compared to the beam pulse duration are studied. For beam velocity errors given by a cubic function, the compression ratio can be described analytically. In this limit, a significant portion of the beam pulse is located in the broad wings of the pulse and is poorly compressed. The central part of the compressed pulse is determined by the thermal spread. The scaling law for maximum compression ratio is derived. In addition to a smooth variation in the velocity tilt, fast-changing errors during the pulse may appear in the induction bunching module if the voltage pulse is formed by several pulsed elements. Different parts of the pulse compress nearly simultaneously at the target and the compressed profile may have many peaks. The maximum compression is a function of both thermal spread and the velocity errors. The effects of the...
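
    The headline scaling in this record can be written compactly (schematic, my notation):

        C_{\max} \approx \left( \frac{\delta v_{\mathrm{tilt}}}{v_{\mathrm{tilt}}} \right)^{-1},

    i.e., a one-percent relative error in the velocity tilt caps the compression ratio near a factor of one hundred, as stated above.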


    Radivoje B Pešić


    The compression ratio strongly affects the working process and provides an exceptional degree of control over engine performance. In conventional internal combustion engines, the compression ratio is fixed and their performance is therefore a compromise between conflicting requirements. One fundamental problem is that drive units in vehicles must operate successfully at variable speeds and loads and in different ambient conditions. If a diesel engine has a fixed compression ratio, a minimal value must be chosen that can achieve reliable self-ignition when starting the engine in cold-start conditions. In diesel engines, a variable compression ratio provides control of peak cylinder pressure and improves cold-start ability and low-load operation, enabling multi-fuel capability, increased fuel economy, and reduced emissions. This paper contains both theoretical and experimental investigation of the impact that an automatic variable compression ratio has on working process parameters in an experimental diesel engine. Alternative methods of implementing variable compression ratio are illustrated and critically examined.

  11. Impact of Various Compression Ratio on the Compression Ignition Engine with Diesel and Jatropha Biodiesel

    Sivaganesan, S.; Chandrasekaran, M.; Ruban, M.


    The present experimental investigation evaluates the effects of using blends of diesel fuel with a 20% concentration of methyl ester of Jatropha biodiesel at various compression ratios. Both the diesel and the biodiesel fuel blend were injected at 23° BTDC into the combustion chamber. The experiment was carried out at three different compression ratios. Biodiesel was extracted from Jatropha oil; a 20% (B20) concentration was found to be the best blend ratio in an earlier experimental study. The engine was operated at compression ratios of 17.5, 16.5, and 15.5, respectively. The main objective is to obtain minimum specific fuel consumption, better efficiency, and lower emissions at the different compression ratios. The results show an increase in efficiency at full load when compared with diesel; the highest efficiency is obtained with B20MEOJBA at a compression ratio of 17.5. It is noted that there is an increase in thermal efficiency as the blend ratio increases. The biodiesel blend has performance closer to diesel, but emissions are reduced in all blends of B20MEOJBA compared to diesel. Thus this work focuses on the best compression ratio and the suitability of biodiesel blends in a diesel engine as an alternate fuel.

  12. Effects of errors in velocity tilt on maximum longitudinal compression during neutralized drift compression of intense beam pulses: II. Analysis of experimental data of the Neutralized Drift Compression eXperiment-I (NDCX-I)

    Massidda, Scott; Kaganovich, Igor D.; Startsev, Edward A.; Davidson, Ronald C.; Lidia, Steven M.; Seidl, Peter; Friedman, Alex


    Neutralized drift compression offers an effective means for particle beam focusing and current amplification with applications to heavy ion fusion. In the Neutralized Drift Compression eXperiment-I (NDCX-I), a non-relativistic ion beam pulse is passed through an inductive bunching module that produces a longitudinal velocity modulation. Due to the applied velocity tilt, the beam pulse compresses during neutralized drift. The ion beam pulse can be compressed by a factor of more than 100; however, errors in the velocity modulation affect the compression ratio in complex ways. We have performed a study of how the longitudinal compression of a typical NDCX-I ion beam pulse is affected by the initial errors in the acquired velocity modulation. Without any voltage errors, an ideal compression is limited only by the initial energy spread of the ion beam, ΔEb. In the presence of large voltage errors, δU ≫ ΔEb, the maximum compression ratio is found to be inversely proportional to the geometric mean of the relative error in velocity modulation and the relative intrinsic energy spread of the beam ions. Although small parts of a beam pulse can achieve high local values of compression ratio, the acquired velocity errors cause these parts to compress at different times, limiting the overall compression of the ion beam pulse.
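
    The stated geometric-mean dependence can be written as (schematic, my notation):

        C_{\max} \propto \left[ \frac{\delta U}{U} \cdot \frac{\Delta E_b}{E_b} \right]^{-1/2},

    where \delta U / U is the relative error in the velocity modulation and \Delta E_b / E_b the relative intrinsic energy spread of the beam ions.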

  13. Maximum likelihood estimation for semiparametric density ratio model.

    Diao, Guoqing; Ning, Jing; Qin, Jing


    In the statistical literature, the conditional density model specification is commonly used to study regression effects. One attractive model is the semiparametric density ratio model, under which the conditional density function is the product of an unknown baseline density function and a known parametric function containing the covariate information. This model has a natural connection with generalized linear models and is closely related to biased sampling problems. Despite the attractive features and importance of this model, most existing methods are too restrictive since they are based on multi-sample data or conditional likelihood functions. The conditional likelihood approach can eliminate the unknown baseline density but cannot estimate it. We propose efficient estimation procedures based on the nonparametric likelihood. The nonparametric likelihood approach allows for general forms of covariates and estimates the regression parameters and the baseline density simultaneously. Therefore, the nonparametric likelihood approach is more versatile than the conditional likelihood approach especially when estimation of the conditional mean or other quantities of the outcome is of interest. We show that the nonparametric maximum likelihood estimators are consistent, asymptotically normal, and asymptotically efficient. Simulation studies demonstrate that the proposed methods perform well in practical settings. A real example is used for illustration.
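
    Schematically, the model specifies the conditional density as the product described above (notation mine):

        f(y \mid x) = f_0(y) \, w(y, x; \theta), \qquad \int f_0(y) \, w(y, x; \theta) \, dy = 1,

    where f_0 is the unknown baseline density and w is the known parametric function carrying the covariate information; the nonparametric likelihood estimates \theta and f_0 simultaneously.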

  14. Influence of the compression ratio on Stirling and Otto cycle

    Koscak-Kolin, S.; Golub, M.; Kolin, I. [Zagreb Univ. (Croatia); Naso, V.; Lucentini, M. [Universita degli Studi La Sapienza, Rome (Italy)


    The Stirling engine (1815) is more than half a century older than the Otto engine (1867). Nevertheless, in spite of the considerably longer development period, the compression ratio of Stirling engines remains nearly the same as it was at the very beginning. In contrast, the compression ratio of Otto engines has progressively increased, reaching ever higher power. As a result, modern Otto engines are considerably stronger than contemporary Stirling engines of the same size. However, by means of thermodynamic analysis of the old indicator diagrams, the rate of growth can be expressed mathematically in the form of a polytropic equation. In this way the proper direction for a significant improvement of the Stirling engine can be recognized. (orig.)

  15. High Compression Ratio Turbo Gasoline Engine Operation Using Alcohol Enhancement

    Heywood, John [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States); Jo, Young Suk [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States); Lewis, Raymond [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States); Bromberg, Leslie [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States)


    The overall objective of this project was to quantify the potential for improving the performance and efficiency of gasoline engine technology by the use of alcohols to suppress knock. Knock-free operation is obtained by direct injection of a second "anti-knock" fuel such as ethanol, which suppresses knock where, with gasoline fuel alone, knock would occur. Suppressing knock enables increased turbocharging, engine downsizing, and use of higher compression ratios throughout the engine's operating map. This project combined engine testing and simulation to define knock onset conditions with different mixtures of gasoline and alcohol, and with this information to quantify the potential for improving the efficiency of turbocharged gasoline spark-ignition engines, and the on-vehicle fuel consumption reductions that could then be realized. The more focused objectives of this project were therefore to: determine engine efficiency with aggressive turbocharging and downsizing and high compression ratio (up to a compression ratio of 13.5:1) over the engine's operating range; determine the knock limits of a turbocharged and downsized engine as a function of engine speed and load; determine the amount of the knock-suppressing alcohol fuel consumed, through the use of various alcohol-gasoline and alcohol-water-gasoline blends, for different driving cycles, relative to the gasoline consumed; and determine the implications of using alcohol-boosted engines, with their higher-efficiency operation, in both the light-duty and medium-duty vehicle sectors.

  16. A Study on the Effects of Compression Ratio, Engine Speed and Equivalence Ratio on HCCI Combustion of DME

    Pedersen, Troels Dyhr; Schramm, Jesper


    An experimental study has been carried out on the homogeneous charge compression ignition (HCCI) combustion of dimethyl ether (DME). The study was performed as a parameter variation of engine speed and compression ratio at excess air ratios of approximately 2.5, 3 and 4. The compression ratio was adjusted in steps to find suitable regions of operation, and the effect of engine speed was studied at 1000, 2000 and 3000 RPM. It was found that leaner excess air ratios require higher compression ratios to achieve satisfactory combustion. Engine speed also affects operation significantly.

  17. Effect of Compression Ratio on the Performance of Diesel Engine at Different Loads.

    Abhishek Reddy G


    Variable compression ratio (VCR) technology has long been recognized as a method for improving automobile engine performance, efficiency, and fuel economy with reduced emissions. The main feature of the VCR engine is to operate at different compression ratios, by changing the combustion chamber volume, depending on the vehicle performance needs. The need to improve the performance characteristics of the IC engine has necessitated the present research. Increasing the compression ratio to improve performance is an option. The compression ratio is a factor that influences the performance characteristics of internal combustion engines. This work is an experimental investigation of the influence of the compression ratio on the brake power, brake thermal efficiency, brake mean effective pressure, and specific fuel consumption of a Kirloskar variable compression ratio dual fuel engine. Compression ratios of 14, 15, 16, and 18 and engine loads of 3 kg to 12 kg, in increments of 3 kg, were utilized for diesel.

  18. Computation of compressible flows with high density ratio and pressure ratio

    CHEN Rong-san


    The WENO method, the RKDG method, the RKDG method with the original ghost fluid method, and the RKDG method with a modified ghost fluid method are applied to single-medium and two-medium (air-air, air-liquid) compressible flows with high density and pressure ratios. We also provide a numerical comparison and analysis of the above methods. Numerical results show that, compared with the other methods, the RKDG method with the modified ghost fluid method can obtain high-resolution results and the correct position of the shock, and the computed solutions converge to the physical solutions as the mesh is refined.

  1. Design of reinforced concrete walls cast in place for the maximum normal stress of compression

    T. C. Braguim

    It is important to evaluate which design models are safe and appropriate for the structural analysis of buildings constructed with the concrete wall system. In this work, a simple numerical model that represents the walls with frame elements is evaluated, through comparison of the maximum normal compressive stress, against a much more robust and refined model that represents the walls with shell elements. The design check for the normal compressive stress is done in both cases, based on NBR 16055, to conclude whether the wall thickness initially adopted is sufficient or not.

  2. 30 CFR 7.87 - Test to determine the maximum fuel-air ratio.


    30 Mineral Resources, 2010-07-01. ... Use in Underground Coal Mines § 7.87 Test to determine the maximum fuel-air ratio. (a) Test procedure... several speed/torque conditions to determine the concentrations of CO and NOX, dry basis, in the...


    Yakup SEKMEN


    Performance of spark ignition engines may be increased by changing the geometrical compression ratio according to the amount of charge in the cylinders. The designed geometrical compression ratio can be realized as an effective compression ratio only under full load and fully open throttle conditions, since the effective compression ratio changes with the amount of charge taken into the cylinder in spark ignition engines. This condition forces designers to change the geometrical compression ratio according to the amount of charge into the cylinder for improvement of performance and fuel economy. In order to improve the combustion efficiency, fuel economy, power output, and exhaust emissions at partial loads, the compression ratio must be increased; but under high load and low speed conditions, to prevent probable knock and harsh running, the compression ratio must be decreased gradually. In this paper, the relation of performance parameters such as power, torque, specific fuel consumption, cylinder pressure, exhaust gas temperature, combustion chamber surface area/volume ratio, thermal efficiency, and spark timing to the compression ratio in spark ignition engines has been investigated, and the use of engines with variable compression ratio is suggested for fuel economy and a cleaner environment.

  4. GenBit Compress Tool (GBC): A Java-Based Tool to Compress DNA Sequences and Compute Compression Ratio (bits/base) of Genomes

    Rajeswari, P Raja; Kumar, V K; DOI: 10.5121/ijcsit.2010.2313


    We present a compression tool, "GenBit Compress", for genetic sequences based on our newly proposed "GenBit Compress Algorithm". Our tool achieves the best compression ratios for entire genomes (DNA sequences). Significantly better compression results show that the GenBit Compress algorithm is the best among the remaining genome compression algorithms for non-repetitive DNA sequences in genomes. Standard compression algorithms such as gzip or compress cannot compress DNA sequences; they only expand them in size. In this paper we consider the problem of DNA compression. It is well known that one of the main features of DNA sequences is that they contain substrings which are duplicated except for a few random mutations. For this reason most DNA compressors work by searching and encoding approximate repeats. We depart from this strategy by searching and encoding only exact repeats. Our proposed algorithm achieves the best compression ratio for DNA sequences for larger genomes. As long as 8 lakh (800,000) characters can be give...
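
    As a point of reference for the bits/base metric, a naive two-bit code for the four bases already achieves 2 bits/base, which is the baseline DNA compressors try to beat; a small illustrative sketch (not the GenBit algorithm itself):

      def two_bit_encode(seq):
          """Pack A/C/G/T into 2 bits per base; a naive baseline, not GenBit."""
          code = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
          bits = 0
          for base in seq:
              bits = (bits << 2) | code[base]
          return bits, 2 * len(seq)

      def bits_per_base(compressed_bits, num_bases):
          """Compression ratio in the paper's sense: bits used per input base."""
          return compressed_bits / num_bases

      _, nbits = two_bit_encode("ACGTACGT")
      print(bits_per_base(nbits, 8))  # 2.0; specialized compressors aim lower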

  5. Maximum mutual information vector quantization of log-likelihood ratios for memory efficient HARQ implementations

    Danieli, Matteo; Forchhammer, Søren; Andersen, Jakob Dahl


    ...log-likelihood ratios (LLR) in order to combine information sent across different transmissions due to requests. To mitigate the effects of ever-increasing data rates that call for larger HARQ memory, vector quantization (VQ) is investigated as a technique for temporary compression of LLRs on the terminal. A capacity...
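
    A minimal sketch of LLR quantization in the spirit of this record, using a plain k-level Lloyd (squared-error) quantizer rather than the paper's maximum-mutual-information criterion (all names and values are illustrative):

      import numpy as np

      def lloyd_quantizer(llrs, levels=8, iters=50):
          """Fit a codebook to LLR samples by alternating nearest-centroid
          assignment and cell recentering (squared-error Lloyd iteration)."""
          centroids = np.linspace(llrs.min(), llrs.max(), levels)
          for _ in range(iters):
              idx = np.abs(llrs[:, None] - centroids[None, :]).argmin(axis=1)
              for k in range(levels):
                  if np.any(idx == k):
                      centroids[k] = llrs[idx == k].mean()
          return centroids

      rng = np.random.default_rng(0)
      samples = rng.normal(0.0, 4.0, 10000)  # toy LLR distribution
      print(lloyd_quantizer(samples))        # 8 representative LLR values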

  6. Effect of consolidation ratios on maximum dynamic shear modulus of sands

    Yuan Xiaoming; Sun Jing; Sun Rui


    The dynamic shear modulus (DSM) is the most basic soil parameter in earthquake or other dynamic loading conditions and can be obtained through testing in the field or in the laboratory. The effect of consolidation ratios on the maximum DSM for two types of sand is investigated by using resonant column tests, and an increment formula to obtain the maximum DSM for cases of consolidation ratio kc > 1 is presented. The results indicate that the maximum DSM rises rapidly when kc is near 1 and then slows down, which means that a power function of the consolidation ratio increment kc - 1 can be used to describe the variation of the maximum DSM due to kc > 1. The results also indicate that the increase in the maximum DSM due to kc > 1 is significantly larger than that predicted by Hardin and Black's formula.
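
    A schematic form of such an increment formula, assuming the stated power-function dependence on k_c - 1 (my notation; a and b stand for fitted positive constants, not the paper's values):

        G_{\max}(k_c) = G_{\max}(k_c = 1) \left[ 1 + a \, (k_c - 1)^{b} \right].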

  7. Maximum-Entropy Meshfree Method for Compressible and Near-Incompressible Elasticity

    Ortiz, A; Puso, M A; Sukumar, N


    Numerical integration errors and volumetric locking in the near-incompressible limit are two outstanding issues in Galerkin-based meshfree computations. In this paper, we present a modified Gaussian integration scheme on background cells for meshfree methods that alleviates errors in numerical integration and ensures patch test satisfaction to machine precision. Secondly, a locking-free small-strain elasticity formulation for meshfree methods is proposed, which draws on developments in assumed strain methods and nodal integration techniques. In this study, maximum-entropy basis functions are used; however, the generality of our approach permits the use of any meshfree approximation. Various benchmark problems in two-dimensional compressible and near-incompressible small strain elasticity are presented to demonstrate the accuracy and optimal convergence in the energy norm of the maximum-entropy meshfree formulation.

  8. A tabulation of pipe length to diameter ratios as a function of Mach number and pressure ratios for compressible flow

    Dixon, G. V.; Barringer, S. R.; Gray, C. E.; Leatherman, A. D.


    Computer programs and resulting tabulations are presented of pipeline length-to-diameter ratios as a function of Mach number and pressure ratios for compressible flow. The tabulations are applicable to air, nitrogen, oxygen, and hydrogen for compressible isothermal flow with friction and compressible adiabatic flow with friction. Also included are equations for the determination of weight flow. The tabulations presented cover a wider range of Mach numbers for choked, adiabatic flow than available from commonly used engineering literature. Additional information presented, but which is not available from this literature, is unchoked, adiabatic flow over a wide range of Mach numbers, and choked and unchoked, isothermal flow for a wide range of Mach numbers.

  9. Effects of piston speed, compression ratio and cylinder geometry on system performance of a liquid piston

    Mutlu Mustafa


    Energy storage systems are becoming more important to compensate for the irregularities of renewable energy sources and to make investment more profitable. Compressed air energy storage (CAES) systems provide sufficient system usability, and large-scale plants are found around the world. The compression process is the most critical part of these systems, and different designs must be developed to improve efficiency, such as the liquid piston. In this study, a liquid piston is analyzed with CFD tools to look into the effect of piston speed, compression ratio, and cylinder geometry on compression efficiency and required work. It is found that increasing piston speed does not affect the piston work, but efficiency decreases. Piston work remains constant at piston speeds higher than 0.05 m/s, but the efficiency decreases from 90.9% to 74.6%. Using variable piston speeds does not yield a significant improvement in system performance. It is seen that the effect of compression ratio increases with high piston speeds. The required power when the compression ratio is 80 is 2.39 times greater than the power when the compression ratio is 5 at a piston speed of 0.01 m/s, and 2.87 times greater at 0.15 m/s. Cylinder geometry is also very important because efficiency, power, and work vary with L/D, D, and cylinder volume, respectively.
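
    Liquid-piston compression efficiency is commonly referenced to the isothermal ideal; a minimal sketch under that standard definition (values illustrative):

      import math

      def isothermal_work(p1_pa, v1_m3, ratio):
          """Ideal isothermal compression work p1 * V1 * ln(r) for an ideal gas."""
          return p1_pa * v1_m3 * math.log(ratio)

      def compression_efficiency(actual_work_j, p1_pa, v1_m3, ratio):
          """Efficiency as the isothermal ideal over the actual input work."""
          return isothermal_work(p1_pa, v1_m3, ratio) / actual_work_j

      # Illustrative: 1 L of air at 1 bar compressed 5:1 using 200 J of work.
      print(compression_efficiency(200.0, 1e5, 1e-3, 5))  # ~0.80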

  10. Maximum Deformation Ratio of Droplets of Water-Based Paint Impact on a Flat Surface

    Weiwei Xu


    In this research, the maximum deformation ratio of water-based paint droplets impacting and spreading onto a flat solid surface was investigated numerically based on the Navier-Stokes equations coupled with the level set method. The effects of droplet size, impact velocity, and equilibrium contact angle are taken into account. The maximum deformation ratio increases as droplet size and impact velocity increase, and can scale as We^(1/4), where We is the Weber number, for the case of the effect of the droplet size. Finally, the effect of equilibrium contact angle is investigated, and the result shows that the spreading radius decreases with increasing equilibrium contact angle, whereas the height increases. When the dimensionless time t* < 0.3, there is a linear relationship between the dimensionless spreading radius and the dimensionless time to the 1/2 power. For the case of 80° ≤ θe ≤ 120°, where θe is the equilibrium contact angle, the simulation result of the maximum deformation ratio follows the fitting result. The research on the maximum deformation ratio of water-based paint is useful for water-based paint applications in the automobile industry, as well as in the biomedical industry and the real estate industry.
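
    The Weber number and the quoted scaling can be evaluated directly; a sketch with illustrative fluid properties (the prefactor is hypothetical; the paper fits its own):

      def weber_number(density, velocity, diameter, surface_tension):
          """We = rho * v**2 * D / sigma for a droplet impact."""
          return density * velocity**2 * diameter / surface_tension

      def max_deformation_ratio(we, prefactor=1.0):
          """Scaling beta_max ~ a * We**0.25 reported for the droplet-size effect."""
          return prefactor * we**0.25

      we = weber_number(1000.0, 2.0, 50e-6, 0.04)  # water-like paint, 50 um drop
      print(we, max_deformation_ratio(we))         # 5.0, ~1.50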

  11. Identification of Maximum Road Friction Coefficient and Optimal Slip Ratio Based on Road Type Recognition

    GUAN Hsin; WANG Bo; LU Pingping; XU Liang


    The identification of the maximum road friction coefficient and optimal slip ratio is crucial to vehicle dynamics and control. However, it is not easy to identify the maximum road friction coefficient with high robustness and good adaptability to various vehicle operating conditions, and the existing investigations on robust identification of the maximum road friction coefficient are unsatisfactory. In this paper, an identification approach based on road type recognition is proposed for the robust identification of the maximum road friction coefficient and optimal slip ratio. The instantaneous road friction coefficient is estimated through recursive least squares with a forgetting factor, based on the single-wheel model, and the estimated road friction coefficient and slip ratio are grouped into a set of samples in a small time interval before the current time, which is updated as time progresses. The current road type is recognized by comparing the samples of the estimated road friction coefficient with the standard road friction coefficient of each typical road, and the minimum statistical error is used as the recognition principle to improve identification robustness. Once the road type is recognized, the maximum road friction coefficient and optimal slip ratio are determined. Numerical simulation tests are conducted on two typical road friction conditions (single-friction and joint-friction) using CarSim software. The test results show that there is little identification error between the identified maximum road friction coefficient and the pre-set value in CarSim. The proposed identification method has good robustness to external disturbances and good adaptability to various vehicle operating conditions and road variations, and the identification results can be used for the adjustment of vehicle active safety control strategies.
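
    A minimal sketch of the recursive-least-squares-with-forgetting-factor update used for the instantaneous friction estimate, in scalar form (variable names mine; the paper's single-wheel-model regressor is not reproduced):

      def rls_update(theta, p, phi, y, lam=0.98):
          """One scalar RLS step; forgetting factor lam < 1 discounts old data.
          theta: estimate, p: covariance, phi: regressor, y: measurement."""
          gain = p * phi / (lam + phi * p * phi)
          theta = theta + gain * (y - phi * theta)
          p = (p - gain * phi * p) / lam
          return theta, p

      # Toy run: estimate a friction coefficient mu from noisy samples y = mu.
      theta, p = 0.0, 100.0
      for y in [0.82, 0.79, 0.85, 0.81]:
          theta, p = rls_update(theta, p, 1.0, y)
      print(round(theta, 3))  # ~0.82, near the sample values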

  12. Influence of the Saturation Ratio on Concrete Behavior under Triaxial Compressive Loading

    Xuan-Dung Vu


    When a concrete structure is subjected to an impact, the material experiences high triaxial compressive stresses. Furthermore, the water saturation ratio in massive concrete structures may reach nearly 100% at the core, whereas the material dries quickly at the skin. The impact response of a massive concrete wall may thus depend on the state of water saturation in the material. This paper presents triaxial tests performed at a maximum confining pressure of 600 MPa on concrete representative of a nuclear power plant containment building. Experimental results show the concrete constitutive behavior and its dependence on the water saturation ratio. As the degree of saturation increases, a decrease in the volumetric strains as well as in the shear strength is observed. The coupled PRM constitutive model does not accurately reproduce the response of the concrete specimens observed during the tests. The differences between experimental and numerical results can be explained by both the influence of the saturation state of the concrete and the effect of deviatoric stresses, which are not accurately taken into account. The PRM model was modified in order to improve the numerical prediction of concrete behavior under high stresses at various saturation states.

  13. Effect of raw material ratios on the compressive strength of magnesium potassium phosphate chemically bonded ceramics.

    Wang, Ai-juan; Yuan, Zhi-long; Zhang, Jiao; Liu, Lin-tao; Li, Jun-ming; Liu, Zheng


    The compressive strength of magnesium potassium phosphate chemically bonded ceramics is important in the biomedical field. In this work, the compressive strength of magnesium potassium phosphate chemically bonded ceramics was investigated with different liquid-to-solid and MgO-to-KH2PO4 ratios. An X-ray diffractometer was applied to characterize the phase composition. The microstructure was imaged using a scanning electron microscope. The results showed that the compressive strength of the chemically bonded ceramics increased with the decrease of the liquid-to-solid ratio, due to the change in the packing density and the crystallinity of the hydrated product. However, with the increase of the MgO-to-KH2PO4 weight ratio, the compressive strength first increased and then decreased. The low compressive strength at the lower MgO-to-KH2PO4 ratio might be explained by the existence of the weak phase KH2PO4, while the low compressive strength at the higher MgO-to-KH2PO4 ratio might be caused by a lack of the joining phase in the hydrated product. In addition, scanning electron microscopy showed that the microstructures were different in these two cases: a colloidal structure appeared in the samples with lower liquid-to-solid and higher MgO-to-KH2PO4 ratios, possibly because of the existence of amorphous hydrated products. Optimization of both the liquid-to-solid and MgO-to-KH2PO4 ratios is important to improve the compressive strength of magnesium potassium phosphate chemically bonded ceramics.

  14. Combustion and Emission Characteristics of Variable Compression Ignition Engine Fueled with Jatropha curcas Ethyl Ester Blends at Different Compression Ratio

    Rajneesh Kumar


    Engine performance and emission characteristics of unmodified biodiesel-fueled diesel engines are highly influenced by their ignition and combustion behavior. In this study, emission and combustion characteristics were studied when the engine was operated using different blends (B10, B20, B30, and B40) and normal diesel fuel (B0), as well as when varying the compression ratio from 16.5:1 to 17.5:1 to 18.5:1. The change of compression ratio from 16.5:1 to 18.5:1 resulted in 27.1%, 27.29%, 26.38%, 28.48%, and 34.68% increases in cylinder pressure for the blends B0, B10, B20, B30, and B40, respectively, at 75% of rated load conditions. The peak heat release rate increased by 23.19%, 14.03%, 26.32%, 21.87%, and 25.53% for the blends B0, B10, B20, B30, and B40, respectively, at 75% of rated load conditions, when the compression ratio was increased from 16.5:1 to 18.5:1. The delay period decreased by 21.26%, CO emission was reduced by 14.28%, and NOx emission increased by 22.84% for B40 blends at 75% of rated load conditions, when the compression ratio was increased from 16.5:1 to 18.5:1. It is concluded that Jatropha oil ester can be used as a fuel in diesel engines by blending it with diesel fuel.

  15. Linking trading ratio with TMDL (total maximum daily load) allocation matrix and uncertainty analysis.

    Zhang, H X


    An innovative approach for total maximum daily load (TMDL) allocation and implementation is watershed-based pollutant trading. Given the inherent scientific uncertainty in the tradeoffs between point and nonpoint sources, the setting of trading ratios can be a contentious issue and has already been listed as an obstacle by several pollutant trading programs. One of the fundamental reasons that a trading ratio is often set higher (e.g. greater than 2) is to allow for uncertainty in the level of control needed to attain water quality standards, and to provide a buffer in case traded reductions are less effective than expected. However, most of the available studies do not provide an approach that explicitly addresses the determination of the trading ratio, and uncertainty analysis has rarely been linked to it. This paper presents a practical methodology for estimating an "equivalent trading ratio (ETR)" and links uncertainty analysis with trading ratio determination in the TMDL allocation process. Determination of the ETR can provide a preliminary evaluation of the tradeoffs between various combinations of point and nonpoint source control strategies on ambient water quality improvement. A greater portion of NPS load reduction in the overall TMDL load reduction generally correlates with greater uncertainty and thus requires a greater trading ratio. Rigorous quantification of the trading ratio will enhance the scientific basis, and thus public perception, for more informed decisions in the overall watershed-based pollutant trading program.
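
    To make the ratio concrete: under a trading ratio T, a point source must secure T units of credited nonpoint-source load reduction for each unit of its own load it offsets. A toy calculation (numbers hypothetical):

      def required_nps_reduction(point_source_offset_kg, trading_ratio):
          """Nonpoint-source reduction needed to offset a point-source load."""
          return point_source_offset_kg * trading_ratio

      # Offsetting 100 kg/yr of point-source load under a trading ratio of 2
      # requires 200 kg/yr of credited nonpoint-source reduction.
      print(required_nps_reduction(100.0, 2.0))  # 200.0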

  16. Optimum air-demand ratio for maximum aeration efficiency in high-head gated circular conduits.

    Ozkan, Fahri; Tuna, M Cihat; Baylar, Ahmet; Ozturk, Mualla


    Oxygen is an important component of water quality and its ability to sustain life. Water aeration is the process of introducing air into a body of water to increase its oxygen saturation. Water aeration can be accomplished in a variety of ways, for instance, closed-conduit aeration. High-speed flow in a closed conduit involves air-water mixture flow. The air flow results from the subatmospheric pressure downstream of the gate. The air entrained by the high-speed flow is supplied by the air vent. The air entrained into the flow in the form of a large number of bubbles accelerates oxygen transfer and hence also increases aeration efficiency. In the present work, the optimum air-demand ratio for maximum aeration efficiency in high-head gated circular conduits was studied experimentally. Results showed that aeration efficiency increased with the air-demand ratio to a certain point and then aeration efficiency did not change with a further increase of the air-demand ratio. Thus, there was an optimum value for the air-demand ratio, depending on the Froude number, which provides maximum aeration efficiency. Furthermore, a design formula for aeration efficiency was presented relating aeration efficiency to the air-demand ratio and Froude number.

  17. Maximum-principle-satisfying space-time conservation element and solution element scheme applied to compressible multifluids

    Shen, Hua


    A maximum-principle-satisfying space-time conservation element and solution element (CE/SE) scheme is constructed to solve a reduced five-equation model coupled with the stiffened equation of state for compressible multifluids. We first derive a sufficient condition for CE/SE schemes to satisfy the maximum principle when solving a general conservation law. We then introduce a slope limiter to enforce the sufficient condition, which is applicable to both central and upwind CE/SE schemes. Finally, we implement the upwind maximum-principle-satisfying CE/SE scheme to solve the volume-fraction-based five-equation model for compressible multifluids. Several numerical examples are carried out to carefully examine the accuracy, efficiency, conservativeness and maximum-principle-satisfying property of the proposed approach.
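
    The discrete maximum principle being enforced is, schematically (standard statement, my notation):

        m \le u_i^0 \le M \;\; \forall i \quad \Longrightarrow \quad m \le u_i^n \le M \;\; \forall i, n,

    where u_i^n are the cell averages at step n and m, M bound the initial data.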

  18. Some investigations in design of low cost variable compression ratio two stroke petrol engine

    Srinivas, A; Rao, P Venkateswar; Reddy, M Penchal


    Historically, two-stroke petrol engines have found wide application in the construction of two-wheelers worldwide; however, due to stringent environmental laws enforced universally, these engines are fading in numbers. In spite of the tight norms, these engines are still used internationally in agriculture, gensets, etc. Several designs of variable compression ratio two-stroke engines are commercially available for analysis purposes. In the present investigation a novel method of changing the compression ratio is proposed, applied, studied, and analyzed. The clearance volume of the engine is altered by introducing a metal plug into the combustion chamber. This modification permitted four different values of clearance volume. In view of the studies required, the work is presented in two sections. The first part deals with the design, modification, engine fabrication, and testing at different compression ratios for the study of the performance of the engine. The second part deals with the combustion in engi...


    Chi Jingxiu; Zhang Jianwu; Xu Xiaorong


    Spectrum sensing is the fundamental task for Cognitive Radio (CR). To overcome the challenge of the high sampling rate in traditional spectral estimation methods, Compressed Sensing (CS) theory is developed. A sparsity and compression ratio joint adjustment algorithm for compressed spectrum sensing in a CR network is investigated, under the hypothesis that the sparsity level is not known a priori at the CR terminals. As perfect spectrum reconstruction is not necessarily required during the spectrum detection process, the proposed algorithm only performs a rough estimate of the sparsity level. Meanwhile, in order to further reduce the sensing measurement, different compression ratios for CR terminals with varying Signal-to-Noise Ratio (SNR) are considered. The proposed algorithm, which optimizes the compression ratio as well as the estimated sparsity level, can greatly reduce the sensing measurement without degrading the detection performance. It also requires fewer iteration steps for convergence. Corroborating simulation results are presented to verify the effectiveness of the proposed algorithm for collaborative spectrum sensing.

  20. The non-compressibility ratio for accurate diagnosis of lower extremity deep vein thrombosis

    Caecilia Marliana


    Background: Accurate identification of patients with deep vein thrombosis (DVT) is critical, as untreated cases can be fatal. It is well established that the specificity of the clinical signs and symptoms of DVT is low; therefore, clinicians rely on additional tests to make this diagnosis. There are three modalities for DVT diagnosis: clinical scoring, laboratory investigations, and radiology. The objective of this study was to determine the correlation of plasma D-dimer concentration with the ultrasonographic non-compressibility ratio in patients with DVT in the lower extremities. Methods: This research was a cross-sectional observational study. The sample comprised 25 subjects over 30 years of age with clinically diagnosed DVT in the lower extremities. In all subjects, D-dimer determination using a latex-enhanced turbidimetric test was performed, as well as ultrasonographic non-compressibility ratio assessment of the lower extremities. Data were analyzed using Pearson's correlation at a significance level of 0.05. Results: Mean plasma D-dimer concentration was 2953.00 ± 2054.44 mg/L. The highest mean non-compressibility ratio (59.96 ± 35.98%) was found in the superficial femoral vein and the lowest (42.68 ± 33.71%) in the common femoral vein. There was a moderately significant correlation between plasma D-dimer level and non-compressibility ratio in the popliteal vein (r=0.582; p=0.037). In the other veins of the lower extremities, no significant correlation was found. Conclusion: The sonographic non-compressibility ratio is an objective test for quick and accurate diagnosis of lower extremity DVT and for evaluation of DVT severity.

  1. Does an Arithmetic Coding Followed by Run-length Coding Enhance the Compression Ratio?

    Mohammed Otair


    Compression is a technique to minimize the size of an image without excessively decreasing its quality, so that transmitting a compressed image is much more efficient and rapid than transmitting the original. Arithmetic and Huffman coding are the most commonly used entropy coding techniques. This study tries to prove that RLC may be added after arithmetic coding as an extra processing step, which may therefore be coded efficiently without any further degradation of the image quality. So the main purpose of this study is to answer the following question: "Which entropy coding, arithmetic with RLC or Huffman with RLC, is more suitable from the compression ratio perspective?" Finally, experimental results show that arithmetic coding followed by RLC yields better compression performance than Huffman with RLC coding.
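
    A minimal run-length coding sketch of the kind that could follow an entropy coder (illustrative only; it operates on any symbol sequence):

      def rle_encode(data):
          """Collapse runs of repeated symbols into (symbol, count) pairs."""
          if not data:
              return []
          runs, count = [], 1
          for prev, cur in zip(data, data[1:]):
              if cur == prev:
                  count += 1
              else:
                  runs.append((prev, count))
                  count = 1
          runs.append((data[-1], count))
          return runs

      print(rle_encode("aaabbbbcd"))  # [('a', 3), ('b', 4), ('c', 1), ('d', 1)]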

  2. Sparse maximum harmonics-to-noise-ratio deconvolution for weak fault signature detection in bearings

    Miao, Yonghao; Zhao, Ming; Lin, Jing; Xu, Xiaoqiang


    De-noising and enhancement of the weak fault signature from the noisy signal are crucial for fault diagnosis, as features are often very weak and masked by the background noise. Deconvolution methods have a significant advantage in counteracting the influence of the transmission path and enhancing the fault impulses. However, the performance of traditional deconvolution methods is greatly affected by some limitations, which restrict the application range. Therefore, this paper proposes a new deconvolution method, named sparse maximum harmonics-to-noise-ratio deconvolution (SMHD), that employs a novel index, the harmonics-to-noise ratio (HNR), as the objective function for iteratively choosing the optimum filter coefficients to maximize the HNR. SMHD is designed to enhance latent periodic impulse faults from heavily noisy signals by calculating the HNR to estimate the period. A sparse factor is utilized to further suppress the noise and improve the signal-to-noise ratio of the filtered signal in every iteration step. In addition, the updating process of the sparse threshold value and the period guarantees the robustness of SMHD. On this basis, the new method not only overcomes the limitations associated with the traditional deconvolution methods, minimum entropy deconvolution (MED) and maximum correlated kurtosis deconvolution (MCKD), but also performs better under visual inspection, even if the fault period is not provided in advance. Moreover, the efficiency of the proposed method is verified by simulations and bearing data from different test rigs. The results show that the proposed method is effective in the detection of various bearing faults compared with the original MED and MCKD.

  3. Three-dimensional characteristics of solar coronal shocks determined from observations: geometry, kinematics, and compression ratio

    Kwon, Ryun Young; Vourlidas, Angelos


    We investigate the three-dimensional (3D) characteristics of coronal shocks associated with coronal mass ejections (CMEs), in terms of geometry, kinematics, and density compression ratio, employing a new method we have developed. The method uses multi-viewpoint observations from the STEREO-A, -B and SOHO coronagraphs. The 3D structure and kinematics of the coronal shock waves and the driving CMEs are derived separately using a forward modeling method. We analyze two CMEs that are observed as halos by the three spacecraft, with peak speeds over 2000 km s-1. From the 3D modeling, we find (1) the coronal shock waves are spherical, apparently enclosing the Sun, with angular widths much wider than those of the CMEs (92° and 252° versus 58° and 91°), indicating the shock waves are propagating away from the CMEs in the azimuthal directions, and (2) the speeds of the shock waves around the CME noses are comparable to those of the CME noses, but the speeds at the lateral flanks seem to be limited to the local fast magnetosonic speed. Applying our new method, we determine the electron densities in the shock sheaths, the downstream-to-upstream density ratios, and the Mach numbers. We find (1) the sheath electron densities decrease with height in general but have their maximum near the CME noses, (2) the density ratios and Mach numbers also seem to depend on the position angle from the CME nose to the far flank but are more or less constant in time, while the sheath electron densities and speeds decrease with time because of the reduced local Alfvén speed with height, and (3) the shocks could be supercritical in a wider spatial range, and for longer, than has been reported in the past. We conclude that the shock wave associated with an energetic CME is a phenomenon that becomes a non-driven (blast-type), nearly freely propagating wave at the flank, while remaining a driven (bow- and/or piston-type) wave near the CME nose.
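
    For reference, the density compression ratio across such a shock is tied to the Mach number by the Rankine-Hugoniot jump condition; in the standard gas-dynamic form (not specific to this paper's forward model),

        \frac{\rho_2}{\rho_1} = \frac{(\gamma + 1) M^2}{(\gamma - 1) M^2 + 2},

    which tends to 4 for a strong shock with \gamma = 5/3.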

  4. Normalized maximum intensity time ratio maps and morphological descriptors for assessment of malignancy in MR mammography.

    Ertas, Gokhan; Gulcur, H Ozcan; Tunaci, Mehtap


    The effectiveness of morphological descriptors based on normalized maximum intensity-time ratio (nMITR) maps, generated using a 3 x 3 pixel moving mask on dynamic contrast-enhanced magnetic resonance (MR) mammograms, is studied for assessment of malignancy. After a rough indication of the volume of interest on the nMITR maps, lesions are automatically segmented. Two-dimensional (2D) convexity, normalized complexity, extent, and eccentricity, as well as three-dimensional (3D) versions of these descriptors and the contact surface area ratio, are computed. On a data set consisting of dynamic contrast-enhanced (DCE) MR mammograms from 51 women that contain 26 benign and 32 malignant lesions, 3D convexity, complexity, and extent are found to reflect aggressiveness of malignancy better than the 2D descriptors. The contact surface area ratio, which is easily adaptable to different imaging resolutions, is found to be the most significant and accurate descriptor (75% sensitivity, 88% specificity, 89% positive predictive value, and 74% negative predictive value).

  5. Minimum Specific Fuel Consumption of a Liquid-Cooled Multicylinder Aircraft Engine as Affected by Compression Ratio and Engine Operating Conditions

    Brun, Rinaldo J.; Feder, Melvin S.; Harries, Myron L.


    An investigation was conducted on a 12-cylinder V-type liquid-cooled aircraft engine of 1710-cubic-inch displacement to determine the minimum specific fuel consumption at constant cruising engine speed and compression ratios of 6.65, 7.93, and 9.68. At each compression ratio, the effect of the following variables was investigated at manifold pressures of 28, 34, 40, and 50 inches of mercury absolute: temperature of the inlet air to the auxiliary-stage supercharger, fuel-air ratio, and spark advance. Standard sea-level atmospheric pressure was maintained at the auxiliary-stage supercharger inlet and the exhaust pressure was atmospheric. Advancing the spark timing from 34 deg and 28 deg B.T.C. (exhaust and intake, respectively) to 42 deg and 36 deg B.T.C. at a compression ratio of 6.65 resulted in a decrease of approximately 3 percent in brake specific fuel consumption. Further decreases in brake specific fuel consumption of 10.5 to 14.1 percent (depending on power level) were observed as the compression ratio was increased from 6.65 to 9.68, maintaining at each compression ratio the spark advance required for maximum torque at a fuel-air ratio of 0.06. This increase in compression ratio with a power output of 0.585 horsepower per cubic inch required a change from a fuel blend of 6-percent triptane with 94-percent 28-R fuel at a compression ratio of 6.65 to a fuel blend of 58-percent triptane with 42-percent 28-R fuel at a compression ratio of 9.68 to provide for knock-free engine operation. As an aid in the evaluation of engine mechanical endurance, peak cylinder pressures were measured on a single-cylinder engine at several operating conditions. Peak cylinder pressures of 1900 pounds per square inch can be expected at a compression ratio of 9.68 and an indicated mean effective pressure of 320 pounds per square inch. The engine durability was considerably reduced at these conditions.

  6. Describing adequacy of cure with maximum hardness ratios and non-linear regression.

    Bouschlicher, Murray; Berning, Kristen; Qian, Fang


    Knoop hardness (KH) ratios (HR) >= 80% are commonly used as criteria for the adequate cure of a composite. These per-specimen HRs can be misleading, as both numerator and denominator may increase concurrently prior to reaching an asymptotic, top-surface maximum hardness value (H(MAX)). Extended cure times were used to establish H(MAX), and descriptive statistics and non-linear regression analysis were used to describe the relationship between exposure duration and HR and to predict the time required for HR-H(MAX) = 80%. Composite samples, 2.00 mm thick x 5.00 mm in diameter (n = 5/group), were cured for 10, 20, 40, 60, 90, 120, 180 and 240 seconds in a 2-composite x 2-light-curing-unit design. A microhybrid (Point 4, P4) or microfill (Heliomolar, HM) resin composite was cured with a QTH or LED light curing unit and then stored in the dark for 24 hours prior to KH testing. The non-linear regression model H = (H(MAX) - c)(1 - e^(-kt)) + c, where H(MAX) = maximum hardness (a theoretical asymptotic value), c = a constant (the hardness at t = 0), k = rate constant and t = exposure duration, describes the relationship between radiant exposure (irradiance x time) and HR. Exposure durations for HR-H(MAX) = 80% were calculated. Two-sample t-tests for pairwise comparisons evaluated the relative performance of the light curing units for similar surface x composite x exposure (10-90 s) conditions. The goodness-of-fit of the non-linear regression, r^2, ranged from 0.68 to 0.95 (mean = 0.82). Microhybrid (P4) exposure to achieve HR-H(MAX) = 80% was 21 seconds for the QTH and 34 seconds for the LED light curing unit. Corresponding values for the microfill (HM) were 71 and 74 seconds, respectively. P4 HR-H(MAX) of LED vs QTH was statistically similar for 10 to 40 seconds, while HM HR-H(MAX) of LED was significantly lower than QTH for 10 to 40 seconds. It was concluded that redefined hardness ratios based on maximum hardness used in conjunction with non-linear regression
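
    Given the fitted parameters of the model above, the exposure needed to reach HR-H(MAX) = 80% follows in closed form. A minimal fitting sketch, with invented hardness readings rather than the study's data:

      import numpy as np
      from scipy.optimize import curve_fit

      def hardness(t, h_max, c, k):
          """H = (H_MAX - c)(1 - e^(-kt)) + c, the model quoted above."""
          return (h_max - c) * (1.0 - np.exp(-k * t)) + c

      t = np.array([10, 20, 40, 60, 90, 120, 180, 240], dtype=float)
      h = np.array([22, 34, 46, 52, 56, 58, 59, 60], dtype=float)  # mock KH data

      (h_max, c, k), _ = curve_fit(hardness, t, h, p0=(60.0, 10.0, 0.02))

      # Solve hardness(t80) = 0.8 * h_max for t80:
      t80 = -np.log(1.0 - (0.8 * h_max - c) / (h_max - c)) / k
      print(h_max, c, k, t80)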

  7. Behavior of High Water-cement Ratio Concrete under Biaxial Compression after Freeze-thaw Cycles

    SHANG Huaishuai; SONG Yupu; OU Jinping


    High water-cement ratio concrete specimens under biaxial compression were experimentally studied in a triaxial testing machine. The strength and deformation of plain concrete specimens were measured after 0, 25 and 50 freeze-thaw cycles. The influences of freeze-thaw cycles and stress ratio on the peak stress and the corresponding deformation were analyzed according to the experimental results. Based on the test data, a failure criterion expressed in terms of principal stress after different numbers of freeze-thaw cycles, and a failure criterion accounting for the influence of both freeze-thaw cycles and stress ratio, were proposed.

  8. Non-uniformly under-sampled multi-dimensional spectroscopic imaging in vivo: maximum entropy versus compressed sensing reconstruction.

    Burns, Brian; Wilson, Neil E; Furuyama, Jon K; Thomas, M Albert


    The four-dimensional (4D) echo-planar correlated spectroscopic imaging (EP-COSI) sequence allows for the simultaneous acquisition of two spatial (ky, kx) and two spectral (t2, t1) dimensions in vivo in a single recording. However, its scan time is directly proportional to the number of increments in the ky and t1 dimensions, and a single scan can take 20–40 min using typical parameters, which is too long to be used for a routine clinical protocol. The present work describes efforts to accelerate EP-COSI data acquisition by application of non-uniform under-sampling (NUS) to the ky–t1 plane of simulated and in vivo EP-COSI datasets then reconstructing missing samples using maximum entropy (MaxEnt) and compressed sensing (CS). Both reconstruction problems were solved using the Cambridge algorithm, which offers many workflow improvements over other l1-norm solvers. Reconstructions of retrospectively under-sampled simulated data demonstrate that the MaxEnt and CS reconstructions successfully restore data fidelity at signal-to-noise ratios (SNRs) from 4 to 20 and 5× to 1.25× NUS. Retrospectively and prospectively 4× under-sampled 4D EP-COSI in vivo datasets show that both reconstruction methods successfully remove NUS artifacts; however, MaxEnt provides reconstructions equal to or better than CS. Our results show that NUS combined with iterative reconstruction can reduce 4D EP-COSI scan times by 75% to a clinically viable 5 min in vivo, with MaxEnt being the preferred method.
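
    As a toy illustration of the reconstruction step (generic iterative soft thresholding for an l1-penalized least-squares problem, not the Cambridge algorithm used in the paper), one can recover a sparse signal from 4x under-sampled Fourier measurements:

      import numpy as np

      rng = np.random.default_rng(1)
      n = 256
      x_true = np.zeros(n)
      x_true[rng.choice(n, 8, replace=False)] = 2.0 + rng.normal(0, 0.5, 8)
      mask = np.zeros(n, dtype=bool)
      mask[rng.choice(n, n // 4, replace=False)] = True  # 4x under-sampling
      y = np.fft.fft(x_true)[mask]

      def A(x):    # forward operator: FFT, keep only the sampled locations
          return np.fft.fft(x)[mask]

      def At(z):   # adjoint: zero-fill the spectrum, inverse FFT (times n)
          full = np.zeros(n, dtype=complex)
          full[mask] = z
          return np.fft.ifft(full) * n

      # ISTA iterations for min_x 0.5*||Ax - y||^2 + lam*||x||_1
      x, step, lam = np.zeros(n), 1.0 / n, 5.0
      for _ in range(500):
          g = x - step * At(A(x) - y).real
          x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)

      print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))  # relative error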

  9. Analysis of Large-Strain Extrusion Machining with Different Chip Compression Ratios

    Wen Jun Deng


    Large-Strain Extrusion Machining (LSEM) is a newly introduced process for deforming materials to very high plastic strains to produce ultra-fine nanostructured materials. Before the technique can be exploited, it is important to understand the deformation behavior of the workpiece and its relationship to the machining parameters and friction conditions. This paper reports a finite-element method (FEM) analysis of the LSEM process to understand the evolution of the temperature field, effective strain, and strain rate under different chip compression ratios. The cutting and thrust forces are also analyzed with respect to time. The results show that LSEM can produce very high strains by changing the value of the chip compression ratio, thereby enabling the production of nanostructured materials. The shape of the chip produced by LSEM can also be geometrically well constrained.

  10. Effect of compression ratio on the performance, combustion and emission from a diesel engine using palm biodiesel

    Datta, Ambarish; Mandal, Bijan Kumar


    The authors have simulated a single-cylinder diesel engine using the Diesel-RK software to investigate the performance, emission and combustion characteristics of the engine fuelled with palm biodiesel and petro-diesel. The simulation has been carried out for three compression ratios of 16, 17 and 18 at a constant speed of 1500 rpm. The analysis of the simulation results shows that the brake thermal efficiency decreases and the brake specific fuel consumption increases when palm biodiesel is used instead of diesel. The thermal efficiency increases and the brake specific fuel consumption decreases with the increase of compression ratio. A higher compression ratio results in higher in-cylinder pressure and a higher heat release rate, as well as a shorter ignition delay. The NOx and CO2 emissions increase at higher compression ratio due to the higher pressure and temperature. On the other hand, the specific PM emission and smoke opacity are lower at higher compression ratio.

  11. Formation of compressed flat electron beams with high transverse-emittance ratios

    Zhu, J. (Fermilab; Institute of Fluid Physics, CAEP, China); Piot, P. (Northern Illinois University; Fermilab); Mihalcea, D. (Northern Illinois University); Prokop, C. R. (Northern Illinois University)


    Flat beams—beams with asymmetric transverse emittances—have important applications in novel light-source concepts and advanced-acceleration schemes and could possibly alleviate the need for damping rings in lepton colliders. Over the last decade, a flat beam generation technique based on the conversion of an angular-momentum-dominated beam was proposed and experimentally tested. In this paper we explore the production of compressed flat beams. We especially investigate and optimize the flat beam transformation for beams with substantial fractional energy spread. We use as a simulation example the photoinjector of Fermilab’s Advanced Superconducting Test Accelerator. The optimizations of the flat beam generation and compression at Advanced Superconducting Test Accelerator were done via start-to-end numerical simulations for bunch charges of 3.2 nC, 1.0 nC, and 20 pC at ~37 MeV. The optimized emittances of flat beams with different bunch charges were found to be 0.25 μm (emittance ratio is ~400), 0.13 μm, 15 nm before compression, and 0.41 μm, 0.20 μm, 16 nm after full compression, respectively, with peak currents as high as 5.5 kA for a 3.2-nC flat beam. These parameters are consistent with requirements needed to excite wakefields in asymmetric dielectric-lined waveguides or produce significant photon flux using small-gap micro-undulators.

  12. Impact and Mitigation of Multiantenna Analog Front-End Mismatch in Transmit Maximum Ratio Combining

    Liu, Jian; Khaled, Nadia; Petré, Frederik; Bourdoux, André; Barel, Alain


    Transmit maximum ratio combining (MRC) makes it possible to extend the range of wireless local area networks (WLANs) by exploiting spatial diversity and array gains. These gains, however, depend on the availability of channel state information (CSI). In this perspective, an open-loop approach in time-division-duplex (TDD) systems relies on channel reciprocity between up- and downlink to acquire the CSI. Although the propagation channel can be assumed to be reciprocal, the radio-frequency (RF) transceivers may exhibit amplitude and phase mismatches between the up- and downlink. In this contribution, we present a statistical analysis to assess the impact of these mismatches on the performance of transmit-MRC. Furthermore, we propose a novel mixed-signal calibration scheme to mitigate these mismatches, which reduces the implementation loss to as little as a few tenths of a dB. Finally, we also demonstrate the feasibility of the proposed calibration scheme on a real-time wireless MIMO-OFDM prototyping platform.
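
    A minimal numerical sketch of the effect analyzed here: transmit-MRC weights computed from a mismatched uplink estimate lose part of the beamforming gain. The mismatch levels are illustrative assumptions, not the paper's measured values.

      import numpy as np

      rng = np.random.default_rng(7)
      n_tx = 4
      h = (rng.normal(size=n_tx) + 1j * rng.normal(size=n_tx)) / np.sqrt(2)

      # Ideal transmit MRC from perfectly reciprocal CSI.
      w_ideal = np.conj(h) / np.linalg.norm(h)
      gain_ideal = np.abs(h @ w_ideal) ** 2       # equals ||h||^2

      # Per-branch amplitude/phase mismatch g distorts the uplink estimate.
      g = (1.0 + 0.1 * rng.normal(size=n_tx)) \
          * np.exp(1j * np.deg2rad(10.0) * rng.normal(size=n_tx))
      w_mis = np.conj(h * g) / np.linalg.norm(h * g)
      gain_mis = np.abs(h @ w_mis) ** 2

      print(10 * np.log10(gain_ideal / gain_mis))  # implementation loss, dB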

  13. Overlap maximum matching ratio (OMMR):a new measure to evaluate overlaps of essential modules

    Xiao-xia ZHANG; Qiang-hua XIAO; Bin LI; Sai HU; Hui-jun XIONG; Bi-hai ZHAO


    Protein complexes are the basic units of macro-molecular organization and help us to understand the cell's mechanisms. The development of the yeast two-hybrid, tandem affinity purification, and mass spectrometry high-throughput proteomic techniques supplies a large amount of protein-protein interaction data, which makes it possible to predict overlapping complexes through computational methods. Research shows that overlapping complexes can contribute to identifying essential proteins, which are necessary for the organism to survive and reproduce and for life's activities. Scholars have paid much attention to the evaluation of protein complexes; however, few have focused on predicted overlaps. In this paper, an evaluation criterion called the overlap maximum matching ratio (OMMR) is proposed to analyze the similarity between the identified overlaps and the benchmark overlap modules. Comparison of essential proteins and gene ontology (GO) analysis are also used to assess the quality of overlaps. We perform a comprehensive comparison of several overlapping-complex prediction approaches, using three yeast protein-protein interaction (PPI) networks. We focus on the analysis of overlaps identified by these algorithms. Experimental results indicate the importance of overlaps and reveal the relationship between overlaps and the identification of essential proteins.

  14. Studying the effect of compression ratio on an engine fueled with waste oil produced biodiesel/diesel fuel

    Mohammed EL_Kassaby


    Wasted cooking oil from restaurants was used to produce neat (pure) biodiesel through transesterification, which was then used to prepare biodiesel/diesel blends. The effect of blending ratio and compression ratio on diesel engine performance has been investigated. Emission and combustion characteristics were studied when the engine operated on the different blends (B10, B20, B30, and B50) and normal diesel fuel (B0), with the compression ratio varied from 14 to 16 to 18. The results show that the engine torque for all blends increases as the compression ratio increases. The BSFC for all blends decreases as the compression ratio increases, and at all compression ratios the BSFC remains higher for the higher blends as the biodiesel percentage increases. The change of compression ratio from 14 to 18 resulted in 18.39%, 27.48%, 18.5%, and 19.82% increases in brake thermal efficiency for B10, B20, B30, and B50, respectively. On average, the CO2 emission increased by 14.28%, the HC emission was reduced by 52%, the CO emission was reduced by 37.5% and the NOx emission increased by 36.84% when the compression ratio was increased from 14 to 18. In spite of the slightly higher viscosity and lower volatility of biodiesel, the ignition delay seems to be lower for biodiesel than for diesel. On average, the delay period decreased by 13.95% when the compression ratio was increased from 14 to 18. From this study, increasing the compression ratio had more benefits with biodiesel than with pure diesel.

  15. Maximum mass ratio of AM CVn-type binary systems and maximum white dwarf mass in ultra-compact X-ray binaries

    Arbutina Bojan


    AM CVn-type stars and ultra-compact X-ray binaries are extremely interesting semi-detached close binary systems in which the Roche-lobe-filling component is a white dwarf transferring mass to another white dwarf, a neutron star or a black hole. Earlier theoretical considerations show that there is a maximum mass ratio of AM CVn-type binary systems (qmax ≈ 2/3) below which the mass transfer is stable. In this paper we derive a slightly different value for qmax and, more interestingly, by applying the same procedure, we find the maximum expected white dwarf mass in ultra-compact X-ray binaries.
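
    The classical threshold follows from equating the logarithmic response of the Roche lobe radius to that of the donor. A sketch of that standard textbook argument (not the paper's refined derivation):

      from scipy.optimize import brentq

      # Conservative transfer between point masses with a Paczynski lobe,
      # R_L ∝ a (q/(1+q))^(1/3), gives d ln R_L / d ln M_donor = 2q - 5/3.
      zeta_rl = lambda q: 2.0 * q - 5.0 / 3.0
      zeta_wd = -1.0 / 3.0   # low-mass white dwarf donor: R ∝ M^(-1/3)

      # Marginal stability: the lobe shrinks exactly as fast as the star.
      q_max = brentq(lambda q: zeta_rl(q) - zeta_wd, 0.01, 2.0)
      print(q_max)  # 2/3, the classical value quoted above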

  16. Body Fineness Ratio as a Predictor of Maximum Prolonged-Swimming Speed in Coral Reef Fishes

    Walker, Jeffrey A.; Alfaro, Michael E.; Noble, Mae M.; Fulton, Christopher J.


    The ability to sustain high swimming speeds is believed to be an important factor affecting resource acquisition in fishes. While we have gained insights into how fin morphology and motion influence swimming performance in coral reef fishes, the role of other traits, such as body shape, remains poorly understood. We explore the ability of two mechanistic models of the causal relationship between body fineness ratio and endurance swimming performance to predict maximum prolonged-swimming speed (Umax) among 84 fish species from the Great Barrier Reef, Australia. A drag model, based on semi-empirical data on the drag of rigid, submerged bodies of revolution, was applied to species that employ pectoral-fin propulsion with a rigid body at Umax. An alternative model, based on the results of computer simulations of optimal shape in self-propelled undulating bodies, was applied to the species that swim by body-caudal-fin propulsion at Umax. For pectoral-fin swimmers, Umax increased with fineness, and the rate of increase decreased with fineness, as predicted by the drag model. While the mechanistic and statistical models of the relationship between fineness and Umax were very similar, the mechanistic (and statistical) model explained only a small fraction of the variance in Umax. For body-caudal-fin swimmers, we found a non-linear relationship between fineness and Umax, which was largely negative over most of the range of fineness. This pattern fails to support either predictions from the computational models or standard functional interpretations of body shape variation in fishes. Our results suggest that the widespread hypothesis that a more optimal fineness increases endurance-swimming performance via reduced drag should be limited to fishes that swim with rigid bodies. PMID:24204575

  17. Body fineness ratio as a predictor of maximum prolonged-swimming speed in coral reef fishes.

    Walker, Jeffrey A; Alfaro, Michael E; Noble, Mae M; Fulton, Christopher J


    The ability to sustain high swimming speeds is believed to be an important factor affecting resource acquisition in fishes. While we have gained insights into how fin morphology and motion influence swimming performance in coral reef fishes, the role of other traits, such as body shape, remains poorly understood. We explore the ability of two mechanistic models of the causal relationship between body fineness ratio and endurance swimming performance to predict maximum prolonged-swimming speed (Umax) among 84 fish species from the Great Barrier Reef, Australia. A drag model, based on semi-empirical data on the drag of rigid, submerged bodies of revolution, was applied to species that employ pectoral-fin propulsion with a rigid body at Umax. An alternative model, based on the results of computer simulations of optimal shape in self-propelled undulating bodies, was applied to the species that swim by body-caudal-fin propulsion at Umax. For pectoral-fin swimmers, Umax increased with fineness, and the rate of increase decreased with fineness, as predicted by the drag model. While the mechanistic and statistical models of the relationship between fineness and Umax were very similar, the mechanistic (and statistical) model explained only a small fraction of the variance in Umax. For body-caudal-fin swimmers, we found a non-linear relationship between fineness and Umax, which was largely negative over most of the range of fineness. This pattern fails to support either predictions from the computational models or standard functional interpretations of body shape variation in fishes. Our results suggest that the widespread hypothesis that a more optimal fineness increases endurance-swimming performance via reduced drag should be limited to fishes that swim with rigid bodies.

  18. A miniature Rotary Compressor with a 1:10 compression ratio

    Dmitriev, Olly; Tabota, Eugene; Arbon, Ian, EurIng, CEng, FIMechE


    Micro compressors have applications in medical devices, robotics and "nanosatellites". The problem of active cooling for photodetectors in "nanosatellites" is becoming more important because the majority of space missions target Earth observation, and passive cooling does not provide the temperatures required to achieve the desired SNR levels. Reciprocating compressors used in cryocoolers cause vibrations. VERT Rotors has built an ultra-low-vibration rotary compressor with 40-mm-long screws, and our prototype delivered a 1:10 compression ratio. This "nano" compressor is a non-conventional conical type consisting of an inner conical screw rotor revolving inside an outer screw rotor.

  19. Optimal Chest Compression Rate and Compression to Ventilation Ratio in Delivery Room Resuscitation: Evidence from Newborn Piglets and Neonatal Manikins

    Solevåg, Anne Lee; Schmölzer, Georg M.


    Cardiopulmonary resuscitation (CPR) duration until return of spontaneous circulation (ROSC) influences survival and neurologic outcomes after delivery room (DR) CPR. High-quality chest compressions (CC) improve cerebral and myocardial perfusion, and improved myocardial perfusion increases the likelihood of a faster ROSC. Thus, optimizing CC quality may improve outcomes both by preserving cerebral blood flow during CPR and by reducing the recovery time. CC quality is determined by the rate, the CC to ventilation (C:V) ratio, and the applied force, all of which are influenced by the CC provider; thus, provider performance should be taken into account. Neonatal resuscitation guidelines recommend a 3:1 C:V ratio, with CCs delivered at a rate of 90/min synchronized with ventilations at a rate of 30/min to achieve a total of 120 events/min. Although scientific evidence supporting this recommendation is lacking, the investigation of alternative CC interventions in human neonates is ethically challenging. Also, the infrequent occurrence of extensive CPR measures in the DR makes randomized controlled trials difficult to perform. Thus, many biomechanical aspects of CC have been investigated in animal and manikin models. Despite mathematical and physiological rationales that higher rates and uninterrupted CC improve CPR hemodynamics, studies indicate that provider fatigue is more pronounced when CC are performed continuously compared to when a pause is inserted after every third CC, as currently recommended. A higher rate (e.g., 120/min) is also more fatiguing, which affects CC quality. In post-transitional piglets with asphyxia-induced cardiac arrest, there was no benefit of performing continuous CC at a rate of 90/min. Not only the rate but also the duty cycle, i.e., the duration of CC/total cycle time, is a known determinant of CC effectiveness; however, the duty cycle cannot be controlled with manual CC. Mechanical/automated CC in neonatal CPR has not been explored, and feedback systems are under-investigated in this
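
    The guideline arithmetic quoted above (a 3:1 ratio delivered as 90 compressions plus 30 ventilations per minute) is easy to make explicit. A trivial sketch, ours for illustration only:

      def rates_for_ratio(cc, v, total_events=120):
          """Compressions/min and ventilations/min for a C:V ratio of cc:v,
          keeping the guideline total of 120 events per minute."""
          unit = total_events / (cc + v)
          return cc * unit, v * unit

      print(rates_for_ratio(3, 1))   # (90.0, 30.0), the recommended scheme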

  20. Minute ventilation at different compression to ventilation ratios, different ventilation rates, and continuous chest compressions with asynchronous ventilation in a newborn manikin

    Solevåg Anne L


    Background: In newborn resuscitation the recommended rate of chest compressions should be 90 per minute and 30 ventilations should be delivered each minute, aiming at achieving a total of 120 events per minute. However, this recommendation is based on physiological plausibility and consensus rather than scientific evidence. With focus on minute ventilation (Mv), we aimed to compare today's standard to alternative chest compression to ventilation (C:V) ratios and different ventilation rates, as well as to continuous chest compressions with asynchronous ventilation. Methods: Two investigators performed cardiopulmonary resuscitation on a newborn manikin with a T-piece resuscitator and manual chest compressions. The C:V ratios 3:1, 9:3 and 15:2, as well as continuous chest compressions with asynchronous ventilation (120 compressions and 40 ventilations per minute), were performed in a randomised fashion in series of 10 x 2 minutes. In addition, ventilation only was performed at three different rates (40, 60 and 120 ventilations per minute, respectively). A respiratory function monitor measured inspiration time, tidal volume and ventilation rate. Mv was calculated for the different interventions and the Mann-Whitney test was used for comparisons between groups. Results: Median Mv per kg in ml (interquartile range) was significantly lower at the C:V ratios of 9:3 (140 (134-144)) and 15:2 (77 (74-83)) as compared to 3:1 (191 (183-199)). With ventilation only, there was a correlation between ventilation rate and Mv despite a negative correlation between ventilation rate and tidal volumes. Continuous chest compressions with asynchronous ventilation gave higher Mv as compared to coordinated compressions and ventilations at a C:V ratio of 3:1. Conclusions: In this study, higher C:V ratios than 3:1 compromised ventilation dynamics in a newborn manikin. However, higher ventilation rates, as well as continuous chest compressions with asynchronous


    R. D. Eknath


    In the last 10 years, biodiesel has been studied extensively as an alternative fuel. Most researchers have reported the performance and emissions of biodiesel and its blends at a constant compression ratio, and almost all of this research was conducted with a single biodiesel and its blends. Few reports consider a variable compression ratio or blends of more than one biodiesel. The main aim of the present study is to analyse the effect of compression ratio on the performance and emissions of dual blends of biodiesel. In the present study, blends of Jatropha and Karanja with diesel fuel were tested on a single-cylinder VCR DI diesel engine at compression ratios of 16 and 18. A longer delay period, attributable to the higher density of the biodiesel, was observed for the Jatropha fuel compared with the Karanja fuel. However, blending the two biodiesels (K20J40D) results in a low mean gas temperature, which is the main reason for the low NOx emission.

  2. Influence of Compression Ratio on the Performance and Emission Characteristics of Annona Methyl Ester Operated DI Diesel Engine

    Senthil Ramalingam


    This study aims to find the optimum performance and emission characteristics of a single-cylinder variable compression ratio (VCR) engine with different blends of Annona methyl ester (AME) as fuel. The performance parameters such as specific fuel consumption (SFC) and brake thermal efficiency (BTE), and the emission levels of HC, CO, smoke, and NOx, were compared with those of diesel fuel. It is found that, at a compression ratio of 17:1, the A20 blended fuel (20% AME + 80% diesel) shows better performance and a lower emission level, very close to those of neat diesel fuel. The engine was operated at different values of compression ratio (15, 16, and 17) to find the best possible combination for operating the engine with blends of AME. It is also found that increasing the compression ratio increases the BTE, reduces the SFC, and lowers the emissions without any engine design modifications.

  3. Quantitative visually lossless compression ratio determination of JPEG2000 in digitized mammograms.

    Georgiev, Verislav T; Karahaliou, Anna N; Skiadopoulos, Spyros G; Arikidis, Nikos S; Kazantzi, Alexandra D; Panayiotakis, George S; Costaridou, Lena I


    The current study presents a quantitative approach towards visually lossless compression ratio (CR) threshold determination of JPEG2000 in digitized mammograms. This is achieved by identifying quantitative image quality metrics that reflect radiologists' visual perception in distinguishing between original and wavelet-compressed mammographic regions of interest containing microcalcification clusters (MCs) and normal parenchyma, originating from 68 images from the Digital Database for Screening Mammography. Specifically, image quality of wavelet-compressed mammograms (CRs, 10:1, 25:1, 40:1, 70:1, 100:1) is evaluated quantitatively by means of eight image quality metrics of different computational principles and qualitatively by three radiologists employing a five-point rating scale. The accuracy of the objective metrics is investigated in terms of (1) their correlation (r) with qualitative assessment and (2) ROC analysis (Az index), employing pooled radiologists' rating scores as ground truth. The quantitative metrics mean square error, mean absolute error, peak signal-to-noise ratio, and structural similarity demonstrated strong correlation with pooled radiologists' ratings (r, 0.825, 0.823, -0.825, and -0.826, respectively) and the highest area under the ROC curve (Az, 0.922, 0.920, 0.922, and 0.922, respectively). For each quantitative metric, the highest accuracy values of corresponding ROC curves were used to define metric cut-off values. The metric cut-off values were subsequently used to suggest a visually lossless CR threshold, estimated to be between 25:1 and 40:1 for the dataset analyzed. Results indicate the potential of the quantitative metrics approach in predicting visually lossless CRs in the case of MCs in mammography.
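
    Most of the metrics named above are available off the shelf; a short sketch of how such per-region scores might be computed (on synthetic stand-in images, not the DDSM data):

      import numpy as np
      from skimage.metrics import (mean_squared_error,
                                   peak_signal_noise_ratio,
                                   structural_similarity)

      rng = np.random.default_rng(3)
      original = rng.uniform(0, 1, (128, 128))                  # stand-in ROI
      compressed = original + rng.normal(0, 0.02, (128, 128))   # mock distortion

      mse = mean_squared_error(original, compressed)
      mae = np.mean(np.abs(original - compressed))  # mean absolute error
      psnr = peak_signal_noise_ratio(original, compressed, data_range=1.0)
      ssim = structural_similarity(original, compressed, data_range=1.0)
      print(mse, mae, psnr, ssim)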

  4. Compression Dispersion Efficiency of Recycled Aggregate Concrete Struts At Different Load Concentration Ratios

    Rakesh Kumar; P. K. Mehta; Devbrat Singh; Anup Kumar Pandey; Sarvesh Kumar


    Infrastructure development activities in India have increased manyfold in recent times. This has resulted in an increase in the demand for construction materials like cement, coarse aggregate, fine aggregate, etc. Huge quantities of concrete waste are produced by the demolition of old structures. If recycled aggregate from this waste is used for construction purposes, it will not only make the structures economical and eco-friendly but will also solve the problem of waste disposal. Recycling old waste concrete by crushing and grading it into coarse aggregate for use in new structural concrete has been drawing the attention of engineers, environmentalists and researchers for the last three decades. In this paper, an attempt has been made to study the compression dispersion behaviour of struts of natural coarse aggregate (NCA) and recycled coarse aggregate (RCA) concrete at different load concentration ratios and aspect ratios. For the study, struts of 450 mm height and 75 mm thickness, with widths varying from 75 mm to 450 mm, using NCA and RCA concrete, were cast. The testing of the struts was carried out on a loading frame of capacity 500 kN. The struts were tested to failure under in-plane compressive load applied through a symmetrically placed steel plate (75 x 75 x 10 mm) at the top and bottom of the struts.

  5. Performance and exhaust emission characteristics of variable compression ratio diesel engine fuelled with esters of crude rice bran oil.

    Vasudeva, Mohit; Sharma, Sumeet; Mohapatra, S K; Kundu, Krishnendu


    As a substitute for petroleum-derived diesel, biodiesel has high potential as a renewable and environment-friendly energy source. For petroleum-importing countries, the choice of feedstock for biodiesel production within the geographical region is a major influential factor. Crude rice bran oil is found to be a good and viable feedstock for biodiesel production. A two-step esterification is carried out for the high-free-fatty-acid crude rice bran oil. Blends of 10, 20 and 40 % by vol. crude rice bran biodiesel are tested in a variable compression ratio diesel engine at compression ratios of 15, 16, 17 and 18. Engine performance and exhaust emission parameters are examined. The cylinder pressure-crank angle variation is also plotted. The increase in compression ratio from 15 to 18 resulted in an 18.6 % decrease in brake specific fuel consumption and a 14.66 % increase in brake thermal efficiency on average. Cylinder pressure increases by 15 % when the compression ratio is increased. Carbon monoxide emission decreased by 22.27 %, hydrocarbon decreased by 38.4 %, carbon dioxide increased by 17.43 % and oxides of nitrogen (NOx) increased by 22.76 % on average when the compression ratio was increased from 15 to 18. The blends of crude rice bran biodiesel show better results than diesel as the compression ratio increases.

  6. Hybrid Energy Storage System Based on Compressed Air and Super-Capacitors with Maximum Efficiency Point Tracking (MEPT)

    Lemofouet, Sylvain; Rufer, Alfred

    This paper presents a hybrid energy storage system mainly based on compressed air, where the storage and withdrawal of energy are done under maximum-efficiency conditions. As these maximum-efficiency conditions impose the level of converted power, an intermittent time-modulated operation mode is applied to the thermodynamic converter to obtain a variable converted power. A smoothly variable output power is achieved with the help of a supercapacitive auxiliary storage device used as a filter. The paper describes the concept of the system, the power-electronic interfaces and especially the Maximum Efficiency Point Tracking (MEPT) algorithm and the strategy used to vary the output power. In addition, the paper introduces more efficient hybrid storage systems where the volumetric air machine is replaced by an oil-hydraulics and pneumatics converter used under isothermal conditions. Practical results are also presented, recorded from a low-power air motor coupled to a small DC generator, as well as from a first prototype of the hydro-pneumatic system. Some economic considerations are also made, through a comparative cost evaluation of the presented hydro-pneumatic systems and a lead-acid battery system, in the context of a stand-alone photovoltaic home application. This evaluation confirms the cost effectiveness of the presented hybrid storage systems.

  7. The Effect of Alkaline Activator Ratio on the Compressive Strength of Fly Ash-Based Geopolymer Paste

    Lăzărescu, A. V.; Szilagyi, H.; Baeră, C.; Ioani, A.


    Alkaline activation of fly ash is a particular procedure in which ash resulting from a power plant, combined with a specific alkaline activator, creates a solid material when dried at a certain temperature. In order to obtain desirable compressive strengths, the mix design of fly ash-based geopolymer pastes should be explored comprehensively. To determine the preliminary compressive strength of fly ash-based geopolymer paste using a Romanian material source, various ratios of Na2SiO3 solution to NaOH solution were produced, keeping the fly ash/alkaline activator ratio constant. All the mixes were then cured at 70 °C for 24 hours and tested at 2 and 7 days, respectively. The aim of this paper is to present the preliminary compressive strength results for producing fly ash-based geopolymer paste using Romanian material sources, to show the effect of the alkaline activator ratio on the compressive strength, and to outline directions for future research.

  8. Does limited gear ratio driven higher training cadence in junior cycling reflect in maximum effort sprint?

    Rannama, Indrek; Port, Kristjan; Bazanov, Boriss


    Maximum gears for youth category riders are limited. As a result, youth category riders are regularly compelled to ride in a high-cadence regime. The aim of this study was to investigate whether regular work in a high-cadence regime, due to the limited transmission in youth category riders, is reflected in the effective cadence at the point of maximal power generation during a 10-second sprint effort. 24 junior and youth national team cyclists' average maximal peak power at various cadence regimes was registere...




    As the population of the world increases, consumption of energy also increases tremendously. At the current consumption rate, it would not be wrong to say that there will be a great shortage of petroleum products in the upcoming decades. For this reason people are looking for alternative fuels. As ethanol is a main by-product of many industries nowadays, it is better to develop engines which can work on pure ethanol, or one can add ethanol to petrol or diesel and use the resulting blends. For this purpose, it is necessary to check the performance characteristics and emissions of the ethanol blends and to compare them with the pure fuels. It is also necessary to check the effect of compression ratio on the ethanol blends. In this paper the same has been conducted at a basic level.

  10. Efficiency and exhaust gas analysis of variable compression ratio spark ignition engine fuelled with alternative fuels

    N. Seshaiah


    Considering today's energy crises and pollution problems, investigations have been concentrated on decreasing fuel consumption by using alternative fuels and on lowering the concentration of toxic components in combustion products. In the present work, a variable compression ratio spark ignition engine designed to run on gasoline has been tested with pure gasoline, LPG (isobutene), and gasoline blended with ethanol at 10%, 15%, 25% and 35% by volume. The gasoline mixed with kerosene at 15%, 25% and 35% by volume has also been tested, without any engine modifications, and the results are presented. Brake thermal and volumetric efficiency variations with brake load are compared and presented. CO and CO2 emissions have also been compared for all tested fuels.

  11. Modeling the Plasma Flow in the Inner Heliosheath with a Spatially Varying Compression Ratio

    Nicolaou, G.; Livadiotis, G.


    We examine a semi-analytical non-magnetic model of the termination shock location previously developed by Exarhos & Moussas. In their study, the plasma flow beyond the shock is considered incompressible and irrotational, thus the flow potential is analytically derived from the Laplace equation. Here we examine the characteristics of the downstream flow in the heliosheath in order to resolve several inconsistencies in the Exarhos & Moussas model. In particular, the model is modified to be consistent with the Rankine-Hugoniot jump conditions and the geometry of the termination shock. It is shown that a shock compression ratio varying with latitude can lead to physically correct results. We describe the new model and present several simplified examples for a nearly spherical, strong termination shock. Under those simplifications, the upstream plasma is nearly adiabatic for a large (~100 AU) heliosheath thickness.

  12. Cycle-by-cycle Variations in a Direct Injection Hydrogen Enriched Compressed Natural Gas Engine Employing EGR at Relative Air-Fuel Ratios.

    Olalekan Wasiu Saheed


    Since the pressure development in a combustion chamber is uniquely related to the combustion process, substantial variations in the combustion process occur on a cycle-by-cycle basis. To this end, an experimental study of cycle-by-cycle variation in a direct injection spark ignition engine fuelled with natural gas-hydrogen blends combined with exhaust gas recirculation (EGR) at various relative air-fuel ratios was conducted. The impacts of the relative air-fuel ratio (λ = 1.0, 1.2, 1.3 and 1.4, representing stoichiometric, moderately lean, lean and very lean mixtures, respectively), the hydrogen fraction and the EGR rate were studied. The results showed that increasing the relative air-fuel ratio increases the COVIMEP, and the behavior is more pronounced at larger relative air-fuel ratios. Moreover, for a specified EGR rate, increasing the hydrogen fraction decreases the maximum COVIMEP value, just as increasing the EGR rate increases the maximum COVIMEP value: when the EGR rate is increased from 0% to 17% and 20%, the maximum COVIMEP value increases from 6.25% to 6.56% and 8.30%, respectively. Since the introduction of hydrogen gas reduces the cycle-by-cycle combustion variation in the engine cylinder, it can be concluded that the addition of hydrogen to a direct injection compressed natural gas engine employing EGR at various relative air-fuel ratios is a viable approach to obtain improved combustion quality, corresponding to a lower coefficient of variation in IMEP (COVIMEP).
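
    The COVIMEP figures quoted here follow the usual definition: the cycle-to-cycle standard deviation of IMEP divided by its mean, in percent. A small sketch with hypothetical cycle data:

      import numpy as np

      def cov_imep(imep_cycles):
          """COV_IMEP (%) = std(IMEP) / mean(IMEP) * 100 over cycles."""
          imep = np.asarray(imep_cycles, dtype=float)
          return imep.std(ddof=1) / imep.mean() * 100.0

      rng = np.random.default_rng(5)
      imep = rng.normal(6.0, 0.4, 100)   # invented IMEP series, bar
      print(cov_imep(imep))              # roughly 6-7% for this spread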




    The study targets finding the effects of an engine design parameter (the compression ratio) on performance, with regard to brake specific fuel consumption and brake thermal efficiency; on combustion parameters, viz. cylinder pressure, heat release rate (HRR) and rate of pressure rise (RPR); and on the emission of CO, CO2, HC and NOx, with diesel as the fuel. The study was carried out at different compression ratios (14-17) to find the optimum value at which lower emissions and better performance and combustion characteristics are obtained. It was found that as the compression ratio is increased, the brake thermal efficiency and brake power increase and the brake specific fuel consumption is slightly reduced. The combustion parameters CP, HRR and RPR all increase with increase in compression ratio. The emissions of CO2 and NOx increase steeply at high compression ratio. A combustion model of the engine was created in the STAR-CD software, and the experimental and theoretical cylinder pressure values were validated against each other.

  14. Maximum likelihood estimates with order restrictions on probabilities and odds ratios: A geometric programming approach

    D. L. Bricker


    The problem of assigning cell probabilities to maximize a multinomial likelihood with order restrictions on the probabilities and/or restrictions on the local odds ratios is modeled as a posynomial geometric program (GP), a class of nonlinear optimization problems with a well-developed duality theory and collection of algorithms. (Local odds ratios provide a measure of association between categorical random variables.) A constrained multinomial MLE example from the literature is solved, and the quality of the solution is compared with that obtained by the iterative method of El Barmi and Dykstra, which is based upon Fenchel duality. Exploiting the proximity of the GP model of MLE problems to linear programming (LP) problems, we also describe, as an alternative in the absence of special-purpose GP software, an easily implemented successive LP approximation method for solving this class of MLE problems using one of the readily available LP solvers.
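
    In the absence of special-purpose GP software, the same order-restricted multinomial MLE can also be sketched with a general-purpose nonlinear solver; the counts below are invented for illustration, and this is neither the authors' GP formulation nor their successive-LP method.

      import numpy as np
      from scipy.optimize import minimize

      counts = np.array([12.0, 9.0, 15.0, 30.0])   # observed cell counts
      k = len(counts)

      def neg_loglik(p):                           # -sum n_i log p_i
          return -np.sum(counts * np.log(p))

      cons = [{"type": "eq", "fun": lambda p: p.sum() - 1.0}]
      cons += [{"type": "ineq", "fun": (lambda p, i=i: p[i + 1] - p[i])}
               for i in range(k - 1)]              # order restriction p_1 <= ... <= p_k

      res = minimize(neg_loglik, np.full(k, 1.0 / k), method="SLSQP",
                     bounds=[(1e-9, 1.0)] * k, constraints=cons)
      print(res.x)   # pools adjacent violators, as isotonic MLE theory predicts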

  15. Comparison of Tissue-Maximum Ratio and Output Factors with the ESTRO Booklet 6 for a Siemens Primus Mevatron accelerator

    Lupiani Castellanos, J.; Quinones Rodriguez, L. A.; Richarte Reina, J. M.; Ramos Caballero, L. J.; Angulo Pain, E.; Castro Ramierez, I. J.; Iborra Oquendo, M. A.; Urena Llinares, A.


    The ESTRO Booklet 6 gives numerical data collected for four different field sizes and different accelerators for different beam qualities. Although the aim of this guide is the calculation and verification of monitor units, we have used the data for 6 MV photons from a Siemens Primus Mevatron accelerator to perform quality control of the experimental measurements of the tissue-maximum ratio (TMR) and the output factor (OF) in air and in phantom.

  16. Optimization of structural parameters for spatial flexible redundant manipulators with maximum ratio of load to mass

    ZHANG Xu-ping; YU Yue-qing


    Optimization of structural parameters aimed at improving the load carrying capacity of spatial flexible redundant manipulators is presented in this paper. In order to increase the ratio of load to mass of the robots, the cross-sectional parameters and the configurational parameters are first optimized separately, and then the cross-sectional and configurational parameters are optimized simultaneously. A numerical simulation of a 4R spatial manipulator is performed. The results show that the load capacity of the robots is greatly improved through the optimization strategies proposed in this paper.

  17. Gradient Compression Garments as a Countermeasure to Post-Space Flight Orthostatic Intolerance: Potential Interactions with the Maximum Absorbency Garment

    Lee, S. M. C.; Laurie, S. S.; Macias, B. R.; Willig, M.; Johnson, K.; Stenger, M. B.


    Astronauts and cosmonauts may experience symptoms of orthostatic intolerance during re-entry, landing, and for several days post-landing following short- and long-duration spaceflight. Presyncopal symptoms have been documented in approximately 20% of short-duration and greater than 60% of long-duration flyers on landing day specifically during 5-10 min of controlled (no countermeasures employed at the time of testing) stand tests or 80 deg head-up tilt tests. Current operational countermeasures to orthostatic intolerance include fluid loading prior to and whole body cooling during re-entry as well as compression garments that are worn during and for up to several days after landing. While both NASA and the Russian space program have utilized compression garments to protect astronauts and cosmonauts traveling on their respective vehicles, a "next-generation" gradient compression garment (GCG) has been developed and tested in collaboration with a commercial partner to support future space flight missions. Unlike previous compression garments used operationally by NASA that provide a single level of compression across only the calves, thighs, and lower abdomen, the GCG provides continuous coverage from the feet to below the pectoral muscles in a gradient fashion (from approximately 55 mmHg at the feet to approximately 16 mmHg across the abdomen). The efficacy of the GCG has been demonstrated previously after a 14-d bed rest study without other countermeasures and after short-duration Space Shuttle missions. Currently the GCG is being tested during a stand test following long-duration missions (6 months) to the International Space Station. While results to date have been promising, interactions of the GCG with other space suit components have not been examined. Specifically, it is unknown whether wearing the GCG over NASA's Maximum Absorbency Garment (MAG; absorbent briefs worn for the collection of urine and feces while suited during re-entry and landing) will

  18. Physical Layer Authentication Enhancement Using Maximum SNR Ratio Based Cooperative AF Relaying

    Jiazi Liu


    Physical layer authentication techniques developed in conventional macrocell wireless networks face challenges when applied to future fifth-generation (5G) wireless communications, due to the deployment of dense small cells in a hierarchical network architecture. In this paper, we propose a novel physical layer authentication scheme that exploits the advantages of amplify-and-forward (AF) cooperative relaying, which can increase the coverage and convergence of heterogeneous networks. The essence of the proposed scheme is to select the best relay among multiple AF relays for cooperation between the legitimate transmitter and the intended receiver in the presence of a spoofer. To achieve this goal, two best-relay selection schemes are developed by maximizing the ratio of the signal-to-noise ratio (SNR) of the legitimate link to that of the spoofing link at the destination and at the relays, respectively. In the sequel, we derive closed-form expressions for the outage probabilities of the effective SNR ratios at the destination. With the help of the best relay, a new test statistic is developed for making an authentication decision, based on the normalized channel difference between adjacent end-to-end channel estimates at the destination. The performance of the proposed authentication scheme is compared with that of a direct transmission in terms of outage and spoofing detection.

  19. Effect of strain rate and water-to-cement ratio on compressive mechanical behavior of cement mortar

    周继凯; 葛利梅


    Effects of strain rate and water-to-cement ratio on the dynamic compressive mechanical behavior of cement mortar are investigated by split Hopkinson pressure bar (SHPB) tests. 124 specimens are subjected to dynamic uniaxial compressive loadings. Strain rate sensitivity of the materials is measured in terms of failure modes, stress-strain curves, compressive strength, dynamic increase factor (DIF) and critical strain at peak stress. A significant change in the stress-strain response of the materials with each order of magnitude increase in strain rate is clearly seen from the test results. The slope of the stress-strain curve after the peak value is steeper for the low water-to-cement ratio mortar than for the high water-to-cement ratio mortar. The compressive strength increases with increasing strain rate, and with increase in strain rate the dynamic increase factor (DIF) increases. However, this increase in DIF with increase in strain rate does not appear to be a function of the water-to-cement ratio. The critical compressive strain increases with the strain rate.




    An experimental study was conducted on a four-stroke single-cylinder compression ignition engine to determine the performance, combustion and exhaust emission characteristics under different compression ratios using an alternative fuel. The raw oil from jatropha seeds was subjected to a transesterification process and supplied to the engine as jatropha methyl ester (JME) blended with diesel. The blends used in this paper are 10%, 20% and 30%. We found that the performance of the engine under VCR is best at the 20% blend for CR 18. The fuel consumption is also found to increase with a higher proportion of Jatropha curcas oil in the blend, but the BSFC is lowest at 20% JME-D. Emissions were found to be optimum at CR 18 for all blends of the methyl ester. At high engine load, the peak cylinder pressure was found to be higher for 20% JME-D under compression ratio 18. Using the STAR-CD software, three-dimensional simulations were deployed and the results generated were compared against the experimental output.

  1. Performance and emission of generator Diesel engine using methyl esters of palm oil and diesel blends at different compression ratio

    Aldhaidhawi, M.; Chiriac, R.; Bădescu, V.; Pop, H.; Apostol, V.; Dobrovicescu, A.; Prisecaru, M.; Alfaryjat, A. A.; Ghilvacs, M.; Alexandru, A.


    This study proposes an engine model to predict the performance and exhaust gas emissions of a single-cylinder four-stroke direct injection engine fuelled with diesel and with the palm oil methyl ester blends B7 (7% palm oil methyl ester with 93% diesel by volume) and B10. The experiment was conducted at a constant engine speed of 3000 rpm and different engine loads, with compression ratios of 18:1, 20:1 and 22:1. The influence of the compression ratio and fuel type on specific fuel consumption and brake thermal efficiency has been investigated and presented. The optimum compression ratio which yields better performance has been identified. The results from the present work confirm that biodiesel resulting from palm oil methyl ester could represent a superior alternative to diesel fuel when the engine operates with variable compression ratios. The blends, when used as fuel, result in a reduction of the brake specific fuel consumption and brake thermal efficiency, while NOx emissions increase when the engine is operated with biodiesel blends.

  2. To Improvement in Image Compression ratio using Artificial Neural Network Technique

    Shabbir Ahmad


    Compression of data in any form is a large and active field, as well as a big business. This paper presents a neural network-based technique that may be applied to data compression. The technique breaks down large images into smaller windows, eliminates redundant information, and uses a neural network trained by direct solution methods. Conventional techniques such as Huffman coding, the Shannon-Fano method, LZ methods, run-length coding and LZ77 are discussed, as well as more recent methods for the compression of data and images. Intelligent methods for data compression are reviewed, including the use of back-propagation and Kohonen neural networks. The proposed technique has been implemented in C on the SP2 and tested on digital mammograms and other images. The results obtained are presented in this paper.
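
    A linear network "trained by direct solution methods" is essentially principal component analysis: the optimal linear code for image windows can be computed from an SVD instead of iterative training. A self-contained sketch of that idea (our illustration, not the paper's exact architecture):

      import numpy as np

      rng = np.random.default_rng(2)
      image = rng.uniform(0, 255, (256, 256))   # stand-in for a test image

      # Break the image into 8x8 windows, one row vector per window.
      win = 8
      blocks = (image.reshape(256 // win, win, 256 // win, win)
                     .swapaxes(1, 2).reshape(-1, win * win))

      mean = blocks.mean(axis=0)
      _, _, vt = np.linalg.svd(blocks - mean, full_matrices=False)
      k = 12                              # code length per 64-pixel window
      encode = vt[:k].T                   # direct-solution encoder weights
      codes = (blocks - mean) @ encode    # compressed representation
      recon = codes @ encode.T + mean     # decoder reuses the same weights

      print("compression ratio %.1f:1" % (blocks.shape[1] / k))
      print("rms error", np.sqrt(np.mean((blocks - recon) ** 2)))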

  3. Impact of amplitude jitter and signal-to-noise ratio on the nonlinear spectral compression in optical fibres

    Boscolo, Sonia; Fatome, Julien; Finot, Christophe


    We numerically study the effects of amplitude fluctuations and signal-to-noise ratio degradation of the seed pulses on the spectral compression process arising from nonlinear propagation in an optical fibre. The rather good stability of the process against these pulse degradation factors is assessed in the context of optical regeneration of intensity-modulated signals, by combining nonlinear spectral compression with centered bandpass optical filtering. The results show that the proposed nonlinear processing scheme indeed achieves mitigation of the signal's amplitude noise. However, in the presence of jitter in the temporal duration of the pulses, the performance of the device deteriorates.

  4. On the low SNR capacity of maximum ratio combining over rician fading channels with full channel state information

    Benkhelifa, Fatma


    In this letter, we study the ergodic capacity of a maximum ratio combining (MRC) Rician fading channel with full channel state information (CSI) at the transmitter and at the receiver. We focus on the low signal-to-noise ratio (SNR) regime and we show that the capacity scales as (LΩ/(K + L)) · SNR · log(1/SNR), where Ω is the expected channel gain per branch, K is the Rician fading factor, and L is the number of diversity branches. We show that one-bit CSI feedback at the transmitter is enough to achieve this capacity using an on-off power control scheme. Our framework can be seen as a generalization of recently established results regarding the fading-channel capacity characterization in the low-SNR regime.

  5. Investigation on effect of equivalence ratio and engine speed on homogeneous charge compression ignition combustion using chemistry based CFD code

    Ghafouri Jafar


    Combustion in a large-bore natural gas fuelled diesel engine operating under Homogeneous Charge Compression Ignition mode at various operating conditions is investigated in the present paper. A Computational Fluid Dynamics model with an integrated chemistry solver is utilized, and methane is used as a surrogate for the natural gas fuel. A detailed chemical kinetics mechanism is used for the simulation of methane combustion. The model results are validated using experimental data by Aceves et al. (2000), obtained on a single-cylinder Volvo TD100 engine operating under Homogeneous Charge Compression Ignition conditions. After verification of the model predictions using in-cylinder pressure histories, the effect of varying the equivalence ratio and engine speed on the combustion parameters of the engine is studied. Results indicate that increasing the engine speed provides a shorter time for combustion at the same equivalence ratio, such that at higher engine speeds, at constant equivalence ratio, combustion misfires. At lower engine speed, the ignition delay is shortened and combustion advances. It was observed that increasing the equivalence ratio retards the combustion, due to the compressive heating effect, in one of the test cases at lower initial pressure. The peak pressure magnitude is increased at higher equivalence ratios due to the higher energy input.

  6. Prediction of CO Concentration and Maximum Smoke Temperature beneath Ceiling in Tunnel Fire with Different Aspect Ratio

    S. Gannouni


    In a tunnel fire, smoke and toxic gases remain the principal harmful factors for users. Heat is not considered a major direct danger to users, since temperatures at head height do not reach untenable levels until after a relatively long time, except near the fire source. However, temperatures under the ceiling can exceed threshold conditions and can thus cause structural collapse of the infrastructure. This paper presents a numerical analysis, by large eddy simulation, of the smoke hazard in tunnel fires with different aspect ratios. Results show that the CO concentration increases as the aspect ratio decreases, and decreases with the longitudinal ventilation velocity. CFD-predicted maximum smoke temperatures are compared to values calculated using the model of Li et al. and then to those given by the empirical equation proposed by Kurioka et al. A reasonably good agreement has been obtained. The backlayering length decreases as the ventilation velocity increases, and this decrease follows an exponential decay. The dimensionless interface height and the region of bad visibility increase with the aspect ratio of the tunnel cross-sectional geometry.

  7. Numerical Study of the Effect of the Sample Aspect Ratio on the Ductility of Bulk Metallic Glasses (BMGs) Under Compression

    Jiang, Yunpeng


    In this article, a systematic numerical study was conducted of the detailed shear banding evolution in bulk metallic glasses (BMGs) with various sample aspect ratios under uniaxial compression, whereby the effect of the sample aspect ratio on the compressive ductility was elucidated. A finite strain viscoelastic model was employed to describe shear band nucleation, growth, and coalescence in BMG samples with the help of Anand and Su's theory, which was incorporated into the ABAQUS finite element method code as a user material subroutine (VUMAT). The present numerical method was first verified by comparison with the corresponding experimental results, and a parameter analysis was then performed to discuss the impact of microstructure parameters on the predicted results. The present modeling will shed some light on enhancing the toughness of BMG structures in engineering applications.

  8. Variation of k_Qclin,Qmsr^fclin,fmsr for the small-field dosimetric parameters percentage depth dose, tissue-maximum ratio, and off-axis ratio

    Francescon, Paolo; Beddar, Sam; Satariano, Ninfa; Das, Indra J.


    Purpose: To evaluate the ability of different dosimeters to correctly measure the dosimetric parameters percentage depth dose (PDD), tissue-maximum ratio (TMR), and off-axis ratio (OAR) in water for small fields. Methods: Monte Carlo (MC) simulations were used to estimate the variation of k_Qclin,Qmsr^fclin,fmsr for several types of microdetectors as a function of depth and distance from the central axis for PDD, TMR, and OAR measurements. The variation of k_Qclin,Qmsr^fclin,fmsr enables one to evaluate the ability of a detector to reproduce the PDD, TMR, and OAR in water and consequently determine whether it is necessary to apply correction factors. The correctness of the simulations was verified by assessing the ratios between the PDDs and OARs of 5- and 25-mm circular collimators used with a linear accelerator, measured with two different types of dosimeters (the PTW 60012 diode and PTW PinPoint 31014 microchamber) and with the Exradin W1 plastic scintillator detector (PSD), and comparing those ratios with the corresponding ratios predicted by the MC simulations. Results: MC simulations reproduced the experimental results with acceptable accuracy; therefore, MC simulations can be used to successfully predict the behavior of different dosimeters in small fields. The Exradin W1 PSD was the only dosimeter that reproduced the PDDs, TMRs, and OARs in water with high accuracy. With the exception of the EDGE diode, the stereotactic diodes reproduced the PDDs and the TMRs in water with a systematic error of less than 2% at depths of up to 25 cm; however, they produced OAR values that were significantly different from those in water, especially in the tail region (lower than 20% in some cases). The microchambers could be used for PDD measurements for fields greater than those produced using a 10-mm collimator. However, with the detector stem parallel to the beam axis, the microchambers could be used for TMR measurements for all

  9. Investigations of effects of pilot injection with change in level of compression ratio in a common rail diesel engine

    Gajarlawar Nilesh


    These days, diesel engines are gaining considerable attention as prime movers for various modes of transportation. They offer better driveability, very good low-end torque and, importantly, lower CO2 emissions. Thanks to the common rail direct injection system, the noise, vibration and harshness levels of gasoline engines are realized to a great extent in diesel engines, bridging the gap between the two. The common rail injection system is now a well-known entity; its unique advantage is flexibility of operation. With a common rail injection system, a number of injections before and after the main injection, at different injection pressures, are possible. Gains in emission reduction as well as noise from multiple injections have already been demonstrated by researchers in the past. However, stringent emission norms for diesel-engined vehicles demand further reductions in emissions of oxides of nitrogen (NOx) and particulate matter (PM). In the present paper, the authors study the effect of multiple injections in combination with two levels of compression ratio. The aim was to study the combustion behavior with a reduced compression ratio, which is to be tried out as a low-temperature combustion concept in the near future. The results were compared with the current level of compression ratio. Experiments were carried out on a 2.2 L engine at two levels of compression ratio. Pilot injection separation and quantities were varied, keeping the main injection, rail pressure, boost pressure and EGR rate constant. Cylinder pressure traces and gross heat release rates were measured and analyzed to understand the combustion behavior.

  10. The Effect of Compression Ratio, Fuel Octane Rating, and Ethanol Content on Spark-Ignition Engine Efficiency.

    Leone, Thomas G; Anderson, James E; Davis, Richard S; Iqbal, Asim; Reese, Ronald A; Shelby, Michael H; Studzinski, William M


    Light-duty vehicles (LDVs) in the United States and elsewhere are required to meet increasingly challenging regulations on fuel economy and greenhouse gas (GHG) emissions as well as criteria pollutant emissions. New vehicle trends to improve efficiency include higher compression ratio, downsizing, turbocharging, downspeeding, and hybridization, each involving greater operation of spark-ignited (SI) engines under higher-load, knock-limited conditions. Higher octane ratings for regular-grade gasoline (with greater knock resistance) are an enabler for these technologies. This literature review discusses both fuel and engine factors affecting knock resistance and their contribution to higher engine efficiency and lower tailpipe CO2 emissions. Increasing compression ratios for future SI engines would be the primary response to a significant increase in fuel octane ratings. Existing LDVs would see more advanced spark timing and more efficient combustion phasing. Higher ethanol content is one available option for increasing the octane ratings of gasoline and would provide additional engine efficiency benefits for part and full load operation. An empirical calculation method is provided that allows estimation of expected vehicle efficiency, volumetric fuel economy, and CO2 emission benefits for future LDVs through higher compression ratios for different assumptions on fuel properties and engine types. Accurate "tank-to-wheel" estimates of this type are necessary for "well-to-wheel" analyses of increased gasoline octane ratings in the context of light duty vehicle transportation.
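
    The paper's empirical method is not reproduced in the abstract. As a first-order illustration of why a higher compression ratio raises efficiency, the ideal Otto-cycle relation eta = 1 - r^(1-gamma) can be evaluated directly (a textbook approximation, not the authors' calculation method).

```python
# Ideal Otto-cycle efficiency versus compression ratio r: eta = 1 - r**(1-gamma).
# gamma ~ 1.35 is a typical effective value for fuel-air mixtures; this is a
# first-order illustration, not the empirical method described in the paper.
def otto_efficiency(r, gamma=1.35):
    return 1.0 - r ** (1.0 - gamma)

for r in (10, 11, 12, 13):
    print(f"CR {r}:1 -> ideal efficiency {otto_efficiency(r):.3f}")
# Each added unit of compression ratio yields roughly 1-2% relative gain here.
```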

  11. Resolution and signal-to-noise ratio improvement in confocal fluorescence microscopy using array detection and maximum-likelihood processing

    Kakade, Rohan; Walker, John G.; Phillips, Andrew J.


    Confocal fluorescence microscopy (CFM) is widely used in the biological sciences because of its enhanced 3D resolution that allows image sectioning and removal of out-of-focus blur. This is achieved by rejection of the light outside a detection pinhole in a plane confocal with the illuminated object. In this paper, an alternative detection arrangement is examined in which the entire detection/image plane is recorded using an array detector rather than a pinhole detector. An attempt is then made to recover the object from the whole set of recorded photon array data; in this paper, maximum-likelihood estimation has been applied. The recovered object estimates are shown (through computer simulation) to have good resolution, image sectioning and signal-to-noise ratio compared with conventional pinhole CFM images.
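
    The abstract does not state the exact estimator. Richardson-Lucy iteration is the standard maximum-likelihood deconvolution for photon-limited (Poisson) data and is shown here as a hedged sketch of the kind of processing involved.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(data, psf, iters=50):
    """Maximum-likelihood (Poisson) estimate via Richardson-Lucy iteration;
    a standard ML deconvolution shown as a sketch -- the paper's estimator
    for array-detector confocal data may differ in detail."""
    data = np.asarray(data, dtype=float)
    psf = np.asarray(psf, dtype=float)
    psf /= psf.sum()
    psf_flip = psf[::-1, ::-1]
    est = np.full_like(data, data.mean())
    for _ in range(iters):
        blur = fftconvolve(est, psf, mode="same") + 1e-12  # forward model
        est *= fftconvolve(data / blur, psf_flip, mode="same")  # ML update
    return est
```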

  12. The influence of high-octane fuel blends on the performance of a two-stroke SI engine with knock-limited-compression ratio

    Poola, Ramesh B.; Bhasker, T.; Nagalingam, B.; Gopalakrishnan, K. V.

    The use of alcohol-gasoline blends enables the favorable features of alcohols to be utilized in spark ignition (SI) engines while avoiding the shortcomings of their application as straight fuels. Eucalyptus and orange oils possess high octane values and are also good potential alternative fuels for SI engines. Their high octane value can enhance the octane value of the overall fuel when they are blended with low-octane gasoline. In the present work, 20 percent by volume of orange oil, eucalyptus oil, methanol and ethanol were blended separately with gasoline, and the performance, combustion and exhaust emission characteristics were evaluated at two different compression ratios. The phase separation problems arising with alcohol-gasoline blends were minimized by adding eucalyptus oil as a cosolvent. Test results indicate that the compression ratio can be raised from 7.4 to 9 without any detrimental effect, due to the higher octane rating of the fuel blends. Knock-limited maximum brake output also increases due to extension of the knock limit. The knock limit is extended most by the methanol blend, followed by the eucalyptus oil, ethanol and orange oil blends, in that order.

  13. modified water-cement ratio law for compressive strength of rice ...


    Chemical analysis of RHA produced under controlled temperature of 600°C was carried out. A ... Test results show that the compressive strength of hardened RHA concrete ..... by [22]. However the loss on ignition (LOI) of 13.33 is lower.

  14. Energy Optimization of High-Compression-Ratio Combustion Chambers

    Douaud A.


    A synthesis of research undertaken at the Institut Français du Pétrole on understanding combustion, heat-transfer and knock phenomena, and on mastering them to optimize the efficiency of high-compression-ratio combustion chambers, has led to two proposed implementations: (a) a calm chamber with dual ignition; (b) a turbulent chamber with squish effect. The advantages in principle and the constraints connected with the implementation of each type of chamber are examined.

  15. Technique for Selecting Optimum Fan Compression Ratio based on the Effective Power Plant Parameters

    I. I. Kondrashov


    Nowadays, civil aircraft occupy the major share of the global aviation market. On medium- and long-haul aircraft, turbofans with separate exhaust streams are widely used, and fuel efficiency is the main criterion for such engines. The paper presents the results of research on the mutual influence of fan pressure ratio and bypass ratio on the effective specific fuel consumption, and shows that increasing the bypass ratio is a rational step for reducing fuel consumption. It also considers the basic features of engines with a high bypass ratio. Among the working process parameters, fan pressure ratio and bypass ratio are the most relevant for consideration, as they are the main design variables at a given level of technical excellence. The paper presents the dependence of the nacelle drag coefficient on the engine bypass ratio. The projected parameters of prospective turbofans for the power plant of a 180-seat medium-haul aircraft were adopted for the computation. The engine cycle was computed in Mathcad using these data, with fan pressure ratio and bypass ratio being varied. The combustion chamber gas temperature, the overall pressure ratio and the engine thrust remained constant, as did the pressure loss coefficients, the efficiencies of the engine components and the amount of air bled for cooling. The optimal parameters corresponding to the minimum effective specific fuel consumption were found as the result of the computation. The paper gives recommendations for adjusting the optimal parameters depending on external factors such as engine weight and required fuel reserve. The obtained data can be used to estimate the parameters of future turbofan engines with high bypass ratio.

  16. Using the Maximum X-ray Flux Ratio and X-ray Background to Predict Solar Flare Class

    Winter, Lisa M


    We present the discovery of a relationship between the maximum ratio of the flare flux (namely, the 0.5-4 Å flux to the 1-8 Å flux) and the non-flare background (namely, the 1-8 Å background flux), which clearly separates flares into classes by peak flux level. We established this relationship based on an analysis of the Geostationary Operational Environmental Satellites (GOES) X-ray observations of ~50,000 X, M, C, and B flares derived from the NOAA/SWPC flare catalog. Employing a combination of machine learning techniques (K-nearest-neighbors and nearest-centroid algorithms), we show a separation of the observed parameters for the different peak flaring energies. This analysis is validated by successfully predicting the flare classes for 100% of the X-class flares, 76% of the M-class flares, 80% of the C-class flares and 81% of the B-class flares for solar cycle 24, based on training with the parametric extracts for solar flares in cycles 22-23.
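
    As a rough sketch of the nearest-centroid step named above (the feature values below are placeholders; the study itself trains on GOES flux-ratio and background measurements from cycles 22-23):

```python
import numpy as np

def nearest_centroid(train_x, train_y, test_x):
    """Classify by distance to per-class centroids in feature space, e.g.
    (log max flux ratio, log background). Placeholder data, not GOES values."""
    classes = sorted(set(train_y))
    labels = np.array(train_y)
    centroids = np.array([train_x[labels == c].mean(axis=0) for c in classes])
    d = np.linalg.norm(test_x[:, None, :] - centroids[None, :, :], axis=2)
    return [classes[i] for i in d.argmin(axis=1)]

x = np.array([[3.0, -7.0], [2.5, -7.5], [0.5, -5.5], [0.8, -5.2]])
y = ["C", "C", "X", "X"]
print(nearest_centroid(x, y, np.array([[0.6, -5.4]])))  # -> ['X']
```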

  17. Effects of compression ratio on variation of stresses and residual oil of cake in pressing process of castor beans and its curve fitting

    刘汝宽; 许方雷; 肖志红; 李昌珠; 李辉; 曾凡涛; 叶红齐


    The relationships between compression ratio and stress, and between compression ratio and residual oil in the cake, during pressing of castor beans were studied using test equipment under different oilseed states and pressing methods. The results show that stress increases nonlinearly and residual oil rate decreases as the compression ratio increases. Lower residual oil in the cake was obtained by pressing gently and frequently. Curves were fitted to both relationships, and the model parameters were obtained by a least-squares procedure, deepening understanding of the pressing process of castor beans for castor oil. By assuming that the value of the oil produced is equal to the value of the energy consumed, the critical compression ratio is found to be 6.2 for intact seeds and 3.6 for crushed seeds.
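
    The abstract does not give the fitted functional forms. Assuming, purely for illustration, an exponential decay of residual oil with compression ratio, a least-squares fit of the kind described might look like this (all data values are invented):

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical residual-oil measurements (percent) versus compression ratio.
ratio = np.array([2.0, 3.0, 4.0, 5.0, 6.0, 7.0])
residual_oil = np.array([30.0, 22.0, 16.5, 13.0, 10.8, 9.5])

def model(x, a, b, c):
    return a * np.exp(-b * x) + c   # assumed decay law, not the paper's

params, _ = curve_fit(model, ratio, residual_oil, p0=(40.0, 0.5, 8.0))
print("least-squares parameters (a, b, c):", params)
```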

  18. Compilation of minimum and maximum isotope ratios of selected elements in naturally occurring terrestrial materials and reagents

    Coplen, T.B.; Hopple, J.A.; Böhlke, J.K.; Peiser, H.S.; Rieder, S.E.; Krouse, H.R.; Rosman, K.J.R.; Ding, T.; Vocke, R.D.; Revesz, K.M.; Lamberty, A.; Taylor, P.; De Bievre, P.


    Documented variations in the isotopic compositions of some chemical elements are responsible for expanded uncertainties in the standard atomic weights published by the Commission on Atomic Weights and Isotopic Abundances of the International Union of Pure and Applied Chemistry. This report summarizes reported variations in the isotopic compositions of 20 elements that are due to physical and chemical fractionation processes (not due to radioactive decay) and their effects on the standard atomic weight uncertainties. For 11 of those elements (hydrogen, lithium, boron, carbon, nitrogen, oxygen, silicon, sulfur, chlorine, copper, and selenium), standard atomic weight uncertainties have been assigned values that are substantially larger than analytical uncertainties because of common isotope abundance variations in materials of natural terrestrial origin. For 2 elements (chromium and thallium), recently reported isotope abundance variations potentially are large enough to result in future expansion of their atomic weight uncertainties. For 7 elements (magnesium, calcium, iron, zinc, molybdenum, palladium, and tellurium), documented isotope-abundance variations in materials of natural terrestrial origin are too small to have a significant effect on their standard atomic weight uncertainties. This compilation indicates the extent to which the atomic weight of an element in a given material may differ from the standard atomic weight of the element. For most elements given above, data are graphically illustrated by a diagram in which the materials are specified in the ordinate and the compositional ranges are plotted along the abscissa in scales of (1) atomic weight, (2) mole fraction of a selected isotope, and (3) delta value of a selected isotope ratio. There are no internationally distributed isotopic reference materials for the elements zinc, selenium, molybdenum, palladium, and tellurium. Preparation of such materials will help to make isotope ratio measurements among

  19. Effects of Nanosilica on Compressive Strength and Durability Properties of Concrete with Different Water to Binder Ratios

    Forood Torabian Isfahani


    The effects of adding different nanosilica dosages (0.5%, 1%, and 1.5% with respect to cement) on the compressive strength and durability properties of concrete with water/binder ratios of 0.65, 0.55, and 0.5 were investigated. Water sorptivity, apparent chloride diffusion coefficient, electrical resistivity, and carbonation coefficient of the concrete were measured. The results showed that compressive strength improved significantly for water/binder = 0.65, while for water/binder = 0.5 no change was found. With increasing nanosilica content, the water sorptivity decreased only for water/binder = 0.55. The addition of 0.5% nanosilica decreased the apparent chloride diffusion coefficient for water/binder = 0.65 and 0.55; however, higher nanosilica dosages did not decrease it with respect to the reference value. The resistivity was raised by 0.5% nanosilica for all water/binder ratios, and by 1.5% nanosilica only for water/binder = 0.5. The carbonation coefficient was not notably affected by increasing nanosilica dosages, and an adverse effect was even observed for water/binder = 0.65. Further information on the microstructure was provided through characterization techniques such as X-ray diffraction, thermal gravimetric analysis, mercury intrusion porosimetry, and scanning electron microscopy. The effectiveness of a given nanosilica dosage was more noticeable in the lower strength mixes and less so in the higher strength mix.

  20. Knock-Limited Performance of Triptane and 28-R Fuel Blends as Affected by Changes in Compression Ratio and in Engine Operating Variables

    Brun, Rinaldo J.; Feder, Melvin S.; Fisher, William F.


    A knock-limited performance investigation was conducted on blends of triptane and 28-R fuel with a 12-cylinder, V-type, liquid-cooled aircraft engine of 1710-cubic-inch displacement at three compression ratios: 6.65, 7.93, and 9.68. At each compression ratio, the effects of changes in the temperature of the inlet air to the auxiliary-stage supercharger and in fuel-air ratio were investigated at engine speeds of 2280 and 3000 rpm. The results show that knock-limited engine performance, as improved by the use of triptane, allowed operation at both take-off and cruising power at a compression ratio of 9.68. At an inlet-air temperature of 60 deg F, an engine speed of 3000 rpm, and a fuel-air ratio of 0.095 (approximately take-off conditions), a knock-limited engine output of 1500 brake horsepower was possible with 100-percent 28-R fuel at a compression ratio of 6.65; 20-percent triptane was required for the same power output at a compression ratio of 7.93, and 75 percent at a compression ratio of 9.68 allowed an output of 1480 brake horsepower. Knock-limited power output was more sensitive to changes in fuel-air ratio as the engine speed was increased from 2280 to 3000 rpm, as the compression ratio was raised from 6.65 to 9.68, or as the inlet-air temperature was raised from 0 deg to 120 deg F.

  1. Sonoelastographic evaluation with the determination of compressibility ratio for symmetrical prostatic regions in the diagnosis of clinically significant prostate cancer

    Artur Przewor


    Aim: Sonoelastography is a technique that assesses tissue hardness/compressibility. The utility and sensitivity of the method in prostate cancer diagnostics were assessed against the current gold standard, systematic biopsy. Material and methods: The study involved 84 patients suspected of prostate cancer based on elevated PSA levels or abnormal per-rectal examination findings. Sonoelastography was used to evaluate the prostate gland. For regions with hardness two-fold greater than that of the symmetric prostate area (strain ratio >2), targeted biopsy was used, followed by an ultrasound-guided 8- or 10-core systematic biopsy (regardless of the sonoelastography-indicated sites) as a reference. Results: The mean age of the patients was 69 years. PSA serum levels ranged between 1.02 and 885 ng/dl. The mean prostate volume was 62 ml (19-149 ml). Prostate cancer was found in 39 of the 84 individuals. Statistically significant differences in strain ratios between cancers and benign lesions were shown. Sonoelastography-guided biopsy revealed 30 lesions: overall sensitivity 77% (sensitivity of the method, 81%). Sonoelastographic sensitivity increased with cancer grade according to the Gleason grading system: 6, 60%; 7, 75%; 8, 83%; 9/10, 100%. The estimated sensitivity of systematic biopsy was 92%. Conclusions: Sonoelastography shows higher diagnostic sensitivity in prostate cancer diagnostics than conventional imaging techniques, i.e. grey-scale TRUS and Doppler ultrasound. It allows the number of collected tissue cores to be reduced, limiting both the incidence of complications and the costs involved. Sonoelastography using the determination of the compressibility ratio for symmetrical prostatic regions may prove useful in the detection of clinically significant prostate cancer.

  3. Theoretical modeling of combustion characteristics and performance parameters of biodiesel in DI diesel engine with variable compression ratio

    Al-Dawody, Mohamed F.; Bhatti, S.K. [Department of Mechanical Engineering, Andhra University (India)


    The increasing cost and depletion of fossil fuels are prompting researchers to use edible as well as non-edible vegetable oils as a promising alternative to petro-diesel fuels. A comprehensive computer code in the Quick Basic language was developed for the diesel engine cycle to study the combustion and performance characteristics of a single-cylinder, four-stroke, direct-injection diesel engine with variable compression ratio. The engine operates on diesel fuel and on a 20% (mass basis) blend of biodiesel (derived from soybean oil) with diesel. Combustion characteristics such as cylinder pressure, heat release fraction and heat transfer, and performance characteristics such as brake power and brake specific fuel consumption (BSFC), were analyzed. On the basis of the first law of thermodynamics, the properties at each degree of crank angle were calculated. A Wiebe function is used to calculate the instantaneous heat release rate. The computed results are validated against results obtained with the Diesel-RK simulation software.
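
    The Wiebe function mentioned above has the standard form x_b = 1 - exp[-a((θ - θ0)/Δθ)^(m+1)]; a minimal sketch with typical constants (not the values used in the paper):

```python
import numpy as np

def wiebe_burn_fraction(theta, theta0=-5.0, dtheta=60.0, a=5.0, m=2.0):
    """Cumulative mass fraction burned versus crank angle theta (degrees):
    x_b = 1 - exp(-a*((theta - theta0)/dtheta)**(m + 1)).
    a and m are typical empirical constants, not the paper's values."""
    x = np.clip((theta - theta0) / dtheta, 0.0, None)
    return 1.0 - np.exp(-a * x ** (m + 1))

print(wiebe_burn_fraction(np.linspace(-20.0, 80.0, 6)))
# The instantaneous heat-release rate follows as Q_total * d(x_b)/d(theta).
```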

  4. Environmental Assessment of a Diesel Engine Under Variable Stroke Length and Constant Compression Ratio

    Jehad A.A. Yamin


    In the light of the energy crisis and stringent environmental regulations, diesel engines offer good prospects for automotive vehicles. However, much work is needed to reduce diesel exhaust emissions and pave the way for full utilization of the diesel fuel's excellent characteristics. This paper presents a theoretical study of the effect of the variable stroke length technique on the emissions of a four-stroke, water-cooled, direct-injection diesel engine, with the help of experimentally verified computer software designed mainly for diesel engines. The emission levels were studied over the speed range (1000 rpm to 3000 rpm) and stroke lengths (120 mm to 200 mm) and were compared with those of the original engine design. The simulation results clearly indicate the advantages and utility of the variable stroke technique in reducing exhaust emission levels. A reduction of about 10% to 75% was achieved for specific particulate matter over the entire speed range and bore-to-stroke ratios studied. Further, a reduction of about 10% to 59% was achieved for the same range. For carbon dioxide, a reduction of 0% to 37% was achieved. On the other hand, smaller percentage changes were achieved for nitrogen dioxide and nitrogen oxides, as indicated by the results. This study clearly shows the advantage of variable stroke engines over fixed stroke engines, and that the variable stroke technique is a good way to curb diesel exhaust emissions and hence make these engines more environmentally friendly.

  5. Optimizing Chest Compression to Rescue Ventilation Ratios During One-Rescuer CPR by Professionals and Lay Persons: Children are Not Just Little Adults

    Babbs, Charles F.; Nadkarni, Vinay


    Objective: To estimate the optimum ratio of chest compressions to ventilations for one-rescuer CPR that maximizes systemic oxygen delivery in children. Method: Equations describing oxygen delivery and blood flow during CPR as functions of the number of compressions and the number of ventilations delivered over time were adapted from the earlier work of Babbs and Kern. These equations were solved explicitly as a function of body weight, using scaling algorithms based upon principles of developme...

  6. Effect of compression, digital noise reduction and directionality on envelope difference index, log-likelihood ratio and perceived quality

    Chinnaraj Geetha


    The aim of the present study was to evaluate the use of the envelope difference index (EDI) and log-likelihood ratio (LLR) to quantify the independent and interactive effects of wide dynamic range compression, digital noise reduction and directionality, and to carry out self-rated quality measures. A recorded sentence embedded in speech spectrum noise at a +5 dB signal-to-noise ratio was presented to a four-channel digital hearing aid, and the output was recorded with different combinations of algorithms at presentation levels of 30, 45 and 70 dB HL through a 2 cc coupler. EDI and LLR were obtained in comparison with the original signal using MATLAB software. In addition, thirty participants with normal hearing sensitivity rated the output on the loudness and clarity parameters of quality. The results revealed that the temporal changes at the output are independent of the number of algorithms activated together in a hearing aid. However, at a higher presentation level, temporal cues are better preserved if all of these algorithms are deactivated. The spectral components of speech tend to be affected by the presentation level. The results also indicate the importance of quality rating, as this helps in considering whether the spectral and/or temporal deviations created in the hearing aid are desirable or not.

  7. Ultrasound beamforming using compressed data.

    Li, Yen-Feng; Li, Pai-Chi


    The rapid advancements in electronics technologies have made software-based beamformers for ultrasound array imaging feasible, thus facilitating the rapid development of high-performance and potentially low-cost systems. However, one challenge to realizing a fully software-based system is transferring data from the analog front end to the software back end at rates of up to a few gigabits per second. This study investigated the use of data compression to reduce the data transfer requirements and optimize the associated trade-off with beamforming quality. JPEG and JPEG2000 compression techniques were adopted. The acoustic data of a line phantom were acquired with a 128-channel array transducer at a center frequency of 3.5 MHz, and the acoustic data of a cyst phantom were acquired with a 64-channel array transducer at a center frequency of 3.33 MHz. The receive-channel data associated with each transmit event were separated into 8 × 8 blocks for JPEG compression and into several tiles for JPEG2000 compression. In one scheme, the compression was applied to raw RF data, while in another only the amplitude of baseband data was compressed. The maximum compression ratio of RF data compression producing an average error below 5 dB was 15 with JPEG compression and 20 with JPEG2000 compression. The image quality is higher with baseband amplitude data compression than with RF data compression; although the maximum overall compression ratio (relative to the original RF data size) was lower than 12, being limited by the size of the uncompressed phase data, the average error in this case was below 1 dB for compression ratios below 8.
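
    JPEG itself is not reimplemented here. As a hedged stand-in illustrating the compression-ratio/error trade-off the study measures, the toy below applies a blockwise DCT to channel data and discards the smaller coefficients (block size and threshold are illustrative, not the study's settings).

```python
import numpy as np
from scipy.fft import dctn, idctn

def block_dct_compress(channel_data, keep_frac=0.1, block=8):
    """Blockwise-DCT toy compressor: keep the largest `keep_frac` of the
    coefficients in each block and report (compression ratio, error in dB
    relative to signal power). Not the actual JPEG/JPEG2000 pipelines."""
    h, w = (d - d % block for d in channel_data.shape)
    x = channel_data[:h, :w].astype(float)
    out = np.empty_like(x)
    kept = 0
    for i in range(0, h, block):
        for j in range(0, w, block):
            c = dctn(x[i:i + block, j:j + block], norm="ortho")
            c[np.abs(c) < np.quantile(np.abs(c), 1.0 - keep_frac)] = 0.0
            kept += np.count_nonzero(c)
            out[i:i + block, j:j + block] = idctn(c, norm="ortho")
    err_db = 10.0 * np.log10(np.mean((x - out) ** 2) / np.mean(x ** 2))
    return x.size / max(kept, 1), err_db

rng = np.random.default_rng(1)
print(block_dct_compress(rng.standard_normal((64, 128))))
# Random data compress poorly; correlated RF channel data fare much better.
```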

  8. Effect of product compression ratio in the long transfer screw conveyor

    崔斌; 孔震; 徐飞翔; 赵雪峰; 韩俊巍


    In this article, we present the concept of product compression ratio, analyse its effect on long-distance screw conveyor transfer, and give a solution for the jamming caused by product compression in long transfer screw conveyors.

  9. Comparison of the emissions and SFC for 10:1 and 12:1 compression ratio 1.8 litre SI engines using lean mixtures

    Andrews, G.; Osses, M.; Desai, M.; Haralambidis, E. [Leeds Univ. (United Kingdom); Ounzain, A.; Robertson, G. [Ford Motor Co. Ltd., Dagenham (United Kingdom)


    A standard 10:1 compression ratio Ford Zetec engine was modified to a 12:1 compression ratio and investigated over the lean combustion region, with a comparison against the base 10:1 compression ratio engine. All the comparisons were carried out at the same power output as for stoichiometric operation, using throttle adjustments to achieve the increased power as the mixture was made leaner. The aim of the higher compression ratio was to increase the power in the lean combustion region and thus to extend the lean burning limit for the same power output. The lean limit was then set as the wide-open-throttle lean combustion for a desired power output. The power outputs studied were 10 and 15 kW at 1500 r/min, which is typical of the low-power urban part of the EC emissions test cycle. Additional reductions in NOx using spark timing control were also investigated for lean mixtures. The lean burning capability at different conditions was investigated at the minimum fuel consumption spark timing and at 5% above the minimum SFC, but still below the SFC for stoichiometric operation. The borderline detonation limit (BLDL) was detected using a cylinder pressure transducer, and the spark timing loops were limited by the BLDL timing. The higher compression ratio was shown to extend the lean burning limit from 22/1 to 26/1 at a constant 10 kW power output. This was accompanied by an increase in NOx, hydrocarbons and CO. The extended lean limit was not effectively usable at 10 kW power due to the large increase in hydrocarbons for mixtures leaner than 22/1; thus the lower NOx emissions in this region could not be exploited, and there was little advantage, from a lean combustion viewpoint, in operation at a 12/1 compression ratio. However, at the 15 kW power output condition there were clear SFC advantages and smaller NOx reductions for the higher-compression-ratio lean burn engine.

  10. Analysis and implementation of an electric power waveform data compression method with a high compression ratio

    党三磊; 肖勇; 杨劲锋; 申妍华


    Power quality monitors and waveform recorders are very important equipment for the security and stability analysis of the electric power system, and their core technology is a waveform data compression method with a high compression ratio. In this paper, commonly used data compression and coding methods are studied first. Taking advantage of the periodicity, boundedness and redundancy of power system waveform data, compression methods based on the DCT and on the lifting wavelet transform, using run-length coding and EZW coding respectively, were implemented on a DSP platform. The implementation, performance and reconstruction quality of the two compression methods are then comprehensively analyzed. The compression method based on the lifting wavelet transform with EZW coding can record abrupt data changes and offers an adjustable compression ratio and reconstruction accuracy, making it more suitable for compressing large amounts of power system fault waveform data.
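
    As a minimal sketch of the lifting idea behind the wavelet transform named in the record (one Haar predict/update level; the DSP implementation with EZW coding described in the paper is considerably more involved):

```python
import numpy as np

def haar_lifting_forward(x):
    """One lifting level with Haar steps: predict (detail = odd - even),
    then update (approx = even + detail/2, i.e. the pairwise mean)."""
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    detail = odd - even
    approx = even + detail / 2.0
    return approx, detail

sig = np.array([2.0, 4.0, 6.0, 6.0, 5.0, 1.0])
print(haar_lifting_forward(sig))  # smooth trend plus transient detail
```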

  11. Compression Ratio Ion Mobility Programming (CRIMP) Accumulation and Compression of Billions of Ions for Ion Mobility-Mass Spectrometry Using Traveling Waves in Structures for Lossless Ion Manipulations (SLIM)

    Deng, Liulin; Garimella, Sandilya V. B.; Hamid, Ahmed M.; Webb, Ian K.; Attah, Isaac K.; Norheim, Randolph V.; Prost, Spencer A.; Zheng, Xueyun; Sandoval, Jeremy A.; Baker, Erin S.; Ibrahim, Yehia M.; Smith, Richard D.


    We report on the implementation of a traveling wave (TW) based compression ratio ion mobility programming (CRIMP) approach within Structures for Lossless Ion Manipulations (SLIM) that enables both greatly enlarged trapped ion charge capacities and also their subsequent efficient compression for use in ion mobility (IM) separations. Ion accumulation is conducted in a long serpentine path TW SLIM region, after which CRIMP allows the large ion populations to be 'squeezed'. The compression process occurs at an interface between two SLIM regions, one operating conventionally and the second having an intermittently pausing or 'stuttering' TW, allowing the contents of multiple bins of ions from the first region to be merged into a single bin in the second region. In this initial work, stationary voltages in the second region were used to block ions from exiting the first (trapping) region; resumption of TWs in the second region allows ions to exit, and the population to also be compressed if CRIMP is applied. In our initial evaluation we show that the number of charges trapped for a 40 s accumulation period was ~5 × 10^9, more than two orders of magnitude greater than the previously reported charge capacity of an ion funnel trap. We also show that over 1 × 10^9 ions can be accumulated with high efficiency in the present device, and that the extent of subsequent compression is only limited by the space charge capacity of the trapping region. Lower compression ratios allow increased IM peak heights without significant loss of signal, while excessively large compression ratios can lead to ion losses and other artifacts. Importantly, we show that extended ion accumulation in conjunction with CRIMP and multiple passes provides the basis for a highly desirable combination of ultra-high sensitivity and ultra-high resolution IM separations using SLIM.
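
    The essence of CRIMP, merging the contents of several traveling-wave bins into one, can be caricatured in a few lines (a toy picture that ignores the ion optics and the space charge limits discussed above):

```python
import numpy as np

def crimp_compress(bins, ratio):
    """Merge `ratio` adjacent traveling-wave bins into one, as when the
    second region's TW pauses. Peak height grows ~`ratio`-fold while the
    occupied length shrinks; real space-charge limits are ignored."""
    n = len(bins) - len(bins) % ratio
    return np.asarray(bins[:n]).reshape(-1, ratio).sum(axis=1)

profile = np.exp(-0.5 * ((np.arange(100) - 50) / 10.0) ** 2)  # an IM peak
print(profile.max(), crimp_compress(profile, 5).max())  # taller, narrower
```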

  12. Class B Fire-Extinguishing Performance Evaluation of a Compressed Air Foam System at Different Air-to-Aqueous Foam Solution Mixing Ratios

    Dong-Ho Rie


    The purpose of this research is to evaluate the fire-extinguishing performance of a compressed air foam system at different mixing ratios of pressurized air. In this system, compressed air is injected into an aqueous foam solution and then discharged. The experimental device uses an exclusive compressed-air-foam fire-extinguishing technology based on the Canada National Laboratory and UL (Underwriters Laboratories) 162 standards, with a 20-unit oil fire model (Class B) applied for the extinguishing tests. Compressed air is injected through the air mixture, and results with air-to-aqueous-foam-solution ratios of 1:4, 1:7, and 1:10 are studied. In addition, comparison experiments between a synthetic surfactant foam and an aqueous film-forming foam are carried out at an air-to-aqueous-foam-solution ratio of 1:4. The experimental results show that, at identical discharge flows, the fire-extinguishing effect of the aqueous film-forming foam is greatest at an air-to-solution ratio of 1:7 and weakest at 1:10. Moreover, in the comparison between the aqueous film-forming foam and the synthetic surfactant foam, the film-forming foam showed the greater fire-extinguishing effect.

  13. Experimental evaluation of the effect of compression ratio on performance and emission of SI engine fuelled with gasoline and n-butanol blend at different loads

    Rinu Thomas


    The never-ending demand for efficient and less polluting engines has always inspired newer technologies. Extensive study has been done in the recent past on variable compression ratio, a promising in-cylinder technology. The present work is an experimental investigation of the variation of parameters such as brake thermal efficiency, exhaust gas temperature and emissions with compression ratio in a single-cylinder carbureted SI engine at different loads with two different fuels. Experiments were conducted at three compression ratios (CR = 7:1, 8.5:1 and 10:1). The fuels used in this study are pure gasoline and a 20% n-butanol blend (B20) in gasoline. The results showed that brake thermal efficiency increases with CR at all loads. Further, the experimental results showed the scope for improving the part-load efficiency of the SI engine by adopting variable compression ratio (VCR) technology, especially when fuels with better anti-knock characteristics are used. An uncertainty analysis of the experiments, based on the specifications of the equipment used, is also tabulated.

  14. Tropical Atlantic SSTs at the Last Glacial Maximum derived from Sr/Ca ratios of fossil coral

    Cohen, A. L.; Saenger, C. P.


    The sensitivity of the tropics to climate change is a particularly controversial issue in paleoclimatology. At the heart of this controversy are disagreements among different proxy datasets regarding the amplitude of glacial-interglacial changes in temperature, particularly at the sea surface. Data obtained from the aragonitic skeletons of massive reef corals have contributed in no small measure to the debate, yielding LGM and deglacial SSTs 5-6°C cooler than today (Guilderson et al., 1994; McCulloch et al., 1999; Correge et al., 2004) that imply a high sensitivity of Earth's climate to changes in boundary conditions (Crowley, 2000). We used a SIMS ion microprobe to analyze Sr/Ca ratios of small pieces of Montastrea coral retrieved from a Barbados drillcore (Guilderson et al., 2001). U/Th dates place the samples between 22 and 24 kyr BP. Localized areas of dissolution and re-growth of secondary (diagenetic) aragonite crystals were identified at the centers of septa. Sr/Ca ratios of these crystals were higher than those of the original coral crystals preserved in adjacent fasciculi and yielded relatively cooler derived SSTs. The original coral crystals, recognized by their size and orientation, were selectively targeted for analysis using a 20-micron-diameter sample spot. Our calibration study using modern corals from Bermuda, St Croix (USVI) and Barbados indicates that Montastrea Sr/Ca is strongly correlated with SST and with annual extension (growth) rate (Saenger et al., 2006). The growth rate of the fossil corals was determined by measuring daily growth bands identified in petrographic thin-sections. Application of a growth-dependent Sr/Ca-T calibration yielded Barbados SSTs that were, on average, 2.5°C cooler than today during the LGM and ~1°C cooler than today during Heinrich Event 2. Our LGM SSTs are consistent with the original CLIMAP estimates (CLIMAP, 1976) and with more recent Mg/Ca-based SSTs derived from calcitic foraminifera in the Caribbean.

  15. Maximum mass ratio of AM CVn-type binary systems and maximum white dwarf mass in ultra-compact X-ray binaries (addendum - Serb. Astron. J. No. 183 (2011), 63)

    Arbutina B.


    We recalculated the maximum white dwarf mass in ultra-compact X-ray binaries obtained in an earlier paper (Arbutina 2011), by taking into account the effects of super-Eddington accretion rates on the stability of mass transfer. It is found that, although the value formally remains the same (under the assumed approximations), for white dwarf masses M2 ≳ 0.1 M_Ch the mass ratios are extremely low, implying that the result for Mmax is likely to have little if any practical relevance.

  16. Horton Ratios Link Self-Similarity with Maximum Entropy of Eco-Geomorphological Properties in Stream Networks

    Bruce T. Milne


    Stream networks are branched structures wherein water and energy move between land and atmosphere, modulated by evapotranspiration and its interaction with the gravitational dissipation of potential energy as runoff. These actions vary among climates characterized by Budyko theory, yet have not been integrated with Horton scaling, the ubiquitous pattern of eco-hydrological variation among Strahler streams that populate river basins. From Budyko theory, we reveal optimum entropy coincident with high biodiversity. Basins on either side of the optimum respond in opposite ways to precipitation, which we evaluated for the classic Hubbard Brook experiment in New Hampshire and for the Whitewater River basin in Kansas. We demonstrate that Horton ratios are equivalent to Lagrange multipliers used in the extremum function leading to Shannon information entropy being maximal, subject to constraints. Properties of stream networks vary with constraints and inter-annual variation in water balance that challenge vegetation to match expected resource supply throughout the network. The entropy-Horton framework informs questions of biodiversity, resilience to perturbations in water supply, changes in potential evapotranspiration, and land use changes that move ecosystems away from optimal entropy with concomitant loss of productivity and biodiversity.
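
    A toy numerical check of the entropy-Horton connection: if stream counts fall off geometrically with Strahler order at bifurcation ratio R_B (the Horton pattern), the order distribution is exactly the maximum-entropy distribution for a fixed mean order. The constraints in the paper are richer than this, so the snippet is illustrative only.

```python
import numpy as np

# Horton-like stream numbers N_w ~ R_B**(-w) over Strahler orders w give a
# geometric distribution, the max-entropy form under a fixed mean order.
R_B, orders = 4.0, np.arange(1, 7)
p = R_B ** (-orders.astype(float))
p /= p.sum()
print("mean order:", (orders * p).sum())
print("Shannon entropy (nats):", -(p * np.log(p)).sum())
```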

  17. An experimental and numerical analysis of the influence of the inlet temperature, equivalence ratio and compression ratio on the HCCI auto-ignition process of Primary Reference Fuels in an engine

    Machrafi, Hatim [UPMC Universite Paris 06, LGPPTS, Ecole Nationale Superieure de Chimie de Paris, 11, rue de Pierre et Marie Curie, 75005 Paris (France); UPMC Universite Paris 06, Institut Jean Le Rond D'Alembert (France); Cavadias, Simeon [UPMC Universite Paris 06, Institut Jean Le Rond D'Alembert (France)


    To better understand the auto-ignition process in an HCCI engine, the influence of some important parameters on auto-ignition is investigated. The inlet temperature, the equivalence ratio and the compression ratio were varied, and their influence on the pressure, the heat release and the ignition delays was measured. The inlet temperature was changed from 25 to 70 C and the equivalence ratio from 0.18 to 0.41, while the compression ratio varied from 6 to 13.5. The fuels investigated were PRF40 and n-heptane. All three parameters appeared to decrease the ignition delays, with the inlet temperature having the least influence and the compression ratio the most. A previously experimentally validated reduced surrogate mechanism for mixtures of n-heptane, iso-octane and toluene has been used to explain observations of the auto-ignition process. The same kinetic mechanism is used to better understand the underlying chemical and physical phenomena that make the influence of a given parameter change with the operating conditions. This can be useful for the control of the auto-ignition process in an HCCI engine.

  18. Compression Ratio Ion Mobility Programming (CRIMP) Accumulation and Compression of Billions of Ions for Ion Mobility-Mass Spectrometry Using Traveling Waves in Structures for Lossless Ion Manipulations (SLIM).

    Deng, Liulin; Garimella, Sandilya V B; Hamid, Ahmed M; Webb, Ian K; Attah, Isaac K; Norheim, Randolph V; Prost, Spencer A; Zheng, Xueyun; Sandoval, Jeremy A; Baker, Erin S; Ibrahim, Yehia M; Smith, Richard D


    We report on the implementation of a traveling wave (TW) based compression ratio ion mobility programming (CRIMP) approach within structures for lossless ion manipulations (SLIM) that enables both greatly enlarged trapped ion charge capacities and also efficient ion population compression for use in ion mobility (IM) separations. Ion accumulation is conducted in a SLIM serpentine ultralong path with extended routing (SUPER) region after which CRIMP compression allows the large ion populations to be "squeezed". The SLIM SUPER IM module has two regions, one operating with conventional traveling waves (i.e., traveling trap; TT region) and the second having an intermittently pausing or "stuttering" TW (i.e., stuttering trap; ST region). When a stationary voltage profile was used in the ST region, ions are blocked at the TT-ST interface and accumulated in the TT region and then can be released by resuming a conventional TW in the ST region. The population can also be compressed using CRIMP by the repetitive merging of ions distributed over multiple TW bins in the TT region into a single TW bin in the ST region. Ion accumulation followed by CRIMP compression provides the basis for the use of larger ion populations for IM separations. We show that over 10^9 ions can be accumulated with high efficiency in the present device and that the extent of subsequent compression is only limited by the space charge capacity of the trapping region. Approximately 5 × 10^9 charges introduced from an electrospray ionization source were trapped for a 40 s accumulation period, more than 2 orders of magnitude greater than the previously reported charge capacity of an ion funnel trap. Importantly, we show that extended ion accumulation in conjunction with CRIMP compression and multiple passes through the serpentine path provides the basis for a highly desirable combination of ultrahigh sensitivity and SLIM SUPER high-resolution IM separations.

  19. Maximum and minimum amplitudes of the moiré patterns in one- and two-dimensional binary gratings in relation to the opening ratio.

    Saveljev, Vladimir; Kim, Sung-Kyu; Lee, Hyoung; Kim, Hyun-Woo; Lee, Byoungho


    The amplitude of the moiré patterns is estimated in relation to the opening ratio in line gratings and square grids. The theory is developed and experimental measurements are performed. The minimum and the maximum of the amplitude are found, with good agreement between the theoretical and experimental data; this is additionally confirmed by visual observation. The results can be applied to image quality improvement in autostereoscopic 3D displays, to measurements, and to moiré displays.

  20. Seismic Performance Evaluation Framework Considering Maximum and Residual Inter-story Drift Ratios: Application to Non-code Conforming Reinforced Concrete Buildings in Victoria, British Columbia, Canada

    Solomon Tesfamariam


    This paper presents a seismic performance evaluation framework using two engineering demand parameters, maximum and residual inter-story drift ratios, with consideration of mainshock-aftershock (MSAS) earthquake sequences. The evaluation is undertaken within a performance-based earthquake engineering framework in which seismic demand limits are defined with respect to the earthquake return period. A set of 2-, 4-, 8-, and 12-story non-ductile reinforced concrete buildings located in Victoria, British Columbia, Canada, is considered as a case study. Using 50 mainshock and MSAS earthquake records (two horizontal components per record), incremental dynamic analysis is performed, and the joint probability distribution of maximum and residual inter-story drift ratios is modeled using a novel copula technique. The results are assessed for both collapse and non-collapse limit states. The results show that the collapse assessment of the 4- to 12-story buildings is not sensitive to the consideration of MSAS seismic input, whereas for the 2-story building a 13% difference in the median collapse capacity is caused by the MSAS sequences. For the unconditional probability of unsatisfactory seismic performance, which accounts for both collapse and non-collapse limit states, the life safety performance objective is achieved, but the collapse prevention performance objective is not satisfied. The results highlight the need to consider seismic retrofitting for non-ductile reinforced concrete structures.

  1. Determining the maximum cumulative ratios for mixtures observed in ground water wells used as drinking water supplies in the United States.

    Han, Xianglu; Price, Paul S


    The maximum cumulative ratio (MCR) developed in previous work is a tool to evaluate the need to perform cumulative risk assessments. MCR is the ratio of the cumulative exposure to multiple chemicals to the maximum exposure from one of the chemicals, when exposures are described using a common metric. This tool is used here to evaluate mixtures of chemicals measured in samples of untreated ground water used as a source for drinking water systems in the United States. The mixtures of chemicals in this dataset differ from those examined in our previous work both in the predicted toxicity and in the compounds measured. Despite these differences, MCR values in this study follow patterns similar to those seen earlier. MCR values for the mixtures have a mean (range) of 2.2 (1.03-5.4), much smaller than the mean (range) of 16 (5-34) for the mixtures in the previous study. The MCR values of the mixtures decline as Hazard Index (HI) values increase. MCR values for mixtures with larger HI values are not affected by possible contributions from chemicals that may occur at levels below the detection limits. This work provides a second example of the use of the MCR tool in the evaluation of mixtures that occur in the environment.
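
    The MCR itself follows directly from the definition in the abstract; a minimal sketch with hypothetical hazard quotients:

```python
import numpy as np

def maximum_cumulative_ratio(hazard_quotients):
    """MCR = cumulative exposure (Hazard Index, the sum of the hazard
    quotients) divided by the largest single-chemical hazard quotient."""
    hq = np.asarray(hazard_quotients, dtype=float)
    return hq.sum() / hq.max()

# Hypothetical hazard quotients for one well-water sample:
print(maximum_cumulative_ratio([0.30, 0.10, 0.05, 0.02]))  # ~1.57
```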

  4. Squeezing of Ion Populations and Peaks in Traveling Wave Ion Mobility Separations and Structures for Lossless Ion Manipulations Using Compression Ratio Ion Mobility Programming.

    Garimella, Sandilya V B; Hamid, Ahmed M; Deng, Liulin; Ibrahim, Yehia M; Webb, Ian K; Baker, Erin S; Prost, Spencer A; Norheim, Randolph V; Anderson, Gordon A; Smith, Richard D


    In this work we report an approach for spatial and temporal gas-phase ion population manipulation, wherein we collapse ion distributions in ion mobility (IM) separations into tighter packets providing higher sensitivity measurements in conjunction with mass spectrometry (MS). We do this for ions moving from a conventional traveling wave (TW)-driven region to a region where the TW is intermittently halted or "stuttered". This approach causes the ion packets spanning a number of TW-created traveling traps (TT) to be redistributed into fewer TT, resulting in spatial compression. The degree of spatial compression is controllable and determined by the ratio of stationary time of the TW in the second region to its moving time. This compression ratio ion mobility programming (CRIMP) approach has been implemented using "structures for lossless ion manipulations" (SLIM) in conjunction with MS. CRIMP with the SLIM-MS platform is shown to provide increased peak intensities, reduced peak widths, and improved signal-to-noise (S/N) ratios with MS detection. CRIMP also provides a foundation for extremely long path length and multipass IM separations in SLIM providing greatly enhanced IM resolution by reducing the detrimental effects of diffusional peak broadening and increasing peak widths.
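
    The controllable compression described here can be illustrated with a toy calculation. The sketch below assumes a merge factor C = (stationary time + moving time) / moving time, one plausible reading of the stationary-to-moving-time ratio named in the abstract; it illustrates the bookkeeping only, not the authors' implementation or the actual ion optics.

        # Toy model of CRIMP-style trap merging (illustrative assumptions only).
        # If the downstream TW advances for t_move and halts for t_stat per cycle,
        # we assume C = (t_stat + t_move) / t_move upstream traps pile into one trap.
        def crimp_merge(trap_populations, t_stat, t_move):
            c = int((t_stat + t_move) / t_move)  # assumed compression factor
            return [sum(trap_populations[i:i + c])
                    for i in range(0, len(trap_populations), c)]

        traps = [10, 12, 9, 11, 10, 13, 8, 12]         # ions per traveling trap (made up)
        print(crimp_merge(traps, t_stat=3, t_move=1))  # C = 4 -> 8 traps collapse into 2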

  5. Compression of a bundle of light rays.

    Marcuse, D


    The performance of ray compression devices is discussed on the basis of a phase space treatment using Liouville's theorem. It is concluded that the area in phase space of the input bundle of rays is determined solely by the required compression ratio and possible limitations on the maximum ray angle at the output of the device. The efficiency of tapers and lenses as ray compressors is approximately equal. For linear tapers and lenses, the input angle of the useful rays must not exceed the reciprocal of the compression ratio. The performance of linear tapers and lenses is compared to a particular ray compressor using a graded refractive index distribution.
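
    The angle limit quoted above follows from a one-line phase-space argument. The relation below sketches the standard paraxial invariant for a 2D bundle of half-width a and half-angle theta, assuming equal refractive indices at input and output; it is illustrative, not a reconstruction of the paper's derivation:

        % Liouville / etendue invariant (paraxial, equal refractive indices):
        a_{\mathrm{in}}\,\theta_{\mathrm{in}} \;=\; a_{\mathrm{out}}\,\theta_{\mathrm{out}},
        \qquad
        R \equiv \frac{a_{\mathrm{in}}}{a_{\mathrm{out}}}
        \;\;\Longrightarrow\;\;
        \theta_{\mathrm{in}} \;=\; \frac{\theta_{\mathrm{out}}}{R}
        \;\le\; \frac{\theta_{\mathrm{out}}^{\max}}{R}.

    Hence the useful input half-angle is bounded by the maximum tolerable output angle divided by the compression ratio, which is the reciprocal limit stated in the abstract.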

  6. Influence of fuel type, dilution and equivalence ratio on the emission reduction from the auto-ignition in a Homogeneous Charge Compression Ignition engine

    Machrafi, Hatim [UPMC Universite Paris 06, ENSCP, 11 rue de Pierre et Marie Curie, 75005 Paris (France); UPMC Universite Paris 06, Institut Jean Le Rond D'Alembert, 4 place Jussieu, 75252 Paris cedex 05 (France); Universite Libre de Bruxelles, TIPs - Fluid Physics, CP165/67, 50 Avenue F.D. Roosevelt, 1050 Brussels (Belgium); Cavadias, Simeon [UPMC Universite Paris 06, ENSCP, 11 rue de Pierre et Marie Curie, 75005 Paris (France); UPMC Universite Paris 06, Institut Jean Le Rond D'Alembert, 4 place Jussieu, 75252 Paris cedex 05 (France); Amouroux, Jacques [UPMC Universite Paris 06, ENSCP, 11 rue de Pierre et Marie Curie, 75005 Paris (France)


    One technology that seems promising for automobile pollution reduction is Homogeneous Charge Compression Ignition (HCCI). This technology still faces auto-ignition and emission-control problems. This paper focuses on the emission problem, since it is essential to realize engines that pollute less. For this purpose, this paper presents results concerning the measurement of the emissions of CO, NO{sub x}, CO{sub 2}, O{sub 2} and hydrocarbons. HCCI conditions are used, with equivalence ratios between 0.26 and 0.54, inlet temperatures of 70 C and 120 C and compression ratios of 10.2 and 13.5, with different fuel types: gasoline, gasoline surrogate, diesel, diesel surrogate and mixtures of n-heptane/toluene. The effect of dilution is considered for gasoline, while the effect of the equivalence ratio is considered for all the fuels. No significant amount of NO{sub x} was measured. The CO, O{sub 2} and hydrocarbon emissions were reduced by decreasing the toluene content of the fuel and by decreasing the dilution; the opposite holds for CO{sub 2}. The reduction of the hydrocarbon emissions appears to compete with the reduction of the CO{sub 2} emissions. Diesel seemed to produce less CO and fewer hydrocarbons than gasoline when auto-ignited. An example of emission reduction control is presented in this paper. (author)

  7. Comparison between pulsed laser and frequency-domain photoacoustic modalities: Signal-to-noise ratio, contrast, resolution, and maximum depth detectivity

    Lashkari, Bahman; Mandelis, Andreas


    In this work, a detailed theoretical and experimental comparison between various key parameters of the pulsed and frequency-domain (FD) photoacoustic (PA) imaging modalities is developed. The signal-to-noise ratios (SNRs) of these methods are theoretically calculated in terms of transducer bandwidth, PA signal generation physics, and laser pulse or chirp parameters. Large differences between maximum (peak) SNRs were predicted. However, it is shown that in practice the SNR differences are much smaller. Typical experimental SNRs were 23.2 dB and 26.1 dB for FD-PA and time-domain (TD)-PA peak responses, respectively, from a subsurface black absorber. The SNR of the pulsed PA can be significantly improved with proper high-pass filtering of the signal, which minimizes but does not eliminate baseline oscillations. On the other hand, the SNR of the FD method can be enhanced substantially by increasing laser power and decreasing chirp duration (exposure) correspondingly, so as to remain within the maximum permissible exposure guidelines. The SNR crossover chirp duration is calculated as a function of transducer bandwidth and the conditions yielding higher SNR for the FD mode are established. Furthermore, it was demonstrated that the FD axial resolution is affected by both signal amplitude and limited chirp bandwidth. The axial resolution of the pulse is, in principle, superior due to its larger bandwidth; however, the bipolar shape of the signal is a drawback in this regard. Along with the absence of baseline oscillation in cross-correlation FD-PA, the FD phase signal can be combined with the amplitude signal to yield better axial resolution than pulsed PA, and without artifacts. The contrast of both methods is compared both in depth-wise (delay-time) and fixed delay time images. It was shown that the FD method possesses higher contrast, even after contrast enhancement of the pulsed response through filtering.

  8. Supercharged two-cycle engines employing novel single element reciprocating shuttle inlet valve mechanisms and with a variable compression ratio

    Wiesen, Bernard (Inventor)


    This invention relates to novel reciprocating shuttle inlet valves, effective with every type of two-cycle engine, from small high-speed single cylinder model engines, to large low-speed multiple cylinder engines, employing spark or compression ignition. Also permitting the elimination of out-of-phase piston arrangements to control scavenging and supercharging of opposed-piston engines. The reciprocating shuttle inlet valve (32) and its operating mechanism (34) is constructed as a single and simple uncomplicated member, in combination with the lost-motion abutments, (46) and (48), formed in a piston skirt, obviating the need for any complex mechanisms or auxiliary drives, unaffected by heat, friction, wear or inertial forces. The reciprocating shuttle inlet valve retains the simplicity and advantages of two-cycle engines, while permitting an increase in volumetric efficiency and performance, thereby increasing the range of usefulness of two-cycle engines into many areas that are now dominated by the four-cycle engine.

  9. Investigation of a 2D two-point maximum entropy regularization method for signal-to-noise ratio enhancement: application to CT polymer gel dosimetry

    Jirasek, A [Department of Physics and Astronomy, University of Victoria, Victoria BC V8W 3P6 (Canada); Matthews, Q [Department of Physics and Astronomy, University of Victoria, Victoria BC V8W 3P6 (Canada); Hilts, M [Medical Physics, BC Cancer Agency-Vancouver Island Centre, Victoria BC V8R 6V5 (Canada); Schulze, G [Michael Smith Laboratories, University of British Columbia, Vancouver BC V6T 1Z4 (Canada); Blades, M W [Department of Chemistry, University of British Columbia, Vancouver BC V6T 1Z1 (Canada); Turner, R F B [Michael Smith Laboratories, University of British Columbia, Vancouver BC V6T 1Z4 (Canada); Department of Chemistry, University of British Columbia, Vancouver BC V6T 1Z1 (Canada); Department of Electrical and Computer Engineering, University of British Columbia, Vancouver BC V6T 1Z4 (Canada)


    This study presents a new method of image signal-to-noise ratio (SNR) enhancement by utilizing a newly developed 2D two-point maximum entropy regularization method (TPMEM). When utilized as an image filter, it is shown that 2D TPMEM offers unsurpassed flexibility in its ability to balance the complementary requirements of image smoothness and fidelity. The technique is evaluated for use in the enhancement of x-ray computed tomography (CT) images of irradiated polymer gels used in radiation dosimetry. We utilize a range of statistical parameters (e.g. root-mean square error, correlation coefficient, error histograms, Fourier data) to characterize the performance of TPMEM applied to a series of synthetic images of varying initial SNR. These images are designed to mimic a range of dose intensity patterns that would occur in x-ray CT polymer gel radiation dosimetry. Analysis is extended to a CT image of a polymer gel dosimeter irradiated with a stereotactic radiation therapy dose distribution. Results indicate that TPMEM performs strikingly well on radiation dosimetry data, significantly enhancing the SNR of noise-corrupted images (SNR enhancement factors >15 are possible) while minimally distorting the original image detail (as shown by the error histograms and Fourier data). It is also noted that application of this new TPMEM filter is not restricted exclusively to x-ray CT polymer gel dosimetry image data but can in future be extended to a wide range of radiation dosimetry data.

  10. Investigation of a 2D two-point maximum entropy regularization method for signal-to-noise ratio enhancement: application to CT polymer gel dosimetry.

    Jirasek, A; Matthews, Q; Hilts, M; Schulze, G; Blades, M W; Turner, R F B


    This study presents a new method of image signal-to-noise ratio (SNR) enhancement by utilizing a newly developed 2D two-point maximum entropy regularization method (TPMEM). When utilized as an image filter, it is shown that 2D TPMEM offers unsurpassed flexibility in its ability to balance the complementary requirements of image smoothness and fidelity. The technique is evaluated for use in the enhancement of x-ray computed tomography (CT) images of irradiated polymer gels used in radiation dosimetry. We utilize a range of statistical parameters (e.g. root-mean square error, correlation coefficient, error histograms, Fourier data) to characterize the performance of TPMEM applied to a series of synthetic images of varying initial SNR. These images are designed to mimic a range of dose intensity patterns that would occur in x-ray CT polymer gel radiation dosimetry. Analysis is extended to a CT image of a polymer gel dosimeter irradiated with a stereotactic radiation therapy dose distribution. Results indicate that TPMEM performs strikingly well on radiation dosimetry data, significantly enhancing the SNR of noise-corrupted images (SNR enhancement factors >15 are possible) while minimally distorting the original image detail (as shown by the error histograms and Fourier data). It is also noted that application of this new TPMEM filter is not restricted exclusively to x-ray CT polymer gel dosimetry image data but can in future be extended to a wide range of radiation dosimetry data.
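
    TPMEM itself is an iterative entropy-regularized optimization and is not sketched here, but the SNR enhancement factor used to score it is simple to compute. The snippet below evaluates it for a crude moving-average filter standing in for the denoiser, on made-up data; only the scoring formula reflects the paper's evaluation.

        import numpy as np

        def snr(signal: np.ndarray, observed: np.ndarray) -> float:
            # SNR as the ratio of signal power to residual-noise power (linear scale)
            noise = observed - signal
            return float(np.mean(signal ** 2) / np.mean(noise ** 2))

        rng = np.random.default_rng(2)
        truth = np.sin(np.linspace(0, 4 * np.pi, 2000))               # stand-in dose profile
        noisy = truth + rng.normal(0, 0.3, truth.shape)
        smoothed = np.convolve(noisy, np.ones(25) / 25, mode="same")  # crude denoiser
        print(f"SNR enhancement factor = {snr(truth, smoothed) / snr(truth, noisy):.1f}")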

  11. Dose calculation for asymmetric fields and irregular fields with multileaf collimators. Approximation of tissue-maximum ratio and field factor using modified Day's calculation method

    Nakata, Manabu; Okada, Takashi; Komai, Yoshinori; Nohara, Hiroki [Kyoto Univ. (Japan). Hospital]


    Modern linear accelerators have four independent jaws and multileaf collimators (MLC) of 1 cm width at the isocenter. Asymmetric fields defined by such independent jaws and irregular multileaf-collimated fields can be used to match adjacent fields or to spare the spinal cord in external photon beam radiotherapy. We have developed a new approximate algorithm for depth dose calculations on the collimator rotation axis. The program is based on Clarkson's principle and uses a more accurate modification of Day's method for asymmetric fields. Using this method, tissue-maximum ratios (TMR) and field factors of ten kinds of asymmetric fields and ten different irregular multileaf-collimated fields were calculated and compared with measured data for 6 MV and 15 MV photon beams. The dose accuracy with the general A/Pe method was about 3%; with the new modified Day's method, however, accuracy was within 1.7% for TMR and 1.2% for field factors. The calculated TMR and field factors were found to be in good agreement with measurements for both the 6 MV and 15 MV photon beams. (author)

  12. Use of the Maximum Cumulative Ratio As an Approach for Prioritizing Aquatic Coexposure to Plant Protection Products: A Case Study of a Large Surface Water Monitoring Database.

    Vallotton, Nathalie; Price, Paul S


    This paper uses the maximum cumulative ratio (MCR) as part of a tiered approach to evaluate and prioritize the risk of acute ecological effects from combined exposures to the plant protection products (PPPs) measured in 3,099 surface water samples taken from across the United States. Assessments of the reported mixtures performed on a substance-by-substance basis and using a Tier One cumulative assessment based on the lowest acute ecotoxicity benchmark gave the same findings for 92.3% of the mixtures. These mixtures either did not indicate a potential risk for acute effects or included one or more individual PPPs that had concentrations in excess of their benchmarks. A Tier Two assessment using a trophic-level approach was applied to evaluate the remaining 7.7% of the mixtures. This assessment reduced the number of mixtures of concern by eliminating combinations of endpoints from multiple trophic levels, identified invertebrates and nonvascular plants as the most susceptible nontarget organisms, and indicated that only a very limited number of PPPs drove the potential concerns. The combination of the measures of cumulative risk and the MCR enabled the identification of a small subset of mixtures where a potential risk would be missed in substance-by-substance assessments.

  13. Performance Analysis of Multi Spectral Band Image Compression using Discrete Wavelet Transform

    S. S. Ramakrishnan


    Problem statement: Efficient and effective utilization of transmission bandwidth and storage capacity has been a core area of research for remote sensing images; hence, image compression is required for multi-band satellite imagery. In addition, image quality is an important factor after compression and reconstruction. Approach: In this investigation, the discrete wavelet transform is used to compress the Landsat5 agriculture and forestry image using various wavelets, and the spectral signature graph is drawn. Results: The compressed image performance is analyzed using Compression Ratio (CR) and Peak Signal to Noise Ratio (PSNR). The compressed image using the dmey wavelet is selected based on its Digital Number Minimum (DNmin) and Digital Number Maximum (DNmax). It is then classified using maximum likelihood classification, and the accuracy is determined using an error matrix, kappa statistics and overall accuracy. Conclusion: The proposed compression technique is well suited to compressing the agriculture and forestry multi-band image.
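
    For reference, the two figures of merit used above have one-line definitions. The sketch below computes CR as the size ratio and PSNR from the mean square error, applied to made-up arrays rather than the Landsat5 imagery of the study:

        import numpy as np

        def compression_ratio(original_bytes: int, compressed_bytes: int) -> float:
            # CR = uncompressed size / compressed size
            return original_bytes / compressed_bytes

        def psnr(original: np.ndarray, reconstructed: np.ndarray, max_val: float = 255.0) -> float:
            # PSNR = 10 log10(MAX^2 / MSE), in dB
            mse = np.mean((original.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
            return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

        rng = np.random.default_rng(0)
        img = rng.integers(0, 256, (64, 64)).astype(np.uint8)                # stand-in band
        noisy = np.clip(img + rng.normal(0, 2, img.shape), 0, 255).astype(np.uint8)
        print(compression_ratio(img.nbytes, img.nbytes // 8), psnr(img, noisy))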

  14. Flux Limiter Lattice Boltzmann Scheme Approach to Compressible Flows with Flexible Specific-Heat Ratio and Prandtl Number

    甘延标; 许爱国; 张广财; 李英骏


    We further develop the lattice Boltzmann (LB) model [Physica A 382 (2007) 502] for compressible flows in two respects. First, we modify the Bhatnagar-Gross-Krook (BGK) collision term in the LB equation, which makes the model suitable for simulating flows with different Prandtl numbers. Second, the flux limiter finite difference (FLFD) scheme is employed to calculate the convection term of the LB equation, which effectively suppresses unphysical oscillations at discontinuities and significantly diminishes numerical dissipation. The proposed model is validated by recovering results of some well-known benchmarks, including (i) the thermal Couette flow and (ii) one- and two-dimensional Riemann problems. Good agreement is obtained between the LB results and the exact ones or previously reported solutions. The flexibility, together with the high accuracy, of the new model gives it considerable potential for tackling some long-standing problems and for investigating nonlinear nonequilibrium complex systems.

  15. A decoupled scheme based on the Hermite expansion to construct lattice Boltzmann models for the compressible Navier-Stokes equations with arbitrary specific heat ratio

    Hu, Kainan; Geng, Shaojuan


    A decoupled scheme based on the Hermite expansion to construct lattice Boltzmann models for the compressible Navier-Stokes equations with arbitrary specific heat ratio is proposed. The local equilibrium distribution function including the rotational velocity of particle is decoupled into two parts, i.e. the local equilibrium distribution function of the translational velocity of particle and that of the rotational velocity of particle. From these two local equilibrium functions, two lattice Boltzmann models are derived via the Hermite expansion, namely one is in relation to the translational velocity and the other is connected with the rotational velocity. Accordingly, the distribution function is also decoupled. After this, the evolution equation is decoupled into the evolution equation of the translational velocity and that of the rotational velocity. The two evolution equations evolve separately. The lattice Boltzmann models used in the scheme proposed by this work are constructed via the Hermite expansion...

  16. Effects of augmented trunk stabilization with external compression support on shoulder and scapular muscle activity and maximum strength during isometric shoulder abduction.

    Jang, Hyun-jeong; Kim, Suhn-yeop; Oh, Duck-won


    The aim of the present study was to investigate the effects of augmented trunk stabilization with external compression support (ECS) on the electromyography (EMG) activity of shoulder and scapular muscles and on shoulder abductor strength during isometric shoulder abduction. Twenty-six women volunteered for the study. Surface EMG was used to monitor the activity of the upper trapezius (UT), lower trapezius (LT), serratus anterior (SA), and middle deltoid (MD), and shoulder abductor strength was measured using a dynamometer during three experimental conditions: (1) no external support (condition-1), (2) pelvic support (condition-2), and (3) pelvic and thoracic supports (condition-3) in an active therapeutic movement device. EMG activities were significantly lower for UT and higher for MD during condition 3 than during condition 1 (p < 0.05). Shoulder abductor strength was significantly higher during condition 3 than during condition 1 (p < 0.05), suggesting that ECS may reduce the muscle effort of the UT during isometric shoulder abduction while increasing shoulder abductor strength. Copyright © 2014 Elsevier Ltd. All rights reserved.

  17. An Experimental Parametric Study of Geometric, Reynolds Number, and Ratio of Specific Heats Effects in Three-Dimensional Sidewall Compression Scramjet Inlets at Mach 6

    Holland, Scott D.; Murphy, Kelly J.


    Since mission profiles for airbreathing hypersonic vehicles such as the National Aero-Space Plane include single-stage-to-orbit requirements, real gas effects may become important with respect to engine performance. The effects of the decrease in the ratio of specific heats have been investigated in generic three-dimensional sidewall compression scramjet inlets with leading-edge sweep angles of 30 and 70 degrees. The effects of a decrease in the ratio of specific heats were seen by comparing data from two facilities in two test gases: the Langley Mach 6 CF4 Tunnel in tetrafluoromethane (where gamma=1.22) and the Langley 15-Inch Mach 6 Air Tunnel in perfect gas air (where gamma=1.4). In addition to the simulated real gas effects, the parametric effects of cowl position, contraction ratio, leading-edge sweep, and Reynolds number were investigated in the 15-Inch Mach 6 Air Tunnel. The models were instrumented with a total of 45 static pressure orifices distributed on the sidewalls and baseplate. Surface streamline patterns were examined via oil flow, and schlieren videos were made of the external flow field. The results of these tests have significant implications for ground-based testing of inlets in facilities that do not operate at flight enthalpies.

  18. Experimental comparison of Pressure ratio in Alpha and Gamma Stirling cryocoolers with identical compression space volumes and driven simultaneously by a solitary novel compact mechanism

    Sant, K. D.; Bapat, S. L.


    Cryocooler technology is advancing in different ways at a considerable pace to explore cooler applications in diversified fields. Stirling cryocoolers are capable of satisfying the contemporary requirements of a low-capacity cooler. A compact mechanism that can drive a Stirling cryocooler with a larger stroke, and thus enhance the cooler performance, is the need of the hour. An increase in stroke leads to a higher volumetric efficiency; hence, a cryocooler with a larger stroke will experience a higher mass flow rate of the working fluid, thereby increasing its ideal cooling capacity. The novel compact drive mechanism that fulfils this need is a promising option in this regard. It is capable of operating more than one cryocooler of different Stirling configurations simultaneously. This arrangement makes it possible to compare different Stirling cryocoolers on the basis of experimentally obtained pressure ratios. The preliminary experimental results obtained in this regard are presented here. The initial experimentation was carried out on two Alpha Stirling units driven simultaneously by the novel compact mechanism. The pressure ratio obtained during the initial stages was 1.3538, which was enhanced to 1.417 by connecting the rear volumes of the compressor pistons to each other. The fact that annular leakage across the expander pistons due to the high pressure ratio affects cryocooler performance generates the need to separate the expansion space from the bounce space. This introduces a Gamma configuration that is operated simultaneously with one of the existing Alpha units by the same drive mechanism, with an identical compression space volume. The results obtained for the pressure ratio in both these units support the conclusion that the cooling capacity of the Alpha configuration exceeds that of the Gamma under similar operating conditions, as observed at 14 bar and 20 bar charge pressures during the preliminary experimentation. These results are presented in this paper.

  19. Information preserving image compression for archiving NMR images.

    Li, C C; Gokmen, M; Hirschman, A D; Wang, Y


    This paper presents a result on information preserving compression of NMR images for archiving purposes. Both Lynch-Davisson coding and linear predictive coding have been studied. For NMR images of 256 x 256 x 12 resolution, Lynch-Davisson coding with a block size of 64, applied to prediction error sequences in the Gray code bit planes of each image, gave an average compression ratio of 2.3:1 for 14 test images. Predictive coding with a third-order linear predictor and Huffman encoding of the prediction error gave an average compression ratio of 3.1:1 for 54 images under test, while the maximum compression ratio achieved was 3.8:1. This result is a further step, albeit a small one, toward improving information preserving image compression for medical applications.
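
    The compression ratios quoted above can be related to the entropy of the prediction residuals, which lower-bounds the bits per sample any entropy coder (such as Huffman coding) can reach. The sketch below uses a made-up smooth signal and illustrative fixed predictor coefficients; the paper's actual predictor and bit-plane processing are not reproduced:

        import numpy as np

        def third_order_residuals(x, a=(1.5, -0.75, 0.25)):
            # e[n] = x[n] - round(a1*x[n-1] + a2*x[n-2] + a3*x[n-3]); coefficients illustrative
            x = x.astype(np.int64)
            pred = np.round(a[0] * x[2:-1] + a[1] * x[1:-2] + a[2] * x[:-3]).astype(np.int64)
            return x[3:] - pred

        def entropy_bits(e):
            # Shannon entropy of the residual histogram = lower bound on bits/sample
            _, counts = np.unique(e, return_counts=True)
            p = counts / counts.sum()
            return -np.sum(p * np.log2(p))

        rng = np.random.default_rng(1)
        signal = np.cumsum(rng.integers(-3, 4, 4096)) + 2048   # smooth stand-in pixel sequence
        h = entropy_bits(third_order_residuals(signal))
        print(f"residual entropy ~ {h:.2f} bits/sample, est. CR ~ {12 / h:.2f}:1 for 12-bit data")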

  20. Mid-Latitude Pc1, 2 Pulsations Induced by Magnetospheric Compression in the Maximum and Early Recovery Phase of Geomagnetic Storms

    N. A. Zolotukhina; I.P. Kharchenko


    We investigate the properties of interplanetary inhomogeneities generating long-lasting mid-latitude Pc1, 2 geomagnetic pulsations. Data from the Wind and IMP 8 spacecraft and from the Mondy and Borok mid-latitude magnetic observatories are used in this study. The pulsations under investigation develop in the maximum and early recovery phase of magnetic storms. The pulsations have amplitudes from a few tens to several hundred pT and last more than seven hours. A close association of increases (decreases) in solar wind dynamic pressure (Psw) with the onset or enhancement (attenuation or decay) of these pulsations has been established. Contrary to high-latitude phenomena, there is a distinctive feature of the interplanetary inhomogeneities that are responsible for the generation of long-lasting mid-latitude Pc1, 2: it is essential that the effect of the quasi-stationary negative Bz-component of the interplanetary magnetic field on the magnetosphere extends over 4 hours. Only then are the Psw pulses able to excite the above-mentioned type of mid-latitude geomagnetic pulsations. Model calculations show that in the cases under study the plasmapause can form in the vicinity of the magnetic observatory. This implies that the existence of an intense ring current resulting from enhanced magnetospheric convection is necessary for Pc1, 2 excitation. Further, the existence of the plasmapause above the observation point (as a waveguide) is necessary for long-lasting Pc1 waves to arrive at the ground.

  1. Decoupled scheme based on the Hermite expansion to construct lattice Boltzmann models for the compressible Navier-Stokes equations with arbitrary specific heat ratio.

    Hu, Kainan; Zhang, Hongwu; Geng, Shaojuan


    A decoupled scheme based on the Hermite expansion to construct lattice Boltzmann models for the compressible Navier-Stokes equations with arbitrary specific heat ratio is proposed. The local equilibrium distribution function including the rotational velocity of particle is decoupled into two parts, i.e., the local equilibrium distribution function of the translational velocity of particle and that of the rotational velocity of particle. From these two local equilibrium functions, two lattice Boltzmann models are derived via the Hermite expansion, namely one is in relation to the translational velocity and the other is connected with the rotational velocity. Accordingly, the distribution function is also decoupled. After this, the evolution equation is decoupled into the evolution equation of the translational velocity and that of the rotational velocity. The two evolution equations evolve separately. The lattice Boltzmann models used in the scheme proposed by this work are constructed via the Hermite expansion, so it is easy to construct new schemes of higher-order accuracy. To validate the proposed scheme, a one-dimensional shock tube simulation is performed. The numerical results agree with the analytical solutions very well.

  2. Green technology effect of injection pressure, timing and compression ratio in constant pressure heat addition cycle by an eco-friendly material.

    Karthikayan, S; Sankaranarayanan, G; Karthikeyan, R


    Present energy strategies focus on environmental issues, especially environmental pollution prevention and control by eco-friendly green technologies. This includes increasing energy supplies, encouraging cleaner and more efficient energy management, and addressing air pollution, the greenhouse effect, global warming, and climate change. Biofuels offer the prospect of new fiscal opportunities for people in rural areas to meet their needs and the demand of the local market; they concern protection of the environment and job creation. Renewable energy sources are self-reliant resources and have potential in energy management with lower emissions of air pollutants. Biofuels are expected to reduce dependence on imported crude oil, with its associated economic vulnerability, to reduce greenhouse gases and other pollutants, and to invigorate the economy by increasing demand and prices for agricultural products. This work studies the use of neat paradise tree oil with induction of the eco-friendly fuel hydrogen through the inlet manifold in a constant pressure heat addition cycle engine (diesel engine), with optimized engine operating parameters such as injection timing, injection pressure and compression ratio. The results show a heat utilization efficiency of 29% for neat vegetable oil and 33% for neat oil with 15% hydrogen. The exhaust gas temperature (EGT) for a 15% H2 share was 450°C at full load, with a heat release of 80 J/deg. crank angle for a 15% hydrogen energy share. Copyright © 2015 Elsevier Inc. All rights reserved.

  3. Decoupled scheme based on the Hermite expansion to construct lattice Boltzmann models for the compressible Navier-Stokes equations with arbitrary specific heat ratio

    Hu, Kainan; Zhang, Hongwu; Geng, Shaojuan


    A decoupled scheme based on the Hermite expansion to construct lattice Boltzmann models for the compressible Navier-Stokes equations with arbitrary specific heat ratio is proposed. The local equilibrium distribution function including the rotational velocity of particle is decoupled into two parts, i.e., the local equilibrium distribution function of the translational velocity of particle and that of the rotational velocity of particle. From these two local equilibrium functions, two lattice Boltzmann models are derived via the Hermite expansion, namely one is in relation to the translational velocity and the other is connected with the rotational velocity. Accordingly, the distribution function is also decoupled. After this, the evolution equation is decoupled into the evolution equation of the translational velocity and that of the rotational velocity. The two evolution equations evolve separately. The lattice Boltzmann models used in the scheme proposed by this work are constructed via the Hermite expansion, so it is easy to construct new schemes of higher-order accuracy. To validate the proposed scheme, a one-dimensional shock tube simulation is performed. The numerical results agree with the analytical solutions very well.

  4. Effects of the aspect ratio on the optimal tilting angle for maximum convection heat transfer across air-filled rectangular enclosures differentially heated at sides

    Cianfrini, C.; Corcione, M.; Habib, E.; Quintino, A.


    Natural convection in air-filled rectangular cavities inclined with respect to gravity, so that the heated wall faces upwards, is studied numerically under the assumption of two-dimensional laminar flow. A computational code based on the SIMPLE-C algorithm is used for the solution of the system of mass, momentum and energy transfer governing equations. Simulations are performed for height-to-width aspect ratios of the enclosure from 0.25 to 8, Rayleigh numbers based on the length of the heated and cooled walls from 10^2 to 10^7, and tilting angles of the enclosure from 0° to 75°. The existence of an optimal tilting angle is confirmed for any investigated configuration, at a value that increases as the Rayleigh number is decreased and the height-to-width aspect ratio of the cavity is increased, unless the Rayleigh number is at, or just above, the value corresponding to the onset of convection. Dimensionless correlating equations are developed to predict the optimal tilting angle and the heat transfer performance of the enclosure.

  5. A Lossless Image Compression Algorithm Based on Detection of Local Edge and Variance Ratio

    赵军; 王国胤; 吴中福; 吴渝; 李华


    In this paper, a new predictive coding algorithm is presented for lossless image compression. The algorithm considers both the local edge and the variance ratio of pixel values in the prediction process. It further reduces the entropy of the prediction-error image with an error feedback technique. Simulation results show that the performance of this algorithm is better not only than that of the standard algorithm (LOCO-I) adopted by JPEG-LS, but also than that of CALIC, which is the state of the art in the image compression literature.

  6. Image segmentation with PCNN model and maximum of variance ratio

    辛国江; 邹北骥; 李建锋; 陈再良; 蔡美玲


    The Pulse Coupled Neural Network (PCNN) model is well suited to image segmentation. With given parameters, the quality of the segmentation result depends on the number of iterations, yet the PCNN model itself cannot automatically determine the optimal iteration count. An algorithm combining the maximum variance ratio criterion with the PCNN model is therefore proposed for automatic image segmentation: the maximum variance ratio criterion is used to find the optimal segmentation threshold and determine the number of PCNN iterations, yielding the optimal segmentation result, which is then verified with the maximum Shannon entropy criterion. Experiments show that the proposed algorithm automatically determines the number of PCNN iterations, speeds up the iteration, runs more efficiently than automatic segmentation algorithms based on 2D-OTSU or cross-entropy, and produces good segmentation results.

  7. A Simulation Study on Influence of Valve Timing and Compression Ratio on CNG Engine Performance

    斯海林; 姜在先; 王志洪; 何义团


    The authors analyze the influence of valve timing and compression ratio on CNG engine performance by establishing a CNG engine simulation model based on BOOST. It is found that the CNG engine can achieve the same power as the original engine and can effectively prevent backfire when the compression ratio is 12:1 and the valve overlap is 0°.

  8. The effect of axial compression ratio on the seismic performance of Y-type eccentrically braced RC frames

    王军良; 赵宝成


    Considering the different column axial compression ratios found in multi- and high-rise RC frame projects strengthened with Y-type bracing, a nonlinear finite element analysis of the hysteretic behavior of Y-type eccentrically braced RC frames under different axial compression ratios was carried out. The analysis indicates that the bearing capacity of the structure first rises and then falls as the axial compression ratio increases; a larger axial compression ratio raises the initial stiffness of the structure but also accelerates the degradation of its later-stage stiffness. Finally, design suggestions are given regarding the axial compression ratio for concrete frames strengthened with Y-type bracing.

  9. Maximum Fidelity

    Kinkhabwala, Ali


    The most fundamental problem in statistics is the inference of an unknown probability distribution from a finite number of samples. For a specific observed data set, answers to the following questions would be desirable: (1) Estimation: Which candidate distribution provides the best fit to the observed data? (2) Goodness-of-fit: How concordant is this distribution with the observed data? (3) Uncertainty: How concordant are other candidate distributions with the observed data? A simple unified approach for univariate data that addresses these traditionally distinct statistical notions is presented, called "maximum fidelity". Maximum fidelity is a strict frequentist approach that is fundamentally based on model concordance with the observed data. The fidelity statistic is a general information measure based on the coordinate-independent cumulative distribution and critical yet previously neglected symmetry considerations. An approximation for the null distribution of the fidelity allows its direct conversi...

  10. Lossless Medical Image Compression

    Nagashree G


    Image compression has become an important process in today's world of information exchange. Image compression helps in effective utilization of high-speed network resources. Medical image compression is very important in the present world for efficient archiving and transmission of images. In this paper two different approaches for lossless image compression are proposed. One uses the combination of 2D-DWT and the FELICS algorithm for lossy-to-lossless image compression, and the other uses a combination of a prediction algorithm and the integer wavelet transform (IWT). To show the effectiveness of the methodology used, different image quality parameters are measured, and a comparison of both approaches is shown. We observed an increased compression ratio and higher PSNR values.

  11. Randomness Testing of Compressed Data

    Chang, Weiling; Yun, Xiaochun; Wang, Shupeng; Yu, Xiangzhan


    Random number generators play a critical role in a number of important applications. In practice, statistical testing is employed to gather evidence that a generator indeed produces numbers that appear to be random. In this paper, we report on studies conducted on data compressed using 8 compression algorithms or compressors. The test results suggest that the output of compression algorithms or compressors has poor randomness, so compression algorithms or compressors are not suitable as random number generators. We also found that, for the same compression algorithm, there is a positive correlation between compression ratio and randomness: increasing the compression ratio increases the randomness of the compressed data. As time permits, additional randomness testing efforts will be conducted.
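
    For a flavor of such testing, the sketch below runs a simple monobit frequency check (one NIST SP 800-22-style test, not necessarily the battery used in this study) on zlib-compressed text; the payload is invented:

        import math
        import zlib

        def monobit_pvalue(data: bytes) -> float:
            # NIST SP 800-22 frequency (monobit) test: p = erfc(|S| / sqrt(2n))
            bits = "".join(f"{b:08b}" for b in data)
            s = sum(1 if c == "1" else -1 for c in bits)
            return math.erfc(abs(s) / math.sqrt(2 * len(bits)))

        payload = b"the quick brown fox jumps over the lazy dog " * 200
        compressed = zlib.compress(payload, level=9)
        print(f"CR = {len(payload) / len(compressed):.1f}, "
              f"monobit p-value = {monobit_pvalue(compressed):.3f}")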

  12. International Prostatic Symptom Score-voiding/storage subscore ratio in association with total prostatic volume and maximum flow rate is diagnostic of bladder outlet-related lower urinary tract dysfunction in men with lower urinary tract symptoms.

    Yuan-Hong Jiang

    OBJECTIVES: The aim of this study was to investigate the predictive values of the total International Prostate Symptom Score (IPSS-T) and the voiding-to-storage subscore ratio (IPSS-V/S), in association with total prostate volume (TPV) and maximum urinary flow rate (Qmax), in the diagnosis of bladder outlet-related lower urinary tract dysfunction (LUTD) in men with lower urinary tract symptoms (LUTS). METHODS: A total of 298 men with LUTS were enrolled. Video-urodynamic studies were used to determine the causes of LUTS. Differences in IPSS-T, IPSS-V/S ratio, TPV and Qmax between patients with bladder outlet-related LUTD and bladder-related LUTD were analyzed. The positive and negative predictive values (PPV and NPV) for bladder outlet-related LUTD were calculated using these parameters. RESULTS: Of the 298 men, bladder outlet-related LUTD was diagnosed in 167 (56%). We found that the IPSS-V/S ratio was significantly higher among patients with bladder outlet-related LUTD than among patients with bladder-related LUTD (2.28±2.25 vs. 0.90±0.88, p < 0.001). When IPSS-V/S > 1 or > 2 was factored into the equation instead of IPSS-T, PPVs were 91.4% and 97.3%, respectively, and NPVs were 54.8% and 49.8%, respectively. CONCLUSIONS: Combining IPSS-T with TPV and Qmax increases the PPV for bladder outlet-related LUTD. Furthermore, including IPSS-V/S > 1 or > 2 in the equation results in a higher PPV than IPSS-T. IPSS-V/S > 1 is a stronger predictor of bladder outlet-related LUTD than IPSS-T.
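
    The predictive values reported here follow the standard 2x2 definitions. The snippet below computes PPV and NPV from hypothetical confusion-matrix counts; the counts are invented, and only the formulas reflect the paper's analysis:

        def predictive_values(tp: int, fp: int, tn: int, fn: int):
            # PPV = TP/(TP+FP): probability of bladder outlet-related LUTD given a positive test
            # NPV = TN/(TN+FN): probability of no such LUTD given a negative test
            return tp / (tp + fp), tn / (tn + fn)

        ppv, npv = predictive_values(tp=150, fp=14, tn=65, fn=17)  # made-up counts
        print(f"PPV = {ppv:.1%}, NPV = {npv:.1%}")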

  13. Image compression in local helioseismology

    Löptien, Björn; Gizon, Laurent; Schou, Jesper


    Context. Several upcoming helioseismology space missions are very limited in telemetry and will have to perform extensive data compression. This requires the development of new methods of data compression. Aims. We give an overview of the influence of lossy data compression on local helioseismology. We investigate the effects of several lossy compression methods (quantization, JPEG compression, and smoothing and subsampling) on power spectra and time-distance measurements of supergranulation flows at disk center. Methods. We applied different compression methods to tracked and remapped Dopplergrams obtained by the Helioseismic and Magnetic Imager onboard the Solar Dynamics Observatory. We determined the signal-to-noise ratio of the travel times computed from the compressed data as a function of the compression efficiency. Results. The basic helioseismic measurements that we consider are very robust to lossy data compression. Even if only the sign of the velocity is used, time-distance helioseismology is still...

  14. Artificial Neural Network Model for Predicting Compressive Strength

    Salim T. Yousif


    Compressive strength of concrete is a commonly used criterion in evaluating concrete. Although testing of the compressive strength of concrete specimens is done routinely, it is performed on the 28th day after concrete placement. Therefore, strength estimation of concrete at an early age is highly desirable. This study presents an effort to apply neural network-based system identification techniques to predict the compressive strength of concrete based on concrete mix proportions, maximum aggregate size (MAS), and slump of fresh concrete. A back-propagation neural network model is successively developed, trained, and tested using actual data sets of concrete mix proportions gathered from the literature. Testing the model with unused data within the range of the input parameters shows that the maximum absolute error of the model is about 20%, and 88% of the output results have absolute errors of less than 10%. The parametric study shows that the water/cement ratio (w/c) is the most significant factor affecting the output of the model. The results show that neural networks have strong potential as a feasible tool for predicting the compressive strength of concrete.

  15. Virtually Lossless Compression of Astrophysical Images

    Alparone Luciano


    We describe an image compression strategy potentially capable of preserving the scientific quality of astrophysical data while simultaneously allowing a consistent bandwidth reduction to be achieved. Unlike strictly lossless techniques, by which moderate compression ratios are attainable, and conventional lossy techniques, in which the mean square error of the decoded data is globally controlled by users, near-lossless methods are capable of locally constraining the maximum absolute error, based on users' requirements. An advanced lossless/near-lossless differential pulse code modulation (DPCM) scheme, recently introduced by the authors and relying on a causal spatial prediction, is adjusted to the specific characteristics of astrophysical image data (high radiometric resolution, generally low noise, etc.). The background noise is preliminarily estimated to drive the quantization stage for high quality, which is the primary concern in most astrophysical applications. Extensive experimental results of lossless, near-lossless, and lossy compression of astrophysical images acquired by the Hubble space telescope show the advantages of the proposed method compared to standard techniques like JPEG-LS and JPEG2000. Eventually, the rationale of virtually lossless compression, that is, a noise-adjusted lossless/near-lossless compression, is highlighted and found to be in accordance with concepts well established for the astronomers' community.

  16. Maximum power point tracking control with active disturbance rejection controller based on the best tip speed ratio

    李娟; 张克兆; 李生权; 刘超


    Considering the permanent magnet synchronous wind generator system with uncertainties, multiple disturbances and low efficiency, a maximum power point tracking strategy with active disturbance rejection control based on the best tip speed ratio is proposed to track the rotor speed in real time and capture the maximum power. The active disturbance rejection controller does not depend on the mathematical model of the system. The uncertainties, including the nonlinearity, strong coupling, parameter variations and external disturbances that hinder real-time speed tracking, are lumped into the total disturbance of the system. An extended state observer estimates the total disturbance, which is then compensated through the feedback controller, improving the speed tracking ability. Simulation results show that, compared with the traditional PI control method, the proposed control strategy not only guarantees maximum power output but also provides strong robustness against uncertain dynamics and external disturbances.
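
    The observe-and-compensate structure described above is commonly realized as a linear active disturbance rejection loop. The sketch below is a generic discrete-time first-order LADRC simulation with assumed plant gain, bandwidths, and disturbance; it illustrates how an extended state observer estimates the total disturbance and cancels it, and makes no claim about the authors' turbine model or parameters:

        import numpy as np

        # Plant (assumed): dy/dt = f(t, y) + b0*u, with f the unknown "total disturbance".
        b0, dt = 2.0, 1e-3
        wo = 60.0                     # observer bandwidth (assumed); l1, l2 by pole placement
        l1, l2 = 2 * wo, wo ** 2
        kp = 25.0                     # proportional gain of the outer speed loop (assumed)

        y, z1, z2 = 0.0, 0.0, 0.0     # plant output (rotor speed) and ESO states
        ref = 1.0                     # desired speed from the best tip-speed-ratio law
        for k in range(5000):
            f = 0.5 * np.sin(2 * np.pi * 0.5 * k * dt) - 0.3 * y  # unknown disturbance (sim only)
            u = (kp * (ref - z1) - z2) / b0    # cancel the estimated disturbance z2
            y += dt * (f + b0 * u)             # integrate the plant
            e = y - z1                         # observer correction term
            z1 += dt * (z2 + b0 * u + l1 * e)
            z2 += dt * (l2 * e)
        print(f"final speed = {y:.4f}, disturbance estimate = {z2:.4f}")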

  17. Spectral Animation Compression

    Chao Wang; Yang Liu; Xiaohu Guo; Zichun Zhong; Binh Le; Zhigang Deng


    This paper presents a spectral approach to compress dynamic animation consisting of a sequence of homeomorphic manifold meshes. Our new approach directly compresses the field of deformation gradients defined on the surface mesh by decomposing it into rigid-body motion (rotation) and non-rigid-body deformation (stretching) through polar decomposition. It is known that the rotation group has the algebraic topology of a 3D ring, which differs from other operations like stretching. Thus we compress these two groups separately, using the Manifold Harmonics Transform to drop their high-frequency details. Our experimental results show that the proposed method achieves a good balance between reconstruction quality and compression ratio. We compare our results quantitatively with other existing approaches to animation compression, using standard measurement criteria.

  18. Increased NR2A:NR2B ratio compresses long-term depression range and constrains long-term memory.

    Cui, Zhenzhong; Feng, Ruiben; Jacobs, Stephanie; Duan, Yanhong; Wang, Huimin; Cao, Xiaohua; Tsien, Joe Z


    The NR2A:NR2B subunit ratio of the NMDA receptors is widely known to increase in the brain from postnatal development to sexual maturity and to aging, yet its impact on memory function remains speculative. We have generated forebrain-specific NR2A overexpression transgenic mice and show that these mice had normal basic behaviors and short-term memory, but exhibited broad long-term memory deficits as revealed by several behavioral paradigms. Surprisingly, increased NR2A expression did not affect 1-Hz-induced long-term depression (LTD) or 100 Hz-induced long-term potentiation (LTP) in the CA1 region of the hippocampus, but selectively abolished LTD responses in the 3-5 Hz frequency range. Our results demonstrate that the increased NR2A:NR2B ratio is a critical genetic factor in constraining long-term memory in the adult brain. We postulate that LTD-like process underlies post-learning information sculpting, a novel and essential consolidation step in transforming new information into long-term memory.

  19. Speech Compression Using Multecirculerletet Transform

    Sulaiman Murtadha


    Compressing speech reduces the data storage requirements, leading to reduced time for transmitting digitized speech over long-haul links like the internet. To obtain the best performance in speech compression, wavelet transforms require filters that combine a number of desirable properties, such as orthogonality and symmetry. The MCT basis functions are derived from the GHM basis functions using 2D linear convolution. The fast computation algorithms introduced here add desirable features to the transform. We further assess the performance of the MCT in a speech compression application. This paper discusses the effect of using the DWT and MCT (in one and two dimensions) on speech compression. DWT and MCT performances in terms of compression ratio (CR), mean square error (MSE) and peak signal-to-noise ratio (PSNR) are assessed. Computer simulation results indicate that the two-dimensional MCT offers a better compression ratio, MSE and PSNR than the DWT.

  20. libpolycomp: Compression/decompression library

    Tomasi, Maurizio


    Libpolycomp compresses and decompresses one-dimensional streams of numbers by means of several algorithms. It is well-suited for time-ordered data acquired by astronomical instruments or simulations. One of the algorithms, called "polynomial compression", combines two widely-used ideas (namely, polynomial approximation and filtering of Fourier series) to achieve substantial compression ratios for datasets characterized by smoothness and lack of noise. Notable examples are the ephemerides of astronomical objects and the pointing information of astronomical telescopes. Other algorithms implemented in this C library are well known and already widely used, e.g., RLE, quantization, deflate (via libz) and Burrows-Wheeler transform (via libbzip2). Libpolycomp can compress the timelines acquired by the Planck/LFI instrument with an overall compression ratio of ~9, while other widely known programs (gzip, bzip2) reach compression ratios less than 1.5.
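
    The "polynomial compression" idea, fitting a low-degree polynomial to each chunk of a smooth, noise-free timeline and storing only the coefficients, can be sketched in a few lines. This is a conceptual numpy illustration with invented chunk size and degree, not libpolycomp's actual C API:

        import numpy as np

        def poly_compress(x, chunk=64, degree=4):
            # Fit one degree-4 polynomial per chunk; keep (degree+1) coefficients per chunk.
            t = np.arange(chunk)
            return [np.polyfit(t, x[i:i + chunk], degree)
                    for i in range(0, len(x) - chunk + 1, chunk)]

        def poly_decompress(coeffs, chunk=64):
            t = np.arange(chunk)
            return np.concatenate([np.polyval(c, t) for c in coeffs])

        t = np.linspace(0, 10, 4096)
        timeline = np.sin(0.3 * t) + 0.001 * t ** 2      # smooth stand-in ephemeris
        coeffs = poly_compress(timeline)
        rec = poly_decompress(coeffs)
        cr = timeline.nbytes / (len(coeffs) * 5 * 8)     # 5 float64 coefficients per 64 samples
        print(f"CR ~ {cr:.1f}, max abs error = {np.max(np.abs(rec - timeline[:len(rec)])):.2e}")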

  1. "Compressed" Compressed Sensing

    Reeves, Galen


    The field of compressed sensing has shown that a sparse but otherwise arbitrary vector can be recovered exactly from a small number of randomly constructed linear projections (or samples). The question addressed in this paper is whether an even smaller number of samples is sufficient when there exists prior knowledge about the distribution of the unknown vector, or when only partial recovery is needed. An information-theoretic lower bound with connections to free probability theory and an upper bound corresponding to a computationally simple thresholding estimator are derived. It is shown that in certain cases (e.g. discrete valued vectors or large distortions) the number of samples can be decreased. Interestingly though, it is also shown that in many cases no reduction is possible.

  2. Compressibility effects on the flow past a rotating cylinder

    Teymourtash, A. R.; Salimipour, S. E.


    In this paper, laminar flow past a rotating circular cylinder placed in a compressible uniform stream is investigated via two-dimensional numerical simulation, and the compressibility effects due to the combination of the free stream and cylinder rotation on the flow pattern, such as the forming, shedding, and removal of vortices, and also on the lift and drag coefficients, are studied. The numerical simulation of the flow is based on the discretization of the convective fluxes of the unsteady Navier-Stokes equations by second-order Roe's scheme and an explicit finite volume method. Because of the importance of the time-dependent parameters in the solution, second-order time accuracy is achieved by a dual-time-stepping approach. In order to validate the computer program, some results are compared with previous experimental and numerical data. The results of this study show that compressibility effects such as normal shock waves cause interesting variations in the flow around the cylinder, even in a free stream with a low Mach number. In incompressible flow around the rotating cylinder, increasing the speed ratio α (the ratio of the surface speed to the free-stream velocity) causes an ongoing increase in the lift coefficient, but in compressible flow, for each free-stream Mach number, increasing the speed ratio yields a limited lift coefficient (a maximum mean lift coefficient). In addition, results for compressible flow indicate that increasing the free-stream Mach number decreases the maximum mean lift coefficient and increases the mean drag coefficient. It is also found that increasing the Reynolds number at low Mach numbers decreases the maximum mean lift coefficient and critical speed ratio, and increases the mean drag coefficient and Strouhal number. At higher Mach numbers, however, these parameters become independent of the Reynolds number.

  3. Engineering Relative Compression of Genomes

    Grabowski, Szymon


    Progress in DNA sequencing technology boosts genomic database growth at a faster and faster rate. Compression, accompanied by random access capabilities, is the key to maintaining those huge amounts of data. In this paper we present an LZ77-style compression scheme for relative compression of multiple genomes of the same species. While the solution bears similarity to known algorithms, it offers significantly higher compression ratios at a compression speed over an order of magnitude greater. One of the successful new ideas is augmenting the reference sequence with phrases from the other sequences, making more LZ-matches available.
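
    A toy version of LZ77-style relative compression conveys the core idea: stretches of the target genome that match the reference become (position, length) pairs, and mismatching bases are emitted as literals. The sketch below uses a naive quadratic match search for clarity; a production implementation would use an index over the reference, and the paper's reference-augmentation idea is not modeled:

        def relative_compress(target: str, reference: str, min_match: int = 4):
            # Greedy LZ77-style parse of `target` against a fixed `reference`.
            out, i = [], 0
            while i < len(target):
                best_pos, best_len = -1, 0
                for j in range(len(reference)):      # naive O(n*m) search, for clarity
                    k = 0
                    while (j + k < len(reference) and i + k < len(target)
                           and reference[j + k] == target[i + k]):
                        k += 1
                    if k > best_len:
                        best_pos, best_len = j, k
                if best_len >= min_match:
                    out.append(("match", best_pos, best_len))
                    i += best_len
                else:
                    out.append(("lit", target[i]))
                    i += 1
            return out

        ref = "ACGTACGTGGTACCAGT"
        tgt = "ACGTACGTCGTACCAGT"                    # one SNP vs. the reference
        print(relative_compress(tgt, ref))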

  4. Axial Compressive Strength of Foamcrete with Different Profiles and Dimensions

    Othuman Mydin M.A.


    Lightweight foamcrete is a versatile material; it primarily consists of a cement-based mortar mixed with at least 20% air by volume. High flowability, low self-weight, minimal aggregate requirement, controlled low strength and good thermal insulation properties are a few of its characteristics. Its dry density is typically below 1600 kg/m3, with compressive strengths up to a maximum of 15 MPa. The ASTM standard provision specifies a correction factor for concrete strengths of between 14 and 42 MPa to compensate for the reduced strength when the height-to-diameter aspect ratio of the specimen is less than 2.0, while the CEB-FIP provision specifically mentions the ratio of 150 x 300 mm cylinder strength to 150 mm cube strength. However, neither provision specifically clarifies the applicability and/or modification of the correction factors for the compressive strength of foamcrete. The proposed laboratory work studies the effect of different dimensions and profiles on the axial compressive strength of the material. Specimens of various dimensions and profiles are cast with square and circular cross-sections, i.e., cubes, prisms and cylinders, and their behavior in compressive strength at 7 and 28 days is investigated. Hypothetically, compressive strength will decrease with increasing specimen dimension, and a specimen with a cube profile would yield compressive strength comparable to a cylinder (100 x 100 x 100 mm cube to 100 mm dia x 200 mm cylinder).

  5. Influence of Axial Compression Ratio on the Ductility of Frames with Construction Joints

    阎西康; 陈育苏; 常璐平; 陈培


    Ductility is one of the most important parameters for evaluating the seismic performance of a structure. To study the impact of the axial compression ratio on the ductility of frames with construction joints, skeleton curves, displacement ductility factors and hysteresis curves of frames with construction joints were obtained from low-cycle reversed loading tests, and the results were compared with those of four frame columns tested under the same conditions. The results show that as the axial compression ratio increases, the frame ductility decreases, and the influence of construction joints on frame ductility diminishes as well.

  6. Maximum Work of Free-Piston Stirling Engine Generators

    Kojima, Shinji


    Using the method of adjoint equations described in Ref. [1], we have calculated the maximum thermal efficiencies that are theoretically attainable by free-piston Stirling and Carnot engine generators by considering the work loss due to friction and Joule heat. The net work done by the Carnot cycle is negative even when the duration of heat addition is optimized to give the maximum amount of heat addition, which is the same situation for the Brayton cycle described in our previous paper. For the Stirling cycle, the net work done is positive, and the thermal efficiency is greater than that of the Otto cycle described in our previous paper by a factor of about 2.7-1.4 for compression ratios of 5-30. The Stirling cycle is much better than the Otto, Brayton, and Carnot cycles. We have found that the optimized piston trajectories of the isothermal, isobaric, and adiabatic processes are the same when the compression ratio and the maximum volume of the same working fluid of the three processes are the same, which has facilitated the present analysis because the optimized piston trajectories of the Carnot and Stirling cycles are the same as those of the Brayton and Otto cycles, respectively.
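
    As a baseline for the compression-ratio range quoted above, the air-standard Otto-cycle efficiency, eta = 1 - r^(1 - gamma), is easy to tabulate; the loss model (friction, Joule heat) that produces the paper's Stirling-to-Otto factors is not reproduced here:

        # Air-standard Otto-cycle thermal efficiency: eta = 1 - r**(1 - gamma)
        gamma = 1.4  # ratio of specific heats for air
        for r in (5, 10, 15, 20, 25, 30):
            eta = 1.0 - r ** (1.0 - gamma)
            print(f"r = {r:2d}: eta_Otto = {eta:.3f}")   # e.g. r=5 -> 0.475, r=30 -> 0.744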

  7. Mortar constituent of concrete under cyclic compression

    Maher, A.; Darwin, D.


    The behavior of the mortar constituent of concrete under cyclic compression was studied, and a simple analytic model was developed to represent its cyclic behavior. The experimental work consisted of monotonic and cyclic compressive loading of mortar. Two mixes were used, with proportions corresponding to concretes having water-cement ratios of 0.5 and 0.6. Forty-four groups of specimens were tested at ages ranging from 5 to 70 days. Complete monotonic and cyclic stress-strain envelopes were obtained. A number of loading regimes were investigated, including cycles to a constant maximum strain. Major emphasis was placed on tests using relatively high stress cycles. Degradation was shown to be a continuous process and a function of both total strain and load history. No stability or fatigue limit was apparent.

  8. Backpropagation Neural Network Implementation for Medical Image Compression

    Kamil Dimililer


    Full Text Available Medical images require compression before transmission or storage, due to constrained bandwidth and storage capacity. An ideal image compression system must yield a high-quality compressed image with a high compression ratio. In this paper, the Haar wavelet transform and the discrete cosine transform are considered, and a neural network is trained to relate X-ray image contents to their ideal compression method and optimum compression ratio.

  9. Beamforming Using Compressive Sensing


    dB to align the peak at 7.3°. Comparing peaks to valleys, compressive sensing provides a greater main-to-interference (and noise) ratio...elements.

  10. Prediction of Concrete Compressive Strength by Evolutionary Artificial Neural Networks

    Mehdi Nikoo


    Full Text Available Compressive strength of concrete has been predicted using evolutionary artificial neural networks (EANNs), a combination of artificial neural networks (ANNs) and evolutionary search procedures such as genetic algorithms (GA). In this paper, samples of cylindrical concrete parts with different characteristics, comprising 173 experimental data patterns, were used to construct the models. Water-cement ratio, maximum sand size, amount of gravel, cement, 3/4 sand, 3/8 sand, and a soft-sand coefficient were considered as inputs, and the ANN models were used to calculate the compressive strength of concrete. Moreover, GA was used to optimize the number of layers and nodes and the weights of the ANN models. In order to evaluate the accuracy of the model, the optimized ANN model was compared with a multiple linear regression (MLR) model. The simulation results verify that the recommended ANN model offers more flexibility, capability, and accuracy in predicting the compressive strength of concrete.
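
    As a rough, hedged illustration of the evolutionary-ANN idea, the sketch below couples a toy genetic algorithm (selection plus mutation over hidden-layer sizes) to an MLP regressor scored by cross-validation. The synthetic data, the seven placeholder inputs, and the GA settings are all assumptions for illustration; the paper's GA also optimizes the number of layers and the weights.

```python
# Toy GA over MLP hidden-layer sizes; synthetic stand-in for the 173 concrete samples.
import random
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(173, 7))                      # 7 mix-design inputs (placeholder)
y = 30 - 20 * X[:, 0] + 5 * X[:, 1] + rng.normal(0, 1, 173)   # synthetic strength (MPa)

def fitness(layers):
    """Negative MSE from 3-fold cross-validation; higher is better."""
    net = MLPRegressor(hidden_layer_sizes=layers, max_iter=2000, random_state=0)
    return cross_val_score(net, X, y, cv=3, scoring="neg_mean_squared_error").mean()

population = [(random.randint(2, 16),) for _ in range(6)]     # one hidden layer per genome
for generation in range(5):
    scored = sorted(population, key=fitness, reverse=True)
    parents = scored[:3]                                      # selection: keep the fittest
    children = [(max(2, p[0] + random.randint(-3, 3)),) for p in parents]  # mutation
    population = parents + children
print("best hidden-layer size:", max(population, key=fitness))
```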

  11. Effect of Lossy JPEG Compression of an Image with Chromatic Aberrations on Target Measurement Accuracy

    Matsuoka, R.


    This paper reports an experiment conducted to investigate the effect of lossy JPEG compression of an image with chromatic aberrations on the measurement accuracy of target centers by the intensity-weighted centroid method. Six images of a white sheet bearing 30 by 20 black filled circles were utilized in the experiment. The images were acquired with a Canon EOS 20D digital camera. The image data were compressed using the two compression parameter sets (downsampling ratio, quantization table, and Huffman code table) utilized in the EOS 20D. The experimental results clearly indicate that lossy JPEG compression of an image with chromatic aberrations can produce a significant effect on the measurement accuracy of target centers by the intensity-weighted centroid method. The maximum displacements of the red, green and blue components caused by lossy JPEG compression were 0.20, 0.09, and 0.20 pixels respectively. The results also suggest that the downsampling of the chrominance components Cb and Cr in lossy JPEG compression produces displacements between uncompressed and compressed image data. In conclusion, since displacements caused by lossy JPEG compression appear impossible to correct, the author recommends that lossy JPEG compression not be executed before recording an image in a digital camera when highly precise image measurement is to be performed using color images acquired by a non-metric digital camera.
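
    The intensity-weighted centroid method referred to above is compact enough to sketch directly. The snippet below is a minimal illustration, assuming dark circular targets on a bright background (the weighting is therefore inverted); the function name and the toy patch are invented for the example.

```python
import numpy as np

def intensity_weighted_centroid(patch: np.ndarray) -> tuple:
    """Return the (row, col) centroid of a patch, weighted by darkness."""
    w = patch.max() - patch.astype(float)      # invert: black targets carry the weight
    rows, cols = np.indices(patch.shape)
    total = w.sum()
    return (rows * w).sum() / total, (cols * w).sum() / total

# Toy example: a dark disc of radius 6 centered at (10, 14) on a white patch.
rr, cc = np.indices((21, 29))
patch = np.where((rr - 10.0) ** 2 + (cc - 14.0) ** 2 <= 36, 0, 255)
print(intensity_weighted_centroid(patch))      # ~ (10.0, 14.0)
```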

  12. Conceptual design of heavy ion beam compression using a wedge

    Jonathan C. Wong


    Full Text Available Heavy ion beams are a useful tool for conducting high energy density physics (HEDP) experiments. Target heating can be enhanced by beam compression, because a shorter pulse diminishes hydrodynamic expansion during irradiation. A conceptual design is introduced to compress ∼100  MeV/u to ∼GeV/u heavy ion beams using a wedge. By deflecting the beam with a time-varying field and placing a tailor-made wedge amid its path downstream, each transverse slice passes through matter of different thickness. The resulting energy loss creates a head-to-tail velocity gradient, and the wedge shape can be designed by using stopping power models to give maximum compression at the target. The compression ratio at the target was found to vary linearly with the ratio of head-to-tail centroid offset to spot radius at the wedge; the latter ratio should be approximately 10 to attain tenfold compression. The decline in beam quality due to projectile ionization, energy straggling, fragmentation, and scattering is shown to be acceptable for well-chosen wedge materials. A test experiment is proposed to verify the compression scheme and to study the beam-wedge interaction and its associated beam dynamics, which will facilitate further efforts towards a HEDP facility.

  13. Compressive sensing of sparse tensors.

    Friedland, Shmuel; Li, Qun; Schonfeld, Dan


    Compressive sensing (CS) has triggered an enormous research activity since its first appearance. CS exploits the signal's sparsity or compressibility in a particular domain and integrates data compression and acquisition, thus allowing exact reconstruction through relatively few nonadaptive linear measurements. While conventional CS theory relies on data representation in the form of vectors, many data types in various applications, such as color imaging, video sequences, and multisensor networks, are intrinsically represented by higher order tensors. Application of CS to higher order data representation is typically performed by conversion of the data to very long vectors that must be measured using very large sampling matrices, thus imposing a huge computational and memory burden. In this paper, we propose generalized tensor compressive sensing (GTCS)-a unified framework for CS of higher order tensors, which preserves the intrinsic structure of tensor data with reduced computational complexity at reconstruction. GTCS offers an efficient means for representation of multidimensional data by providing simultaneous acquisition and compression from all tensor modes. In addition, we propound two reconstruction procedures, a serial method and a parallelizable method. We then compare the performance of the proposed method with Kronecker compressive sensing (KCS) and multiway compressive sensing (MWCS). We demonstrate experimentally that GTCS outperforms KCS and MWCS in terms of both reconstruction accuracy (within a range of compression ratios) and processing speed. The major disadvantage of our methods (and of MWCS as well) is that the compression ratios may be worse than those offered by KCS.

  14. Image Compression using GSOM Algorithm



    Full Text Available Conventional techniques such as Huffman coding, the Shannon-Fano method, the LZ and LZ-77 methods, and run-length encoding are widely used for data compression. A traditional approach to reducing the large amount of data is to discard some data redundancy and tolerate some noise after reconstruction. We present a neural-network-based growing self-organizing map (GSOM) technique that can be a reliable and efficient way to achieve vector quantization; a typical application of such an algorithm is image compression. Moreover, Kohonen networks realize a mapping between an input and an output space that preserves topology. This feature can be used to build new compression schemes which obtain better compression rates than classical methods such as JPEG without reducing image quality. The experimental results show that the proposed algorithm improves the compression ratio for BMP, JPG and TIFF files.

  15. Effect of Raw Materials and Their Ratio on Compressive Strength of Magnesium Phosphate Cement

    齐召庆; 徐哲; 孙运亮; 丁建华; 张时豪; 姜自超


    The influence of the M/P ratio (mass ratio of MgO to K2HPO4), the water/cement ratio (W/C), the borax content, and the MgO specific surface area on the early strength of magnesium phosphate cement was studied, and the microstructure of the cement was characterized by scanning electron microscopy (SEM). The results show that the compressive strength of magnesium phosphate cement at an age of 1 h decreases as the M/P ratio increases; at hydration ages of 3 d and 7 d, the cement stone is strongest at an M/P ratio of 4:1, reaching a maximum of 74.68 MPa. The water-binder ratio has little effect on the early strength, although the 7 d strength decreases as the water-binder ratio increases. In early hydration, as the borax content increases, the crystals of the hydration products become smaller, the structure becomes looser and crystal defects increase, so the strength decreases; in later hydration the products connect into a compact structure, and the strength after 7 d changes little. Within 7 d, the strength of the magnesium phosphate cement increases with increasing MgO specific surface area.

  16. Effects of JPEG data compression on magnetic resonance imaging evaluation of small vessels ischemic lesions of the brain; Efeitos da compressao de dados JPEG na avaliacao de lesoes vasculares cerebrais isquemicas de pequenos vasos em ressonancia magnetica

    Kuriki, Paulo Eduardo de Aguiar; Abdala, Nitamar; Nogueira, Roberto Gomes; Carrete Junior, Henrique; Szejnfeld, Jacob [Universidade Federal de Sao Paulo (UNIFESP/EPM), SP (Brazil). Dept. de Diagnostico por Imagem]


    Objective: to establish the maximum achievable JPEG compression ratio that does not affect quantitative and qualitative magnetic resonance imaging analysis of ischemic lesions in small vessels of the brain. Material and method: fifteen DICOM images were converted to JPEG with compression ratios of 1:10 to 1:60 and were assessed together with the original images by three neuroradiologists. The number, morphology and signal intensity of the lesions were analyzed. Results: lesions were properly identified up to a 1:30 ratio. More lesions were identified at a 1:10 ratio than in the original images. Morphology and edges were properly evaluated up to a 1:40 ratio. Compression did not affect signal. Conclusion: small lesions (< 2 mm) were identified, and at all compression ratios the JPEG algorithm generated image noise that misled observers into identifying more lesions in the JPEG images than in the DICOM images, thus generating false-positive results. (author)

  17. Transverse Compression of Tendons.

    Salisbury, S T Samuel; Buckley, C Paul; Zavatsky, Amy B


    A study was made of the deformation of tendons when compressed transverse to the fiber-aligned axis. Bovine digital extensor tendons were compression tested between flat rigid plates. The methods included: in situ image-based measurement of tendon cross-sectional shapes, after preconditioning but immediately prior to testing; multiple constant-load creep/recovery tests applied to each tendon at increasing loads; and measurements of the resulting tendon displacements in both transverse directions. In these tests, friction resisted axial stretch of the tendon during compression, giving approximately plane-strain conditions. This, together with the assumption of a form of anisotropic hyperelastic constitutive model proposed previously for tendon, justified modeling the isochronal response of tendon as that of an isotropic, slightly compressible, neo-Hookean solid. Inverse analysis, using finite-element (FE) simulations of the experiments and 10 s isochronal creep displacement data, gave values for Young's modulus and Poisson's ratio of this solid of 0.31 MPa and 0.49, respectively, for an idealized tendon shape and averaged data for all the tendons and E = 0.14 and 0.10 MPa for two specific tendons using their actual measured geometry. The compression load versus displacement curves, as measured and as simulated, showed varying degrees of stiffening with increasing load. This can be attributed mostly to geometrical changes in tendon cross section under load, varying according to the initial 3D shape of the tendon.

  18. Ratio between maximum standardized uptake value of N1 lymph nodes and tumor predicts N2 disease in patients with non-small cell lung cancer in 18F-FDG PET-CT scan.

    Honguero Martínez, A F; García Jiménez, M D; García Vicente, A; López-Torres Hidalgo, J; Colon, M J; van Gómez López, O; Soriano Castrejón, Á M; León Atance, P


    F-18 fluorodeoxyglucose integrated PET-CT is commonly used in the work-up of lung cancer to improve preoperative disease staging. The aim of the study was to analyze the ratio between the SUVmax of N1 lymph nodes and that of the primary lung tumor to predict mediastinal disease (N2) in patients operated on for non-small cell lung cancer. This is a retrospective study of a prospective database. Patients operated on for non-small cell lung cancer (NSCLC) with N1 disease on PET-CT were included. None had previous induction treatment; all underwent standard surgical resection plus systematic lymphadenectomy. There were 51 patients with FDG-PET-CT N1 disease; 44 (86.3%) were male, with a mean age of 64.1±10.8 years. Type of resection: pneumonectomy=4 (7.9%), lobectomy/bilobectomy=44 (86.2%), segmentectomy=3 (5.9%). Histology: adenocarcinoma=26 (51.0%), squamous=23 (45.1%), adenosquamous=2 (3.9%). Nodal status after surgical resection: N0=21 (41.2%), N1=12 (23.5%), N2=18 (35.3%). The mean ratio of the SUVmax of the N1 lymph node to the SUVmax of the primary lung tumor (SUVmax N1/T ratio) was 0.60 (range 0.08-2.80). ROC curve analysis was performed to obtain the optimal cut-off value of the SUVmax N1/T ratio for predicting N2 disease. On multivariate analysis, we found that a ratio of 0.46 or greater was an independent predictor of N2 mediastinal lymph node metastases, with a sensitivity and specificity of 77.8% and 69.7%, respectively. The SUVmax N1/T ratio in NSCLC patients correlates with mediastinal lymph node metastasis (N2 disease) after surgical resection. When the SUVmax N1/T ratio on integrated PET-CT is equal to or greater than 0.46, attention should be paid to the higher probability of N2 disease.
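
    The ROC-based cut-off selection described above is commonly implemented by maximizing the Youden index (sensitivity + specificity - 1). The sketch below shows the mechanics on fabricated placeholder data; it does not reproduce the study's 51-patient dataset or its multivariate analysis.

```python
import numpy as np
from sklearn.metrics import roc_curve

# Fabricated example data: SUVmax N1/T ratios and pathologic N2 status (0/1).
ratio  = np.array([0.20, 0.30, 0.50, 0.60, 0.90, 0.40, 0.55, 0.70, 0.25, 1.10])
has_n2 = np.array([0,    0,    1,    1,    1,    0,    1,    1,    0,    1])

fpr, tpr, thresholds = roc_curve(has_n2, ratio)
best = np.argmax(tpr - fpr)                     # Youden index J = sens + spec - 1
print(f"cut-off ~ {thresholds[best]:.2f}, "
      f"sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f}")
```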

  19. Analysis of fracture process zone in brittle rock subjected to shear-compressive loading

    ZHOU De-quan; CHEN Feng; CAO Ping; MA Chun-de


    An analytical expression for the prediction of the shear-compressive fracture process zone (SCFPZ) is derived by using a proposed local strain energy density criterion, in which the strain energy density is separated into dilatational and distortional parts, and only the former is considered to contribute to the brittle fracture of rock in different loading cases. The theoretical prediction by this criterion shows that the SCFPZ has the shape of an asymmetric mulberry leaf, which forms a shear-compression fracture kern. The dilatational strain energy density reaches its maximum value along the boundary of the SCFPZ. The dimension of the SCFPZ is governed by the ratio of K_II to K_I. The analytical results are then compared with those from the literature and with tests conducted on double-edge-cracked Brazilian disks subjected to diametrical compression. The obtained results are useful for the prediction of crack extension and for nonlinear analysis of shear-compressive fracture of brittle rock.

  20. Wellhead compression

    Harrington, Joe [Sertco Industries, Inc., Okemah, OK (United States); Vazquez, Daniel [Hoerbiger Service Latin America Inc., Deerfield Beach, FL (United States); Jacobs, Denis Richard [Hoerbiger do Brasil Industria de Equipamentos, Cajamar, SP (Brazil)


    Over time, all wells experience a natural decline in oil and gas production. In gas wells, the major problems are liquid loading and low downhole differential pressures, which negatively impact total gas production. As a form of artificial lift, wellhead compressors help reduce the tubing pressure, resulting in gas velocities above the critical velocity needed to surface water, oil and condensate, thus regaining lost production and increasing recoverable reserves. Best results come from reservoirs with high porosity, high permeability, high initial flow rates, low decline rates and high total cumulative production. In oil wells, excessive annulus gas pressure tends to inhibit both oil and gas production. Wellhead compression packages can provide a cost-effective solution to these problems by reducing the system pressure in the tubing or annulus, allowing for an immediate increase in production rates. Wells furthest from the gathering compressor typically benefit the most from wellhead compression due to system pressure drops. Downstream compressors also benefit from higher suction pressures, which reduce overall compression horsepower requirements. Special care must be taken in selecting the best equipment for these applications. The successful implementation of wellhead compression from an economical standpoint hinges on the testing, installation and operation of the equipment. Key challenges, suggested equipment features designed to combat those challenges, and successful case histories from throughout Latin America are discussed below. (author)

  1. Compressive beamforming

    Xenaki, Angeliki; Mosegaard, Klaus


    Sound source localization with sensor arrays involves the estimation of the direction-of-arrival (DOA) from a limited number of observations. Compressive sensing (CS) solves such underdetermined problems achieving sparsity, thus improved resolution, and can be solved efficiently with convex...

  2. Image quality (IQ) guided multispectral image compression

    Zheng, Yufeng; Chen, Genshe; Wang, Zhonghai; Blasch, Erik


    Image compression is necessary for data transportation, saving both transfer time and storage space. In this paper, we focus our discussion on lossy compression. There are many standard image formats and corresponding compression algorithms, for example JPEG (DCT -- discrete cosine transform), JPEG 2000 (DWT -- discrete wavelet transform), BPG (better portable graphics) and TIFF (LZW -- Lempel-Ziv-Welch). The image quality (IQ) of the decompressed image is measured by numerical metrics such as root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and Structural Similarity (SSIM) index. Given an image and a specified IQ, we investigate how to select a compression method and its parameters to achieve the expected compression. Our scenario consists of 3 steps. The first step is to compress a set of images of interest by varying parameters and to compute their IQs for each compression method. The second step is to create several regression models per compression method after analyzing the IQ measurement versus the compression parameter over a number of compressed images. The third step is to compress the given image with the specified IQ using the compression method (JPEG, JPEG2000, BPG, or TIFF) selected according to the regression models. The IQ may be specified by a compression ratio (e.g., 100), in which case we select the compression method with the highest IQ (SSIM or PSNR); or the IQ may be specified by an IQ metric (e.g., SSIM = 0.8, or PSNR = 50), in which case we select the compression method with the highest compression ratio. Our experiments on thermal (long-wave infrared) grayscale images showed very promising results.
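
    The first step of the scenario, sweeping a compression parameter and recording the compression ratio together with IQ metrics, can be sketched in a few lines. The snippet below is an assumed minimal version using Pillow's JPEG encoder and scikit-image metrics on a random grayscale array; the regression-model stage and the BPG/TIFF paths are not reproduced.

```python
import io
import numpy as np
from PIL import Image
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def jpeg_sweep(img: np.ndarray, target_ssim: float = 0.8):
    """Return (quality, CR, PSNR, SSIM) of the highest-CR setting meeting target_ssim."""
    raw_bytes = img.size            # 8-bit grayscale: one byte per pixel
    best = None
    for q in range(5, 100, 5):
        buf = io.BytesIO()
        Image.fromarray(img).save(buf, format="JPEG", quality=q)
        buf.seek(0)
        dec = np.asarray(Image.open(buf))
        cr = raw_bytes / buf.getbuffer().nbytes
        ssim = structural_similarity(img, dec, data_range=255)
        if ssim >= target_ssim and (best is None or cr > best[1]):
            psnr = peak_signal_noise_ratio(img, dec, data_range=255)
            best = (q, cr, psnr, ssim)
    return best

img = np.random.default_rng(1).integers(0, 256, (128, 128)).astype(np.uint8)
print(jpeg_sweep(img))
```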

  3. Beamforming using compressive sensing.

    Edelmann, Geoffrey F; Gaumond, Charles F


    Compressive sensing (CS) is compared with conventional beamforming using horizontal beamforming of at-sea, towed-array data. They are compared qualitatively using bearing time records and quantitatively using the signal-to-interference ratio. Qualitatively, CS exhibits lower levels of background interference than conventional beamforming. Furthermore, bearing time records show increasing, but tolerable, levels of background interference when the number of elements is decreased. For the full array, CS generates a signal-to-interference ratio of 12 dB, but conventional beamforming only 8 dB. The superiority of CS over conventional beamforming is much more pronounced with undersampling.

  4. Partial transparency of compressed wood

    Sugimoto, Hiroyuki; Sugimori, Masatoshi


    We have developed a novel wood composite with optical transparency in arbitrary regions. Pores in wood cells vary greatly in size. These pores lengthen the light path in the sample, because the refractive indices of the cell constituents and of the air in the lumina differ. In this study, wood compressed until the lumina were nearly closed exhibited optical transparency. Because such compression of wood requires plastic deformation, the wood was impregnated with phenolic resin. The optimal condition for high transmission is a compression ratio above 0.7.

  5. Performance and Emissions Research of the High Compression Ratio Methanol Engine

    王晋; 朱建军; 王勇; 刘磊; 高聪慧


    A bench test was conducted using a farm-tractor diesel engine. A 1115 single-cylinder diesel engine was modified to burn M100 methanol fuel by adding a glow plug and enlarging the compression ratio and the fuel injection pump diameter, and comparative tests were then carried out between the converted methanol engine and the original diesel engine. The results show that the methanol engine delivers higher power and improved fuel economy compared with the original diesel engine, and its NOx emissions are reduced by 45% on average. Although its HC and CO emissions are higher overall than the diesel engine's, they are reduced by 70% on average at high loads, and after three-way catalytic treatment of the exhaust, HC and CO emissions can be lowered to the diesel engine's level. This comparison provides a theoretical basis for the possibility of methanol replacing diesel, which is of great significance for energy saving and emission reduction in diesel engines.

  6. Compressing DNA sequence databases with coil

    Hendy Michael D


    Full Text Available Abstract Background Publicly available DNA sequence databases such as GenBank are large, and are growing at an exponential rate. The sheer volume of data being dealt with presents serious storage and data communications problems. Currently, sequence data is usually kept in large "flat files," which are then compressed using standard Lempel-Ziv (gzip) compression – an approach which rarely achieves good compression ratios. While much research has been done on compressing individual DNA sequences, surprisingly little has focused on the compression of entire databases of such sequences. In this study we introduce the sequence database compression software coil. Results We have designed and implemented a portable software package, coil, for compressing and decompressing DNA sequence databases based on the idea of edit-tree coding. coil is geared towards achieving high compression ratios at the expense of execution time and memory usage during compression – the compression time represents a "one-off investment" whose cost is quickly amortised if the resulting compressed file is transmitted many times. Decompression requires little memory and is extremely fast. We demonstrate a 5% improvement in compression ratio over state-of-the-art general-purpose compression tools for a large GenBank database file containing Expressed Sequence Tag (EST) data. Finally, coil can efficiently encode incremental additions to a sequence database. Conclusion coil presents a compelling alternative to conventional compression of flat files for the storage and distribution of DNA sequence databases having a narrow distribution of sequence lengths, such as EST data. Increasing compression levels for databases having a wide distribution of sequence lengths is a direction for future work.

  7. Image Compression Using Discrete Wavelet Transform

    Mohammad Mozammel Hoque Chowdhury


    Full Text Available Image compression is a key technology in the transmission and storage of digital images because of the vast data associated with them. This research suggests a new image compression scheme, with a pruning proposal, based on the discrete wavelet transform (DWT). The effectiveness of the algorithm has been justified over some real images, and its performance has been compared with other common compression standards. The algorithm has been implemented using Visual C++ and tested on a Pentium Core 2 Duo 2.1 GHz PC with 1 GB RAM. Experimental results demonstrate that the proposed technique provides sufficiently high compression ratios compared to other compression techniques.
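
    The core of such a DWT scheme, transform, prune small coefficients, reconstruct, can be sketched briefly. The version below is a generic illustration using PyWavelets with an assumed wavelet, level, and keep-fraction; it is not the paper's specific pruning proposal.

```python
import numpy as np
import pywt

def dwt_compress(img: np.ndarray, wavelet: str = "db4", level: int = 3,
                 keep: float = 0.05) -> np.ndarray:
    """Zero all but the largest `keep` fraction of DWT coefficients, then invert."""
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)
    thresh = np.quantile(np.abs(arr), 1.0 - keep)   # magnitude cut-off
    arr[np.abs(arr) < thresh] = 0.0                 # prune small coefficients
    coeffs = pywt.array_to_coeffs(arr, slices, output_format="wavedec2")
    return pywt.waverec2(coeffs, wavelet)

img = np.random.default_rng(2).normal(128.0, 30.0, (256, 256))
rec = dwt_compress(img)[:256, :256]
print("RMSE:", float(np.sqrt(np.mean((img - rec) ** 2))))
```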

  8. Maximum Autocorrelation Factorial Kriging

    Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.


    This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax rotated factors from...

  9. Application of detector precision characteristics and histogram packing for compression of biological fluorescence micrographs.

    Bernas, Tytus; Starosolski, Roman; Robinson, J Paul; Rajwa, Bartlomiej


    Modern applications of biological microscopy such as high-content screening (HCS), 4D imaging, and multispectral imaging may involve collection of thousands of images in every experiment, making efficient image-compression techniques necessary. Reversible compression algorithms, when used with biological micrographs, provide only a moderate compression ratio, while irreversible techniques obtain better ratios at the cost of removing some information from images and introducing artifacts. We construct a model of noise as a function of signal in the imaging system. In the next step, insignificant intensity levels are discarded using intensity binning. The resulting images, characterized by sparse intensity histograms, are coded reversibly. We evaluate the compression efficiency of combined reversible coding and intensity depth reduction using single-channel 12-bit light micrographs of several subcellular structures. We apply local and global measures of intensity distribution to estimate the maximum distortions introduced by the proposed algorithm. We demonstrate that the algorithm provides efficient compression and does not introduce significant changes to biological micrographs. The algorithm preserves the information content of these images and therefore offers better fidelity than the standard irreversible compression method JPEG2000.
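
    The intensity-binning stage can be illustrated with a variance-stabilizing square-root transform: if noise variance grows roughly linearly with signal, quantizing the stabilized values merges levels that differ by less than the noise. The gain value and the Poisson toy data below are assumptions standing in for the paper's calibrated signal-dependent noise model.

```python
import numpy as np

def bin_intensities(img: np.ndarray, gain: float = 4.0) -> np.ndarray:
    """Merge intensity levels whose spacing falls below the modeled noise amplitude."""
    # Assumed noise model: variance proportional to signal (photon-limited detector
    # with gain `gain`). The square root stabilizes the noise so that unit
    # quantization discards only sub-noise intensity distinctions.
    stabilized = 2.0 * np.sqrt(img.astype(float) / gain)
    binned = np.round(stabilized)
    restored = gain * (binned / 2.0) ** 2           # representative level per bin
    return restored.astype(img.dtype)

img = np.random.default_rng(3).poisson(800, (64, 64)).astype(np.uint16)
packed = bin_intensities(img)
print("intensity levels before:", np.unique(img).size, "after:", np.unique(packed).size)
```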

  10. Establishment of Maximum Voluntary Compressive Neck Tolerance Levels


    Michael Cote, John Buhrman, Nathaniel Bridges, Casey Pirnstill, Chris Burneka, John Plaga, Grant Roush. Biosciences and Performance Division, Vulnerability Analysis Branch, July 2011.

  11. Structured-light Image Compression Based on Fractal Theory


    A fractal image compression method is introduced and applied to compress line structured-light images. Exploiting the self-similarity of the structured-light image, a satisfactory compression ratio and a high peak signal-to-noise ratio (PSNR) are attained. The experimental results indicate that this method can achieve high performance.

  12. The Lateral Compressive Buckling Performance of Aluminum Honeycomb Panels for Long-Span Hollow Core Roofs

    Caiqi Zhao


    Full Text Available To solve the problem of critical buckling in the structural analysis and design of the new long-span hollow core roof architecture proposed in this paper (referred to as a “honeycomb panel structural system” (HSSS)), lateral compression tests and finite element analyses were employed in this study to examine the lateral compressive buckling performance of this new type of honeycomb panel with different length-to-thickness ratios. The results led to two main conclusions: (1) Under the experimental conditions that were used, honeycomb panels with the same planar dimensions but different thicknesses had the same compressive stiffness immediately before buckling, while the lateral compressive buckling load-bearing capacity initially increased rapidly with an increasing honeycomb core thickness and then approached the same limiting value; (2) The compressive stiffnesses of test pieces with the same thickness but different lengths were different, while the maximum lateral compressive buckling loads were very similar. Overall instability failure is prone to occur in long and flexible honeycomb panels. In addition, the errors between the lateral compressive buckling loads from the experiment and the finite element simulations are within 6%, which demonstrates the effectiveness of the nonlinear finite element analysis and provides a theoretical basis for future analysis and design of this new type of spatial structure.

  13. Hydrologic Cycle Response to the Paleocene-Eocene Thermal Maximum at Austral, High-Latitude Site 690 as Revealed by In Situ Measurements of Foraminiferal Oxygen Isotope and Mg/Ca Ratios

    Kozdon, R.; Kelly, D.; Fournelle, J.; Valley, J. W.


    Earth surface temperatures warmed by ~5°C during an ancient (~55.5 Ma) global warming event termed the Paleocene-Eocene thermal maximum (PETM). This transient (~200 ka) "hyperthermal" climate state had profound consequences for the planet's surficial processes and biosphere, and is widely touted as being an ancient analog for climate change driven by human activities. Hallmarks of the PETM are pervasive carbonate dissolution in the ocean basins and a negative carbon isotope excursion (CIE) recorded in a variety of substrates including soil and marine carbonates. Together these lines of evidence signal the rapid (≤30 ka) release of massive quantities (≥2000 Gt) of 13C-depleted carbon into the exogenic carbon cycle. Paleoenvironmental reconstructions based on pedogenic features in paleosols, clay mineralogy and sedimentology of coastal and continental deposits, and land-plant communities indicate that PETM warmth was accompanied by a major perturbation to the hydrologic cycle. Micropaleontological evidence and n-alkane hydrogen isotope records indicate that increased poleward moisture transport reduced sea-surface salinities (SSSs) in the central Arctic Ocean during the PETM. Such findings are broadly consistent with predictions of climate model simulations. Here we reassess a well-studied PETM record from the Southern Ocean (ODP Site 690) in light of new δ18O and Mg/Ca data obtained from planktic foraminiferal shells by secondary ion mass spectrometry (SIMS) and electron microprobe analysis (EMPA), respectively. The unparalleled spatial resolution of these in situ techniques permits extraction of more reliable δ18O and Mg/Ca data by targeting of minute (≤10 μm spots), biogenic domains within individual planktic foraminifera that retain the original shell chemistry (Kozdon et al. 2011, Paleocean.). In general, the stratigraphic profile and magnitude of the δ18O decrease (~2.2‰) delimiting PETM warming in our SIMS-generated record are similar to those of

  14. Transfer induced compressive strain in graphene

    Larsen, Martin Benjamin Barbour Spanget; Mackenzie, David; Caridad, Jose


    We have used spatially resolved micro Raman spectroscopy to map the full width at half maximum (FWHM) of the graphene G-band and the 2D and G peak positions, for as-grown graphene on copper catalyst layers, for transferred CVD graphene and for micromechanically exfoliated graphene, in order to characterize the effects of a transfer process on graphene properties. Here we use the FWHM(G) as an indicator of the doping level of graphene, and the ratio of the shifts in the 2D and G bands as an indicator of strain. We find that the transfer process introduces an isotropic, spatially uniform, compressive strain in graphene, and increases the carrier concentration.

  15. Maximum stellar iron core mass

    F W Giacobbe


    An analytical method of estimating the mass of a stellar iron core, just prior to core collapse, is described in this paper. The method employed depends, in part, upon an estimate of the true relativistic mass increase experienced by electrons within a highly compressed iron core, just prior to core collapse, and is significantly different from a more typical Chandrasekhar mass limit approach. This technique produced a maximum stellar iron core mass value of 2.69 × 10^30 kg (1.35 solar masses). This mass value is very near to the typical mass values found for neutron stars in a recent survey of actual neutron star masses. Although slightly lower and higher neutron star masses may also be found, lower mass neutron stars are believed to be formed as a result of enhanced iron core compression due to the weight of non-ferrous matter overlying the iron cores within large stars. And, higher mass neutron stars are likely to be formed as a result of fallback or accretion of additional matter after an initial collapse event involving an iron core having a mass no greater than 2.69 × 10^30 kg.

  16. Development of Wavelet Image Compression Technique to Particle Image Velocimetry



    In order to reduce image noise and physical storage requirements, a wavelet-based image compression technique was applied to PIV processing in this paper. To study the effect of the wavelet basis, standard PIV images were compressed using several known wavelet families (Daubechies, Coifman, and Beylkin) at various compression ratios. It was found that a higher-order wavelet basis provided good performance for compressing PIV images. Error analysis of the resulting velocity fields indicated that compression ratios even as high as 64:1 can be realized without losing significant flow information in PIV processing. The wavelet compression technique was applied to experimental images of a jet flow and showed excellent performance; the number of erroneous vectors can be reduced by varying the compression ratio. The wavelet image compression technique is thus very effective in a PIV system.

  17. Dual compression is not an uncommon type of iliac vein compression syndrome.

    Shi, Wan-Yin; Gu, Jian-Ping; Liu, Chang-Jian; Lou, Wen-Sheng; He, Xu


    Typical iliac vein compression syndrome (IVCS) is characterized by compression of the left common iliac vein (LCIV) by the overlying right common iliac artery (RCIA). We describe an underestimated type of IVCS with dual compression by the right and left common iliac arteries (LCIA) simultaneously. Thirty-one patients with IVCS were retrospectively included. All patients received trans-catheter venography and computed tomography (CT) examinations for diagnosing and evaluating IVCS. Late venography and reconstructed CT were used for evaluating the anatomical relationship among the LCIV, RCIA and LCIA. Imaging manifestations as well as demographic data were collected and evaluated by two experienced radiologists. Sole and dual compression were found in 32.3% (n = 10) and 67.7% (n = 21) of the 31 patients, respectively. No statistical differences existed between them in terms of age, gender, LCIV diameter at the maximum compression point, pressure gradient across the stenosis, or percentage of compression level. On CT and venography, sole compression commonly presented as a longitudinal compression at the orifice of the LCIV, while dual compression usually presented as one of two types: a lengthy stenosis along the upper side of the LCIV, or a longitudinal compression near the orifice of the external iliac vein. The presence of dual compression was significantly correlated with a tortuous LCIA (p = 0.006). The left common iliac vein can thus present with dual compression. This type of compression has typical manifestations on late venography and CT.

  18. Binary-phase compression of stretched pulses

    Lozovoy, Vadim V.; Nairat, Muath; Dantus, Marcos


    Pulse stretching and compression are essential for the energy scale-up of ultrafast lasers. Here, we consider a radical approach using spectral binary phases, containing only two values (0 and π), for stretching and compressing laser pulses. We numerically explore different strategies and present results for pulse compression by factors of up to a million back to the transform limit, and experimentally obtain pulse compression by a factor of one hundred, in close agreement with numerical calculations. Imperfections resulting from binary-phase compression are addressed by considering cross-polarized wave generation filtering, and we show that this approach leads to compressed pulses with contrast ratios greater than ten orders of magnitude. This new concept of binary-phase stretching and compression, if implemented in a multi-layer optic, could eliminate the need for traditional pulse stretchers and, more importantly, expensive compressors.
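
    The binary-phase principle is easy to demonstrate numerically: given a stretching phase phi(w), add pi at exactly those frequencies where cos(phi) < 0, so the residual spectral phase always has a non-negative cosine and the components add nearly in phase at the pulse center. The toy parameters below are arbitrary, and the sketch ignores the experimental subtleties (and the cross-polarized wave filtering) discussed in the paper.

```python
import numpy as np

n = 4096
w = 2 * np.pi * np.fft.fftfreq(n, d=0.05)       # angular-frequency grid (arbitrary units)
spectrum = np.exp(-(w / 2.0) ** 2)              # transform-limited spectral amplitude
phi = 800.0 * w ** 2                            # strong quadratic stretching phase

def peak_intensity(spectral_field: np.ndarray) -> float:
    return float(np.abs(np.fft.ifft(spectral_field)).max() ** 2)

tl = peak_intensity(spectrum)                   # transform-limited reference peak
stretched = peak_intensity(spectrum * np.exp(1j * phi))
binary = np.pi * (np.cos(phi) < 0)              # corrector restricted to {0, pi}
compressed = peak_intensity(spectrum * np.exp(1j * (phi + binary)))

print(f"stretched / TL peak:  {stretched / tl:.4f}")
print(f"compressed / TL peak: {compressed / tl:.4f}")   # most of the peak returns
```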

  19. Unsupervised regions of interest extraction for color image compression

    Xiaoguang Shao; Kun Gao; Lili L(U); Guoqiang Ni


    A novel unsupervised approach for regions of interest (ROI) extraction that combines a modified visual attention model with a clustering analysis method is proposed. A non-uniform color image compression algorithm then compresses the ROI and the other regions with different compression ratios through the JPEG image compression algorithm. The reconstruction algorithm for the compressed image is similar to that of the JPEG algorithm. Experimental results show that the proposed method performs better in terms of compression ratio and fidelity than other traditional approaches.

  20. Quality Aware Compression of Electrocardiogram Using Principal Component Analysis.

    Gupta, Rajarshi


    Electrocardiogram (ECG) compression finds wide application in various patient monitoring purposes. Quality control in ECG compression ensures reconstruction quality and its clinical acceptance for diagnostic decision making. In this paper, a quality-aware compression method for single-lead ECG is described using principal component analysis (PCA). After preprocessing, beat extraction and PCA decomposition, one of two independent quality criteria, namely a bit rate control (BRC) or an error control (EC) criterion, is set to select the optimal principal components, eigenvectors and their quantization levels to achieve the desired bit rate or error measure. The selected principal components and eigenvectors are finally compressed using a modified delta and Huffman encoder. The algorithms were validated with 32 sets of MIT-BIH Arrhythmia data and 60 normal and 30 diagnostic ECG data sets from the PTB Diagnostic ECG database (ptbdb), all at 1 kHz sampling. For BRC with a CR threshold of 40, an average compression ratio (CR), percentage root mean squared difference normalized (PRDN) and maximum absolute error (MAE) of 50.74, 16.22 and 0.243 mV, respectively, were obtained. For EC with an upper limit of 5% PRDN and 0.1 mV MAE, an average CR, PRDN and MAE of 9.48, 4.13 and 0.049 mV, respectively, were obtained. For mitdb record 117, the reconstruction quality could be preserved up to a CR of 68.96 by extending the BRC threshold. The proposed method yields better results than recently published works on quality-controlled ECG compression.
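
    The PCA stage of such a scheme can be sketched compactly: stack extracted beats as matrix rows, keep the top k principal components, and score the reconstruction with PRDN. The synthetic Gaussian "beats" below stand in for MIT-BIH/PTB records, and the quantization and delta/Huffman coding stages of the paper are omitted.

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(0.0, 1.0, 360)
# 200 synthetic beats: a Gaussian "QRS" bump with small amplitude jitter and noise.
beats = np.array([np.exp(-((t - 0.5) / 0.03) ** 2) * (1 + 0.05 * rng.normal())
                  + 0.01 * rng.normal(size=t.size) for _ in range(200)])

mean = beats.mean(axis=0)
U, s, Vt = np.linalg.svd(beats - mean, full_matrices=False)

k = 5                                              # retained principal components
recon = mean + (U[:, :k] * s[:k]) @ Vt[:k]

# PRDN: percentage RMS difference, normalized by the mean-removed signal energy.
prdn = 100 * np.linalg.norm(beats - recon) / np.linalg.norm(beats - beats.mean())
stored = U[:, :k].size + k + k * Vt.shape[1] + mean.size   # crude coefficient count
print(f"PRDN = {prdn:.2f} %, crude CR estimate = {beats.size / stored:.1f}")
```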

  1. The Influence of Compression on Properties of Binderless Compressed Veneer Made from Oil Palm Trunk

    Norhafizah Saari


    Full Text Available Binderless compressed veneer panels, each consisting of 5 layers of oil palm trunk veneer, were made in 3 different thicknesses: 7 mm, 10 mm and 15 mm. The panels were pressed at a temperature of 180 °C and a pressure of 5 MPa for a duration of 20 minutes. The veneers were pressed without using any synthetic adhesive in the manufacturing process. Mechanical and physical properties such as flexural strength, thickness swelling, water absorption, density and compression ratio were observed and evaluated based on the Japanese Agricultural Standard 2003 (JAS). The findings showed that the binderless compressed veneer panels pressed with a 7 mm thickness bar had the highest flexural strength among the panel types. Dimensional stability properties such as thickness swelling and water absorption showed a relationship with compression ratio. Based on the results, the compression ratio did influence the properties of binderless compressed veneer panels made from oil palm trunk.

  2. Maximum Likelihood Estimation of Ratios of Means and Standard Deviations from Normal Populations with Different Sample Numbers under Semi-Order Restriction

    史海芳; 李树有; 姬永刚


    For two normal populations with unknown means μi and variances σi^2 > 0, i = 1, 2, assume that there is a semi-order restriction between the ratios of means and standard deviations, and that the sample sizes of the two normal populations are different. A procedure for obtaining the maximum likelihood estimators of the μi's and σi's under the semi-order restriction is proposed. For the i = 3 case, some related results and simulations are given.

  3. An improved fast fractal image compression using spatial texture correlation

    Wang Xing-Yuan; Wang Yuan-Xing; Yun Jiao-Jiao


    This paper utilizes spatial texture correlation and the intelligent classification algorithm (ICA) search strategy to speed up the encoding process and improve the bit rate for fractal image compression. Texture is one of the most important properties for the representation of an image. Entropy and maximum entry from co-occurrence matrices are used to represent the texture features of an image. For a range block, the domain blocks associated with neighbouring range blocks that have similar texture features can be searched. In addition, domain blocks with similar texture features are searched in the ICA search process. Experiments show that, in comparison with some typical methods, the proposed algorithm significantly speeds up the encoding process and achieves a higher compression ratio, with a slight reduction in the quality of the reconstructed image; in comparison with a spatial correlation scheme, the proposed scheme spends much less encoding time while the compression ratio and the quality of the reconstructed image are almost the same.
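
    The co-occurrence features named above are simple to compute. The sketch below builds a horizontal gray-level co-occurrence matrix (GLCM) in plain NumPy and returns its entropy and maximum entry; the 16-level quantization and the random block are arbitrary choices for illustration.

```python
import numpy as np

def glcm_features(block: np.ndarray, levels: int = 16):
    """Return (entropy, max entry) of the horizontal co-occurrence matrix."""
    q = (block.astype(float) * levels / 256).astype(int).clip(0, levels - 1)
    pairs = levels * q[:, :-1] + q[:, 1:]          # (left pixel, right neighbor) codes
    glcm = np.bincount(pairs.ravel(), minlength=levels * levels).astype(float)
    p = glcm / glcm.sum()                          # joint probability matrix
    nz = p[p > 0]
    return float(-(nz * np.log2(nz)).sum()), float(p.max())

block = np.random.default_rng(5).integers(0, 256, (16, 16)).astype(np.uint8)
print(glcm_features(block))
```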

  4. Compressive Sensing Over Networks

    Feizi, Soheil; Effros, Michelle


    In this paper, we demonstrate some applications of compressive sensing over networks. We make a connection between compressive sensing and traditional information-theoretic techniques in source coding and channel coding. Our results provide an explicit trade-off between the rate and the decoding complexity. The key difference between compressive sensing and traditional information-theoretic approaches lies at the decoding side. Although optimal decoders for recovering the original signal compressed by source coding have high complexity, the compressive sensing decoder is a linear or convex optimization. First, we investigate applications of compressive sensing to distributed compression of correlated sources. Here, by using compressive sensing, we propose a compression scheme for a family of correlated sources with a modularized decoder, providing a trade-off between the compression rate and the decoding complexity. We call this scheme Sparse Distributed Compression. We use this compression scheme for a general multi...

  5. Data Compression for Video-Conferencing using Halftone and Wavelet Transform

    Dr. H.B.Kekre


    Full Text Available The overhead of data transmission over the internet is increasing exponentially every day, and optimizing the available bandwidth by compressing image data to the maximum extent is the basic motive. To this end, a combination of lossy halftoning and lossless wavelet transform techniques is proposed to obtain low-bit-rate video data transmission. In the halftoning process, the decimal pixel values of a bitmapped image are converted to either 1 or 0, which incurs pictorial loss and gives an 8:1 compression ratio (CR) irrespective of the image. The wavelet transform is then applied to the halftone image for further compression at various levels. Experimental results show a high CR and minimal mean square error (MSE). Ten sample images of different people, captured with a Nikon camera, were used for experimentation. All images are 512 x 512 bitmaps (.BMP). The proposed technique can be used for video conferencing, storage of movies, CCTV footage, etc.
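
    The 8:1 figure follows directly from storing 1 bit per 8-bit pixel. Since the abstract does not specify its halftoning algorithm, the sketch below uses standard Floyd-Steinberg error diffusion as a stand-in to show how a grayscale image becomes a 1-bit image while roughly preserving mean tone.

```python
import numpy as np

def floyd_steinberg(img: np.ndarray) -> np.ndarray:
    """Return a 0/1 halftone of an 8-bit grayscale image via error diffusion."""
    f = img.astype(float) / 255.0
    h, w = f.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            new = 1.0 if f[y, x] >= 0.5 else 0.0
            err = f[y, x] - new                     # quantization error to diffuse
            out[y, x] = int(new)
            if x + 1 < w:               f[y, x + 1]     += err * 7 / 16
            if y + 1 < h and x > 0:     f[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:               f[y + 1, x]     += err * 5 / 16
            if y + 1 < h and x + 1 < w: f[y + 1, x + 1] += err * 1 / 16
    return out

img = np.tile(np.linspace(0, 255, 64), (64, 1)).astype(np.uint8)   # gray ramp
print("mean tone:", img.mean() / 255, "~", floyd_steinberg(img).mean())
```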

  6. Influence of compression-expansion effect on oscillating-flow heat transfer in a finned heat exchanger

    Ke TANG; Juan YU; Tao JIN; Zhi-hua GAN


    Compression and expansion of a working gas due to the pressure oscillation of an oscillating flow can lead to a temperature variation of the working gas, which affects the heat transfer in the oscillating flow. This study focuses on the impact of the compression-expansion effect, indicated by the pressure ratio, on the heat transfer in a finned heat exchanger under practical operating conditions of the ambient-temperature heat exchangers in Stirling-type pulse tube refrigerators. The experimental results, summarized as the Nusselt number, are presented for analysis. An increase in the pressure ratio can result in a marked rise in the Nusselt number, which indicates that the compression-expansion effect should be considered in characterizing the heat transfer of an oscillating flow, especially in cases with a higher Valensi number and a lower maximum Reynolds number.

  7. Effect of Specimen Shape and Size on the Compressive Strength of Foamed Concrete

    Sudin M.A.S.


    Full Text Available Lightweight concrete, in the form of foamed concrete, is a versatile material that primarily consists of a cement-based mortar mixed with at least 20% volume of air. Its dry density is typically below 1600 kg/m3, with a maximum compressive strength of 15 MPa. The ASTM standard provision specifies a correction factor for concrete strengths of between 14 and 42 MPa, in order to compensate for the reduced strength when the height-to-diameter aspect ratio of a specimen is less than 2.0. The CEB-FIP provision, however, specifically mentions the ratio of 150 mm dia. × 300 mm cylinder strength to 150 mm cube strength; neither provision's requirements specifically clarify the applicability and/or modification of the correction factors for the compressive strength of lightweight concrete (in this case, foamed concrete). The focus of this work is to study the effect of specimen size and shape on the axial compressive strength of concrete. Specimens of various sizes and shapes were cast with square and circular cross-sections, i.e., cubes, prisms, and cylinders. Their compressive strength behaviour at 7 and 28 days was investigated. The results indicate that, as the CEB-FIP provision specifies, even for foamed concrete, 100 mm cubes (l/d = 1.0) produce a compressive strength comparable to that of 100 mm dia. × 200 mm cylinders (l/d = 2.0).

  8. Compression limits in cascaded quadratic soliton compression

    Bache, Morten; Bang, Ole; Krolikowski, Wieslaw;


    Cascaded quadratic soliton compressors generate under optimal conditions few-cycle pulses. Using theory and numerical simulations in a nonlinear crystal suitable for high-energy pulse compression, we address the limits to the compression quality and efficiency.

  9. Fundamental Interactions in Gasoline Compression Ignition Engines with Fuel Stratification

    Wolk, Benjamin Matthew

    ) a 98-species version including nitric oxide formation reactions. Development of reduced mechanisms is necessary because the detailed mechanism is computationally prohibitive in three-dimensional CFD and chemical kinetics simulations. Simulations of Partial Fuel Stratification (PFS), a GCI strategy, have been performed using CONVERGE with the 96-species reduced mechanism developed in this work for a 4-component gasoline surrogate. Comparison is made to experimental data from the Sandia HCCI/GCI engine at a compression ratio of 14:1 and intake pressures of 1 bar and 2 bar. Analysis of the heat release and temperature in the different equivalence ratio regions reveals that sequential auto-ignition of the stratified charge occurs in order of increasing equivalence ratio for 1 bar intake pressure and in order of decreasing equivalence ratio for 2 bar intake pressure. Increased low- and intermediate-temperature heat release with increasing equivalence ratio at 2 bar intake pressure compensates for decreased temperatures in higher-equivalence ratio regions due to evaporative cooling from the liquid fuel spray and decreased compression heating from lower values of the ratio of specific heats. The presence of low- and intermediate-temperature heat release at 2 bar intake pressure alters the temperature distribution of the mixture stratification before hot-ignition, promoting the desired sequential auto-ignition. At 1 bar intake pressure, the sequential auto-ignition occurs in the reverse order compared to 2 bar intake pressure and too fast for useful reduction of the maximum pressure rise rate compared to HCCI. Additionally, the premixed portion of the charge auto-ignites before the highest-equivalence ratio regions. Conversely, at 2 bar intake pressure, the premixed portion of the charge auto-ignites last, after the higher-equivalence ratio regions. More importantly, the sequential auto-ignition occurs over a longer time period for 2 bar intake pressure than at 1 bar intake

  10. Satellite data compression

    Huang, Bormin


    Satellite Data Compression covers recent progress in compression techniques for multispectral, hyperspectral and ultraspectral data. A survey of recent advances in the fields of satellite communications, remote sensing and geographical information systems is included. Satellite Data Compression, contributed by leaders in this field, is the first book available on satellite data compression. It covers onboard compression methodology and hardware developments in several space agencies. Case studies are presented on recent advances in satellite data compression techniques via various prediction-

  11. Experiment and finite element analysis of the seismic behavior of CFRP-confined RC circular columns with high axial compression ratio

    王震宇; 王代玉; 吕大刚


    To investigate the seismic behavior of FRP-confined circular RC columns with high axial compression ratio, six columns confined with carbon fiber-reinforced polymer (CFRP) in the plastic hinge region and two control columns were tested under constant axial load and cyclic lateral force. The test results demonstrated a marked improvement in the ductility and energy dissipation of the columns due to CFRP wrapping in the plastic hinge region, and showed that the contribution of the hoops to the confining effect should not be ignored under high axial compression ratios. A nonlinear analytical procedure was developed using the fiber model method based on OpenSees (Open System for Earthquake Engineering Simulation). The simulation results agree well with the experimental results for axial compression ratios less than 0.45; if the axial compression ratio exceeds 0.45, including the confining effects of both the hoops and the CFRP yields a better simulation of the test results. Finally, the influences of the axial compression ratio and of the length of CFRP wrapping in the plastic hinge region on the seismic performance of FRP-confined columns were analyzed. The results indicate that the lateral load capacity of the columns begins to decrease when the axial compression ratio exceeds 0.6, and that if the length of wrapped CFRP in the plastic hinge region exceeds 1.2 times the column diameter, the performance of the wrapped columns can be equivalent to that of fully wrapped columns.

  12. Maximum Autocorrelation Factorial Kriging

    Nielsen, Allan Aasbjerg; Conradsen, Knut; Pedersen, John L.; Steenfelt, Agnete


    This paper describes maximum autocorrelation factor (MAF) analysis, maximum autocorrelation factorial kriging, and its application to irregularly sampled stream sediment geochemical data from South Greenland. Kriged MAF images are compared with kriged images of varimax rotated factors from an ordinary non-spatial factor analysis, and they are interpreted in a geological context. It is demonstrated that MAF analysis contrary to ordinary non-spatial factor analysis gives an objective discrimina...

  13. Homogeneous Charge Compression Ignition Combustion of Dimethyl Ether

    Pedersen, Troels Dyhr; Schramm, Jesper


    This thesis is based on experimental and numerical studies on the use of dimethyl ether (DME) in the homogeneous charge compression ignition (HCCI) combustion process. The first paper in this thesis was published in 2007 and describes HCCI combustion of pure DME in a small diesel engine. The tests were designed to investigate the effect of engine speed, compression ratio and equivalence ratio on the combustion timing and the engine performance. It was found that the required compression ratio...

  14. Digital image compression in dermatology: format comparison.

    Guarneri, F; Vaccaro, M; Guarneri, C


    Digital image compression (reduction of the amount of numeric data needed to represent a picture) is widely used in electronic storage and transmission devices. Few studies have compared the suitability of the different compression algorithms for dermatologic images. We aimed at comparing the performance of four popular compression formats, Tagged Image File (TIF), Portable Network Graphics (PNG), Joint Photographic Expert Group (JPEG), and JPEG2000 on clinical and videomicroscopic dermatologic images. Nineteen (19) clinical and 15 videomicroscopic digital images were compressed using JPEG and JPEG2000 at various compression factors and TIF and PNG. TIF and PNG are "lossless" formats (i.e., without alteration of the image), JPEG is "lossy" (the compressed image has a lower quality than the original), JPEG2000 has a lossless and a lossy mode. The quality of the compressed images was assessed subjectively (by three expert reviewers) and quantitatively (by measuring, point by point, the color differences from the original). Lossless JPEG2000 (49% compression) outperformed the other lossless algorithms, PNG and TIF (42% and 31% compression, respectively). Lossy JPEG2000 compression was slightly less efficient than JPEG, but preserved image quality much better, particularly at higher compression factors. For its good quality and compression ratio, JPEG2000 appears to be a good choice for clinical/videomicroscopic dermatologic image compression. Additionally, its diffusion and other features, such as the possibility of embedding metadata in the image file and to encode various parts of an image at different compression levels, make it perfectly suitable for the current needs of dermatology and teledermatology.

  15. Compressive Fatigue in Wood

    Clorius, Christian Odin; Pedersen, Martin Bo Uhre; Hoffmeyer, Preben;


    An investigation of fatigue failure in wood subjected to load cycles in compression parallel to grain is presented. Small clear specimens of spruce are taken to failure in square-wave fatigue loading at a stress excitation level corresponding to 80% of the short-term strength. Four frequencies ranging from 0.01 Hz to 10 Hz are used. The number of cycles to failure is found to be a poor measure of the fatigue performance of wood. Creep, maximum strain, stiffness and work are monitored throughout the fatigue tests. Accumulated creep is suggested as a measure of damage, and a correlation is observed between stiffness reduction and accumulated creep. A failure model based on the total work during the fatigue life is rejected, and a modified work model based on elastic, viscous and non-recovered viscoelastic work is experimentally supported, and an explanation at a microstructural level...

  16. Image compression algorithm using wavelet transform

    Cadena, Luis; Cadena, Franklin; Simonov, Konstantin; Zotin, Alexander; Okhotnikov, Grigory


    Within the framework of multi-resolution analysis, an image compression algorithm based on the Haar wavelet has been studied. We examined the dependence of image quality on the compression ratio and obtained the variation of the compression level for the studied images. It is shown that a compression ratio in the range of 8-10 is optimal for environmental monitoring; under these conditions the compression level is in the range of 1.7-4.2, depending on the type of image. The algorithm proved more convenient and advantageous than WinRAR, and the Haar wavelet approach improves on standard methods of signal and image processing.
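    The core of such a scheme can be sketched in a few lines. Below is a minimal, illustrative single-level 2D Haar transform with coefficient thresholding in Python (numpy assumed; the function names and the fixed keep fraction are our own, not the authors'). A real codec would follow the thresholding with quantization and entropy coding and would iterate the transform on the average sub-band.

      import numpy as np

      def haar2d(img):
          # One level of the 2D Haar transform: rows, then columns
          # (an even-sized grayscale image is assumed).
          a = (img[:, 0::2] + img[:, 1::2]) / 2.0      # horizontal averages
          d = (img[:, 0::2] - img[:, 1::2]) / 2.0      # horizontal details
          rows = np.hstack([a, d])
          a2 = (rows[0::2, :] + rows[1::2, :]) / 2.0   # vertical averages
          d2 = (rows[0::2, :] - rows[1::2, :]) / 2.0   # vertical details
          return np.vstack([a2, d2])

      def compress(img, keep=0.1):
          # Keep only the largest `keep` fraction of coefficients by magnitude;
          # the zeroed coefficients are what an entropy coder squeezes out.
          c = haar2d(img.astype(float))
          c[np.abs(c) < np.quantile(np.abs(c), 1.0 - keep)] = 0.0
          return c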

  17. An efficient medical image compression scheme.

    Li, Xiaofeng; Shen, Yi; Ma, Jiachen


    In this paper, a fast lossless compression scheme is presented for medical images. This scheme consists of two stages. In the first stage, Differential Pulse Code Modulation (DPCM) is used to decorrelate the raw image data, thereby increasing the compressibility of the medical image. In the second stage, an effective scheme based on the Huffman coding method is developed to encode the residual image. This newly proposed scheme reduces the cost of the Huffman coding table while achieving a high compression ratio. With this algorithm, a compression ratio higher than that of the lossless JPEG method can be obtained. At the same time, this method is quicker than lossless JPEG2000. In other words, the newly proposed algorithm provides a good means for lossless medical image compression.
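    As a rough sketch of the first stage, a row-wise DPCM predictor and an entropy estimate (a lower bound on what a Huffman stage could achieve) might look as follows in Python; the left-neighbour predictor and the function names are illustrative assumptions, not the authors' exact scheme.

      import numpy as np
      from collections import Counter

      def dpcm_residual(img):
          # Predict each pixel from its left neighbour; keep column 0 verbatim.
          x = img.astype(np.int32)
          r = x.copy()
          r[:, 1:] = x[:, 1:] - x[:, :-1]
          return r

      def entropy_bpp(arr):
          # Shannon entropy of the symbol stream: a bound for Huffman coding.
          n = arr.size
          return -sum(c / n * np.log2(c / n)
                      for c in Counter(arr.ravel().tolist()).values())

    On smooth images the residuals cluster near zero, so entropy_bpp(dpcm_residual(img)) typically falls well below entropy_bpp(img), which is precisely the compressibility gain the first stage is after.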

  18. Optimizing chest compressions during delivery-room resuscitation.

    Wyckoff, Myra H; Berg, Robert A


    There is a paucity of data to support the recommendations for cardiac compressions for the newly born. Techniques, compression to ventilation ratios, hand placement, and depth of compression guidelines are generally based on expert consensus, physiologic plausibility, and data from pediatric and adult models.

  19. The Research for Compression Algorithm of Aerial Imagery

    Zhiyong Peng


    In this study, a new method combining the JPEG image compression algorithm with predictive coding was proposed; it effectively eliminates redundant information within each sub-block and between neighbouring sub-blocks, achieving a higher compression ratio than the JPEG compression algorithm alone while maintaining good image quality.

  20. Efficient compression of molecular dynamics trajectory files.

    Marais, Patrick; Kenwood, Julian; Smith, Keegan Carruthers; Kuttel, Michelle M; Gain, James


    We investigate whether specific properties of molecular dynamics trajectory files can be exploited to achieve effective file compression. We explore two classes of lossy, quantized compression scheme: "interframe" predictors, which exploit temporal coherence between successive frames in a simulation, and more complex "intraframe" schemes, which compress each frame independently. Our interframe predictors are fast, memory-efficient and well suited to on-the-fly compression of massive simulation data sets, and significantly outperform the benchmark BZip2 application. Our schemes are configurable: atomic positional accuracy can be sacrificed to achieve greater compression. For high fidelity compression, our linear interframe predictor gives the best results at very little computational cost: at moderate levels of approximation (12-bit quantization, maximum error ≈ 10^-2 Å), we can compress a 1-2 fs trajectory file to 5-8% of its original size. For 200 fs time steps, typically used in fine-grained water diffusion experiments, we can compress files to ~25% of their input size, still substantially better than BZip2. While compression performance degrades with high levels of quantization, the simulation error is typically much greater than the associated approximation error in such cases.
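    A minimal linear interframe predictor with uniform quantization can be sketched as follows (Python/numpy; the assumed ±5 Å delta range per coordinate and the drift handling are our simplifications, not the paper's scheme):

      import numpy as np

      BITS = 12
      STEP = 10.0 / (1 << BITS)   # quantizer step over an assumed 10 A delta range

      def encode(frames):
          # frames: (n_frames, n_atoms, 3) positions in Angstroms.
          deltas = np.diff(frames, axis=0)              # temporal coherence
          q = np.round(deltas / STEP).astype(np.int16)  # small ints pack well
          return frames[0], q

      def decode(first, q):
          # NB: a production codec predicts from the *reconstructed* previous
          # frame so quantization error cannot accumulate; this sketch does not.
          rest = first + np.cumsum(q.astype(float) * STEP, axis=0)
          return np.concatenate([first[None], rest], axis=0)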

  1. Effect of Axial Pre-Compression on Lateral Performance of Masonry Under Cyclic Loading

    Syed Hassan Farooq


    Strengthening of masonry against seismic events is essential and is receiving considerable attention from researchers around the globe. An extensive experimental program was carried out to study the in-plane lateral performance of un-reinforced, strengthened, and retrofitted masonry wall panels under lateral cyclic loading. Twenty tests were carried out: four under monotonic lateral loading, twelve under static cyclic loading, and four under pure compression. The test results were analyzed in five groups; this paper presents the analysis of group 4, which deals with the effect of axial pre-compression on masonry seismic performance. Three single-leaf panels with an aspect ratio of 0.67 and size 1.65x1.1 m were constructed using the same material and workmanship. The three un-reinforced walls were tested under 0, 0.5 and 1.0 MPa vertical pre-compression and displacement-controlled static cyclic loading; the wall tested under 0.5 MPa pre-compression served as the reference specimen. The key parameters studied were hysteresis behavior, peak lateral load, ultimate lateral displacement, energy dissipation, ductility, response factor, and damping ratio. It was found that the level of axial pre-compression has a significant effect on the lateral capacity, failure mode, and overall performance of masonry. With zero pre-compression the lateral capacity was very low and the wall failed in rocking at early stages of loading, whereas increasing the pre-compression to 1.0 MPa enhanced the lateral capacity by a factor of 1.92.

  2. Morphological Transform for Image Compression

    Luis Pastor Sanchez Fernandez


    A new method for image compression based on morphological associative memories (MAMs) is presented. We used the MAM to implement a new image transform and applied it at the transformation stage of image coding, thereby replacing such traditional methods as the discrete cosine transform or the discrete wavelet transform. Autoassociative and heteroassociative MAMs can be considered as a subclass of morphological neural networks. The morphological transform (MT) presented in this paper generates heteroassociative MAMs derived from image subblocks. The MT is applied to individual blocks of the image using some transformation matrix as an input pattern. Depending on this matrix, the image takes a morphological representation, which is used to perform the data compression at the next stages. With respect to traditional methods, the main advantage offered by the MT is the processing speed, whereas the compression rate and the signal-to-noise ratio are competitive with conventional transforms.

  3. Effects of maximum aggregate size on UPV of brick aggregate concrete.

    Mohammed, Tarek Uddin; Mahmood, Aziz Hasan


    Investigation was carried out to study the effects of maximum aggregate size (MAS) (12.5 mm, 19.0 mm, 25.0 mm, 37.5 mm, and 50.0 mm) on ultrasonic pulse velocity (UPV) of concrete. For investigation, first class bricks were collected and broken to make coarse aggregate. The aggregates were tested for specific gravity, absorption capacity, unit weight, and abrasion resistance. Cylindrical concrete specimens were made with different sand to aggregate volume ratio (s/a) (0.40 and 0.45), W/C ratio (0.45, 0.50, and 0.55), and cement content (375 kg/m³ and 400 kg/m³). The specimens were tested for compressive strength and Young's modulus. UPV through wet specimen was measured using Portable Ultrasonic Non-destructive Digital Indicating Tester (PUNDIT). Results indicate that the pulse velocity through concrete increases with an increase in MAS. Relationships between UPV and compressive strength, and UPV and Young's modulus of concrete, are proposed for different maximum sizes of brick aggregate.

  4. Performance Evaluation of Data Compression Systems Applied to Satellite Imagery

    Lilian N. Faria


    Onboard image compression systems reduce the data storage and downlink bandwidth requirements in space missions. This paper presents an overview and evaluation of some compression algorithms suitable for remote sensing applications. Prediction-based compression systems, such as DPCM and JPEG-LS, and transform-based compression systems, such as CCSDS-IDC and JPEG-XR, were tested over twenty multispectral (5-band) images from the CCD optical sensor of the CBERS-2B satellite. Performance evaluation of these algorithms was conducted using both quantitative rate-distortion measurements and subjective image quality analysis. The PSNR, MSSIM, and compression ratio results plotted in charts and the SSIM maps are used for comparison of quantitative performance. Broadly speaking, the lossless JPEG-LS outperforms other lossless compression schemes, and, for lossy compression, JPEG-XR can provide lower bit rate and a better tradeoff between compression ratio and image quality.

  5. Maximum likely scale estimation

    Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo


    A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or ...

  6. Compression of surface myoelectric signals using MP3 encoding.

    Chan, Adrian D C


    The potential of MP3 compression of surface myoelectric signals is explored in this paper. MP3 compression is a perceptual-based encoder scheme, used traditionally to compress audio signals. The ubiquity of MP3 compression (e.g., portable consumer electronics and internet applications) makes it an attractive option for remote monitoring and telemedicine applications. The effects of muscle site and contraction type are examined at different MP3 encoding bitrates. Results demonstrate that MP3 compression is sensitive to the myoelectric signal bandwidth, with larger signal distortion associated with myoelectric signals that have higher bandwidths. Compared to other myoelectric signal compression techniques reported previously (embedded zero-tree wavelet compression and adaptive differential pulse code modulation), MP3 compression demonstrates superior performance (i.e., lower percent residual differences for the same compression ratios).

  7. Word-Based Text Compression

    Platos, Jan


    Today there are many universal compression algorithms, but in most cases specific data are better served by a specific algorithm: JPEG for images, MPEG for movies, etc. For textual documents there are special methods based on the PPM algorithm or methods with non-character access, e.g., word-based compression. In the past, several papers describing variants of word-based compression using Huffman encoding or the LZW method were published. The subject of this paper is the description of a word-based compression variant based on the LZ77 algorithm. The LZ77 algorithm and its modifications are described, as are various ways of implementing the sliding window and various possibilities for output encoding. This paper also includes the implementation of an experimental application, testing of its efficiency, and finding the best combination of all parts of the LZ77 coder to achieve the best compression ratio. In conclusion there is comparison of this implemented application wi...
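    The word-based twist is simply to run the LZ77 match search over a sequence of word tokens instead of characters. A greedy, unoptimized sketch (our own illustration, not the paper's coder):

      def lz77_words(words, window=4096):
          out, i = [], 0
          while i < len(words):
              best_len, best_off = 0, 0
              for j in range(max(0, i - window), i):
                  k = 0
                  while (i + k < len(words) and j + k < i
                         and words[j + k] == words[i + k]):
                      k += 1
                  if k > best_len:
                      best_len, best_off = k, i - j
              if best_len >= 2:                      # match worth encoding
                  out.append(("match", best_off, best_len))
                  i += best_len
              else:                                  # emit a literal word
                  out.append(("lit", words[i]))
                  i += 1
          return out

      # lz77_words("to be or not to be".split())
      # -> [('lit','to'), ('lit','be'), ('lit','or'), ('lit','not'), ('match',4,2)]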

  8. Compressive light field sensing.

    Babacan, S Derin; Ansorge, Reto; Luessi, Martin; Matarán, Pablo Ruiz; Molina, Rafael; Katsaggelos, Aggelos K


    We propose a novel design for light field image acquisition based on compressive sensing principles. By placing a randomly coded mask at the aperture of a camera, incoherent measurements of the light passing through different parts of the lens are encoded in the captured images. Each captured image is a random linear combination of different angular views of a scene. The encoded images are then used to recover the original light field image via a novel Bayesian reconstruction algorithm. Using the principles of compressive sensing, we show that light field images with a large number of angular views can be recovered from only a few acquisitions. Moreover, the proposed acquisition and recovery method provides light field images with high spatial resolution and signal-to-noise-ratio, and therefore is not affected by limitations common to existing light field camera designs. We present a prototype camera design based on the proposed framework by modifying a regular digital camera. Finally, we demonstrate the effectiveness of the proposed system using experimental results with both synthetic and real images.

  9. Computer Modeling of a CI Engine for Optimization of Operating Parameters Such as Compression Ratio, Injection Timing and Injection Pressure for Better Performance and Emission Using Diesel-Diesel Biodiesel Blends

    M. Venkatraman


    Problem statement: The present work describes a theoretical investigation concerning the performance of a four-stroke compression ignition engine powered by alternative fuels in the form of diesel and diesel-biodiesel blends. Approach: The developed simulation model was used to estimate the cylinder pressure, heat release rate, brake thermal efficiency, brake specific fuel consumption and engine-out emissions. The simulation model includes Hohenberg's heat transfer model and a zero-dimensional combustion model for the prediction of combustion parameters. Results: Experiments were performed in a single cylinder DI diesel engine fuelled with blends of pungam methyl ester in proportions of PME10, PME20 and PME30 by volume with diesel fuel for validation of the simulated results. Conclusion/Recommendations: It was observed that there is good agreement between simulated and experimental results, which indicates that the simulation model predicts the performance and emission characteristics for any biodiesel-diesel fuel blend and engine specification given as input.

  10. A New Approach for Fingerprint Image Compression

    Mazieres, Bertrand


    The FBI has been collecting fingerprint cards since 1924 and now has over 200 million of them. Digitized with 8 bits of grayscale resolution at 500 dots per inch, this amounts to some 2000 terabytes of information. Without any compression, transmitting a 10 Mb card over a 9600 baud connection would take 3 hours. Hence we need compression, and compression as close to lossless as possible: all fingerprint details must be kept. Lossless compression usually does not give a compression ratio better than 2:1, which is not sufficient. Compressing these images with the JPEG standard leads to artefacts which appear even at low compression rates. Therefore in 1993 the FBI chose a compression scheme based on a wavelet transform, followed by scalar quantization and entropy coding: the so-called WSQ. This scheme achieves compression ratios of 20:1 without any perceptible loss of quality. The FBI's published standard specifies a decoder, which means that many parameters can be changed in the encoding process: the type of analysis/reconstruction filters, the way the bit allocation is made, the number of Huffman tables used for the entropy coding. The first encoder used 9/7 filters for the wavelet transform and did the bit allocation using a high-rate assumption. Since the transform produces 64 subbands, quite a lot of bands receive only a few bits even at an archival-quality compression rate of 0.75 bit/pixel. Thus, after a brief overview of the standard, we discuss a new approach to the bit allocation that seems to make more sense where theory is concerned. Then we discuss some implementation aspects, particularly the new entropy coder and the features that allow applications other than fingerprint image compression. Finally, we compare the performance of the new encoder to that of the first encoder.

  11. Maximum information photoelectron metrology

    Hockett, P; Wollenhaupt, M; Baumert, T


    Photoelectron interferograms, manifested in photoelectron angular distributions (PADs), are a high-information, coherent observable. In order to obtain the maximum information from angle-resolved photoionization experiments it is desirable to record the full, 3D, photoelectron momentum distribution. Here we apply tomographic reconstruction techniques to obtain such 3D distributions from multiphoton ionization of potassium atoms, and fully analyse the energy and angular content of the 3D data. The PADs obtained as a function of energy indicate good agreement with previous 2D data and detailed analysis [Hockett et al., Phys. Rev. Lett. 112, 223001 (2014)] over the main spectral features, but also indicate unexpected symmetry-breaking in certain regions of momentum space, thus revealing additional continuum interferences which cannot otherwise be observed. These observations reflect the presence of additional ionization pathways and, most generally, illustrate the power of maximum information measurements of th...

  12. Spectral Distortion in Lossy Compression of Hyperspectral Data

    Bruno Aiazzi


    Distortion allocation varying with wavelength in lossy compression of hyperspectral imagery is investigated, with the aim of minimizing the spectral distortion between original and decompressed data. The absolute angular error, or spectral angle mapper (SAM), is used to quantify spectral distortion, while radiometric distortions are measured by maximum absolute deviation (MAD) for near-lossless methods, for example, differential pulse code modulation (DPCM), or mean-squared error (MSE) for lossy methods, for example, spectral decorrelation followed by JPEG 2000. Two strategies of interband distortion allocation are compared: given a target average bit rate, distortion may either be set constant with wavelength, or be allocated proportionally to the noise level of each band, according to the virtually lossless protocol. Comparisons with the uncompressed originals show that the average SAM of radiance spectra is minimized by constant distortion allocation to radiance data. However, variable distortion allocation according to the virtually lossless protocol yields significantly lower SAM for reflectance spectra obtained from compressed radiance data, compared with constant distortion allocation at the same compression ratio.
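    The SAM metric used here is just the angle between each original and decompressed spectrum; a minimal implementation (Python/numpy, our own naming):

      import numpy as np

      def mean_sam(cube_a, cube_b):
          # cubes: (rows, cols, bands); returns mean spectral angle in radians.
          a = cube_a.reshape(-1, cube_a.shape[-1]).astype(float)
          b = cube_b.reshape(-1, cube_b.shape[-1]).astype(float)
          cos = (np.sum(a * b, axis=1)
                 / (np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1)))
          return float(np.mean(np.arccos(np.clip(cos, -1.0, 1.0))))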

  13. Metal Hydride Compression

    Johnson, Terry A. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Bowman, Robert [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Smith, Barton [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Anovitz, Lawrence [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Jensen, Craig [Hawaii Hydrogen Carriers LLC, Honolulu, HI (United States)


    Conventional hydrogen compressors often contribute over half of the cost of hydrogen stations, have poor reliability, and have insufficient flow rates for a mature FCEV market. Fatigue associated with their moving parts, including cracking of diaphragms and failure of seals, leads to failure in conventional compressors, which is exacerbated by the repeated starts and stops expected at fueling stations. Furthermore, the conventional lubrication of these compressors with oil is generally unacceptable at fueling stations due to potential fuel contamination. Metal hydride (MH) technology offers a very good alternative to both conventional (mechanical) and newly developed (electrochemical, ionic liquid piston) methods of hydrogen compression. Advantages of MH compression include simplicity in design and operation, absence of moving parts, compactness, safety and reliability, and the possibility to utilize waste industrial heat to power the compressor. Beyond conventional H2 supplies of pipelines or tanker trucks, another attractive scenario is the on-site generation, pressurization, and delivery of pure H2 at pressure (≥875 bar) for refueling vehicles at electrolysis, wind, or solar production facilities in distributed locations that are too remote or widely distributed for cost-effective bulk transport. MH hydrogen compression utilizes a reversible heat-driven interaction of a hydride-forming metal alloy with hydrogen gas to form the MH phase and is a promising process for hydrogen energy applications [1,2]. To deliver hydrogen continuously, each stage of the compressor must consist of multiple MH beds with synchronized hydrogenation and dehydrogenation cycles. Multistage pressurization allows achievement of greater compression ratios using reduced temperature swings compared to single stage compressors. The objectives of this project are to investigate and demonstrate on a laboratory scale a two-stage MH hydrogen (H2) gas compressor with a...
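    The single-stage compression ratio of such a compressor follows from the van 't Hoff relation between plateau pressure and bed temperature. A worked estimate in Python, with purely hypothetical alloy constants (a real design would use measured ΔH and ΔS for the chosen hydride):

      import math

      R = 8.314  # J/(mol K)

      def plateau_pressure(dH_des, dS_des, T):
          # van 't Hoff: ln p = -dH/(R*T) + dS/R (desorption values, per mol H2).
          return math.exp(-dH_des / (R * T) + dS_des / R)

      dH, dS = 30e3, 110.0          # hypothetical alloy: 30 kJ/mol, 110 J/(mol K)
      p_abs = plateau_pressure(dH, dS, 290)   # absorb at ~17 C
      p_des = plateau_pressure(dH, dS, 360)   # desorb with ~87 C waste heat
      print(f"stage compression ratio ~ {p_des / p_abs:.1f}")   # ~11x

    Chaining two such stages, as proposed here, multiplies the per-stage ratios, which is why modest temperature swings can still reach high delivery pressures.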

  14. Comparing biological networks via graph compression

    Hayashida Morihiro


    Background: Comparison of various kinds of biological data is one of the main problems in bioinformatics and systems biology. Data compression methods have been applied to comparison of large sequence data and protein structure data. Since it is still difficult to compare global structures of large biological networks, it is reasonable to try to apply data compression methods to comparison of biological networks. In existing compression methods, the uniqueness of compression results is not guaranteed because there is some ambiguity in the selection of overlapping edges. Results: This paper proposes novel efficient methods, CompressEdge and CompressVertices, for comparing large biological networks. In the proposed methods, an original network structure is compressed by iteratively contracting identical edges and sets of connected edges. Then, the similarity of two networks is measured by the compression ratio of the concatenated networks. The proposed methods are applied to comparison of metabolic networks of several organisms, H. sapiens, M. musculus, A. thaliana, D. melanogaster, C. elegans, E. coli, S. cerevisiae, and B. subtilis, and are compared with an existing method. The results suggest that our methods can efficiently measure the similarities between metabolic networks. Conclusions: Our proposed algorithms, which compress node-labeled networks, are useful for measuring the similarity of large biological networks.
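    The underlying idea, that two similar networks compress better together than apart, can be demonstrated with any off-the-shelf compressor standing in for CompressEdge/CompressVertices. A normalized-compression-distance style sketch using zlib (our substitution, not the paper's method):

      import zlib

      def edge_bytes(edges):
          # Canonical (sorted) serialization of a labeled edge list.
          return "\n".join(sorted(f"{u} {v}" for u, v in edges)).encode()

      def compression_distance(edges_a, edges_b):
          a, b = edge_bytes(edges_a), edge_bytes(edges_b)
          ca, cb = len(zlib.compress(a)), len(zlib.compress(b))
          cab = len(zlib.compress(a + b"\n" + b))
          return (cab - min(ca, cb)) / max(ca, cb)   # smaller = more similar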

  12. Variation of $k_{Q_{\mathrm{clin}},Q_{\mathrm{msr}}}^{f_{\mathrm{clin}},f_{\mathrm{msr}}}$ for the small-field dosimetric parameters percentage depth dose, tissue-maximum ratio, and off-axis ratio

    Francescon, Paolo; Satariano, Ninfa [Department of Radiation Oncology, Ospedale Di Vicenza, Viale Rodolfi, Vicenza 36100 (Italy); Beddar, Sam [Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas 77005 (United States); Das, Indra J. [Department of Radiation Oncology, Indiana University School of Medicine, Indianapolis, Indiana 46202 (United States)


    Purpose: Evaluate the ability of different dosimeters to correctly measure the dosimetric parameters percentage depth dose (PDD), tissue-maximum ratio (TMR), and off-axis ratio (OAR) in water for small fields. Methods: Monte Carlo (MC) simulations were used to estimate the variation of $k_{Q_{\mathrm{clin}},Q_{\mathrm{msr}}}^{f_{\mathrm{clin}},f_{\mathrm{msr}}}$ for several types of microdetectors as a function of depth and distance from the central axis for PDD, TMR, and OAR measurements. The variation of $k_{Q_{\mathrm{clin}},Q_{\mathrm{msr}}}^{f_{\mathrm{clin}},f_{\mathrm{msr}}}$ enables one to evaluate the ability of a detector to reproduce the PDD, TMR, and OAR in water and consequently determine whether it is necessary to apply correction factors. The correctness of the simulations was verified by assessing the ratios between the PDDs and OARs of 5- and 25-mm circular collimators used with a linear accelerator measured with two different types of dosimeters (the PTW 60012 diode and PTW PinPoint 31014 microchamber) and the PDDs and the OARs measured with the Exradin W1 plastic scintillator detector (PSD) and comparing those ratios with the corresponding ratios predicted by the MC simulations. Results: MC simulations reproduced results with acceptable accuracy compared to the experimental results; therefore, MC simulations can be used to successfully predict the behavior of different dosimeters in small fields. The Exradin W1 PSD was the only dosimeter that reproduced the PDDs, TMRs, and OARs in water with high accuracy. With the exception of the EDGE diode, the stereotactic diodes reproduced the PDDs and the TMRs in water with a systematic error of less than 2% at depths of up to 25 cm; however, they produced OAR values that were significantly different from those in water, especially in the tail region (lower than 20% in some cases). The microchambers could be used for PDD...

  16. Study of Water Cement Ratios on Compressive Strength of Recycled Concrete



    Recycled concrete is made from waste concrete by cleaning, crushing, and grading, with recycled aggregate combined in certain proportions to replace part or all of the natural aggregate. The water-cement ratio is the main factor influencing the compressive strength of concrete, and different water-cement ratios also affect other properties of the concrete to some extent. Whether ordinary concrete and recycled concrete perform differently at the same water-cement ratio is the question investigated in this test. Using different water-cement ratios, the strength of recycled concrete was compared with that of ordinary concrete, along with their durability and carbonation resistance, and the influence of different water-cement ratios on the properties of recycled concrete was analyzed.

  17. Fabrication and compressive performance of plain carbon steel honeycomb sandwich panels

    Yu'an Jing; Shiju Guo; Jingtao Han; Yufei Zhang; Weijuan Li


    Plain carbon steel Q215 honeycomb sandwich panels were manufactured by brazing in a vacuum furnace. Their characteristic parameters, including equivalent density, equivalent elastic modulus, and equivalent compressive strength along the out-of-plane (z) direction and the in-plane (x and y) directions, were derived theoretically and then determined experimentally on an 810 material test system. On the basis of the experimental data, compressive stress-strain curves were obtained. The results indicate that the measured equivalent Young's modulus and initial compressive strength are in good agreement with the calculations, and that the maximum compressive strain near densification can reach 0.5-0.6 out-of-plane and 0.6-0.7 in-plane. The strength-to-density ratio of the plain carbon steel honeycomb panels is close to those of Al alloy hexagonal-honeycomb and 304L stainless steel square-honeycomb, while the compressive peak strength is greater than that of the Al alloy hexagonal-honeycomb.

  18. The behavior of compression and degradation for municipal solid waste and combined settlement calculation method.

    Shi, Jianyong; Qian, Xuede; Liu, Xiaodong; Sun, Long; Liao, Zhiqiang


    The total compression of municipal solid waste (MSW) consists of primary, secondary, and decomposition compressions, and it is usually difficult to distinguish between the three. In this study, the oedometer test was used to separate the primary and secondary compressions and to determine the primary and secondary compression coefficients. In addition, the ending time of the primary compression was proposed based on MSW compression tests under a degradation-inhibited condition achieved by adding vinegar. The secondary compression occurring during the primary compression stage accounts for a relatively high percentage of both the total compression and the total secondary compression. The relationship between the degradation ratio and time was obtained from the tests independently. Furthermore, a combined compression calculation method for all three parts of MSW compression, including organics degradation, is proposed based on a one-dimensional compression method. The relationship between the methane generation potential L0 of the LandGEM model and the degradation compression index is also discussed. A special column compression apparatus, which can be used to simulate the whole compression process of municipal solid waste in China, was designed, and the new combined calculation method was analyzed against the results of a 197-day column compression test. Degradation compression is the main component of MSW compression over the medium test period.

  19. Lossless wavelet compression on medical image

    Zhao, Xiuying; Wei, Jingyuan; Zhai, Linpei; Liu, Hong


    An increasing amount of medical imagery is created directly in digital form. Systems such as clinical Picture Archiving and Communication Systems (PACS) and telemedicine networks require the storage and transmission of this huge amount of medical image data. Efficient compression of these data is crucial. Several lossless and lossy techniques for the compression of the data have been proposed. Lossless techniques allow exact reconstruction of the original imagery, while lossy techniques aim to achieve high compression ratios by allowing some acceptable degradation in the image. Lossless compression does not degrade the image, thus facilitating accurate diagnosis, of course at the expense of higher bit rates, i.e. lower compression ratios. Various methods for both lossy (irreversible) and lossless (reversible) image compression have been proposed in the literature. Recent advances in lossy compression include methods such as vector quantization, wavelet coding, neural networks, and fractal coding. Although these methods can achieve high compression ratios (of the order of 50:1 or even more), they do not allow the original version of the input data to be reconstructed exactly. Lossless compression techniques permit perfect reconstruction of the original image, but the achievable compression ratios are only of the order of 2:1 up to 4:1. In our paper, we use a lifting scheme to generate truly lossless, non-linear, integer-to-integer wavelet transforms. At the same time, we exploit a coding algorithm that produces an embedded code, which has the property that the bits in the bit stream are generated in order of importance, so that all the low-rate codes are included at the beginning of the bit stream. Typically, the encoding process stops when the target bit rate is met; similarly, the decoder can interrupt decoding at any point in the bit stream and still reconstruct the image. Therefore, a compression scheme generating an embedded code can...
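    The lossless property comes from the lifting steps being exactly invertible in integer arithmetic. A one-level integer Haar lifting pair illustrates this (a minimal sketch in Python; the paper's actual filters may differ):

      def haar_forward(x):
          # x: list of ints, even length -> (averages, differences).
          s = [(a + b) >> 1 for a, b in zip(x[0::2], x[1::2])]
          d = [a - b for a, b in zip(x[0::2], x[1::2])]
          return s, d

      def haar_inverse(s, d):
          x = []
          for si, di in zip(s, d):
              b = si - (di >> 1)   # floor shifts mirror the forward transform
              a = b + di
              x.extend([a, b])
          return x                 # haar_inverse(*haar_forward(x)) == x exactly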

  20. Maximum Likelihood Associative Memories

    Gripon, Vincent; Rabbat, Michael


    Associative memories are structures that store data in such a way that it can later be retrieved given only a part of its content -- a sort-of error/erasure-resilience property. They are used in applications ranging from caches and memory management in CPUs to database engines. In this work we study associative memories built on the maximum likelihood principle. We derive minimum residual error rates when the data stored comes from a uniform binary source. Second, we determine the minimum amo...

  1. Maximum likely scale estimation

    Loog, Marco; Pedersen, Kim Steenstrup; Markussen, Bo


    A maximum likelihood local scale estimation principle is presented. An actual implementation of the estimation principle uses second order moments of multiple measurements at a fixed location in the image. These measurements consist of Gaussian derivatives possibly taken at several scales and/or having different derivative orders. Although the principle is applicable to a wide variety of image models, the main focus here is on the Brownian model and its use for scale selection in natural images. Furthermore, in the examples provided, the simplifying assumption is made that the behavior of the measurements is completely characterized by all moments up to second order.

  2. A numerical analysis of the effects of a stratified pre-mixture on homogeneous charge compression ignition combustion

    Jamsran, Narankhuu; Lim, Ock Taeck [University of Ulsan, Ulsan (Korea, Republic of)


    We investigated the efficacy of fuel stratification in a pre-mixture of dimethyl ether (DME) and n-butane, which have different autoignition characteristics, for reducing the pressure rise rate (PRR) of homogeneous charge compression ignition engines. A new chemical reaction model was created by mixing DME and n-butane and compared with existing chemical reaction models to verify the effects observed. The maximum PRR depended on the mixture ratio. When DME was charged with stratification and n-butane was charged with homogeneity, the maximum PRR was the lowest among all the mixtures studied. Calculations were performed using CHEMKIN and modified using SENKIN software.

  3. Compressed Sensing with Rank Deficient Dictionaries

    Hansen, Thomas Lundgaard; Johansen, Daniel Højrup; Jørgensen, Peter Bjørn


    In compressed sensing it is generally assumed that the dictionary matrix constitutes a (possibly overcomplete) basis of the signal space. In this paper we consider dictionaries that do not span the signal space, i.e. rank deficient dictionaries. We show that in this case the signal-to-noise ratio (SNR) in the compressed samples can be increased by selecting the rows of the measurement matrix from the column space of the dictionary. As an example application of compressed sensing with a rank deficient dictionary, we present a case study of compressed sensing applied to the Coarse Acquisition (C/A) step in a GPS receiver. Simulations show that for this application the proposed choice of measurement matrix yields an increase in SNR performance of up to 5-10 dB, compared to the conventional choice of a fully random measurement matrix. Furthermore, the compressed sensing based C/A step is compared...
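    The row-selection idea can be checked numerically in a few lines: measurement rows drawn from the column space of the dictionary reject the noise component orthogonal to the signal subspace. A self-contained numpy sketch with our own toy dimensions (not the paper's GPS setup):

      import numpy as np

      rng = np.random.default_rng(0)
      n, k, m = 128, 32, 24                      # ambient dim, rank, measurements
      D = rng.standard_normal((n, k))            # rank-deficient dictionary
      x = D @ (rng.standard_normal(k) * (rng.random(k) < 0.2))  # sparse synthesis
      noise = 0.1 * rng.standard_normal(n)

      Q, _ = np.linalg.qr(D)                     # orthonormal basis of col(D)
      candidates = {
          "random rows":   rng.standard_normal((m, n)),
          "subspace rows": rng.standard_normal((m, k)) @ Q.T,
      }
      for name, Phi in candidates.items():
          s, e = Phi @ x, Phi @ noise
          print(name, "SNR =", round(10 * np.log10(s @ s / (e @ e)), 1), "dB")

    With these dimensions the expected gain is about 10*log10(n/k) ≈ 6 dB, in line with the 5-10 dB reported above.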

  4. A novel ROM compression architecture for DDFS utilizing the parabolic approximation of equi-section division.

    Jeng, Shiann-Shiun; Lin, Hsing-Chen; Lin, Chi-Huei


    In this paper, we propose the parabolic approximation of equi-section division (PAESD) utilizing the symmetry property and amplitude approximation of a sinusoidal waveform to design a direct digital frequency synthesizer (DDFS). The sinusoidal phase of a one-quarter period is divided into equi-sections. The proposed method utilizes the curvature equivalence to derive each parabolic curve function, and then the value of the error function between each parabolic curve function and the sinusoidal function is stored in an error-compensation ROM to reconstruct the real sinusoidal waveform. The upper/lower bound of the maximum error value stored in the error-compensation ROM is derived to determine the minimum required memory word length relative to the number of bits of the equi-sections. Thus, the minimum size of the total ROMs of the DDFS using the PAESD without error-compensation ROM is compressed to 544 bits; the total compression ratio, compared with the minimum size of the total ROMs of the DDFS using the basic look-up table (LUT), is approximately 843:1, achieved by consuming additional circuits [71 adaptive look-up tables (ALUTs), 3 digital signal processor (DSP) block 9-bit elements]. Consequently, the results show that the proposed ROM compression method can effectively achieve a better compression ratio than the state-of-the-art solutions without affecting the spectrum performance of an average spurious-free dynamic range (SFDR) of -85 dBc.

  5. A Novel Video Compression Approach Based on Underdetermined Blind Source Separation

    Liu, Jing; Wei, Qi; Yang, Huazhong


    This paper develops a new video compression approach based on underdetermined blind source separation. Underdetermined blind source separation, which can be used to efficiently enhance the video compression ratio, is combined with various off-the-shelf codecs in this paper. Combined with MPEG-2, the video compression ratio can be improved by slightly more than 33%; combined with H.264, a 4X~12X higher compression ratio can be achieved with acceptable PSNR, depending on the kind of video sequence.

  6. Focus on Compression Stockings

    ... the stocking every other day with a mild soap. Do not use Woolite™ detergent. Use warm water ... compression clothing will lose its elasticity and its effectiveness. Compression stockings last for about 4-6 months ...

  7. A Compressive Superresolution Display

    Heide, Felix


    In this paper, we introduce a new compressive display architecture for superresolution image presentation that exploits co-design of the optical device configuration and compressive computation. Our display allows for superresolution, HDR, or glasses-free 3D presentation.

  8. Microbunching and RF Compression

    Venturini, M.; Migliorati, M.; Ronsivalle, C.; Ferrario, M.; Vaccarezza, C.


    Velocity bunching (or RF compression) represents a promising technique complementary to magnetic compression to achieve the high peak current required in the linac drivers for FELs. Here we report on recent progress aimed at characterizing the RF compression from the point of view of the microbunching instability. We emphasize the development of a linear theory for the gain function of the instability and its validation against macroparticle simulations that represents a useful tool in the evaluation of the compression schemes for FEL sources.

  9. The study of lossy compressive method with different interpolation for holographic reconstruction in optical scanning holography

    HU Zhijuan


    The lossy hologram compression method with three different interpolations is investigated to compress images holographically recorded with optical scanning holography. Without loss of major reconstruction detail, results show that the lossy compression method is able to achieve compression ratios of up to 100.

  10. Maximum Entropy Fundamentals

    F. Topsøe


    In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions, and it appears advantageous to bring information theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow up on here. It leads to the consideration of a certain game, the Code Length Game, and, via standard game theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of the fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model, which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature, with hundreds of applications pertaining to several different fields, and will also here serve as an important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over...

  11. Hyperspectral data compression

    Motta, Giovanni; Storer, James A


    Provides a survey of results in the field of compression of remote sensed 3D data, with a particular interest in hyperspectral imagery. This work covers topics such as compression architecture, lossless compression, lossy techniques, and more. It also describes a lossless algorithm based on vector quantization.

  12. Compressed gas manifold

    Hildebrand, Richard J.; Wozniak, John J.


    A compressed gas storage cell interconnecting manifold including a thermally activated pressure relief device, a manual safety shut-off valve, and a port for connecting the compressed gas storage cells to a motor vehicle power source and to a refueling adapter. The manifold is mechanically and pneumatically connected to a compressed gas storage cell by a bolt including a gas passage therein.

  13. Compressing Binary Decision Diagrams

    Hansen, Esben Rune; Satti, Srinivasa Rao; Tiedemann, Peter


    The paper introduces a new technique for compressing Binary Decision Diagrams in those cases where random access is not required. Using this technique, compression and decompression can be done in linear time in the size of the BDD and compression will in many cases reduce the size of the BDD to 1...

  16. MBRH: A Fast Dictionary Based Text Compression Technique

    Y. Venkataramani


    The amount of digital content grows at an ever-faster rate, and with it the demand to communicate it; the amount of storage and bandwidth, on the other hand, increases at a slower rate. Thus powerful and efficient compression methods are required. The repetition of words and phrases makes reordered text much more compressible than the original text. Overall, the system is fast and achieves close to the best results on the test files. In this study a novel fast dictionary-based text compression technique, MBRH (multidictionary with Burrows-Wheeler transform, run-length coding and Huffman coding), is proposed for the purpose of obtaining improved performance on various document sizes. The MBRH algorithm comprises two stages: the first stage converts the input text into a dictionary-based compressed form, and the second stage reduces the redundancy in the multidictionary-based compression by using BWT, RLE and Huffman coding. On the bib test file (input size 111,261 bytes) MBRH achieves a compression ratio of 0.192 and a bit rate of 1.538 at high speed. The algorithm attains a good compression ratio, a reduced bit rate and increased execution speed.
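    The BWT/RLE half of the pipeline is easy to illustrate: the transform gathers equal symbols into runs, which run-length coding then collapses (Huffman coding of the run symbols would follow). A naive Python sketch; the sorted-rotations BWT is fine for small inputs, though real coders use suffix arrays:

      def bwt(s, end="\0"):
          s += end
          rots = sorted(s[i:] + s[:i] for i in range(len(s)))
          return "".join(r[-1] for r in rots)

      def rle(s):
          out, i = [], 0
          while i < len(s):
              j = i
              while j < len(s) and s[j] == s[i]:
                  j += 1
              out.append((s[i], j - i))   # (symbol, run length)
              i = j
          return out

      # rle(bwt("banana")) -> [('a',1), ('n',2), ('b',1), ('\x00',1), ('a',2)]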

  17. [Lossless compression of hyperspectral image for space-borne application].

    Li, Jin; Jin, Long-xu; Li, Guo-ning


    In order to address the difficulty of hardware implementation, the low compression ratio, and the time consumption of whole-image hyperspectral lossless compression algorithms based on prediction, transform, vector quantization and their combinations, a hyperspectral image lossless compression algorithm for space-borne application is proposed in the present paper. Intra-band prediction is used only for the first image along the spectral line, using a median predictor; inter-band prediction is applied to the other band images. A two-step, bidirectional prediction algorithm is proposed for the inter-band prediction. In the first step, a bidirectional, second-order predictor is used to obtain a prediction reference value, and an improved LUT prediction algorithm is used to obtain four LUT prediction values; the final prediction is then obtained through comparison between them and the prediction reference. Finally, verification experiments for the proposed compression algorithm were carried out using the compression system test equipment of the XX-X space hyperspectral camera. The experimental results showed that the compression system works fast and stably. The average compression ratio reached 3.05 bpp; compared with traditional approaches, the proposed method improves the average compression ratio by 0.14-2.94 bpp. It effectively improves the lossless compression ratio and overcomes the difficulty of hardware implementation of whole-image wavelet-based compression schemes.
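    For the intra-band stage, a median predictor of the kind used in JPEG-LS (the MED/LOCO-I predictor; our assumption for what the paper's "median predictor" looks like, with zero-padded borders as a simplification) can be sketched as:

      import numpy as np

      def med_residuals(band):
          # Median edge detector: predict from left (a), above (b), above-left (d).
          x = band.astype(np.int32)
          h, w = x.shape
          pred = np.zeros_like(x)
          for r in range(h):
              for c in range(w):
                  a = x[r, c - 1] if c else 0
                  b = x[r - 1, c] if r else 0
                  d = x[r - 1, c - 1] if r and c else 0
                  if d >= max(a, b):
                      pred[r, c] = min(a, b)
                  elif d <= min(a, b):
                      pred[r, c] = max(a, b)
                  else:
                      pred[r, c] = a + b - d
          return x - pred   # residuals go to the entropy coder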

  18. Comparison of different Fingerprint Compression Techniques

    Ms.Mansi Kambli


    The important features of the wavelet transform and different methods for compression of fingerprint images have been implemented. Image quality is measured objectively using peak signal-to-noise ratio (PSNR) and mean square error (MSE). A comparative study using the discrete cosine transform based Joint Photographic Experts Group (JPEG) standard, wavelet-based basic Set Partitioning in Hierarchical Trees (SPIHT), and Modified SPIHT is done. The comparison shows that Modified SPIHT offers better compression than basic SPIHT and JPEG. The results will help application developers to choose a good wavelet compression system for their applications.
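    For reference, the two objective metrics used here are one-liners (Python/numpy, 8-bit images assumed):

      import numpy as np

      def mse(orig, recon):
          return float(np.mean((orig.astype(float) - recon.astype(float)) ** 2))

      def psnr(orig, recon, peak=255.0):
          # Peak signal-to-noise ratio in dB; infinite for perfect reconstruction.
          m = mse(orig, recon)
          return float("inf") if m == 0 else 10.0 * np.log10(peak * peak / m)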

  19. Regularized maximum correntropy machine

    Wang, Jim Jing-Yan


    In this paper we investigate the use of the regularized correntropy framework for learning classifiers from noisy labels. The class label predictors learned by minimizing traditional loss functions are sensitive to noisy and outlying labels of the training samples, because traditional loss functions are applied equally to all samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criterion (MCC) framework. Moreover, we regularize the predictor parameters to control the complexity of the predictor. The learning problem is formulated as an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. Experiments on two challenging pattern classification tasks show that it significantly outperforms machines with traditional loss functions.

  20. Compressibility and Density Fluctuations in Molecular-Cloud Turbulence

    Pan, Liubin; Haugbolle, Troels; Nordlund, Aake


    The compressibility of molecular cloud (MC) turbulence plays a crucial role in star formation models, because it controls the amplitude and distribution of density fluctuations. The relation between the compressive ratio (the ratio of powers in compressive and solenoidal motions) and the statistics of turbulence has been studied systematically only in idealized simulations with random external forces. In this work, we analyze a simulation of large-scale turbulence (250 pc) driven by supernova (SN) explosions that has been shown to yield realistic MC properties. We demonstrate that SN driving results in MC turbulence that is only mildly compressive, with the turbulent ratio of compressive to solenoidal modes ~0.3 on average, lower than the equilibrium value of 0.5 found in the inertial range of isothermal simulations with random solenoidal driving. We also find that the compressibility of the turbulence is not noticeably affected by gravity, nor is the mean cloud expansion or contraction velocity (MCs do not co...

  1. Experimental Study of Fractal Image Compression Algorithm

    Chetan R. Dudhagara


    Image compression applications have been increasing in recent years. Fractal compression is a lossy compression method for digital images, based on fractals. The method is best suited for textures and natural images, relying on the fact that parts of an image often resemble other parts of the same image. In this paper, a study of fractal-based image compression with fixed-size partitioning is made, analyzed for performance, and compared with a standard frequency-domain image compression method, JPEG. Sample images are used to perform compression and decompression. Performance metrics such as compression ratio, compression time and decompression time are measured, including for the JPEG cases. The phenomenon of resolution/scale independence is also studied and described with examples. Fractal algorithms convert image parts into mathematical data called "fractal codes", which are used to recreate the encoded image. Fractal encoding is a mathematical process used to encode bitmaps containing a real-world image as a set of mathematical data that describes the fractal properties of the image. Fractal encoding relies on the fact that all natural, and most artificial, objects contain redundant information in the form of similar, repeating patterns called fractals.

  2. Lossless Compression on MRI Images Using SWT.

    Anusuya, V; Raghavan, V Srinivasa; Kavitha, G


    Medical image compression is one of the growing research fields in biomedical applications. Most medical images need to be compressed using lossless compression, as the information in each pixel is valuable. With the wide pervasiveness of medical imaging applications in health-care settings and the increased interest in telemedicine technologies, it has become essential to reduce both the storage and transmission bandwidth requirements needed for archival and communication of related data, preferably by employing lossless compression methods. Furthermore, providing random access as well as resolution and quality scalability to the compressed data has become of great utility. Random access refers to the ability to decode any section of the compressed image without having to decode the entire data set. The proposed system implements a lossless codec using an entropy coder. 3D medical images are decomposed into 2D slices and subjected to the 2D stationary wavelet transform (SWT). The decimated coefficients are compressed in parallel using embedded block coding with optimized truncation of the embedded bit stream. These bit streams are decoded and reconstructed using the inverse SWT. Finally, the compression ratio (CR) is evaluated to prove the efficiency of the proposal. As an enhancement, the proposed system concentrates on minimizing the computation time by introducing parallel computing at the arithmetic coding stage, as it deals with multiple subslices.

  3. Minimum Length - Maximum Velocity

    Panes, Boris


    We study a framework where the hypothesis of a minimum length in space-time is complemented with the notion of reference frame invariance. It turns out natural to interpret the action of the obtained reference frame transformations in the context of doubly special relativity. As a consequence of this formalism we find interesting connections between the minimum length properties and the modified velocity-energy relation for ultra-relativistic particles. For example we can predict the ratio between the minimum lengths in space and time using the results from OPERA about superluminal neutrinos.

  4. Compressive and drying shrinkage tests on recycled aggregate concrete with different fine recycled aggregate replacement ratios

    郝彤; 赵文兰


    The compressive behavior and drying shrinkage of concrete with different fine recycled aggregate replacement ratios were studied. The results show that as the replacement ratio increases, recycled concrete becomes increasingly brittle; however, with a mix proportion design based on the free water-cement ratio and a two-stage mixing process, the compressive strength of fine recycled aggregate concrete is nearly comparable to that of ordinary concrete of similar composition. The shrinkage mechanism of recycled concrete is essentially the same as that of natural aggregate concrete: the shrinkage of recycled concrete increases gradually with curing age; the drying shrinkage of recycled concrete is higher than that of natural aggregate concrete; and the shrinkage of recycled concrete increases with the replacement ratio of fine recycled aggregate.

  5. The differentiation of the character of solid lesions in the breast in the compression sonoelastography. Part II: Diagnostic value of BIRADS-US classification, Tsukuba score and FLR ratio

    Katarzyna Dobruch-Sobczak


    Sonoelastography is a dynamically developing method of ultrasound examination used to differentiate the character of focal lesions in the breasts. The aim of Part II of the study is to determine the usefulness of sonoelastography in the differential diagnosis of focal breast lesions, including the evaluation of the diagnostic value of the Tsukuba score and FLR ratio in characterizing solid lesions in the breasts. Furthermore, the paper provides a comparison of classic B-mode imaging and sonoelastography. Material and methods: From January to July 2010, 375 breast ultrasound examinations were conducted in the Ultrasound Department of the Cancer Centre, The Institute of Maria Skłodowska-Curie. The examined group included patients who in B-mode examinations presented indications for pathological verification: 80 women aged between 17 and 83 (mean age 50) with 99 solid focal lesions in the breasts. All patients underwent the interview, physical examination, B-mode ultrasound examination and elastography of the mammary glands and axillary fossae. The visualized lesions were evaluated according to the BIRADS-US classification and Tsukuba score, and the FLR ratio was calculated. In all cases, histopathological and/or cytological verification of the tested lesions was obtained. Results: In the group of 80 patients, the examination revealed 39 malignant neoplastic lesions and 60 benign ones. The mean age of women with malignant neoplasms was 55.07 (SD=10.54), and with benign lesions 46.9 (SD=15.47). In order to identify threshold values that distinguish benign lesions from malignant ones, a comparative analysis of statistical models based on the BIRADS-US classification and Tsukuba score was conducted and the cut-off value for FLR was assumed. The sensitivity and specificity values for BIRADS-US 4/5 were 76.92% and 96.67%, and for Tsukuba 3/4 64.1% and 98.33%, respectively. The assumed FLR threshold value to differentiate between...

  6. Theoretical Evaluation of the Maximum Work of Free-Piston Engine Generators

    Kojima, Shinji


    Utilizing the adjoint equations that originate from the calculus of variations, we have calculated the maximum thermal efficiency that is theoretically attainable by free-piston engine generators considering the work loss due to friction and Joule heat. Based on the adjoint equations with seven dimensionless parameters, the trajectory of the piston, the histories of the electric current, the work done, and the two kinds of losses have been derived in analytic forms. Using these we have conducted parametric studies for the optimized Otto and Brayton cycles. The smallness of the pressure ratio of the Brayton cycle makes the net work done negative even when the duration of heat addition is optimized to give the maximum amount of heat addition. For the Otto cycle, the net work done is positive, and both types of losses relative to the gross work done become smaller with the larger compression ratio. Another remarkable feature of the optimized Brayton cycle is that the piston trajectory of the heat addition/disposal process is expressed by the same equation as that of an adiabatic process. The maximum thermal efficiency of any combination of isochoric and isobaric heat addition/disposal processes, such as the Sabathe cycle, may be deduced by applying the methods described here.
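
    The role of compression ratio in these results can be anchored to the textbook air-standard relation η = 1 − r^(1−γ). A minimal sketch of the ideal relation only; the paper's adjoint-equation optimization additionally accounts for friction and Joule-heat losses, which this ignores:

```python
# Air-standard Otto-cycle thermal efficiency as a function of compression
# ratio r: eta = 1 - r**(1 - gamma). Illustrative baseline only; it omits
# the friction and Joule-heat losses treated in the paper.
def otto_efficiency(r, gamma=1.4):
    return 1.0 - r ** (1.0 - gamma)

for r in (6.9, 8.0, 10.0, 18.0):
    print(f"r = {r:4.1f}  ->  eta = {otto_efficiency(r):.3f}")
```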

  7. On-board image compression for the RAE lunar mission

    Miller, W. H.; Lynch, T. J.


    The requirements, design, implementation, and flight performance of an on-board image compression system for the lunar orbiting Radio Astronomy Explorer-2 (RAE-2) spacecraft are described. The image to be compressed is a panoramic camera view of the long radio astronomy antenna booms used for gravity-gradient stabilization of the spacecraft. A compression ratio of 32 to 1 is obtained by a combination of scan line skipping and adaptive run-length coding. The compressed imagery data are convolutionally encoded for error protection. This image compression system occupies about 1000 cu cm and consumes 0.4 W.
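
    The run-length idea at the core of this system is easy to state: long runs of identical pixels collapse to (value, count) pairs. A minimal non-adaptive sketch; the flight system used an adaptive variant, and the pairing format here is an assumption for illustration:

```python
# Minimal run-length encoder/decoder for a 1-D pixel stream. Long runs of
# identical values collapse to (value, count) pairs, which is why sparse
# imagery such as antenna booms against dark sky compresses so well.
def rle_encode(pixels):
    runs, prev, count = [], pixels[0], 1
    for p in pixels[1:]:
        if p == prev:
            count += 1
        else:
            runs.append((prev, count))
            prev, count = p, 1
    runs.append((prev, count))
    return runs

def rle_decode(runs):
    return [v for v, n in runs for _ in range(n)]

data = [0] * 60 + [255] * 3 + [0] * 37
encoded = rle_encode(data)
assert rle_decode(encoded) == data
print(len(data), "pixels ->", len(encoded), "runs")
```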

  8. An Optimal Seed Based Compression Algorithm for DNA Sequences

    Pamela Vinitha Eric


    Full Text Available This paper proposes a seed-based lossless compression algorithm to compress a DNA sequence which uses a substitution method similar to the Lempel-Ziv compression scheme. The proposed method exploits the repetition structures that are inherent in DNA sequences by creating an offline dictionary which contains all such repeats along with the details of mismatches. By ensuring that only promising mismatches are allowed, the method achieves a compression ratio that is on par with or better than existing lossless DNA sequence compression algorithms.

  9. Chest compressions for bradycardia or asystole in neonates.

    Kapadia, Vishal; Wyckoff, Myra H


    When effective ventilation fails to establish a heart rate of greater than 60 bpm, cardiac compressions should be initiated to improve perfusion. The 2-thumb method is the most effective and least fatiguing technique. A ratio of 3 compressions to 1 breath is recommended to provide adequate ventilation, as inadequate ventilation is the most common cause of newborn cardiovascular collapse. Interruptions in compressions should be limited so as not to diminish the perfusion generated. Oxygen (100%) is recommended during compressions and can be reduced once adequate heart rate and oxygen saturation are achieved. Limited clinical data are available to inform newborn cardiac compression recommendations.

  10. Equalized near maximum likelihood detector


    This paper presents a new detector that is used to mitigate the intersymbol interference introduced by bandlimited channels. This detector, named the equalized near maximum likelihood detector, combines a nonlinear equalizer with a near maximum likelihood detector. Simulation results show that the performance of the equalized near maximum likelihood detector is better than that of the nonlinear equalizer but worse than that of the near maximum likelihood detector.

  11. Generalized Maximum Entropy

    Cheeseman, Peter; Stutz, John


    A long-standing mystery in using Maximum Entropy (MaxEnt) is how to deal with constraints whose values are uncertain. This situation arises when constraint values are estimated from data, because of finite sample sizes. One approach to this problem, advocated by E.T. Jaynes [1], is to ignore this uncertainty and treat the empirically observed values as exact. We refer to this as the classic MaxEnt approach. Classic MaxEnt gives point probabilities (subject to the given constraints), rather than probability densities. We develop an alternative approach that assumes that the uncertain constraint values are represented by a probability density (e.g. a Gaussian), and this uncertainty yields a MaxEnt posterior probability density. That is, the classic MaxEnt point probabilities are regarded as a multidimensional function of the given constraint values, and uncertainty on these values is transmitted through the MaxEnt function to give uncertainty over the MaxEnt probabilities. We illustrate this approach by explicitly calculating the generalized MaxEnt density for a simple but common case, then show how this can be extended numerically to the general case. This paper expands the generalized MaxEnt concept introduced in a previous paper [3].
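
    The classic point-probability case that this work generalizes reduces to solving for a Lagrange multiplier. A minimal sketch, assuming a six-sided die whose empirically observed mean (here 4.5) is treated as exact, in the spirit of the Jaynes approach described above:

```python
# Classic MaxEnt for a six-sided die constrained to a given mean:
# p_i is proportional to exp(lam * i); solve for the multiplier lam with
# a root finder. This is the classic baseline the paper generalizes; it
# treats the constraint value (the mean) as exact.
import numpy as np
from scipy.optimize import brentq

faces = np.arange(1, 7)

def mean_given_lam(lam):
    w = np.exp(lam * faces)
    return (faces * w).sum() / w.sum()

target = 4.5                      # empirically observed mean, taken as exact
lam = brentq(lambda l: mean_given_lam(l) - target, -5.0, 5.0)
p = np.exp(lam * faces)
p /= p.sum()
print("lambda =", round(lam, 4), " p =", p.round(4))
```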

  12. Compressive Strength of Concrete Containing Palm Kernel Shell Ash

    FADELE Oluwadamilola A


    Full Text Available This study examined the influence of varying palm kernel shell ash content, as a supplementary cementitious material (SCM), at specified water/cement ratios and curing ages on the compressive strength of concrete cube samples. Palm kernel shell ash was used as a partial replacement for ordinary Portland cement (OPC) up to 30% at 5% intervals using mix ratio 1:2:4. River sand with particles passing a 4.75 mm BS sieve and crushed aggregate of 20 mm maximum size were used, while the palm kernel shell ash used was of particles passing through a 212 μm BS sieve. The compressive strength of the test cubes (100 mm) was tested at 5 different curing ages of 3, 7, 14, 28 and 56 days. The results showed that test cubes containing palm kernel shell ash gained strength over a longer curing period compared with ordinary Portland cement concrete samples, and that the strength varies with the percentage PKSA content in the cube samples. At 28 days, test cubes containing 5%, 10%, 15%, 20%, 25% and 30% PKSA content achieved compressive strengths of 26.1 MPa, 22.53 MPa, 19.43 MPa, 20.43 MPa, 16.97 MPa and 16.5 MPa, compared to 29 MPa for ordinary Portland cement concrete cubes. It was concluded that for structural concrete works requiring a characteristic strength of 25 MPa, 5% palm kernel shell ash can effectively replace ordinary Portland cement, while up to 15% PKSA content can be used for concrete works requiring 20 MPa strength at 28 days.

  13. Homogeneous Charge Compression Ignition Combustion of Dimethyl Ether

    Pedersen, Troels Dyhr

    This thesis is based on experimental and numerical studies on the use of dimethyl ether (DME) in the homogeneous charge compression ignition (HCCI) combustion process. The first paper in this thesis was published in 2007 and describes HCCI combustion of pure DME in a small diesel engine. The tests...... a substantial combustion delay in HCCI operation with DME to achieve post TDC combustion. By adding methanol to the inlet port during HCCI combustion of DME, the engine reached 50 percent of its full DI CI load capability without engine knock at 1000 rpm and somewhat less at 1800 rpm. The engine also had EGR...... were designed to investigate the effect of engine speed, compression ratio and equivalence ratio on the combustion timing and the engine performance. It was found that the required compression ratio depended on the equivalence ratio used. A lower equivalence ratio requires a higher compression ratio...

  14. Huffman-based code compression techniques for embedded processors

    Bonny, Mohamed Talal


    The size of embedded software is increasing at a rapid pace. It is often challenging and time consuming to fit an amount of required software functionality within a given hardware resource budget. Code compression is a means to alleviate the problem by providing substantial savings in terms of code size. In this article we introduce a novel and efficient hardware-supported compression technique that is based on Huffman coding. Our technique reduces the size of the generated decoding table, which takes a large portion of the memory. It combines our previous techniques, the Instruction Splitting Technique and the Instruction Re-encoding Technique, into a new one called the Combined Compression Technique to improve the final compression ratio by taking advantage of both previous techniques. The Instruction Splitting Technique is instruction set architecture (ISA)-independent. It splits the instructions into portions of varying size (called patterns) before Huffman coding is applied. This technique improves the final compression ratio by more than 20% compared to other known schemes based on Huffman coding. The average compression ratios achieved using this technique are 48% and 50% for ARM and MIPS, respectively. The Instruction Re-encoding Technique is ISA-dependent. It investigates the benefits of re-encoding unused bits (we call them re-encodable bits) in the instruction format for a specific application to improve the compression ratio. Re-encoding those bits can reduce the size of decoding tables by up to 40%. Using this technique, we improve the final compression ratios in comparison to the first technique to 46% and 45% for ARM and MIPS, respectively (including all overhead incurred). The Combined Compression Technique improves the compression ratio to 45% and 42% for ARM and MIPS, respectively. In our compression technique, we have conducted evaluations using a representative set of applications and we have applied each technique to two major embedded processor architectures
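
    All three techniques build on the same coding core. A textbook Huffman construction, sketched below with Python's heapq; this is the baseline coder, not the authors' hardware decoder or table-reduction scheme:

```python
# Textbook Huffman code construction with heapq; the baseline on which the
# instruction-splitting and re-encoding techniques build. Heap entries are
# [frequency, tiebreak, {symbol: codeword}] so lists compare safely.
import heapq
from collections import Counter

def huffman_code(symbols):
    heap = [[freq, i, {sym: ""}]
            for i, (sym, freq) in enumerate(Counter(symbols).items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in lo[2].items()}
        merged.update({s: "1" + c for s, c in hi[2].items()})
        heapq.heappush(heap, [lo[0] + hi[0], tie, merged])
        tie += 1
    return heap[0][2]

text = "this is an example of a huffman tree"
code = huffman_code(text)
bits = sum(len(code[s]) for s in text)
print(bits, "bits vs", 8 * len(text), "bits uncompressed")
```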

  15. Performance Characterization and Auto-Ignition Performance of a Rapid Compression Machine

    Hao Liu; Hongguang Zhang; Zhicheng Shi; Haitao Lu; Guangyao Zhao; Baofeng Yao


      A rapid compression machine (RCM) test bench is developed in this study. The performance characterization and auto-ignition performance tests are conducted at an initial temperature of 293 K, a compression ratio of 9.5...

  16. Lossless compression of VLSI layout image data.

    Dai, Vito; Zakhor, Avideh


    We present a novel lossless compression algorithm called Context Copy Combinatorial Code (C4), which integrates the advantages of two very disparate compression techniques: context-based modeling and Lempel-Ziv (LZ) style copying. While the algorithm can be applied to many lossless compression applications, such as document image compression, our primary target application has been lossless compression of integrated circuit layout image data. These images contain a heterogeneous mix of data: dense repetitive data better suited to LZ-style coding, and less dense structured data, better suited to context-based encoding. As part of C4, we have developed a novel binary entropy coding technique called combinatorial coding which is simultaneously as efficient as arithmetic coding, and as fast as Huffman coding. Compression results show C4 outperforms JBIG, ZIP, BZIP2, and two-dimensional LZ, and achieves lossless compression ratios greater than 22 for binary layout image data, and greater than 14 for gray-pixel image data.

  17. Celiac Artery Compression Syndrome

    Mohammed Muqeetadnan


    Full Text Available Celiac artery compression syndrome is a rare disorder characterized by episodic abdominal pain and weight loss. It is the result of external compression of the celiac artery by the median arcuate ligament. We present a case of celiac artery compression syndrome in a 57-year-old male with severe postprandial abdominal pain and a 30-pound weight loss. The patient eventually responded well to laparoscopic surgical division of the median arcuate ligament.

  18. Compression evaluation of surgery video recordings retaining diagnostic credibility (compression evaluation of surgery video)

    Duplaga, M.; Leszczuk, M. I.; Papir, Z.; Przelaskowski, A.


    Wider dissemination of medical digital video libraries is affected by two correlated factors: resource-effective content compression and the diagnostic credibility it directly influences. It has been proved that it is possible to meet these contradictory requirements halfway for long-lasting and low-motion surgery recordings at compression ratios close to 100 (bronchoscopic procedures were the case study investigated). As the main supporting assumption, it has been accepted that the content can be compressed as far as clinicians are not able to sense a loss of video diagnostic fidelity (a visually lossless compression). Different market codecs were inspected by means of combined subjective and objective tests toward their usability in medical video libraries. Subjective tests involved a panel of clinicians who had to classify compressed bronchoscopic video content according to its quality under the bubble sort algorithm. For objective tests, two metrics (hybrid vector measure and Hosaka plots) were calculated frame by frame and averaged over a whole sequence.

  19. Wavelet-based Image Compression using Subband Threshold

    Muzaffar, Tanzeem; Choi, Tae-Sun


    Wavelet-based image compression has been a focus of recent research. In this paper, we propose a compression technique based on modification of the original EZW coding. In this lossy technique, we try to discard less significant information in the image data in order to achieve further compression with minimal effect on output image quality. The algorithm calculates the weight of each subband and finds the subband with minimum weight in every level. This minimum-weight subband in each level, which contributes the least to image reconstruction, undergoes a threshold process to eliminate low-valued data in it. Zerotree coding is then applied to the resultant output for compression. Different threshold values were applied during the experiment to see the effect on compression ratio and reconstructed image quality. The proposed method results in a further increase in compression ratio with negligible loss in image quality.

  20. Evaluation of Huffman and Arithmetic Algorithms for Multimedia Compression Standards

    Shahbahrami, Asadollah; Rostami, Mobin Sabbaghi; Mobarhan, Mostafa Ayoubi


    Compression is a technique to reduce the quantity of data without excessively reducing the quality of the multimedia data. The transmission and storage of compressed multimedia data is much faster and more efficient than of the original uncompressed data. There are various techniques and standards for multimedia data compression, especially for image compression, such as the JPEG and JPEG2000 standards. These standards consist of different functions such as color space conversion and entropy coding. Arithmetic and Huffman coding are normally used in the entropy coding phase. In this paper we try to answer the following question: which entropy coding, arithmetic or Huffman, is more suitable from the compression ratio, performance, and implementation points of view? We have implemented and tested Huffman and arithmetic algorithms. Our implemented results show that the compression ratio of arithmetic coding is better than that of Huffman coding, while the performance of Huffman coding is higher than A...

  1. Compressed sensing & sparse filtering

    Carmi, Avishy Y; Godsill, Simon J


    This book is aimed at presenting concepts, methods and algorithms able to cope with undersampled and limited data. One such trend that recently gained popularity and to some extent revolutionised signal processing is compressed sensing. Compressed sensing builds upon the observation that many signals in nature are nearly sparse (or compressible, as they are normally referred to) in some domain, and consequently they can be reconstructed to within high accuracy from far fewer observations than traditionally held to be necessary. Apart from compressed sensing this book contains other related app
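
    The reconstruction step that compressed sensing relies on can be made concrete with a greedy solver. A minimal orthogonal matching pursuit (OMP) sketch, one of the standard recovery algorithms in this literature; the signal sizes and sparsity level are assumptions of the demo:

```python
# Sparse recovery by orthogonal matching pursuit (OMP): greedily pick the
# column most correlated with the residual, then re-fit by least squares.
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 64, 5                  # signal length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x                             # undersampled measurements

support, r = [], y.copy()
for _ in range(k):
    support.append(int(np.argmax(np.abs(A.T @ r))))
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    r = y - A[:, support] @ coef

x_hat = np.zeros(n)
x_hat[support] = coef
print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```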

  2. Wavelet image compression

    Pearlman, William A


    This book explains the stages necessary to create a wavelet compression system for images and describes state-of-the-art systems used in image compression standards and current research. It starts with a high level discussion of the properties of the wavelet transform, especially the decomposition into multi-resolution subbands. It continues with an exposition of the null-zone, uniform quantization used in most subband coding systems and the optimal allocation of bitrate to the different subbands. Then the image compression systems of the FBI Fingerprint Compression Standard and the JPEG2000 S

  3. Stiffness of compression devices

    Giovanni Mosti


    Full Text Available This issue of Veins and Lymphatics collects papers coming from the International Compression Club (ICC) Meeting on Stiffness of Compression Devices, which took place in Vienna in May 2012. Several studies have demonstrated that the stiffness of compression products plays a major role in their hemodynamic efficacy. According to the European Committee for Standardization (CEN), stiffness is defined as the pressure increase produced by medical compression hosiery (MCH) per 1 cm of increase in leg circumference.1 In other words, stiffness could be defined as the ability of the bandage/stockings to oppose the muscle expansion during contraction.

  4. Oncologic image compression using both wavelet and masking techniques.

    Yin, F F; Gao, Q


    A new algorithm has been developed to compress oncologic images using both wavelet transform and field masking methods. A compactly supported wavelet transform is used to decompose the original image into high- and low-frequency subband images. The region-of-interest (ROI) inside an image, such as an irradiated field in an electronic portal image, is identified using an image segmentation technique and is then used to generate a mask. The wavelet transform coefficients outside the mask region are then ignored so that these coefficients can be efficiently coded to minimize the image redundancy. In this study, an adaptive uniform scalar quantization method and Huffman coding with a fixed code book are employed in subsequent compression procedures. Three types of typical oncologic images are tested for compression using this new algorithm: CT, MRI, and electronic portal images with 256 x 256 matrix size and 8-bit gray levels. Peak signal-to-noise ratio (PSNR) is used to evaluate the quality of reconstructed image. Effects of masking and image quality on compression ratio are illustrated. Compression ratios obtained using wavelet transform with and without masking for the same PSNR are compared for all types of images. The addition of masking shows an increase of compression ratio by a factor of greater than 1.5. The effect of masking on the compression ratio depends on image type and anatomical site. A compression ratio of greater than 5 can be achieved for a lossless compression of various oncologic images with respect to the region inside the mask. Examples of reconstructed images with compression ratio greater than 50 are shown.
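
    The PSNR metric used above has a standard definition for 8-bit images. A small helper, written here as a generic sketch rather than the authors' exact evaluation code:

```python
# Peak signal-to-noise ratio for images with a given peak value (255 for
# 8-bit data): PSNR = 10 * log10(peak^2 / MSE). Higher is better; identical
# images give infinity.
import numpy as np

def psnr(original, reconstructed, peak=255.0):
    mse = np.mean((np.asarray(original, float) - np.asarray(reconstructed, float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

a = np.random.default_rng(0).integers(0, 256, (64, 64))
print(psnr(a, a + (np.arange(64) % 3 - 1)))   # mildly perturbed copy
```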

  5. Lossless Astronomical Image Compression and the Effects of Random Noise

    Pence, William


    In this paper we compare a variety of modern image compression methods on a large sample of astronomical images. We begin by demonstrating from first principles how the amount of noise in the image pixel values sets a theoretical upper limit on the lossless compression ratio of the image. We derive simple procedures for measuring the amount of noise in an image and for quantitatively predicting how much compression will be possible. We then compare the traditional technique of using the GZIP utility to externally compress the image, with a newer technique of dividing the image into tiles, and then compressing and storing each tile in a FITS binary table structure. This tiled-image compression technique offers a choice of other compression algorithms besides GZIP, some of which are much better suited to compressing astronomical images. Our tests on a large sample of images show that the Rice algorithm provides the best combination of speed and compression efficiency. In particular, Rice typically produces 1.5 times greater compression and provides much faster compression speed than GZIP. Floating point images generally contain too much noise to be effectively compressed with any lossless algorithm. We have developed a compression technique which discards some of the useless noise bits by quantizing the pixel values as scaled integers. The integer images can then be compressed by a factor of 4 or more. Our image compression and uncompression utilities (called fpack and funpack) that were used in this study are publicly available from the HEASARC web site. Users may run these stand-alone programs to compress and uncompress their own images.
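
    The quantization trick described above is straightforward to sketch. Below, zlib stands in for the Rice or GZIP coders discussed in the paper, and the noise level and scale are assumptions of the demo; the point is that float pixels dominated by noise barely compress, while the scaled-integer version does:

```python
# Noise-limited compression demo: quantize floating-point pixels as scaled
# integers, then apply a lossless coder (zlib as a stand-in for Rice or
# GZIP). The scale q, the number of quantization levels per sigma of noise,
# trades discarded noise bits for compression.
import zlib
import numpy as np

rng = np.random.default_rng(1)
image = 1000.0 + 5.0 * rng.standard_normal((256, 256))   # sky + noise, sigma = 5

raw = image.astype(np.float32).tobytes()
sigma, q = 5.0, 4.0                                      # 4 levels per sigma
quantized = np.round(image / (sigma / q)).astype(np.int32).tobytes()

print("float32 ratio:   ", len(raw) / len(zlib.compress(raw)))
print("scaled-int ratio:", len(raw) / len(zlib.compress(quantized)))
```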

  6. EEG data compression to monitor DoA in telemedicine.

    Palendeng, Mario E; Zhang, Qing; Pang, Chaoyi; Li, Yan


    Data compression techniques have been widely used to process and transmit huge amount of EEG data in real-time and remote EEG signal processing systems. In this paper we propose a lossy compression technique, F-shift, to compress EEG signals for remote depth of Anaesthesia (DoA) monitoring. Compared with traditional wavelet compression techniques, our method not only preserves valuable clinical information with high compression ratios, but also reduces high frequency noises in EEG signals. Moreover, our method has negligible compression overheads (less than 0.1 seconds), which can greatly benefit real-time EEG signal monitoring systems. Our extensive experiments demonstrate the efficiency and effectiveness of the proposed compression method.




    Full Text Available In image compression, the researcher's aim is to reduce the number of bits required to represent an image by removing the spatial and spectral redundancies. Recently the wavelet packet has emerged as a popular technique for image compression. This paper proposes a wavelet-based compression scheme that is able to operate in lossy as well as lossless mode. First we describe the integer wavelet transform (IWT) and the integer wavelet packet transform (IWPT) as an application of the lifting scheme (LS). After analyzing and implementing results for IWT and IWPT, another method combining DPCM and IWPT is implemented using Huffman coding for grey-scale images. Then we propose to implement the same for color images using the Shannon source coding technique. We measure the level of compression by the compression ratio (CR) and compression factor (CF). Compared with IWT and IWPT, the DPCM-IWPT shows better performance in image compression.
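
    The reversibility that makes the IWT usable in lossless mode is visible in the simplest lifting step, the integer Haar (S) transform. A minimal sketch; the packet decomposition described above builds on steps of this kind:

```python
# One lifting step of the integer Haar transform (the S-transform): maps
# integers to integers and is exactly invertible, which is what makes
# lossless wavelet coding possible.
def haar_forward(x):                                     # len(x) must be even
    even, odd = x[0::2], x[1::2]
    detail = [o - e for e, o in zip(even, odd)]          # predict step
    approx = [e + d // 2 for e, d in zip(even, detail)]  # update step
    return approx, detail

def haar_inverse(approx, detail):
    even = [a - d // 2 for a, d in zip(approx, detail)]
    odd = [d + e for e, d in zip(even, detail)]
    out = []
    for e, o in zip(even, odd):
        out += [e, o]
    return out

x = [12, 14, 200, 202, 15, 13, 7, 9]
a, d = haar_forward(x)
assert haar_inverse(a, d) == x        # perfect reconstruction
print("approx:", a, "detail:", d)
```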

  8. Perceptual Effects of Dynamic Range Compression in Popular Music Recordings

    Hjortkjær, Jens; Walther-Hansen, Mads


    The belief that the use of dynamic range compression in music mastering deteriorates sound quality needs to be formally tested. In this study normal hearing listeners were asked to evaluate popular music recordings in original versions and in remastered versions with higher levels of dynamic range...... compression. Surprisingly, the results failed to reveal any evidence of the effects of dynamic range compression on subjective preference or perceived depth cues. Perceptual data suggest that listeners are less sensitive than commonly believed to even high levels of compression. As measured in terms...... of differences in the peak-to-average ratio, compression has little perceptual effect other than increased loudness or clipping effects that only occur at high levels of compression. One explanation for the inconsistency between data and belief might result from the fact that compression is frequently...

  10. Data Compression in RCS Modeling by Using the Threshold Discrete Fourier Transform Method

    SHENG Weixing; FANG Dagang; ZHUANG Jing; LIU T.J.; YANG Zhenglong


    A new data compression technique, called the threshold discrete Fourier transform (TDFT) method, is proposed to efficiently compress the scattered field data from complex targets. Compared with the matrix pencil (MP) method and the CLEAN method, it is quite simple and time saving under a similar compression ratio and reconstruction error. In the TDFT and CLEAN methods, the optimized segmentation is found, which results in a high compression ratio.
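
    The thresholding idea is simple to sketch: transform the sampled field data, keep only the coefficients whose magnitude exceeds a threshold, and store (index, value) pairs. The test signal and threshold below are assumptions of the demo:

```python
# Threshold-DFT sketch: FFT the sampled data, keep coefficients above a
# magnitude threshold, and reconstruct from the retained (index, value)
# pairs. The two-tone test signal is a stand-in for scattered-field data.
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 512, endpoint=False)
field = (np.exp(2j * np.pi * 40 * t) + 0.5 * np.exp(2j * np.pi * 90 * t)
         + 0.01 * (rng.standard_normal(512) + 1j * rng.standard_normal(512)))

spectrum = np.fft.fft(field)
keep = np.abs(spectrum) > 0.05 * np.abs(spectrum).max()   # threshold
indices, values = np.nonzero(keep)[0], spectrum[keep]

reconstructed = np.zeros_like(spectrum)
reconstructed[indices] = values
error = np.linalg.norm(np.fft.ifft(reconstructed) - field) / np.linalg.norm(field)
print(f"kept {keep.sum()} of 512 coefficients, reconstruction error {error:.3%}")
```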

  11. Lossless Image Compression Using New Biorthogonal Wavelets

    M. Santhosh


    Full Text Available Even though a large number of wavelets exist, one needs new wavelets for specific applications. One of the basic wavelet categories is orthogonal wavelets, but it is hard to find wavelets that are both orthogonal and symmetric, and symmetry is required for perfect reconstruction. Hence a need for wavelets that are both orthogonal and symmetric arises. The solution comes in the form of biorthogonal wavelets, which preserve the perfect reconstruction condition. Though a number of biorthogonal wavelets have been proposed in the literature, in this paper four new biorthogonal wavelets are proposed which give better compression performance. The new wavelets are compared with traditional wavelets by using the design metrics peak signal-to-noise ratio (PSNR) and compression ratio (CR). The Set Partitioning in Hierarchical Trees (SPIHT) coding algorithm was utilized for the compression of images.

  12. An Enhanced Static Data Compression Scheme Of Bengali Short Message

    Arif, Abu Shamim Mohammod; Islam, Rashedul


    This paper concerns a modified approach to compressing short Bengali text messages for small devices. The prime objective of this research technique is to establish a low-complexity compression scheme suitable for small devices having small memory and relatively lower processing speed. The basic aim is not to compress text of any size up to its maximum level without any constraint on space and time; rather, the main target is to compress short messages up to an optimal level which needs minimum space, consumes less time and requires less of the processor. We have implemented character masking, dictionary matching, association rules from data mining and a hyphenation algorithm for syllable-based compression in hierarchical steps to achieve low-complexity lossless compression of text messages for any mobile device. The digrams are chosen on the basis of an extensive statistical model, and the static Huffman coding is done through the same context.

  13. Analysis of Thermal Efficiency Improvement Implemented with Miller Cycle for High Compression Ratio Gasoline Engine at High Load

    郑斌; 李铁; 尹涛


    For a highly boosted, high compression ratio, direct-injection gasoline engine, the Miller cycle realized by early intake-valve closing (EIVC) or late intake-valve closing (LIVC) at high load was simulated, and the thermal-efficiency improvement mechanisms of the two strategies were compared on the basis of the first law of thermodynamics. The results show that a higher geometric compression ratio increases the theoretical thermal efficiency, but worsens fuel consumption by 1.9% because of the knock limit at high load. Applying the Miller cycle can effectively suppress the knock tendency; compared with the original engine, the EIVC and LIVC strategies improve fuel economy by 2.4% and 3.0%, respectively. Comparing the two strategies shows that LIVC yields a better combustion phase and more complete in-cylinder combustion, so its fuel-economy benefit exceeds that of EIVC.

  14. Effects of Inoculum Size and Aeration Ratio on the Growth of Azadirachta indica Suspension Cells and Azadirachtin Yield

    张云竹; 钟秋平


    To provide a basis for the scale-up culture of Azadirachta indica suspension cells, the effects of inoculum size and aeration ratio on cell growth and azadirachtin yield were studied in a 5 L airlift fermenter with A. indica suspension cells as the seed culture. The results show that cell growth and azadirachtin production in the reactor are coupled. The optimum inoculum size and aeration ratio are 60 g/L and 0.2 vvm, respectively; under these conditions the pH of the suspension medium first falls and then rises, and the cell dry weight and azadirachtin yield reach 11.41 g DW/L and 8.32 mg/g, respectively. Adding a compound elicitor significantly improves the azadirachtin yield, which reaches a maximum of 94.78 mg/L 48 h after elicitation.

  15. Compressive imaging system design using task-specific information.

    Ashok, Amit; Baheti, Pawan K; Neifeld, Mark A


    We present a task-specific information (TSI) based framework for designing compressive imaging (CI) systems. The task of target detection is chosen to demonstrate the performance of the optimized CI system designs relative to a conventional imager. In our optimization framework, we first select a projection basis and then find the associated optimal photon-allocation vector in the presence of a total photon-count constraint. Several projection bases, including principal components (PC), independent components, generalized matched-filter, and generalized Fisher discriminant (GFD) are considered for candidate CI systems, and their respective performance is analyzed for the target-detection task. We find that the TSI-optimized CI system design based on a GFD projection basis outperforms all other candidate CI system designs as well as the conventional imager. The GFD-based compressive imager yields a TSI of 0.9841 bits (out of a maximum possible 1 bit for the detection task), which is nearly ten times the 0.0979 bits achieved by the conventional imager at a signal-to-noise ratio of 5.0. We also discuss the relation between the information-theoretic TSI metric and a conventional statistical metric like probability of error in the context of the target-detection problem. It is shown that the TSI can be used to derive an upper bound on the probability of error that can be attained by any detection algorithm.

  16. Equation-of-state model for shock compression of hot dense matter

    Pain, J C


    A quantum equation-of-state model is presented and applied to the calculation of high-pressure shock Hugoniot curves beyond the asymptotic fourfold density, close to the maximum compression where quantum effects play a role. An analytical estimate for the maximum attainable compression is proposed. It gives a good agreement with the equation-of-state model.
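
    The "asymptotic fourfold density" mentioned above is the classical strong-shock limit of the Rankine-Hugoniot jump conditions for an ideal gas, which the quantum model is designed to go beyond. A one-line check of the classical limit:

```python
# Strong-shock limit of the Rankine-Hugoniot relations for an ideal gas:
# rho2 / rho1 -> (gamma + 1) / (gamma - 1) as shock strength -> infinity.
# For a monatomic ideal gas (gamma = 5/3) this gives the fourfold density;
# quantum and excitation effects let real Hugoniots exceed it.
for gamma in (5 / 3, 7 / 5):
    print(f"gamma = {gamma:.3f}: max compression = {(gamma + 1) / (gamma - 1):.1f}")
```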

  17. Variable compression ratio device for internal combustion engine

    Maloney, Ronald P.; Faletti, James J.


    An internal combustion engine, particularly suitable for use in a work machine, is provided with a combustion cylinder, a cylinder head at an end of the combustion cylinder and a primary piston reciprocally disposed within the combustion cylinder. The cylinder head includes a secondary cylinder and a secondary piston reciprocally disposed within the secondary cylinder. An actuator is coupled with the secondary piston for controlling the position of the secondary piston dependent upon the position of the primary piston. A communication port establishes fluid flow communication between the combustion cylinder and the secondary cylinder.

  18. Efficient Data Compression Scheme using Dynamic Huffman Code Applied on Arabic Language

    Sameh Ghwanmeh


    Full Text Available The development of an efficient compression scheme to process the Arabic language represents a difficult task. This paper applies dynamic Huffman coding with variable-length bit coding to Arabic-language data compression. Experimental tests have been performed on both Arabic and English text. A comparison was made to measure the efficiency of compression on both Arabic and English text, and a comparison was also made between the compression ratio and the size of the file to be compressed. It has been found that as the file size increases, the compression ratio decreases for both Arabic and English text. The experimental results show that the average message length and the efficiency of compression on Arabic text were better than those on English text. Also, the results show that the main factor which significantly affects compression ratio and average message length is the frequency of the symbols in the text.

  19. Variable Quality Compression of Fluid Dynamical Data Sets Using a 3D DCT Technique

    Loddoch, A.; Schmalzl, J.


    In this work we present a data compression scheme that is especially suited for the compression of data sets resulting from computational fluid dynamics (CFD). By adopting the concept of the JPEG compression standard and extending the approach of Schmalzl (Schmalzl, J. Using standard image compression algorithms to store data from computational fluid dynamics. Computers and Geosciences, 29, 1021-1031, 2003), we employ a three-dimensional discrete cosine transform of the data. The resulting frequency components are rearranged, quantized and finally stored using Huffman encoding and standard variable-length integer codes. The compression ratio and also the introduced loss of accuracy can be adjusted by means of two compression parameters to give the desired compression profile. Using the proposed technique, compression ratios of more than 60:1 are possible with a mean error of the compressed data of less than 0.1%.
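
    The core transform-and-quantize step can be sketched with SciPy's multidimensional DCT. A minimal sketch on a single 8x8x8 block; the block size and quantization step are assumptions, and the scheme above additionally rearranges coefficients and entropy-codes them:

```python
# Core of a JPEG-like 3-D scheme: blockwise 3-D DCT followed by uniform
# quantization. Smooth fields concentrate energy in few coefficients, so
# most quantized values are zero and entropy-code compactly.
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(3)
block = rng.standard_normal((8, 8, 8)).cumsum(0).cumsum(1).cumsum(2)  # smooth field

step = 0.5                                   # quantization step (quality knob)
coeffs = dctn(block, norm="ortho")
quantized = np.round(coeffs / step).astype(np.int32)

restored = idctn(quantized * step, norm="ortho")
nonzero = np.count_nonzero(quantized)
err = np.abs(restored - block).mean() / np.abs(block).mean()
print(f"{nonzero}/512 nonzero coefficients, mean error {err:.2%}")
```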

  20. Evaluation of pliers' grip spans in the maximum gripping task and sub-maximum cutting task.

    Kim, Dae-Min; Kong, Yong-Ku


    A total of 25 males participated to investigate the effects of the grip spans of pliers on the total grip force, individual finger forces and muscle activities in the maximum gripping task and wire-cutting tasks. In the maximum gripping task, results showed that the 50-mm grip span had significantly higher total grip strength than the other grip spans. In the cutting task, the 50-mm grip span also showed significantly higher grip strength than the 65-mm and 80-mm grip spans, whereas the muscle activities showed a higher value at 80-mm grip span. The ratios of cutting force to maximum grip strength were also investigated. Ratios of 30.3%, 31.3% and 41.3% were obtained by grip spans of 50-mm, 65-mm, and 80-mm, respectively. Thus, the 50-mm grip span for pliers might be recommended to provide maximum exertion in gripping tasks, as well as lower maximum-cutting force ratios in the cutting tasks.

  1. Maximum cycle work output optimization for generalized radiative law Otto cycle engines

    Xia, Shaojun; Chen, Lingen; Sun, Fengrui


    An Otto cycle internal combustion engine which includes thermal and friction losses is investigated by finite-time thermodynamics, and the optimization objective is the maximum cycle work output. The thermal energy transfer from the working substance to the cylinder inner wall follows the generalized radiative law (q ∝ Δ(T^n)). Under the condition that the fuel consumption, the compression ratio and the cycle period are all given, the optimal piston trajectories for the examples with both unlimited and limited accelerations on every stroke are determined, and the cycle-period distribution among the strokes is also optimized. Numerical results for the radiative-law case are provided and compared with those obtained for the Newtonian-law and linear-phenomenological-law cases. The results indicate that the optimal piston trajectory on each stroke contains three sections, including an initial maximum-acceleration part and a terminal maximum-deceleration part; for the radiative-law case, optimizing the piston motion path can achieve an improvement of more than 20% in both the cycle work output and the second-law efficiency of the Otto cycle compared with conventional near-sinusoidal operation, and the heat transfer mechanism has both qualitative and quantitative influences on the optimal piston paths.

  2. Vestige: Maximum likelihood phylogenetic footprinting

    Maxwell Peter


    Full Text Available Abstract Background Phylogenetic footprinting is the identification of functional regions of DNA by their evolutionary conservation. This is achieved by comparing orthologous regions from multiple species and identifying the DNA regions that have diverged less than neutral DNA. Vestige is a phylogenetic footprinting package built on the PyEvolve toolkit that uses probabilistic molecular evolutionary modelling to represent aspects of sequence evolution, including the conventional divergence measure employed by other footprinting approaches. In addition to measuring the divergence, Vestige allows the expansion of the definition of a phylogenetic footprint to include variation in the distribution of any molecular evolutionary processes. This is achieved by displaying the distribution of model parameters that represent partitions of molecular evolutionary substitutions. Examination of the spatial incidence of these effects across regions of the genome can identify DNA segments that differ in the nature of the evolutionary process. Results Vestige was applied to a reference dataset of the SCL locus from four species and provided clear identification of the known conserved regions in this dataset. To demonstrate the flexibility to use diverse models of molecular evolution and dissect the nature of the evolutionary process Vestige was used to footprint the Ka/Ks ratio in primate BRCA1 with a codon model of evolution. Two regions of putative adaptive evolution were identified illustrating the ability of Vestige to represent the spatial distribution of distinct molecular evolutionary processes. Conclusion Vestige provides a flexible, open platform for phylogenetic footprinting. Underpinned by the PyEvolve toolkit, Vestige provides a framework for visualising the signatures of evolutionary processes across the genome of numerous organisms simultaneously. By exploiting the maximum-likelihood statistical framework, the complex interplay between mutational

  3. The effect of different parameters on the development of compressive strength of oil palm shell geopolymer concrete.

    Kupaei, Ramin Hosseini; Alengaram, U Johnson; Jumaat, Mohd Zamin


    This paper presents the experimental results of an on-going research project on geopolymer lightweight concrete using two locally available waste materials--low calcium fly ash (FA) and oil palm shell (OPS)--as the binder and lightweight coarse aggregate, respectively. OPS was pretreated with three different alkaline solutions of sodium hydroxide (NaOH), potassium hydroxide, and sodium silicate as well as polyvinyl alcohol (PVA) for 30 days; afterwards, oil palm shell geopolymer lightweight concrete (OPSGPC) was cast by using both pretreated and untreated OPSs. The effect of these solutions on the water absorption of OPS, and the development of compressive strength in different curing conditions of OPSGPC produced by pretreated OPS were investigated; subsequently the influence of NaOH concentration, alkaline solution to FA ratio (A/FA), and different curing regimes on the compressive strength and density of OPSGPC produced by untreated OPS was inspected. The 24-hour water absorption value for OPS pretreated with 20% and 50% PVA solution was about 4% compared to 23% for untreated OPS. OPSGPC produced from OPS treated with 50% PVA solution produced the highest compressive strength of about 30 MPa in ambient cured condition. The pretreatment with alkaline solution did not have a significant positive effect on the water absorption of OPS aggregate and the compressive strength of OPSGPC. The result revealed that a maximum compressive strength of 32 MPa could be obtained at a temperature of 65°C and curing period of 4 days. This investigation also found that an A/FA ratio of 0.45 has the optimum amount of alkaline liquid and it resulted in the highest level of compressive strength.

  4. The Effect of Different Parameters on the Development of Compressive Strength of Oil Palm Shell Geopolymer Concrete

    Ramin Hosseini Kupaei


    Full Text Available This paper presents the experimental results of an on-going research project on geopolymer lightweight concrete using two locally available waste materials—low calcium fly ash (FA) and oil palm shell (OPS)—as the binder and lightweight coarse aggregate, respectively. OPS was pretreated with three different alkaline solutions of sodium hydroxide (NaOH), potassium hydroxide, and sodium silicate as well as polyvinyl alcohol (PVA) for 30 days; afterwards, oil palm shell geopolymer lightweight concrete (OPSGPC) was cast by using both pretreated and untreated OPSs. The effect of these solutions on the water absorption of OPS, and the development of compressive strength in different curing conditions of OPSGPC produced by pretreated OPS were investigated; subsequently the influence of NaOH concentration, alkaline solution to FA ratio (A/FA), and different curing regimes on the compressive strength and density of OPSGPC produced by untreated OPS was inspected. The 24-hour water absorption value for OPS pretreated with 20% and 50% PVA solution was about 4% compared to 23% for untreated OPS. OPSGPC produced from OPS treated with 50% PVA solution produced the highest compressive strength of about 30 MPa in the ambient cured condition. The pretreatment with alkaline solution did not have a significant positive effect on the water absorption of OPS aggregate and the compressive strength of OPSGPC. The result revealed that a maximum compressive strength of 32 MPa could be obtained at a temperature of 65°C and a curing period of 4 days. This investigation also found that an A/FA ratio of 0.45 has the optimum amount of alkaline liquid and it resulted in the highest level of compressive strength.

  5. The maximum rotation of a galactic disc

    Bottema, R


    The observed stellar velocity dispersions of galactic discs show that the maximum rotation of a disc is on average 63% of the observed maximum rotation. This criterion cannot, however, be applied to small or low surface brightness (LSB) galaxies, because such systems show, in general, a continuously rising rotation curve until the outermost measured radial position. That is why a general relation has been derived, giving the maximum rotation for a disc depending on the luminosity, surface brightness, and colour of the disc. The physical basis of this relation is an adopted fixed mass-to-light ratio as a function of colour. That functionality is consistent with results from population synthesis models, and its absolute value is determined from the observed stellar velocity dispersions. The derived maximum disc rotation is compared with a number of observed maximum rotations, clearly demonstrating the need for appreciable amounts of dark matter in the disc region, and even more so for LSB galaxies. Matters h...

  6. Vascular compression syndromes.

    Czihal, Michael; Banafsche, Ramin; Hoffmann, Ulrich; Koeppel, Thomas


    Dealing with vascular compression syndromes is one of the most challenging tasks in vascular medicine practice. This heterogeneous group of disorders is characterised by external compression of primarily healthy arteries and/or veins, as well as accompanying nerve structures, carrying the risk of subsequent structural vessel wall and nerve damage. Vascular compression syndromes may severely impair health-related quality of life in affected individuals, who are typically young and otherwise healthy. The diagnostic approach has not been standardised for any of the vascular compression syndromes. Moreover, some degree of positional external compression of blood vessels such as the subclavian and popliteal vessels or the celiac trunk can be found in a significant proportion of healthy individuals. This implies important difficulties in differentiating physiological from pathological findings of clinical examination and diagnostic imaging with provocative manoeuvres. The level of evidence on which treatment decisions regarding surgical decompression with or without revascularisation can be relied on is generally poor, mostly coming from retrospective single-centre studies. Proper patient selection is critical in order to avoid overtreatment in patients without a clear association between vascular compression and clinical symptoms. With a focus on the thoracic outlet syndrome, the median arcuate ligament syndrome and the popliteal entrapment syndrome, the present article gives a selective literature review on compression syndromes from an interdisciplinary vascular point of view.

  7. Critical Data Compression

    Scoville, John


    A new approach to data compression is developed and applied to multimedia content. This method separates messages into components suitable for both lossless coding and 'lossy' or statistical coding techniques, compressing complex objects by separately encoding signals and noise. This is demonstrated by compressing the most significant bits of data exactly, since they are typically redundant and compressible, and either fitting a maximally likely noise function to the residual bits or compressing them using lossy methods. Upon decompression, the significant bits are decoded and added to a noise function, whether sampled from a noise model or decompressed from a lossy code. This results in compressed data similar to the original. For many test images, a two-part image code using JPEG2000 for lossy coding and PAQ8l for lossless coding produces less mean-squared error than an equal length of JPEG2000. Computer-generated images typically compress better using this method than through direct lossy coding, as do man...
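
    The two-part separation described above can be sketched in a few lines: compress the significant bits exactly and replace the residual bits with samples from a noise model on decompression. The bit split, noise model and zlib coder below are assumptions of this illustration, not the JPEG2000/PAQ8l pipeline used in the paper:

```python
# Two-part coding in miniature: code the significant bits exactly and
# treat the residual bits as noise to be modelled rather than stored.
# The top 4 bit planes are zlib-compressed; the bottom 4 are resampled
# from a uniform noise model on "decompression". Illustrative only.
import zlib
import numpy as np

rng = np.random.default_rng(4)
smooth = np.linspace(0, 255, 4096)                  # compressible structure
image = np.clip(smooth + rng.integers(0, 16, 4096), 0, 255).astype(np.uint8)

msb = (image >> 4).tobytes()                        # significant, redundant bits
compressed = zlib.compress(msb)
print("MSB compression ratio:", len(msb) / len(compressed))

restored = ((np.frombuffer(zlib.decompress(compressed), np.uint8) << 4)
            + rng.integers(0, 16, 4096).astype(np.uint8))  # resampled noise
print("mean abs error:", np.abs(restored.astype(int) - image.astype(int)).mean())
```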

  8. Magnetized Plasma Compression for Fusion Energy

    Degnan, James; Grabowski, Christopher; Domonkos, Matthew; Amdahl, David


    Magnetized Plasma Compression (MPC) uses magnetic inhibition of thermal conduction and enhancement of charged-particle product capture to greatly reduce the temporal and spatial compression required relative to un-magnetized inertial fusion (IFE)--to microseconds, centimeters vs nanoseconds, sub-millimeter. MPC greatly reduces the required confinement time relative to MFE--to microseconds vs minutes. Proof of principle can be demonstrated or refuted using high-current pulsed-power-driven compression of magnetized plasmas using magnetic pressure driven implosions of metal shells, known as imploding liners. This can be done at a cost of a few tens of millions of dollars. If demonstrated, it becomes worthwhile to develop repetitive implosion drivers. One approach is to use arrays of heavy ion beams for energy production, though with much less temporal and spatial compression than that envisioned for un-magnetized IFE, with larger compression targets, and with much less ambitious compression ratios. A less expensive, repetitive pulsed power driver, if feasible, would require engineering development for transient, rapidly replaceable transmission lines such as envisioned by Sandia National Laboratories. Supported by DOE-OFES.

  9. Issues in multiview autostereoscopic image compression

    Shah, Druti; Dodgson, Neil A.


    Multi-view auto-stereoscopic images and image sequences require large amounts of space for storage and large bandwidth for transmission. High bandwidth can be tolerated for certain applications where the image source and display are close together but, for long distance or broadcast, compression of information is essential. We report on the results of our two-year investigation into multi-view image compression. We present results based on four techniques: differential pulse code modulation (DPCM), disparity estimation, three-dimensional discrete cosine transform (3D-DCT), and principal component analysis (PCA). Our work on DPCM investigated the best predictors to use for predicting a given pixel. Our results show that, for a given pixel, it is generally the nearby pixels within a view that provide better prediction than the corresponding pixel values in adjacent views. This led to investigations into disparity estimation. We use both correlation and least-square error measures to estimate disparity. Both perform equally well. Combining this with DPCM led to a novel method of encoding, which improved the compression ratios by a significant factor. The 3D-DCT has been shown to be a useful compression tool, with compression schemes based on ideas from the two-dimensional JPEG standard proving effective. An alternative to 3D-DCT is PCA. This has proved to be less effective than the other compression methods investigated.
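
    The DPCM finding, that a pixel's neighbours within a view predict it better than the corresponding pixel in an adjacent view, can be checked by comparing residual entropies. A toy sketch with a synthetic image pair and a fixed, uncompensated disparity (both assumptions of the demo):

```python
# Comparing DPCM predictors: residual entropy when a pixel is predicted by
# its left neighbour (intra-view) versus by the same pixel position in an
# adjacent view (inter-view, with the disparity left uncompensated).
import numpy as np

def entropy_bits(residual):
    _, counts = np.unique(residual, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(5)
left = rng.integers(0, 256, (64, 64)).astype(int)
left = (left + np.roll(left, 1, axis=1)) // 2          # add horizontal correlation
right = np.roll(left, 3, axis=1) + rng.integers(-2, 3, (64, 64))  # shifted view

intra = (left[:, 1:] - left[:, :-1]).ravel()           # left-neighbour predictor
inter = (right - left).ravel()                         # same-pixel, adjacent view
print("intra-view residual entropy:", round(entropy_bits(intra), 2), "bits")
print("inter-view residual entropy:", round(entropy_bits(inter), 2), "bits")
```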

  10. Lossless compression of medical images using Hilbert scan

    Sun, Ziguang; Li, Chungui; Liu, Hao; Zhang, Zengfang


    The effectiveness of the Hilbert scan in lossless medical image compression is discussed. In our method, after coding of intensities, the pixels in a medical image are decorrelated with differential pulse code modulation; the error image is then rearranged using the Hilbert scan, and finally we implement five coding schemes: Huffman coding, RLE, LZW coding, arithmetic coding, and RLE followed by Huffman coding. The experiments show that the case which applies DPCM followed by the Hilbert scan and then compresses with the arithmetic coding scheme has the best compression result. They also indicate that the Hilbert scan can enhance pixel locality and increase the compression ratio effectively.

  11. Wave energy devices with compressible volumes.

    Kurniawan, Adi; Greaves, Deborah; Chaplin, John


    We present an analysis of wave energy devices with air-filled compressible submerged volumes, where variability of volume is achieved by means of a horizontal surface free to move up and down relative to the body. An analysis of bodies without power take-off (PTO) systems is first presented to demonstrate the positive effects a compressible volume could have on the body response. Subsequently, two compressible device variations are analysed. In the first variation, the compressible volume is connected to a fixed volume via an air turbine for PTO. In the second variation, a water column separates the compressible volume from another volume, which is fitted with an air turbine open to the atmosphere. Both floating and bottom-fixed, axisymmetric, configurations are considered, and linear analysis is employed throughout. Advantages and disadvantages of each device are examined in detail. Some configurations with displaced volumes less than 2000 m(3) and with constant turbine coefficients are shown to be capable of achieving 80% of the theoretical maximum absorbed power over a wave period range of about 4 s.

  12. Compressed Adjacency Matrices: Untangling Gene Regulatory Networks.

    Dinkla, K; Westenberg, M A; van Wijk, J J


    We present a novel technique-Compressed Adjacency Matrices-for visualizing gene regulatory networks. These directed networks have strong structural characteristics: out-degrees with a scale-free distribution, in-degrees bound by a low maximum, and few and small cycles. Standard visualization techniques, such as node-link diagrams and adjacency matrices, are impeded by these network characteristics. The scale-free distribution of out-degrees causes a high number of intersecting edges in node-link diagrams. Adjacency matrices become space-inefficient due to the low in-degrees and the resulting sparse network. Compressed adjacency matrices, however, exploit these structural characteristics. By cutting open and rearranging an adjacency matrix, we achieve a compact and neatly-arranged visualization. Compressed adjacency matrices allow for easy detection of subnetworks with a specific structure, so-called motifs, which provide important knowledge about gene regulatory networks to domain experts. We summarize motifs commonly referred to in the literature, and relate them to network analysis tasks common to the visualization domain. We show that a user can easily find the important motifs in compressed adjacency matrices, and that this is hard in standard adjacency matrix and node-link diagrams. We also demonstrate that interaction techniques for standard adjacency matrices can be used for our compressed variant. These techniques include rearrangement clustering, highlighting, and filtering.

  13. Evaluation of correlation property of linear-frequency-modulated signals coded by maximum-length sequences

    Yamanaka, Kota; Hirata, Shinnosuke; Hachiya, Hiroyuki


    Ultrasonic distance measurement for obstacles has been recently applied in automobiles. The pulse-echo method based on the transmission of an ultrasonic pulse and time-of-flight (TOF) determination of the reflected echo is one of the typical methods of ultrasonic distance measurement. Improvement of the signal-to-noise ratio (SNR) of the echo and the avoidance of crosstalk between ultrasonic sensors in the pulse-echo method are required in automotive measurement. The SNR of the reflected echo and the resolution of the TOF are improved by the employment of pulse compression using a maximum-length sequence (M-sequence), which is one of the binary pseudorandom sequences generated from a linear feedback shift register (LFSR). Crosstalk is avoided by using transmitted signals coded by different M-sequences generated from different LFSRs. In the case of lower-order M-sequences, however, the number of measurement channels corresponding to the pattern of the LFSR is not enough. In this paper, pulse compression using linear-frequency-modulated (LFM) signals coded by M-sequences has been proposed. The coding of LFM signals by the same M-sequence can produce different transmitted signals and increase the number of measurement channels. In the proposed method, however, the truncation noise in autocorrelation functions and the interference noise in cross-correlation functions degrade the SNRs of received echoes. Therefore, autocorrelation properties and cross-correlation properties in all patterns of combinations of coded LFM signals are evaluated.
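
    The building block of the proposed coding is the M-sequence itself. A minimal sketch generating one with a linear feedback shift register and checking the correlation property that pulse compression relies on; the 5-bit register and taps are assumptions of the demo, and real systems use longer sequences:

```python
# Generate an M-sequence with a Fibonacci LFSR and check its circular
# autocorrelation: a maximal-length +/-1 sequence of period N has peak N
# at zero lag and a flat -1 at every other lag, which is what gives pulse
# compression its sharp correlation peak.
import numpy as np

def m_sequence(taps=(5, 3), nbits=5):
    state = [1] * nbits
    out = []
    for _ in range(2 ** nbits - 1):          # period 31 for a 5-bit register
        out.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        state = [fb] + state[:-1]
    return np.array(out)

seq = 2.0 * m_sequence() - 1.0               # map {0,1} -> {-1,+1}
auto = np.array([np.dot(seq, np.roll(seq, k)) for k in range(len(seq))])
print("peak:", auto[0], " off-peak values:", set(auto[1:].astype(int)))
```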

  14. Nonrepetitive Colouring via Entropy Compression

    Dujmović, Vida; Wood, David R


    A vertex colouring of a graph is nonrepetitive if there is no path whose first half receives the same sequence of colours as the second half. A graph is nonrepetitively k-choosable if, given lists of at least k colours at each vertex, there is a nonrepetitive colouring such that each vertex is coloured from its own list. It is known that every graph with maximum degree Δ is cΔ^2-choosable, for some constant c. We prove this result with c=4. We then prove that every subdivision of a graph with sufficiently many division vertices per edge is nonrepetitively 6-choosable. The proofs of both these results are based on the Moser-Tardos entropy-compression method, and a recent extension by Grytczuk, Kozik and Micek for the nonrepetitive choosability of paths. Finally, we prove that every graph with pathwidth k is nonrepetitively (2k^2+6k+1)-colourable.

  15. LDPC Codes for Compressed Sensing

    Dimakis, Alexandros G; Vontobel, Pascal O


    We present a mathematical connection between channel coding and compressed sensing. In particular, we link, on the one hand, \emph{channel coding linear programming decoding (CC-LPD)}, which is a well-known relaxation of maximum-likelihood channel decoding for binary linear codes, and, on the other hand, \emph{compressed sensing linear programming decoding (CS-LPD)}, also known as basis pursuit, which is a widely used linear programming relaxation for the problem of finding the sparsest solution of an under-determined system of linear equations. More specifically, we establish a tight connection between CS-LPD based on a zero-one measurement matrix over the reals and CC-LPD of the binary linear channel code that is obtained by viewing this measurement matrix as a binary parity-check matrix. This connection allows the translation of performance guarantees from one setup to the other. The main message of this paper is that parity-check matrices of "good" channel codes can be used as provably "good" measurement ...
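
    As a minimal sketch of the CS-LPD side of this connection (basis pursuit written as a plain linear program; the zero-one matrix below is a random stand-in, not an actual LDPC parity-check matrix):

        import numpy as np
        from scipy.optimize import linprog

        def basis_pursuit(A, y):
            # min ||x||_1 subject to Ax = y, via the split x = u - v with u, v >= 0,
            # so the objective becomes sum(u + v) and the problem is a standard LP.
            m, n = A.shape
            c = np.ones(2 * n)
            res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y,
                          bounds=[(0, None)] * (2 * n))
            return res.x[:n] - res.x[n:]

        rng = np.random.default_rng(0)
        A = rng.integers(0, 2, size=(40, 100)).astype(float)   # zero-one measurement matrix
        x0 = np.zeros(100)
        x0[[5, 37, 80]] = [1.5, -2.0, 0.7]                     # sparse signal
        x_hat = basis_pursuit(A, A @ x0)
        print(np.max(np.abs(x_hat - x0)))   # typically ~0 for sufficiently sparse x0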

  16. Prediction by Compression

    Ratsaby, Joel


    It is well known that text compression can be achieved by predicting the next symbol in the stream of text data based on the history seen up to the current symbol. The better the prediction, the more skewed the conditional probability distribution of the next symbol and the shorter the codeword that needs to be assigned to represent this next symbol. What about the opposite direction? Suppose we have a black box that can compress a text stream. Can it be used to predict the next symbol in the stream? We introduce a criterion based on the length of the compressed data and use it to predict the next symbol. We examine empirically the prediction error rate and its dependency on some compression parameters.
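
    The criterion can be mimicked with any off-the-shelf compressor. A toy sketch (zlib standing in for the paper's black box; only the ranking idea, not the paper's exact criterion, is illustrated):

        import zlib

        def compressed_length(data: bytes) -> int:
            return len(zlib.compress(data, 9))

        def predict_next(history: bytes, alphabet: bytes) -> int:
            # Predict the symbol whose appended compressed length is smallest
            # (ties broken arbitrarily): the least "surprising" continuation.
            return min(alphabet, key=lambda s: compressed_length(history + bytes([s])))

        history = (b"abcdabcdabcdabcd" * 16)[:-1]   # true continuation is b"d"
        costs = {chr(s): compressed_length(history + bytes([s])) for s in b"abcd"}
        print(costs, "->", chr(predict_next(history, b"abcd")))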

  17. LZW Data Compression

    Dheemanth H N


    Lempel–Ziv–Welch (LZW) is a universal lossless data compression algorithm created by Abraham Lempel, Jacob Ziv, and Terry Welch. LZW compression is one of the adaptive dictionary techniques: the dictionary is created while the data are being encoded, so encoding can be done on the fly. The dictionary need not be transmitted; it can be built up at the receiving end on the fly. If the dictionary overflows, it is reinitialized and a bit is added to each of the code words. Choosing a large dictionary size avoids overflow, but spoils compression. A codebook or dictionary containing the source symbols is constructed. For 8-bit monochrome images, the first 256 words of the dictionary are assigned to the gray levels 0-255, and the remaining part of the dictionary is filled with sequences of the gray levels. LZW compression works best when applied to monochrome images and text files that contain repetitive text/patterns.
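
    A minimal byte-oriented LZW pair (a textbook sketch, ignoring the code-width and dictionary-overflow handling described above):

        def lzw_encode(data: bytes):
            dictionary = {bytes([i]): i for i in range(256)}   # source symbols
            w, out = b"", []
            for byte in data:
                wc = w + bytes([byte])
                if wc in dictionary:
                    w = wc                                     # keep extending the phrase
                else:
                    out.append(dictionary[w])
                    dictionary[wc] = len(dictionary)           # new phrase, built on the fly
                    w = bytes([byte])
            if w:
                out.append(dictionary[w])
            return out

        def lzw_decode(codes):
            # The decoder rebuilds the same dictionary, so none is transmitted.
            dictionary = {i: bytes([i]) for i in range(256)}
            w = dictionary[codes[0]]
            out = [w]
            for k in codes[1:]:
                entry = dictionary[k] if k in dictionary else w + w[:1]
                out.append(entry)
                dictionary[len(dictionary)] = w + entry[:1]
                w = entry
            return b"".join(out)

        data = b"TOBEORNOTTOBEORTOBEORNOT"
        codes = lzw_encode(data)
        assert lzw_decode(codes) == data
        print(len(codes), "codes for", len(data), "bytes")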

  18. Shocklets in compressible flows

    袁湘江; 男俊武; 沈清; 李筠


    The mechanism of shocklets is studied theoretically and numerically for stationary fluid, uniform compressible flow, and boundary layer flow. The conditions that trigger shock waves for sound waves, weak discontinuities, and Tollmien-Schlichting (T-S) waves in compressible flows are investigated. The relations between the three types of waves and shocklets are further analyzed and discussed. Different stages of the shocklet formation process are simulated. The results show that the three waves in compressible flows evolve into shocklets only when the initial disturbance amplitudes are greater than certain threshold values. In compressible boundary layers, the shocklets evolved from T-S waves exist only in a finite region near the surface instead of along the whole wavefront.

  19. Reference Based Genome Compression

    Chern, Bobbie; Manolakos, Alexandros; No, Albert; Venkat, Kartik; Weissman, Tsachy


    DNA sequencing technology has advanced to a point where storage is becoming the central bottleneck in the acquisition and mining of more data. Large amounts of data are vital for genomics research, and generic compression tools, while viable, cannot offer the same savings as approaches tuned to inherent biological properties. We propose an algorithm to compress a target genome given a known reference genome. The proposed algorithm first generates a mapping from the reference to the target genome, and then compresses this mapping with an entropy coder. As an illustration of the performance: applying our algorithm to James Watson's genome with hg18 as a reference, we are able to reduce the 2991 megabyte (MB) genome down to 6.99 MB, while Gzip compresses it to 834.8 MB.
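
    The two-stage idea (map against the reference, then entropy-code the mapping) can be sketched in a few lines of Python. This toy uses difflib for the mapping and zlib as a stand-in for the entropy coder; it is far simpler than the authors' algorithm:

        import difflib, pickle, zlib

        def ref_compress(reference: bytes, target: bytes) -> bytes:
            # Store only copy/literal operations against the reference.
            sm = difflib.SequenceMatcher(a=reference, b=target, autojunk=False)
            ops = []
            for tag, i1, i2, j1, j2 in sm.get_opcodes():
                if tag == "equal":
                    ops.append(("c", i1, i2 - i1))       # copy from reference
                else:
                    ops.append(("l", target[j1:j2]))     # literal bytes
            return zlib.compress(pickle.dumps(ops), 9)   # entropy-coding stage

        def ref_decompress(reference: bytes, blob: bytes) -> bytes:
            out = bytearray()
            for op in pickle.loads(zlib.decompress(blob)):
                out += reference[op[1]:op[1] + op[2]] if op[0] == "c" else op[1]
            return bytes(out)

        ref = b"ACGT" * 500
        tgt = ref[:700] + b"TTTT" + ref[700:1900] + b"G"   # a few edits on the reference
        blob = ref_compress(ref, tgt)
        assert ref_decompress(ref, blob) == tgt
        print(len(tgt), "->", len(blob), "bytes")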

  20. Parametric optimization of thermoelectric elements footprint for maximum power generation

    Rezania, A.; Rosendahl, Lasse; Yin, Hao


    The development studies in thermoelectric generator (TEG) systems are mostly disconnected to parametric optimization of the module components. In this study, optimum footprint ratio of n- and p-type thermoelectric (TE) elements is explored to achieve maximum power generation, maximum cost-perform...

  1. FAPEC-based lossless and lossy hyperspectral data compression

    Portell, Jordi; Artigues, Gabriel; Iudica, Riccardo; García-Berro, Enrique


    Data compression is essential for remote sensing based on hyperspectral sensors owing to the increasing amount of data generated by modern instrumentation. CCSDS issued the 123.0 standard for lossless hyperspectral compression, and a new lossy hyperspectral compression recommendation is being prepared. We have developed multispectral and hyperspectral pre-processing stages for FAPEC, a data compression algorithm based on an entropy coder. We can select a prediction-based lossless stage that offers excellent results and speed. Alternatively, a DWT-based lossless and lossy stage can be selected, which offers excellent results yet requires more compression time. Finally, a lossless stage based on our HPA algorithm can also be selected; it is lossless-only for now, with a lossy option in preparation. Here we present the overall design of these data compression systems and the results obtained on a variety of real data, including ratios, speed and quality.

  2. Experimental research on the compressibility of stale waste

    ZHANG Yongxing; XIE Qiang; ZHANG Jianhua; WEI Yongfa


    The compressibility of stale waste is studied based on an investigation into the composition and properties of stale waste in Chongqing City. Stale waste sampled at a landfill closed for over 8 years was analyzed in the laboratory for its natural density, natural water content, relative density, grain size distribution curve, uniformity coefficient and curvature coefficient. Laboratory compression tests on the stale waste were performed to find the void ratio and its dependence upon applied pressure, the compressibility coefficient, the constrained modulus and the volume compressibility coefficient. From the experimental data, the curvature coefficient and the preconsolidation pressure of the stale waste were worked out. The results indicate that the stale waste is highly compressible, unlike other kinds of common soil, and is underconsolidated. The measured compressibility parameters are applicable to settlement calculation of closed landfills.

  3. ECG compression: evaluation of FFT, DCT, and WT performance.

    GholamHosseini, H; Nazeran, H; Moran, B


    This work investigates a set of ECG data compression schemes to compare their performance in compressing and preparing ECG signals for automatic cardiac arrhythmia classification. These schemes are based on transform methods such as the fast Fourier transform (FFT), discrete cosine transform (DCT), wavelet transform (WT), and their combinations. Each transform is applied to a pre-selected data segment from the MIT-BIH database and then compression is performed in the new domain. These transformation methods are an important class of ECG compression techniques. The WT proved the most efficient method for further improvement. A compression ratio of 7.98 to 1 was achieved with a percent root mean square difference (PRD) of 0.25%, indicating that the wavelet compression technique offers the best performance among the evaluated methods.
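
    The two figures of merit are easy to reproduce. A hedged sketch of transform-domain compression (DCT thresholding on a synthetic stand-in signal, not the MIT-BIH data or the authors' coder):

        import numpy as np
        from scipy.fft import dct, idct

        def dct_compress(x, keep=0.125):
            c = dct(x, norm="ortho")
            n_keep = max(1, int(keep * len(c)))
            c_q = c.copy()
            c_q[np.argsort(np.abs(c))[:-n_keep]] = 0.0    # zero all but the largest coefficients
            x_rec = idct(c_q, norm="ortho")
            cr = len(x) / n_keep                          # counts kept coefficients only
            prd = 100.0 * np.linalg.norm(x - x_rec) / np.linalg.norm(x)
            return x_rec, cr, prd

        t = np.linspace(0.0, 1.0, 1024)
        signal = np.sin(2 * np.pi * 5 * t) + 0.2 * np.sin(2 * np.pi * 25 * t)
        _, cr, prd = dct_compress(signal, keep=1 / 8)
        print(f"CR = {cr:.1f}:1, PRD = {prd:.2f}%")

    Note that the CR here ignores the cost of coding coefficient positions, so real codecs report somewhat lower ratios.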

  4. Mathematical model and general laws of wet compression

    王永青; 刘铭; 廉乐明; 何健勇; 严家騄


    Wet compression is an effective way to enhance the performance of gas turbines and it has attracted a good deal of attention in recent years. The one-sidedness and inaccuracy of previous studies, which took the concentration gradient as the mass-transfer potential, are discussed. The mass transfer process is analyzed from the viewpoint of non-equilibrium thermodynamics, taking the generalized thermodynamic driving force as the mass-transfer potential; the corresponding mass-transfer coefficient is obtained from the heat and mass transfer equilibrium occurring between moist air and water droplets at the wet-bulb temperature, which avoids complex tests and provides more accurate formulas. A mathematical model of wet compression is therefore established, and the general laws of wet compression are investigated. The results show that the performance of the atomizer is critical for wet compression, and that wet compression is more suitable for compressors with higher pressure ratios and longer compression times.

  5. On-Demand Indexing for Referential Compression of DNA Sequences.

    Fernando Alves

    The decreasing costs of genome sequencing are creating a demand for scalable storage and processing tools and techniques to deal with the large amounts of generated data. Referential compression is one of these techniques, in which the similarity between the DNA of organisms of the same or an evolutionarily close species is exploited to reduce the storage demands of genome sequences by up to 700 times. The general idea is to store in the compressed file only the differences between the to-be-compressed sequence and a well-known reference sequence. In this paper, we propose a method for improving the performance of referential compression by removing the most costly phase of the process, the complete reference indexing. Our approach, called On-Demand Indexing (ODI), compresses human chromosomes five to ten times faster than other state-of-the-art tools (on average), while achieving similar compression ratios.

  6. Deep Blind Compressed Sensing

    Singh, Shikha; Singhal, Vanika; Majumdar, Angshul


    This work addresses the problem of extracting deeply learned features directly from compressive measurements. There has been no prior work in this area. Existing deep learning tools only give good results when applied on the full signal, and usually only after preprocessing; these techniques require the signal to be reconstructed first. In this work we show that by learning directly from the compressed domain, considerably better results can be obtained. This work extends the recently proposed fram...

  7. Reference Based Genome Compression

    Chern, Bobbie; Ochoa, Idoia; Manolakos, Alexandros; No, Albert; Venkat, Kartik; Weissman, Tsachy


    DNA sequencing technology has advanced to a point where storage is becoming the central bottleneck in the acquisition and mining of more data. Large amounts of data are vital for genomics research, and generic compression tools, while viable, cannot offer the same savings as approaches tuned to inherent biological properties. We propose an algorithm to compress a target genome given a known reference genome. The proposed algorithm first generates a mapping from the reference to the target gen...

  8. Watermark Compression in Medical Image Watermarking Using Lempel-Ziv-Welch (LZW) Lossless Compression Technique.

    Badshah, Gran; Liew, Siau-Chuin; Zain, Jasni Mohd; Ali, Mushtaq


    In teleradiology, image contents may be altered due to noisy communication channels and hacker manipulation. Medical image data is very sensitive and cannot tolerate any illegal change; analysis based on illegally altered images could result in wrong medical decisions. Digital watermarking can be used to authenticate images and to detect, as well as recover from, illegal changes made to teleradiology images. Watermarking of medical images with heavy-payload watermarks causes perceptual degradation of the image, which directly affects medical diagnosis. To maintain the perceptual and diagnostic quality of the image during watermarking, the watermark should be losslessly compressed. This paper focuses on watermarking of ultrasound medical images with Lempel-Ziv-Welch (LZW) lossless-compressed watermarks. Lossless compression reduces the watermark payload without data loss. In this work, the watermark is the combination of a defined region of interest (ROI) and an image watermarking secret key. The performance of the LZW compression technique was compared with other conventional compression methods based on compression ratio; LZW was found to perform better and was used for watermark lossless compression in ultrasound medical image watermarking. Tabulated results show the reduction in watermark bits, and image watermarking with effective tamper detection and lossless recovery.

  9. Alternative Compression Garments

    Stenger, M. B.; Lee, S. M. C.; Ribeiro, L. C.; Brown, A. K.; Westby, C. M.; Platts, S. H.


    Orthostatic intolerance after spaceflight is still an issue for astronauts, as no in-flight countermeasure has been 100% effective. Future anti-gravity suits (AGS) may be similar to the Shuttle-era inflatable AGS or may be a mechanical compression device like the Russian Kentavr. We have evaluated the above garments as well as elastic, gradient compression garments of varying magnitude and determined that breast-high elastic compression garments may be a suitable replacement for the current AGS. This new garment should be more comfortable than the AGS, easy to don and doff, and as effective a countermeasure to orthostatic intolerance. Furthermore, these new compression garments could be worn for several days after space flight as necessary if symptoms persisted. We conducted two studies to evaluate elastic, gradient compression garments. The purpose of these studies was to evaluate the comfort and efficacy of an alternative compression garment (ACG) immediately after actual space flight and after 6-degree head-down tilt bed rest as a model of space flight, and to determine whether the garments would impact recovery if worn for up to three days after bed rest.

  10. Compressing industrial computed tomography images by means of contour coding

    Jiang, Haina; Zeng, Li


    An improved method for compressing industrial computed tomography (CT) images is presented. To achieve higher resolution and precision, the amount of industrial CT data has grown larger and larger. Considering that industrial CT images are approximately piece-wise constant, we develop a compression method based on contour coding. The traditional contour-based method for compressing gray images usually needs two steps, contour extraction and then compression, which hurts compression efficiency. We therefore merge the Freeman encoding idea into an improved method for two-dimensional contour extraction (2-D-IMCE) to improve compression efficiency. By exploiting continuity and logical linking, preliminary contour codes are obtained directly and simultaneously with the contour extraction; the two steps of the traditional contour-based compression method are thus merged into one. Finally, Huffman coding is employed to further losslessly compress the preliminary contour codes. Experimental results show that this method obtains a good compression ratio while keeping satisfactory quality in the compressed images.
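
    The Freeman idea is that each step along an 8-connected contour costs one 3-bit symbol. A minimal sketch (chain-coding a ready-made contour; the paper's 2-D-IMCE method instead emits these codes during extraction itself):

        # 8-directional Freeman codes in image coordinates (y grows downward):
        # 0=E, 2=N, 4=W, 6=S, odd codes are the diagonals.
        DIRS = {(1, 0): 0, (1, -1): 1, (0, -1): 2, (-1, -1): 3,
                (-1, 0): 4, (-1, 1): 5, (0, 1): 6, (1, 1): 7}

        def freeman_chain(points):
            # One 3-bit symbol per step between successive 8-connected pixels.
            return [DIRS[(x1 - x0, y1 - y0)]
                    for (x0, y0), (x1, y1) in zip(points, points[1:])]

        # A small closed square contour traced in image coordinates.
        square = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2),
                  (1, 2), (0, 2), (0, 1), (0, 0)]
        print(freeman_chain(square))   # [0, 0, 6, 6, 4, 4, 2, 2]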

  11. Size effect on cubic and prismatic compressive strength of cement paste

    苏捷; 叶缙垚; 方志; 赵明华


    A series of compression tests were conducted on 150 groups of cement paste specimens with side lengths ranging from 40 mm to 200 mm. The specimens include cube specimens and prism specimens with a height-to-width ratio of 2. The experimental results show that a size effect exists in both the cubic and prismatic compressive strength of cement paste: larger specimens exhibit lower strength than smaller ones. The cubic and prismatic compressive strengths of specimens with a side length of 200 mm are about 91% and 89%, respectively, of the strengths of specimens with a side length of 40 mm. The water-to-binder ratio has a significant influence on the size effect of the compressive strengths of cement paste: as the water-to-binder ratio decreases, the size effect is significantly enhanced. At a water-to-binder ratio of 0.2, the size effects on the cubic and prismatic compressive strengths of cement paste are 1.6 and 1.4 times stronger, respectively, than those at a water-to-binder ratio of 0.6. Furthermore, a series of formulas are proposed to calculate the size effect on the cubic and prismatic compressive strengths of cement paste, and the size effects predicted by the formulas are in good agreement with the experimental results.

  12. Analysis of Photovoltaic Maximum Power Point Trackers

    Veerachary, Mummadi

    The photovoltaic generator exhibits a non-linear i-v characteristic and its maximum power point (MPP) varies with solar insolation. An intermediate switch-mode dc-dc converter is required to extract maximum power from the photovoltaic array. In this paper buck, boost and buck-boost topologies are considered, and a detailed mathematical analysis, for both continuous and discontinuous inductor current operation, is given for MPP operation. The conditions on the connected load values and duty ratio are derived for achieving satisfactory maximum power point operation. Further, it is shown that certain load values, falling outside the optimal range, will drive the operating point away from the true maximum power point. A detailed comparison of the various topologies for MPPT is given, and selection of the converter topology for a given loading is discussed. A detailed discussion of circuit-oriented model development is given, and the MPPT effectiveness of the various converter systems is verified through simulations. The proposed theory and analysis are validated through experimental investigations.
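
    The flavor of those load/duty-ratio conditions can be seen from the ideal continuous-conduction impedance-transformation formulas (textbook relations, not the paper's derivation; the panel values below are hypothetical):

        import math

        def duty_for_mpp(topology: str, r_mpp: float, r_load: float):
            # Duty ratio that reflects r_load to the array input as r_mpp.
            # Returns None when the topology cannot match that load at all.
            if topology == "buck":            # R_in = R_load / D**2
                d = math.sqrt(r_load / r_mpp)
            elif topology == "boost":         # R_in = R_load * (1 - D)**2
                d = 1.0 - math.sqrt(r_mpp / r_load)
            elif topology == "buck-boost":    # R_in = R_load * ((1 - D) / D)**2
                d = 1.0 / (1.0 + math.sqrt(r_mpp / r_load))
            else:
                raise ValueError(topology)
            return d if 0.0 < d < 1.0 else None

        r_mpp = 17.5 / 3.0        # V_mpp / I_mpp of a hypothetical panel (~5.8 ohm)
        for topo in ("buck", "boost", "buck-boost"):
            print(topo, duty_for_mpp(topo, r_mpp, r_load=20.0))
        # The buck converter cannot match a 20-ohm load to this panel (prints None),
        # illustrating that some loads fall outside a topology's optimal range.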

  13. Gmz: a Gml Compression Model for Webgis

    Khandelwal, A.; Rajan, K. S.


    Geography Markup Language (GML) is an XML specification for expressing geographical features. Defined by the Open Geospatial Consortium (OGC), it is widely used for storage and transmission of maps over the Internet. XML schemas provide the convenience of defining custom feature profiles in GML for specific needs, as seen in the widely popular CityGML, simple features profile, coverage, etc. The simple features profile (SFP) is a simpler subset of the GML profile with support for point, line and polygon geometries; it has been constructed to cover the most commonly used GML geometries. Web Feature Service (WFS) serves query results in SFP by default. But SFP falls short of being an ideal choice due to its high verbosity and size-heavy nature, which provides immense scope for compression. GMZ is a lossless compression model developed to work for SFP-compliant GML files. Our experiments indicate GMZ achieves reasonably good compression ratios and can be useful in WebGIS-based applications.

  14. Reconstruction in Time-Bandwidth Compression Systems

    Chan, Jacky; Asghari, Mohammad H; Jalali, Bahram


    Recently it has been shown that the intensity time-bandwidth product of optical signals can be engineered to match that of the data acquisition instrument. In particular, it is possible to slow down an ultrafast signal, resulting in compressed RF bandwidth (a benefit similar to that offered by the Time-Stretch Dispersive Fourier Transform, TS-DFT) but with a reduced temporal record length, leading to time-bandwidth compression. The compression is implemented using a warped group delay dispersion, leading to non-uniform time stretching of the signal's intensity envelope. Decoding requires optical phase retrieval and reconstruction of the input temporal profile for the case where the information of interest resides in the complex field. In this paper, we present results on the general behavior of the reconstruction process and its dependence on the signal-to-noise ratio. We also discuss the role of chirp in the input signal.

  15. Compressive Sensing for Spread Spectrum Receivers

    Fyhn, Karsten; Jensen, Tobias Lindstrøm; Larsen, Torben


    With the advent of ubiquitous computing there are two design parameters of wireless communication devices that become very important: power efficiency and production cost. Compressive sensing enables the receiver in such devices to sample below the Shannon-Nyquist sampling rate, which may lead to a decrease in the two design parameters. This paper investigates the use of Compressive Sensing (CS) in a general Code Division Multiple Access (CDMA) receiver. We show that when using spread spectrum codes in the signal domain, the CS measurement matrix may be simplified. This measurement scheme, named Compressive Spread Spectrum (CSS), allows for a simple, effective receiver design. Furthermore, we numerically evaluate the proposed receiver in terms of bit error rate under different signal to noise ratio conditions and compare it with other receiver structures. These numerical experiments show that though...

  16. Compression of EMG Signals by Super imposing Methods: Case of WPT and DCT

    Aimé Joseph Oyobé-Okassa


    The objective of this work is to apply a new compression approach to electromyographic (EMG) signals. The originality of this algorithm, which improves the compression ratio of EMG signals compared to the Modified Algorithm of Decomposition (MAD), is the association of the Discrete Wavelet Packet Transform (DWPT) with the Discrete Cosine Transform (DCT). Indeed, compression algorithms are intended principally to increase the compression ratio while maintaining the quality of the reconstructed signal. The results obtained by this method are promising with regard to the usual compression evaluation criteria.

  17. Working Characteristics of Variable Intake Valve in Compressed Air Engine

    Qihui Yu


    A new camless compressed air engine is proposed, which allows the compressed air energy to be rationally distributed. Through analysis of the camless compressed air engine, a mathematical model of the working processes was set up. Using MATLAB/Simulink for simulation, the pressure, temperature, and air mass of the cylinder were obtained. In order to verify the accuracy of the mathematical model, experiments were conducted. Moreover, performance analysis was introduced to guide the design of the compressed air engine. Results show that, firstly, the simulation results are in good agreement with the experimental results. Secondly, under different intake pressures, the highest output power is obtained when the crank speed reaches 500 rpm, which also provides the maximum output torque. Finally, higher energy utilization efficiency can be obtained at lower speed, intake pressure, and valve duration angle. This research can serve as a reference for the design of camless valves for compressed air engines.

  18. Working characteristics of variable intake valve in compressed air engine.

    Yu, Qihui; Shi, Yan; Cai, Maolin


    A new camless compressed air engine is proposed, which allows the compressed air energy to be rationally distributed. Through analysis of the camless compressed air engine, a mathematical model of the working processes was set up. Using MATLAB/Simulink for simulation, the pressure, temperature, and air mass of the cylinder were obtained. In order to verify the accuracy of the mathematical model, experiments were conducted. Moreover, performance analysis was introduced to guide the design of the compressed air engine. Results show that, firstly, the simulation results are in good agreement with the experimental results. Secondly, under different intake pressures, the highest output power is obtained when the crank speed reaches 500 rpm, which also provides the maximum output torque. Finally, higher energy utilization efficiency can be obtained at lower speed, intake pressure, and valve duration angle. This research can serve as a reference for the design of camless valves for compressed air engines.

  19. OECD Maximum Residue Limit Calculator

    With the goal of harmonizing the calculation of maximum residue limits (MRLs) across the Organisation for Economic Cooperation and Development, the OECD has developed an MRL Calculator. View the calculator.

  20. Spark ignition engine performance and emissions in a high compression engine using biogas and methane mixtures without knock occurrence

    Gómez Montoya Juan Pablo


    With the purpose of using biogas in an internal combustion engine with a high compression ratio, in order to obtain high output thermal efficiency, this investigation used a diesel engine with a maximum output power of 8.5 kW, converted to spark ignition mode for gaseous fuels. Three fuels were used: simulated biogas, and biogas enriched with 25% and 50% methane by volume. After conversion, the output power of the engine decreased by 17.64% when using only biogas, 7 kW being the new maximum output power. The compression ratio was kept at 15.5:1, and knocking did not occur during engine operation. The output thermal efficiency when operating the engine in SI mode with biogas enriched with 50% methane was almost the same as that of the engine running in diesel-biogas dual mode at full load, and was greater at part loads. The dependence on the diesel pilot was eliminated when biogas was used in the engine converted to SI mode. The optimum experimental condition for the engine without knocking was biogas enriched with 50% methane, with 12 degrees of spark timing advance and an equivalence ratio of 0.95; larger output powers and higher methane concentrations led the engine into knocking operation. The presence of CO2 allows engines to operate at high compression ratios under normal combustion conditions. Emissions of nitrogen oxides, carbon monoxide and unburnt methane (all in g/kWh) decreased when the biogas was enriched with 50% methane.

  1. The Maximum Resource Bin Packing Problem

    Boyar, J.; Epstein, L.; Favrholdt, L.M.


    Usually, for bin packing problems, we try to minimize the number of bins used, or, in the case of the dual bin packing problem, maximize the number or total size of accepted items. This paper presents results for the opposite problems, where we would like to maximize the number of bins used. For the off-line variant, we analyze two natural algorithms, First-Fit-Increasing and First-Fit-Decreasing, for the maximum resource variant of classical bin packing. For the on-line variant, we define maximum resource variants of classical and dual bin packing. For dual bin packing, no on-line algorithm is competitive. For classical bin packing, we find the competitive ratio of various natural algorithms. We study the general versions of the problems as well as the parameterized versions where there is an upper bound of 1/k on the item sizes, for some integer k.

  2. A Maximum Radius for Habitable Planets.

    Alibert, Yann


    We compute the maximum radius a planet can have in order to fulfill two constraints that are likely necessary conditions for habitability: (1) a surface temperature and pressure compatible with the existence of liquid water, and (2) no ice layer at the bottom of a putative global ocean, which would prevent the geologic carbon cycle from operating. We demonstrate that, above a given radius, these two constraints cannot be met: in the Super-Earth mass range (1-12 Mearth), the overall maximum radius a planet can have varies between 1.8 and 2.3 Rearth. This maximum radius is reduced when considering planets with higher Fe/Si ratios and when taking into account irradiation effects on the structure of the gas envelope.

  3. Sagittal sinus compression is associated with neonatal cerebral sinovenous thrombosis.

    Tan, Marilyn; Deveber, Gabrielle; Shroff, Manohar; Moharir, Mahendra; Pontigon, Anne-Marie; Widjaja, Elisa; Kirton, Adam


    Neonatal cerebral sinovenous thrombosis (CSVT) causes lifelong morbidity. Newborns frequently incur positional occipital bone compression of the superior sagittal sinus (SSS). We hypothesized that SSS compression is associated with neonatal CSVT. Our retrospective case-control study recruited neonates with CSVT (SickKids Children's Stroke Program, January 1992-December 2006). Controls were neonates without CSVT undergoing magnetic resonance or computed tomography venography (institutional imaging database, 2002-2005), matched 2 per case. Blinded neuroimaging review by 2 experts quantified SSS compression and head position. The effect of SSS compression on the primary outcome of CSVT was determined (logistic regression). Secondary analyses included the relationship of head position to SSS compression (t test) and group comparisons (cases versus controls, cases with and without compression) for demographic, clinical, and CSVT factors (χ² and Wilcoxon Mann-Whitney tests). Case (n = 55) and control (n = 90) patients had similar ages and delivery modes. SSS compression was common (cases: 43%; controls: 41%). Controlling for gender and head position, SSS compression was associated with CSVT (odds ratio: 2.5 [95% confidence interval: 1.07-5.67]). Compression was associated with a greater mean (SD) angle of head flexion (101.2 [15.0] vs 111.5 [9.7]); cases with and without compression were otherwise comparable with respect to infarction, recanalization, and outcome. Many idiopathic cases had SSS compression (79%). Interrater reliability of the compression measurements was high (κ = 0.87). Neonatal SSS compression is common, quantifiable, and associated with CSVT. Optimizing head position and/or developing devices to alleviate mechanical SSS compression may represent a novel means to improve outcomes.

  4. Influence of free water content on the compressive mechanical behaviour of cement mortar under high strain rate

    Jikai Zhou; Xudong Chen; Longqiang Wu; Xiaowei Kan


    The effect of free water content on the compressive mechanical behaviour of cement mortar under high loading rate was studied. Uniaxial rapid compressive loading tests were performed on a total of 30 specimens, nominally 37 mm in diameter and 18.5 mm in height, at five different saturations (0%, 25%, 50%, 75% and 100%). The split Hopkinson pressure bar (SHPB) technique was used; the impact velocity was 10 m/s, with a corresponding strain rate of about 10²/s. A water-cement ratio of 0.5 was used. The compressive behaviour of the materials was measured in terms of the maximum stress, Young's modulus, critical strain at maximum stress, and ultimate strain at failure. The test data indicate that the stress–strain curves of cement mortars with different water contents are similar in shape: the ascending section shows bilinear characteristics, while the descending (softening) stage is almost linear. The dynamic compressive strength of cement mortar increased with decreasing water content; the dynamic compressive strength of the saturated specimens was 23% lower than that of the totally dry specimens. With an increase in water content, the Young's modulus first increases and then decreases; the Young's modulus of the saturated specimens was 23% lower than that of the totally dry specimens. No significant changes occurred in the critical and ultimate strain values as the water content changed.

  5. Influencing Factors of Compression Strength of Asphalt Mixture in Cold Region

    韦佑坡; 马骉; 司伟


    Aimed at the low-temperature conditions of cold regions, indoor uniaxial compression tests were performed on asphalt mixtures to analyse the influence of temperature, asphalt-aggregate ratio, asphalt type and aggregate gradation on compressive strength. The results show that (1) the compressive strength decreases as the temperature increases; (2) comparing asphalt mixtures with different nominal maximum aggregate sizes, the compressive strength of the SBR-modified AC-16 mixture is higher than that of AC-13; (3) there exists an optimum asphalt-aggregate ratio, at which the compressive strength reaches its maximum, between about 6.0% and 7.0%; (4) the low-temperature compressive performance of the SBR-modified asphalt mixture is clearly superior to that of the 130# road petroleum asphalt mixture; (5) the logarithm of the compressive strength is approximately a linear function of temperature and asphalt-aggregate ratio. Correlation analysis in SPSS of the influencing factors shows that temperature and asphalt type have the greatest effect on the compressive strength of the mixture.

  6. Pulse temporal compression by two-stage stimulated Brillouin scattering and laser-induced breakdown

    Liu, Zhaohong; Wang, Yulei; Wang, Hongli; Bai, Zhenxu; Li, Sensen; Zhang, Hengkang; Wang, Yirui; He, Weiming; Lin, Dianyang; Lu, Zhiwei


    A laser pulse temporal compression technique combining stimulated Brillouin scattering (SBS) and laser-induced breakdown (LIB) is proposed in which the leading edge of the laser pulse is compressed using SBS, and the low intensity trailing edge of the laser pulse is truncated by LIB. The feasibility of the proposed scheme is demonstrated by experiments in which a pulse duration of 8 ns is compressed to 170 ps. Higher compression ratios and higher efficiency are expected under optimal experimental conditions.

  7. Hybrid Prediction and Fractal Hyperspectral Image Compression

    Shiping Zhu


    The data size of hyperspectral images is too large for storage and transmission, and this has become a bottleneck restricting their application. It is therefore necessary to study high-efficiency compression methods for hyperspectral images. Predictive coding is easy to implement and has been widely studied in the hyperspectral image compression field. Fractal coding has the advantages of a high compression ratio, resolution independence, and fast decoding speed, but its application to hyperspectral image compression has not been popular. In this paper, we propose a novel algorithm for hyperspectral image compression based on hybrid prediction and fractal coding. Intraband prediction is applied to the first band, and all the remaining bands are encoded by a modified fractal coding algorithm. The proposed algorithm can effectively exploit the spectral correlation in hyperspectral images, since each range block is approximated by the domain block in the adjacent band, which is of the same size as the range block. Experimental results indicate that the proposed algorithm provides very promising performance at low bitrates. Compared to other algorithms, the encoding complexity is lower, the decoded quality is greatly enhanced, and the PSNR can be increased by about 5 dB to 10 dB.

  8. Fiber Effects on Compressibility of Peat

    Johari, N. N.; Bakar, I.; Razali, S. N. M.; Wahab, N.


    Fibers found in soil, especially in peaty soil, play an important role in determining soil compressibility. Peat soils result from the decomposition of organic matter, and the type of peat can be classified based on the fibrous material in the soil. In the engineering field, peat is known as a soil subject to serious settlement, with a high compressibility index. Previous research has shown that fibers in the soil influence compressibility through their size, shape, fabric, soil arrangement, and so on. Hence, this study attempts to determine the effect of fibers on the compressibility of peat using 1-D oedometer consolidation tests. Reconstituted peat samples of different particle sizes were used to determine the consolidation parameters, and the results obtained from the reconstituted samples were also compared with an undisturbed sample. The 1-D oedometer consolidation tests were performed on the samples using the load increment method. The results show that the larger particle size (R3.35) gives a higher moisture content (w = 401.20%) and a higher initial void ratio (eo = 5.74). For settlement prediction, the higher the fiber content, the higher the compression index and hence the larger the settlement.

  9. Multi-Channel Maximum Likelihood Pitch Estimation

    Christensen, Mads Græsbøll


    In this paper, a method for multi-channel pitch estimation is proposed. The method is a maximum likelihood estimator and is based on a parametric model where the signals in the various channels share the same fundamental frequency but can have different amplitudes, phases, and noise characteristics. This essentially means that the model allows for different conditions in the various channels, like different signal-to-noise ratios, microphone characteristics and reverberation. Moreover, the method does not assume that a certain array structure is used but rather relies on a more general model and is hence...

  10. Reconstruction-Free Action Inference from Compressive Imagers.

    Kulkarni, Kuldeep; Turaga, Pavan


    Persistent surveillance from camera networks, such as at parking lots, UAVs, etc., often results in large amounts of video data, resulting in significant challenges for inference in terms of storage, communication and computation. Compressive cameras have emerged as a potential solution to deal with the data deluge issues in such applications. However, inference tasks such as action recognition require high quality features, which implies reconstructing the original video data. Much work in compressive sensing (CS) theory is geared towards solving the reconstruction problem, where state-of-the-art methods are computationally intensive and provide low-quality results at high compression rates. Thus, reconstruction-free methods for inference are much desired. In this paper, we propose reconstruction-free methods for action recognition from compressive cameras at high compression ratios of 100 and above. Recognizing actions directly from CS measurements requires features which are mostly nonlinear and thus not easily computable from compressed data. This leads us to search for properties that are preserved in compressive measurements. To this end, we propose the use of spatio-temporal smashed filters, which are compressive-domain versions of pixel-domain matched filters. We conduct experiments on publicly available databases and show that one can obtain recognition rates comparable to the oracle method in the uncompressed setup, even at high compression ratios.
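
    The core of the smashed-filter idea, that random projections approximately preserve correlations, is easy to demonstrate (a toy numeric sketch with a random Gaussian measurement matrix, not the authors' spatio-temporal filters):

        import numpy as np

        rng = np.random.default_rng(1)
        n, m = 4096, 64                        # ambient and compressed dimensions
        x = rng.standard_normal(n)             # stand-in for a video feature
        f = x + 0.3 * rng.standard_normal(n)   # matched filter close to x

        Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # CS measurement matrix
        # Correlate compressed measurements against the compressed filter:
        # E[<Phi x, Phi f>] = <x, f>, so no reconstruction is needed.
        print(np.dot(x, f) / n, np.dot(Phi @ x, Phi @ f) / n)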

  11. Peak compression technique in high-performance liquid chromatography

    WEI YuXia; WANG Lin; XIAO ShengYuan; QING Hong; ZHU Yong; HU GaoFei; DENG YuLin


    A peak compression technique based on the difference of solute migration velocities in two different mobile phases was described theoretically and confirmed using benzaldehyde and 4-hydroxyquinoline (4-HQ) as model compounds. After peak compression, the peak compression factors (the ratio of the peak width at half-height under non-compression to that under compression) of benzaldehyde and 4-HQ were 0.19 and 0.13, respectively. By applying the peak compression technique to the mixture, both enhanced peak height and good separation were obtained in one run. The technique was then used to determine benzaldehyde from a semicarbazide-sensitive amine oxidase-catalyzed enzymatic reaction, to illustrate its applicability to real samples. As a result, the peak was compressed effectively, and 4.94-fold, 19.3-fold and 5.74-fold enhancements in peak height, plate number and signal-to-noise ratio, respectively, were achieved.

  12. Performance visualization for image compression in telepathology

    Varga, Margaret J.; Ducksbury, Paul G.; Callagy, Grace


    The conventional approach to performance evaluation for image compression in telemedicine is simply to measure compression ratio, signal-to-noise ratio and computational load. Evaluation of performance is, however, a much more complex and many-sided issue, and it is necessary to consider more deeply the requirements of the applications. In telemedicine, the preservation of clinical information must be taken into account when assessing the suitability of any particular compression algorithm, and the measurement of this characteristic is subjective because human judgement must be brought in to identify what is of clinical importance. The assessment must therefore take into account subjective user evaluation criteria as well as objective criteria. This paper develops the concept of user-based assessment techniques for image compression used in telepathology. A novel visualization approach has been developed to show and explore the highly complex performance space, taking into account both types of measure. The application considered is within a general histopathology image management system; the particular component is a store-and-forward facility for second-opinion elicitation. Images of histopathology slides are transmitted to the workstations of consultants working remotely to enable them to provide second opinions.

  13. Versor Compression in Clifford Algebra

    Li Hongbo


    In an inner-product space, an invertible vector generates a reflection with respect to a hyperplane, and the Clifford product of several invertible vectors, called a versor in Clifford algebra, generates the composition of the corresponding reflections, which is an orthogonal transformation. Given a versor in a Clifford algebra, finding another sequence of invertible vectors of strictly shorter length whose Clifford product still equals the input versor is called versor compression. Geometrically, versor compression is equivalent to decomposing an orthogonal transformation into a shorter sequence of reflections. This paper proposes a simple algorithm for compressing versors of symbolic form in Clifford algebra. The algorithm is based on computing the intersections of lines with planes in the corresponding Grassmann-Cayley algebra, and is complete in the case of Euclidean or Minkowski inner-product spaces.

  14. Image compression for dermatology

    Cookson, John P.; Sneiderman, Charles; Colaianni, Joseph; Hood, Antoinette F.


    Color 35mm photographic slides are commonly used in dermatology for education and patient records. An electronic storage and retrieval system for digitized slide images may offer advantages such as preservation and random access. We have integrated a system based on a personal computer (PC) for digital imaging of 35mm slides that depict dermatologic conditions. Such systems require significant resources to accommodate the large image files involved, so methods to reduce storage requirements and access time through image compression are of interest. This paper contains an evaluation of one such compression method, which uses the Hadamard transform implemented on a PC-resident graphics processor. Image quality is assessed by determining the effect of compression on the performance of an image feature recognition task.
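
    A small sketch of the transform stage being evaluated (orthonormal 2-D Hadamard transform of one 8x8 block with crude coefficient selection; the paper's GPU implementation and quantization details are not reproduced):

        import numpy as np
        from scipy.linalg import hadamard

        def hadamard_compress_block(block, keep=8):
            H = hadamard(8) / np.sqrt(8.0)            # orthonormal 8x8 Hadamard matrix
            coeffs = H @ block @ H.T                  # forward 2-D transform
            thresh = np.sort(np.abs(coeffs).ravel())[-keep]
            coeffs[np.abs(coeffs) < thresh] = 0.0     # keep ~`keep` largest coefficients
            return H.T @ coeffs @ H                   # inverse transform

        rng = np.random.default_rng(0)
        block = np.add.outer(np.arange(8.0), np.arange(8.0))   # smooth ramp block
        block += 0.05 * rng.standard_normal((8, 8))            # a little texture
        rec = hadamard_compress_block(block, keep=8)
        print(np.max(np.abs(rec - block)))   # small: ramp energy sits in few coefficients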

  15. Anomalous compressibility effects and superconductivity of EuFe2As2 under high pressures

    Uhoya, Walter; Tsoi, Georgiy; Vohra, Y. K. (University of Alabama, Birmingham); McGuire, Michael A.; Sefat, A. S.; Sales, Brian C.; Mandrus, David (ORNL); Weir, S. T. (LLNL)


    The crystal structure and electrical resistance of structurally layered EuFe2As2 have been studied up to 70 GPa and down to a temperature of 10 K, using a synchrotron x-ray source and designer diamond anvils. The room temperature compression of the tetragonal phase of EuFe2As2 (I4/mmm) results in an increase in the a-axis length and a rapid decrease in the c-axis length with increasing pressure. This anomalous compression reaches a maximum at 8 GPa, and the tetragonal lattice behaves normally above 10 GPa, with a nearly constant c/a axial ratio. The rapid rise in the superconducting transition temperature (Tc) to 41 K with increasing pressure is correlated with this anomalous compression, and a decrease in Tc is observed above 10 GPa. We present P-V data, or the equation of state, for EuFe2As2 both in the ambient tetragonal phase and in the high-pressure collapsed tetragonal phase up to 70 GPa.

  16. Maximum margin Bayesian network classifiers.

    Pernkopf, Franz; Wohlmayr, Michael; Tschiatschek, Sebastian


    We present a maximum margin parameter learning algorithm for Bayesian network classifiers using a conjugate gradient (CG) method for optimization. In contrast to previous approaches, we maintain the normalization constraints on the parameters of the Bayesian network during optimization, i.e., the probabilistic interpretation of the model is not lost. This enables us to handle missing features in discriminatively optimized Bayesian networks. In experiments, we compare the classification performance of maximum margin parameter learning to conditional likelihood and maximum likelihood learning approaches. Discriminative parameter learning significantly outperforms generative maximum likelihood estimation for naive Bayes and tree augmented naive Bayes structures on all considered data sets. Furthermore, maximizing the margin dominates the conditional likelihood approach in terms of classification performance in most cases. We provide results for a recently proposed maximum margin optimization approach based on convex relaxation. While the classification results are highly similar, our CG-based optimization is computationally up to orders of magnitude faster. Margin-optimized Bayesian network classifiers achieve classification performance comparable to support vector machines (SVMs) using fewer parameters. Moreover, we show that unanticipated missing feature values during classification can be easily processed by discriminatively optimized Bayesian network classifiers, a case where discriminative classifiers usually require mechanisms to complete unknown feature values in the data first.

  17. Maximum Entropy in Drug Discovery

    Chih-Yuan Tseng


    Drug discovery applies multidisciplinary approaches, experimental, computational or both, to identify lead compounds to treat various diseases. While conventional approaches have yielded many US Food and Drug Administration (FDA)-approved drugs, researchers continue investigating and designing better approaches to increase the success rate in the discovery process. In this article, we provide an overview of the current strategies and point out where and how the method of maximum entropy has been introduced in this area. The maximum entropy principle has its root in thermodynamics, yet since Jaynes' pioneering work in the 1950s, the maximum entropy principle has been used not only as a physics law, but also as a reasoning tool that allows us to process the information in hand with the least bias. Its applicability in various disciplines has been abundantly demonstrated. We give several examples of applications of maximum entropy in different stages of drug discovery. Finally, we discuss a promising new direction in drug discovery that is likely to hinge on ways of utilizing maximum entropy.

  18. Advanced low-complexity compression for maskless lithography data

    Dai, Vito; Zakhor, Avideh


    A direct-write maskless lithography system using a 25 nm pixel size for 50 nm feature sizes requires data rates of about 10 Tb/s to maintain the throughput of one wafer per minute per layer achieved by today's optical lithography systems. In a previous paper, we presented an architecture that achieves this data rate contingent on 25-to-1 compression of lithography data, and on implementation of a real-time decompressor fabricated on the same chip as a massively parallel array of lithography writers for 50 nm feature sizes. A number of compression techniques, including JBIG, ZIP, the novel 2D-LZ, and BZIP2, were demonstrated to achieve sufficiently high compression ratios on lithography data to make the architecture feasible, although no single technique could achieve this for all test layouts. In this paper we present a novel lossless compression algorithm called Context Copy Combinatorial Code (C4), specifically tailored for lithography data. It successfully combines the advantages of context-based modeling in JBIG and copying in ZIP to achieve higher compression ratios across all test layouts. As part of C4, we have developed a low-complexity binary entropy coding technique called combinatorial coding which is simultaneously as efficient as arithmetic coding and as fast as Huffman coding. Compression results show C4 outperforms JBIG, ZIP, BZIP2, and 2D-LZ, and achieves lossless compression ratios greater than 22 for binary layout image data, and greater than 14 for grey-pixel image data. The tradeoff between decoder buffer size, which directly affects implementation complexity, and compression ratio is examined. For the same buffer size, C4 achieves higher compression than LZ77, ZIP, and BZIP2.

  19. Compressive Shift Retrieval

    Ohlsson, Henrik; Eldar, Yonina C.; Yang, Allen Y.; Sastry, S. Shankar


    The classical shift retrieval problem considers two signals in vector form that are related by a shift. The problem is of great importance in many applications and is typically solved by maximizing the cross-correlation between the two signals. Inspired by compressive sensing, in this paper, we seek to estimate the shift directly from compressed signals. We show that under certain conditions, the shift can be recovered using fewer samples and less computation compared to the classical setup. Of particular interest is shift estimation from Fourier coefficients. We show that under rather mild conditions only one Fourier coefficient suffices to recover the true shift.
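
    The single-coefficient case is short enough to verify numerically (a noiseless toy in NumPy; the paper's conditions and noise analysis are not reproduced):

        import numpy as np

        N, true_shift = 128, 37
        rng = np.random.default_rng(2)
        x = rng.standard_normal(N)
        y = np.roll(x, true_shift)             # y[n] = x[n - s], circularly

        # For DFT index k: Y[k] = X[k] * exp(-2j*pi*k*s/N), so one coefficient's
        # phase encodes the shift; k = 1 keeps it unambiguous modulo N.
        k = 1
        w = np.exp(-2j * np.pi * k * np.arange(N) / N)
        s_hat = int(round(-np.angle(np.dot(y, w) / np.dot(x, w)) * N / (2 * np.pi * k))) % N
        print(s_hat == true_shift)             # True in the noiseless case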

  20. Graph Compression by BFS

    Alberto Apostolico


    The Web Graph is a large-scale graph that does not fit in main memory, so lossless compression methods have been proposed for it. This paper introduces a compression scheme that combines efficient storage with fast retrieval of the information in a node. The scheme exploits the properties of the Web Graph without assuming an ordering of the URLs, so that it may be applied to more general graphs. Tests on datasets in common use achieve space savings of about 10% over existing methods.

  1. Image data compression investigation

    Myrie, Carlos


    NASA's continuous growth in communications systems has increased the demand for image transmission and storage. Research and analysis were conducted on various lossy and lossless advanced data compression techniques used to improve the efficiency of transmission and storage of high-volume satellite image data, such as pulse code modulation (PCM), differential PCM (DPCM), transform coding, hybrid coding, interframe coding, and adaptive techniques. In this presentation, the fundamentals of image data compression using two of these techniques, PCM and DPCM, are presented, along with an application utilizing the two coding techniques.
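
    Of the two techniques named, DPCM has a one-screen illustration. A hedged sketch of first-order, closed-loop DPCM (generic textbook form, not the specific NASA implementation):

        import numpy as np

        STEP = 4   # quantizer step size

        def dpcm_encode(samples):
            # Code the quantized difference from the previous *reconstructed*
            # sample, so encoder and decoder predictions stay in lock-step.
            residuals, prev = [], 0
            for s in samples:
                q = int(round((s - prev) / STEP))
                residuals.append(q)
                prev += q * STEP               # decoder-visible reconstruction
            return residuals

        def dpcm_decode(residuals):
            out, prev = [], 0
            for q in residuals:
                prev += q * STEP
                out.append(prev)
            return out

        line = (128 + 40 * np.sin(np.arange(64) / 6)).astype(int)   # smooth scan line
        rec = dpcm_decode(dpcm_encode(line))
        print(max(abs(np.array(rec) - line)))   # bounded by STEP / 2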

  2. Chronic nerve root entrapment: compression and degeneration

    Vanhoestenberghe, A.


    Electrode mounts are being developed to improve electrical stimulation and recording. Some are tight-fitting, or even re-shape the nervous structure they interact with, for more selective, fascicular access. If these are to be successfully used chronically with human nerve roots, we need to know more about the possible damage caused by the long-term entrapment and possible compression of the roots following electrode implantation. As there are, to date, no such data published, this paper presents a review of the relevant literature on alternative causes of nerve root compression, and a discussion of the degeneration mechanisms observed. A chronic compression below 40 mmHg would not compromise the functionality of the root as far as electrical stimulation and recording applications are concerned. Additionally, any temporary increase in pressure, due for example to post-operative swelling, should be limited to 20 mmHg below the patient's mean arterial pressure, with a maximum of 100 mmHg. Connective tissue growth may cause a slower, but sustained, pressure increase. Therefore, mounts large enough to accommodate the root initially without compressing it, or compliant, elastic mounts that may stretch to free a larger cross-sectional area in the weeks after implantation, are recommended.

  3. Negative linear compressibility in common materials

    Miller, W.; Evans, K. E.; Marmier, A. (College of Engineering, Mathematics and Physical Science, University of Exeter, Exeter EX4 4QF, United Kingdom)


    Negative linear compressibility (NLC) is still considered an exotic property, only observed in a few obscure crystals. The vast majority of materials compress axially in all directions when loaded in hydrostatic compression; however, a few materials have been observed to expand in one or two directions under hydrostatic compression. At present, the list of materials demonstrating this unusual behaviour is confined to a small number of relatively rare crystal phases, biological materials, and designed structures, and the lack of widespread availability hinders promising technological applications. Using improved representations of elastic properties, this study revisits existing databases of elastic constants and identifies several crystals missed by previous reviews. More importantly, several common materials, including drawn polymers, certain types of paper and wood, and carbon fibre laminates, are found to display NLC. We show that NLC in these materials originates from the misalignment of polymers/fibres. Using a beam model, we propose that maximum NLC is obtained for a misalignment of 26°. The existence of such widely available materials significantly increases the prospects for applications of NLC.

  4. Back Work Ratio of Brayton Cycle

    Malaver de la Fuente M.


    This paper analyzes the relation between the temperatures, back work ratio and net work of the Brayton cycle, the cycle that describes the performance of gas turbine engines. Computational software is used to show the influence of the back work ratio (coupling ratio) and the compressor and turbine inlet temperatures in an ideal thermodynamic cycle. The results lead to the conclusion that the maximum value reached by the back work ratio depends on the range between the maximum and minimum temperatures of the Brayton cycle.
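
    The deduction is reproducible from the isentropic relations (an air-standard sketch with constant specific heats; the temperatures and pressure ratio below are illustrative):

        def brayton_bwr(t1, t3, pressure_ratio, gamma=1.4):
            # Back work ratio of the ideal Brayton cycle:
            # bwr = w_compressor / w_turbine = (T2 - T1) / (T3 - T4).
            k = (gamma - 1.0) / gamma
            t2 = t1 * pressure_ratio**k    # isentropic compression
            t4 = t3 / pressure_ratio**k    # isentropic expansion
            return (t2 - t1) / (t3 - t4)

        # Raising the turbine inlet (maximum) temperature lowers the back work ratio.
        for t3 in (1100.0, 1300.0, 1500.0):
            print(t3, round(brayton_bwr(300.0, t3, pressure_ratio=10.0), 3))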

  5. Fast-adaptive near-lossless image compression

    He, Kejing


    The purpose of image compression is to store or transmit image data efficiently. However, most compression methods emphasize the compression ratio rather than the throughput. We propose an encoding process and rules, and consequently a fast-adaptive near-lossless image compression method (FAIC) with a good compression ratio. FAIC is a single-pass method, which removes bits from each codeword, then predicts the next pixel value through localized edge detection techniques, and finally uses Golomb-Rice codes to encode the residuals. FAIC uses only logical operations, bitwise operations, additions, and subtractions. Meanwhile, it eliminates slow operations (e.g., multiplication, division, and logarithm) and the complex entropy coder, which can be a bottleneck in hardware implementations. Moreover, FAIC does not depend on any precomputed tables or parameters. Experimental results demonstrate that FAIC achieves a good balance between compression ratio and computational complexity in a certain range (e.g., peak signal-to-noise ratio > 35 dB, bits per pixel > 2). It is suitable for applications in which the amount of data is huge or the computation power is limited.
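
    The Golomb-Rice stage is simple enough to sketch (parameter selection and bit packing omitted; the zigzag mapping and code layout below are the standard textbook form, not necessarily FAIC's exact variant):

        def rice_encode(value: int, k: int) -> str:
            # Map the signed residual to a non-negative integer (zigzag), then
            # emit a unary quotient and a k-bit binary remainder.
            mapped = 2 * value if value >= 0 else -2 * value - 1
            q, r = mapped >> k, mapped & ((1 << k) - 1)
            return "1" * q + "0" + format(r, f"0{k}b")

        # Small residuals -- typical after good prediction -- get short codes.
        for residual in (0, -1, 1, -2, 5, -17):
            print(residual, rice_encode(residual, k=2))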

  6. Finding maximum JPEG image block code size

    Lakhani, Gopal


    We present a study of JPEG baseline coding. It aims to determine the minimum storage needed to buffer the JPEG Huffman code bits of 8-bit image blocks. Since DC is coded separately, and the encoder represents each AC coefficient by a run-length/AC-coefficient-level pair, the net problem is to perform an efficient search for the optimal run-level pair sequence. We formulate it as a two-dimensional, nonlinear, integer programming problem and solve it using a branch-and-bound based search method. We derive two types of constraints to prune the search space. The first is given as an upper bound on the sum of squares of the AC coefficients of a block, and it is used to discard sequences that cannot represent valid DCT blocks. The second type of constraints is based on some interesting properties of the Huffman code table, and these are used to prune sequences that cannot be part of optimal solutions. Our main result is that if the default JPEG compression setting is used, a minimum of 346 bits and a maximum of 433 bits of buffer space is sufficient for the AC code bits of 8-bit image blocks. Our implementation also pruned the search space extremely well; the first constraint reduced the initial search space of 4 nodes down to less than 2 nodes, and the second set of constraints reduced it further by 97.8%.

  7. Maximum Likelihood Analysis in the PEN Experiment

    Lehman, Martin


    The experimental determination of the π+ -->e+ ν (γ) decay branching ratio currently provides the most accurate test of lepton universality. The PEN experiment at PSI, Switzerland, aims to improve the present world average experimental precision of 3 . 3 ×10-3 to 5 ×10-4 using a stopped beam approach. During runs in 2008-10, PEN has acquired over 2 ×107 πe 2 events. The experiment includes active beam detectors (degrader, mini TPC, target), central MWPC tracking with plastic scintillator hodoscopes, and a spherical pure CsI electromagnetic shower calorimeter. The final branching ratio will be calculated using a maximum likelihood analysis. This analysis assigns each event a probability for 5 processes (π+ -->e+ ν , π+ -->μ+ ν , decay-in-flight, pile-up, and hadronic events) using Monte Carlo verified probability distribution functions of our observables (energies, times, etc). A progress report on the PEN maximum likelihood analysis will be presented. Work supported by NSF grant PHY-0970013.
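
    As a toy illustration of the event-by-event likelihood idea described above (the two Gaussian observable shapes, the single signal fraction, and all numbers below are invented for the sketch and are not PEN's probability distribution functions), one can fit process fractions by maximizing the summed log-likelihood:

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import norm

        rng = np.random.default_rng(0)
        # Mock observable: a narrow "signal" peak plus a broad "background" shape.
        energies = np.concatenate([rng.normal(70, 3, 800), rng.normal(40, 10, 200)])

        def neg_log_likelihood(params):
            f = params[0]  # fraction of events assigned to the signal process
            pdf = f * norm.pdf(energies, 70, 3) + (1 - f) * norm.pdf(energies, 40, 10)
            return -np.sum(np.log(pdf))

        res = minimize(neg_log_likelihood, x0=[0.5], bounds=[(1e-6, 1 - 1e-6)])
        print("fitted signal fraction:", res.x[0])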

  8. Fingerprints in Compressed Strings

    Bille, Philip; Cording, Patrick Hagge; Gørtz, Inge Li


    The Karp-Rabin fingerprint of a string is a type of hash value that due to its strong properties has been used in many string algorithms. In this paper we show how to construct a data structure for a string S of size N compressed by a context-free grammar of size n that answers fingerprint queries...
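
    For readers unfamiliar with the primitive, a minimal sketch of a Karp-Rabin fingerprint follows (the modulus and base below are illustrative; in the randomized setting the base is drawn uniformly at random). The property that makes it useful for grammar-compressed strings is that fingerprints of concatenations compose algebraically:

        # Karp-Rabin fingerprint phi(S) = sum_i S[i] * B^i mod P, with the
        # composition rule phi(ST) = phi(S) + B^len(S) * phi(T) mod P.
        P = (1 << 61) - 1   # a Mersenne prime (illustrative choice)
        B = 1234567891      # in practice drawn at random from [1, P-1]

        def fingerprint(s):
            h = 0
            for ch in reversed(s):
                h = (h * B + ord(ch)) % P
            return h

        def concat_fp(fp_s, fp_t, len_s):
            return (fp_s + pow(B, len_s, P) * fp_t) % P

        s, t = "gram", "mar"
        assert fingerprint(s + t) == concat_fp(fingerprint(s), fingerprint(t), len(s))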

  9. Multiple snapshot compressive beamforming

    Gerstoft, Peter; Xenaki, Angeliki; Mecklenbrauker, Christoph F.


    For sound fields observed on an array, compressive sensing (CS) reconstructs the multiple source signals at unknown directions-of-arrival (DOAs) using a sparsity constraint. The DOA estimation is posed as an underdetermined problem expressing the field at each sensor as a phase-lagged superposition...

  10. Compressive CFAR radar detection

    Anitori, L.; Otten, M.P.G.; Rossum, W.L. van; Maleki, A.; Baraniuk, R.


    In this paper we develop the first Compressive Sensing (CS) adaptive radar detector. We propose three novel architectures and demonstrate how a classical Constant False Alarm Rate (CFAR) detector can be combined with ℓ1-norm minimization. Using asymptotic arguments and the Complex Approximate Messag

  11. Compressive CFAR Radar Processing

    Anitori, L.; Rossum, W.L. van; Otten, M.P.G.; Maleki, A.; Baraniuk, R.


    In this paper we investigate the performance of a combined Compressive Sensing (CS) Constant False Alarm Rate (CFAR) radar processor under different interference scenarios using both the Cell Averaging (CA) and Order Statistic (OS) CFAR detectors. Using the properties of the Complex Approximate Mess

  12. Compression of interferometric radio-astronomical data

    Offringa, A. R.


    Context. The volume of radio-astronomical data is a considerable burden in the processing and storing of radio observations that have high time and frequency resolutions and large bandwidths. For future telescopes such as the Square Kilometre Array (SKA), the data volume will be even larger. Aims: Lossy compression of interferometric radio-astronomical data is considered to reduce the volume of visibility data and to speed up processing. Methods: A new compression technique named "Dysco" is introduced that consists of two steps: a normalization step, in which grouped visibilities are normalized to have a similar distribution; and a quantization and encoding step, which rounds values to a given quantization scheme using a dithering scheme. Several non-linear quantization schemes are tested and combined with different methods for normalizing the data. Four data sets with observations from the LOFAR and MWA telescopes are processed with different processing strategies and different combinations of normalization and quantization. The effects of compression are measured in the image plane. Results: The noise added by the lossy compression technique acts similarly to normal system noise. The accuracy of Dysco depends on the signal-to-noise ratio (S/N) of the data: noisy data can be compressed with a smaller loss of image quality. Data with typical correlator time and frequency resolutions can be compressed by a factor of 6.4 for LOFAR and 5.3 for MWA observations with less than 1% added system noise. An implementation of the compression technique is released that provides a Casacore storage manager and allows transparent encoding and decoding. Encoding and decoding is faster than the read/write speed of typical disks. Conclusions: The technique can be used for LOFAR and MWA to reduce the archival space requirements for storing observed data. Data from SKA-low will likely be compressible by the same amount as LOFAR. The same technique can be used to compress data from
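
    A minimal sketch of the two-step idea (normalize, then quantize with dither) is given below; the uniform 4-bit quantizer and the shared dither source are illustrative simplifications, not Dysco's actual non-linear quantization schemes:

        import numpy as np

        rng = np.random.default_rng(1)
        vis = rng.normal(0, 5.0, 1024)              # mock (real part of) visibilities

        scale = np.sqrt(np.mean(vis ** 2))          # normalization step
        step = 2.0 / 15                             # 4-bit uniform quantizer on [-1, 1]
        dither = rng.uniform(-0.5, 0.5, vis.shape)  # known to encoder and decoder

        def encode(x):
            return np.round(x / scale / step + dither)

        def decode(q):
            return (q - dither) * step * scale

        err = decode(encode(vis)) - vis
        print("rms quantization error:", np.sqrt(np.mean(err ** 2)))

    With subtractive dithering, the rounding error is decorrelated from the signal and behaves like additive noise, which is the property behind the observation that the compression noise acts like system noise.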

  13. The Maximum Density of Water.

    Greenslade, Thomas B., Jr.


    Discusses a series of experiments performed by Thomas Hope in 1805 which show the temperature at which water has its maximum density. Early data cast into a modern form as well as guidelines and recent data collected from the author provide background for duplicating Hope's experiments in the classroom. (JN)

  14. Abolishing the maximum tension principle

    Dabrowski, Mariusz P


    We find a series of example theories for which the relativistic limit of maximum tension $F_{max} = c^4/4G$ represented by the entropic force can be abolished. Among them are the varying constants theories, some generalized entropy models applied both to cosmological and black hole horizons, as well as some generalized uncertainty principle models.

  15. Abolishing the maximum tension principle

    Mariusz P. Da̧browski


    We find a series of example theories for which the relativistic limit of maximum tension $F_{max} = c^4/4G$ represented by the entropic force can be abolished. Among them are the varying constants theories, some generalized entropy models applied both to cosmological and black hole horizons, as well as some generalized uncertainty principle models.


    Jayroe, R. R.


    Several types of algorithms are generally used to process digital imagery such as Landsat data. The most commonly used algorithms perform the task of registration, compression, and classification. Because there are different techniques available for performing registration, compression, and classification, imagery data users need a rationale for selecting a particular approach to meet their particular needs. This collection of registration, compression, and classification algorithms was developed so that different approaches could be evaluated and the best approach for a particular application determined. Routines are included for six registration algorithms, six compression algorithms, and two classification algorithms. The package also includes routines for evaluating the effects of processing on the image data. This collection of routines should be useful to anyone using or developing image processing software. Registration of image data involves the geometrical alteration of the imagery. Registration routines available in the evaluation package include image magnification, mapping functions, partitioning, map overlay, and data interpolation. The compression of image data involves reducing the volume of data needed for a given image. Compression routines available in the package include adaptive differential pulse code modulation, two-dimensional transforms, clustering, vector reduction, and picture segmentation. Classification of image data involves analyzing the uncompressed or compressed image data to produce inventories and maps of areas of similar spectral properties within a scene. The classification routines available include a sequential linear technique and a maximum likelihood technique. The choice of the appropriate evaluation criteria is quite important in evaluating the image processing functions. The user is therefore given a choice of evaluation criteria with which to investigate the available image processing functions. All of the available

  17. Compression of FASTQ and SAM format sequencing data.

    Bonfield, James K; Mahoney, Matthew V


    Storage and transmission of the data produced by modern DNA sequencing instruments has become a major concern, which prompted the Pistoia Alliance to pose the SequenceSqueeze contest for compression of FASTQ files. We present several compression entries from the competition, Fastqz and Samcomp/Fqzcomp, including the winning entry. These are compared against existing algorithms for both reference-based compression (CRAM, Goby) and non-reference-based compression (DSRC, BAM), and other recently published competition entries (Quip, SCALCE). The tools are shown to be the new Pareto frontier for FASTQ compression, offering state-of-the-art ratios at affordable CPU costs. All programs are freely available on SourceForge. Fastqz:, fqzcomp:, and samcomp:

  18. Lossless compression of hyperspectral images using hybrid context prediction.

    Liang, Yuan; Li, Jianping; Guo, Ke


    In this letter, a new algorithm for lossless compression of hyperspectral images using hybrid context prediction is proposed. Lossless compression algorithms are typically divided into two stages, a decorrelation stage and a coding stage. The decorrelation stage supports both intraband and interband prediction. The intraband (spatial) prediction uses the median prediction model, since the median predictor is fast and efficient. The interband prediction uses hybrid context prediction, which is the combination of a linear prediction (LP) and a context prediction. Finally, the residual image of the hybrid context prediction is coded by arithmetic coding. We compare the proposed lossless compression algorithm with some existing algorithms for hyperspectral images, such as 3D-CALIC, M-CALIC, LUT, LAIS-LUT, LUT-NN, DPCM (C-DPCM), and JPEG-LS. The performance of the proposed lossless compression algorithm is evaluated. Simulation results show that our algorithm achieves high compression ratios with low complexity and computational cost.
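
    The "median prediction model" referred to above is, in JPEG-LS, the median edge detector (MED); a minimal sketch of that spatial predictor is shown below (the a/b/c neighbour naming follows the usual convention; this is the standard MED rule, not code from the paper):

        # MED predictor: predict pixel x from its west (a), north (b) and
        # north-west (c) neighbours, switching between edge and planar modes.
        def med_predict(a, b, c):
            if c >= max(a, b):
                return min(a, b)   # likely edge: pick the smaller neighbour
            if c <= min(a, b):
                return max(a, b)   # likely edge: pick the larger neighbour
            return a + b - c       # smooth region: planar prediction

        print(med_predict(a=100, b=180, c=100))  # vertical edge -> predicts 180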

  19. A new method for compression-rebuilding of IR spectra


    This work presents a new spectral data compression-rebuilding technique that translates full IR spectral data into compact codes, based on an analysis-and-comprehension encoding approach. The method has been successfully applied to a sample set of 505 IR spectra randomly picked from 100 000 spectra. The results show that the compression ratio reaches 12.7:1 with very weak curve distortion. The choice of the number and shape of the basis functions is flexible, and the IR spectra can be compressed to a fixed data size while fulfilling the distortion criteria. The data after compression are not interpretable as IR spectra; to recover the original spectra, a specific algorithm must be applied, so the method can also be used as an encryption tool. Furthermore, the method can be applied to the compression of other complex curves by choosing suitable basis functions.

  20. Wavelet/scalar quantization compression standard for fingerprint images

    Brislawn, C.M.


    The US Federal Bureau of Investigation (FBI) has recently formulated a national standard for the digitization and compression of gray-scale fingerprint images. Fingerprints are scanned at a spatial resolution of 500 dots per inch, with 8 bits of gray-scale resolution. The compression algorithm for the resulting digital images is based on adaptive uniform scalar quantization of a discrete wavelet transform subband decomposition (the wavelet/scalar quantization method). The FBI standard produces archival-quality images at compression ratios of around 15 to 1 and will allow the current database of paper fingerprint cards to be replaced by digital imagery. The compression standard specifies a class of potential encoders and a universal decoder with sufficient generality to reconstruct compressed images produced by any compliant encoder, allowing flexibility for future improvements in encoder technology. A compliance testing program is also being implemented to ensure high standards of image quality and interchangeability of data between different implementations.
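
    As a shape-of-the-idea sketch of wavelet/scalar quantization (one 2D Haar analysis level and made-up per-subband bin widths; the actual WSQ specification prescribes a particular biorthogonal filter bank, a 64-subband decomposition, and normative bin widths), consider:

        import numpy as np

        def haar2d(img):
            # One analysis level: average/detail along rows, then along columns.
            a = (img[0::2, :] + img[1::2, :]) / 2
            d = (img[0::2, :] - img[1::2, :]) / 2
            ll, hl = (a[:, 0::2] + a[:, 1::2]) / 2, (a[:, 0::2] - a[:, 1::2]) / 2
            lh, hh = (d[:, 0::2] + d[:, 1::2]) / 2, (d[:, 0::2] - d[:, 1::2]) / 2
            return ll, hl, lh, hh

        def quantize(band, width):
            return np.round(band / width).astype(np.int32)

        img = np.random.default_rng(2).integers(0, 256, (8, 8)).astype(float)
        widths = [1.0, 4.0, 4.0, 8.0]   # coarser bins for the detail subbands
        codes = [quantize(b, w) for b, w in zip(haar2d(img), widths)]
        print([c.shape for c in codes])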




    Image compression is applied to many fields, such as television broadcasting, remote sensing, and image storage. Digitized images are compressed by techniques that exploit the redundancy of the images, so that the number of bits required to represent the image can be reduced with acceptable degradation of the decoded image. The acceptable degradation of image quality depends on the application, and there are various applications where accuracy is of major concern. To achieve the objective of performance improvement with respect to decoded picture quality and compression ratio, compared to existing image compression techniques, an image compression technique using hybrid neural networks is proposed, combining two different learning networks: the autoassociative multi-layer perceptron and the self-organizing feature map.

  2. Comparative compressibility of hydrous wadsleyite

    Chang, Y.; Jacobsen, S. D.; Thomas, S.; Bina, C. R.; Smyth, J. R.; Frost, D. J.; Hauri, E. H.; Meng, Y.; Dera, P. K.


    Determining the effects of hydration on the density and elastic properties of wadsleyite, β-Mg2SiO4, is critical to constraining Earth's global geochemical water cycle. Whereas previous studies of the bulk modulus (KT) have examined either hydrous Mg-wadsleyite or anhydrous Fe-bearing wadsleyite, the combined effects of hydration and iron are under investigation. Also, whereas KT from compressibility studies is relatively well constrained by equation-of-state fitting to P-V data, the pressure derivative of the bulk modulus (K') is usually not well constrained, because of poor data resolution, uncertainty in pressure calibrations, or the narrow pressure ranges of previous single-crystal studies. Here we report the comparative compressibility of dry versus hydrous wadsleyite of Fo90 composition containing 1.9(2) wt% H2O, nearly the maximum water storage capacity of this phase. The composition was characterized by EMPA and nanoSIMS. The experiments were carried out using high-pressure, single-crystal diffraction up to 30 GPa at HPCAT, Advanced Photon Source. By loading three crystals each of hydrous and anhydrous wadsleyite together in the same diamond-anvil cell, we achieve good hkl coverage and eliminate the pressure scale as a variable in comparing the relative value of K' between the dry and hydrous samples. We used MgO as an internal diffraction standard, in addition to recording ruby fluorescence pressures. By using neon as a pressure medium and pressure steps of about 1 GPa up to 30 GPa, we obtain high-quality diffraction data for constraining the effect of hydration on the density and K' of hydrous wadsleyite. Due to hydration, the initial volume of hydrous Fo90 wadsleyite is larger than that of anhydrous Fo90 wadsleyite; however, the higher compressibility of hydrous wadsleyite leads to a volume crossover at 6 GPa. Hydration to 2 wt% H2O reduces the bulk modulus of Fo90 wadsleyite from 170(2) to 157(2) GPa, a reduction of about 7.6%. In contrast to previous

  3. Osteoporotic vertebral compression fractures: correlation between number of fractured vertebrae and C7 plumb line/sacro-femoral distance ratio

    张义龙; 孙志杰; 王雅辉; 任磊; 孙贺


    BACKGROUND: Sagittal imbalance induced by vertebral osteoporotic fractures has not been paid enough attention in previous studies. OBJECTIVE: To assess the correlation of osteoporotic vertebral compression fracture and spinal sagittal imbalance. METHODS: Sixty patients with old osteoporotic vertebral compression fractures, who were treated in the Department of Spine Surgery, the Affiliated Hospital of Chengde Medical College from February 2013 to August 2015, were enrolled in this study as the observation group. Sixty healthy old people from the physical examination center were enrolled as the control group. Whole-spine anteroposterior and lateral X-ray films were taken in both groups. The number and the location of fractured vertebrae were recorded. Sagittal parameters of both groups, including thoracic kyphotic angle, lumbar lordotic angle, T1 spinopelvic inclination angle and the C7 plumb line/sacro-femoral distance (C7PL/SFD) ratio, were measured and compared between groups. The observation group was divided into three subgroups according to the number of fractured vertebrae, i.e., a single-vertebra fracture subgroup, a double-vertebrae fracture subgroup and a triple-or-more-vertebrae fracture subgroup. The C7PL/SFD ratio of the three subgroups was compared, and the correlation between the number of fractured vertebrae and the C7PL/SFD ratio was analyzed. RESULTS AND CONCLUSION: (1) The thoracic kyphotic angle of the observation group was bigger than that of the control group (P

  4. Numerical study of the scaling of the maximum kinetic energy per unit length for imploding Z-pinch liner

    Zeng Zheng-Zhong; Qiu Ai-Ci


    Numerical computation based on a zero-dimensional thin-plasma-shell model has been carried out to study the scaling of the maximum kinetic energy per unit length, the current amplitude and the compression ratio for the imploding Z-pinch liner driven by peaked current pulses. A dimensionless scaling constant of 0.9 with an error less than 10% is extracted at the optimal choice of the current and liner parameters. Deviation of the chosen experimental parameter from the optimal exerts a minor influence on the kinetic energy for wider-shaped and slower-decaying pulses, but the influence becomes significant for narrower-shaped and faster-decaying pulses. The computation is in reasonable agreement with experimental data from the Z, Saturn, Blackjack 5 and Qiangguang-I liners.

  5. Information Content in Uniformly Discretized Gaussian Noise:. Optimal Compression Rates

    Romeo, August; Gaztañaga, Enrique; Barriga, Jose; Elizalde, Emilio

    We approach the theoretical problem of compressing a signal dominated by Gaussian noise. We present expressions for the compression ratio which can be reached, in the light of Shannon's noiseless coding theorem, for a linearly quantized stochastic Gaussian signal (noise). The compression ratio decreases logarithmically with the amplitude of the frequency spectrum P(f) of the noise. Entropy values and compression rates are shown to depend on the shape of this power spectrum, given different normalizations. The cases of white noise (w.n.), f^n power-law noise (including 1/f noise), (w.n.+1/f) noise, and piecewise (w.n.+1/f | w.n.+1/f^2) noise are discussed, while quantitative behaviors and useful approximations are provided.
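
    The central quantity is easy to reproduce numerically. The sketch below computes the Shannon entropy of a linearly quantized Gaussian and the compression ratio it implies under noiseless coding; the 16-bit raw word size used as the baseline is an assumption for illustration:

        import numpy as np
        from scipy.stats import norm

        def entropy_bits(sigma, step):
            # Entropy of a zero-mean Gaussian quantized with uniform bin width `step`.
            edges = np.arange(-12 * sigma, 12 * sigma + step, step)
            p = np.diff(norm.cdf(edges, scale=sigma))
            p = p[p > 0]
            return -np.sum(p * np.log2(p))

        for sigma in [1.0, 4.0, 16.0]:
            h = entropy_bits(sigma, step=1.0)
            print(f"sigma={sigma:5.1f}  H={h:5.2f} bits  ratio vs 16 bits: {16 / h:4.1f}")

    For a fixed bin width the entropy grows like log2(sigma), so the achievable compression ratio decreases logarithmically with the noise amplitude, as stated above.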

  6. Compressive Detection Using Sub-Nyquist Radars for Sparse Signals

    Ying Sun


    This paper investigates the compressive detection problem using sub-Nyquist radars, which is well suited to the scenario of high bandwidths in real-time processing because it would significantly reduce the computational burden and save power consumption and computation time. A compressive generalized likelihood ratio test (GLRT) detector for sparse signals is proposed for sub-Nyquist radars without ever reconstructing the signal involved. The performance of the compressive GLRT detector is analyzed and the theoretical bounds are presented. The compressive GLRT detection performance of sub-Nyquist radars is also compared to the traditional GLRT detection performance of conventional radars, which employ traditional analog-to-digital conversion (ADC) at Nyquist sampling rates. Simulation results demonstrate that the former can perform almost as well as the latter with a very small fraction of the number of measurements required by traditional detection in relatively high signal-to-noise ratio (SNR) cases.

  7. Ovalization of Tubes Under Bending and Compression

    Demer, L J; Kavanaugh, E S


    An empirical equation has been developed that gives the approximate amount of ovalization for tubes under bending loads. Tests were made on tubes in the d/t range from 6 to 14, the latter d/t ratio being in the normal landing gear range. Within the range of the series of tests conducted, the increase in ovalization due to a compression load in combination with a bending load was very small. The bending load, being the principal factor in producing the ovalization, is a rather complex function of the bending moment, d/t ratio, cantilever length, and distance between opposite bearing faces. (author)

  8. TEM Video Compressive Sensing

    Stevens, Andrew J.; Kovarik, Libor; Abellan, Patricia; Yuan, Xin; Carin, Lawrence; Browning, Nigel D.


    One of the main limitations of imaging at high spatial and temporal resolution during in-situ TEM experiments is the frame rate of the camera being used to image the dynamic process. While the recent development of direct detectors has provided the hardware to achieve frame times approaching 0.1 ms, the cameras are expensive and must replace existing detectors. In this paper, we examine the use of coded aperture compressive sensing methods [1, 2, 3, 4] to increase the frame rate of any camera with simple, low-cost hardware modifications. The coded aperture approach allows multiple sub-frames to be coded and integrated into a single camera frame during the acquisition process, and then extracted upon readout using statistical compressive sensing inversion. Our simulations show that it should be possible to increase the speed of any camera by at least an order of magnitude. Compressive sensing (CS) combines sensing and compression in one operation, and thus provides an approach that could further improve the temporal resolution while correspondingly reducing the electron dose rate. Because the signal is measured in a compressive manner, fewer total measurements are required. When applied to TEM video capture, compressive imaging could improve acquisition speed and reduce the electron dose rate. CS is a recent concept, and has come to the forefront due to the seminal work of Candès [5]. Since the publication of Candès, there has been enormous growth in the application of CS and development of CS variants. For electron microscopy applications, the concept of CS has also been recently applied to electron tomography [6], and reduction of electron dose in scanning transmission electron microscopy (STEM) imaging [7]. To demonstrate the applicability of coded aperture CS video reconstruction for atomic level imaging, we simulate compressive sensing on observations of Pd nanoparticles and Ag nanoparticles during exposure to high temperatures and other environmental

  9. A hyperspectral image compression algorithm based on wavelet transformation and fractal composition (AWFC)

    HU; Xingtang; ZHANG; Bing; ZHANG; Xia; ZHENG; Lanfen; TONG; Qingxi


    Starting with a fractal-based image-compression algorithm based on wavelet transformation for hyperspectral images, the authors address the large number of spectral bands obtained with hyperspectral remote sensing. Because large amounts of data and limited bandwidth complicate the storage and transmission of data measured at the terabyte level, it is important to compress image data acquired by hyperspectral sensors such as MODIS, PHI, and OMIS; otherwise, conventional lossless compression algorithms cannot reach adequate compression ratios. Other lossy compression methods can reach high compression ratios but lack good image fidelity, especially for hyperspectral image data. Among the third generation of image compression algorithms, fractal image compression based on wavelet transformation is superior to traditional compression methods, because it has high compression ratios and good image fidelity, and requires less computing time. To keep the spectral dimension invariable, the authors compared the results of two compression algorithms based on the storage-file structures of BSQ and of BIP, and improved the HV and quadtree partitioning and domain-range matching algorithms in order to accelerate their encode/decode efficiency. The authors' Hyperspectral Image Process and Analysis System (HIPAS) software used a VC++ 6.0 integrated development environment (IDE), with which good experimental results were obtained. Possible modifications of the algorithm and limitations of the method are also discussed.

  10. Tree compression with top trees

    Bille, Philip; Gørtz, Inge Li; Landau, Gad M.;


    We introduce a new compression scheme for labeled trees based on top trees. Our compression scheme is the first to simultaneously take advantage of internal repeats in the tree (as opposed to the classical DAG compression that only exploits rooted subtree repeats) while also supporting fast...

  11. Tree compression with top trees

    Bille, Philip; Gørtz, Inge Li; Landau, Gad M.


    We introduce a new compression scheme for labeled trees based on top trees [3]. Our compression scheme is the first to simultaneously take advantage of internal repeats in the tree (as opposed to the classical DAG compression that only exploits rooted subtree repeats) while also supporting fast...

  12. Tree compression with top trees

    Bille, Philip; Gørtz, Inge Li; Landau, Gad M.


    We introduce a new compression scheme for labeled trees based on top trees. Our compression scheme is the first to simultaneously take advantage of internal repeats in the tree (as opposed to the classical DAG compression that only exploits rooted subtree repeats) while also supporting fast...

  13. Reinterpreting Compression in Infinitary Rewriting

    Ketema, J.; Tiwari, Ashish


    Departing from a computational interpretation of compression in infinitary rewriting, we view compression as a degenerate case of standardisation. The change in perspective comes about via two observations: (a) no compression property can be recovered for non-left-linear systems and (b) some standar

  14. Lossless Compression of Broadcast Video

    Martins, Bo; Eriksen, N.; Faber, E.


    We investigate several techniques for lossless and near-lossless compression of broadcast video. The emphasis is placed on the emerging international standard for compression of continuous-tone still images, JPEG-LS, due to its excellent compression performance and moderate complexity. Except for one...

  15. Maximum Genus of Strong Embeddings

    Er-ling Wei; Yan-pei Liu; Han Ren


    The strong embedding conjecture states that any 2-connected graph has a strong embedding on some surface. It implies the circuit double cover conjecture: any 2-connected graph has a circuit double cover. The converse is not true, but for 3-regular graphs the two conjectures are equivalent. In this paper, a characterization of graphs having a strong embedding with exactly 3 faces, which is the strong embedding of maximum genus, is given. In addition, some graphs with this property are provided. More generally, an upper bound on the maximum genus of strong embeddings of a graph is presented. Lastly, it is shown that the interpolation theorem holds for planar Halin graphs.

  16. Greylevel Difference Classification Algorithm inFractal Image Compression

    陈毅松; 卢坚; 孙正兴; 张福炎


    This paper proposes the notion of a greylevel difference classification algorithm in fractal image compression. An example of the greylevel difference classification algorithm is then given as an improvement of the quadrant greylevel and variance classification in the quadtree-based encoding algorithm. The algorithm incorporates the frequency feature in spatial analysis using the notion of average quadrant greylevel difference, leading to an enhancement in terms of encoding time, PSNR value and compression ratio.
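
    A plausible reading of the block feature named above (quadrant greylevel means and their average absolute difference; the thresholds that would turn this number into discrete block classes are left out and would be implementation choices) can be sketched as:

        import numpy as np

        def avg_quadrant_greylevel_difference(block):
            # Mean greylevel of each quadrant of a square block, then the average
            # absolute deviation of the quadrant means from their overall mean.
            h, w = block.shape[0] // 2, block.shape[1] // 2
            m = np.array([block[:h, :w].mean(), block[:h, w:].mean(),
                          block[h:, :w].mean(), block[h:, w:].mean()])
            return np.mean(np.abs(m - m.mean()))

        block = np.random.default_rng(3).integers(0, 256, (8, 8))
        print(avg_quadrant_greylevel_difference(block))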

  17. H.264/AVC Video Compressed Traces: Multifractal and Fractal Analysis

    Samčović Andreja


    Publicly available long video traces encoded according to H.264/AVC were analyzed from the fractal and multifractal points of view. It was shown that such video traces, like other compressed videos (H.261, H.263, and MPEG-4 Version 2), exhibit an inherent long-range dependency, that is, fractal, property. Moreover, they have high bit rate variability, particularly at higher compression ratios. Such signals may be better characterized by multifractal (MF) analysis, since this approach describes both local and global features of the process. From the multifractal spectra of the frame-size video traces it was shown that a higher compression ratio produces broader and less regular MF spectra, indicating a stronger MF nature and the existence of additive components in the video traces. Considering the individual frame types (I, P, and B) and their MF spectra, one can confirm the additive nature of compressed video and the particular influence of these frame types on the whole MF spectrum. Since compressed video occupies a major part of transmission bandwidth, results obtained from MF analysis of compressed video may contribute to more accurate modeling of modern teletraffic. Moreover, by an appropriate choice of the method for estimating MF quantities, an inverse MF analysis is possible: from a derived MF spectrum of an observed signal it is possible to recognize and extract the parts of the signal that are characterized by particular values of the multifractal parameters. Intensive simulations confirm the applicability and efficiency of MF analysis of compressed video.

  18. Onboard low-complexity compression of solar stereo images.

    Wang, Shuang; Cui, Lijuan; Cheng, Samuel; Stanković, Lina; Stanković, Vladimir


    We propose an adaptive distributed compression solution using particle filtering that tracks correlation, as well as performing disparity estimation, at the decoder side. The proposed algorithm is tested on stereo solar images captured by the twin-satellite system of NASA's Solar TErrestrial RElations Observatory (STEREO) project. Our experimental results show improved compression performance with respect to a benchmark compression scheme, accurate correlation estimation by our proposed particle-based belief propagation algorithm, and a significant peak signal-to-noise ratio improvement over traditional separate bit-plane decoding without dynamic correlation and disparity estimation.

  19. D(Maximum)=P(Argmaximum)

    Remizov, Ivan D


    In this note, we characterize the subdifferential of the maximum functional defined on the space of all real-valued continuous functions on a given metric compact set. For a given argument $f$, it coincides with the set of all probability measures on the set of points maximizing $f$ on the initial compact set. This complete characterization lies at the heart of several important identities in microeconomics, such as Roy's identity, Shephard's lemma, as well as duality theory in production and linear programming.
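
    In symbols, the characterization stated above reads as follows (a transcription of the abstract's claim, with $\mathcal{P}(K)$ denoting the probability measures on the compact set $K$):

        M(f) = \max_{x \in K} f(x), \qquad
        \partial M(f) = \{\, \mu \in \mathcal{P}(K) : \operatorname{supp}\mu \subseteq \operatorname{Argmax}_{x \in K} f(x) \,\}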

  20. The Testability of Maximum Magnitude

    Clements, R.; Schorlemmer, D.; Gonzalez, A.; Zoeller, G.; Schneider, M.


    Recent disasters caused by earthquakes of unexpectedly large magnitude (such as Tohoku) illustrate the need for reliable assessments of the seismic hazard. Estimates of the maximum possible magnitude M at a given fault or in a particular zone are essential parameters in probabilistic seismic hazard assessment (PSHA), but their accuracy remains untested. In this study, we discuss the testability of long-term and short-term M estimates and the limitations that arise from testing such rare events. Of considerable importance is whether or not those limitations imply a lack of testability of a useful maximum magnitude estimate, and whether this should have any influence on current PSHA methodology. We use a simple extreme value theory approach to derive a probability distribution for the expected maximum magnitude in a future time interval, and we perform a sensitivity analysis on this distribution to determine if there is a reasonable avenue available for testing M estimates as they are commonly reported today: devoid of an appropriate probability distribution of their own and estimated only for infinite time (or relatively large untestable periods). Our results imply that any attempt at testing such estimates is futile, and that the distribution is highly sensitive to M estimates only under certain optimal conditions that are rarely observed in practice. In the future, we suggest that PSHA modelers be brutally honest about the uncertainty of M estimates, or find a way to decrease its influence on the estimated hazard.

  1. Alternative Multiview Maximum Entropy Discrimination.

    Chao, Guoqing; Sun, Shiliang


    Maximum entropy discrimination (MED) is a general framework for discriminative estimation based on the maximum entropy and maximum margin principles, and can produce hard-margin support vector machines under some assumptions. Recently, the multiview version of MED, multiview MED (MVMED), was proposed. In this paper, we try to explore a more natural MVMED framework by assuming two separate distributions p1(Θ1) over the first-view classifier parameter Θ1 and p2(Θ2) over the second-view classifier parameter Θ2. We name the new MVMED framework alternative MVMED (AMVMED); it enforces the posteriors of the two view margins to be equal. The proposed AMVMED is more flexible than the existing MVMED because, compared with MVMED, which optimizes one relative entropy, AMVMED assigns one relative entropy term to each of the two views, thus incorporating a tradeoff between them. We give the detailed solving procedure, which can be divided into two steps. The first step is solving the optimization problem without considering the equal margin posteriors from the two views; in the second step, we impose the equal posteriors. Experimental results on multiple real-world data sets verify the effectiveness of AMVMED, and comparisons with MVMED are also reported.

  2. A Lower Bound on Adiabatic Heating of Compressed Turbulence for Simulation and Model Validation

    Davidovits, Seth; Fisch, Nathaniel J.


    The energy in turbulent flow can be amplified by compression, when the compression occurs on a timescale shorter than the turbulent dissipation time. This mechanism may play a part in sustaining turbulence in various astrophysical systems, including molecular clouds. The amount of turbulent amplification depends on the net effect of the compressive forcing and turbulent dissipation. By giving an argument for a bound on this dissipation, we give a lower bound for the scaling of the turbulent velocity with the compression ratio in compressed turbulence. That is, turbulence undergoing compression will be enhanced at least as much as the bound given here, subject to a set of caveats that will be outlined. Used as a validation check, this lower bound suggests that some models of compressing astrophysical turbulence are too dissipative. The technique used highlights the relationship between compressed turbulence and decaying turbulence.

  3. A lower bound on adiabatic heating of compressed turbulence for simulation and model validation

    Davidovits, Seth


    The energy in turbulent flow can be amplified by compression, when the compression occurs on a timescale shorter than the turbulent dissipation time. This mechanism may play a part in sustaining turbulence in various astrophysical systems, including molecular clouds. The amount of turbulent amplification depends on the net effect of the compressive forcing and turbulent dissipation. By giving an argument for a bound on this dissipation, we give a lower bound for the scaling of the turbulent velocity with compression ratio in compressed turbulence. That is, turbulence undergoing compression will be enhanced at least as much as the bound given here, subject to a set of caveats that will be outlined. Used as a validation check, this lower bound suggests that some simulations and models of compressing astrophysical turbulence are too dissipative. The technique used highlights the relationship between compressed turbulence and decaying turbulence.

  4. Algorithm for Compressing Time-Series Data

    Hawkins, S. Edward, III; Darlington, Edward Hugo


    An algorithm based on Chebyshev polynomials effects lossy compression of time-series data or other one-dimensional data streams (e.g., spectral data) that are arranged in blocks for sequential transmission. The algorithm was developed for use in transmitting data from spacecraft scientific instruments to Earth stations. In spite of its lossy nature, the algorithm preserves the information needed for scientific analysis. The algorithm is computationally simple, yet compresses data streams by factors much greater than two. The algorithm is not restricted to spacecraft or scientific uses: it is applicable to time-series data in general. The algorithm can also be applied to general multidimensional data that have been converted to time-series data, a typical example being image data acquired by raster scanning. However, unlike most prior image-data-compression algorithms, this algorithm neither depends on nor exploits the two-dimensional spatial correlations that are generally present in images. In order to understand the essence of this compression algorithm, it is necessary to understand that the net effect of this algorithm and the associated decompression algorithm is to approximate the original stream of data as a sequence of finite series of Chebyshev polynomials. For the purpose of this algorithm, a block of data or interval of time for which a Chebyshev polynomial series is fitted to the original data is denoted a fitting interval. Chebyshev approximation has two properties that make it particularly effective for compressing serial data streams with minimal loss of scientific information: The errors associated with a Chebyshev approximation are nearly uniformly distributed over the fitting interval (this is known in the art as the "equal error property"); and the maximum deviations of the fitted Chebyshev polynomial from the original data have the smallest possible values (this is known in the art as the "min-max property").
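
    A minimal sketch of the core step (fit a low-order Chebyshev series per fitting interval and keep only the coefficients; the block length, degree, and test signal below are illustrative, not the flight parameters) is:

        import numpy as np
        from numpy.polynomial import chebyshev as C

        rng = np.random.default_rng(4)
        t = np.linspace(-1, 1, 256)                 # one fitting interval
        block = np.sin(3 * t) + 0.2 * t**2 + 0.01 * rng.normal(size=t.size)

        coeffs = C.chebfit(t, block, deg=8)         # 256 samples -> 9 coefficients
        reconstructed = C.chebval(t, coeffs)

        print("compression factor:", block.size / coeffs.size)
        print("max abs error:", np.max(np.abs(block - reconstructed)))

    Note that this sketch uses a least-squares fit in the Chebyshev basis for brevity; the equal-error and min-max properties cited above hold for true Chebyshev approximation, which the fitted series approximates closely for smooth data.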

  5. Building indifferentiable compression functions from the PGV compression functions

    Gauravaram, P.; Bagheri, Nasour; Knudsen, Lars Ramkilde


    Preneel, Govaerts and Vandewalle (PGV) analysed the security of single-block-length block cipher based compression functions assuming that the underlying block cipher has no weaknesses. They showed that 12 out of 64 possible compression functions are collision and (second) preimage resistant. Black...... cipher is ideal. We address the problem of building indifferentiable compression functions from the PGV compression functions. We consider a general form of 64 PGV compression functions and replace the linear feed-forward operation in this generic PGV compression function with an ideal block cipher...... independent of the one used in the generic PGV construction. This modified construction is called a generic modified PGV (MPGV). We analyse indifferentiability of the generic MPGV construction in the ideal cipher model and show that 12 out of 64 MPGV compression functions in this framework...

  6. Compressive Principal Component Pursuit

    Wright, John; Min, Kerui; Ma, Yi


    We consider the problem of recovering a target matrix that is a superposition of low-rank and sparse components, from a small set of linear measurements. This problem arises in compressed sensing of structured high-dimensional signals such as videos and hyperspectral images, as well as in the analysis of transformation invariant low-rank recovery. We analyze the performance of the natural convex heuristic for solving this problem, under the assumption that measurements are chosen uniformly at random. We prove that this heuristic exactly recovers low-rank and sparse terms, provided the number of observations exceeds the number of intrinsic degrees of freedom of the component signals by a polylogarithmic factor. Our analysis introduces several ideas that may be of independent interest for the more general problem of compressed sensing and decomposing superpositions of multiple structured signals.
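
    The "natural convex heuristic" is principal component pursuit subject to the measurement constraint; in the notation common to this literature (with $M$ the target superposition, $\mathcal{A}$ the linear measurement operator, and $\lambda$ the usual trade-off weight, e.g. $\lambda = 1/\sqrt{\max(m,n)}$ in the non-compressive case) it reads:

        \min_{L,\,S} \; \|L\|_{*} + \lambda \|S\|_{1}
        \quad \text{subject to} \quad \mathcal{A}(L + S) = \mathcal{A}(M)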

  7. On Network Functional Compression

    Feizi, Soheil


    In this paper, we consider different aspects of the network functional compression problem where computation of a function (or, some functions) of sources located at certain nodes in a network is desired at receiver(s). The rate region of this problem has been considered in the literature under certain restrictive assumptions, particularly in terms of the network topology, the functions and the characteristics of the sources. In this paper, we present results that significantly relax these assumptions. Firstly, we consider this problem for an arbitrary tree network and asymptotically lossless computation. We show that, for depth one trees with correlated sources, or for general trees with independent sources, a modularized coding scheme based on graph colorings and Slepian-Wolf compression performs arbitrarily closely to rate lower bounds. For a general tree network with independent sources, optimal computation to be performed at intermediate nodes is derived. We introduce a necessary and sufficient condition...

  8. Hamming Compressed Sensing

    Zhou, Tianyi


    Compressed sensing (CS) and 1-bit CS cannot directly recover quantized signals and require time-consuming recovery. In this paper, we introduce Hamming compressed sensing (HCS), which directly recovers a k-bit quantized signal of dimension $n$ from its 1-bit measurements via invoking $n$ times a Kullback-Leibler-divergence-based nearest neighbor search. Compared with CS and 1-bit CS, HCS allows the signal to be dense, takes considerably less (linear) recovery time and requires substantially fewer measurements ($\mathcal{O}(\log n)$). Moreover, HCS recovery can accelerate the subsequent 1-bit CS dequantizer. We study a quantized recovery error bound of HCS for general signals and an "HCS+dequantizer" recovery error bound for sparse signals. Extensive numerical simulations verify the appealing accuracy, robustness, efficiency and consistency of HCS.

  9. Compressive Spectral Renormalization Method

    Bayindir, Cihan


    In this paper, a novel numerical scheme for finding the sparse self-localized states of a nonlinear system of equations with missing spectral data is introduced. As in Petviashvili's method and the spectral renormalization method, the governing equation is transformed into the Fourier domain, but the iterations are performed for a far smaller number of spectral components (M) than the number (N) used in classical versions of these methods. After the convergence criterion is achieved for the M components, the N-component signal is reconstructed from the M components by using the l1 minimization technique of compressive sampling. This method can be named the compressive spectral renormalization (CSRM) method. The main advantage of the CSRM is that it is capable of finding the sparse self-localized states of the evolution equation(s) with much of the spectral data missing.

  10. Uniaxial Compressive Properties of Ultra High Toughness Cementitious Composite

    CAI Xiangrong; XU Shilang


    Uniaxial compression tests were conducted to characterize the main compressive performance of ultra-high-toughness cementitious composite (UHTCC) in terms of strength and toughness and to obtain its stress-strain relationships. The compressive strengths investigated range from 30 MPa to 60 MPa. Complete stress-strain curves were directly obtained, and the strength indexes, including uniaxial compressive strength, compressive strain at peak stress, elastic modulus and Poisson's ratio, were calculated. Comparisons between UHTCC and its matrix were also carried out to understand the fiber effect on the compressive strength indexes. Three dimensionless toughness indexes were calculated, which either represent the relative improvement in energy absorption capacity due to fiber addition or provide an indication of the behavior relative to a rigid-plastic material. Moreover, two new toughness indexes, named post-crack deformation energy and equivalent compressive strength, were proposed and calculated with the aim of linking the compressive toughness of UHTCC with the existing design concepts for concrete. The failure mode is also given. The results provide material characteristics for the practical engineering application of UHTCC.

  11. Super-Spatial Structure Prediction Compression of Medical Image Sequences

    M. Ferni Ukrit


    The demand to preserve raw image data for further processing has increased with the rapid growth of digital technology. In the medical industry, images generally come in the form of sequences that are highly correlated. These images are very important, and hence a lossless compression technique is required to reduce the number of bits needed to store these image sequences and the time needed to transmit them over the network. The proposed compression method combines super-spatial structure prediction with inter-frame coding that includes motion estimation and motion compensation to achieve a higher compression ratio. Motion estimation and motion compensation are performed with a fast block-matching process, the inverse diamond search method. To enhance the compression ratio further, we propose a new scheme based on Bose-Chaudhuri-Hocquenghem (BCH) codes. Results are compared in terms of compression ratio and bits per pixel with the prior art. Experimental results of our proposed algorithm for medical image sequences achieve 30% more reduction than other state-of-the-art lossless image compression methods.

  12. Speech Compression and Synthesis


    phonological rules combined with diphone synthesis improved the algorithms used by the phonetic synthesis program for gain normalization and time... phonetic vocoder, spectral template. This report describes our work for the past two years on speech compression and synthesis. Since there was an... From Block 19: speech recognition, phoneme recognition, initial design for a phonetic recognition program. We also recorded and partially labeled a

  13. Photogrammetric point cloud compression for tactical networks

    Madison, Andrew C.; Massaro, Richard D.; Wayant, Clayton D.; Anderson, John E.; Smith, Clint B.


    We report progress toward the development of a compression schema suitable for use in the Army's Common Operating Environment (COE) tactical network. The COE facilitates the dissemination of information across all Warfighter echelons through the establishment of data standards and networking methods that coordinate the readout and control of a multitude of sensors in a common operating environment. When integrated with a robust geospatial mapping functionality, the COE enables force tracking, remote surveillance, and heightened situational awareness for Soldiers at the tactical level. Our work establishes a point cloud compression algorithm through image-based deconstruction and photogrammetric reconstruction of three-dimensional (3D) data that is suitable for dissemination within the COE. An open-source visualization toolkit was used to deconstruct 3D point cloud models based on ground mobile light detection and ranging (LiDAR) into a series of images and associated metadata that can be easily transmitted on a tactical network. Stereo photogrammetric reconstruction is then conducted on the received image stream to reveal the transmitted 3D model. The reported method boasts nominal compression ratios typically on the order of 250 while retaining tactical information and accurate georegistration. Our work advances the scope of persistent intelligence, surveillance, and reconnaissance through the development of 3D visualization and data compression techniques relevant to the tactical operations environment.

  14. Image Compression using Space Adaptive Lifting Scheme

    Ramu Satyabama


    Problem statement: Digital images play an important role both in daily-life applications and in areas of research and technology. Due to the increasing traffic caused by multimedia information and the digitized form of representation of images, image compression has become a necessity. Approach: The wavelet transform has demonstrated excellent image compression performance. New algorithms based on lifting-style implementation of wavelet transforms are presented in this study. Adaptivity is introduced in lifting by choosing the prediction operator based on the local properties of the image. The prediction filters are chosen based on edge detection and the relative local variance. In regions where the image is locally smooth, we use higher-order predictors, and near edges we reduce the order and thus the length of the predictor. Results: We applied the adaptive prediction algorithms to the test images. The original image is transformed using the adaptive lifting-based wavelet transform, compressed using the Set Partitioning In Hierarchical Trees (SPIHT) algorithm, and the performance is compared with the popular 9/7 wavelet transform. The performance metric peak signal-to-noise ratio (PSNR) for the reconstructed image is computed. Conclusion: The proposed adaptive algorithms give better performance than the 9/7 wavelet, the most popular wavelet transform. Lifting allows us to incorporate adaptivity and nonlinear operators into the transform. The proposed methods efficiently represent edges and appear promising for image compression; they reduce edge artifacts and ringing and give improved PSNR for edge-dominated images.

  15. Shock compression of nitrobenzene

    Kozu, Naoshi; Arai, Mitsuru; Tamura, Masamitsu; Fujihisa, Hiroshi; Aoki, Katsutoshi; Yoshida, Masatake; Kondo, Ken-Ichi


    The Hugoniot (4-30 GPa) and the isotherm (1-7 GPa) of nitrobenzene have been investigated by shock and static compression experiments. Nitrobenzene has the most basic structure of the nitroaromatic compounds, which are widely used as energetic materials, but nitrobenzene has been considered not to explode despite the fact that its calculated heat of detonation, about 1 kcal/g, is similar to that of TNT. Explosive plane-wave generators and a diamond anvil cell were used for shock and static compression, respectively. The obtained Hugoniot consists of two linear segments, with a kink around 10 GPa. The upper segment agrees well with the Hugoniot of detonation products calculated by the KHT code, so it is expected that nitrobenzene detonates in that region. Nitrobenzene solidifies under 1 GPa of static compression, and the isotherm of solid nitrobenzene was obtained by the X-ray diffraction technique. Comparing the Hugoniot and the isotherm, nitrobenzene is in the liquid phase under the shock conditions of these experiments. From the expected phase diagram, shocked nitrobenzene seems to remain a metastable liquid in the solid-phase region of that diagram.

  16. Compressed sensing electron tomography

    Leary, Rowan (Department of Materials Science and Metallurgy, University of Cambridge, Pembroke Street, Cambridge CB2 3QZ, United Kingdom); Saghi, Zineb; Midgley, Paul A. (Department of Materials Science and Metallurgy, University of Cambridge, Pembroke Street, Cambridge CB2 3QZ, United Kingdom); Holland, Daniel J. (Department of Chemical Engineering and Biotechnology, University of Cambridge, New Museums Site, Pembroke Street, Cambridge CB2 3RA, United Kingdom)


    The recent mathematical concept of compressed sensing (CS) asserts that a small number of well-chosen measurements can suffice to reconstruct signals that are amenable to sparse or compressible representation. In addition to powerful theoretical results, the principles of CS are being exploited increasingly across a range of experiments to yield substantial performance gains relative to conventional approaches. In this work we describe the application of CS to electron tomography (ET) reconstruction and demonstrate the efficacy of CS–ET with several example studies. Artefacts present in conventional ET reconstructions such as streaking, blurring of object boundaries and elongation are markedly reduced, and robust reconstruction is shown to be possible from far fewer projections than are normally used. The CS–ET approach enables more reliable quantitative analysis of the reconstructions as well as novel 3D studies from extremely limited data. - Highlights: • Compressed sensing (CS) theory and its application to electron tomography (ET) is described. • The practical implementation of CS–ET is outlined and its efficacy demonstrated with examples. • High fidelity tomographic reconstruction is possible from a small number of images. • The CS–ET reconstructions can be more reliably segmented and analysed quantitatively. • CS–ET is applicable to different image content by choice of an appropriate sparsifying transform.

  17. Ultraspectral sounder data compression review

    Bormin HUANG; Hunglung HUANG


    Ultraspectral sounders provide an enormous amount of measurements to advance our knowledge of weather and climate applications. The use of robust data compression techniques will be beneficial for ultraspectral data transfer and archiving. This paper reviews the progress in lossless compression of ultraspectral sounder data. Various transform-based, prediction-based, and clustering-based compression methods are covered. Also studied is a preprocessing scheme for data reordering to improve compression gains. All the coding experiments are performed on the ultraspectral compression benchmark dataset collected from the NASA Atmospheric Infrared Sounder (AIRS) observations.



    In this paper, a technique for quasi-lossless compression based on image restoration is presented. The compression technique includes three steps, namely bit compression, correlation removal, and image restoration based on the theory of the modulation transfer function (MTF). The quasi-lossless compression achieves high speed, and the quality of the reconstructed image after restoration meets the quasi-lossless criterion at a higher compression ratio. Experiments on TM and SPOT images show that the technique is reasonable and applicable.

  19. FPGA Implementation of 5/3 Integer DWT for Image Compression

    M Puttaraju


    The wavelet transform has emerged as a cutting-edge technology in the field of image compression. Wavelet-based coding provides substantial improvements in picture quality at higher compression ratios. In this paper, an approach is proposed for the compression of an image using the 5/3 (lossless) integer discrete wavelet transform (DWT). The proposed architecture is based on a new and fast lifting-scheme approach for the (5,3) filter in the DWT. An attempt is made here to establish a standard for a data compression algorithm applied to two-dimensional digital spatial image data from payload instruments.
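
    A minimal sketch of the reversible (5,3) lifting pair named above (one predict step and one update step with integer rounding, as in the JPEG 2000 reversible transform; the simple boundary clamping and even signal length are brevity choices) is:

        # Forward (5,3) integer lifting: split into even/odd samples, predict the
        # odds from neighbouring evens, then update the evens from the new odds.
        def lift53_forward(x):
            s, d = list(x[0::2]), list(x[1::2])
            d = [d[i] - ((s[i] + s[min(i + 1, len(s) - 1)]) >> 1) for i in range(len(d))]
            s = [s[i] + ((d[max(i - 1, 0)] + d[i] + 2) >> 2) for i in range(len(s))]
            return s, d   # approximation (low-pass), detail (high-pass)

        def lift53_inverse(s, d):
            s = [s[i] - ((d[max(i - 1, 0)] + d[i] + 2) >> 2) for i in range(len(s))]
            d = [d[i] + ((s[i] + s[min(i + 1, len(s) - 1)]) >> 1) for i in range(len(d))]
            x = [0] * (len(s) + len(d))
            x[0::2], x[1::2] = s, d
            return x

        x = [10, 12, 14, 200, 12, 11, 10, 9]
        assert lift53_inverse(*lift53_forward(x)) == x   # perfectly reversible

    Because every step adds or subtracts an integer-rounded function of the other polyphase channel, the inverse simply replays the steps in reverse order with opposite signs, which is what makes the transform lossless.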

  20. An Energy Efficient Compressed Sensing Framework for the Compression of Electroencephalogram Signals

    Simon Fauvel


    The use of wireless body sensor networks is gaining popularity in monitoring and communicating information about a person's health. In such applications, the amount of data transmitted by the sensor node should be minimized, because the energy available in these battery-powered sensors is limited. In this paper, we study the wireless transmission of electroencephalogram (EEG) signals. We propose the use of a compressed sensing (CS) framework to efficiently compress these signals at the sensor node. Our framework exploits both the temporal correlation within EEG signals and the spatial correlations amongst the EEG channels. We show that our framework is up to eight times more energy efficient than the typical wavelet compression method in terms of compression and encoding computations and wireless transmission. We also show that for a fixed compression ratio, our method achieves a better reconstruction quality than the CS-based state-of-the-art method. We finally demonstrate that our method is robust to measurement noise and to packet loss and that it is applicable to a wide range of EEG signal types.
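
    The energy argument hinges on the sensing step being cheap: with a sparse binary measurement matrix, each compressed sample is a sum of only a few signal samples. A toy sketch of such an encoder follows; the sizes and the matrix design are illustrative assumptions, not the paper's construction.

        import numpy as np

        rng = np.random.default_rng(2)
        n, m, d = 512, 128, 4                  # samples, measurements, ones per column

        Phi = np.zeros((m, n))
        for j in range(n):                     # d unit entries per column
            Phi[rng.choice(m, d, replace=False), j] = 1.0

        eeg_window = rng.standard_normal(n)    # stand-in for one EEG channel window
        y = Phi @ eeg_window                   # compressed frame: m/n = 25% of samples
        print(y.shape)                         # only additions needed at the sensor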

  1. Prediction of 28-day Compressive Strength of Concrete from Early Strength and Accelerated Curing Parameters

    T.R. Neelakantan; S. Ramasundaram; Shanmugavel, R.; R. Vinoth


    Predicting the 28-day compressive strength of concrete has been an important research task for many years. In this study, concrete specimens were cured in two phases, initially at room temperature for a maximum of 30 h and later at a higher temperature for accelerated curing for a maximum of 3 h. Using the early strength obtained after the two-phase curing and the curing parameters, regression equations were developed to predict the 28-day compressive strength. For the accelerated curing (higher temper...
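
    The kind of regression equation described can be fitted in a few lines of numpy. The model form (early strength plus curing temperature and time as predictors) and all numbers below are hypothetical stand-ins, not the study's data.

        import numpy as np

        # columns: early strength (MPa), curing temperature (deg C), curing time (h)
        X = np.array([[12.1, 60, 2.0], [14.3, 70, 2.5], [16.8, 80, 3.0],
                      [11.0, 60, 2.5], [15.2, 75, 2.0], [17.5, 85, 3.0]])
        y28 = np.array([30.5, 33.8, 37.9, 29.2, 35.1, 39.4])  # 28-day strength (MPa)

        A = np.column_stack([np.ones(len(X)), X])             # add intercept term
        coef, *_ = np.linalg.lstsq(A, y28, rcond=None)        # least-squares fit
        print("intercept and coefficients:", coef)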

  2. Cacti with maximum Kirchhoff index

    Wang, Wen-Rui; Pan, Xiang-Feng


    The concept of resistance distance was first proposed by Klein and Randić. The Kirchhoff index $Kf(G)$ of a graph $G$ is the sum of resistance distances between all pairs of vertices in $G$. A connected graph $G$ is called a cactus if each block of $G$ is either an edge or a cycle. Let $Cat(n;t)$ be the set of connected cacti possessing $n$ vertices and $t$ cycles, where $0\leq t \leq \lfloor\frac{n-1}{2}\rfloor$. In this paper, the maximum Kirchhoff index of cacti is characterized, as well...
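
    The quantity itself is easy to compute from the Laplacian spectrum via the standard identity Kf(G) = n * sum of 1/mu_i over the nonzero Laplacian eigenvalues mu_i. A small sketch on a toy cactus (a triangle with a pendant edge; the example graph is ours, not from the paper):

        import numpy as np

        edges = [(0, 1), (1, 2), (2, 0), (2, 3)]    # triangle plus pendant vertex
        n = 4
        L = np.zeros((n, n))
        for u, v in edges:                          # build the graph Laplacian
            L[u, u] += 1; L[v, v] += 1
            L[u, v] -= 1; L[v, u] -= 1

        mu = np.linalg.eigvalsh(L)                  # eigenvalues in ascending order
        kf = n * np.sum(1.0 / mu[1:])               # skip the single zero eigenvalue
        print("Kirchhoff index:", kf)               # 19/3 for this example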

  3. Generic maximum likely scale selection

    Pedersen, Kim Steenstrup; Loog, Marco; Markussen, Bo


    The fundamental problem of local scale selection is addressed by means of a novel principle, which is based on maximum likelihood estimation. The principle is generally applicable to a broad variety of image models and descriptors, and provides a generic scale estimation methodology. The focus is on second order moments of multiple measurement outputs at a fixed location. These measurements, which reflect local image structure, consist in the cases considered here of Gaussian derivatives taken at several scales and/or having different derivative orders.

  4. n-Gram-Based Text Compression

    Vu H. Nguyen


    We propose an efficient method for compressing Vietnamese text using n-gram dictionaries. It achieves a significant compression ratio in comparison with those of state-of-the-art methods on the same dataset. Given a text, the proposed method first splits it into n-grams and then encodes them based on n-gram dictionaries. In the encoding phase, we use a sliding window with a size that ranges from bigram to five-gram to obtain the best encoding stream. Each n-gram is encoded by two to four bytes according to its corresponding n-gram dictionary. We collected a 2.5 GB text corpus from some Vietnamese news agencies to build n-gram dictionaries from unigram to five-gram, and obtained dictionaries with a total size of 12 GB. In order to evaluate our method, we collected a testing set of 10 text files with different sizes. The experimental results indicate that our method achieves a compression ratio of around 90% and outperforms state-of-the-art methods.
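
    The sliding-window encoding is essentially a greedy longest-match dictionary lookup. A toy sketch with a hypothetical dictionary follows; the real dictionaries in the paper total 12 GB, and the codes are two to four bytes rather than small integers.

        # hypothetical n-gram dictionary: phrase -> code
        ngram_dict = {("xin", "chao"): 0, ("viet", "nam"): 1, ("xin",): 2,
                      ("chao",): 3, ("viet",): 4, ("nam",): 5, ("ban",): 6}

        def encode(words, max_n=5):
            codes, i = [], 0
            while i < len(words):
                for n in range(min(max_n, len(words) - i), 0, -1):  # longest match first
                    gram = tuple(words[i:i + n])
                    if gram in ngram_dict:
                        codes.append(ngram_dict[gram])
                        i += n
                        break
                else:                     # unknown word: would be escaped in practice
                    codes.append(-1)
                    i += 1
            return codes

        print(encode("xin chao viet nam".split()))   # -> [0, 1]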

  5. n-Gram-Based Text Compression

    Duong, Hieu N.; Snasel, Vaclav


    We propose an efficient method for compressing Vietnamese text using n-gram dictionaries. It achieves a significant compression ratio in comparison with those of state-of-the-art methods on the same dataset. Given a text, the proposed method first splits it into n-grams and then encodes them based on n-gram dictionaries. In the encoding phase, we use a sliding window with a size that ranges from bigram to five-gram to obtain the best encoding stream. Each n-gram is encoded by two to four bytes according to its corresponding n-gram dictionary. We collected a 2.5 GB text corpus from some Vietnamese news agencies to build n-gram dictionaries from unigram to five-gram, and obtained dictionaries with a total size of 12 GB. In order to evaluate our method, we collected a testing set of 10 text files with different sizes. The experimental results indicate that our method achieves a compression ratio of around 90% and outperforms state-of-the-art methods. PMID:27965708

  6. A Study on Homogeneous Charge Compression Ignition Gasoline Engines

    Kaneko, Makoto; Morikawa, Koji; Itoh, Jin; Saishu, Youhei

    A new engine concept consisting of HCCI combustion for low and midrange loads and spark ignition combustion for high loads was introduced. The timing of the intake valve closing was adjusted to alter the negative valve overlap and effective compression ratio to provide suitable HCCI conditions. The effect of mixture formation on auto-ignition was also investigated using a direct injection engine. As a result, HCCI combustion was achieved with a relatively low compression ratio when the intake air was heated by internal EGR. The resulting combustion was at a high thermal efficiency, comparable to that of modern diesel engines, and produced almost no NOx emissions or smoke. The mixture stratification increased the local A/F concentration, resulting in higher reactivity. A wide range of combustible A/F ratios was used to control the compression ignition timing. Photographs showed that the flame filled the entire chamber during combustion, reducing both emissions and fuel consumption.

  7. Indentation of elastically soft and plastically compressible solids

    Needleman, A.; Tvergaard, Viggo; Van der Giessen, E.


    The effect of soft elasticity, i.e., a relatively small value of the ratio of Young's modulus to yield strength, and of plastic compressibility on the indentation of isotropically hardening elastic-viscoplastic solids is investigated. Calculations are carried out for indentation of a perfectly sticking rigid sharp indenter into a cylinder, modeling indentation of a half space. The material is characterized by a finite strain elastic-viscoplastic constitutive relation that allows for plastic as well as elastic compressibility. Both soft elasticity and plastic compressibility significantly reduce the ratio of nominal indentation hardness to yield strength. A linear relation is found between the nominal indentation hardness and the logarithm of the ratio of Young's modulus to yield strength, but with a different coefficient than reported in previous studies. The nominal indentation hardness decreases...

  8. Auto-shape lossless compression of pharynx and esophagus fluoroscopic images.

    Arif, Arif Sameh; Mansor, Sarina; Logeswaran, Rajasvaran; Karim, Hezerul Abdul


    The massive number of medical images produced by fluoroscopic and other conventional diagnostic imaging devices demands a considerable amount of space for data storage. This paper proposes an effective method for lossless compression of fluoroscopic images. The main contribution in this paper is the extraction of the regions of interest (ROI) in fluoroscopic images using appropriate shapes. The extracted ROI is then effectively compressed using customized correlation and the combination of Run Length and Huffman coding to increase the compression ratio. The experimental results show that the proposed method is able to improve the compression ratio by 400% as compared to that of traditional methods.
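
    The run-length stage is the simple half of that combination: on ROI-masked rows, long constant background runs collapse to a few (value, count) pairs before Huffman coding. A generic sketch, not the paper's customized variant:

        def rle_encode(seq):
            # collapse runs of equal values into (value, run_length) pairs
            out, run, count = [], seq[0], 1
            for s in seq[1:]:
                if s == run:
                    count += 1
                else:
                    out.append((run, count))
                    run, count = s, 1
            out.append((run, count))
            return out

        print(rle_encode([0, 0, 0, 0, 7, 7, 3]))   # -> [(0, 4), (7, 2), (3, 1)]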

  9. Economics and Maximum Entropy Production

    Lorenz, R. D.


    Price differentials, sales volume and profit can be seen as analogues of temperature difference, heat flow and work or entropy production in the climate system. One aspect in which economic systems exhibit more clarity than the climate is that the empirical and/or statistical mechanical tendency for systems to seek a maximum in production is very evident in economics, in that the profit motive is very clear. Noting the common link between 1/f noise, power laws and Self-Organized Criticality with Maximum Entropy Production, the power law fluctuations in security and commodity prices are not inconsistent with the analogy. There is an additional thermodynamic analogy, in that scarcity is valued. A commodity concentrated among a few traders is valued highly by the many who do not have it. The market therefore encourages via prices the spreading of those goods among a wider group, just as heat tends to diffuse, increasing entropy. I explore some empirical price-volume relationships of metals and meteorites in this context.


    Jan Markowski


    Ultra-high-performance concrete (UHPC) sandwich structures with composite coating serve as multipurpose load-bearing elements. The UHPC's extraordinary compressive strength is used in a multi-material construction element, while issues regarding the concrete's brittle failure behaviour are properly addressed. A hollow-section concrete core is covered by two steel tubes. The outer steel tube is wrapped in a composite material. By this design, UHPC is used in a material- and shape-optimised way with a low dead weight ratio[1] concerning the load-bearing capacity and stability[2]. The cross-section's hollow shape optimises the construction's buckling stability while saving self-weight. The composite coating on the column's outside functions both as a layer increasing the construction's durability and as a structural component increasing the maximum and the residual load capacity. Investigations on the construction's structural behaviour were performed.

  11. Shunting ratios for MHD flows

    Birzvalk, Yu.


    The shunting ratio and the local shunting ratio, pertaining to currents induced by a magnetic field in a flow channel, are properly defined and systematically reviewed on the basis of the Lagrange criterion. Their definition is based on the energy balance and related to dimensionless parameters characterizing an MHD flow, these parameters deriving from the Hartmann number, the hydrodynamic and magnetic Reynolds numbers, and the Lundquist number. These shunting ratios, relating the uniform current density in the core of a stream (or the equivalent mean current density) to the short-circuit (maximum) current density, are given here for a slot channel with nonconducting or conducting walls, for a conduction channel with heavy side rails, and for an MHD flow around bodies. 5 references, 1 figure.

  12. Compressed imaging by sparse random convolution.

    Marcos, Diego; Lasser, Theo; López, Antonio; Bourquard, Aurélien


    The theory of compressed sensing (CS) shows that signals can be acquired at sub-Nyquist rates if they are sufficiently sparse or compressible. Since many images bear this property, several acquisition models have been proposed for optical CS. An interesting approach is random convolution (RC). In contrast with single-pixel CS approaches, RC allows for the parallel capture of visual information on a sensor array as in conventional imaging approaches. Unfortunately, the RC strategy is difficult to implement as is in practical settings due to important contrast-to-noise-ratio (CNR) limitations. In this paper, we introduce a modified RC model circumventing such difficulties by considering measurement matrices involving sparse non-negative entries. We then implement this model based on a slightly modified microscopy setup using incoherent light. Our experiments demonstrate the suitability of this approach for dealing with distinct CS scenarios, including 1-bit CS.

  13. Improved SDT Process Data Compression Algorithm


    Process data compression and trending are essential for improving control system performance. The Swing Door Trending (SDT) algorithm is well designed to adapt to the process trend while retaining the merit of simplicity. But it cannot handle outliers and adapt to the fluctuations of actual data. An Improved SDT (ISDT) algorithm is proposed in this paper. The effectiveness and applicability of the ISDT algorithm are demonstrated by computations on both synthetic and real process data. By applying an adaptive recording limit as well as outlier-detecting rules, a higher compression ratio is achieved and outliers are identified and eliminated. The fidelity of the algorithm is also improved. It can be used in both online and batch modes, and integrated into existing software packages without change.
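
    For context, the classic SDT step that the ISDT builds on can be written compactly: a point is archived only when the "swinging doors" (the envelope of slopes within plus or minus comp_dev of the last archived point) close. Below is a plain sketch of the basic algorithm, without the paper's adaptive limit or outlier rules.

        def sdt_compress(t, v, comp_dev):
            # returns indices of points to keep; linear interpolation between the
            # kept points stays within comp_dev of every sample
            kept = [0]
            t0, v0 = t[0], v[0]                    # last archived point
            up, lo = float("inf"), float("-inf")   # upper/lower door slopes
            prev = 0
            for i in range(1, len(t)):
                up = min(up, (v[i] + comp_dev - v0) / (t[i] - t0))
                lo = max(lo, (v[i] - comp_dev - v0) / (t[i] - t0))
                if lo > up:                        # doors closed: archive prev point
                    kept.append(prev)
                    t0, v0 = t[prev], v[prev]
                    up = (v[i] + comp_dev - v0) / (t[i] - t0)
                    lo = (v[i] - comp_dev - v0) / (t[i] - t0)
                prev = i
            kept.append(len(t) - 1)                # always keep the final point
            return kept

        t = list(range(10))
        v = [0, 0.1, 0.2, 0.3, 2.0, 4.0, 6.1, 8.0, 8.1, 8.2]
        print(sdt_compress(t, v, comp_dev=0.5))    # -> [0, 3, 7, 9]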

  14. Convective heat transport in compressible fluids.

    Furukawa, Akira; Onuki, Akira


    We present hydrodynamic equations of compressible fluids in gravity as a generalization of those in the Boussinesq approximation used for nearly incompressible fluids. They account for adiabatic processes taking place throughout the cell (the piston effect) and those taking place within plumes (the adiabatic temperature gradient effect). Performing two-dimensional numerical analysis, we reveal some unique features of plume generation and convection in transient and steady states of compressible fluids. As the critical point is approached, the overall temperature changes induced by plume arrivals at the boundary walls are amplified, giving rise to overshoot behavior in transient states and significant noise in the temperature in steady states. The velocity field is suggested to assume a logarithmic profile within boundary layers. Random reversal of macroscopic shear flow is examined in a cell with unit aspect ratio. We also present a simple scaling theory for moderate Rayleigh numbers.

  15. Multiple and single snapshot compressive beamforming

    Gerstoft, Peter; Xenaki, Angeliki; Mecklenbrauker, Christoph F.


    For a sound field observed on a sensor array, compressive sensing (CS) reconstructs the direction of arrival (DOA) of multiple sources using a sparsity constraint. The DOA estimation is posed as an underdetermined problem by expressing the acoustic pressure at each sensor as a phase-lagged superposition ... signal-to-noise ratio. The superior resolution of CS is demonstrated with vertical array data from the SWellEx96 experiment for coherent multi-paths.

  16. Information optimal compressive sensing: static measurement design.

    Ashok, Amit; Huang, Liang-Chih; Neifeld, Mark A


    The compressive sensing paradigm exploits the inherent sparsity/compressibility of signals to reduce the number of measurements required for reliable reconstruction/recovery. In many applications additional prior information beyond signal sparsity, such as structure in sparsity, is available, and current efforts are mainly limited to exploiting that information exclusively in the signal reconstruction problem. In this work, we describe an information-theoretic framework that incorporates the additional prior information as well as appropriate measurement constraints in the design of compressive measurements. Using a Gaussian binomial mixture prior we design and analyze the performance of optimized projections relative to random projections under two specific design constraints and different operating measurement signal-to-noise ratio (SNR) regimes. We find that the information-optimized designs yield significant, in some cases nearly an order of magnitude, improvements in the reconstruction performance with respect to the random projections. These improvements are especially notable in the low measurement SNR regime where the energy-efficient design of optimized projections is most advantageous. In such cases, the optimized projection design departs significantly from random projections in terms of their incoherence with the representation basis. In fact, we find that maximizing the incoherence of projections with the representation basis is not necessarily optimal in the presence of additional prior information and finite measurement noise/error. We also apply the information-optimized projections to the compressive image formation problem for natural scenes, and the improved visual quality of reconstructed images with respect to random projections and other compressive measurement designs affirms the overall effectiveness of the information-theoretic design framework.

  17. The compression of liquids

    Whalley, E.

    The compression of liquids can be measured either directly, by applying a pressure and noting the volume change, or indirectly, by measuring the magnitude of the fluctuations of the local volume. The methods used in Ottawa for the direct measurement of the compression are reviewed. The mean-square deviation of the volume from the mean at constant temperature can be measured by X-ray and neutron scattering at low angles, and the mean-square deviation at constant entropy can be measured by measuring the speed of sound. The speed of sound can be measured either acoustically, using an acoustic transducer, or by Brillouin spectroscopy. Brillouin spectroscopy can also be used to study the shear waves in liquids if the shear relaxation time is > ~10 ps. The relaxation time of water is too short for the shear waves to be studied in this way, but they do occur in the low-frequency Raman and infrared spectra. The response of the structure of liquids to pressure can be studied by neutron scattering, and recently experiments have been done at Atomic Energy of Canada Ltd, Chalk River, on liquid D2O up to 15.6 kbar. They show that the near-neighbor intermolecular O-D and D-D distances are less spread out and at shorter distances at high pressure. Raman spectroscopy can also provide information on the structural response. It seems that the O-O distance in water decreases much less with pressure than it does in ice. Presumably, the bending of O-O-O angles tends to increase the O-O distance, and so to largely compensate the compression due to the direct effect of pressure.
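
    The two indirect routes correspond to standard fluctuation and sound-speed relations, stated here for reference (textbook results, not specific to this review):

        \kappa_T = -\frac{1}{V}\left(\frac{\partial V}{\partial p}\right)_T
                 = \frac{\langle (\Delta V)^2 \rangle}{k_B\, T\, \langle V \rangle},
        \qquad
        \kappa_S = -\frac{1}{V}\left(\frac{\partial V}{\partial p}\right)_S
                 = \frac{1}{\rho\, c^2},

    so low-angle scattering yields the isothermal compressibility from volume fluctuations, while the measured sound speed c (with density rho) yields the adiabatic one.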

  18. Compressive Transient Imaging

    Sun, Qilin


    High resolution transient/3D imaging technology is of high interest in both scientific research and commercial applications. Nowadays, all transient imaging methods suffer from low resolution or time-consuming mechanical scanning. We propose a new method based on TCSPC and compressive sensing to achieve high resolution transient imaging with a capturing process of several seconds. A picosecond laser sends a series of equal-interval pulses while the synchronized SPAD camera's detecting gate window has a precise phase delay at each cycle. After capturing enough points, we are able to make up a whole signal. By inserting a DMD device into the system, we are able to modulate all the frames of data using binary random patterns to reconstruct a super resolution transient/3D image later. Because the low fill factor of the SPAD sensor would make a compressive sensing scenario ill-conditioned, we designed and fabricated a diffractive microlens array. We propose a new CS reconstruction algorithm which is able to denoise at the same time for measurements suffering from Poisson noise. Instead of a single SPAD sensor, we chose a SPAD array because it drastically reduces the required number of measurements and the reconstruction time. Furthermore, it is not easy to reconstruct a high resolution image with only a single sensor, while for an array it suffices to reconstruct small patches from a few measurements. In this thesis, we evaluated the reconstruction methods using both clean measurements and versions corrupted by Poisson noise. The results show how the integration over the layers influences the image quality, and our algorithm works well when the measurements suffer from non-trivial Poisson noise. It is a breakthrough in the areas of both transient imaging and compressive sensing.

  19. Statistical Mechanical Analysis of Compressed Sensing Utilizing Correlated Compression Matrix

    Takeda, Koujin


    We investigate a reconstruction limit of compressed sensing for a reconstruction scheme based on the L1-norm minimization utilizing a correlated compression matrix with a statistical mechanics method. We focus on the compression matrix modeled as the Kronecker-type random matrix studied in research on multi-input multi-output wireless communication systems. We found that strong one-dimensional correlations between expansion bases of original information slightly degrade reconstruction performance.

  20. Osmotic compressibility of soft colloidal systems.

    Tan, Beng H; Tam, Kam C; Lam, Yee C; Tan, Chee B


    A turbidimetric analysis of particle interactions in model pH-responsive microgel systems, consisting of methacrylic acid-ethyl acrylate cross-linked with diallyl phthalate in colloidal suspensions, is described. The structure factor at zero scattering angle, S(0), can be determined with good precision for wavelengths greater than 500 nm, and it measures the dispersion's resistance to particle compression. The structure factor of microgels at various cross-link densities and ionic strengths falls onto a master curve when plotted against the effective volume fraction, phi(eff) = kc, which clearly suggests that the particle interaction potential and osmotic compressibility are functions of effective volume fraction. In addition, the deviation of the structure factor, S(0), of our microgel systems from the structure factor of hard spheres, S(PY)(0), exhibits a maximum at phi(eff) approximately 0.2. Beyond this point the osmotic de-swelling force exceeds the osmotic pressure inside the soft particles, resulting in particle shrinkage. Good agreement was obtained when the structural properties of our microgel systems obtained from turbidimetric analysis and rheology measurements were compared. Therefore, a simple turbidimetric analysis of these model pH-responsive microgel systems permits a quantitative evaluation of the factors governing particle osmotic compressibility.

  1. Compressive full waveform lidar

    Yang, Weiyi; Ke, Jun


    To avoid a high-bandwidth detector, a fast A/D converter, and a large memory disk, a compressive full waveform LIDAR system, which uses a temporally modulated laser instead of a pulsed laser, is studied in this paper. Full waveform data from NEON (National Ecological Observatory Network) are used. Random binary patterns are used to modulate the source. To achieve 0.15 m ranging resolution, a 100 MSPS A/D converter is assumed to make measurements. The SPIRAL algorithm with a canonical basis is employed when Poisson noise is considered in the low-illumination condition.

  2. Objects of maximum electromagnetic chirality

    Fernandez-Corbaton, Ivan


    We introduce a definition of the electromagnetic chirality of an object and show that it has an upper bound. The upper bound is attained if and only if the object is transparent for fields of one handedness (helicity). Additionally, electromagnetic duality symmetry, i.e. helicity preservation upon scattering, turns out to be a necessary condition for reciprocal scatterers to attain the upper bound. We use these results to provide requirements for the design of such extremal scatterers. The requirements can be formulated as constraints on the polarizability tensors for dipolar scatterers or as material constitutive relations. We also outline two applications for objects of maximum electromagnetic chirality: A twofold resonantly enhanced and background free circular dichroism measurement setup, and angle independent helicity filtering glasses.

  3. Maximum mutual information regularized classification

    Wang, Jim Jing-Yan


    In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
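
    The regularizer's target quantity is easy to make concrete: mutual information between the discrete classification response and the true label, estimated from their joint distribution. A toy computation follows; the counts are illustrative, not from the paper's experiments.

        import numpy as np

        # joint counts: rows = true label, columns = classifier response
        J = np.array([[40, 5], [10, 45]], dtype=float)
        P = J / J.sum()                                      # joint distribution
        px = P.sum(axis=1, keepdims=True)                    # label marginal
        py = P.sum(axis=0, keepdims=True)                    # response marginal

        nz = P > 0
        mi = float((P[nz] * np.log2(P[nz] / (px @ py)[nz])).sum())   # bits
        print("I(response; label) =", mi)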

  4. Role of total reactive oxide ratios on strength development in activated fly ash

    Bhagath Singh G.V.P.


    The role of individual reactive components and process variables such as molarity and temperature on the alkaline activation of different low-calcium fly ashes is explored. The oxide ratios in the activated system, based on the total silica (total SiO2) in the system, consisting of the reactive silica contributed by the fly ash, and the reactive alumina in the fly ash, are shown to provide consistent results for achieving the highest strength. For a given total SiO2 content in the system, an increase in the sodium content above a certain dosage does not influence the ultimate compressive strength. An optimum total SiO2 to Na2O ratio, equal to 2.66, is established for achieving maximum strength. The role of temperature within the range of 60°C-85°C is not significant when the molarity of NaOH is high. A N-A-S-H type gel with a Si/Al ratio ranging between 2.5 and 3.0 and an Al/Na ratio varying between 1.30 and 0.9 is formed on decreasing the total SiO2/Na2O ratio from 6.55 to 2.66.

  5. The strong maximum principle revisited

    Pucci, Patrizia; Serrin, James

    In this paper we first present the classical maximum principle due to E. Hopf, together with an extended commentary and discussion of Hopf's paper. We emphasize the comparison technique invented by Hopf to prove this principle, which has since become a main mathematical tool for the study of second order elliptic partial differential equations and has generated an enormous number of important applications. While Hopf's principle is generally understood to apply to linear equations, it is in fact also crucial in nonlinear theories, such as those under consideration here. In particular, we shall treat and discuss recent generalizations of the strong maximum principle, and also the compact support principle, for the case of singular quasilinear elliptic differential inequalities, under generally weak assumptions on the quasilinear operators and the nonlinearities involved. Our principal interest is in necessary and sufficient conditions for the validity of both principles; in exposing and simplifying earlier proofs of corresponding results; and in extending the conclusions to wider classes of singular operators than previously considered. The results have unexpected ramifications for other problems, as will develop from the exposition, e.g. two point boundary value problems for singular quasilinear ordinary differential equations (Sections 3 and 4); the exterior Dirichlet boundary value problem (Section 5); the existence of dead cores and compact support solutions, i.e. dead cores at infinity (Section 7); Euler-Lagrange inequalities on a Riemannian manifold (Section 9); comparison and uniqueness theorems for solutions of singular quasilinear differential inequalities (Section 10). The case of p-regular elliptic inequalities is briefly considered in Section 11.

  6. Theoretical Estimate of Maximum Possible Nuclear Explosion

    Bethe, H. A.


    The maximum nuclear accident which could occur in a Na-cooled, Be-moderated, Pu- and power-producing reactor is estimated theoretically. (T.R.H.) Results of nuclear calculations for a variety of compositions of fast, heterogeneous, sodium-cooled, U-235-fueled, plutonium- and power-producing reactors are reported. Core compositions typical of plate-, pin-, or wire-type fuel elements and with uranium as metal, alloy, and oxide were considered. These compositions included atom ratios in the following ranges: U-238 to U-235 from 2 to 8; sodium to U-235 from 1.5 to 12; iron to U-235 from 5 to 18; and vanadium to U-235 from 11 to 33. Calculations were performed to determine the effect of lead and iron reflectors between the core and blanket. Both natural and depleted uranium were evaluated as the blanket fertile material. Reactors were compared on a basis of conversion ratio, specific power, and the product of both. The calculated results are in general agreement with the experimental results from fast reactor assemblies. An analysis of the effect of new cross-section values as they became available is included. (auth)

  7. Maximum life spiral bevel reduction design

    Savage, M.; Prasanna, M. G.; Coe, H. H.


    Optimization is applied to the design of a spiral bevel gear reduction for maximum life at a given size. A modified feasible directions search algorithm permits a wide variety of inequality constraints and exact design requirements to be met with low sensitivity to initial values. Gear tooth bending strength and minimum contact ratio under load are included in the active constraints. The optimal design of the spiral bevel gear reduction includes the selection of bearing and shaft proportions in addition to gear mesh parameters. System life is maximized subject to a fixed back-cone distance of the spiral bevel gear set for a specified speed ratio, shaft angle, input torque, and power. Significant parameters in the design are: the spiral angle, the pressure angle, the numbers of teeth on the pinion and gear, and the location and size of the four support bearings. Interpolated polynomials expand the discrete bearing properties and proportions into continuous variables for gradient optimization. After finding the continuous optimum, a designer can analyze near optimal designs for comparison and selection. Design examples show the influence of the bearing lives on the gear parameters in the optimal configurations. For a fixed back-cone distance, optimal designs with larger shaft angles have larger service lives.

  8. Minimum length-maximum velocity

    Panes, Boris


    We study a framework where the hypothesis of a minimum length in space-time is complemented with the notion of reference frame invariance. It turns out natural to interpret the action of the obtained reference frame transformations in the context of doubly special relativity. As a consequence of this formalism we find interesting connections between the minimum length properties and the modified velocity-energy relation for ultra-relativistic particles. For example, we can predict the ratio between the minimum lengths in space and time using the results from OPERA on superluminal neutrinos.

  9. Compressive sensing in medical imaging.

    Graff, Christian G; Sidky, Emil Y


    The promise of compressive sensing, exploitation of compressibility to achieve high quality image reconstructions with less data, has attracted a great deal of attention in the medical imaging community. At the Compressed Sensing Incubator meeting held in April 2014 at OSA Headquarters in Washington, DC, presentations were given summarizing some of the research efforts ongoing in compressive sensing for x-ray computed tomography and magnetic resonance imaging systems. This article provides an expanded version of these presentations. Sparsity-exploiting reconstruction algorithms that have gained popularity in the medical imaging community are studied, and examples of clinical applications that could benefit from compressive sensing ideas are provided. The current and potential future impact of compressive sensing on the medical imaging field is discussed.

  10. Data compression on the sphere

    McEwen, J D; Eyers, D M; 10.1051/0004-6361/201015728


    Large data-sets defined on the sphere arise in many fields. In particular, recent and forthcoming observations of the anisotropies of the cosmic microwave background (CMB) made on the celestial sphere contain approximately three and fifty mega-pixels respectively. The compression of such data is therefore becoming increasingly important. We develop algorithms to compress data defined on the sphere. A Haar wavelet transform on the sphere is used as an energy compression stage to reduce the entropy of the data, followed by Huffman and run-length encoding stages. Lossless and lossy compression algorithms are developed. We evaluate compression performance on simulated CMB data, Earth topography data and environmental illumination maps used in computer graphics. The CMB data can be compressed to approximately 40% of its original size for essentially no loss to the cosmological information content of the data, and to approximately 20% if a small cosmological information loss is tolerated. For the topographic and il...
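
    One level of the Haar transform used as the energy-compaction stage can be sketched on a 1-D signal (the paper works on the sphere; this is the flat analogue, for illustration only). The averages retain the energy while the differences cluster near zero, which is what makes the subsequent Huffman and run-length stages effective.

        import numpy as np

        def haar_step(x):
            # one level of the orthonormal Haar transform (even-length input)
            x = np.asarray(x, dtype=float)
            avg = (x[0::2] + x[1::2]) / np.sqrt(2.0)    # approximation coefficients
            diff = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail coefficients
            return avg, diff

        avg, diff = haar_step(np.array([9.0, 7.0, 3.0, 5.0]))
        print(avg, diff)    # details are small where the signal is locally smooth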

  11. Energy transfer in compressible turbulence

    Bataille, Francoise; Zhou, YE; Bertoglio, Jean-Pierre


    This letter investigates the compressible energy transfer process. We extend a methodology developed originally for incompressible turbulence and use databases from numerical simulations of weakly compressible turbulence based on the Eddy-Damped Quasi-Normal Markovian (EDQNM) closure. In order to analyze the compressible mode directly, the well-known Helmholtz decomposition is used. While the compressible component has very little influence on the solenoidal part, we found that almost all of the compressible turbulence energy is received from its solenoidal counterpart. We focus on the most fundamental building block of the energy transfer process, the triadic interactions. This analysis leads us to conclude that, at low turbulent Mach number, the compressible energy transfer process is dominated by a local radiative transfer (absorption) in both the inertial and energy-containing ranges.

  12. Perceptually Lossless Wavelet Compression

    Watson, Andrew B.; Yang, Gloria Y.; Solomon, Joshua A.; Villasenor, John


    The Discrete Wavelet Transform (DWT) decomposes an image into bands that vary in spatial frequency and orientation. It is widely used for image compression. Measures of the visibility of DWT quantization errors are required to achieve optimal compression. Uniform quantization of a single band of coefficients results in an artifact that is the sum of a lattice of random amplitude basis functions of the corresponding DWT synthesis filter, which we call DWT uniform quantization noise. We measured visual detection thresholds for samples of DWT uniform quantization noise in Y, Cb, and Cr color channels. The spatial frequency of a wavelet is r 2^(-L), where r is the display visual resolution in pixels/degree and L is the wavelet level. Amplitude thresholds increase rapidly with spatial frequency. Thresholds also increase from Y to Cr to Cb, and with orientation from low-pass to horizontal/vertical to diagonal. We propose a mathematical model for DWT noise detection thresholds that is a function of level, orientation, and display visual resolution. This allows calculation of a 'perceptually lossless' quantization matrix for which all errors are in theory below the visual threshold. The model may also be used as the basis for adaptive quantization schemes.

  13. Compressive Sensing DNA Microarrays

    Richard G. Baraniuk


    Compressive sensing microarrays (CSMs) are DNA-based sensors that operate using group testing and compressive sensing (CS) principles. In contrast to conventional DNA microarrays, in which each genetic sensor is designed to respond to a single target, in a CSM each sensor responds to a set of targets. We study the problem of designing CSMs that simultaneously account for both the constraints from CS theory and the biochemistry of probe-target DNA hybridization. An appropriate cross-hybridization model is proposed for CSMs, and several methods are developed for probe design and CS signal recovery based on the new model. Lab experiments suggest that in order to achieve accurate hybridization profiling, consensus probe sequences are required to have sequence homology of at least 80% with all targets to be detected. Furthermore, out-of-equilibrium datasets are usually as accurate as those obtained from equilibrium conditions. Consequently, one can use CSMs in applications in which only short hybridization times are allowed.

  14. Splines in Compressed Sensing

    S. Abhishek


    It is well understood that in any data acquisition system, reduction in the amount of data reduces time and energy; the major trade-off is the quality of the outcome: normally, the less data sensed, the lower the quality. Compressed sensing (CS) offers a solution for sampling below the Nyquist rate. The challenging problem of increasing the reconstruction quality from a smaller number of samples of an unprocessed data set is addressed here by the use of representative coordinates selected from different orders of splines. We have made a detailed comparison with 10 orthogonal and 6 biorthogonal wavelets on two sets of data from the MIT Arrhythmia database, and our results prove that the spline coordinates work better than the wavelets. The generation of two new types of splines, exponential and double exponential, is also described here. We believe that this is one of the very first attempts at compressed-sensing-based ECG reconstruction using raw data.

  15. [Hyperspectral image compression technology research based on EZW].

    Wei, Jun-Xia; Xiangli, Bin; Duan, Xiao-Feng; Xu, Zhao-Hui; Xue, Li-Jun


    With the development of hyperspectral remote sensing technology, hyperspectral imaging has been applied in aviation and spaceflight. Unlike multispectral imaging, it images the target continuously with band widths at the nanoscale, so the image resolution is very high. However, with the increasing number of bands, the quantity of spectral data becomes larger and larger, and the storage and transmission of these data is a problem that must be faced. With the development of wavelet compression technology in the field of image compression, many researchers have adopted and improved EZW. The present paper uses the method for compression in the spatial dimensions of hyperspectral images, but does not involve compression in the spectral dimension. The hyperspectral image compression and reconstruction results, whether judged by the peak signal-to-noise ratio (PSNR) and spectral curves or by subjective comparison of the source and reconstructed images, are good. If the image is first compressed in the spectral dimension and then in the spatial dimensions, the authors believe the effect will be better.

  16. Parallel Algorithm for Wireless Data Compression and Encryption

    Qin Jiancheng


    As the wireless network has limited bandwidth and insecure shared media, data compression and encryption are very useful for the broadcast transportation of big data in the IoT (Internet of Things). However, traditional techniques of compression and encryption are neither competent nor efficient. In order to solve this problem, this paper presents a combined parallel algorithm named the "CZ algorithm" which can compress and encrypt big data efficiently. The CZ algorithm uses a parallel pipeline, mixes the coding of compression and encryption, and supports a data window up to 1 TB (or larger). Moreover, the CZ algorithm can encrypt the big data as a chaotic cryptosystem which will not decrease the compression speed. Meanwhile, a shareware named "ComZip" was developed based on the CZ algorithm. The experimental results show that ComZip on a 64-bit system can achieve a better compression ratio than WinRAR and 7-zip, and it can be faster than 7-zip in big data compression. In addition, ComZip encrypts the big data without extra consumption of computing resources.

  17. Application of size effect to compressive strength of concrete members

    Jin-Keun Kim; Seong-Tae Yi


    It is important to consider the effect of size when estimating the ultimate strength of a concrete member under various loading conditions. Known as the size effect, the strength of a member tends to decrease as its size increases. Therefore, in view of recent increased interest in the size effect of concrete, this research focuses on the size effect in two main classes of compressive strength of concrete: pure axial compressive strength and flexural compressive strength. First, the fracture-mechanics-type size effect on the compressive strength of cylindrical concrete specimens was studied, with the diameter and the height/diameter ratio considered as the main parameters. Theoretical and statistical analyses were conducted, and a size effect equation was proposed to predict the compressive strength of specimens. The proposed equation showed good agreement with existing test results for concrete cylinders. Second, the size, length, and depth variations of a flexural compressive member were studied experimentally. A series of C-shaped specimens subjected to axial compressive load and bending moment were tested. The shape of the specimens and the test procedures were similar to those used by Hognestad and others. The test results are curve-fitted using the Levenberg-Marquardt least squares method (LSM) to obtain parameters for the modified size effect law (MSEL) by Kim and co-workers. The results of the analysis show that the effect of specimen size, length, and depth on ultimate strength is significant. Finally, more general parameters for the MSEL are suggested.

  18. Maximum twin shear stress factor criterion for sliding mode fracture initiation

    黎振兹; 李慧剑; 黎晓峰; 周洪彬; 郝圣旺


    Previous research on mixed mode fracture initiation criteria was mostly focused on opening mode fracture. In this study, the authors propose a new criterion for mixed mode sliding fracture initiation: the maximum twin shear stress factor criterion. The authors studied a finite-width plate with a central slant crack, subject to a far-field uniform uniaxial tensile or compressive stress.

  19. q-ary compressive sensing

    Mroueh, Youssef; Rosasco, Lorenzo


    We introduce q-ary compressive sensing, an extension of 1-bit compressive sensing. We propose a novel sensing mechanism and a corresponding recovery procedure. The recovery properties of the proposed approach are analyzed both theoretically and empirically. Results in 1-bit compressive sensing are recovered as a special case. Our theoretical results suggest a tradeoff between the quantization parameter q, and the number of measurements m in the control of the error of the resulting recovery a...

  20. Introduction to compressible fluid flow

    Oosthuizen, Patrick H


    Introduction; The Equations of Steady One-Dimensional Compressible Flow; Some Fundamental Aspects of Compressible Flow; One-Dimensional Isentropic Flow; Normal Shock Waves; Oblique Shock Waves; Expansion Waves - Prandtl-Meyer Flow; Variable Area Flows; Adiabatic Flow with Friction; Flow with Heat Transfer; Linearized Analysis of Two-Dimensional Compressible Flows; Hypersonic and High-Temperature Flows; High-Temperature Gas Effects; Low-Density Flows; Bibliography; Appendices


    Lyashenko P. A.


    Odometric compression of sand at a constant rate of loading (CRL) or a constant rate of deformation (CRD), with continuous registration of the corresponding response, makes it possible to identify stepwise changes of deformation (under CRL) and of the force response (under CRD). Physical modeling of compression on a sandy model showed the same effect. The physical model was made of fine sand with marks mimicking large inclusions. Compression of the soil under CRD was uneven and stepwise, and the strain rate of the upper boundary of the sandy model changed cyclically; the maximum amplitudes of the cycles passed through a maximum. Inside the sand model, the uneven strain resulted in mutual displacement of adjacent parts located at the same depth. As the external pressure grew, the marks showed increasing or decreasing displacement, and even movement opposite to the direction of movement (settlement) of the upper boundary of the model, a "floating" of the marks. Marks at different depths underwent different, even mutually contradictory, movements at the same time. The mark settlements grew suddenly at sufficiently large pressure, and these increments in settlement remained until the end of loading, decreasing with depth. They confirm the hypothesis of the total destruction of the soil sample at the pressure of "structural strength". The hypothesis on the reason for "floating" is based on the obvious assumption that the marks move together with the surrounding sand. The explanation of the "floating" effect is supported by the fact that the value of "floating" is the larger, the greater the depth.

  2. The optimal polarizations for achieving maximum contrast in radar images

    Swartz, A. A.; Yueh, H. A.; Kong, J. A.; Novak, L. M.; Shin, R. T.


    There is considerable interest in determining the optimal polarizations that maximize contrast between two scattering classes in polarimetric radar images. A systematic approach is presented for obtaining the optimal polarimetric matched filter, i.e., that filter which produces maximum contrast between two scattering classes. The maximization procedure involves solving an eigenvalue problem where the eigenvector corresponding to the maximum contrast ratio is an optimal polarimetric matched filter. To exhibit the physical significance of this filter, it is transformed into its associated transmitting and receiving polarization states, written in terms of horizontal and vertical vector components. For the special case where the transmitting polarization is fixed, the receiving polarization which maximizes the contrast ratio is also obtained. Polarimetric filtering is then applied to synthetic aperture radar images obtained from the Jet Propulsion Laboratory. It is shown, both numerically and through the use of radar imagery, that maximum image contrast can be realized when data is processed with the optimal polarimetric matched filter.

  3. Optimization of PERT Network and Compression of Time

    Li Ping; Hu Jianbing; Gu Xinyi


    In the traditional methods of program evaluation and review technique (PERT) network optimization and compression of the project time limit, the uncertainty of free time difference and total time difference was not considered, nor was the associated time risk. The authors of this paper use the theory of dependent-chance programming to establish a new model for compression of project time and multi-objective network optimization, which can overcome the shortages of traditional methods and realize the optimization of the PERT network directly. By calculating an example with genetic algorithms, the following conclusions are drawn: (1) compression of time is restricted by the cost ratio and the completion probability of the project; (2) activities with the maximal standard difference of duration and minimal cost will be compressed in order of precedence; (3) there are no optimal solutions but noninferior solutions between chance and cost, and the most optimal node time depends on the decision-maker's preference.

  4. Compression Deformation Mechanisms at the Nanoscale in Magnesium Single Crystal

    Yafang GUO; Xiaozhi TANG; Yuesheng WANG; Zhengdao WANG; Sidney YIP


    The dominant deformation mode at low temperatures for magnesium and its alloys is generally regarded to be twinning because of the hcp crystal structure. More recently, the phenomenon of a "loss" of the twins has been reported in microcompression experiments on magnesium single crystals. Molecular dynamics simulation of compression deformation shows that pyramidal slip dominates compression behavior at the nanoscale. No compression twins are observed at different temperatures under different loadings and boundary conditions. This is explained by the analysis that the {10-12} and {10-11} twins can be activated under c-axis tension, while compression twins will not occur when the c/a ratio of the hcp metal is below √3. Our theoretical and simulation results are consistent with recent microcompression experiments on magnesium (0001) single crystals.

  5. Data compression for the First G-APD Cherenkov Telescope

    Ahnen, M L; Bergmann, M; Biland, A; Bretz, T; Buß, J; Dorner, D; Einecke, S; Freiwald, J; Hempfling, C; Hildebrand, D; Hughes, G; Lustermann, W; Lyard, E; Mannheim, K; Meier, K; Mueller, S; Neise, D; Neronov, A; Overkemping, A -K; Paravac, A; Pauss, F; Rhode, W; Steinbring, T; Temme, F; Thaele, J; Toscano, S; Vogler, P; Walter, R; Wilbert, A


    The First Geiger-mode Avalanche photodiode (G-APD) Cherenkov Telescope (FACT) has been operating on the Canary island of La Palma since October 2011. Operations were automated so that the system can be operated remotely. Manual interaction is required only when the observation schedule is modified due to weather conditions or in case of unexpected events such as a mechanical failure. Automatic operations enabled high data taking efficiency, which resulted in up to two terabytes of FITS files being recorded nightly and transferred from La Palma to the FACT archive at ISDC in Switzerland. Since long term storage of hundreds of terabytes of observations data is costly, data compression is mandatory. This paper discusses the design choices that were made to increase the compression ratio and speed of writing of the data with respect to existing compression algorithms. Following a more detailed motivation, the FACT compression algorithm along with the associated I/O layer is discussed. Eventually, the performances...

  6. Generation new MP3 data set after compression

    Atoum, Mohammed Salem; Almahameed, Mohammad


    The success of audio steganography techniques depends on ensuring the imperceptibility of the embedded secret message in the stego file and on withstanding any form of intentional or unintentional degradation of the secret message (robustness). Crucial to that is the use of a digital audio file such as MP3, which comes at different compression rates; research studies have shown that performing steganography in the MP3 format after compression is the most suitable approach. Unfortunately, until now researchers have been unable to test and implement their algorithms because no standard data set of MP3 files after compression has been generated. This paper therefore focuses on generating a standard data set with different compression ratios and different genres to help researchers implement their algorithms.

  7. Adaptive Super-Spatial Prediction Approach For Lossless Image Compression

    Arpita C. Raut


    Existing prediction-based lossless image compression schemes predict image data using spatial-neighborhood techniques, which cannot predict high-frequency image structure components, such as edges, patterns, and textures, very well, limiting compression efficiency. To exploit these structure components, an adaptive super-spatial prediction approach is developed. The super-spatial prediction approach is adapted to compress high-frequency structure components of grayscale images. The motivation behind the proposed prediction approach is taken from motion prediction in video coding, which attempts to find an optimal prediction of structure components within the previously encoded image regions. This prediction approach is efficient for image regions with significant structure components with respect to parameters such as compression ratio and bit rate, as compared to CALIC (context-based adaptive lossless image coding).

  8. Property of Corroded Concrete under Compressive Uniaxial Loads

    FAN Yingfang; HU Zhiqiang; ZHOU Jing; LI Xin


    In order to study the compressive properties of corroded concrete, accelerated corrosion tests were performed on C30 concrete. Six corrosive solutions, including hydrochloric acid solutions at pH=2 and pH=3, were applied as the corrosive media. Six series of corrosion tests, comprising 111 specimens, were carried out. The mechanical properties of all the corroded specimens were then tested. Compressive properties of the corroded specimens (e.g. compressive strength, stress-strain relation, elastic modulus) were obtained. Taking the strength degradation ratio and strain energy loss as damage indices, the effects of the corrosion solutions on the compressive properties of corroded concrete are discussed in detail. Relationships between the damage indices and the corrosion state of the specimens were established.

  9. Ultraspectral sounder data compression using the Tunstall coding

    Wei, Shih-Chieh; Huang, Bormin; Gu, Lingjia


    In an error-prone environment the compression of ultraspectral sounder data is vulnerable to error propagation. The Tunstall code is a variable-to-fixed length code which compresses data by mapping a variable number of source symbols to a fixed number of codewords. It avoids the resynchronization difficulty encountered in fixed-to-variable length codes such as Huffman coding and arithmetic coding. This paper explores the use of Tunstall coding in reducing error propagation for ultraspectral sounder data compression. The results show that our Tunstall approach has a favorable compression ratio compared with JPEG-2000, 3D SPIHT, JPEG-LS, CALIC and CCSDS IDC 5/3. It also has less error propagation compared with JPEG-2000.
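
    The Tunstall construction itself is short: repeatedly expand the most probable parse-tree leaf until the codebook is full, then give every leaf a fixed-length index. A sketch for a toy memoryless source follows; the source probabilities and codeword length are illustrative, not taken from the paper.

        import heapq, itertools

        probs = {"a": 0.7, "b": 0.2, "c": 0.1}       # hypothetical source
        codeword_bits = 3                            # up to 2**3 = 8 phrases

        counter = itertools.count()                  # tie-breaker for the heap
        heap = [(-p, next(counter), sym) for sym, p in probs.items()]
        heapq.heapify(heap)
        # expanding one leaf adds len(probs) - 1 leaves
        while len(heap) + len(probs) - 1 <= 2 ** codeword_bits:
            p, _, phrase = heapq.heappop(heap)       # most probable leaf
            for sym, ps in probs.items():            # expand into |alphabet| children
                heapq.heappush(heap, (p * ps, next(counter), phrase + sym))

        phrases = sorted(ph for _, _, ph in heap)
        codebook = {ph: i for i, ph in enumerate(phrases)}   # phrase -> fixed index
        print(codebook)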

  10. Practicality of magnetic compression for plasma density control

    Gueroult, Renaud


    Plasma densification through magnetic compression has been suggested for time-resolved control of the wave properties in plasma-based accelerators. Using particle-in-cell simulations with real mass ratio, the practicality of large magnetic compression on timescales shorter than the ion gyro-period is investigated. For compression times shorter than the transit time of a compressional Alfven wave across the plasma slab, results show the formation of two counter-propagating shock waves, leading to a highly non-uniform plasma density profile. Furthermore, the plasma slab displays large hydromagnetic-like oscillations after the driving field has reached steady state. Peak compression is obtained when the two shocks collide in the mid-plane. At this instant, very large plasma heating is observed, and the plasma β is estimated to be about 1. Although these results point out a densification mechanism quite different and more complex than initially envisioned, these features could possibly be advantageous in part...

  11. A JPEG-based enhanced compression algorithm of digital holograms

    Yu, Hanming; Zhang, Zibang; Zhong, Jingang


    We present a modified version of the general JPEG encoder for digital holograms. Since most of the information in a digital hologram is concentrated in its first-order term, it is feasible to compress digital holograms using only that term. The proposed algorithm performs the 2D DCT (discrete cosine transform) on digital holograms as in the general JPEG, then quantizes and encodes the low-frequency section extracted with an adaptive mask. Being compatible with the general JPEG, the compressed holograms can be directly decoded by general decoders. Our simulation and experimental results show that this algorithm achieves a higher compression ratio than the general JPEG and a more accurate retrieved phase at equal compression.

  12. Uncommon upper extremity compression neuropathies.

    Knutsen, Elisa J; Calfee, Ryan P


    Hand surgeons routinely treat carpal and cubital tunnel syndromes, which are the most common upper extremity nerve compression syndromes. However, more infrequent nerve compression syndromes of the upper extremity may be encountered. Because they are unusual, the diagnosis of these nerve compression syndromes is often missed or delayed. This article reviews the causes, proposed treatments, and surgical outcomes for syndromes involving compression of the posterior interosseous nerve, the superficial branch of the radial nerve, the ulnar nerve at the wrist, and the median nerve proximal to the wrist. Copyright © 2013 Elsevier Inc. All rights reserved.

  13. Image Compression Algorithms Using Dct

    Er. Abhishek Kaushik


    Image compression is the application of data compression to digital images. The discrete cosine transform (DCT) is a technique for converting a signal into elementary frequency components and is widely used in image compression. Here we develop some simple functions to compute the DCT and to compress images. An image compression algorithm was implemented in Matlab code and modified to perform better when implemented in a hardware description language. The IMAP and IMAQ blocks of MATLAB were used to analyze and study the results of image compression using the DCT, and varying coefficient subsets were used to show the resulting image and the error image derived from the original images. Image compression is studied using the 2-D discrete cosine transform: the original image is transformed in 8-by-8 blocks and then inverse transformed in 8-by-8 blocks to create the reconstructed image, with the inverse DCT performed using a subset of the DCT coefficients. The error image (the difference between the original and reconstructed images) is displayed, and the error value for every image is calculated over the various DCT coefficient subsets selected by the user, indicating the accuracy and compression of the resulting image; the resulting performance parameter is reported in terms of the mean square error (MSE).
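
    The block-transform pipeline described above can be sketched in a few lines (Python with SciPy is used here in place of the paper's Matlab; the 8x8 block size and the retained coefficient subset follow the abstract, everything else is an illustrative assumption):

      import numpy as np
      from scipy.fftpack import dct, idct

      def blockwise_dct_compress(img, keep=4):
          """8x8 block DCT; zero all but the top-left keep x keep coefficients."""
          h, w = img.shape                      # assumes h, w are multiples of 8
          out = np.zeros((h, w))
          for i in range(0, h, 8):
              for j in range(0, w, 8):
                  block = img[i:i+8, j:j+8].astype(float)
                  c = dct(dct(block, norm='ortho', axis=0), norm='ortho', axis=1)
                  c[keep:, :] = 0.0             # discard high-frequency rows
                  c[:, keep:] = 0.0             # discard high-frequency columns
                  out[i:i+8, j:j+8] = idct(idct(c, norm='ortho', axis=0),
                                           norm='ortho', axis=1)
          mse = np.mean((img.astype(float) - out) ** 2)
          return out, mse                       # reconstruction and its MSE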

  14. Maximum-entropy probability distributions under Lp-norm constraints

    Dolinar, S.


    Continuous probability density functions and discrete probability mass functions are tabulated which maximize the differential entropy or absolute entropy, respectively, among all probability distributions with a given $L_p$ norm (i.e., a given $p$th absolute moment when $p$ is a finite integer) and unconstrained or constrained value set. Expressions for the maximum entropy are evaluated as functions of the $L_p$ norm. The most interesting results are obtained and plotted for unconstrained (real valued) continuous random variables and for integer valued discrete random variables. The maximum entropy expressions are obtained in closed form for unconstrained continuous random variables, and in this case there is a simple straight-line relationship between the maximum differential entropy and the logarithm of the $L_p$ norm. Corresponding expressions for arbitrary discrete and constrained continuous random variables are given parametrically; closed-form expressions are available only for special cases. However, simpler alternative bounds on the maximum entropy of integer valued discrete random variables are obtained by applying the differential entropy results to continuous random variables which approximate the integer valued random variables in a natural manner. All the results are presented in an integrated framework that includes continuous and discrete random variables, constraints on the permissible value set, and all possible values of $p$. An understanding of this kind is useful in evaluating the performance of data compression schemes.
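
    For the unconstrained continuous case, the maximizing density is the generalized Gaussian, and the straight-line relationship noted above can be written explicitly (a standard result stated here for orientation, not quoted from the paper):

      f(x) = \frac{p}{2a\,\Gamma(1/p)} \exp\left(-\left|\frac{x}{a}\right|^{p}\right),
      \qquad
      h_{\max} = \log \|X\|_{p} + \frac{1}{p}\log p + \frac{1}{p} + \log\frac{2\,\Gamma(1/p)}{p},

    so the maximum differential entropy grows with slope one in $\log \|X\|_{p}$.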

  15. Effect of Different Vane Angles on Rotor- Casing Diameter Ratios to Optimize the Shaft Output of A Vaned Type Novel Air Turbine

    Bharat Raj Singh,


    This paper deals with a new concept for a compressed air energy storage system using atmospheric air at ambient temperature as a zero-pollution power source for running motorbikes. The proposed motorbike is equipped with an air turbine in place of an internal combustion engine and transforms the energy of the compressed air into shaft work. The mathematical modeling and performance evaluation of such a small-capacity, compressed-air-driven, vaned-type novel air turbine is presented here. The effects of isobaric admission and adiabatic expansion of high-pressure air for different rotor-to-casing diameter ratios with respect to different vane angles (numbers of vanes) have been considered and analyzed. It is found that the shaft work output is optimal for some typical values of the rotor-to-casing diameter ratio at a particular vane angle (i.e., number of vanes). In this study, for a casing diameter of 100 mm and rotor-to-casing diameter ratios kept between 0.55 and 0.70, the average maximum power obtained is of the order of 4.95 kW (6.6 HP), which is sufficient to run motorbikes.

  16. Compression and texture in socks enhance football kicking performance.

    Hasan, Hosni; Davids, Keith; Chow, Jia Yi; Kerr, Graham


    The purpose of this study was to observe the effects of wearing textured insoles and clinical compression socks on the organisation of lower limb interceptive actions in developing athletes of different skill levels in association football. Six advanced learners and six novice football players (15.4±0.9 years) performed 20 instep kicks at maximum velocity under four randomly ordered insole and sock conditions: (a) Smooth Socks with Smooth Insoles (SSSI); (b) Smooth Socks with Textured Insoles (SSTI); (c) Compression Socks with Smooth Insoles (CSSI); and (d) Compression Socks with Textured Insoles (CSTI). Reflective markers were placed on key anatomical locations and on the ball to facilitate three-dimensional (3D) movement recording and analysis. Data on 3D kinematic variables and initial ball velocity were analysed using one-way mixed-model ANOVAs. Results revealed that wearing textured and compression materials enhanced performance in key variables, such as the maximum velocity of the instep kick and initial ball velocity, among advanced learners compared with non-textured and non-compression materials. Adding texture to football boot insoles appeared to interact with compression materials to improve kicking performance as captured by these measures. This improvement is likely to have occurred through enhanced somatosensory feedback used for foot placement and movement organisation of the lower limbs. The data suggested that advanced learners were better than novices at harnessing the augmented feedback from compression and texture to regulate emerging movement patterns. Copyright © 2016. Published by Elsevier B.V.

  17. Prediction of compressibility parameters of the soils using artificial neural network.

    Kurnaz, T Fikret; Dagdeviren, Ugur; Yildiz, Murat; Ozkan, Ozhan


    The compression index and the recompression index are among the important compressibility parameters for settlement calculations in fine-grained soil layers. These parameters can be determined by carrying out laboratory oedometer tests on undisturbed samples; however, the test is quite time-consuming and expensive. Therefore, many empirical formulas based on regression analysis have been presented to estimate the compressibility parameters from soil index properties. In this paper, an artificial neural network (ANN) model is suggested for the prediction of compressibility parameters from basic soil properties. For this purpose, the input parameters are selected as the natural water content, initial void ratio, liquid limit and plasticity index. In this model, two output parameters, the compression index and the recompression index, are predicted in a combined network structure. The proposed ANN model is successful in predicting the compression index; however, the predicted recompression index values are not satisfactory compared with the compression index.
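
    A minimal sketch of the combined-output network described above (scikit-learn is used for brevity; the architecture, hyperparameters and placeholder data are illustrative assumptions, not the authors' configuration):

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      # Inputs: natural water content, initial void ratio, liquid limit,
      # plasticity index. Outputs: compression index Cc and recompression
      # index Cr, predicted jointly by one network.
      rng = np.random.default_rng(0)
      X_train = rng.random((100, 4))    # placeholder for oedometer test data
      y_train = rng.random((100, 2))    # placeholder [Cc, Cr] targets

      model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000)
      model.fit(X_train, y_train)       # multi-output regression in one model
      cc, cr = model.predict(X_train[:1])[0]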

  18. Performance of a Discrete Wavelet Transform for Compressing Plasma Count Data and its Application to the Fast Plasma Investigation on NASA's Magnetospheric Multiscale Mission

    Barrie, Alexander C.; Yeh, Penshu; Dorelli, John C.; Clark, George B.; Paterson, William R.; Adrian, Mark L.; Holland, Matthew P.; Lobell, James V.; Simpson, David G.; Pollock, Craig J.


    Plasma measurements in space are becoming increasingly fast, higher in resolution, and distributed over multiple instruments. As raw data generation rates can exceed available data transfer bandwidth, data compression is becoming a critical design component. Data compression has been a staple of imaging instruments for years, but only recently have plasma measurement designers become interested in high performance data compression. Missions will often use a simple lossless compression technique yielding compression ratios of approximately 2:1; however, future missions may require compression ratios upwards of 10:1. This study aims to explore how a Discrete Wavelet Transform combined with a Bit Plane Encoder (DWT/BPE), implemented via a CCSDS standard, can be used effectively to compress count information common to plasma measurements to high compression ratios while maintaining little or no compression error. The compression ASIC used for the Fast Plasma Investigation (FPI) on board the Magnetospheric Multiscale mission (MMS) is used for this study. Plasma count data from multiple sources are examined: resampled data from previous missions, randomly generated data from distribution functions, and simulations of expected regimes. These are run through the compression routines with various parameters to yield the greatest possible compression ratio while maintaining little or no error; in the latter case, fully lossless compression is obtained. Finally, recommendations are made for future missions as to what can be achieved when compressing plasma count data and how best to do so.
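
    The wavelet half of the DWT/BPE pipeline can be sketched with PyWavelets (the wavelet family, decomposition level and threshold are illustrative choices; the flight ASIC's bit-plane encoder is not reproduced here):

      import pywt

      def dwt_threshold(counts, wavelet='bior4.4', level=2, thresh=1.0):
          """Decompose a 2D count array, zero small detail coefficients, rebuild."""
          coeffs = pywt.wavedec2(counts.astype(float), wavelet, level=level)
          kept = [coeffs[0]] + [
              tuple(pywt.threshold(c, thresh, mode='hard') for c in detail)
              for detail in coeffs[1:]
          ]
          # Zeroed coefficients are what a bit-plane encoder then packs away;
          # thresh=0 keeps everything, i.e. the fully lossless case.
          return pywt.waverec2(kept, wavelet)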

  19. Tension-Compression Fatigue of a Nextel™720/alumina Composite at 1200 °C in Air and in Steam

    Lanser, R. L.; Ruggles-Wrenn, M. B.


    Tension-compression fatigue behavior of an oxide-oxide ceramic-matrix composite was investigated at 1200 °C in air and in steam. The composite comprises an alumina matrix reinforced with Nextel™720 alumina-mullite fibers woven in an eight-harness satin weave (8HSW). The composite has no interface between the fiber and matrix, and relies on the porous matrix for flaw tolerance. Tension-compression fatigue behavior was studied for cyclic stresses ranging from 60 to 120 MPa at a frequency of 1.0 Hz. The R ratio (minimum stress to maximum stress) was -1.0. Fatigue run-out was defined as 10^5 cycles and was achieved at 80 MPa in air and at 70 MPa in steam. Steam reduced cyclic lives by an order of magnitude. Specimens that achieved fatigue run-out were subjected to tensile tests to failure to characterize the retained tensile properties. Specimens subjected to prior cyclic loading in air retained 100% of their tensile strength. The steam environment severely degraded tensile properties. Tension-compression cyclic loading was considerably more damaging than tension-tension cyclic loading. Composite microstructure, as well as damage and failure mechanisms, was investigated.

  20. Maximum entropy production in daisyworld

    Maunu, Haley A.; Knuth, Kevin H.


    Daisyworld was first introduced in 1983 by Watson and Lovelock as a model that illustrates how life can influence a planet's climate. These models typically involve modeling a planetary surface on which black and white daisies can grow thus influencing the local surface albedo and therefore also the temperature distribution. Since then, variations of daisyworld have been applied to study problems ranging from ecological systems to global climate. Much of the interest in daisyworld models is due to the fact that they enable one to study self-regulating systems. These models are nonlinear, and as such they exhibit sensitive dependence on initial conditions, and depending on the specifics of the model they can also exhibit feedback loops, oscillations, and chaotic behavior. Many daisyworld models are thermodynamic in nature in that they rely on heat flux and temperature gradients. However, what is not well-known is whether, or even why, a daisyworld model might settle into a maximum entropy production (MEP) state. With the aim to better understand these systems, this paper will discuss what is known about the role of MEP in daisyworld models.

  1. Maximum Matchings via Glauber Dynamics

    Jindal, Anant; Pal, Manjish


    In this paper we study the classic problem of computing a maximum cardinality matching in general graphs $G = (V, E)$. The best known algorithm for this problem to date runs in $O(m \sqrt{n})$ time, due to Micali and Vazirani \cite{MV80}. Even for general bipartite graphs this is the best known running time (the algorithm of Karp and Hopcroft \cite{HK73} also achieves this bound). For regular bipartite graphs one can achieve an $O(m)$ time algorithm, which, following a series of papers, has recently been improved to $O(n \log n)$ by Goel, Kapralov and Khanna (STOC 2010) \cite{GKK10}. In this paper we present a randomized algorithm based on the Markov Chain Monte Carlo paradigm which runs in $O(m \log^2 n)$ time, thereby obtaining a significant improvement over \cite{MV80}. We use a Markov chain similar to the \emph{hard-core model} for Glauber Dynamics with \emph{fugacity} parameter $\lambda$, which is used to sample independent sets in a graph from the Gibbs Distribution \cite{V99}, to design a faster algori...
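
    The chain underlying this approach resembles the classic monomer-dimer Glauber dynamics: propose a uniformly random edge and insert or delete it with probabilities set by the fugacity $\lambda$. A minimal sampler under that standard formulation (our own simplification; the paper layers its matching algorithm on top of such a chain):

      import random

      def glauber_matching(edges, n, fugacity, steps):
          """Sample a matching on n vertices via single-edge Glauber updates."""
          matched = [False] * n            # which vertices are covered
          matching = set()
          for _ in range(steps):
              u, v = random.choice(edges)
              if (u, v) in matching:       # delete with prob 1/(1+fugacity)
                  if random.random() < 1.0 / (1.0 + fugacity):
                      matching.discard((u, v))
                      matched[u] = matched[v] = False
              elif not matched[u] and not matched[v]:
                  if random.random() < fugacity / (1.0 + fugacity):
                      matching.add((u, v))  # insert with prob fugacity/(1+fugacity)
                      matched[u] = matched[v] = True
          return matching

      # Detailed balance gives stationary weight proportional to
      # fugacity ** len(matching); large fugacity favours large matchings.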

  2. 76 FR 1504 - Pipeline Safety: Establishing Maximum Allowable Operating Pressure or Maximum Operating Pressure...


    ...: Establishing Maximum Allowable Operating Pressure or Maximum Operating Pressure Using Record Evidence, and... facilities of their responsibilities, under Federal integrity management (IM) regulations, to perform... system, especially when calculating Maximum Allowable Operating Pressure (MAOP) or Maximum Operating...

  3. Compressive sensing by learning a Gaussian mixture model from measurements.

    Yang, Jianbo; Liao, Xuejun; Yuan, Xin; Llull, Patrick; Brady, David J; Sapiro, Guillermo; Carin, Lawrence


    Compressive sensing of signals drawn from a Gaussian mixture model (GMM) admits closed-form minimum mean squared error reconstruction from incomplete linear measurements. An accurate GMM signal model is usually not available a priori, because it is difficult to obtain training signals that match the statistics of the signals being sensed. We propose to solve that problem by learning the signal model in situ, based directly on the compressive measurements of the signals, without resorting to other signals to train a model. A key feature of our method is that the signals being sensed are treated as random variables and are integrated out in the likelihood. We derive a maximum marginal likelihood estimator (MMLE) that maximizes the likelihood of the GMM of the underlying signals given only their linear compressive measurements. We extend the MMLE to a GMM with dominantly low-rank covariance matrices, to gain computational speedup. We report extensive experimental results on image inpainting, compressive sensing of high-speed video, and compressive hyperspectral imaging (the latter two based on real compressive cameras). The results demonstrate that the proposed methods outperform state-of-the-art methods by significant margins.
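
    The closed-form reconstruction mentioned above follows from standard Gaussian conditioning: with $y = \Phi x + n$, $n \sim \mathcal{N}(0, \sigma^{2} I)$ and $x \sim \sum_{k} \pi_{k}\, \mathcal{N}(\mu_{k}, \Sigma_{k})$, each mixture component contributes (textbook identities, not transcribed from the paper)

      \hat{x}_{k} = \mu_{k} + \Sigma_{k} \Phi^{\top} \left( \Phi \Sigma_{k} \Phi^{\top} + \sigma^{2} I \right)^{-1} \left( y - \Phi \mu_{k} \right),
      \qquad
      \hat{x}_{\mathrm{MMSE}} = \sum_{k} \tilde{\pi}_{k}(y)\, \hat{x}_{k},

    where $\tilde{\pi}_{k}(y) \propto \pi_{k}\, \mathcal{N}(y;\, \Phi \mu_{k},\, \Phi \Sigma_{k} \Phi^{\top} + \sigma^{2} I)$ are the posterior component weights.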

  4. Influencing factors of compressive strength of solidified inshore saline soil using SH lime-ash

    覃银辉; 刘付华; 周琦


    Through unconfined compressive strength tests, the factors influencing the compressive strength of inshore saline soil solidified with SH lime-ash were studied: the lime-ash ratio (1-K), the quantity of lime-ash, age, degree of compression and salt content. The results show that because inshore saline soil has special engineering characteristics, more influencing factors must be considered than for ordinary soil in order to achieve a satisfactory solidifying effect.

  5. Thermal Conductivity at the Interface of CHBr3/NaCl under Shock Compression

    杨嘉陵; 胡金彪; 谭华; 刘吉平


    A special experimental system has been proposed for studying thermal physical properties under shock compression. The optical radiation was recorded by a high time-resolution pyrometer. The ratio α of the sample and window materials under shock compression was studied using this experimental technique. The thermal conductivity of CHBr3 calculated from α under shock compression is about 10^3 times larger than that under normal conditions.

  6. Adaptive lifting scheme with particle swarm optimization technique for image compression

    Nishat Kanvel


    This paper presents an adaptive lifting scheme with a particle swarm optimization (PSO) technique for image compression. The PSO technique is used to improve the accuracy of the prediction function used in the lifting scheme. This scheme is applied to image compression, and parameters such as the PSNR, the compression ratio and the visual quality of the image are calculated. The proposed scheme is compared with existing methods.

  7. On the improved correlative prediction scheme for aliased electrocardiogram (ECG) data compression.

    Gao, Xin


    An improved scheme for aliased electrocardiogram (ECG) data compression has been constructed, in which the predictor exploits the correlative characteristics of adjacent QRS waveforms. The twin-R correlation prediction and the lifting wavelet transform (LWT) for periodic ECG waves prove feasible and highly efficient, achieving lower distortion rates at a realizable compression ratio (CR); grey predictions via the GM(1,1) model have been adopted to evaluate the parametric performance of the ECG data compression. Simulation results demonstrate the validity of our approach.
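
    The GM(1,1) grey model used above for performance evaluation fits a first-order grey differential equation to a short data sequence. A generic rendition (the function name and interface are our own; the paper's exact fitting procedure is not shown):

      import numpy as np

      def gm11_predict(x0, horizon):
          """Grey GM(1,1): fit on 1D array x0, forecast `horizon` further steps."""
          x1 = np.cumsum(x0)                         # accumulated series
          z1 = 0.5 * (x1[1:] + x1[:-1])              # adjacent means
          B = np.column_stack((-z1, np.ones(len(z1))))
          a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
          k = np.arange(1, len(x0) + horizon)
          x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
          # Differencing the accumulated forecast recovers the series itself.
          return np.diff(np.concatenate(([x0[0]], x1_hat)))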

  8. A Family of Maximum SNR Filters for Noise Reduction

    Huang, Gongping; Benesty, Jacob; Long, Tao;


    This paper is devoted to the study and analysis of the maximum signal-to-noise ratio (SNR) filters for noise reduction, both in the time and short-time Fourier transform (STFT) domains, with a single microphone and with multiple microphones. In the time domain, we show that the maximum SNR filters can significantly increase the SNR, but at the expense of tremendous speech distortion. As a consequence, the speech quality improvement, measured by the perceptual evaluation of speech quality (PESQ) algorithm, is marginal if any, regardless of the number of microphones used. In the STFT domain, the maximum SNR filters … This demonstrates that the maximum SNR filters, particularly the multichannel ones, in the STFT domain may be of great practical value.

  9. Simultaneous encryption and compression of medical images based on optimized tensor compressed sensing with 3D Lorenz.

    Wang, Qingzhu; Chen, Xiaoming; Wei, Mengying; Miao, Zhuang


    Existing techniques for the simultaneous encryption and compression of images rely on lossy compression, and their reconstruction performance does not meet the accuracy requirements of medical images because most of them are not applicable to three-dimensional (3D) medical image volumes, which are intrinsically represented by tensors. We propose a tensor-based algorithm using tensor compressive sensing (TCS) to address these issues. Alternating least squares is further used to optimize the TCS, with measurement matrices encrypted by a discrete 3D Lorenz system. The proposed method preserves the intrinsic structure of tensor-based 3D images and achieves a better balance of compression ratio, decryption accuracy, and security. Furthermore, the characteristics of the tensor product can be used as additional keys to make unauthorized decryption harder. Numerical simulation results verify the validity and reliability of this scheme.

  10. A simple data compression scheme for binary images of bacteria compared with commonly used image data compression schemes.

    Wilkinson, M H


    A run length code compression scheme of extreme simplicity, used for image storage in an automated bacterial morphometry system, is compared with more common compression schemes, such as those used in the Tag Image File Format (TIFF). These schemes are Lempel-Ziv-Welch (LZW), Macintosh PackBits, and the CCITT Group 3 facsimile one-dimensional modified Huffman run length code. In a set of 25 images consisting of full microscopic fields of view of bacterial slides, the method gave a 10.3-fold compression: 1.074 times better than LZW. In a second set of images of single areas of interest within each field of view, compression ratios of over 600 were obtained, 12.8 times that of LZW. The drawback of the system is its poor worst-case performance. The method could be used in any application requiring storage of binary images of relatively small objects with fairly large spaces in between.
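
    A run length code of comparable simplicity can be written in a few lines (a generic illustration of the idea, not the paper's exact byte layout):

      def rle_encode(row):
          """Encode one non-empty row of a binary image as (value, run) pairs."""
          runs, count = [], 1
          for prev, cur in zip(row, row[1:]):
              if cur == prev:
                  count += 1
              else:
                  runs.append((prev, count))
                  count = 1
          runs.append((row[-1], count))
          return runs

      def rle_decode(runs):
          return [v for v, n in runs for _ in range(n)]

      # e.g. rle_encode([0, 0, 0, 1, 1, 0]) == [(0, 3), (1, 2), (0, 1)];
      # sparse rows with long runs of zeros compress especially well, which
      # matches the small-objects-with-large-spaces regime described above.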

  11. An underwater acoustic data compression method based on compressed sensing

    郭晓乐; 杨坤德; 史阳; 段睿


    The use of underwater acoustic data has rapidly expanded with the application of multichannel, large-aperture underwater detection arrays. This study presents an underwater acoustic data compression method based on compressed sensing. Underwater acoustic signals are transformed into the sparse domain for data storage at a receiving terminal, and the improved orthogonal matching pursuit (IOMP) algorithm is used to reconstruct the original underwater acoustic signals at a data processing terminal. In cases where an increase in sidelobe level occasionally causes a direction-of-arrival estimation error, the proposed compression method can achieve 10 times stronger compression for narrowband signals and 5 times stronger compression for wideband signals than the orthogonal matching pursuit (OMP) algorithm. The IOMP algorithm also reduces the computing time by about 20% relative to the original OMP algorithm. Simulation and experimental results are discussed.
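
    For reference, the baseline OMP loop that the IOMP variant refines looks like this (a generic NumPy rendition; the paper's sidelobe-related improvements are not shown):

      import numpy as np

      def omp(A, y, sparsity):
          """Greedy sparse recovery: y ~ A @ x with at most `sparsity` nonzeros."""
          residual, support = y.copy(), []
          for _ in range(sparsity):
              # Pick the dictionary column most correlated with the residual.
              support.append(int(np.argmax(np.abs(A.T @ residual))))
              # Re-fit all selected columns by least squares, update residual.
              x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
              residual = y - A[:, support] @ x_s
          x = np.zeros(A.shape[1])
          x[support] = x_s
          return x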

  12. A Fast Fractal Image Compression Coding Method


    Fast algorithms for reducing the encoding complexity of fractal image coding have recently been an important research topic. The search for the best-matched domain block is the most computation-intensive part of the fractal encoding process. In this paper, a fast fractal approximation coding scheme implemented on a personal computer, based on matching among a range block's neighbours, is presented. Experimental results show that the proposed algorithm is very simple to implement, fast in encoding time and high in compression ratio, while the PSNR is almost the same as that of Barnsley's fractal block coding.

  13. TPC data compression

    Berger, Jens; Frankenfeld, Ulrich; Lindenstruth, Volker; Plamper, Patrick; Röhrich, Dieter; Schäfer, Erich; Schulz, Markus W.; Steinbeck, Timm M.; Stock, Reinhard; Sulimma, Kolja; Vestbø, Anders; Wiebalck, Arne


    In the collisions of ultra-relativistic heavy ions in fixed-target and collider experiments, multiplicities of several tens of thousands of charged particles are generated. The main devices for tracking and particle identification are large-volume tracking detectors (TPCs) producing raw event sizes in excess of 100 Mbytes per event. With increasing data rates, storage becomes the main limiting factor in such experiments and, therefore, it is essential to represent the data in a way that is as concise as possible. In this paper, we present several compression schemes, such as entropy encoding, modified vector quantization, and data modeling techniques, applied to real data from the CERN SPS experiment NA49 and to simulated data from the future CERN LHC experiment ALICE.

  14. Waves and compressible flow

    Ockendon, Hilary


    Now in its second edition, this book continues to give readers a broad mathematical basis for modelling and understanding the wide range of wave phenomena encountered in modern applications.  New and expanded material includes topics such as elastoplastic waves and waves in plasmas, as well as new exercises.  Comprehensive collections of models are used to illustrate the underpinning mathematical methodologies, which include the basic ideas of the relevant partial differential equations, characteristics, ray theory, asymptotic analysis, dispersion, shock waves, and weak solutions. Although the main focus is on compressible fluid flow, the authors show how intimately gasdynamic waves are related to wave phenomena in many other areas of physical science.   Special emphasis is placed on the development of physical intuition to supplement and reinforce analytical thinking. Each chapter includes a complete set of carefully prepared exercises, making this a suitable textbook for students in applied mathematics, ...

  15. Central cooling: compressive chillers

    Christian, J.E.


    Representative cost and performance data are provided in a concise, usable form for three types of compressive liquid packaged chillers: reciprocating, centrifugal, and screw. The data are represented in graphical form as well as in empirical equations. Reciprocating chillers are available from 2.5 to 240 tons with full-load COPs ranging from 2.85 to 3.87. Centrifugal chillers are available from 80 to 2,000 tons with full-load COPs ranging from 4.1 to 4.9. Field-assembled centrifugal chillers have been installed with capacities up to 10,000 tons. Screw-type chillers are available from 100 to 750 tons with full-load COPs ranging from 3.3 to 4.5.

  16. Compression-based Similarity

    Vitanyi, Paul M B


    First we consider pair-wise distances for literal objects consisting of finite binary files. These files are taken to contain all of their meaning, like genomes or books. The distances are based on compression of the objects concerned, normalized, and can be viewed as similarity distances. Second, we consider pair-wise distances between names of objects, like "red" or "christianity." In this case the distances are based on searches of the Internet. Such a search can be performed by any search engine that returns aggregate page counts. We can extract a code length from the numbers returned, use the same formula as before, and derive a similarity or relative semantics between names for objects. The theory is based on Kolmogorov complexity. We test both similarities extensively experimentally.
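
    The first of these distances is, in essence, the normalized compression distance, which any real-world compressor can approximate by standing in for Kolmogorov complexity (zlib is used here purely for illustration):

      import zlib

      def ncd(x: bytes, y: bytes) -> float:
          """Normalized compression distance between two byte strings."""
          c = lambda b: len(zlib.compress(b, 9))   # compressed length ~ K(.)
          cx, cy, cxy = c(x), c(y), c(x + y)
          return (cxy - min(cx, cy)) / max(cx, cy)

      # Similar inputs share structure the compressor exploits, driving the
      # distance toward 0; unrelated inputs score near 1.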

  17. Adaptively Compressed Exchange Operator

    Lin, Lin


    The Fock exchange operator plays a central role in modern quantum chemistry. The large computational cost associated with the Fock exchange operator hinders Hartree-Fock calculations and Kohn-Sham density functional theory calculations with hybrid exchange-correlation functionals, even for systems consisting of hundreds of atoms. We develop the adaptively compressed exchange operator (ACE) formulation, which greatly reduces the computational cost associated with the Fock exchange operator without loss of accuracy. The ACE formulation does not depend on the size of the band gap, and thus can be applied to insulating, semiconducting as well as metallic systems. In an iterative framework for solving Hartree-Fock-like systems, the ACE formulation only requires moderate modification of the code, and can be potentially beneficial for all electronic structure software packages involving exchange calculations. Numerical results indicate that the ACE formulation can become advantageous even for small systems with tens...

  18. Strength Regularity and Failure Criterion of High-Strength High-Performance Concrete under Multiaxial Compression

    HE Zhen-jun; SONG Yu-pu


    Multiaxial compression tests were performed on 100 mm × 100 mm × 100 mm high-strength high-performance concrete (HSHPC) cubes and normal-strength concrete (NSC) cubes. The failure modes of the specimens were presented, the static compressive strengths in the principal directions were measured, and the influence of the stress ratios was analyzed. The experimental results show that the ultimate strengths of HSHPC and NSC under multiaxial compression are greater than the uniaxial compressive strengths at all stress ratios, and that the multiaxial strength depends on the brittleness and stiffness of the concrete, the stress state and the stress ratios. In addition, the Kupfer-Gerstle and Ottosen failure criteria for plain HSHPC and NSC under multiaxial compressive loading were modified.